* [PATCH 00/11] qspinlock with paravirt support
@ 2014-06-15 12:46 ` Peter Zijlstra
  0 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-06-15 12:46 UTC (permalink / raw)
  To: Waiman.Long, tglx, mingo
  Cc: linux-arch, linux-kernel, virtualization, xen-devel, kvm,
	paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod, Peter Zijlstra

Since Waiman seems incapable of doing simple things, here's my take on the
paravirt crap.

The first few patches are taken from Waiman's latest series, but the virt
support is completely new. Its primary aim is to not mess up the native code.

I've not stress-tested it, but the virt and paravirt (KVM) cases boot on simple
SMP guests. I've not done Xen, but the patch should be simple and similar.

I ripped out all the unfair nonsense as it's not at all required for paravirt,
and optimizations that make paravirt better at the cost of code clarity and/or
native performance are just not worth it.

Also, if we were ever to add some of that unfair nonsense, it should be done
_after_ the simple things are working.

The thing I'm least sure about is the head tracking; I chose to do something
different from what Waiman did because his approach is O(nr_cpus) and assumes
that guests have a small nr_cpus, which AFAIK is not at all true. The biggest
problem I have with what I did is that it contains wait loops itself.




^ permalink raw reply	[flat|nested] 192+ messages in thread

* [PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
  2014-06-15 12:46 ` Peter Zijlstra
@ 2014-06-15 12:46   ` Peter Zijlstra
  -1 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-06-15 12:46 UTC (permalink / raw)
  To: Waiman.Long, tglx, mingo
  Cc: linux-arch, linux-kernel, virtualization, xen-devel, kvm,
	paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod, Peter Zijlstra

[-- Attachment #1: waiman_long-qspinlock-a_simple_generic_4-byte_queue_spinlock.patch --]
[-- Type: text/plain, Size: 15414 bytes --]

From: Waiman Long <Waiman.Long@hp.com>

This patch introduces a new generic queue spinlock implementation that
can serve as an alternative to the default ticket spinlock. The queue
spinlock should be almost as fair as the ticket spinlock, has about the
same speed in the single-threaded case, and can be much faster under
high contention, especially when the spinlock is embedded within the
data structure it protects.

Only under light to moderate contention, where the average queue depth
is around 1-3, might this queue spinlock be a bit slower due to the
higher slowpath overhead.

This queue spinlock is especially suited to NUMA machines with a large
number of cores, as the chance of spinlock contention is much higher on
those machines. The cost of contention is also higher because of slower
inter-node memory traffic.

Because spinlocks are acquired with preemption disabled, a process will
not be migrated to another CPU while it is trying to get a spinlock.
Ignoring interrupt handling, a CPU can only be contending on one
spinlock at any one time. Counting soft IRQ, hard IRQ and NMI, a CPU
can have at most 4 concurrent lock-waiting activities.  By allocating
a set of per-cpu queue nodes and using them to form a waiting queue,
we can encode the queue node address into a much smaller 24-bit value
(including CPU number and queue node index), leaving one byte for the
lock.

Please note that the queue node is only needed when waiting for the
lock. Once the lock is acquired, the queue node can be released to
be used later.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 include/asm-generic/qspinlock.h       |  118 ++++++++++++++++++++
 include/asm-generic/qspinlock_types.h |   61 ++++++++++
 kernel/Kconfig.locks                  |    7 +
 kernel/locking/Makefile               |    1 
 kernel/locking/mcs_spinlock.h         |    1 
 kernel/locking/qspinlock.c            |  197 ++++++++++++++++++++++++++++++++++
 6 files changed, 385 insertions(+)
 create mode 100644 include/asm-generic/qspinlock.h
 create mode 100644 include/asm-generic/qspinlock_types.h
 create mode 100644 kernel/locking/qspinlock.c

Index: linux-2.6/include/asm-generic/qspinlock.h
===================================================================
--- /dev/null
+++ linux-2.6/include/asm-generic/qspinlock.h
@@ -0,0 +1,118 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#ifndef __ASM_GENERIC_QSPINLOCK_H
+#define __ASM_GENERIC_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+/**
+ * queue_spin_is_locked - is the spinlock locked?
+ * @lock: Pointer to queue spinlock structure
+ * Return: 1 if it is locked, 0 otherwise
+ */
+static __always_inline int queue_spin_is_locked(struct qspinlock *lock)
+{
+	return atomic_read(&lock->val);
+}
+
+/**
+ * queue_spin_value_unlocked - is the spinlock structure unlocked?
+ * @lock: queue spinlock structure
+ * Return: 1 if it is unlocked, 0 otherwise
+ *
+ * N.B. Whenever there are tasks waiting for the lock, it is considered
+ *      locked wrt the lockref code, so that the lockref code cannot steal
+ *      the lock and change things underneath it. This also allows some
+ *      optimizations to be applied without conflicting with lockref.
+ */
+static __always_inline int queue_spin_value_unlocked(struct qspinlock lock)
+{
+	return !atomic_read(&lock.val);
+}
+
+/**
+ * queue_spin_is_contended - check if the lock is contended
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock contended, 0 otherwise
+ */
+static __always_inline int queue_spin_is_contended(struct qspinlock *lock)
+{
+	return atomic_read(&lock->val) & ~_Q_LOCKED_MASK;
+}
+/**
+ * queue_spin_trylock - try to acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static __always_inline int queue_spin_trylock(struct qspinlock *lock)
+{
+	if (!atomic_read(&lock->val) &&
+	   (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0))
+		return 1;
+	return 0;
+}
+
+extern void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+
+/**
+ * queue_spin_lock - acquire a queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_lock(struct qspinlock *lock)
+{
+	u32 val;
+
+	val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
+	if (likely(val == 0))
+		return;
+	queue_spin_lock_slowpath(lock, val);
+}
+
+#ifndef queue_spin_unlock
+/**
+ * queue_spin_unlock - release a queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_unlock(struct qspinlock *lock)
+{
+	/*
+	 * smp_mb__before_atomic() in order to guarantee release semantics
+	 */
+	smp_mb__before_atomic_dec();
+	atomic_sub(_Q_LOCKED_VAL, &lock->val);
+}
+#endif
+
+/*
+ * Initializer
+ */
+#define	__ARCH_SPIN_LOCK_UNLOCKED	{ ATOMIC_INIT(0) }
+
+/*
+ * Remapping spinlock architecture specific functions to the corresponding
+ * queue spinlock functions.
+ */
+#define arch_spin_is_locked(l)		queue_spin_is_locked(l)
+#define arch_spin_is_contended(l)	queue_spin_is_contended(l)
+#define arch_spin_value_unlocked(l)	queue_spin_value_unlocked(l)
+#define arch_spin_lock(l)		queue_spin_lock(l)
+#define arch_spin_trylock(l)		queue_spin_trylock(l)
+#define arch_spin_unlock(l)		queue_spin_unlock(l)
+#define arch_spin_lock_flags(l, f)	queue_spin_lock(l)
+
+#endif /* __ASM_GENERIC_QSPINLOCK_H */
Index: linux-2.6/include/asm-generic/qspinlock_types.h
===================================================================
--- /dev/null
+++ linux-2.6/include/asm-generic/qspinlock_types.h
@@ -0,0 +1,61 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#ifndef __ASM_GENERIC_QSPINLOCK_TYPES_H
+#define __ASM_GENERIC_QSPINLOCK_TYPES_H
+
+/*
+ * Including atomic.h with PARAVIRT on will cause compilation errors because
+ * of recursive header file inclusion via paravirt_types.h. A workaround is
+ * to include paravirt_types.h here.
+ */
+#ifdef CONFIG_PARAVIRT
+#include <asm/paravirt_types.h>
+#else
+#include <linux/types.h>
+#include <linux/atomic.h>
+#include <linux/bug.h>
+#endif
+
+typedef struct qspinlock {
+	atomic_t	val;
+} arch_spinlock_t;
+
+/*
+ * Bitfields in the atomic value:
+ *
+ *  0- 7: locked byte
+ *  8- 9: tail index
+ * 10-31: tail cpu (+1)
+ */
+#define	_Q_SET_MASK(type)	(((1U << _Q_ ## type ## _BITS) - 1)\
+				      << _Q_ ## type ## _OFFSET)
+#define _Q_LOCKED_OFFSET	0
+#define _Q_LOCKED_BITS		8
+#define _Q_LOCKED_MASK		_Q_SET_MASK(LOCKED)
+
+#define _Q_TAIL_IDX_OFFSET	(_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
+#define _Q_TAIL_IDX_BITS	2
+#define _Q_TAIL_IDX_MASK	_Q_SET_MASK(TAIL_IDX)
+
+#define _Q_TAIL_CPU_OFFSET	(_Q_TAIL_IDX_OFFSET + _Q_TAIL_IDX_BITS)
+#define _Q_TAIL_CPU_BITS	(32 - _Q_TAIL_CPU_OFFSET)
+#define _Q_TAIL_CPU_MASK	_Q_SET_MASK(TAIL_CPU)
+
+#define _Q_LOCKED_VAL		(1U << _Q_LOCKED_OFFSET)
+
+#endif /* __ASM_GENERIC_QSPINLOCK_TYPES_H */
Index: linux-2.6/kernel/Kconfig.locks
===================================================================
--- linux-2.6.orig/kernel/Kconfig.locks
+++ linux-2.6/kernel/Kconfig.locks
@@ -224,6 +224,13 @@ config MUTEX_SPIN_ON_OWNER
 	def_bool y
 	depends on SMP && !DEBUG_MUTEXES && !PARISC
 
+config ARCH_USE_QUEUE_SPINLOCK
+	bool
+
+config QUEUE_SPINLOCK
+	def_bool y if ARCH_USE_QUEUE_SPINLOCK
+	depends on SMP && !PARAVIRT_SPINLOCKS
+
 config ARCH_USE_QUEUE_RWLOCK
 	bool
 
Index: linux-2.6/kernel/locking/Makefile
===================================================================
--- linux-2.6.orig/kernel/locking/Makefile
+++ linux-2.6/kernel/locking/Makefile
@@ -16,6 +16,7 @@ endif
 obj-$(CONFIG_SMP) += spinlock.o
 obj-$(CONFIG_SMP) += lglock.o
 obj-$(CONFIG_PROVE_LOCKING) += spinlock.o
+obj-$(CONFIG_QUEUE_SPINLOCK) += qspinlock.o
 obj-$(CONFIG_RT_MUTEXES) += rtmutex.o
 obj-$(CONFIG_DEBUG_RT_MUTEXES) += rtmutex-debug.o
 obj-$(CONFIG_RT_MUTEX_TESTER) += rtmutex-tester.o
Index: linux-2.6/kernel/locking/mcs_spinlock.h
===================================================================
--- linux-2.6.orig/kernel/locking/mcs_spinlock.h
+++ linux-2.6/kernel/locking/mcs_spinlock.h
@@ -17,6 +17,7 @@
 struct mcs_spinlock {
 	struct mcs_spinlock *next;
 	int locked; /* 1 if lock acquired */
+	int count;
 };
 
 #ifndef arch_mcs_spin_lock_contended
Index: linux-2.6/kernel/locking/qspinlock.c
===================================================================
--- /dev/null
+++ linux-2.6/kernel/locking/qspinlock.c
@@ -0,0 +1,197 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ *          Peter Zijlstra <pzijlstr@redhat.com>
+ */
+#include <linux/smp.h>
+#include <linux/bug.h>
+#include <linux/cpumask.h>
+#include <linux/percpu.h>
+#include <linux/hardirq.h>
+#include <linux/mutex.h>
+#include <asm/qspinlock.h>
+
+/*
+ * The basic principle of a queue-based spinlock can best be understood
+ * by studying a classic queue-based spinlock implementation called the
+ * MCS lock. The paper below provides a good description for this kind
+ * of lock.
+ *
+ * http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf
+ *
+ * This queue spinlock implementation is based on the MCS lock; however, to
+ * make it fit the 4 bytes we assume spinlock_t to be, and to preserve its
+ * existing API, we must modify it somewhat.
+ *
+ * In particular, where the traditional MCS lock consists of a tail pointer
+ * (8 bytes) and needs the next pointer (another 8 bytes) of its own node to
+ * unlock the next pending (next->locked), we compress both these: {tail,
+ * next->locked} into a single u32 value.
+ *
+ * Since a spinlock disables recursion of its own context and there is a limit
+ * to the contexts that can nest; namely: task, softirq, hardirq, nmi, we can
+ * encode the tail as an index indicating this context and a cpu number.
+ *
+ * We can further change the first spinner to spin on a bit in the lock word
+ * instead of its node; thereby avoiding the need to carry a node from lock to
+ * unlock, and preserving API.
+ */
+
+#include "mcs_spinlock.h"
+
+/*
+ * Per-CPU queue node structures; we can never have more than 4 nested
+ * contexts: task, softirq, hardirq, nmi.
+ *
+ * Exactly fits one cacheline.
+ */
+static DEFINE_PER_CPU_ALIGNED(struct mcs_spinlock, mcs_nodes[4]);
+
+/*
+ * We must be able to distinguish between no-tail and the tail at 0:0,
+ * therefore increment the cpu number by one.
+ */
+
+static inline u32 encode_tail(int cpu, int idx)
+{
+	u32 tail;
+
+	tail  = (cpu + 1) << _Q_TAIL_CPU_OFFSET;
+	tail |= idx << _Q_TAIL_IDX_OFFSET; /* assume < 4 */
+
+	return tail;
+}
+
+static inline struct mcs_spinlock *decode_tail(u32 tail)
+{
+	int cpu = (tail >> _Q_TAIL_CPU_OFFSET) - 1;
+	int idx = (tail &  _Q_TAIL_IDX_MASK) >> _Q_TAIL_IDX_OFFSET;
+
+	return per_cpu_ptr(&mcs_nodes[idx], cpu);
+}
+
+/**
+ * queue_spin_lock_slowpath - acquire the queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ * @val: Current value of the queue spinlock 32-bit word
+ *
+ * (queue tail, lock bit)
+ *
+ *              fast      :    slow                                  :    unlock
+ *                        :                                          :
+ * uncontended  (0,0)   --:--> (0,1) --------------------------------:--> (*,0)
+ *                        :       | ^--------.                    /  :
+ *                        :       v           \                   |  :
+ * uncontended            :    (n,x) --+--> (n,0)                 |  :
+ *   queue                :       | ^--'                          |  :
+ *                        :       v                               |  :
+ * contended              :    (*,x) --+--> (*,0) -----> (*,1) ---'  :
+ *   queue                :         ^--'                             :
+ *
+ */
+void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+{
+	struct mcs_spinlock *prev, *next, *node;
+	u32 new, old, tail;
+	int idx;
+
+	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
+
+	node = this_cpu_ptr(&mcs_nodes[0]);
+	idx = node->count++;
+	tail = encode_tail(smp_processor_id(), idx);
+
+	node += idx;
+	node->locked = 0;
+	node->next = NULL;
+
+	/*
+	 * trylock || xchg(lock, node)
+	 *
+	 * 0,0 -> 0,1 ; trylock
+	 * p,x -> n,x ; prev = xchg(lock, node)
+	 */
+	for (;;) {
+		new = _Q_LOCKED_VAL;
+		if (val)
+			new = tail | (val & _Q_LOCKED_MASK);
+
+		old = atomic_cmpxchg(&lock->val, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	/*
+	 * we won the trylock; forget about queueing.
+	 */
+	if (new == _Q_LOCKED_VAL)
+		goto release;
+
+	/*
+	 * if there was a previous node; link it and wait.
+	 */
+	if (old & ~_Q_LOCKED_MASK) {
+		prev = decode_tail(old);
+		ACCESS_ONCE(prev->next) = node;
+
+		arch_mcs_spin_lock_contended(&node->locked);
+	}
+
+	/*
+	 * we're at the head of the waitqueue, wait for the owner to go away.
+	 *
+	 * *,x -> *,0
+	 */
+	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
+		cpu_relax();
+
+	/*
+	 * claim the lock:
+	 *
+	 * n,0 -> 0,1 : lock, uncontended
+	 * *,0 -> *,1 : lock, contended
+	 */
+	for (;;) {
+		new = _Q_LOCKED_VAL;
+		if (val != tail)
+			new |= val;
+
+		old = atomic_cmpxchg(&lock->val, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	/*
+	 * contended path; wait for next, release.
+	 */
+	if (new != _Q_LOCKED_VAL) {
+		while (!(next = ACCESS_ONCE(node->next)))
+			cpu_relax();
+
+		arch_mcs_spin_unlock_contended(&next->locked);
+	}
+
+release:
+	/*
+	 * release the node
+	 */
+	this_cpu_dec(mcs_nodes[0].count);
+}
+EXPORT_SYMBOL(queue_spin_lock_slowpath);
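
For reference, a minimal usage sketch of the generic API added by this patch
(illustration only: struct qspinlock, __ARCH_SPIN_LOCK_UNLOCKED,
queue_spin_lock() and queue_spin_unlock() come from the patch above; the
demo_* names are hypothetical):

	#include <asm-generic/qspinlock.h>

	static struct qspinlock demo_lock = __ARCH_SPIN_LOCK_UNLOCKED;

	static void demo_critical_section(void)
	{
		queue_spin_lock(&demo_lock);	/* fast path: cmpxchg 0 -> _Q_LOCKED_VAL */
		/* ... accesses protected by demo_lock ... */
		queue_spin_unlock(&demo_lock);	/* releases the lock */
	}

In practice an architecture reaches this through the arch_spin_*() wrappers
defined at the bottom of qspinlock.h rather than by calling the queue_spin_*()
functions directly.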



^ permalink raw reply	[flat|nested] 192+ messages in thread

* [PATCH 02/11] qspinlock, x86: Enable x86-64 to use queue spinlock
  2014-06-15 12:46 ` Peter Zijlstra
                   ` (4 preceding siblings ...)
  (?)
@ 2014-06-15 12:46 ` Peter Zijlstra
  -1 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-06-15 12:46 UTC (permalink / raw)
  To: Waiman.Long, tglx, mingo
  Cc: linux-arch, linux-kernel, virtualization, xen-devel, kvm,
	paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod, Peter Zijlstra

[-- Attachment #1: waiman_long-qspinlock_x86-enable_x86-64_to_use_queue_spinlock.patch --]
[-- Type: text/plain, Size: 3329 bytes --]

From: Waiman Long <Waiman.Long@hp.com>

This patch makes the necessary changes at the x86 architecture-specific
layer to enable the use of the queue spinlock for x86-64. As x86-32
machines are typically not multi-socket, the benefit of the queue
spinlock may not be apparent there, so it is not enabled.

Currently, there are some incompatibilities between the para-virtualized
spinlock code (which hard-codes the use of the ticket spinlock) and the
queue spinlock. Therefore, the use of the queue spinlock is disabled when
the para-virtualized spinlock is enabled.

The arch/x86/include/asm/qspinlock.h header file includes some
x86-specific optimizations which make the queue spinlock code perform
better than the generic implementation.
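
As a rough illustration of that optimization (the value below is only an
example, using the field layout from qspinlock_types.h): with one waiter
queued and the lock held, lock->val might be

	0x00001a01	/* tail 0x1a00 | _Q_LOCKED_VAL */

and since only bits 0-7 ever hold the lock byte, the single byte store

	ACCESS_ONCE(*(u8 *)lock) = 0;	/* -> 0x00001a00 */

produces the same result as the generic
atomic_sub(_Q_LOCKED_VAL, &lock->val), but without a locked
read-modify-write; this is safe on x86 outside the OOSTORE/PPRO_FENCE
configurations that the header explicitly excludes.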

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 arch/x86/Kconfig                      |    1 +
 arch/x86/include/asm/qspinlock.h      |   25 +++++++++++++++++++++++++
 arch/x86/include/asm/spinlock.h       |    5 +++++
 arch/x86/include/asm/spinlock_types.h |    4 ++++
 4 files changed, 35 insertions(+)
 create mode 100644 arch/x86/include/asm/qspinlock.h

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -29,6 +29,7 @@ config X86
 	select ARCH_SUPPORTS_NUMA_BALANCING if X86_64
 	select ARCH_SUPPORTS_INT128 if X86_64
 	select ARCH_WANTS_PROT_NUMA_PROT_NONE
+	select ARCH_USE_QUEUE_SPINLOCK
 	select HAVE_IDE
 	select HAVE_OPROFILE
 	select HAVE_PCSPKR_PLATFORM
--- /dev/null
+++ b/arch/x86/include/asm/qspinlock.h
@@ -0,0 +1,25 @@
+#ifndef _ASM_X86_QSPINLOCK_H
+#define _ASM_X86_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+#if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)
+
+#define	queue_spin_unlock queue_spin_unlock
+/**
+ * queue_spin_unlock - release a queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ *
+ * An effective smp_store_release() on the least-significant byte.
+ */
+static inline void queue_spin_unlock(struct qspinlock *lock)
+{
+	barrier();
+	ACCESS_ONCE(*(u8 *)lock) = 0;
+}
+
+#endif /* !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE */
+
+#include <asm-generic/qspinlock.h>
+
+#endif /* _ASM_X86_QSPINLOCK_H */
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -42,6 +42,10 @@
 extern struct static_key paravirt_ticketlocks_enabled;
 static __always_inline bool static_key_false(struct static_key *key);
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+#include <asm/qspinlock.h>
+#else
+
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 
 static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
@@ -180,6 +184,7 @@ static __always_inline void arch_spin_lo
 {
 	arch_spin_lock(lock);
 }
+#endif /* CONFIG_QUEUE_SPINLOCK */
 
 static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -23,6 +23,9 @@ typedef u32 __ticketpair_t;
 
 #define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+#include <asm-generic/qspinlock_types.h>
+#else
 typedef struct arch_spinlock {
 	union {
 		__ticketpair_t head_tail;
@@ -33,6 +36,7 @@ typedef struct arch_spinlock {
 } arch_spinlock_t;
 
 #define __ARCH_SPIN_LOCK_UNLOCKED	{ { 0 } }
+#endif /* CONFIG_QUEUE_SPINLOCK */
 
 #ifdef CONFIG_QUEUE_RWLOCK
 #include <asm-generic/qrwlock_types.h>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* [PATCH 03/11] qspinlock: Add pending bit
  2014-06-15 12:46 ` Peter Zijlstra
@ 2014-06-15 12:47   ` Peter Zijlstra
  -1 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-06-15 12:47 UTC (permalink / raw)
  To: Waiman.Long, tglx, mingo
  Cc: linux-arch, linux-kernel, virtualization, xen-devel, kvm,
	paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod, Peter Zijlstra

[-- Attachment #1: peter_zijlstra-qspinlock-add_pending_bit.patch --]
[-- Type: text/plain, Size: 6468 bytes --]

Because the qspinlock otherwise needs to touch a second cacheline (the
per-cpu MCS queue node), add a pending bit and allow a single in-word
spinner before we punt to that second cacheline.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 include/asm-generic/qspinlock_types.h |   12 ++-
 kernel/locking/qspinlock.c            |  109 +++++++++++++++++++++++++++-------
 2 files changed, 97 insertions(+), 24 deletions(-)

--- a/include/asm-generic/qspinlock_types.h
+++ b/include/asm-generic/qspinlock_types.h
@@ -39,8 +39,9 @@ typedef struct qspinlock {
  * Bitfields in the atomic value:
  *
  *  0- 7: locked byte
- *  8- 9: tail index
- * 10-31: tail cpu (+1)
+ *     8: pending
+ *  9-10: tail index
+ * 11-31: tail cpu (+1)
  */
 #define	_Q_SET_MASK(type)	(((1U << _Q_ ## type ## _BITS) - 1)\
 				      << _Q_ ## type ## _OFFSET)
@@ -48,7 +49,11 @@ typedef struct qspinlock {
 #define _Q_LOCKED_BITS		8
 #define _Q_LOCKED_MASK		_Q_SET_MASK(LOCKED)
 
-#define _Q_TAIL_IDX_OFFSET	(_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
+#define _Q_PENDING_OFFSET	(_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
+#define _Q_PENDING_BITS		1
+#define _Q_PENDING_MASK		_Q_SET_MASK(PENDING)
+
+#define _Q_TAIL_IDX_OFFSET	(_Q_PENDING_OFFSET + _Q_PENDING_BITS)
 #define _Q_TAIL_IDX_BITS	2
 #define _Q_TAIL_IDX_MASK	_Q_SET_MASK(TAIL_IDX)
 
@@ -57,5 +62,6 @@ typedef struct qspinlock {
 #define _Q_TAIL_CPU_MASK	_Q_SET_MASK(TAIL_CPU)
 
 #define _Q_LOCKED_VAL		(1U << _Q_LOCKED_OFFSET)
+#define _Q_PENDING_VAL		(1U << _Q_PENDING_OFFSET)
 
 #endif /* __ASM_GENERIC_QSPINLOCK_TYPES_H */
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -83,24 +83,28 @@ static inline struct mcs_spinlock *decod
 	return per_cpu_ptr(&mcs_nodes[idx], cpu);
 }
 
+#define _Q_LOCKED_PENDING_MASK	(_Q_LOCKED_MASK | _Q_PENDING_MASK)
+
 /**
  * queue_spin_lock_slowpath - acquire the queue spinlock
  * @lock: Pointer to queue spinlock structure
  * @val: Current value of the queue spinlock 32-bit word
  *
- * (queue tail, lock bit)
- *
- *              fast      :    slow                                  :    unlock
- *                        :                                          :
- * uncontended  (0,0)   --:--> (0,1) --------------------------------:--> (*,0)
- *                        :       | ^--------.                    /  :
- *                        :       v           \                   |  :
- * uncontended            :    (n,x) --+--> (n,0)                 |  :
- *   queue                :       | ^--'                          |  :
- *                        :       v                               |  :
- * contended              :    (*,x) --+--> (*,0) -----> (*,1) ---'  :
- *   queue                :         ^--'                             :
+ * (queue tail, pending bit, lock bit)
  *
+ *              fast     :    slow                                  :    unlock
+ *                       :                                          :
+ * uncontended  (0,0,0) -:--> (0,0,1) ------------------------------:--> (*,*,0)
+ *                       :       | ^--------.------.             /  :
+ *                       :       v           \      \            |  :
+ * pending               :    (0,1,1) +--> (0,1,0)   \           |  :
+ *                       :       | ^--'              |           |  :
+ *                       :       v                   |           |  :
+ * uncontended           :    (n,x,y) +--> (n,0,0) --'           |  :
+ *   queue               :       | ^--'                          |  :
+ *                       :       v                               |  :
+ * contended             :    (*,x,y) +--> (*,0,0) ---> (*,0,1) -'  :
+ *   queue               :         ^--'                             :
  */
 void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 {
@@ -110,6 +114,65 @@ void queue_spin_lock_slowpath(struct qsp
 
 	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
 
+	/*
+	 * trylock || pending
+	 *
+	 * 0,0,0 -> 0,0,1 ; trylock
+	 * 0,0,1 -> 0,1,1 ; pending
+	 */
+	for (;;) {
+		/*
+		 * If we observe any contention; queue.
+		 */
+		if (val & ~_Q_LOCKED_MASK)
+			goto queue;
+
+		new = _Q_LOCKED_VAL;
+		if (val == new)
+			new |= _Q_PENDING_VAL;
+
+		old = atomic_cmpxchg(&lock->val, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	/*
+	 * we won the trylock
+	 */
+	if (new == _Q_LOCKED_VAL)
+		return;
+
+	/*
+	 * we're pending, wait for the owner to go away.
+	 *
+	 * *,1,1 -> *,1,0
+	 */
+	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
+		cpu_relax();
+
+	/*
+	 * take ownership and clear the pending bit.
+	 *
+	 * *,1,0 -> *,0,1
+	 */
+	for (;;) {
+		new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
+
+		old = atomic_cmpxchg(&lock->val, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+	return;
+
+	/*
+	 * End of pending bit optimistic spinning and beginning of MCS
+	 * queuing.
+	 */
+queue:
 	node = this_cpu_ptr(&mcs_nodes[0]);
 	idx = node->count++;
 	tail = encode_tail(smp_processor_id(), idx);
@@ -119,15 +182,18 @@ void queue_spin_lock_slowpath(struct qsp
 	node->next = NULL;
 
 	/*
+	 * we already touched the queueing cacheline; don't bother with pending
+	 * stuff.
+	 *
 	 * trylock || xchg(lock, node)
 	 *
-	 * 0,0 -> 0,1 ; trylock
-	 * p,x -> n,x ; prev = xchg(lock, node)
+	 * 0,0,0 -> 0,0,1 ; trylock
+	 * p,y,x -> n,y,x ; prev = xchg(lock, node)
 	 */
 	for (;;) {
 		new = _Q_LOCKED_VAL;
 		if (val)
-			new = tail | (val & _Q_LOCKED_MASK);
+			new = tail | (val & _Q_LOCKED_PENDING_MASK);
 
 		old = atomic_cmpxchg(&lock->val, val, new);
 		if (old == val)
@@ -145,7 +211,7 @@ void queue_spin_lock_slowpath(struct qsp
 	/*
 	 * if there was a previous node; link it and wait.
 	 */
-	if (old & ~_Q_LOCKED_MASK) {
+	if (old & ~_Q_LOCKED_PENDING_MASK) {
 		prev = decode_tail(old);
 		ACCESS_ONCE(prev->next) = node;
 
@@ -153,18 +219,19 @@ void queue_spin_lock_slowpath(struct qsp
 	}
 
 	/*
-	 * we're at the head of the waitqueue, wait for the owner to go away.
+	 * we're at the head of the waitqueue, wait for the owner & pending to
+	 * go away.
 	 *
-	 * *,x -> *,0
+	 * *,x,y -> *,0,0
 	 */
-	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
+	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_PENDING_MASK)
 		cpu_relax();
 
 	/*
 	 * claim the lock:
 	 *
-	 * n,0 -> 0,1 : lock, uncontended
-	 * *,0 -> *,1 : lock, contended
+	 * n,0,0 -> 0,0,1 : lock, uncontended
+	 * *,0,0 -> *,0,1 : lock, contended
 	 */
 	for (;;) {
 		new = _Q_LOCKED_VAL;


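To help follow the (queue tail, pending bit, lock bit) triples in the diagram and comments above, here is a small sketch (not part of the patch; the helper name is made up) of how a raw lock word splits into those three fields using the masks introduced here:

/*
 * Illustrative decode of the 32-bit lock word into the triple used in
 * the comments: (queue tail, pending bit, locked byte).  With this
 * patch _Q_PENDING_BITS == 1, so pending is bit 8 and the tail code
 * occupies bits 9-31.
 */
static inline void q_decode(u32 val, u32 *tail, u32 *pending, u32 *locked)
{
	*locked  = val & _Q_LOCKED_MASK;
	*pending = (val & _Q_PENDING_MASK) >> _Q_PENDING_OFFSET;
	*tail    = val >> _Q_TAIL_IDX_OFFSET;
}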

^ permalink raw reply	[flat|nested] 192+ messages in thread

* [PATCH 04/11] qspinlock: Extract out the exchange of tail code word
  2014-06-15 12:46 ` Peter Zijlstra
                   ` (8 preceding siblings ...)
  (?)
@ 2014-06-15 12:47 ` Peter Zijlstra
  2014-06-17 20:55     ` Konrad Rzeszutek Wilk
  2014-06-17 20:55   ` Konrad Rzeszutek Wilk
  -1 siblings, 2 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-06-15 12:47 UTC (permalink / raw)
  To: Waiman.Long, tglx, mingo
  Cc: linux-arch, linux-kernel, virtualization, xen-devel, kvm,
	paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod, Peter Zijlstra

[-- Attachment #1: waiman_long-qspinlock-extract_out_the_exchange_of_tail_code_word.patch --]
[-- Type: text/plain, Size: 3004 bytes --]

From: Waiman Long <Waiman.Long@hp.com>

This patch extracts the logic for the exchange of new and previous tail
code words into a new xchg_tail() function which can be optimized in a
later patch.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 include/asm-generic/qspinlock_types.h |    2 +
 kernel/locking/qspinlock.c            |   58 +++++++++++++++++++++-------------
 2 files changed, 38 insertions(+), 22 deletions(-)

--- a/include/asm-generic/qspinlock_types.h
+++ b/include/asm-generic/qspinlock_types.h
@@ -61,6 +61,8 @@ typedef struct qspinlock {
 #define _Q_TAIL_CPU_BITS	(32 - _Q_TAIL_CPU_OFFSET)
 #define _Q_TAIL_CPU_MASK	_Q_SET_MASK(TAIL_CPU)
 
+#define _Q_TAIL_MASK		(_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK)
+
 #define _Q_LOCKED_VAL		(1U << _Q_LOCKED_OFFSET)
 #define _Q_PENDING_VAL		(1U << _Q_PENDING_OFFSET)
 
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -86,6 +86,31 @@ static inline struct mcs_spinlock *decod
 #define _Q_LOCKED_PENDING_MASK	(_Q_LOCKED_MASK | _Q_PENDING_MASK)
 
 /**
+ * xchg_tail - Put in the new queue tail code word & retrieve previous one
+ * @lock : Pointer to queue spinlock structure
+ * @tail : The new queue tail code word
+ * Return: The previous queue tail code word
+ *
+ * xchg(lock, tail)
+ *
+ * p,*,* -> n,*,* ; prev = xchg(lock, node)
+ */
+static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
+{
+	u32 old, new, val = atomic_read(&lock->val);
+
+	for (;;) {
+		new = (val & _Q_LOCKED_PENDING_MASK) | tail;
+		old = atomic_cmpxchg(&lock->val, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+	return old;
+}
+
+/**
  * queue_spin_lock_slowpath - acquire the queue spinlock
  * @lock: Pointer to queue spinlock structure
  * @val: Current value of the queue spinlock 32-bit word
@@ -182,36 +207,25 @@ void queue_spin_lock_slowpath(struct qsp
 	node->next = NULL;
 
 	/*
-	 * we already touched the queueing cacheline; don't bother with pending
-	 * stuff.
-	 *
-	 * trylock || xchg(lock, node)
-	 *
-	 * 0,0,0 -> 0,0,1 ; trylock
-	 * p,y,x -> n,y,x ; prev = xchg(lock, node)
+	 * We touched a (possibly) cold cacheline in the per-cpu queue node;
+	 * attempt the trylock once more in the hope someone let go while we
+	 * weren't watching.
 	 */
-	for (;;) {
-		new = _Q_LOCKED_VAL;
-		if (val)
-			new = tail | (val & _Q_LOCKED_PENDING_MASK);
-
-		old = atomic_cmpxchg(&lock->val, val, new);
-		if (old == val)
-			break;
-
-		val = old;
-	}
+	if (queue_spin_trylock(lock))
+		goto release;
 
 	/*
-	 * we won the trylock; forget about queueing.
+	 * we already touched the queueing cacheline; don't bother with pending
+	 * stuff.
+	 *
+	 * p,*,* -> n,*,*
 	 */
-	if (new == _Q_LOCKED_VAL)
-		goto release;
+	old = xchg_tail(lock, tail);
 
 	/*
 	 * if there was a previous node; link it and wait.
 	 */
-	if (old & ~_Q_LOCKED_PENDING_MASK) {
+	if (old & _Q_TAIL_MASK) {
 		prev = decode_tail(old);
 		ACCESS_ONCE(prev->next) = node;
 


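A short usage sketch (illustration only; the helper name is made up and it merely restates the hunk above): xchg_tail() publishes our tail code and returns the previous one, and a non-zero previous tail identifies the predecessor whose MCS node we link behind.

/*
 * Illustrative caller-side contract of xchg_tail(): a zero return
 * means the queue was empty, non-zero names the predecessor whose
 * MCS node we must link behind.
 */
static inline void q_link_behind_prev(struct qspinlock *lock, u32 tail,
				      struct mcs_spinlock *node)
{
	u32 old = xchg_tail(lock, tail);	/* p,*,* -> n,*,* */

	if (old & _Q_TAIL_MASK) {
		struct mcs_spinlock *prev = decode_tail(old);

		ACCESS_ONCE(prev->next) = node;	/* publish ourselves */
	}
}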

^ permalink raw reply	[flat|nested] 192+ messages in thread

* [PATCH 05/11] qspinlock: Optimize for smaller NR_CPUS
  2014-06-15 12:46 ` Peter Zijlstra
@ 2014-06-15 12:47   ` Peter Zijlstra
  -1 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-06-15 12:47 UTC (permalink / raw)
  To: Waiman.Long, tglx, mingo
  Cc: linux-arch, linux-kernel, virtualization, xen-devel, kvm,
	paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod, Peter Zijlstra

[-- Attachment #1: peter_zijlstra-qspinlock-optimize_for_smaller_nr_cpus.patch --]
[-- Type: text/plain, Size: 5811 bytes --]

From: Peter Zijlstra <peterz@infradead.org>

When we allow for a max NR_CPUS < 2^14 we can optimize the pending
wait-acquire and the xchg_tail() operations.

By growing the pending bit to a byte, we reduce the tail to 16 bits.
This means we can use xchg16 for the tail part and do away with all
the repeated cmpxchg() operations.

This in turn allows us to unconditionally acquire; the locked state
as observed by the wait loops cannot change. And because both locked
and pending are now a full byte we can use simple stores for the
state transition, obviating one atomic operation entirely.

All this is horribly broken on Alpha pre EV56 (and any other arch that
cannot do single-copy atomic byte stores).

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 include/asm-generic/qspinlock_types.h |   13 ++++
 kernel/locking/qspinlock.c            |  103 ++++++++++++++++++++++++++++++----
 2 files changed, 106 insertions(+), 10 deletions(-)

--- a/include/asm-generic/qspinlock_types.h
+++ b/include/asm-generic/qspinlock_types.h
@@ -38,6 +38,14 @@ typedef struct qspinlock {
 /*
  * Bitfields in the atomic value:
  *
+ * When NR_CPUS < 16K
+ *  0- 7: locked byte
+ *     8: pending
+ *  9-15: not used
+ * 16-17: tail index
+ * 18-31: tail cpu (+1)
+ *
+ * When NR_CPUS >= 16K
  *  0- 7: locked byte
  *     8: pending
  *  9-10: tail index
@@ -50,7 +58,11 @@ typedef struct qspinlock {
 #define _Q_LOCKED_MASK		_Q_SET_MASK(LOCKED)
 
 #define _Q_PENDING_OFFSET	(_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
+#if CONFIG_NR_CPUS < (1U << 14)
+#define _Q_PENDING_BITS		8
+#else
 #define _Q_PENDING_BITS		1
+#endif
 #define _Q_PENDING_MASK		_Q_SET_MASK(PENDING)
 
 #define _Q_TAIL_IDX_OFFSET	(_Q_PENDING_OFFSET + _Q_PENDING_BITS)
@@ -61,6 +73,7 @@ typedef struct qspinlock {
 #define _Q_TAIL_CPU_BITS	(32 - _Q_TAIL_CPU_OFFSET)
 #define _Q_TAIL_CPU_MASK	_Q_SET_MASK(TAIL_CPU)
 
+#define _Q_TAIL_OFFSET		_Q_TAIL_IDX_OFFSET
 #define _Q_TAIL_MASK		(_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK)
 
 #define _Q_LOCKED_VAL		(1U << _Q_LOCKED_OFFSET)
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -22,6 +22,7 @@
 #include <linux/percpu.h>
 #include <linux/hardirq.h>
 #include <linux/mutex.h>
+#include <asm/byteorder.h>
 #include <asm/qspinlock.h>
 
 /*
@@ -48,6 +49,9 @@
  * We can further change the first spinner to spin on a bit in the lock word
  * instead of its node; whereby avoiding the need to carry a node from lock to
  * unlock, and preserving API.
+ *
+ * N.B. The current implementation only supports architectures that allow
+ *      atomic operations on smaller 8-bit and 16-bit data types.
  */
 
 #include "mcs_spinlock.h"
@@ -85,6 +89,87 @@ static inline struct mcs_spinlock *decod
 
 #define _Q_LOCKED_PENDING_MASK	(_Q_LOCKED_MASK | _Q_PENDING_MASK)
 
+/*
+ * By using the whole 2nd least significant byte for the pending bit, we
+ * can allow better optimization of the lock acquisition for the pending
+ * bit holder.
+ */
+#if _Q_PENDING_BITS == 8
+
+struct __qspinlock {
+	union {
+		atomic_t val;
+		struct {
+#ifdef __LITTLE_ENDIAN
+			u16	locked_pending;
+			u16	tail;
+#else
+			u16	tail;
+			u16	locked_pending;
+#endif
+		};
+	};
+};
+
+/**
+ * clear_pending_set_locked - take ownership and clear the pending bit.
+ * @lock: Pointer to queue spinlock structure
+ * @val : Current value of the queue spinlock 32-bit word
+ *
+ * *,1,0 -> *,0,1
+ *
+ * Lock stealing is not allowed if this function is used.
+ */
+static __always_inline void
+clear_pending_set_locked(struct qspinlock *lock, u32 val)
+{
+	struct __qspinlock *l = (void *)lock;
+
+	ACCESS_ONCE(l->locked_pending) = _Q_LOCKED_VAL;
+}
+
+/*
+ * xchg_tail - Put in the new queue tail code word & retrieve previous one
+ * @lock : Pointer to queue spinlock structure
+ * @tail : The new queue tail code word
+ * Return: The previous queue tail code word
+ *
+ * xchg(lock, tail)
+ *
+ * p,*,* -> n,*,* ; prev = xchg(lock, node)
+ */
+static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
+{
+	struct __qspinlock *l = (void *)lock;
+
+	return (u32)xchg(&l->tail, tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
+}
+
+#else /* _Q_PENDING_BITS == 8 */
+
+/**
+ * clear_pending_set_locked - take ownership and clear the pending bit.
+ * @lock: Pointer to queue spinlock structure
+ * @val : Current value of the queue spinlock 32-bit word
+ *
+ * *,1,0 -> *,0,1
+ */
+static __always_inline void
+clear_pending_set_locked(struct qspinlock *lock, u32 val)
+{
+	u32 new, old;
+
+	for (;;) {
+		new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
+
+		old = atomic_cmpxchg(&lock->val, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+}
+
 /**
  * xchg_tail - Put in the new queue tail code word & retrieve previous one
  * @lock : Pointer to queue spinlock structure
@@ -109,6 +194,7 @@ static __always_inline u32 xchg_tail(str
 	}
 	return old;
 }
+#endif /* _Q_PENDING_BITS == 8 */
 
 /**
  * queue_spin_lock_slowpath - acquire the queue spinlock
@@ -173,8 +259,13 @@ void queue_spin_lock_slowpath(struct qsp
 	 * we're pending, wait for the owner to go away.
 	 *
 	 * *,1,1 -> *,1,0
+	 *
+	 * this wait loop must be a load-acquire such that we match the
+	 * store-release that clears the locked bit and create lock
+	 * sequentiality; this because not all clear_pending_set_locked()
+	 * implementations imply full barriers.
 	 */
-	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
+	while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
 		cpu_relax();
 
 	/*
@@ -182,15 +273,7 @@ void queue_spin_lock_slowpath(struct qsp
 	 *
 	 * *,1,0 -> *,0,1
 	 */
-	for (;;) {
-		new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
-
-		old = atomic_cmpxchg(&lock->val, val, new);
-		if (old == val)
-			break;
-
-		val = old;
-	}
+	clear_pending_set_locked(lock, val);
 	return;
 
 	/*


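A small compile-time illustration (not part of the patch; the helper name is made up) of why the u16 overlay and the xchg16 on ->tail are legal when CONFIG_NR_CPUS < 16K: the low halfword is then exactly locked+pending and the high halfword is the tail code.

/*
 * Illustrative checks only, assuming _Q_PENDING_BITS == 8
 * (CONFIG_NR_CPUS < 16K); they restate the layout relied upon by
 * struct __qspinlock above.
 */
static inline void q_layout_checks(void)
{
	BUILD_BUG_ON(_Q_PENDING_OFFSET != 8);
	BUILD_BUG_ON(_Q_TAIL_OFFSET != 16);
	BUILD_BUG_ON((_Q_LOCKED_MASK | _Q_PENDING_MASK) != 0xffffU);
}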

^ permalink raw reply	[flat|nested] 192+ messages in thread

* [PATCH 06/11] qspinlock: Optimize pending bit
  2014-06-15 12:46 ` Peter Zijlstra
                   ` (14 preceding siblings ...)
  (?)
@ 2014-06-15 12:47 ` Peter Zijlstra
  2014-06-18 11:42   ` Paolo Bonzini
  2014-06-18 11:42     ` Paolo Bonzini
  -1 siblings, 2 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-06-15 12:47 UTC (permalink / raw)
  To: Waiman.Long, tglx, mingo
  Cc: linux-arch, linux-kernel, virtualization, xen-devel, kvm,
	paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod, Peter Zijlstra

[-- Attachment #1: peterz-prolong-pending.patch --]
[-- Type: text/plain, Size: 1127 bytes --]

XXX: merge into the pending bit patch..

It is possible to observe the pending bit without the locked bit when
the last owner has just released but the pending owner has not yet
taken ownership.

In this case we would normally queue -- because the pending bit is
already taken. However, the pending bit is guaranteed to be released
'soon', so wait for it instead and avoid queueing.
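
A minimal userspace sketch of the check this patch adds (not kernel
code): _Q_PENDING_VAL is the only value with pending set and both the
locked byte and the tail clear, so it uniquely identifies the in-flight
hand-over; any other non-zero value means a real owner or queue exists
and we fall through to the normal path. The thread here just stands in
for the pending owner completing the 0,1,0 -> 0,0,1 transition.

  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>
  #include <unistd.h>

  #define _Q_LOCKED_VAL   (1U << 0)
  #define _Q_PENDING_VAL  (1U << 8)

  static _Atomic unsigned int lockval = _Q_PENDING_VAL;   /* 0,1,0 */

  static void *pending_owner(void *arg)
  {
          (void)arg;
          usleep(1000);                           /* finish the hand-over */
          atomic_store(&lockval, _Q_LOCKED_VAL);  /* 0,1,0 -> 0,0,1 */
          return NULL;
  }

  int main(void)
  {
          pthread_t t;
          unsigned int val;

          pthread_create(&t, NULL, pending_owner, NULL);

          val = atomic_load(&lockval);
          if (val == _Q_PENDING_VAL) {            /* hand-over in progress */
                  while ((val = atomic_load(&lockval)) == _Q_PENDING_VAL)
                          ;                       /* cpu_relax() in the kernel */
          }
          printf("saw %#x, no need to queue\n", val);
          pthread_join(t, NULL);
          return 0;
  }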

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 kernel/locking/qspinlock.c |   10 ++++++++++
 1 file changed, 10 insertions(+)

Index: linux-2.6/kernel/locking/qspinlock.c
===================================================================
--- linux-2.6.orig/kernel/locking/qspinlock.c
+++ linux-2.6/kernel/locking/qspinlock.c
@@ -226,6 +226,16 @@ void queue_spin_lock_slowpath(struct qsp
 	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
 
 	/*
+	 * wait for in-progress pending->locked hand-overs
+	 *
+	 * 0,1,0 -> 0,0,1
+	 */
+	if (val == _Q_PENDING_VAL) {
+		while ((val = atomic_read(&lock->val)) == _Q_PENDING_VAL)
+			cpu_relax();
+	}
+
+	/*
 	 * trylock || pending
 	 *
 	 * 0,0,0 -> 0,0,1 ; trylock



^ permalink raw reply	[flat|nested] 192+ messages in thread

* [PATCH 07/11] qspinlock: Use a simple write to grab the lock, if applicable
  2014-06-15 12:46 ` Peter Zijlstra
@ 2014-06-15 12:47   ` Peter Zijlstra
  -1 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-06-15 12:47 UTC (permalink / raw)
  To: Waiman.Long, tglx, mingo
  Cc: linux-arch, linux-kernel, virtualization, xen-devel, kvm,
	paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod, Peter Zijlstra

[-- Attachment #1: waiman_long-qspinlock-use_a_simple_write_to_grab_the_lock_if_applicable.patch --]
[-- Type: text/plain, Size: 5953 bytes --]

From: Waiman Long <Waiman.Long@hp.com>

Currently, atomic_cmpxchg() is used to get the lock. However, this is
not really necessary if there is more than one task in the queue and
the queue head doesn't need to reset the queue code word. For that case,
a simple write to set the lock bit is enough as the queue head will
be the only one eligible to get the lock as long as it checks that
both the lock and pending bits are not set. The current pending bit
waiting code ensures that the pending bit will not be set once the
queue code word (tail) in the lock is set.
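
A rough userspace rendering of the two cases (not the kernel code;
little-endian layout and GCC/Clang atomic builtins assumed): when the
lock word's tail no longer matches our own tail code, somebody is
queued behind us and the tail must survive, so a plain byte store to
the locked byte is enough; only when we are alone does the cmpxchg
remain, because the tail has to be cleared in the same operation.

  #include <stdint.h>
  #include <stdio.h>

  #define _Q_LOCKED_VAL   1U

  union qword {
          uint32_t val;
          struct {                        /* little-endian */
                  uint8_t  locked;
                  uint8_t  pending;
                  uint16_t tail;
          };
  };

  /* *,*,0 -> *,0,1 : others are queued behind us, keep the tail */
  static void set_locked(union qword *q)
  {
          __atomic_store_n(&q->locked, _Q_LOCKED_VAL, __ATOMIC_RELAXED);
  }

  int main(void)
  {
          union qword q = { .val = 5u << 16 };    /* a later waiter's tail */
          uint32_t our_tail = 3u << 16;
          uint32_t val = __atomic_load_n(&q.val, __ATOMIC_ACQUIRE);

          if (val != our_tail) {
                  set_locked(&q);                 /* simple byte store */
          } else {
                  /* alone: clear the tail and lock in one cmpxchg */
                  __atomic_compare_exchange_n(&q.val, &val, _Q_LOCKED_VAL,
                                              0, __ATOMIC_ACQUIRE,
                                              __ATOMIC_RELAXED);
          }
          printf("lock word: %#x\n", (unsigned)q.val);    /* 0x50001 */
          return 0;
  }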

With that change, there is some slight improvement in the performance
of the queue spinlock in the 5M loop micro-benchmark run on a 4-socket
Westmere-EX machine, as shown in the tables below.

		[Standalone/Embedded - same node]
  # of tasks	Before patch	After patch	%Change
  ----------	-----------	----------	-------
       3	 2324/2321	2248/2265	 -3%/-2%
       4	 2890/2896	2819/2831	 -2%/-2%
       5	 3611/3595	3522/3512	 -2%/-2%
       6	 4281/4276	4173/4160	 -3%/-3%
       7	 5018/5001	4875/4861	 -3%/-3%
       8	 5759/5750	5563/5568	 -3%/-3%

		[Standalone/Embedded - different nodes]
  # of tasks	Before patch	After patch	%Change
  ----------	-----------	----------	-------
       3	12242/12237	12087/12093	 -1%/-1%
       4	10688/10696	10507/10521	 -2%/-2%

It was also found that this change produced a much bigger performance
improvement on the newer IvyBridge-EX chip and essentially closed
the performance gap between the ticket spinlock and the queue spinlock.

The disk workload of the AIM7 benchmark was run on a 4-socket
Westmere-EX machine with both ext4 and xfs RAM disks at 3000 users
on a 3.14 based kernel. The results of the test runs were:

                AIM7 XFS Disk Test
  kernel                 JPM    Real Time   Sys Time    Usr Time
  -----                  ---    ---------   --------    --------
  ticketlock            5678233    3.17       96.61       5.81
  qspinlock             5750799    3.13       94.83       5.97

                AIM7 EXT4 Disk Test
  kernel                 JPM    Real Time   Sys Time    Usr Time
  -----                  ---    ---------   --------    --------
  ticketlock            1114551   16.15      509.72       7.11
  qspinlock             2184466    8.24      232.99       6.01

The ext4 filesystem run had a much higher spinlock contention than
the xfs filesystem run.

The "ebizzy -m" test was also run with the following results:

  kernel               records/s  Real Time   Sys Time    Usr Time
  -----                ---------  ---------   --------    --------
  ticketlock             2075       10.00      216.35       3.49
  qspinlock              3023       10.00      198.20       4.80

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 kernel/locking/qspinlock.c |   59 ++++++++++++++++++++++++++++++++-------------
 1 file changed, 43 insertions(+), 16 deletions(-)

--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -93,24 +93,33 @@ static inline struct mcs_spinlock *decod
  * By using the whole 2nd least significant byte for the pending bit, we
  * can allow better optimization of the lock acquisition for the pending
  * bit holder.
+ *
+ * This internal structure is also used by the set_locked function which
+ * is not restricted to _Q_PENDING_BITS == 8.
  */
-#if _Q_PENDING_BITS == 8
-
 struct __qspinlock {
 	union {
 		atomic_t val;
-		struct {
 #ifdef __LITTLE_ENDIAN
+		u8	 locked;
+		struct {
 			u16	locked_pending;
 			u16	tail;
+		};
 #else
+		struct {
 			u16	tail;
 			u16	locked_pending;
-#endif
 		};
+		struct {
+			u8	reserved[3];
+			u8	locked;
+		};
+#endif
 	};
 };
 
+#if _Q_PENDING_BITS == 8
 /**
  * clear_pending_set_locked - take ownership and clear the pending bit.
  * @lock: Pointer to queue spinlock structure
@@ -197,6 +206,19 @@ static __always_inline u32 xchg_tail(str
 #endif /* _Q_PENDING_BITS == 8 */
 
 /**
+ * set_locked - Set the lock bit and own the lock
+ * @lock: Pointer to queue spinlock structure
+ *
+ * *,*,0 -> *,0,1
+ */
+static __always_inline void set_locked(struct qspinlock *lock)
+{
+	struct __qspinlock *l = (void *)lock;
+
+	ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
+}
+
+/**
  * queue_spin_lock_slowpath - acquire the queue spinlock
  * @lock: Pointer to queue spinlock structure
  * @val: Current value of the queue spinlock 32-bit word
@@ -328,10 +350,13 @@ void queue_spin_lock_slowpath(struct qsp
 	/*
 	 * we're at the head of the waitqueue, wait for the owner & pending to
 	 * go away.
+	 * Load-acquired is used here because the set_locked()
+	 * function below may not be a full memory barrier.
 	 *
 	 * *,x,y -> *,0,0
 	 */
-	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_PENDING_MASK)
+	while ((val = smp_load_acquire(&lock->val.counter)) &
+			_Q_LOCKED_PENDING_MASK)
 		cpu_relax();
 
 	/*
@@ -339,15 +364,19 @@ void queue_spin_lock_slowpath(struct qsp
 	 *
 	 * n,0,0 -> 0,0,1 : lock, uncontended
 	 * *,0,0 -> *,0,1 : lock, contended
+	 *
+	 * If the queue head is the only one in the queue (lock value == tail),
+	 * clear the tail code and grab the lock. Otherwise, we only need
+	 * to grab the lock.
 	 */
 	for (;;) {
-		new = _Q_LOCKED_VAL;
-		if (val != tail)
-			new |= val;
-
-		old = atomic_cmpxchg(&lock->val, val, new);
-		if (old == val)
+		if (val != tail) {
+			set_locked(lock);
 			break;
+		}
+		old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
+		if (old == val)
+			goto release;	/* No contention */
 
 		val = old;
 	}
@@ -355,12 +384,10 @@ void queue_spin_lock_slowpath(struct qsp
 	/*
 	 * contended path; wait for next, release.
 	 */
-	if (new != _Q_LOCKED_VAL) {
-		while (!(next = ACCESS_ONCE(node->next)))
-			cpu_relax();
+	while (!(next = ACCESS_ONCE(node->next)))
+		cpu_relax();
 
-		arch_mcs_spin_unlock_contended(&next->locked);
-	}
+	arch_mcs_spin_unlock_contended(&next->locked);
 
 release:
 	/*



^ permalink raw reply	[flat|nested] 192+ messages in thread

* [PATCH 08/11] qspinlock: Revert to test-and-set on hypervisors
  2014-06-15 12:46 ` Peter Zijlstra
@ 2014-06-15 12:47   ` Peter Zijlstra
  -1 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-06-15 12:47 UTC (permalink / raw)
  To: Waiman.Long, tglx, mingo
  Cc: linux-arch, linux-kernel, virtualization, xen-devel, kvm,
	paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod, Peter Zijlstra

[-- Attachment #1: peterz-qspinlock-test-and-set.patch --]
[-- Type: text/plain, Size: 1777 bytes --]

When we detect a hypervisor (!paravirt, see later patches), revert to
a simple test-and-set lock to avoid the horrors of queue preemption.
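
For reference, a userspace sketch of the fallback lock itself (this is
not the kernel code; the x86 hypervisor check via
static_cpu_has(X86_FEATURE_HYPERVISOR) is omitted, and C11 atomics
stand in for the kernel primitives -- it is the same cmpxchg loop):

  #include <stdatomic.h>
  #include <stdio.h>

  #define _Q_LOCKED_VAL 1U

  static _Atomic unsigned int lockval;            /* lock->val stand-in */

  static void tas_lock(void)
  {
          unsigned int expected;

          /* spin on cmpxchg(0 -> LOCKED) instead of queueing */
          do {
                  expected = 0;
          } while (!atomic_compare_exchange_weak(&lockval, &expected,
                                                 _Q_LOCKED_VAL));
  }

  static void tas_unlock(void)
  {
          atomic_store(&lockval, 0);
  }

  int main(void)
  {
          tas_lock();
          printf("locked: %u\n", atomic_load(&lockval));
          tas_unlock();
          printf("unlocked: %u\n", atomic_load(&lockval));
          return 0;
  }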

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 arch/x86/include/asm/qspinlock.h |   14 ++++++++++++++
 include/asm-generic/qspinlock.h  |    7 +++++++
 kernel/locking/qspinlock.c       |    3 +++
 3 files changed, 24 insertions(+)

--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -1,6 +1,7 @@
 #ifndef _ASM_X86_QSPINLOCK_H
 #define _ASM_X86_QSPINLOCK_H
 
+#include <asm/cpufeature.h>
 #include <asm-generic/qspinlock_types.h>
 
 #if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)
@@ -20,6 +21,19 @@ static inline void queue_spin_unlock(str
 
 #endif /* !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE */
 
+#define virt_queue_spin_lock virt_queue_spin_lock
+
+static inline bool virt_queue_spin_lock(struct qspinlock *lock)
+{
+	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
+		return false;
+
+	while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0)
+		cpu_relax();
+
+	return true;
+}
+
 #include <asm-generic/qspinlock.h>
 
 #endif /* _ASM_X86_QSPINLOCK_H */
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -98,6 +98,13 @@ static __always_inline void queue_spin_u
 }
 #endif
 
+#ifndef virt_queue_spin_lock
+static __always_inline bool virt_queue_spin_lock(struct qspinlock *lock)
+{
+	return false;
+}
+#endif
+
 /*
  * Initializier
  */
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -247,6 +247,9 @@ void queue_spin_lock_slowpath(struct qsp
 
 	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
 
+	if (virt_queue_spin_lock(lock))
+		return;
+
 	/*
 	 * wait for in-progress pending->locked hand-overs
 	 *



^ permalink raw reply	[flat|nested] 192+ messages in thread

* [PATCH 09/11] pvqspinlock, x86: Rename paravirt_ticketlocks_enabled
  2014-06-15 12:46 ` Peter Zijlstra
                   ` (19 preceding siblings ...)
  (?)
@ 2014-06-15 12:47 ` Peter Zijlstra
  2014-06-18 16:43   ` Konrad Rzeszutek Wilk
  2014-06-18 16:43     ` Konrad Rzeszutek Wilk
  -1 siblings, 2 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-06-15 12:47 UTC (permalink / raw)
  To: Waiman.Long, tglx, mingo
  Cc: linux-arch, linux-kernel, virtualization, xen-devel, kvm,
	paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod, Peter Zijlstra

[-- Attachment #1: waiman_long-pvqspinlock_x86-rename_paravirt_ticketlocks_enabled.patch --]
[-- Type: text/plain, Size: 2322 bytes --]

From: Waiman Long <Waiman.Long@hp.com>

This patch renames the paravirt_ticketlocks_enabled static key to a
more generic paravirt_spinlocks_enabled name.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 arch/x86/include/asm/spinlock.h      |    4 ++--
 arch/x86/kernel/kvm.c                |    2 +-
 arch/x86/kernel/paravirt-spinlocks.c |    4 ++--
 arch/x86/xen/spinlock.c              |    2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -39,7 +39,7 @@
 /* How long a lock should spin before we consider blocking */
 #define SPIN_THRESHOLD	(1 << 15)
 
-extern struct static_key paravirt_ticketlocks_enabled;
+extern struct static_key paravirt_spinlocks_enabled;
 static __always_inline bool static_key_false(struct static_key *key);
 
 #ifdef CONFIG_QUEUE_SPINLOCK
@@ -150,7 +150,7 @@ static inline void __ticket_unlock_slowp
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 	if (TICKET_SLOWPATH_FLAG &&
-	    static_key_false(&paravirt_ticketlocks_enabled)) {
+	    static_key_false(&paravirt_spinlocks_enabled)) {
 		arch_spinlock_t prev;
 
 		prev = *lock;
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -819,7 +819,7 @@ static __init int kvm_spinlock_init_jump
 	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
 		return 0;
 
-	static_key_slow_inc(&paravirt_ticketlocks_enabled);
+	static_key_slow_inc(&paravirt_spinlocks_enabled);
 	printk(KERN_INFO "KVM setup paravirtual spinlock\n");
 
 	return 0;
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -16,5 +16,5 @@ struct pv_lock_ops pv_lock_ops = {
 };
 EXPORT_SYMBOL(pv_lock_ops);
 
-struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;
-EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
+struct static_key paravirt_spinlocks_enabled = STATIC_KEY_INIT_FALSE;
+EXPORT_SYMBOL(paravirt_spinlocks_enabled);
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -293,7 +293,7 @@ static __init int xen_init_spinlocks_jum
 	if (!xen_domain())
 		return 0;
 
-	static_key_slow_inc(&paravirt_ticketlocks_enabled);
+	static_key_slow_inc(&paravirt_spinlocks_enabled);
 	return 0;
 }
 early_initcall(xen_init_spinlocks_jump);



^ permalink raw reply	[flat|nested] 192+ messages in thread

* [PATCH 10/11] qspinlock: Paravirt support
  2014-06-15 12:46 ` Peter Zijlstra
@ 2014-06-15 12:47   ` Peter Zijlstra
  -1 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-06-15 12:47 UTC (permalink / raw)
  To: Waiman.Long, tglx, mingo
  Cc: linux-arch, linux-kernel, virtualization, xen-devel, kvm,
	paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod, Peter Zijlstra

[-- Attachment #1: peterz-qspinlock-pv.patch --]
[-- Type: text/plain, Size: 14514 bytes --]

Add minimal paravirt support.

The code aims for minimal impact on the native case.

On the lock side we add one jump label (asm_goto) and 4 paravirt
callee saved calls that default to NOPs. The only effects are the
extra NOPs and some pointless MOVs to accommodate the calling
convention. No register spills happen because of this (x86_64).

On the unlock side we have one paravirt callee saved call, which
defaults to the actual unlock sequence: "movb $0, (%rdi)" and a NOP.

The actual paravirt code comes in 3 parts;

 - init_node; this initializes the extra data members required for PV
   state. PV state data is kept 1 cacheline ahead of the regular data.

 - link_and_wait_node/kick_node; these are paired with the regular MCS
   queueing and are placed resp. before/after the paired MCS ops.

 - wait_head/queue_unlock; the interesting part here is finding the
   head node to kick.

Tracking the head is done in two parts: first, pv_wait_head() will
store its cpu number in whichever node is pointed to by the tail part
of the lock word; second, pv_link_and_wait_node() will propagate the
existing head from the old to the new tail node.
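
A single-threaded sketch of that head tracking (no atomics, names
simplified from the patch): the queue head publishes its cpu in
whatever node the lock's tail currently points to, and every newly
linked tail inherits whatever its predecessor had recorded, so the
eventual unlocker only has to look at the tail node to know whom to
kick.

  #include <stdio.h>

  #define INVALID_HEAD    -1
  #define NO_HEAD         1024            /* nr_cpu_ids stand-in */

  struct pv_node {
          struct pv_node *next;
          int cpu, head;
  };

  /* part 2: a new tail inherits the head recorded by its predecessor */
  static void link_and_wait_node(struct pv_node *prev, struct pv_node *node)
  {
          if (!prev) {                    /* empty queue: we are the head */
                  node->head = NO_HEAD;
                  return;
          }
          prev->next = node;
          while (prev->head == INVALID_HEAD)
                  ;                       /* cpu_relax() in the kernel */
          node->head = prev->head;
  }

  /* part 1: the spinning head publishes its cpu in the current tail node */
  static void wait_head_publish(struct pv_node *tail, int my_cpu)
  {
          tail->head = my_cpu;
  }

  int main(void)
  {
          struct pv_node a = { .cpu = 0, .head = INVALID_HEAD };
          struct pv_node b = { .cpu = 1, .head = INVALID_HEAD };

          link_and_wait_node(NULL, &a);   /* a heads the queue */
          wait_head_publish(&a, a.cpu);   /* tail is still a at this point */
          link_and_wait_node(&a, &b);     /* b inherits a's record */
          printf("unlock would kick cpu %d\n", b.head);
          return 0;
  }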

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 arch/x86/include/asm/paravirt.h       |   39 +++++++
 arch/x86/include/asm/paravirt_types.h |   15 ++
 arch/x86/include/asm/qspinlock.h      |   25 ++++
 arch/x86/kernel/paravirt-spinlocks.c  |   22 ++++
 arch/x86/kernel/paravirt_patch_32.c   |    7 +
 arch/x86/kernel/paravirt_patch_64.c   |    7 +
 include/asm-generic/qspinlock.h       |   11 ++
 kernel/locking/qspinlock.c            |  179 +++++++++++++++++++++++++++++++++-
 8 files changed, 302 insertions(+), 3 deletions(-)

Index: linux-2.6/arch/x86/include/asm/paravirt.h
===================================================================
--- linux-2.6.orig/arch/x86/include/asm/paravirt.h
+++ linux-2.6/arch/x86/include/asm/paravirt.h
@@ -712,6 +712,44 @@ static inline void __set_fixmap(unsigned
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+
+static __always_inline void pv_init_node(struct mcs_spinlock *node)
+{
+	PVOP_VCALLEE1(pv_lock_ops.init_node, node);
+}
+
+static __always_inline void pv_link_and_wait_node(u32 old, struct mcs_spinlock *node)
+{
+	PVOP_VCALLEE2(pv_lock_ops.link_and_wait_node, old, node);
+}
+
+static __always_inline void pv_kick_node(struct mcs_spinlock *node)
+{
+	PVOP_VCALLEE1(pv_lock_ops.kick_node, node);
+}
+
+static __always_inline void pv_wait_head(struct qspinlock *lock)
+{
+	PVOP_VCALLEE1(pv_lock_ops.wait_head, lock);
+}
+
+static __always_inline void pv_queue_unlock(struct qspinlock *lock)
+{
+	PVOP_VCALLEE1(pv_lock_ops.queue_unlock, lock);
+}
+
+static __always_inline void pv_wait(int *ptr, int val)
+{
+	PVOP_VCALL2(pv_lock_ops.wait, ptr, val);
+}
+
+static __always_inline void pv_kick(int cpu)
+{
+	PVOP_VCALL1(pv_lock_ops.kick, cpu);
+}
+
+#else
 static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
 							__ticket_t ticket)
 {
@@ -723,6 +761,7 @@ static __always_inline void __ticket_unl
 {
 	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
+#endif
 
 #endif
 
Index: linux-2.6/arch/x86/include/asm/paravirt_types.h
===================================================================
--- linux-2.6.orig/arch/x86/include/asm/paravirt_types.h
+++ linux-2.6/arch/x86/include/asm/paravirt_types.h
@@ -326,6 +326,9 @@ struct pv_mmu_ops {
 			   phys_addr_t phys, pgprot_t flags);
 };
 
+struct mcs_spinlock;
+struct qspinlock;
+
 struct arch_spinlock;
 #ifdef CONFIG_SMP
 #include <asm/spinlock_types.h>
@@ -334,8 +337,20 @@ typedef u16 __ticket_t;
 #endif
 
 struct pv_lock_ops {
+#ifdef CONFIG_QUEUE_SPINLOCK
+	struct paravirt_callee_save init_node;
+	struct paravirt_callee_save link_and_wait_node;
+	struct paravirt_callee_save kick_node;
+
+	struct paravirt_callee_save wait_head;
+	struct paravirt_callee_save queue_unlock;
+
+	void (*wait)(int *ptr, int val);
+	void (*kick)(int cpu);
+#else
 	struct paravirt_callee_save lock_spinning;
 	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
+#endif
 };
 
 /* This contains all the paravirt structures: we get a convenient
Index: linux-2.6/arch/x86/include/asm/qspinlock.h
===================================================================
--- linux-2.6.orig/arch/x86/include/asm/qspinlock.h
+++ linux-2.6/arch/x86/include/asm/qspinlock.h
@@ -3,24 +3,45 @@
 
 #include <asm/cpufeature.h>
 #include <asm-generic/qspinlock_types.h>
+#include <asm/paravirt.h>
 
 #if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)
 
-#define	queue_spin_unlock queue_spin_unlock
 /**
  * queue_spin_unlock - release a queue spinlock
  * @lock : Pointer to queue spinlock structure
  *
  * An effective smp_store_release() on the least-significant byte.
  */
-static inline void queue_spin_unlock(struct qspinlock *lock)
+static inline void native_queue_unlock(struct qspinlock *lock)
 {
 	barrier();
 	ACCESS_ONCE(*(u8 *)lock) = 0;
 }
 
+#else
+
+static inline void native_queue_unlock(struct qspinlock *lock)
+{
+	atomic_dec(&lock->val);
+}
+
 #endif /* !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE */
 
+#define	queue_spin_unlock queue_spin_unlock
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+static inline void queue_spin_unlock(struct qspinlock *lock)
+{
+	pv_queue_unlock(lock);
+}
+#else
+static inline void queue_spin_unlock(struct qspinlock *lock)
+{
+	native_queue_unlock(lock);
+}
+#endif
+
 #define virt_queue_spin_lock virt_queue_spin_lock
 
 static inline bool virt_queue_spin_lock(struct qspinlock *lock)
Index: linux-2.6/arch/x86/kernel/paravirt-spinlocks.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/paravirt-spinlocks.c
+++ linux-2.6/arch/x86/kernel/paravirt-spinlocks.c
@@ -8,11 +8,33 @@
 
 #include <asm/paravirt.h>
 
+#ifdef CONFIG_SMP
+#ifdef CONFIG_QUEUE_SPINLOCK
+void __native_queue_unlock(struct qspinlock *lock)
+{
+	native_queue_unlock(lock);
+}
+PV_CALLEE_SAVE_REGS_THUNK(__native_queue_unlock);
+#endif
+#endif
+
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
+#ifdef CONFIG_QUEUE_SPINLOCK
+	.init_node = __PV_IS_CALLEE_SAVE(paravirt_nop),
+	.link_and_wait_node = __PV_IS_CALLEE_SAVE(paravirt_nop),
+	.kick_node = __PV_IS_CALLEE_SAVE(paravirt_nop),
+
+	.wait_head = __PV_IS_CALLEE_SAVE(paravirt_nop),
+	.queue_unlock = PV_CALLEE_SAVE(__native_queue_unlock),
+
+	.wait = paravirt_nop,
+	.kick = paravirt_nop,
+#else
 	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
 	.unlock_kick = paravirt_nop,
 #endif
+#endif
 };
 EXPORT_SYMBOL(pv_lock_ops);
 
Index: linux-2.6/arch/x86/kernel/paravirt_patch_32.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/paravirt_patch_32.c
+++ linux-2.6/arch/x86/kernel/paravirt_patch_32.c
@@ -12,6 +12,10 @@ DEF_NATIVE(pv_mmu_ops, read_cr3, "mov %c
 DEF_NATIVE(pv_cpu_ops, clts, "clts");
 DEF_NATIVE(pv_cpu_ops, read_tsc, "rdtsc");
 
+#if defined(CONFIG_PARAVIRT_SPINLOCKS) && defined(CONFIG_QUEUE_SPINLOCK)
+DEF_NATIVE(pv_lock_ops, queue_unlock, "movb $0, (%eax)");
+#endif
+
 unsigned paravirt_patch_ident_32(void *insnbuf, unsigned len)
 {
 	/* arg in %eax, return in %eax */
@@ -47,6 +51,9 @@ unsigned native_patch(u8 type, u16 clobb
 		PATCH_SITE(pv_mmu_ops, write_cr3);
 		PATCH_SITE(pv_cpu_ops, clts);
 		PATCH_SITE(pv_cpu_ops, read_tsc);
+#if defined(CONFIG_PARAVIRT_SPINLOCKS) && defined(CONFIG_QUEUE_SPINLOCK)
+		PATCH_SITE(pv_lock_ops, queue_unlock);
+#endif
 
 	patch_site:
 		ret = paravirt_patch_insns(ibuf, len, start, end);
Index: linux-2.6/arch/x86/kernel/paravirt_patch_64.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/paravirt_patch_64.c
+++ linux-2.6/arch/x86/kernel/paravirt_patch_64.c
@@ -22,6 +22,10 @@ DEF_NATIVE(pv_cpu_ops, swapgs, "swapgs")
 DEF_NATIVE(, mov32, "mov %edi, %eax");
 DEF_NATIVE(, mov64, "mov %rdi, %rax");
 
+#if defined(CONFIG_PARAVIRT_SPINLOCKS) && defined(CONFIG_QUEUE_SPINLOCK)
+DEF_NATIVE(pv_lock_ops, queue_unlock, "movb $0, (%rdi)");
+#endif
+
 unsigned paravirt_patch_ident_32(void *insnbuf, unsigned len)
 {
 	return paravirt_patch_insns(insnbuf, len,
@@ -61,6 +65,9 @@ unsigned native_patch(u8 type, u16 clobb
 		PATCH_SITE(pv_cpu_ops, clts);
 		PATCH_SITE(pv_mmu_ops, flush_tlb_single);
 		PATCH_SITE(pv_cpu_ops, wbinvd);
+#if defined(CONFIG_PARAVIRT_SPINLOCKS) && defined(CONFIG_QUEUE_SPINLOCK)
+		PATCH_SITE(pv_lock_ops, queue_unlock);
+#endif
 
 	patch_site:
 		ret = paravirt_patch_insns(ibuf, len, start, end);
Index: linux-2.6/include/asm-generic/qspinlock.h
===================================================================
--- linux-2.6.orig/include/asm-generic/qspinlock.h
+++ linux-2.6/include/asm-generic/qspinlock.h
@@ -105,6 +105,17 @@ static __always_inline bool virt_queue_s
 }
 #endif
 
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+struct mcs_spinlock;
+
+extern void __pv_init_node(struct mcs_spinlock *node);
+extern void __pv_link_and_wait_node(u32 old, struct mcs_spinlock *node);
+extern void __pv_kick_node(struct mcs_spinlock *node);
+
+extern void __pv_wait_head(struct qspinlock *lock);
+extern void __pv_queue_unlock(struct qspinlock *lock);
+#endif
+
 /*
  * Initializier
  */
Index: linux-2.6/kernel/locking/qspinlock.c
===================================================================
--- linux-2.6.orig/kernel/locking/qspinlock.c
+++ linux-2.6/kernel/locking/qspinlock.c
@@ -56,13 +56,33 @@
 
 #include "mcs_spinlock.h"
 
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+
+#define MAX_NODES	8
+
+static inline bool pv_enabled(void)
+{
+	return static_key_false(&paravirt_spinlocks_enabled);
+}
+#else /* !PARAVIRT_SPINLOCKS */
+
+#define MAX_NODES	4
+
+static inline bool pv_enabled(void)
+{
+	return false;
+}
+#endif /* PARAVIRT_SPINLOCKS */
+
 /*
  * Per-CPU queue node structures; we can never have more than 4 nested
  * contexts: task, softirq, hardirq, nmi.
  *
  * Exactly fits one cacheline.
+ *
+ * PV doubles the storage and uses the second cacheline for PV state.
  */
-static DEFINE_PER_CPU_ALIGNED(struct mcs_spinlock, mcs_nodes[4]);
+static DEFINE_PER_CPU_ALIGNED(struct mcs_spinlock, mcs_nodes[MAX_NODES]);
 
 /*
  * We must be able to distinguish between no-tail and the tail at 0:0,
@@ -218,6 +238,156 @@ static __always_inline void set_locked(s
 	ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
 }
 
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+
+/*
+ * Write a comment about how all this works...
+ */
+
+#define _Q_LOCKED_SLOW	(2U << _Q_LOCKED_OFFSET)
+
+struct pv_node {
+	struct mcs_spinlock	mcs;
+	struct mcs_spinlock	__offset[3];
+	int cpu, head;
+};
+
+#define INVALID_HEAD	-1
+#define NO_HEAD		nr_cpu_ids
+
+void __pv_init_node(struct mcs_spinlock *node)
+{
+	struct pv_node *pn = (struct pv_node *)node;
+
+	BUILD_BUG_ON(sizeof(struct pv_node) > 5*sizeof(struct mcs_spinlock));
+
+	pn->cpu = smp_processor_id();
+	pn->head = INVALID_HEAD;
+}
+
+static inline struct pv_node *pv_decode_tail(u32 tail)
+{
+	return (struct pv_node *)decode_tail(tail);
+}
+
+void __pv_link_and_wait_node(u32 old, struct mcs_spinlock *node)
+{
+	struct pv_node *ppn, *pn = (struct pv_node *)node;
+	unsigned int count;
+
+	if (!(old & _Q_TAIL_MASK)) {
+		pn->head = NO_HEAD;
+		return;
+	}
+
+	ppn = pv_decode_tail(old);
+	ACCESS_ONCE(ppn->mcs.next) = node;
+
+	while (ppn->head == INVALID_HEAD)
+		cpu_relax();
+
+	pn->head = ppn->head;
+
+	for (;;) {
+		count = SPIN_THRESHOLD;
+
+		do {
+			if (smp_load_acquire(&node->locked))
+				return;
+
+			cpu_relax();
+		} while (--count);
+
+		pv_wait(&node->locked, 1);
+	}
+}
+
+void __pv_kick_node(struct mcs_spinlock *node)
+{
+	struct pv_node *pn = (struct pv_node *)node;
+
+	pv_kick(pn->cpu);
+}
+
+void __pv_wait_head(struct qspinlock *lock)
+{
+	unsigned int count;
+	struct pv_node *pn;
+	int val, old, new;
+
+	for (;;) {
+		count = SPIN_THRESHOLD;
+
+		do {
+			val = smp_load_acquire(&lock->val.counter);
+			if (!(val & _Q_LOCKED_PENDING_MASK))
+				return;
+		} while (--count);
+
+		do {
+			pn = pv_decode_tail(atomic_read(&lock->val));
+
+			while (pn->head == INVALID_HEAD)
+				cpu_relax();
+
+			pn->head = smp_processor_id();
+
+		} while (pn != pv_decode_tail(atomic_read(&lock->val)));
+
+		/*
+		 * Set _Q_LOCKED_SLOW; bail when the lock is free.
+		 */
+		val = atomic_read(&lock->val);
+		for (;;) {
+			if (!(val & _Q_LOCKED_PENDING_MASK))
+				return;
+			new = val | _Q_LOCKED_SLOW;
+			old = atomic_cmpxchg(&lock->val, val, new);
+			if (old == val)
+				break;
+			val = old;
+		}
+
+		/* XXX 16bit would be better */
+		pv_wait(&lock->val.counter, new);
+	}
+}
+
+static void ___pv_kick_head(struct qspinlock *lock)
+{
+	struct pv_node *pn;
+
+	pn = pv_decode_tail(atomic_read(&lock->val));
+
+	while (pn->head == INVALID_HEAD)
+		cpu_relax();
+
+	if (WARN_ON_ONCE(pn->head == NO_HEAD))
+		return;
+
+	pv_kick(pn->head);
+}
+
+void __pv_queue_unlock(struct qspinlock *lock)
+{
+	int val = atomic_read(&lock->val);
+
+	native_queue_unlock(lock);
+
+	if (val & _Q_LOCKED_SLOW)
+		___pv_kick_head(lock);
+}
+
+#else
+
+static inline void pv_init_node(struct mcs_spinlock *node) { }
+static inline void pv_link_and_wait_node(u32 old, struct mcs_spinlock *node) { }
+static inline void pv_kick_node(struct mcs_spinlock *node) { }
+
+static inline void pv_wait_head(struct qspinlock *lock) { }
+
+#endif
+
 /**
  * queue_spin_lock_slowpath - acquire the queue spinlock
  * @lock: Pointer to queue spinlock structure
@@ -247,6 +417,9 @@ void queue_spin_lock_slowpath(struct qsp
 
 	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
 
+	if (pv_enabled())
+		goto queue;
+
 	if (virt_queue_spin_lock(lock))
 		return;
 
@@ -323,6 +496,7 @@ void queue_spin_lock_slowpath(struct qsp
 	node += idx;
 	node->locked = 0;
 	node->next = NULL;
+	pv_init_node(node);
 
 	/*
 	 * We touched a (possibly) cold cacheline in the per-cpu queue node;
@@ -343,6 +517,7 @@ void queue_spin_lock_slowpath(struct qsp
 	/*
 	 * if there was a previous node; link it and wait.
 	 */
+	pv_link_and_wait_node(old, node);
 	if (old & _Q_TAIL_MASK) {
 		prev = decode_tail(old);
 		ACCESS_ONCE(prev->next) = node;
@@ -358,6 +533,7 @@ void queue_spin_lock_slowpath(struct qsp
 	 *
 	 * *,x,y -> *,0,0
 	 */
+	pv_wait_head(lock);
 	while ((val = smp_load_acquire(&lock->val.counter)) &
 			_Q_LOCKED_PENDING_MASK)
 		cpu_relax();
@@ -391,6 +567,7 @@ void queue_spin_lock_slowpath(struct qsp
 		cpu_relax();
 
 	arch_mcs_spin_unlock_contended(&next->locked);
+	pv_kick_node(next);
 
 release:
 	/*



^ permalink raw reply	[flat|nested] 192+ messages in thread

* [PATCH 11/11] qspinlock, kvm: Add paravirt support
  2014-06-15 12:46 ` Peter Zijlstra
                   ` (24 preceding siblings ...)
  (?)
@ 2014-06-15 12:47 ` Peter Zijlstra
  2014-06-22 16:36     ` Raghavendra K T
  2014-06-22 16:36   ` Raghavendra K T
  -1 siblings, 2 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-06-15 12:47 UTC (permalink / raw)
  To: Waiman.Long, tglx, mingo
  Cc: linux-arch, linux-kernel, virtualization, xen-devel, kvm,
	paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod, Peter Zijlstra

[-- Attachment #1: peterz-qspinlock-kvm.patch --]
[-- Type: text/plain, Size: 2921 bytes --]



Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 arch/x86/kernel/kvm.c |   58 ++++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/Kconfig.locks  |    2 -
 2 files changed, 59 insertions(+), 1 deletion(-)

Index: linux-2.6/arch/x86/kernel/kvm.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/kvm.c
+++ linux-2.6/arch/x86/kernel/kvm.c
@@ -569,6 +569,7 @@ static void kvm_kick_cpu(int cpu)
 	kvm_hypercall2(KVM_HC_KICK_CPU, flags, apicid);
 }
 
+#ifndef CONFIG_QUEUE_SPINLOCK
 enum kvm_contention_stat {
 	TAKEN_SLOW,
 	TAKEN_SLOW_PICKUP,
@@ -796,6 +797,51 @@ static void kvm_unlock_kick(struct arch_
 		}
 	}
 }
+#else /* QUEUE_SPINLOCK */
+
+#include <asm-generic/qspinlock.h>
+
+PV_CALLEE_SAVE_REGS_THUNK(__pv_init_node);
+PV_CALLEE_SAVE_REGS_THUNK(__pv_link_and_wait_node);
+PV_CALLEE_SAVE_REGS_THUNK(__pv_kick_node);
+
+PV_CALLEE_SAVE_REGS_THUNK(__pv_wait_head);
+PV_CALLEE_SAVE_REGS_THUNK(__pv_queue_unlock);
+
+void kvm_wait(int *ptr, int val)
+{
+	unsigned long flags;
+
+	if (in_nmi())
+		return;
+
+	/*
+	 * Make sure an interrupt handler can't upset things in a
+	 * partially setup state.
+	 */
+	local_irq_save(flags);
+
+	/*
+	 * check again make sure it didn't become free while
+	 * we weren't looking.
+	 */
+	if (ACCESS_ONCE(*ptr) != val)
+		goto out;
+
+	/*
+	 * halt until it's our turn and kicked. Note that we do safe halt
+	 * for irq enabled case to avoid hang when lock info is overwritten
+	 * in irq spinlock slowpath and no spurious interrupt occur to save us.
+	 */
+	if (arch_irqs_disabled_flags(flags))
+		halt();
+	else
+		safe_halt();
+
+out:
+	local_irq_restore(flags);
+}
+#endif /* QUEUE_SPINLOCK */
 
 /*
  * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
@@ -808,8 +854,20 @@ void __init kvm_spinlock_init(void)
 	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
 		return;
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+	pv_lock_ops.init_node = PV_CALLEE_SAVE(__pv_init_node);
+	pv_lock_ops.link_and_wait_node = PV_CALLEE_SAVE(__pv_link_and_wait_node);
+	pv_lock_ops.kick_node = PV_CALLEE_SAVE(__pv_kick_node);
+
+	pv_lock_ops.wait_head = PV_CALLEE_SAVE(__pv_wait_head);
+	pv_lock_ops.queue_unlock = PV_CALLEE_SAVE(__pv_queue_unlock);
+
+	pv_lock_ops.wait = kvm_wait;
+	pv_lock_ops.kick = kvm_kick_cpu;
+#else
 	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
 	pv_lock_ops.unlock_kick = kvm_unlock_kick;
+#endif
 }
 
 static __init int kvm_spinlock_init_jump(void)
Index: linux-2.6/kernel/Kconfig.locks
===================================================================
--- linux-2.6.orig/kernel/Kconfig.locks
+++ linux-2.6/kernel/Kconfig.locks
@@ -229,7 +229,7 @@ config ARCH_USE_QUEUE_SPINLOCK
 
 config QUEUE_SPINLOCK
 	def_bool y if ARCH_USE_QUEUE_SPINLOCK
-	depends on SMP && !PARAVIRT_SPINLOCKS
+	depends on SMP && !(PARAVIRT_SPINLOCKS && XEN)
 
 config ARCH_USE_QUEUE_RWLOCK
 	bool



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
  2014-06-15 12:46   ` Peter Zijlstra
@ 2014-06-16 20:49     ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-16 20:49 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

On Sun, Jun 15, 2014 at 02:46:58PM +0200, Peter Zijlstra wrote:
> From: Waiman Long <Waiman.Long@hp.com>
> 
> This patch introduces a new generic queue spinlock implementation that
> can serve as an alternative to the default ticket spinlock. Compared
> with the ticket spinlock, this queue spinlock should be almost as fair
> as the ticket spinlock. It has about the same speed in single-thread
> and it can be much faster in high contention situations especially when
> the spinlock is embedded within the data structure to be protected.
> 
> Only in light to moderate contention where the average queue depth
> is around 1-3 will this queue spinlock be potentially a bit slower
> due to the higher slowpath overhead.
> 
> This queue spinlock is especially suit to NUMA machines with a large
> number of cores as the chance of spinlock contention is much higher
> in those machines. The cost of contention is also higher because of
> slower inter-node memory traffic.
> 
> Due to the fact that spinlocks are acquired with preemption disabled,
> the process will not be migrated to another CPU while it is trying
> to get a spinlock. Ignoring interrupt handling, a CPU can only be
> contending in one spinlock at any one time. Counting soft IRQ, hard
> IRQ and NMI, a CPU can only have a maximum of 4 concurrent lock waiting
> activities.  By allocating a set of per-cpu queue nodes and used them
> to form a waiting queue, we can encode the queue node address into a
> much smaller 24-bit size (including CPU number and queue node index)
> leaving one byte for the lock.
> 
> Please note that the queue node is only needed when waiting for the
> lock. Once the lock is acquired, the queue node can be released to
> be used later.
> 
> Signed-off-by: Waiman Long <Waiman.Long@hp.com>
> Signed-off-by: Peter Zijlstra <peterz@infradead.org>

Thank you for the repost. I have some questions about the implementation
that I hope will be easy to answer; it would also be good if the answers
could be added as comments in the code to enlighten other folks.

See below.
.. snip..

> Index: linux-2.6/kernel/locking/mcs_spinlock.h
> ===================================================================
> --- linux-2.6.orig/kernel/locking/mcs_spinlock.h
> +++ linux-2.6/kernel/locking/mcs_spinlock.h
> @@ -17,6 +17,7 @@
>  struct mcs_spinlock {
>  	struct mcs_spinlock *next;
>  	int locked; /* 1 if lock acquired */
> +	int count;

This could use a comment.

>  };
>  
>  #ifndef arch_mcs_spin_lock_contended
> Index: linux-2.6/kernel/locking/qspinlock.c
> ===================================================================
> --- /dev/null
> +++ linux-2.6/kernel/locking/qspinlock.c
> @@ -0,0 +1,197 @@
> +/*
> + * Queue spinlock
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
> + *
> + * Authors: Waiman Long <waiman.long@hp.com>
> + *          Peter Zijlstra <pzijlstr@redhat.com>
> + */
> +#include <linux/smp.h>
> +#include <linux/bug.h>
> +#include <linux/cpumask.h>
> +#include <linux/percpu.h>
> +#include <linux/hardirq.h>
> +#include <linux/mutex.h>
> +#include <asm/qspinlock.h>
> +
> +/*
> + * The basic principle of a queue-based spinlock can best be understood
> + * by studying a classic queue-based spinlock implementation called the
> + * MCS lock. The paper below provides a good description for this kind
> + * of lock.
> + *
> + * http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf
> + *
> + * This queue spinlock implementation is based on the MCS lock, however to make
> + * it fit the 4 bytes we assume spinlock_t to be, and preserve its existing
> + * API, we must modify it some.
> + *
> + * In particular; where the traditional MCS lock consists of a tail pointer
> + * (8 bytes) and needs the next pointer (another 8 bytes) of its own node to
> + * unlock the next pending (next->locked), we compress both these: {tail,
> + * next->locked} into a single u32 value.
> + *
> + * Since a spinlock disables recursion of its own context and there is a limit
> + * to the contexts that can nest; namely: task, softirq, hardirq, nmi, we can
> + * encode the tail as and index indicating this context and a cpu number.
> + *
> + * We can further change the first spinner to spin on a bit in the lock word
> + * instead of its node; whereby avoiding the need to carry a node from lock to
> + * unlock, and preserving API.
> + */
> +
> +#include "mcs_spinlock.h"
> +
> +/*
> + * Per-CPU queue node structures; we can never have more than 4 nested
> + * contexts: task, softirq, hardirq, nmi.
> + *
> + * Exactly fits one cacheline.
> + */
> +static DEFINE_PER_CPU_ALIGNED(struct mcs_spinlock, mcs_nodes[4]);
> +
> +/*
> + * We must be able to distinguish between no-tail and the tail at 0:0,
> + * therefore increment the cpu number by one.
> + */
> +
> +static inline u32 encode_tail(int cpu, int idx)
> +{
> +	u32 tail;
> +
> +	tail  = (cpu + 1) << _Q_TAIL_CPU_OFFSET;
> +	tail |= idx << _Q_TAIL_IDX_OFFSET; /* assume < 4 */

Should there be an

ASSERT (idx < 4)

just in case we screw up somehow? (I can't figure out how, but
that is partially why ASSERTs are added.)
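
Something like this is what I have in mind - just a sketch on top of
the quoted code, and a WARN_ON_ONCE() instead of the BUG_ON() would be
fine too:

	static inline u32 encode_tail(int cpu, int idx)
	{
		u32 tail;

		/* idx indexes mcs_nodes[4], so it must stay below 4 */
		BUG_ON(idx >= 4);

		tail  = (cpu + 1) << _Q_TAIL_CPU_OFFSET;
		tail |= idx << _Q_TAIL_IDX_OFFSET;

		return tail;
	}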

> +
> +	return tail;
> +}
> +
> +static inline struct mcs_spinlock *decode_tail(u32 tail)
> +{
> +	int cpu = (tail >> _Q_TAIL_CPU_OFFSET) - 1;
> +	int idx = (tail &  _Q_TAIL_IDX_MASK) >> _Q_TAIL_IDX_OFFSET;
> +
> +	return per_cpu_ptr(&mcs_nodes[idx], cpu);
> +}
> +
> +/**
> + * queue_spin_lock_slowpath - acquire the queue spinlock
> + * @lock: Pointer to queue spinlock structure
> + * @val: Current value of the queue spinlock 32-bit word
> + *
> + * (queue tail, lock bit)

Except it is not a lock bit. It is a lock uint8_t.

Is the queue tail at this point the composite of 'cpu|idx'?
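
I.e., just as a sketch with concrete numbers plugged into the
encode_tail() quoted above - for, say, CPU 2 in task context would
that be:

	tail = ((2 + 1) << _Q_TAIL_CPU_OFFSET) | (0 << _Q_TAIL_IDX_OFFSET);

with the +1 there so that an empty tail can be told apart from CPU 0?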

> + *
> + *              fast      :    slow                                  :    unlock
> + *                        :                                          :
> + * uncontended  (0,0)   --:--> (0,1) --------------------------------:--> (*,0)
> + *                        :       | ^--------.                    /  :
> + *                        :       v           \                   |  :
> + * uncontended            :    (n,x) --+--> (n,0)                 |  :

So many CPUs come in, right? Is 'n' for the number of CPUs?


> + *   queue                :       | ^--'                          |  :
> + *                        :       v                               |  :
> + * contended              :    (*,x) --+--> (*,0) -----> (*,1) ---'  :
> + *   queue                :         ^--'                             :

And here, um, what are the '*' for? Are they the four different
types of handlers that can be nested? So task, softirq, hardirq, and
nmi?

> + *
> + */
> +void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> +{
> +	struct mcs_spinlock *prev, *next, *node;
> +	u32 new, old, tail;
> +	int idx;
> +
> +	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
> +
> +	node = this_cpu_ptr(&mcs_nodes[0]);
> +	idx = node->count++;

If this is the first time we enter this, wouldn't idx end up
being 1?

> +	tail = encode_tail(smp_processor_id(), idx);
> +
> +	node += idx;

Meaning we end up skipping the 'mcs_nodes[0]' one altogether - even
on the first 'level' (task, softirq, hardirq, nmi)? Won't that
cause us to blow past the array when we are nested at the nmi
handler?

> +	node->locked = 0;
> +	node->next = NULL;
> +
> +	/*
> +	 * trylock || xchg(lock, node)
> +	 *
> +	 * 0,0 -> 0,1 ; trylock
> +	 * p,x -> n,x ; prev = xchg(lock, node)

I looked at that for 10 seconds and I was not sure what you meant.
Is this related to the MCS document you had pointed to? It would help
if you mentioned that the comments follow the document. (But they
don't seem to.)

I presume what you mean is that if we are next after the
lock-holder we need only update the 'next' (or the
composite value of smp_processor_id() | idx) to point to us.

As in, swap the 'L' with 'I' (looking at the doc).

> +	 */
> +	for (;;) {
> +		new = _Q_LOCKED_VAL;
> +		if (val)

Could you add a comment here, like this:

/*
 * N.B. Initially 'val' will have some value (as we are called
 * after the _Q_LOCKED_VAL could not be set by queue_spin_lock).
 * But on subsequent iterations, either the lock holder will
 * decrement the val (queue_spin_unlock - to zero) and we
 * needn't record our status in the queue as we have set the
 * _Q_LOCKED_VAL (new) and are the lock holder. Or we are next
 * in line and need to record our 'next' (aka, smp_processor_id() | idx)
 * position.
 */

> +			new = tail | (val & _Q_LOCKED_MASK);
> +
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == val)
> +			break;
> +
> +		val = old;
> +	}
> +
> +	/*
> +	 * we won the trylock; forget about queueing.
> +	 */
> +	if (new == _Q_LOCKED_VAL)
> +		goto release;
> +
> +	/*
> +	 * if there was a previous node; link it and wait.
> +	 */
> +	if (old & ~_Q_LOCKED_MASK) {
> +		prev = decode_tail(old);
> +		ACCESS_ONCE(prev->next) = node;
> +
> +		arch_mcs_spin_lock_contended(&node->locked);
> +	}
> +
> +	/*
> +	 * we're at the head of the waitqueue, wait for the owner to go away.
> +	 *
> +	 * *,x -> *,0
> +	 */
> +	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
> +		cpu_relax();
> +
> +	/*
> +	 * claim the lock:
> +	 *
> +	 * n,0 -> 0,1 : lock, uncontended
> +	 * *,0 -> *,1 : lock, contended
> +	 */
> +	for (;;) {
> +		new = _Q_LOCKED_VAL;
> +		if (val != tail)
> +			new |= val;

You lost me here. If we are at the head of the queue, and the owner
has called queue_spin_unlock (hence made us get out of the 'val = atomic_read'
loop), how can val != tail?

I suspect it has something to do with the comment, but I am still unsure
what it means.

Could you help a bit by explaining it in English, please?

> +
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == val)
> +			break;
> +
> +		val = old;
> +	}
> +
> +	/*
> +	 * contended path; wait for next, release.
> +	 */
> +	if (new != _Q_LOCKED_VAL) {

Hm, wouldn't it be just easier to do a 'goto restart' where the
restart label points at the first loop statement? Ah, never
mind - we have already inserted ourselves in the previous
node.

But that is confusing - we have done: "prev->next = node;"

And then exited out of the 'val = atomic_read(&lock->val)' loop, which
suggests that queue_spin_unlock has been called. How can we be
contended again?


Thanks!
> +		while (!(next = ACCESS_ONCE(node->next)))
> +			cpu_relax();
> +
> +		arch_mcs_spin_unlock_contended(&next->locked);
> +	}
> +
> +release:
> +	/*
> +	 * release the node
> +	 */
> +	this_cpu_dec(mcs_nodes[0].count);
> +}
> +EXPORT_SYMBOL(queue_spin_lock_slowpath);
> 
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 00/11] qspinlock with paravirt support
  2014-06-15 12:46 ` Peter Zijlstra
@ 2014-06-16 20:52   ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-16 20:52 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

On Sun, Jun 15, 2014 at 02:46:57PM +0200, Peter Zijlstra wrote:
> Since Waiman seems incapable of doing simple things; here's my take on the
> paravirt crap.
> 
> The first few patches are taken from Waiman's latest series, but the virt
> support is completely new. Its primary aim is to not mess up the native code.

OK. I finally cleared some time to look over this and am reading the code
in detail to make sure I have it clear in mind. I will most likely ask
some questions that are naive - hopefully they will lead to the code being
self-explanatory for anybody else taking a stab at understanding it when
bugs appear.
> 
> I've not stress tested it, but the virt and paravirt (kvm) cases boot on simple
> smp guests. I've not done Xen, but the patch should be simple and similar.

Looking forward to seeing it. Glancing over the KVM one and comparing it
to the original version that Waiman posted, it should be fairly simple. Perhaps
even some of the code could be shared?

> 
> I ripped out all the unfair nonsense as its not at all required for paravirt
> and optimizations that make paravirt better at the cost of code clarity and/or
> native performance are just not worth it.
> 
> Also; if we were to ever add some of that unfair nonsense you do so _after_ you
> got the simple things working.
> 
> The thing I'm least sure about is the head tracking, I chose to do something
> different from what Waiman did, because his is O(nr_cpus) and had the
> assumption that guests have small nr_cpus. AFAIK this is not at all true. The
> biggest problem I have with what I did is that it contains wait loops itself.
> 
> 
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 08/11] qspinlock: Revert to test-and-set on hypervisors
  2014-06-15 12:47   ` Peter Zijlstra
  (?)
@ 2014-06-16 21:57   ` Waiman Long
  -1 siblings, 0 replies; 192+ messages in thread
From: Waiman Long @ 2014-06-16 21:57 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: tglx, mingo, linux-arch, linux-kernel, virtualization, xen-devel,
	kvm, paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod, Peter Zijlstra

On 06/15/2014 08:47 AM, Peter Zijlstra wrote:
> When we detect a hypervisor (!paravirt, see later patches), revert to
> a simple test-and-set lock to avoid the horrors of queue preemption.
>
> Signed-off-by: Peter Zijlstra<peterz@infradead.org>
> ---
>   arch/x86/include/asm/qspinlock.h |   14 ++++++++++++++
>   include/asm-generic/qspinlock.h  |    7 +++++++
>   kernel/locking/qspinlock.c       |    3 +++
>   3 files changed, 24 insertions(+)
>
> --- a/arch/x86/include/asm/qspinlock.h
> +++ b/arch/x86/include/asm/qspinlock.h
> @@ -1,6 +1,7 @@
>   #ifndef _ASM_X86_QSPINLOCK_H
>   #define _ASM_X86_QSPINLOCK_H
>
> +#include <asm/cpufeature.h>
>   #include <asm-generic/qspinlock_types.h>
>
>   #if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)
> @@ -20,6 +21,19 @@ static inline void queue_spin_unlock(str
>
>   #endif /* !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE */
>
> +#define virt_queue_spin_lock virt_queue_spin_lock
> +
> +static inline bool virt_queue_spin_lock(struct qspinlock *lock)
> +{
> +	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
> +		return false;
> +
> +	while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0)
> +		cpu_relax();
> +
> +	return true;
> +}
> +
>   #include <asm-generic/qspinlock.h>
>
>   #endif /* _ASM_X86_QSPINLOCK_H */
> --- a/include/asm-generic/qspinlock.h
> +++ b/include/asm-generic/qspinlock.h
> @@ -98,6 +98,13 @@ static __always_inline void queue_spin_u
>   }
>   #endif
>
> +#ifndef virt_queue_spin_lock
> +static __always_inline bool virt_queue_spin_lock(struct qspinlock *lock)
> +{
> +	return false;
> +}
> +#endif
> +
>   /*
>    * Initializier
>    */
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -247,6 +247,9 @@ void queue_spin_lock_slowpath(struct qsp
>
>   	BUILD_BUG_ON(CONFIG_NR_CPUS>= (1U<<  _Q_TAIL_CPU_BITS));
>
> +	if (virt_queue_spin_lock(lock))
> +		return;
> +
>   	/*
>   	 * wait for in-progress pending->locked hand-overs
>   	 *

I just wonder if it is better to allow the kernel distributors to decide 
whether the unfair lock should be the default for virtual guests. Anyway, I have 
no objection to that myself.
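For instance (purely a hypothetical sketch from my side, not something
in this series - the parameter name and flag are invented, and the
early_param() bit would of course live in a .c file rather than in the
header), the fallback could additionally be gated on a boot parameter
so a distribution or admin can switch it off:

	static bool virt_unfair_locks __read_mostly = true;

	static int __init no_virt_unfair_locks(char *unused)
	{
		/* "no_virt_unfair_locks" on the command line disables the
		 * test-and-set fallback and keeps the queued lock even on
		 * hypervisors.
		 */
		virt_unfair_locks = false;
		return 0;
	}
	early_param("no_virt_unfair_locks", no_virt_unfair_locks);

	static inline bool virt_queue_spin_lock(struct qspinlock *lock)
	{
		if (!virt_unfair_locks ||
		    !static_cpu_has(X86_FEATURE_HYPERVISOR))
			return false;

		while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0)
			cpu_relax();

		return true;
	}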

-Longman

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 10/11] qspinlock: Paravirt support
  2014-06-15 12:47   ` Peter Zijlstra
  (?)
@ 2014-06-16 22:08   ` Waiman Long
  2014-06-18 12:03       ` Paolo Bonzini
                       ` (3 more replies)
  -1 siblings, 4 replies; 192+ messages in thread
From: Waiman Long @ 2014-06-16 22:08 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-arch, riel, gleb, kvm, konrad.wilk, boris.ostrovsky,
	scott.norton, raghavendra.kt, paolo.bonzini, linux-kernel,
	virtualization, Peter Zijlstra, chegu_vinod, david.vrabel, oleg,
	xen-devel, tglx, paulmck, torvalds, mingo


On 06/15/2014 08:47 AM, Peter Zijlstra wrote:
>
>
>
> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
> +
> +/*
> + * Write a comment about how all this works...
> + */
> +
> +#define _Q_LOCKED_SLOW	(2U<<  _Q_LOCKED_OFFSET)
> +
> +struct pv_node {
> +	struct mcs_spinlock	mcs;
> +	struct mcs_spinlock	__offset[3];
> +	int cpu, head;
> +};

I am wondering why you need the separate cpu and head variables. I 
thought one would be enough here. The wait code puts the cpu number in 
head, and the kick_cpu code kicks the one in cpu, which is just the cpu # 
of the tail.

> +
> +#define INVALID_HEAD	-1
> +#define NO_HEAD		nr_cpu_ids
> +

I think it is better to use a constant like -2 for NO_HEAD instead of an 
external variable.
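i.e. something along these lines (just a sketch of what I mean):

	#define INVALID_HEAD	-1	/* head not yet known */
	#define NO_HEAD		-2	/* no head recorded */

Then neither sentinel depends on nr_cpu_ids.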

> +void __pv_init_node(struct mcs_spinlock *node)
> +{
> +	struct pv_node *pn = (struct pv_node *)node;
> +
> +	BUILD_BUG_ON(sizeof(struct pv_node)>  5*sizeof(struct mcs_spinlock));
> +
> +	pn->cpu = smp_processor_id();
> +	pn->head = INVALID_HEAD;
> +}
> +
> +static inline struct pv_node *pv_decode_tail(u32 tail)
> +{
> +	return (struct pv_node *)decode_tail(tail);
> +}
> +
> +void __pv_link_and_wait_node(u32 old, struct mcs_spinlock *node)
> +{
> +	struct pv_node *ppn, *pn = (struct pv_node *)node;
> +	unsigned int count;
> +
> +	if (!(old&  _Q_TAIL_MASK)) {
> +		pn->head = NO_HEAD;
> +		return;
> +	}
> +
> +	ppn = pv_decode_tail(old);
> +	ACCESS_ONCE(ppn->mcs.next) = node;
> +
> +	while (ppn->head == INVALID_HEAD)
> +		cpu_relax();
> +
> +	pn->head = ppn->head;

A race can happen here as pn->head can be changed to the head cpu by the 
head waiter while being changed by this function at the same time. It is 
safer to use cmpxchg to make sure that there is no accidental 
overwriting of the head CPU number.
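Something like the below is what I have in mind (sketch only - whether
it actually closes the window is exactly my question):

	int head;

	/* wait for our predecessor to learn who the head is */
	while ((head = ACCESS_ONCE(ppn->head)) == INVALID_HEAD)
		cpu_relax();

	/* never clobber a head CPU number the head waiter already wrote */
	(void)cmpxchg(&pn->head, INVALID_HEAD, head);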

> +
> +	for (;;) {
> +		count = SPIN_THRESHOLD;
> +
> +		do {
> +			if (smp_load_acquire(&node->locked))
> +				return;
> +
> +			cpu_relax();
> +		} while (--count);
> +
> +		pv_wait(&node->locked, 1);
> +	}
> +}
> +
> +void __pv_kick_node(struct mcs_spinlock *node)
> +{
> +	struct pv_node *pn = (struct pv_node *)node;
> +
> +	pv_kick(pn->cpu);
> +}
> +
> +void __pv_wait_head(struct qspinlock *lock)
> +{
> +	unsigned int count;
> +	struct pv_node *pn;
> +	int val, old, new;
> +
> +	for (;;) {
> +		count = SPIN_THRESHOLD;
> +
> +		do {
> +			val = smp_load_acquire(&lock->val.counter);
> +			if (!(val&  _Q_LOCKED_PENDING_MASK))
> +				return;
> +		} while (--count);
> +
> +		do {
> +			pn = pv_decode_tail(atomic_read(&lock->val));
> +
> +			while (pn->head == INVALID_HEAD)
> +				cpu_relax();
> +
> +			pn->head = smp_processor_id();
> +
> +		} while (pn != pv_decode_tail(atomic_read(&lock->val)));
> +
> +		/*
> +		 * Set _Q_LOCKED_SLOW; bail when the lock is free.
> +		 */
> +		val = atomic_read(&lock->val);
> +		for (;;) {
> +			if (!(val&  _Q_LOCKED_PENDING_MASK))
> +				return;
> +			new = val | _Q_LOCKED_SLOW;
> +			old = atomic_cmpxchg(&lock->val, val, new);
> +			if (old == val)
> +				break;
> +			val = old;
> +		}
> +
> +		/* XXX 16bit would be better */
> +		pv_wait(&lock->val.counter, new);
> +	}
> +}
> +
> +static void ___pv_kick_head(struct qspinlock *lock)
> +{
> +	struct pv_node *pn;
> +
> +	pn = pv_decode_tail(atomic_read(&lock->val));
> +
> +	while (pn->head == INVALID_HEAD)
> +		cpu_relax();
> +
> +	if (WARN_ON_ONCE(pn->head == NO_HEAD))
> +		return;
> +
> +	pv_kick(pn->head);
> +}
> +
> +void __pv_queue_unlock(struct qspinlock *lock)
> +{
> +	int val = atomic_read(&lock->val);
> +
> +	native_queue_unlock(lock);
> +
> +	if (val&  _Q_LOCKED_SLOW)
> +		___pv_kick_head(lock);
> +}
> +

Again a race can happen here between the reading and writing of the lock 
value. I can't think of a good way to do that without using cmpxchg.
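The only alternative shape I can think of is to fold the unlock into a
cmpxchg loop, e.g. the sketch below - but that gives up the cheap byte
store that the fast path relies on, so I am not sure it is a win:

	void __pv_queue_unlock(struct qspinlock *lock)
	{
		int old, val = atomic_read(&lock->val);

		/* atomically clear the locked byte while observing whether
		 * _Q_LOCKED_SLOW was set at the moment of release
		 */
		for (;;) {
			old = atomic_cmpxchg(&lock->val, val,
					     val & ~_Q_LOCKED_MASK);
			if (old == val)
				break;
			val = old;
		}

		if (val & _Q_LOCKED_SLOW)
			___pv_kick_head(lock);
	}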

> +#else
> +
> +static inline void pv_init_node(struct mcs_spinlock *node) { }
> +static inline void pv_link_and_wait_node(u32 old, struct mcs_spinlock *node) { }
> +static inline void pv_kick_node(struct mcs_spinlock *node) { }
> +
> +static inline void pv_wait_head(struct qspinlock *lock) { }
> +
> +#endif
> +
>   /**
>    * queue_spin_lock_slowpath - acquire the queue spinlock
>    * @lock: Pointer to queue spinlock structure
> @@ -247,6 +417,9 @@ void queue_spin_lock_slowpath(struct qsp
>
>   	BUILD_BUG_ON(CONFIG_NR_CPUS>= (1U<<  _Q_TAIL_CPU_BITS));
>
> +	if (pv_enabled())
> +		goto queue;
> +
>   	if (virt_queue_spin_lock(lock))
>   		return;
>
> @@ -323,6 +496,7 @@ void queue_spin_lock_slowpath(struct qsp
>   	node += idx;
>   	node->locked = 0;
>   	node->next = NULL;
> +	pv_init_node(node);
>
>   	/*
>   	 * We touched a (possibly) cold cacheline in the per-cpu queue node;
> @@ -343,6 +517,7 @@ void queue_spin_lock_slowpath(struct qsp
>   	/*
>   	 * if there was a previous node; link it and wait.
>   	 */
> +	pv_link_and_wait_node(old, node);
>   	if (old&  _Q_TAIL_MASK) {
>   		prev = decode_tail(old);
>   		ACCESS_ONCE(prev->next) = node;
> @@ -358,6 +533,7 @@ void queue_spin_lock_slowpath(struct qsp
>   	 *
>   	 * *,x,y ->  *,0,0
>   	 */
> +	pv_wait_head(lock);
>   	while ((val = smp_load_acquire(&lock->val.counter))&
>   			_Q_LOCKED_PENDING_MASK)
>   		cpu_relax();
> @@ -391,6 +567,7 @@ void queue_spin_lock_slowpath(struct qsp
>   		cpu_relax();
>
>   	arch_mcs_spin_unlock_contended(&next->locked);
> +	pv_kick_node(next);
>   

pv_kick_node is an expensive operation and it can significantly slow 
down the locking operation if we have to do it for every subsequent task 
in the queue.
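One way to bound that cost (a sketch only; 'halted' is a per-node flag
I am making up for illustration, it is not in the patch) would be to
issue the hypercall only when the next waiter actually went to sleep in
pv_wait():

	void __pv_kick_node(struct mcs_spinlock *node)
	{
		struct pv_node *pn = (struct pv_node *)node;

		/* pn->halted would be set by __pv_link_and_wait_node right
		 * before it calls pv_wait(), and cleared here
		 */
		if (xchg(&pn->halted, 0))
			pv_kick(pn->cpu);
	}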

-Longman


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 10/11] qspinlock: Paravirt support
  2014-06-15 12:47   ` Peter Zijlstra
@ 2014-06-17  0:53     ` Waiman Long
  -1 siblings, 0 replies; 192+ messages in thread
From: Waiman Long @ 2014-06-17  0:53 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: tglx, mingo, linux-arch, linux-kernel, virtualization, xen-devel,
	kvm, paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod, Peter Zijlstra

I am resending it as my original reply had some HTML code and hence 
was rejected by the mailing lists.


On 06/15/2014 08:47 AM, Peter Zijlstra wrote:
>
>
>
> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
> +
> +/*
> + * Write a comment about how all this works...
> + */
> +
> +#define _Q_LOCKED_SLOW	(2U<<  _Q_LOCKED_OFFSET)
> +
> +struct pv_node {
> +	struct mcs_spinlock	mcs;
> +	struct mcs_spinlock	__offset[3];
> +	int cpu, head;
> +};

I am wondering why you need the separate cpu and head variables. I 
thought one would be enough here. The wait code puts the cpu number in 
head, and the kick_cpu code kicks the one in cpu, which is just the cpu # 
of the tail.

> +
> +#define INVALID_HEAD	-1
> +#define NO_HEAD		nr_cpu_ids
> +

I think it is better to use a constant like -2 for NO_HEAD instead of an 
external variable.

> +void __pv_init_node(struct mcs_spinlock *node)
> +{
> +	struct pv_node *pn = (struct pv_node *)node;
> +
> +	BUILD_BUG_ON(sizeof(struct pv_node)>  5*sizeof(struct mcs_spinlock));
> +
> +	pn->cpu = smp_processor_id();
> +	pn->head = INVALID_HEAD;
> +}
> +
> +static inline struct pv_node *pv_decode_tail(u32 tail)
> +{
> +	return (struct pv_node *)decode_tail(tail);
> +}
> +
> +void __pv_link_and_wait_node(u32 old, struct mcs_spinlock *node)
> +{
> +	struct pv_node *ppn, *pn = (struct pv_node *)node;
> +	unsigned int count;
> +
> +	if (!(old&  _Q_TAIL_MASK)) {
> +		pn->head = NO_HEAD;
> +		return;
> +	}
> +
> +	ppn = pv_decode_tail(old);
> +	ACCESS_ONCE(ppn->mcs.next) = node;
> +
> +	while (ppn->head == INVALID_HEAD)
> +		cpu_relax();
> +
> +	pn->head = ppn->head;

A race can happen here as pn->head can be changed to the head cpu by the 
head waiter while being changed by this function at the same time. It is 
safer to use cmpxchg to make sure that there is no accidental 
overwriting of the head CPU number.

> +
> +	for (;;) {
> +		count = SPIN_THRESHOLD;
> +
> +		do {
> +			if (smp_load_acquire(&node->locked))
> +				return;
> +
> +			cpu_relax();
> +		} while (--count);
> +
> +		pv_wait(&node->locked, 1);
> +	}
> +}
> +
> +void __pv_kick_node(struct mcs_spinlock *node)
> +{
> +	struct pv_node *pn = (struct pv_node *)node;
> +
> +	pv_kick(pn->cpu);
> +}
> +
> +void __pv_wait_head(struct qspinlock *lock)
> +{
> +	unsigned int count;
> +	struct pv_node *pn;
> +	int val, old, new;
> +
> +	for (;;) {
> +		count = SPIN_THRESHOLD;
> +
> +		do {
> +			val = smp_load_acquire(&lock->val.counter);
> +			if (!(val&  _Q_LOCKED_PENDING_MASK))
> +				return;
> +		} while (--count);
> +
> +		do {
> +			pn = pv_decode_tail(atomic_read(&lock->val));
> +
> +			while (pn->head == INVALID_HEAD)
> +				cpu_relax();
> +
> +			pn->head = smp_processor_id();
> +
> +		} while (pn != pv_decode_tail(atomic_read(&lock->val)));
> +
> +		/*
> +		 * Set _Q_LOCKED_SLOW; bail when the lock is free.
> +		 */
> +		val = atomic_read(&lock->val);
> +		for (;;) {
> +			if (!(val&  _Q_LOCKED_PENDING_MASK))
> +				return;
> +			new = val | _Q_LOCKED_SLOW;
> +			old = atomic_cmpxchg(&lock->val, val, new);
> +			if (old == val)
> +				break;
> +			val = old;
> +		}
> +
> +		/* XXX 16bit would be better */
> +		pv_wait(&lock->val.counter, new);
> +	}
> +}
> +
> +static void ___pv_kick_head(struct qspinlock *lock)
> +{
> +	struct pv_node *pn;
> +
> +	pn = pv_decode_tail(atomic_read(&lock->val));
> +
> +	while (pn->head == INVALID_HEAD)
> +		cpu_relax();
> +
> +	if (WARN_ON_ONCE(pn->head == NO_HEAD))
> +		return;
> +
> +	pv_kick(pn->head);
> +}
> +
> +void __pv_queue_unlock(struct qspinlock *lock)
> +{
> +	int val = atomic_read(&lock->val);
> +
> +	native_queue_unlock(lock);
> +
> +	if (val&  _Q_LOCKED_SLOW)
> +		___pv_kick_head(lock);
> +}
> +

Again a race can happen here between the reading and writing of the lock 
value. I can't think of a good way to do that without using cmpxchg.

> +#else
> +
> +static inline void pv_init_node(struct mcs_spinlock *node) { }
> +static inline void pv_link_and_wait_node(u32 old, struct mcs_spinlock *node) { }
> +static inline void pv_kick_node(struct mcs_spinlock *node) { }
> +
> +static inline void pv_wait_head(struct qspinlock *lock) { }
> +
> +#endif
> +
>   /**
>    * queue_spin_lock_slowpath - acquire the queue spinlock
>    * @lock: Pointer to queue spinlock structure
> @@ -247,6 +417,9 @@ void queue_spin_lock_slowpath(struct qsp
>
>   	BUILD_BUG_ON(CONFIG_NR_CPUS>= (1U<<  _Q_TAIL_CPU_BITS));
>
> +	if (pv_enabled())
> +		goto queue;
> +
>   	if (virt_queue_spin_lock(lock))
>   		return;
>
> @@ -323,6 +496,7 @@ void queue_spin_lock_slowpath(struct qsp
>   	node += idx;
>   	node->locked = 0;
>   	node->next = NULL;
> +	pv_init_node(node);
>
>   	/*
>   	 * We touched a (possibly) cold cacheline in the per-cpu queue node;
> @@ -343,6 +517,7 @@ void queue_spin_lock_slowpath(struct qsp
>   	/*
>   	 * if there was a previous node; link it and wait.
>   	 */
> +	pv_link_and_wait_node(old, node);
>   	if (old&  _Q_TAIL_MASK) {
>   		prev = decode_tail(old);
>   		ACCESS_ONCE(prev->next) = node;
> @@ -358,6 +533,7 @@ void queue_spin_lock_slowpath(struct qsp
>   	 *
>   	 * *,x,y ->  *,0,0
>   	 */
> +	pv_wait_head(lock);
>   	while ((val = smp_load_acquire(&lock->val.counter))&
>   			_Q_LOCKED_PENDING_MASK)
>   		cpu_relax();
> @@ -391,6 +567,7 @@ void queue_spin_lock_slowpath(struct qsp
>   		cpu_relax();
>
>   	arch_mcs_spin_unlock_contended(&next->locked);
> +	pv_kick_node(next);
>   

pv_kick_node is an expensive operation and it can significantly slow 
down the locking operation if we have to do it for every subsequent task 
in the queue.

-Longman


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 10/11] qspinlock: Paravirt support
@ 2014-06-17  0:53     ` Waiman Long
  0 siblings, 0 replies; 192+ messages in thread
From: Waiman Long @ 2014-06-17  0:53 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-arch, riel, gleb, kvm, konrad.wilk, boris.ostrovsky,
	scott.norton, raghavendra.kt, paolo.bonzini, linux-kernel,
	virtualization, Peter Zijlstra, chegu_vinod, david.vrabel, oleg,
	xen-devel, tglx, paulmck, torvalds, mingo

I am resending it as my original reply had some HTML code and hence 
was rejected by the mailing lists.


On 06/15/2014 08:47 AM, Peter Zijlstra wrote:
>
>
>
> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
> +
> +/*
> + * Write a comment about how all this works...
> + */
> +
> +#define _Q_LOCKED_SLOW	(2U<<  _Q_LOCKED_OFFSET)
> +
> +struct pv_node {
> +	struct mcs_spinlock	mcs;
> +	struct mcs_spinlock	__offset[3];
> +	int cpu, head;
> +};

I am wondering why you need the separate cpu and head variables. I 
thought one would be enough here. The wait code puts the cpu number in 
head, and the kick_cpu code kicks the one in cpu, which is just the cpu # 
of the tail.

> +
> +#define INVALID_HEAD	-1
> +#define NO_HEAD		nr_cpu_ids
> +

I think it is better to use a constant like -2 for NO_HEAD instead of an 
external variable.

> +void __pv_init_node(struct mcs_spinlock *node)
> +{
> +	struct pv_node *pn = (struct pv_node *)node;
> +
> +	BUILD_BUG_ON(sizeof(struct pv_node)>  5*sizeof(struct mcs_spinlock));
> +
> +	pn->cpu = smp_processor_id();
> +	pn->head = INVALID_HEAD;
> +}
> +
> +static inline struct pv_node *pv_decode_tail(u32 tail)
> +{
> +	return (struct pv_node *)decode_tail(tail);
> +}
> +
> +void __pv_link_and_wait_node(u32 old, struct mcs_spinlock *node)
> +{
> +	struct pv_node *ppn, *pn = (struct pv_node *)node;
> +	unsigned int count;
> +
> +	if (!(old&  _Q_TAIL_MASK)) {
> +		pn->head = NO_HEAD;
> +		return;
> +	}
> +
> +	ppn = pv_decode_tail(old);
> +	ACCESS_ONCE(ppn->mcs.next) = node;
> +
> +	while (ppn->head == INVALID_HEAD)
> +		cpu_relax();
> +
> +	pn->head = ppn->head;

A race can happen here as pn->head can be changed to the head cpu by the 
head waiter while being changed by this function at the same time. It is 
safer to use cmpxchg to make sure that there is no accidental 
overwriting of the head CPU number.

> +
> +	for (;;) {
> +		count = SPIN_THRESHOLD;
> +
> +		do {
> +			if (smp_load_acquire(&node->locked))
> +				return;
> +
> +			cpu_relax();
> +		} while (--count);
> +
> +		pv_wait(&node->locked, 1);
> +	}
> +}
> +
> +void __pv_kick_node(struct mcs_spinlock *node)
> +{
> +	struct pv_node *pn = (struct pv_node *)node;
> +
> +	pv_kick(pn->cpu);
> +}
> +
> +void __pv_wait_head(struct qspinlock *lock)
> +{
> +	unsigned int count;
> +	struct pv_node *pn;
> +	int val, old, new;
> +
> +	for (;;) {
> +		count = SPIN_THRESHOLD;
> +
> +		do {
> +			val = smp_load_acquire(&lock->val.counter);
> +			if (!(val&  _Q_LOCKED_PENDING_MASK))
> +				return;
> +		} while (--count);
> +
> +		do {
> +			pn = pv_decode_tail(atomic_read(&lock->val));
> +
> +			while (pn->head == INVALID_HEAD)
> +				cpu_relax();
> +
> +			pn->head = smp_processor_id();
> +
> +		} while (pn != pv_decode_tail(atomic_read(&lock->val)));
> +
> +		/*
> +		 * Set _Q_LOCKED_SLOW; bail when the lock is free.
> +		 */
> +		val = atomic_read(&lock->val);
> +		for (;;) {
> +			if (!(val&  _Q_LOCKED_PENDING_MASK))
> +				return;
> +			new = val | _Q_LOCKED_SLOW;
> +			old = atomic_cmpxchg(&lock->val, val, new);
> +			if (old == val)
> +				break;
> +			val = old;
> +		}
> +
> +		/* XXX 16bit would be better */
> +		pv_wait(&lock->val.counter, new);
> +	}
> +}
> +
> +static void ___pv_kick_head(struct qspinlock *lock)
> +{
> +	struct pv_node *pn;
> +
> +	pn = pv_decode_tail(atomic_read(&lock->val));
> +
> +	while (pn->head == INVALID_HEAD)
> +		cpu_relax();
> +
> +	if (WARN_ON_ONCE(pn->head == NO_HEAD))
> +		return;
> +
> +	pv_kick(pn->head);
> +}
> +
> +void __pv_queue_unlock(struct qspinlock *lock)
> +{
> +	int val = atomic_read(&lock->val);
> +
> +	native_queue_unlock(lock);
> +
> +	if (val&  _Q_LOCKED_SLOW)
> +		___pv_kick_head(lock);
> +}
> +

Again a race can happen here between the reading and writing of the lock 
value. I can't think of a good way to do that without using cmpxchg.

> +#else
> +
> +static inline void pv_init_node(struct mcs_spinlock *node) { }
> +static inline void pv_link_and_wait_node(u32 old, struct mcs_spinlock *node) { }
> +static inline void pv_kick_node(struct mcs_spinlock *node) { }
> +
> +static inline void pv_wait_head(struct qspinlock *lock) { }
> +
> +#endif
> +
>   /**
>    * queue_spin_lock_slowpath - acquire the queue spinlock
>    * @lock: Pointer to queue spinlock structure
> @@ -247,6 +417,9 @@ void queue_spin_lock_slowpath(struct qsp
>
>   	BUILD_BUG_ON(CONFIG_NR_CPUS>= (1U<<  _Q_TAIL_CPU_BITS));
>
> +	if (pv_enabled())
> +		goto queue;
> +
>   	if (virt_queue_spin_lock(lock))
>   		return;
>
> @@ -323,6 +496,7 @@ void queue_spin_lock_slowpath(struct qsp
>   	node += idx;
>   	node->locked = 0;
>   	node->next = NULL;
> +	pv_init_node(node);
>
>   	/*
>   	 * We touched a (possibly) cold cacheline in the per-cpu queue node;
> @@ -343,6 +517,7 @@ void queue_spin_lock_slowpath(struct qsp
>   	/*
>   	 * if there was a previous node; link it and wait.
>   	 */
> +	pv_link_and_wait_node(old, node);
>   	if (old&  _Q_TAIL_MASK) {
>   		prev = decode_tail(old);
>   		ACCESS_ONCE(prev->next) = node;
> @@ -358,6 +533,7 @@ void queue_spin_lock_slowpath(struct qsp
>   	 *
>   	 * *,x,y ->  *,0,0
>   	 */
> +	pv_wait_head(lock);
>   	while ((val = smp_load_acquire(&lock->val.counter))&
>   			_Q_LOCKED_PENDING_MASK)
>   		cpu_relax();
> @@ -391,6 +567,7 @@ void queue_spin_lock_slowpath(struct qsp
>   		cpu_relax();
>
>   	arch_mcs_spin_unlock_contended(&next->locked);
> +	pv_kick_node(next);
>   

pv_kick_node is an expensive operation and it can significantly slow 
down the locking operation if we have to do it for every subsequent task 
in the queue.

-Longman

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 10/11] qspinlock: Paravirt support
  2014-06-15 12:47   ` Peter Zijlstra
                     ` (3 preceding siblings ...)
  (?)
@ 2014-06-17  0:53   ` Waiman Long
  -1 siblings, 0 replies; 192+ messages in thread
From: Waiman Long @ 2014-06-17  0:53 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-arch, gleb, kvm, boris.ostrovsky, scott.norton,
	raghavendra.kt, paolo.bonzini, linux-kernel, virtualization,
	Peter Zijlstra, chegu_vinod, david.vrabel, oleg, xen-devel, tglx,
	paulmck, torvalds, mingo

I am resending it as my original reply has some HTML code & hence 
rejected by the mailing lists.


On 06/15/2014 08:47 AM, Peter Zijlstra wrote:
>
>
>
> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
> +
> +/*
> + * Write a comment about how all this works...
> + */
> +
> +#define _Q_LOCKED_SLOW	(2U<<  _Q_LOCKED_OFFSET)
> +
> +struct pv_node {
> +	struct mcs_spinlock	mcs;
> +	struct mcs_spinlock	__offset[3];
> +	int cpu, head;
> +};

I am wondering why you need the separate cpu and head variables. I 
thought one will be enough here. The wait code put the cpu number in 
head, the the kick_cpu code kick the one in cpu which is just the cpu # 
of the tail.

> +
> +#define INVALID_HEAD	-1
> +#define NO_HEAD		nr_cpu_ids
> +

I think it is better to use a constant like -2 for NO_HEAD instead of an 
external variable.

> +void __pv_init_node(struct mcs_spinlock *node)
> +{
> +	struct pv_node *pn = (struct pv_node *)node;
> +
> +	BUILD_BUG_ON(sizeof(struct pv_node)>  5*sizeof(struct mcs_spinlock));
> +
> +	pn->cpu = smp_processor_id();
> +	pn->head = INVALID_HEAD;
> +}
> +
> +static inline struct pv_node *pv_decode_tail(u32 tail)
> +{
> +	return (struct pv_node *)decode_tail(tail);
> +}
> +
> +void __pv_link_and_wait_node(u32 old, struct mcs_spinlock *node)
> +{
> +	struct pv_node *ppn, *pn = (struct pv_node *)node;
> +	unsigned int count;
> +
> +	if (!(old&  _Q_TAIL_MASK)) {
> +		pn->head = NO_HEAD;
> +		return;
> +	}
> +
> +	ppn = pv_decode_tail(old);
> +	ACCESS_ONCE(ppn->mcs.next) = node;
> +
> +	while (ppn->head == INVALID_HEAD)
> +		cpu_relax();
> +
> +	pn->head = ppn->head;

A race can happen here as pn->head can be changed to the head cpu by the 
head waiter while being changed by this function at the same time. It is 
safer to use cmpxchg to make sure that there is no accidental 
overwriting of the head CPU number.

> +
> +	for (;;) {
> +		count = SPIN_THRESHOLD;
> +
> +		do {
> +			if (smp_load_acquire(&node->locked))
> +				return;
> +
> +			cpu_relax();
> +		} while (--count);
> +
> +		pv_wait(&node->locked, 1);
> +	}
> +}
> +
> +void __pv_kick_node(struct mcs_spinlock *node)
> +{
> +	struct pv_node *pn = (struct pv_node *)node;
> +
> +	pv_kick(pn->cpu);
> +}
> +
> +void __pv_wait_head(struct qspinlock *lock)
> +{
> +	unsigned int count;
> +	struct pv_node *pn;
> +	int val, old, new;
> +
> +	for (;;) {
> +		count = SPIN_THRESHOLD;
> +
> +		do {
> +			val = smp_load_acquire(&lock->val.counter);
> +			if (!(val&  _Q_LOCKED_PENDING_MASK))
> +				return;
> +		} while (--count);
> +
> +		do {
> +			pn = pv_decode_tail(atomic_read(&lock->val));
> +
> +			while (pn->head == INVALID_HEAD)
> +				cpu_relax();
> +
> +			pn->head = smp_processor_id();
> +
> +		} while (pn != pv_decode_tail(atomic_read(&lock->val)));
> +
> +		/*
> +		 * Set _Q_LOCKED_SLOW; bail when the lock is free.
> +		 */
> +		val = atomic_read(&lock->val);
> +		for (;;) {
> +			if (!(val&  _Q_LOCKED_PENDING_MASK))
> +				return;
> +			new = val | _Q_LOCKED_SLOW;
> +			old = atomic_cmpxchg(&lock->val, val, new);
> +			if (old == val)
> +				break;
> +			val = old;
> +		}
> +
> +		/* XXX 16bit would be better */
> +		pv_wait(&lock->val.counter, new);
> +	}
> +}
> +
> +static void ___pv_kick_head(struct qspinlock *lock)
> +{
> +	struct pv_node *pn;
> +
> +	pn = pv_decode_tail(atomic_read(&lock->val));
> +
> +	while (pn->head == INVALID_HEAD)
> +		cpu_relax();
> +
> +	if (WARN_ON_ONCE(pn->head == NO_HEAD))
> +		return;
> +
> +	pv_kick(pn->head);
> +}
> +
> +void __pv_queue_unlock(struct qspinlock *lock)
> +{
> +	int val = atomic_read(&lock->val);
> +
> +	native_queue_unlock(lock);
> +
> +	if (val&  _Q_LOCKED_SLOW)
> +		___pv_kick_head(lock);
> +}
> +

Again a race can happen here between the reading and writing of the lock 
value. I can't think of a good way to do that without using cmpxchg.

> +#else
> +
> +static inline void pv_init_node(struct mcs_spinlock *node) { }
> +static inline void pv_link_and_wait_node(u32 old, struct mcs_spinlock *node) { }
> +static inline void pv_kick_node(struct mcs_spinlock *node) { }
> +
> +static inline void pv_wait_head(struct qspinlock *lock) { }
> +
> +#endif
> +
>   /**
>    * queue_spin_lock_slowpath - acquire the queue spinlock
>    * @lock: Pointer to queue spinlock structure
> @@ -247,6 +417,9 @@ void queue_spin_lock_slowpath(struct qsp
>
>   	BUILD_BUG_ON(CONFIG_NR_CPUS>= (1U<<  _Q_TAIL_CPU_BITS));
>
> +	if (pv_enabled())
> +		goto queue;
> +
>   	if (virt_queue_spin_lock(lock))
>   		return;
>
> @@ -323,6 +496,7 @@ void queue_spin_lock_slowpath(struct qsp
>   	node += idx;
>   	node->locked = 0;
>   	node->next = NULL;
> +	pv_init_node(node);
>
>   	/*
>   	 * We touched a (possibly) cold cacheline in the per-cpu queue node;
> @@ -343,6 +517,7 @@ void queue_spin_lock_slowpath(struct qsp
>   	/*
>   	 * if there was a previous node; link it and wait.
>   	 */
> +	pv_link_and_wait_node(old, node);
>   	if (old&  _Q_TAIL_MASK) {
>   		prev = decode_tail(old);
>   		ACCESS_ONCE(prev->next) = node;
> @@ -358,6 +533,7 @@ void queue_spin_lock_slowpath(struct qsp
>   	 *
>   	 * *,x,y ->  *,0,0
>   	 */
> +	pv_wait_head(lock);
>   	while ((val = smp_load_acquire(&lock->val.counter))&
>   			_Q_LOCKED_PENDING_MASK)
>   		cpu_relax();
> @@ -391,6 +567,7 @@ void queue_spin_lock_slowpath(struct qsp
>   		cpu_relax();
>
>   	arch_mcs_spin_unlock_contended(&next->locked);
> +	pv_kick_node(next);
>   

pv_kick_node is an expensive operation and it can significantly slow 
down the locking operation if we have to do it for every subsequent task 
in the queue.

-Longman

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
  2014-06-16 20:49     ` Konrad Rzeszutek Wilk
  (?)
  (?)
@ 2014-06-17 20:03       ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-17 20:03 UTC (permalink / raw)
  To: Peter Zijlstra, Waiman.Long
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

> > +			new = tail | (val & _Q_LOCKED_MASK);
> > +
> > +		old = atomic_cmpxchg(&lock->val, val, new);
> > +		if (old == val)
> > +			break;
> > +
> > +		val = old;
> > +	}
> > +
> > +	/*
> > +	 * we won the trylock; forget about queueing.
> > +	 */
> > +	if (new == _Q_LOCKED_VAL)
> > +		goto release;
> > +
> > +	/*
> > +	 * if there was a previous node; link it and wait.
> > +	 */
> > +	if (old & ~_Q_LOCKED_MASK) {
> > +		prev = decode_tail(old);
> > +		ACCESS_ONCE(prev->next) = node;
> > +
> > +		arch_mcs_spin_lock_contended(&node->locked);

Could you add a comment here:

/* We are spinning forever until the previous node updates locked - which
it does once it has updated lock->val with our tail number. */

> > +	}
> > +
> > +	/*
> > +	 * we're at the head of the waitqueue, wait for the owner to go away.
> > +	 *
> > +	 * *,x -> *,0
> > +	 */
> > +	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
> > +		cpu_relax();
> > +
> > +	/*
> > +	 * claim the lock:
> > +	 *
> > +	 * n,0 -> 0,1 : lock, uncontended
> > +	 * *,0 -> *,1 : lock, contended
> > +	 */
> > +	for (;;) {
> > +		new = _Q_LOCKED_VAL;
> > +		if (val != tail)
> > +			new |= val;
> 
..snip..
> 
> Could you help a bit in explaining it in English please?

After looking at the assembler code I finally figured out how we can get
here. The 'contended' part threw me off: somehow I imagined there were two
or more CPUs stampeding here and trying to update lock->val. But in reality
the other CPUs are stuck in arch_mcs_spin_lock_contended(), spinning on
their local value.
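
(For reference, the generic arch_mcs_spin_lock_contended() in
kernel/locking/mcs_spinlock.h boils down to essentially that local spin:)

#define arch_mcs_spin_lock_contended(l)					\
do {									\
	while (!(smp_load_acquire(l)))					\
		cpu_relax();						\
} while (0)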

Perhaps you could add this comment.

/* Once queue_spin_unlock is called (which _subtracts_ _Q_LOCKED_VAL from
lock->val while still preserving the tail data), the winner gets to claim
the lock. Since we still need the other CPUs to continue and preserve the
strict ordering in which they set up node->next, we:
 1) update lock->val to the tail value (that is, the tail CPU and its
    index) ORed with _Q_LOCKED_VAL;
 2) once we are done, poke the other CPU (the one that linked to us) by
    writing to node->locked (below) so it can make progress and in turn
    loop on lock->val waiting for the _Q_LOCKED_MASK bits to go to zero.

*/

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
  2014-06-15 12:46   ` Peter Zijlstra
  (?)
  (?)
@ 2014-06-17 20:05     ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-17 20:05 UTC (permalink / raw)
  To: Peter Zijlstra, Waiman.Long
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

> + * The basic principle of a queue-based spinlock can best be understood
> + * by studying a classic queue-based spinlock implementation called the
> + * MCS lock. The paper below provides a good description for this kind
> + * of lock.
> + *
> + * http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf
> + *
> + * This queue spinlock implementation is based on the MCS lock, however to make
> + * it fit the 4 bytes we assume spinlock_t to be, and preserve its existing
> + * API, we must modify it some.
> + *
> + * In particular; where the traditional MCS lock consists of a tail pointer
> + * (8 bytes) and needs the next pointer (another 8 bytes) of its own node to
> + * unlock the next pending (next->locked), we compress both these: {tail,
> + * next->locked} into a single u32 value.
> + *
> + * Since a spinlock disables recursion of its own context and there is a limit
> + * to the contexts that can nest; namely: task, softirq, hardirq, nmi, we can
> + * encode the tail as an index indicating this context and a cpu number.
> + *
> + * We can further change the first spinner to spin on a bit in the lock word
> + * instead of its node; whereby avoiding the need to carry a node from lock to
> + * unlock, and preserving API.

You also made changes (compared to classic MCS) in that the unlock path
does not spin waiting for the successor, and the job of passing the lock
on is not done in the unlock path either.

Instead, all of that is now done in the lock acquirer's path.
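
Something like the following contrast might make that jump easier for
readers coming from the MCS paper (illustrative pseudo-code only, not
lifted from either implementation):

/* classic MCS release: the owner must find and poke its successor */
void mcs_unlock(struct mcs_spinlock **tail, struct mcs_spinlock *node)
{
	struct mcs_spinlock *next = ACCESS_ONCE(node->next);

	if (!next) {
		if (cmpxchg(tail, node, NULL) == node)
			return;				/* no successor */
		while (!(next = ACCESS_ONCE(node->next)))
			cpu_relax();			/* successor still linking in */
	}
	ACCESS_ONCE(next->locked) = 1;			/* hand the lock over */
}

/*
 * qspinlock release: just drop the locked byte; the head of the queue is
 * already spinning on lock->val and claims the lock itself.
 */
void queue_spin_unlock(struct qspinlock *lock)
{
	atomic_sub(_Q_LOCKED_VAL, &lock->val);
}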

Could you update the comment to say that please?

Thanks.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 03/11] qspinlock: Add pending bit
  2014-06-15 12:47   ` Peter Zijlstra
@ 2014-06-17 20:36     ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-17 20:36 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

On Sun, Jun 15, 2014 at 02:47:00PM +0200, Peter Zijlstra wrote:
> Because the qspinlock needs to touch a second cacheline; add a pending
> bit and allow a single in-word spinner before we punt to the second
> cacheline.

Could you add this in the description please:

And by second cacheline we mean the local 'node', that is mcs_nodes[0]
and mcs_nodes[idx].

Perhaps it might then be better to split this out into the header file,
as this is trying not to be slowpath code - but rather a
pre-slowpath 'let's try one more cmpxchg' in case the unlocker has just
unlocked itself.

So something like:

diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index e8a7ae8..29cc9c7 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -75,11 +75,21 @@ extern void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val);
  */
 static __always_inline void queue_spin_lock(struct qspinlock *lock)
 {
-	u32 val;
+	u32 val, new, old;
 
 	val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
 	if (likely(val == 0))
 		return;
+
+	/* One more attempt - but if we fail mark it as pending. */
+	if (val == _Q_LOCKED_VAL) {
+		new = _Q_LOCKED_VAL | _Q_PENDING_VAL;
+
+		old = atomic_cmpxchg(&lock->val, val, new);
+		if (old == _Q_LOCKED_VAL) /* YEEY! */
+			return;
+		val = old;
+	}
 	queue_spin_lock_slowpath(lock, val);
 }

and then the slowpath preserves most of the old logic path
(with the pending bit stuff)?

 
> 
> Signed-off-by: Peter Zijlstra <peterz@infradead.org>
> ---
>  include/asm-generic/qspinlock_types.h |   12 ++-
>  kernel/locking/qspinlock.c            |  109 +++++++++++++++++++++++++++-------
>  2 files changed, 97 insertions(+), 24 deletions(-)
> 
> --- a/include/asm-generic/qspinlock_types.h
> +++ b/include/asm-generic/qspinlock_types.h
> @@ -39,8 +39,9 @@ typedef struct qspinlock {
>   * Bitfields in the atomic value:
>   *
>   *  0- 7: locked byte
> - *  8- 9: tail index
> - * 10-31: tail cpu (+1)
> + *     8: pending
> + *  9-10: tail index
> + * 11-31: tail cpu (+1)
>   */
>  #define	_Q_SET_MASK(type)	(((1U << _Q_ ## type ## _BITS) - 1)\
>  				      << _Q_ ## type ## _OFFSET)
> @@ -48,7 +49,11 @@ typedef struct qspinlock {
>  #define _Q_LOCKED_BITS		8
>  #define _Q_LOCKED_MASK		_Q_SET_MASK(LOCKED)
>  
> -#define _Q_TAIL_IDX_OFFSET	(_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
> +#define _Q_PENDING_OFFSET	(_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
> +#define _Q_PENDING_BITS		1
> +#define _Q_PENDING_MASK		_Q_SET_MASK(PENDING)
> +
> +#define _Q_TAIL_IDX_OFFSET	(_Q_PENDING_OFFSET + _Q_PENDING_BITS)
>  #define _Q_TAIL_IDX_BITS	2
>  #define _Q_TAIL_IDX_MASK	_Q_SET_MASK(TAIL_IDX)
>  
> @@ -57,5 +62,6 @@ typedef struct qspinlock {
>  #define _Q_TAIL_CPU_MASK	_Q_SET_MASK(TAIL_CPU)
>  
>  #define _Q_LOCKED_VAL		(1U << _Q_LOCKED_OFFSET)
> +#define _Q_PENDING_VAL		(1U << _Q_PENDING_OFFSET)
>  
>  #endif /* __ASM_GENERIC_QSPINLOCK_TYPES_H */
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -83,24 +83,28 @@ static inline struct mcs_spinlock *decod
>  	return per_cpu_ptr(&mcs_nodes[idx], cpu);
>  }
>  
> +#define _Q_LOCKED_PENDING_MASK	(_Q_LOCKED_MASK | _Q_PENDING_MASK)
> +
>  /**
>   * queue_spin_lock_slowpath - acquire the queue spinlock
>   * @lock: Pointer to queue spinlock structure
>   * @val: Current value of the queue spinlock 32-bit word
>   *
> - * (queue tail, lock bit)
> - *
> - *              fast      :    slow                                  :    unlock
> - *                        :                                          :
> - * uncontended  (0,0)   --:--> (0,1) --------------------------------:--> (*,0)
> - *                        :       | ^--------.                    /  :
> - *                        :       v           \                   |  :
> - * uncontended            :    (n,x) --+--> (n,0)                 |  :
> - *   queue                :       | ^--'                          |  :
> - *                        :       v                               |  :
> - * contended              :    (*,x) --+--> (*,0) -----> (*,1) ---'  :
> - *   queue                :         ^--'                             :
> + * (queue tail, pending bit, lock bit)
>   *
> + *              fast     :    slow                                  :    unlock
> + *                       :                                          :
> + * uncontended  (0,0,0) -:--> (0,0,1) ------------------------------:--> (*,*,0)
> + *                       :       | ^--------.------.             /  :
> + *                       :       v           \      \            |  :
> + * pending               :    (0,1,1) +--> (0,1,0)   \           |  :
> + *                       :       | ^--'              |           |  :
> + *                       :       v                   |           |  :
> + * uncontended           :    (n,x,y) +--> (n,0,0) --'           |  :
> + *   queue               :       | ^--'                          |  :
> + *                       :       v                               |  :
> + * contended             :    (*,x,y) +--> (*,0,0) ---> (*,0,1) -'  :
> + *   queue               :         ^--'                             :
>   */
>  void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>  {
> @@ -110,6 +114,65 @@ void queue_spin_lock_slowpath(struct qsp
>  
>  	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
>  
> +	/*
> +	 * trylock || pending
> +	 *
> +	 * 0,0,0 -> 0,0,1 ; trylock
> +	 * 0,0,1 -> 0,1,1 ; pending
> +	 */
> +	for (;;) {
> +		/*
> +		 * If we observe any contention; queue.
> +		 */
> +		if (val & ~_Q_LOCKED_MASK)
> +			goto queue;
> +
> +		new = _Q_LOCKED_VAL;
> +		if (val == new)
> +			new |= _Q_PENDING_VAL;
> +
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == val)
> +			break;
> +
> +		val = old;
> +	}
> +
> +	/*
> +	 * we won the trylock
> +	 */
> +	if (new == _Q_LOCKED_VAL)
> +		return;
> +
> +	/*
> +	 * we're pending, wait for the owner to go away.
> +	 *
> +	 * *,1,1 -> *,1,0
> +	 */
> +	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
> +		cpu_relax();
> +
> +	/*
> +	 * take ownership and clear the pending bit.
> +	 *
> +	 * *,1,0 -> *,0,1
> +	 */
> +	for (;;) {
> +		new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
> +
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == val)
> +			break;
> +
> +		val = old;
> +	}
> +	return;
> +
> +	/*
> +	 * End of pending bit optimistic spinning and beginning of MCS
> +	 * queuing.
> +	 */
> +queue:
>  	node = this_cpu_ptr(&mcs_nodes[0]);
>  	idx = node->count++;
>  	tail = encode_tail(smp_processor_id(), idx);
> @@ -119,15 +182,18 @@ void queue_spin_lock_slowpath(struct qsp
>  	node->next = NULL;
>  
>  	/*
> +	 * we already touched the queueing cacheline; don't bother with pending
> +	 * stuff.
> +	 *
>  	 * trylock || xchg(lock, node)
>  	 *
> -	 * 0,0 -> 0,1 ; trylock
> -	 * p,x -> n,x ; prev = xchg(lock, node)
> +	 * 0,0,0 -> 0,0,1 ; trylock
> +	 * p,y,x -> n,y,x ; prev = xchg(lock, node)
>  	 */
>  	for (;;) {
>  		new = _Q_LOCKED_VAL;
>  		if (val)
> -			new = tail | (val & _Q_LOCKED_MASK);
> +			new = tail | (val & _Q_LOCKED_PENDING_MASK);
>  
>  		old = atomic_cmpxchg(&lock->val, val, new);
>  		if (old == val)
> @@ -145,7 +211,7 @@ void queue_spin_lock_slowpath(struct qsp
>  	/*
>  	 * if there was a previous node; link it and wait.
>  	 */
> -	if (old & ~_Q_LOCKED_MASK) {
> +	if (old & ~_Q_LOCKED_PENDING_MASK) {
>  		prev = decode_tail(old);
>  		ACCESS_ONCE(prev->next) = node;
>  
> @@ -153,18 +219,19 @@ void queue_spin_lock_slowpath(struct qsp
>  	}
>  
>  	/*
> -	 * we're at the head of the waitqueue, wait for the owner to go away.
> +	 * we're at the head of the waitqueue, wait for the owner & pending to
> +	 * go away.
>  	 *
> -	 * *,x -> *,0
> +	 * *,x,y -> *,0,0
>  	 */
> -	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
> +	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_PENDING_MASK)
>  		cpu_relax();
>  
>  	/*
>  	 * claim the lock:
>  	 *
> -	 * n,0 -> 0,1 : lock, uncontended
> -	 * *,0 -> *,1 : lock, contended
> +	 * n,0,0 -> 0,0,1 : lock, uncontended
> +	 * *,0,0 -> *,0,1 : lock, contended
>  	 */
>  	for (;;) {
>  		new = _Q_LOCKED_VAL;
> 
> 

^ permalink raw reply related	[flat|nested] 192+ messages in thread

* Re: [PATCH 03/11] qspinlock: Add pending bit
  2014-06-15 12:47   ` Peter Zijlstra
  (?)
@ 2014-06-17 20:36   ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-17 20:36 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, linux-arch, gleb, kvm, boris.ostrovsky,
	scott.norton, raghavendra.kt, paolo.bonzini, linux-kernel,
	virtualization, Peter Zijlstra, chegu_vinod, david.vrabel, oleg,
	xen-devel, tglx, paulmck, torvalds, mingo

On Sun, Jun 15, 2014 at 02:47:00PM +0200, Peter Zijlstra wrote:
> Because the qspinlock needs to touch a second cacheline; add a pending
> bit and allow a single in-word spinner before we punt to the second
> cacheline.

Could you add this in the description please:

And by second cacheline we mean the local 'node', that is mcs_nodes[0]
and mcs_nodes[idx].

Perhaps it might then be better to split this out into the header file,
as this is trying not to be slowpath code - but rather a
pre-slowpath 'let's try one more cmpxchg' in case the unlocker has just
unlocked itself.

So something like:

diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index e8a7ae8..29cc9c7 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -75,11 +75,21 @@ extern void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val);
  */
 static __always_inline void queue_spin_lock(struct qspinlock *lock)
 {
-	u32 val;
+	u32 val, new, old;
 
 	val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
 	if (likely(val == 0))
 		return;
+
+	/* One more attempt - but if we fail mark it as pending. */
+	if (val == _Q_LOCKED_VAL) {
+		new = _Q_LOCKED_VAL | _Q_PENDING_VAL;
+
+		old = atomic_cmpxchg(&lock->val, val, new);
+		if (old == _Q_LOCKED_VAL) /* YEEY! */
+			return;
+		val = old;
+	}
 	queue_spin_lock_slowpath(lock, val);
 }

and then the slowpath preserves most of the old logic path
(with the pending bit stuff)?

 
> 
> Signed-off-by: Peter Zijlstra <peterz@infradead.org>
> ---
>  include/asm-generic/qspinlock_types.h |   12 ++-
>  kernel/locking/qspinlock.c            |  109 +++++++++++++++++++++++++++-------
>  2 files changed, 97 insertions(+), 24 deletions(-)
> 
> --- a/include/asm-generic/qspinlock_types.h
> +++ b/include/asm-generic/qspinlock_types.h
> @@ -39,8 +39,9 @@ typedef struct qspinlock {
>   * Bitfields in the atomic value:
>   *
>   *  0- 7: locked byte
> - *  8- 9: tail index
> - * 10-31: tail cpu (+1)
> + *     8: pending
> + *  9-10: tail index
> + * 11-31: tail cpu (+1)
>   */
>  #define	_Q_SET_MASK(type)	(((1U << _Q_ ## type ## _BITS) - 1)\
>  				      << _Q_ ## type ## _OFFSET)
> @@ -48,7 +49,11 @@ typedef struct qspinlock {
>  #define _Q_LOCKED_BITS		8
>  #define _Q_LOCKED_MASK		_Q_SET_MASK(LOCKED)
>  
> -#define _Q_TAIL_IDX_OFFSET	(_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
> +#define _Q_PENDING_OFFSET	(_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
> +#define _Q_PENDING_BITS		1
> +#define _Q_PENDING_MASK		_Q_SET_MASK(PENDING)
> +
> +#define _Q_TAIL_IDX_OFFSET	(_Q_PENDING_OFFSET + _Q_PENDING_BITS)
>  #define _Q_TAIL_IDX_BITS	2
>  #define _Q_TAIL_IDX_MASK	_Q_SET_MASK(TAIL_IDX)
>  
> @@ -57,5 +62,6 @@ typedef struct qspinlock {
>  #define _Q_TAIL_CPU_MASK	_Q_SET_MASK(TAIL_CPU)
>  
>  #define _Q_LOCKED_VAL		(1U << _Q_LOCKED_OFFSET)
> +#define _Q_PENDING_VAL		(1U << _Q_PENDING_OFFSET)
>  
>  #endif /* __ASM_GENERIC_QSPINLOCK_TYPES_H */
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -83,24 +83,28 @@ static inline struct mcs_spinlock *decod
>  	return per_cpu_ptr(&mcs_nodes[idx], cpu);
>  }
>  
> +#define _Q_LOCKED_PENDING_MASK	(_Q_LOCKED_MASK | _Q_PENDING_MASK)
> +
>  /**
>   * queue_spin_lock_slowpath - acquire the queue spinlock
>   * @lock: Pointer to queue spinlock structure
>   * @val: Current value of the queue spinlock 32-bit word
>   *
> - * (queue tail, lock bit)
> - *
> - *              fast      :    slow                                  :    unlock
> - *                        :                                          :
> - * uncontended  (0,0)   --:--> (0,1) --------------------------------:--> (*,0)
> - *                        :       | ^--------.                    /  :
> - *                        :       v           \                   |  :
> - * uncontended            :    (n,x) --+--> (n,0)                 |  :
> - *   queue                :       | ^--'                          |  :
> - *                        :       v                               |  :
> - * contended              :    (*,x) --+--> (*,0) -----> (*,1) ---'  :
> - *   queue                :         ^--'                             :
> + * (queue tail, pending bit, lock bit)
>   *
> + *              fast     :    slow                                  :    unlock
> + *                       :                                          :
> + * uncontended  (0,0,0) -:--> (0,0,1) ------------------------------:--> (*,*,0)
> + *                       :       | ^--------.------.             /  :
> + *                       :       v           \      \            |  :
> + * pending               :    (0,1,1) +--> (0,1,0)   \           |  :
> + *                       :       | ^--'              |           |  :
> + *                       :       v                   |           |  :
> + * uncontended           :    (n,x,y) +--> (n,0,0) --'           |  :
> + *   queue               :       | ^--'                          |  :
> + *                       :       v                               |  :
> + * contended             :    (*,x,y) +--> (*,0,0) ---> (*,0,1) -'  :
> + *   queue               :         ^--'                             :
>   */
>  void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>  {
> @@ -110,6 +114,65 @@ void queue_spin_lock_slowpath(struct qsp
>  
>  	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
>  
> +	/*
> +	 * trylock || pending
> +	 *
> +	 * 0,0,0 -> 0,0,1 ; trylock
> +	 * 0,0,1 -> 0,1,1 ; pending
> +	 */
> +	for (;;) {
> +		/*
> +		 * If we observe any contention; queue.
> +		 */
> +		if (val & ~_Q_LOCKED_MASK)
> +			goto queue;
> +
> +		new = _Q_LOCKED_VAL;
> +		if (val == new)
> +			new |= _Q_PENDING_VAL;
> +
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == val)
> +			break;
> +
> +		val = old;
> +	}
> +
> +	/*
> +	 * we won the trylock
> +	 */
> +	if (new == _Q_LOCKED_VAL)
> +		return;
> +
> +	/*
> +	 * we're pending, wait for the owner to go away.
> +	 *
> +	 * *,1,1 -> *,1,0
> +	 */
> +	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
> +		cpu_relax();
> +
> +	/*
> +	 * take ownership and clear the pending bit.
> +	 *
> +	 * *,1,0 -> *,0,1
> +	 */
> +	for (;;) {
> +		new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
> +
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == val)
> +			break;
> +
> +		val = old;
> +	}
> +	return;
> +
> +	/*
> +	 * End of pending bit optimistic spinning and beginning of MCS
> +	 * queuing.
> +	 */
> +queue:
>  	node = this_cpu_ptr(&mcs_nodes[0]);
>  	idx = node->count++;
>  	tail = encode_tail(smp_processor_id(), idx);
> @@ -119,15 +182,18 @@ void queue_spin_lock_slowpath(struct qsp
>  	node->next = NULL;
>  
>  	/*
> +	 * we already touched the queueing cacheline; don't bother with pending
> +	 * stuff.
> +	 *
>  	 * trylock || xchg(lock, node)
>  	 *
> -	 * 0,0 -> 0,1 ; trylock
> -	 * p,x -> n,x ; prev = xchg(lock, node)
> +	 * 0,0,0 -> 0,0,1 ; trylock
> +	 * p,y,x -> n,y,x ; prev = xchg(lock, node)
>  	 */
>  	for (;;) {
>  		new = _Q_LOCKED_VAL;
>  		if (val)
> -			new = tail | (val & _Q_LOCKED_MASK);
> +			new = tail | (val & _Q_LOCKED_PENDING_MASK);
>  
>  		old = atomic_cmpxchg(&lock->val, val, new);
>  		if (old == val)
> @@ -145,7 +211,7 @@ void queue_spin_lock_slowpath(struct qsp
>  	/*
>  	 * if there was a previous node; link it and wait.
>  	 */
> -	if (old & ~_Q_LOCKED_MASK) {
> +	if (old & ~_Q_LOCKED_PENDING_MASK) {
>  		prev = decode_tail(old);
>  		ACCESS_ONCE(prev->next) = node;
>  
> @@ -153,18 +219,19 @@ void queue_spin_lock_slowpath(struct qsp
>  	}
>  
>  	/*
> -	 * we're at the head of the waitqueue, wait for the owner to go away.
> +	 * we're at the head of the waitqueue, wait for the owner & pending to
> +	 * go away.
>  	 *
> -	 * *,x -> *,0
> +	 * *,x,y -> *,0,0
>  	 */
> -	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
> +	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_PENDING_MASK)
>  		cpu_relax();
>  
>  	/*
>  	 * claim the lock:
>  	 *
> -	 * n,0 -> 0,1 : lock, uncontended
> -	 * *,0 -> *,1 : lock, contended
> +	 * n,0,0 -> 0,0,1 : lock, uncontended
> +	 * *,0,0 -> *,0,1 : lock, contended
>  	 */
>  	for (;;) {
>  		new = _Q_LOCKED_VAL;
> 
> 
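
To make the (tail, pending, locked) triples in the state diagram above
concrete, here is a small standalone sketch that decodes a lock word using
the field layout from the qspinlock_types.h hunk above. It is plain
user-space C for illustration only - the macro and function names here are
made up, not part of the patch:

#include <stdint.h>
#include <stdio.h>

/* Field layout per the hunk above: 0-7 locked, 8 pending,
 * 9-10 tail index, 11-31 tail cpu (+1, so 0 means "no tail"). */
#define LOCKED_OFFSET		0
#define LOCKED_BITS		8
#define PENDING_OFFSET		8
#define PENDING_BITS		1
#define TAIL_IDX_OFFSET		9
#define TAIL_IDX_BITS		2
#define TAIL_CPU_OFFSET		11

#define FIELD_MASK(off, bits)	(((1U << (bits)) - 1) << (off))

static void decode(uint32_t val)
{
	unsigned int locked   = (val & FIELD_MASK(LOCKED_OFFSET, LOCKED_BITS)) >> LOCKED_OFFSET;
	unsigned int pending  = (val & FIELD_MASK(PENDING_OFFSET, PENDING_BITS)) >> PENDING_OFFSET;
	unsigned int tail_idx = (val & FIELD_MASK(TAIL_IDX_OFFSET, TAIL_IDX_BITS)) >> TAIL_IDX_OFFSET;
	unsigned int tail_cpu = val >> TAIL_CPU_OFFSET;		/* encoded as cpu + 1 */

	printf("val=0x%08x -> tail_cpu+1=%u tail_idx=%u pending=%u locked=%u\n",
	       val, tail_cpu, tail_idx, pending, locked);
}

int main(void)
{
	decode(0x00000000);		/* (0,0,0): unlocked, uncontended   */
	decode(0x00000001);		/* (0,0,1): locked, nobody waiting  */
	decode(0x00000101);		/* (0,1,1): locked + pending bit    */
	decode((4u << 11) | 0x101);	/* (n,1,1): tail cpu 3, idx 0, too  */
	return 0;
}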

^ permalink raw reply related	[flat|nested] 192+ messages in thread

* Re: [PATCH 03/11] qspinlock: Add pending bit
  2014-06-17 20:36     ` Konrad Rzeszutek Wilk
@ 2014-06-17 20:51       ` Waiman Long
  -1 siblings, 0 replies; 192+ messages in thread
From: Waiman Long @ 2014-06-17 20:51 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Peter Zijlstra, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

On 06/17/2014 04:36 PM, Konrad Rzeszutek Wilk wrote:
> On Sun, Jun 15, 2014 at 02:47:00PM +0200, Peter Zijlstra wrote:
>> Because the qspinlock needs to touch a second cacheline; add a pending
>> bit and allow a single in-word spinner before we punt to the second
>> cacheline.
> Could you add this in the description please:
>
> And by second cacheline we mean the local 'node'. That is the:
> mcs_nodes[0] and mcs_nodes[idx]
>
> Perhaps it might be better then to split this in the header file
> as this is trying to not be a slowpath code - but rather - a
> pre-slow-path-lets-try-if-we can do another cmpxchg in case
> the unlocker has just unlocked itself.
>
> So something like:
>
> diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
> index e8a7ae8..29cc9c7 100644
> --- a/include/asm-generic/qspinlock.h
> +++ b/include/asm-generic/qspinlock.h
> @@ -75,11 +75,21 @@ extern void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val);
>    */
>   static __always_inline void queue_spin_lock(struct qspinlock *lock)
>   {
> -	u32 val;
> +	u32 val, new;
>
>   	val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
>   	if (likely(val == 0))
>   		return;
> +
> +	/* One more attempt - but if we fail mark it as pending. */
> +	if (val == _Q_LOCKED_VAL) {
> +		new = Q_LOCKED_VAL |_Q_PENDING_VAL;
> +
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == _Q_LOCKED_VAL) /* YEEY! */
> +			return;

No, it can't be left like that. The unlock path will not clear the pending 
bit. We are trying to make the fastpath as simple as possible as it may 
be inlined. The complexity of the queue spinlock is in the slowpath.

Moreover, a cmpxchg followed immediately by another cmpxchg will 
just increase the level of memory contention when a lock is fairly 
contended. The chance of the second cmpxchg() succeeding will be pretty low.

-Longman
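
On the point that the unlock path will not clear the pending bit: the
unlock fast path only releases the locked byte and never touches the
pending or tail bits, so whoever sets the pending bit must also clear it
and take the lock itself (the *,1,0 -> *,0,1 step in the patch). A rough
sketch of the unlock side assumed in this discussion - illustrative only,
not quoted from the series:

static __always_inline void queue_spin_unlock(struct qspinlock *lock)
{
	/*
	 * Subtracting _Q_LOCKED_VAL clears the locked byte (which holds
	 * exactly _Q_LOCKED_VAL while the lock is held) and leaves the
	 * pending bit and the tail bits alone - they belong to the waiters.
	 * The barrier gives the subtraction release semantics.
	 */
	smp_mb__before_atomic();
	atomic_sub(_Q_LOCKED_VAL, &lock->val);
}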



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 04/11] qspinlock: Extract out the exchange of tail code word
  2014-06-15 12:47 ` Peter Zijlstra
@ 2014-06-17 20:55     ` Konrad Rzeszutek Wilk
  2014-06-17 20:55   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-17 20:55 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

On Sun, Jun 15, 2014 at 02:47:01PM +0200, Peter Zijlstra wrote:
> From: Waiman Long <Waiman.Long@hp.com>
> 
> This patch extracts the logic for the exchange of new and previous tail
> code words into a new xchg_tail() function which can be optimized in a
> later patch.

And it also adds a third try at acquiring the lock. That, I think, should
be a separate patch.

And instead of saying 'later patch' you should spell out the name
of the patch, especially as this might not be obvious to somebody
doing git bisection.

> 
> Signed-off-by: Waiman Long <Waiman.Long@hp.com>
> Signed-off-by: Peter Zijlstra <peterz@infradead.org>
> ---
>  include/asm-generic/qspinlock_types.h |    2 +
>  kernel/locking/qspinlock.c            |   58 +++++++++++++++++++++-------------
>  2 files changed, 38 insertions(+), 22 deletions(-)
> 
> --- a/include/asm-generic/qspinlock_types.h
> +++ b/include/asm-generic/qspinlock_types.h
> @@ -61,6 +61,8 @@ typedef struct qspinlock {
>  #define _Q_TAIL_CPU_BITS	(32 - _Q_TAIL_CPU_OFFSET)
>  #define _Q_TAIL_CPU_MASK	_Q_SET_MASK(TAIL_CPU)
>  
> +#define _Q_TAIL_MASK		(_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK)
> +
>  #define _Q_LOCKED_VAL		(1U << _Q_LOCKED_OFFSET)
>  #define _Q_PENDING_VAL		(1U << _Q_PENDING_OFFSET)
>  
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -86,6 +86,31 @@ static inline struct mcs_spinlock *decod
>  #define _Q_LOCKED_PENDING_MASK	(_Q_LOCKED_MASK | _Q_PENDING_MASK)
>  
>  /**
> + * xchg_tail - Put in the new queue tail code word & retrieve previous one
> + * @lock : Pointer to queue spinlock structure
> + * @tail : The new queue tail code word
> + * Return: The previous queue tail code word
> + *
> + * xchg(lock, tail)
> + *
> + * p,*,* -> n,*,* ; prev = xchg(lock, node)
> + */
> +static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
> +{
> +	u32 old, new, val = atomic_read(&lock->val);
> +
> +	for (;;) {
> +		new = (val & _Q_LOCKED_PENDING_MASK) | tail;
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == val)
> +			break;
> +
> +		val = old;
> +	}
> +	return old;
> +}
> +
> +/**
>   * queue_spin_lock_slowpath - acquire the queue spinlock
>   * @lock: Pointer to queue spinlock structure
>   * @val: Current value of the queue spinlock 32-bit word
> @@ -182,36 +207,25 @@ void queue_spin_lock_slowpath(struct qsp
>  	node->next = NULL;
>  
>  	/*
> -	 * we already touched the queueing cacheline; don't bother with pending
> -	 * stuff.
> -	 *
> -	 * trylock || xchg(lock, node)
> -	 *
> -	 * 0,0,0 -> 0,0,1 ; trylock
> -	 * p,y,x -> n,y,x ; prev = xchg(lock, node)
> +	 * We touched a (possibly) cold cacheline in the per-cpu queue node;
> +	 * attempt the trylock once more in the hope someone let go while we
> +	 * weren't watching.
>  	 */
> -	for (;;) {
> -		new = _Q_LOCKED_VAL;
> -		if (val)
> -			new = tail | (val & _Q_LOCKED_PENDING_MASK);
> -
> -		old = atomic_cmpxchg(&lock->val, val, new);
> -		if (old == val)
> -			break;
> -
> -		val = old;
> -	}
> +	if (queue_spin_trylock(lock))
> +		goto release;

So now there are three of them? One in queue_spin_lock, then one at the start
of this function when checking for the pending bit, and then once more
here. And that is because the local cache line might be cold for the
'mcs_index' struct?

That all seems a bit experimental. But then we are already
in the slowpath, so we could just as well do:

for (i = 0; i < 10; i++)
	if (queue_spin_trylock(lock))
		goto release;

And it would have the same effect.
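
As a self-contained illustration of that suggestion (the helper name, the
bound of 10 and the cpu_relax() between attempts are all made up here, not
taken from the series):

static __always_inline bool queue_spin_trylock_bounded(struct qspinlock *lock, int tries)
{
	while (tries-- > 0) {
		if (queue_spin_trylock(lock))
			return true;
		cpu_relax();	/* back off a little between attempts */
	}
	return false;
}

	/* then, in the slowpath: */
	if (queue_spin_trylock_bounded(lock, 10))
		goto release;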


>  
>  	/*
> -	 * we won the trylock; forget about queueing.
> +	 * we already touched the queueing cacheline; don't bother with pending
> +	 * stuff.

I guess we could also just erase the pending bit if we wanted to. The
optimistic spinning will still go to the queue label, as lock->val will
have the tail value.

> +	 *
> +	 * p,*,* -> n,*,*
>  	 */
> -	if (new == _Q_LOCKED_VAL)
> -		goto release;
> +	old = xchg_tail(lock, tail);
>  
>  	/*
>  	 * if there was a previous node; link it and wait.
>  	 */
> -	if (old & ~_Q_LOCKED_PENDING_MASK) {
> +	if (old & _Q_TAIL_MASK) {
>  		prev = decode_tail(old);
>  		ACCESS_ONCE(prev->next) = node;
>  
> 
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 03/11] qspinlock: Add pending bit
  2014-06-17 20:51       ` Waiman Long
@ 2014-06-17 21:07         ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-17 21:07 UTC (permalink / raw)
  To: Waiman Long
  Cc: Peter Zijlstra, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

On Tue, Jun 17, 2014 at 04:51:57PM -0400, Waiman Long wrote:
> On 06/17/2014 04:36 PM, Konrad Rzeszutek Wilk wrote:
> >On Sun, Jun 15, 2014 at 02:47:00PM +0200, Peter Zijlstra wrote:
> >>Because the qspinlock needs to touch a second cacheline; add a pending
> >>bit and allow a single in-word spinner before we punt to the second
> >>cacheline.
> >Could you add this in the description please:
> >
> >And by second cacheline we mean the local 'node'. That is the:
> >mcs_nodes[0] and mcs_nodes[idx]
> >
> >Perhaps it might be better then to split this in the header file
> >as this is trying to not be a slowpath code - but rather - a
> >pre-slow-path-lets-try-if-we can do another cmpxchg in case
> >the unlocker has just unlocked itself.
> >
> >So something like:
> >
> >diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
> >index e8a7ae8..29cc9c7 100644
> >--- a/include/asm-generic/qspinlock.h
> >+++ b/include/asm-generic/qspinlock.h
> >@@ -75,11 +75,21 @@ extern void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val);
> >   */
> >  static __always_inline void queue_spin_lock(struct qspinlock *lock)
> >  {
> >-	u32 val;
> >+	u32 val, new;
> >
> >  	val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
> >  	if (likely(val == 0))
> >  		return;
> >+
> >+	/* One more attempt - but if we fail mark it as pending. */
> >+	if (val == _Q_LOCKED_VAL) {
> >+		new = Q_LOCKED_VAL |_Q_PENDING_VAL;
> >+
> >+		old = atomic_cmpxchg(&lock->val, val, new);
> >+		if (old == _Q_LOCKED_VAL) /* YEEY! */
> >+			return;
> 
> No, it can leave like that. The unlock path will not clear the pending bit.

Err, you are right. It needs to go back in the slowpath.

> We are trying to make the fastpath as simple as possible as it may be
> inlined. The complexity of the queue spinlock is in the slowpath.

Sure, but then it shouldn't be called slowpath anymore, as it is not
slow. It is a combination of a fast path (the potential chance of
grabbing the lock and setting the pending bit) and the real slow
path (the queueing). Perhaps it should be called 'queue_spinlock_complex'?

> 
> Moreover, an cmpxchg followed immediately followed by another cmpxchg will
> just increase the level of memory contention when a lock is fairly
> contended. The chance of second cmpxchg() succeeding will be pretty low.

Then why even do the pending bit - which is the first thing the slowpath
does. And if it grabs it (and sets the pending bit) it
immediately exits. Why not percolate that piece of code into this header?

And then leave all that slow code (queueing, mcs_lock access, etc.) in the slowpath.
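
Concretely, percolating that piece would mean pulling the pending-bit part
of queue_spin_lock_slowpath() - quoted in full earlier in the thread - into
its own helper, roughly like the sketch below. The helper name is
hypothetical; the body is just the pending-bit code from the patch,
including the wait loop that would then sit in front of the queueing code:

static __always_inline bool queue_spin_trylock_pending(struct qspinlock *lock, u32 *pval)
{
	u32 old, new, val = *pval;

	/*
	 * trylock || pending
	 *
	 * 0,0,0 -> 0,0,1 ; trylock
	 * 0,0,1 -> 0,1,1 ; pending
	 */
	for (;;) {
		/* If we observe any contention; let the caller queue. */
		if (val & ~_Q_LOCKED_MASK) {
			*pval = val;
			return false;
		}

		new = _Q_LOCKED_VAL;
		if (val == new)
			new |= _Q_PENDING_VAL;

		old = atomic_cmpxchg(&lock->val, val, new);
		if (old == val)
			break;

		val = old;
	}

	/* we won the trylock */
	if (new == _Q_LOCKED_VAL)
		return true;

	/*
	 * we're pending, wait for the owner to go away.
	 *
	 * *,1,1 -> *,1,0
	 */
	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
		cpu_relax();

	/*
	 * take ownership and clear the pending bit.
	 *
	 * *,1,0 -> *,0,1
	 */
	for (;;) {
		new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;

		old = atomic_cmpxchg(&lock->val, val, new);
		if (old == val)
			break;

		val = old;
	}
	return true;
}

Note the wait loop in the middle - that is the part that is awkward to pull
into an inlined header.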

> 
> -Longman
> 
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 03/11] qspinlock: Add pending bit
  2014-06-17 21:07         ` Konrad Rzeszutek Wilk
@ 2014-06-17 21:10           ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-17 21:10 UTC (permalink / raw)
  To: Waiman Long
  Cc: Peter Zijlstra, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

On Tue, Jun 17, 2014 at 05:07:29PM -0400, Konrad Rzeszutek Wilk wrote:
> On Tue, Jun 17, 2014 at 04:51:57PM -0400, Waiman Long wrote:
> > On 06/17/2014 04:36 PM, Konrad Rzeszutek Wilk wrote:
> > >On Sun, Jun 15, 2014 at 02:47:00PM +0200, Peter Zijlstra wrote:
> > >>Because the qspinlock needs to touch a second cacheline; add a pending
> > >>bit and allow a single in-word spinner before we punt to the second
> > >>cacheline.
> > >Could you add this in the description please:
> > >
> > >And by second cacheline we mean the local 'node'. That is the:
> > >mcs_nodes[0] and mcs_nodes[idx]
> > >
> > >Perhaps it might be better then to split this in the header file
> > >as this is trying to not be a slowpath code - but rather - a
> > >pre-slow-path-lets-try-if-we can do another cmpxchg in case
> > >the unlocker has just unlocked itself.
> > >
> > >So something like:
> > >
> > >diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
> > >index e8a7ae8..29cc9c7 100644
> > >--- a/include/asm-generic/qspinlock.h
> > >+++ b/include/asm-generic/qspinlock.h
> > >@@ -75,11 +75,21 @@ extern void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val);
> > >   */
> > >  static __always_inline void queue_spin_lock(struct qspinlock *lock)
> > >  {
> > >-	u32 val;
> > >+	u32 val, new;
> > >
> > >  	val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
> > >  	if (likely(val == 0))
> > >  		return;
> > >+
> > >+	/* One more attempt - but if we fail mark it as pending. */
> > >+	if (val == _Q_LOCKED_VAL) {
> > >+		new = Q_LOCKED_VAL |_Q_PENDING_VAL;
> > >+
> > >+		old = atomic_cmpxchg(&lock->val, val, new);
> > >+		if (old == _Q_LOCKED_VAL) /* YEEY! */
> > >+			return;
> > 
> > No, it can leave like that. The unlock path will not clear the pending bit.
> 
> Err, you are right. It needs to go back in the slowpath.

What I should have written is:

if (old == 0) /* YEEY */
  return;

As that would be the same thing as this patch does with the pending bit - that
is, if on the second compare-and-exchange we can set the pending bit (and the
lock) because the lock has just been released - we are good.

And it is a quick path.

> 
> > We are trying to make the fastpath as simple as possible as it may be
> > inlined. The complexity of the queue spinlock is in the slowpath.
> 
> Sure, but then it shouldn't be called slowpath anymore as it is not
> slow. It is a combination of fast path (the potential chance of
> grabbing the lock and setting the pending lock) and the real slow
> path (the queuing). Perhaps it should be called 'queue_spinlock_complex' ?
> 

I forgot to mention - that was the crux of my comments - just change
the 'slowpath' name to 'complex' at that point, to better reflect what
it does.

> > 
> > Moreover, an cmpxchg followed immediately followed by another cmpxchg will
> > just increase the level of memory contention when a lock is fairly
> > contended. The chance of second cmpxchg() succeeding will be pretty low.
> 
> Then why even do the pending bit - which is what the slowpath does
> for the first time. And if it grabs it (And sets the pending bit) it
> immediately exits. Why not perculate that piece of code in-to this header.
> 
> And the leave all that slow code (queing, mcs_lock access, etc) in the slowpath.
> 
> > 
> > -Longman
> > 
> > 

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 03/11] qspinlock: Add pending bit
  2014-06-17 21:10           ` Konrad Rzeszutek Wilk
@ 2014-06-17 22:25             ` Waiman Long
  -1 siblings, 0 replies; 192+ messages in thread
From: Waiman Long @ 2014-06-17 22:25 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Peter Zijlstra, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

On 06/17/2014 05:10 PM, Konrad Rzeszutek Wilk wrote:
> On Tue, Jun 17, 2014 at 05:07:29PM -0400, Konrad Rzeszutek Wilk wrote:
>> On Tue, Jun 17, 2014 at 04:51:57PM -0400, Waiman Long wrote:
>>> On 06/17/2014 04:36 PM, Konrad Rzeszutek Wilk wrote:
>>>> On Sun, Jun 15, 2014 at 02:47:00PM +0200, Peter Zijlstra wrote:
>>>>> Because the qspinlock needs to touch a second cacheline; add a pending
>>>>> bit and allow a single in-word spinner before we punt to the second
>>>>> cacheline.
>>>> Could you add this in the description please:
>>>>
>>>> And by second cacheline we mean the local 'node'. That is the:
>>>> mcs_nodes[0] and mcs_nodes[idx]
>>>>
>>>> Perhaps it might be better then to split this in the header file
>>>> as this is trying to not be a slowpath code - but rather - a
>>>> pre-slow-path-lets-try-if-we can do another cmpxchg in case
>>>> the unlocker has just unlocked itself.
>>>>
>>>> So something like:
>>>>
>>>> diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
>>>> index e8a7ae8..29cc9c7 100644
>>>> --- a/include/asm-generic/qspinlock.h
>>>> +++ b/include/asm-generic/qspinlock.h
>>>> @@ -75,11 +75,21 @@ extern void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val);
>>>>    */
>>>>   static __always_inline void queue_spin_lock(struct qspinlock *lock)
>>>>   {
>>>> -	u32 val;
>>>> +	u32 val, new;
>>>>
>>>>   	val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
>>>>   	if (likely(val == 0))
>>>>   		return;
>>>> +
>>>> +	/* One more attempt - but if we fail mark it as pending. */
>>>> +	if (val == _Q_LOCKED_VAL) {
>>>> +		new = Q_LOCKED_VAL |_Q_PENDING_VAL;
>>>> +
>>>> +		old = atomic_cmpxchg(&lock->val, val, new);
>>>> +		if (old == _Q_LOCKED_VAL) /* YEEY! */
>>>> +			return;
>>> No, it can leave like that. The unlock path will not clear the pending bit.
>> Err, you are right. It needs to go back in the slowpath.
> What I should have wrote is:
>
> if (old == 0) /* YEEY */
>    return;

Unfortunately, that still doesn't work. If old is 0, it just means the 
cmpxchg failed. We still haven't got the lock.
> As that would the same thing as this patch does on the pending bit - that
> is if we can on the second compare and exchange set the pending bit (and the
> lock) and the lock has been released - we are good.

That is not true. When the lock is freed, the pending bit holder will 
still have to clear the pending bit and set the lock bit as is done in 
the slowpath. We cannot skip the step here. The problem of moving the 
pending code here is that it includes a wait loop which we don't want to 
put in the fastpath.
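
For reference, the cmpxchg() semantics being relied on here: the call
returns the value that was in memory, and it only wrote the new value if
that old value equals the expected argument. A tiny user-space analogue
using C11 atomics (the helper name and values are made up for illustration):

#include <stdatomic.h>
#include <stdio.h>

/* Returns the old value, mirroring the kernel's atomic_cmpxchg(). */
static unsigned int cmpxchg_u32(_Atomic unsigned int *p, unsigned int expected,
				unsigned int new)
{
	atomic_compare_exchange_strong(p, &expected, new);
	return expected;	/* old value: unchanged on success, updated on failure */
}

int main(void)
{
	_Atomic unsigned int lockval = 0;	/* the lock was just released */
	unsigned int old;

	/* Try the locked -> locked|pending transition while the lock is free. */
	old = cmpxchg_u32(&lockval, 0x001 /* locked */, 0x101 /* locked|pending */);

	/* old == 0: the compare failed, nothing was written, the lock is not ours. */
	printf("old=%#x lockval=%#x\n", old, atomic_load(&lockval));
	return 0;
}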
>
> And it is a quick path.
>
>>> We are trying to make the fastpath as simple as possible as it may be
>>> inlined. The complexity of the queue spinlock is in the slowpath.
>> Sure, but then it shouldn't be called slowpath anymore as it is not
>> slow. It is a combination of fast path (the potential chance of
>> grabbing the lock and setting the pending lock) and the real slow
>> path (the queuing). Perhaps it should be called 'queue_spinlock_complex' ?
>>
> I forgot to mention - that was the crux of my comments - just change
> the slowpath to complex name at that point to better reflect what
> it does.

Actually, in my v11 patch I subdivided the slowpath into a slowpath for 
the pending code and a slowerpath for the actual queuing. Perhaps we could 
use quickpath and slowpath instead. Anyway, it is a minor detail that we 
can discuss after the core code gets merged.

-Longman

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 03/11] qspinlock: Add pending bit
  2014-06-17 20:36     ` Konrad Rzeszutek Wilk
@ 2014-06-18 11:29       ` Paolo Bonzini
  -1 siblings, 0 replies; 192+ messages in thread
From: Paolo Bonzini @ 2014-06-18 11:29 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk, Peter Zijlstra
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

Il 17/06/2014 22:36, Konrad Rzeszutek Wilk ha scritto:
> +	/* One more attempt - but if we fail mark it as pending. */
> +	if (val == _Q_LOCKED_VAL) {
> +		new = _Q_LOCKED_VAL | _Q_PENDING_VAL;
> +
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == _Q_LOCKED_VAL) /* YEEY! */
> +			return;
> +		val = old;
> +	}

Note that Peter's code is in a for(;;) loop:


+	for (;;) {
+		/*
+		 * If we observe any contention; queue.
+		 */
+		if (val & ~_Q_LOCKED_MASK)
+			goto queue;
+
+		new = _Q_LOCKED_VAL;
+		if (val == new)
+			new |= _Q_PENDING_VAL;
+
+		old = atomic_cmpxchg(&lock->val, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	/*
+	 * we won the trylock
+	 */
+	if (new == _Q_LOCKED_VAL)
+		return;

So what you'd have is basically:

	/*
	 * One more attempt if no one is already in queue.  Perhaps
	 * they have unlocked the spinlock already.
	 */
	if (val == _Q_LOCKED_VAL && atomic_read(&lock->val) == 0) {
		old = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
		if (old == 0) /* YEEY! */
			return;
		val = old;
	}

But I agree with Waiman that this is unlikely to trigger often enough. 
It does have to be handled in the slowpath for correctness, but the most 
likely path is (0,0,1)->(0,1,1).
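
(To spell out the notation: the triples are (queue tail, pending bit,
lock value), as in the comments in qspinlock.c.  The common contended
sequence is

	(0,0,1)		lock held, no pending spinner, empty queue
	(0,1,1)		a second cpu sets the pending bit and spins in-word
	(0,1,0)		the owner releases the lock
	(0,0,1)		the pending cpu clears pending and takes the lock

hence (0,0,1)->(0,1,1) being the transition to optimize for.)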

Paolo

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 04/11] qspinlock: Extract out the exchange of tail code word
  2014-06-17 20:55     ` Konrad Rzeszutek Wilk
@ 2014-06-18 11:37       ` Paolo Bonzini
  -1 siblings, 0 replies; 192+ messages in thread
From: Paolo Bonzini @ 2014-06-18 11:37 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk, Peter Zijlstra
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

Il 17/06/2014 22:55, Konrad Rzeszutek Wilk ha scritto:
> On Sun, Jun 15, 2014 at 02:47:01PM +0200, Peter Zijlstra wrote:
>> From: Waiman Long <Waiman.Long@hp.com>
>>
>> This patch extracts the logic for the exchange of new and previous tail
>> code words into a new xchg_tail() function which can be optimized in a
>> later patch.
>
> And also adds a third try on acquiring the lock. That I think should
> be a seperate patch.

It doesn't really add a new try, the old code is:


-	for (;;) {
-		new = _Q_LOCKED_VAL;
-		if (val)
-			new = tail | (val & _Q_LOCKED_PENDING_MASK);
-
-		old = atomic_cmpxchg(&lock->val, val, new);
-		if (old == val)
-			break;
-
-		val = old;
-	}

  	/*
-	 * we won the trylock; forget about queueing.
  	 */
-	if (new == _Q_LOCKED_VAL)
-		goto release;

The trylock happens if the "if (val)" hits the else branch.

What the patch does is change it from attempting two transitions with a 
single cmpxchg:

-	 * 0,0,0 -> 0,0,1 ; trylock
-	 * p,y,x -> n,y,x ; prev = xchg(lock, node)

to first doing the trylock, then the xchg.  If the trylock fails but 
the xchg then returns prev=0,0,0, the next step of the algorithm goes 
to the locked/uncontended state:

+	/*
+	 * claim the lock:
+	 *
+	 * n,0 -> 0,1 : lock, uncontended

Similar to your suggestion of patch 3, it's expected that the xchg will 
*not* return prev=0,0,0 after a failed trylock.
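
Condensed, the slowpath after this patch reads roughly like this
(untested sketch, leaving out the actual waiting):

	if (queue_spin_trylock(lock))		/* 0,0,0 -> 0,0,1 */
		goto release;

	old = xchg_tail(lock, tail);		/* p,*,* -> n,*,* */

	if (old & _Q_TAIL_MASK) {
		/* somebody was queued before us; link behind them and wait */
		prev = decode_tail(old);
		ACCESS_ONCE(prev->next) = node;
	}
	/*
	 * else we are the queue head: wait for locked+pending to clear,
	 * then claim the lock, n,0 -> 0,1 (uncontended).
	 */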

However, I *do* agree with you that it's simpler to just squash this 
patch into 01/11.

Paolo

> And instead of saying 'later patch' you should spell out the name
> of the patch. Especially as this might not be obvious from somebody
> doing git bisection.
>
>>
>> Signed-off-by: Waiman Long <Waiman.Long@hp.com>
>> Signed-off-by: Peter Zijlstra <peterz@infradead.org>
>> ---
>>  include/asm-generic/qspinlock_types.h |    2 +
>>  kernel/locking/qspinlock.c            |   58 +++++++++++++++++++++-------------
>>  2 files changed, 38 insertions(+), 22 deletions(-)
>>
>> --- a/include/asm-generic/qspinlock_types.h
>> +++ b/include/asm-generic/qspinlock_types.h
>> @@ -61,6 +61,8 @@ typedef struct qspinlock {
>>  #define _Q_TAIL_CPU_BITS	(32 - _Q_TAIL_CPU_OFFSET)
>>  #define _Q_TAIL_CPU_MASK	_Q_SET_MASK(TAIL_CPU)
>>
>> +#define _Q_TAIL_MASK		(_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK)
>> +
>>  #define _Q_LOCKED_VAL		(1U << _Q_LOCKED_OFFSET)
>>  #define _Q_PENDING_VAL		(1U << _Q_PENDING_OFFSET)
>>
>> --- a/kernel/locking/qspinlock.c
>> +++ b/kernel/locking/qspinlock.c
>> @@ -86,6 +86,31 @@ static inline struct mcs_spinlock *decod
>>  #define _Q_LOCKED_PENDING_MASK	(_Q_LOCKED_MASK | _Q_PENDING_MASK)
>>
>>  /**
>> + * xchg_tail - Put in the new queue tail code word & retrieve previous one
>> + * @lock : Pointer to queue spinlock structure
>> + * @tail : The new queue tail code word
>> + * Return: The previous queue tail code word
>> + *
>> + * xchg(lock, tail)
>> + *
>> + * p,*,* -> n,*,* ; prev = xchg(lock, node)
>> + */
>> +static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
>> +{
>> +	u32 old, new, val = atomic_read(&lock->val);
>> +
>> +	for (;;) {
>> +		new = (val & _Q_LOCKED_PENDING_MASK) | tail;
>> +		old = atomic_cmpxchg(&lock->val, val, new);
>> +		if (old == val)
>> +			break;
>> +
>> +		val = old;
>> +	}
>> +	return old;
>> +}
>> +
>> +/**
>>   * queue_spin_lock_slowpath - acquire the queue spinlock
>>   * @lock: Pointer to queue spinlock structure
>>   * @val: Current value of the queue spinlock 32-bit word
>> @@ -182,36 +207,25 @@ void queue_spin_lock_slowpath(struct qsp
>>  	node->next = NULL;
>>
>>  	/*
>> -	 * we already touched the queueing cacheline; don't bother with pending
>> -	 * stuff.
>> -	 *
>> -	 * trylock || xchg(lock, node)
>> -	 *
>> -	 * 0,0,0 -> 0,0,1 ; trylock
>> -	 * p,y,x -> n,y,x ; prev = xchg(lock, node)
>> +	 * We touched a (possibly) cold cacheline in the per-cpu queue node;
>> +	 * attempt the trylock once more in the hope someone let go while we
>> +	 * weren't watching.
>>  	 */
>> -	for (;;) {
>> -		new = _Q_LOCKED_VAL;
>> -		if (val)
>> -			new = tail | (val & _Q_LOCKED_PENDING_MASK);
>> -
>> -		old = atomic_cmpxchg(&lock->val, val, new);
>> -		if (old == val)
>> -			break;
>> -
>> -		val = old;
>> -	}
>> +	if (queue_spin_trylock(lock))
>> +		goto release;
>
> So now are three of them? One in queue_spin_lock, then at the start
> of this function when checking for the pending bit, and the once more
> here. And that is because the local cache line might be cold for the
> 'mcs_index' struct?
>
> That all seems to be a bit of experimental. But then we are already
> in the slowpath so we could as well do:
>
> for (i = 0; i < 10; i++)
> 	if (queue_spin_trylock(lock))
> 		goto release;
>
> And would have the same effect.
>
>
>>
>>  	/*
>> -	 * we won the trylock; forget about queueing.
>> +	 * we already touched the queueing cacheline; don't bother with pending
>> +	 * stuff.
>
> I guess we could also just erase the pending bit if we wanted too. The
> optimistic spinning will still hit go to the queue label as lock->val will
> have the tail value.
>
>> +	 *
>> +	 * p,*,* -> n,*,*
>>  	 */
>> -	if (new == _Q_LOCKED_VAL)
>> -		goto release;
>> +	old = xchg_tail(lock, tail);
>>
>>  	/*
>>  	 * if there was a previous node; link it and wait.
>>  	 */
>> -	if (old & ~_Q_LOCKED_PENDING_MASK) {
>> +	if (old & _Q_TAIL_MASK) {
>>  		prev = decode_tail(old);
>>  		ACCESS_ONCE(prev->next) = node;
>>
>>
>>


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 05/11] qspinlock: Optimize for smaller NR_CPUS
  2014-06-15 12:47   ` Peter Zijlstra
@ 2014-06-18 11:39     ` Paolo Bonzini
  -1 siblings, 0 replies; 192+ messages in thread
From: Paolo Bonzini @ 2014-06-18 11:39 UTC (permalink / raw)
  To: Peter Zijlstra, Waiman.Long, tglx, mingo
  Cc: linux-arch, linux-kernel, virtualization, xen-devel, kvm,
	paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod, Peter Zijlstra

Il 15/06/2014 14:47, Peter Zijlstra ha scritto:
>
> -	for (;;) {
> -		new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
> -
> -		old = atomic_cmpxchg(&lock->val, val, new);
> -		if (old == val)
> -			break;
> -
> -		val = old;
> -	}
> +	clear_pending_set_locked(lock, val);
>  	return;


Might as well add clear_pending_set_locked already in patch 3.
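
For patch 3 it could simply wrap the loop being removed above -- a
sketch:

	/*
	 * clear_pending_set_locked - take ownership and clear the pending bit
	 *
	 * *,1,0 -> *,0,1
	 */
	static inline void
	clear_pending_set_locked(struct qspinlock *lock, u32 val)
	{
		u32 old, new;

		for (;;) {
			new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;

			old = atomic_cmpxchg(&lock->val, val, new);
			if (old == val)
				break;

			val = old;
		}
	}

with this patch then only swapping in the short-cut store for the
small-NR_CPUS layout.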

Paolo

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 06/11] qspinlock: Optimize pending bit
  2014-06-15 12:47 ` Peter Zijlstra
@ 2014-06-18 11:42     ` Paolo Bonzini
  2014-06-18 11:42     ` Paolo Bonzini
  1 sibling, 0 replies; 192+ messages in thread
From: Paolo Bonzini @ 2014-06-18 11:42 UTC (permalink / raw)
  To: Peter Zijlstra, Waiman.Long, tglx, mingo
  Cc: linux-arch, linux-kernel, virtualization, xen-devel, kvm,
	paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod, Peter Zijlstra

Il 15/06/2014 14:47, Peter Zijlstra ha scritto:
> XXX: merge into the pending bit patch..

Agree, or if not move it right after the pending bit patch, before the 
NR_CPUS optimization.
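
(The hunk being discussed boils down to something like this -- a
sketch, not the exact patch: if we observe the pending bit set with the
lock free, the pending owner is about to take the lock, so spin briefly
instead of queueing:

	if (val == _Q_PENDING_VAL) {
		while ((val = atomic_read(&lock->val)) == _Q_PENDING_VAL)
			cpu_relax();
	}

i.e. wait for the 0,1,0 -> 0,0,1 transition and then fall through to
the normal trylock/pending logic.)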

Paolo

> It is possible to observe the pending bit without the locked bit when
> the last owner has just released but the pending owner has not yet
> taken ownership.
>
> In this case we would normally queue -- because the pending bit is
> already taken. However, in this case the pending bit is guaranteed to
> be released 'soon', therefore wait for it and avoid queueing.
>
> Signed-off-by: Peter Zijlstra <peterz@infradead.org>


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 10/11] qspinlock: Paravirt support
  2014-06-16 22:08   ` Waiman Long
@ 2014-06-18 12:03       ` Paolo Bonzini
  2014-06-18 12:03     ` Paolo Bonzini
                         ` (2 subsequent siblings)
  3 siblings, 0 replies; 192+ messages in thread
From: Paolo Bonzini @ 2014-06-18 12:03 UTC (permalink / raw)
  To: Waiman Long, Peter Zijlstra
  Cc: tglx, mingo, linux-arch, linux-kernel, virtualization, xen-devel,
	kvm, paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod, Peter Zijlstra

Il 17/06/2014 00:08, Waiman Long ha scritto:
>> +void __pv_queue_unlock(struct qspinlock *lock)
>> +{
>> +	int val = atomic_read(&lock->val);
>> +
>> +	native_queue_unlock(lock);
>> +
>> +	if (val & _Q_LOCKED_SLOW)
>> +		___pv_kick_head(lock);
>> +}
>> +
>
> Again a race can happen here between the reading and writing of the lock
> value. I can't think of a good way to do that without using cmpxchg.

Could you just use xchg on the locked byte?
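
Something like this (untested, and assuming the _Q_LOCKED_SLOW flag
lives in the locked byte so the xchg hands it back):

	void __pv_queue_unlock(struct qspinlock *lock)
	{
		u8 val = xchg((u8 *)lock, 0);	/* releases the lock */

		if (val & _Q_LOCKED_SLOW)
			___pv_kick_head(lock);
	}

so that the read and the release become a single atomic operation.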

Paolo

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 10/11] qspinlock: Paravirt support
  2014-06-15 12:47   ` Peter Zijlstra
@ 2014-06-18 12:04     ` Paolo Bonzini
  -1 siblings, 0 replies; 192+ messages in thread
From: Paolo Bonzini @ 2014-06-18 12:04 UTC (permalink / raw)
  To: Peter Zijlstra, Waiman.Long, tglx, mingo
  Cc: linux-arch, linux-kernel, virtualization, xen-devel, kvm,
	paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod, Peter Zijlstra

Il 15/06/2014 14:47, Peter Zijlstra ha scritto:
>
>
>  #if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)
>
> -#define	queue_spin_unlock queue_spin_unlock
>  /**
>   * queue_spin_unlock - release a queue spinlock
>   * @lock : Pointer to queue spinlock structure
>   *
>   * An effective smp_store_release() on the least-significant byte.
>   */
> -static inline void queue_spin_unlock(struct qspinlock *lock)
> +static inline void native_queue_unlock(struct qspinlock *lock)
>  {
>  	barrier();
>  	ACCESS_ONCE(*(u8 *)lock) = 0;
>  }
>
> +#else
> +
> +static inline void native_queue_unlock(struct qspinlock *lock)
> +{
> +	atomic_dec(&lock->val);
> +}
> +
>  #endif /* !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE */


Shouldn't this be (part of) an earlier patch?  Also, does it go wrong if 
(CONFIG_X86_OOSTORE || CONFIG_X86_PPRO_FENCE) is set and paravirt patches 
the unlock to a single movb?  Of course the paravirt spinlocks could simply 
depend on !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE.

> +
> +#define INVALID_HEAD	-1
> +#define NO_HEAD		nr_cpu_ids
> +

-2, like Waiman said.

Paolo

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 03/11] qspinlock: Add pending bit
  2014-06-18 11:29       ` Paolo Bonzini
@ 2014-06-18 13:36         ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-18 13:36 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Peter Zijlstra, Waiman.Long, tglx, mingo, linux-arch,
	linux-kernel, virtualization, xen-devel, kvm, paolo.bonzini,
	boris.ostrovsky, paulmck, riel, torvalds, raghavendra.kt,
	david.vrabel, oleg, gleb, scott.norton, chegu_vinod,
	Peter Zijlstra

On Wed, Jun 18, 2014 at 01:29:48PM +0200, Paolo Bonzini wrote:
> Il 17/06/2014 22:36, Konrad Rzeszutek Wilk ha scritto:
> >+	/* One more attempt - but if we fail mark it as pending. */
> >+	if (val == _Q_LOCKED_VAL) {
> >+		new = _Q_LOCKED_VAL | _Q_PENDING_VAL;
> >+
> >+		old = atomic_cmpxchg(&lock->val, val, new);
> >+		if (old == _Q_LOCKED_VAL) /* YEEY! */
> >+			return;
> >+		val = old;
> >+	}
> 
> Note that Peter's code is in a for(;;) loop:
> 
> 
> +	for (;;) {
> +		/*
> +		 * If we observe any contention; queue.
> +		 */
> +		if (val & ~_Q_LOCKED_MASK)
> +			goto queue;
> +
> +		new = _Q_LOCKED_VAL;
> +		if (val == new)
> +			new |= _Q_PENDING_VAL;
> +
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == val)
> +			break;
> +
> +		val = old;
> +	}
> +
> +	/*
> +	 * we won the trylock
> +	 */
> +	if (new == _Q_LOCKED_VAL)
> +		return;
> 
> So what you'd have is basically:
> 
> 	/*
> 	 * One more attempt if no one is already in queue.  Perhaps
> 	 * they have unlocked the spinlock already.
> 	 */
> 	if (val == _Q_LOCKED_VAL && atomic_read(&lock->val) == 0) {
> 		old = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
> 		if (old == 0) /* YEEY! */
> 			return;
> 		val = old;
> 	}
> 
> But I agree with Waiman that this is unlikely to trigger often enough. It
> does have to be handled in the slowpath for correctness, but the most likely
> path is (0,0,1)->(0,1,1).

<nods>
> 
> Paolo

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 04/11] qspinlock: Extract out the exchange of tail code word
  2014-06-18 11:37       ` Paolo Bonzini
@ 2014-06-18 13:50         ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-18 13:50 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Peter Zijlstra, Waiman.Long, tglx, mingo, linux-arch,
	linux-kernel, virtualization, xen-devel, kvm, paolo.bonzini,
	boris.ostrovsky, paulmck, riel, torvalds, raghavendra.kt,
	david.vrabel, oleg, gleb, scott.norton, chegu_vinod,
	Peter Zijlstra

On Wed, Jun 18, 2014 at 01:37:45PM +0200, Paolo Bonzini wrote:
> Il 17/06/2014 22:55, Konrad Rzeszutek Wilk ha scritto:
> >On Sun, Jun 15, 2014 at 02:47:01PM +0200, Peter Zijlstra wrote:
> >>From: Waiman Long <Waiman.Long@hp.com>
> >>
> >>This patch extracts the logic for the exchange of new and previous tail
> >>code words into a new xchg_tail() function which can be optimized in a
> >>later patch.
> >
> >And also adds a third try on acquiring the lock. That I think should
> >be a seperate patch.
> 
> It doesn't really add a new try, the old code is:
> 
> 
> -	for (;;) {
> -		new = _Q_LOCKED_VAL;
> -		if (val)
> -			new = tail | (val & _Q_LOCKED_PENDING_MASK);
> -
> -		old = atomic_cmpxchg(&lock->val, val, new);
> -		if (old == val)
> -			break;
> -
> -		val = old;
> -	}
> 
>  	/*
> -	 * we won the trylock; forget about queueing.
>  	 */
> -	if (new == _Q_LOCKED_VAL)
> -		goto release;
> 
> The trylock happens if the "if (val)" hits the else branch.
> 
> What the patch does is change it from attempting two transitions with a
> single cmpxchg:
> 
> -	 * 0,0,0 -> 0,0,1 ; trylock
> -	 * p,y,x -> n,y,x ; prev = xchg(lock, node)
> 
> to first doing the trylock, then the xchg.  If the trylock fails but the
> xchg then returns prev=0,0,0, the next step of the algorithm goes to the
> locked/uncontended state:
> 
> +	/*
> +	 * claim the lock:
> +	 *
> +	 * n,0 -> 0,1 : lock, uncontended
> 
> Similar to your suggestion of patch 3, it's expected that the xchg will
> *not* return prev=0,0,0 after a failed trylock.

I do like your explanation. I hope that Peter will put it in the
description as it explains the change quite well.

> 
> However, I *do* agree with you that it's simpler to just squash this patch
> into 01/11.

Uh, did I say that? Oh, I said why not make it right the first time!

I meant in terms of separating the slowpath (aka the bytelock on the pending
bit) from the queue (MCS code). Or renaming the function to be called
'complex' instead of 'slowpath', as it is getting quite hairy.

The #1 patch is nice by itself - as it lays out the foundation of the
MCS-similar code - and if Ingo decides he does not want this pending
byte-lock bit business - it can be easily reverted or dropped.

In terms of squashing this into #1 - I would advocate against that.

Thanks!

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 10/11] qspinlock: Paravirt support
  2014-06-18 12:03       ` Paolo Bonzini
@ 2014-06-18 15:26         ` Waiman Long
  -1 siblings, 0 replies; 192+ messages in thread
From: Waiman Long @ 2014-06-18 15:26 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Peter Zijlstra, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, konrad.wilk,
	boris.ostrovsky, paulmck, riel, torvalds, raghavendra.kt,
	david.vrabel, oleg, gleb, scott.norton, chegu_vinod,
	Peter Zijlstra

On 06/18/2014 08:03 AM, Paolo Bonzini wrote:
> Il 17/06/2014 00:08, Waiman Long ha scritto:
>>> +void __pv_queue_unlock(struct qspinlock *lock)
>>> +{
>>> +    int val = atomic_read(&lock->val);
>>> +
>>> +    native_queue_unlock(lock);
>>> +
>>> +    if (val & _Q_LOCKED_SLOW)
>>> +        ___pv_kick_head(lock);
>>> +}
>>> +
>>
>> Again a race can happen here between the reading and writing of the lock
>> value. I can't think of a good way to do that without using cmpxchg.
>
> Could you just use xchg on the locked byte?
>
> Paolo

The slowpath flag is just an indication that the queue head cpu might 
have been suspended; it may no longer be, e.g. because of a spurious 
wakeup. Releasing the lock unconditionally may cause the queue to be 
changed while it is being inspected. It really depends on how the cpu 
kicking is being handled. My patch delays the unlocking until all the 
inspections have been done to make sure that we don't waste time doing 
a cpu kick that is not needed.
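
Roughly, the ordering I have in mind is (hypothetical helper name, only
to illustrate the ordering, not the actual code in my patch):

	/* inspect the queue first, while we still hold the lock ... */
	bool kick = pv_queue_head_halted(lock);

	/* ... and only then release the lock and kick if really needed */
	native_queue_unlock(lock);
	if (kick)
		___pv_kick_head(lock);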

-Longman

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 04/11] qspinlock: Extract out the exchange of tail code word
  2014-06-18 13:50         ` Konrad Rzeszutek Wilk
@ 2014-06-18 15:46           ` Waiman Long
  -1 siblings, 0 replies; 192+ messages in thread
From: Waiman Long @ 2014-06-18 15:46 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Paolo Bonzini, Peter Zijlstra, tglx, mingo, linux-arch,
	linux-kernel, virtualization, xen-devel, kvm, paolo.bonzini,
	boris.ostrovsky, paulmck, riel, torvalds, raghavendra.kt,
	david.vrabel, oleg, gleb, scott.norton, chegu_vinod,
	Peter Zijlstra

On 06/18/2014 09:50 AM, Konrad Rzeszutek Wilk wrote:
> On Wed, Jun 18, 2014 at 01:37:45PM +0200, Paolo Bonzini wrote:
>> Il 17/06/2014 22:55, Konrad Rzeszutek Wilk ha scritto:
>>> On Sun, Jun 15, 2014 at 02:47:01PM +0200, Peter Zijlstra wrote:
>>>> From: Waiman Long<Waiman.Long@hp.com>
>>>>
>>>> This patch extracts the logic for the exchange of new and previous tail
>>>> code words into a new xchg_tail() function which can be optimized in a
>>>> later patch.
>>> And also adds a third try on acquiring the lock. That I think should
>>> be a separate patch.
>> It doesn't really add a new try, the old code is:
>>
>>
>> -	for (;;) {
>> -		new = _Q_LOCKED_VAL;
>> -		if (val)
>> -			new = tail | (val&  _Q_LOCKED_PENDING_MASK);
>> -
>> -		old = atomic_cmpxchg(&lock->val, val, new);
>> -		if (old == val)
>> -			break;
>> -
>> -		val = old;
>> -	}
>>
>>   	/*
>> -	 * we won the trylock; forget about queueing.
>>   	 */
>> -	if (new == _Q_LOCKED_VAL)
>> -		goto release;
>>
>> The trylock happens if the "if (val)" hits the else branch.
>>
>> What the patch does is change it from attempting two transitions with a
>> single cmpxchg:
>>
>> -	 * 0,0,0 ->  0,0,1 ; trylock
>> -	 * p,y,x ->  n,y,x ; prev = xchg(lock, node)
>>
>> to first doing the trylock, then the xchg.  If the trylock passes and the
>> xchg returns prev=0,0,0, the next step of the algorithm goes to the
>> locked/uncontended state
>>
>> +	/*
>> +	 * claim the lock:
>> +	 *
>> +	 * n,0 ->  0,1 : lock, uncontended
>>
>> Similar to your suggestion of patch 3, it's expected that the xchg will
>> *not* return prev=0,0,0 after a failed trylock.
> I do like your explanation. I hope that Peter will put it in the
> description as it explains the change quite well.
>
>> However, I *do* agree with you that it's simpler to just squash this patch
>> into 01/11.
> Uh, did I say that? Oh I said why not make it right the first time!
>
> I meant in terms of separating the slowpath (aka the bytelock on the pending
> bit) from the queue (MCS code). Or renaming the function to be called
> 'complex' instead of 'slowpath' as it is getting quite hairy.
>
> The #1 patch is nice by itself - as it lays out the foundation of the
> MCS-similar code - and if Ingo decides he does not want this pending
> byte-lock bit business - it can be easily reverted or dropped.

The pending bit code is needed for performance parity with the ticket 
spinlock for light load. My own measurement indicates that the queuing 
overhead will cause the queue spinlock to be slower than the ticket 
spinlock with 2-4 contending tasks. The pending bit solves the performance 
problem with 2 contending tasks, leaving only the 3-4 task cases being a 
bit slower than the ticket spinlock, which should be more than 
compensated by its superior performance with heavy contention and 
slightly better performance with no contention.
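
Roughly, the 2-contender fast path added by the pending bit looks like the 
sketch below (a simplification of patch 3, not the literal code; it assumes 
_Q_PENDING_VAL is the usual (1U << _Q_PENDING_OFFSET) value). The second 
task never has to touch the MCS queue at all:

	/*
	 * one task holds the lock, nobody is queued:
	 * 0,0,1 -> 0,1,1 ; set the pending bit
	 */
	val = atomic_cmpxchg(&lock->val, _Q_LOCKED_VAL,
			     _Q_LOCKED_VAL | _Q_PENDING_VAL);
	if (val == _Q_LOCKED_VAL) {
		/*
		 * we own the pending bit; wait for the owner to go away,
		 * then take the lock and clear pending:
		 *
		 * *,1,1 -> *,1,0 -> *,0,1
		 */
		while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
			cpu_relax();
		clear_pending_set_locked(lock, val);
		return;
	}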

-Longman

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 04/11] qspinlock: Extract out the exchange of tail code word
  2014-06-18 15:46           ` Waiman Long
@ 2014-06-18 15:49             ` Paolo Bonzini
  -1 siblings, 0 replies; 192+ messages in thread
From: Paolo Bonzini @ 2014-06-18 15:49 UTC (permalink / raw)
  To: Waiman Long, Konrad Rzeszutek Wilk
  Cc: Peter Zijlstra, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

Il 18/06/2014 17:46, Waiman Long ha scritto:
>>
>>
>> The #1 patch is nice by itself - as it lays out the foundation of the
>> MCS-similar code - and if Ingo decides he does not want this pending
>> byte-lock bit business - it can be easily reverted or dropped.
>
> The pending bit code is needed for performance parity with ticket
> spinlock for light load. My own measurement indicates that the queuing
> overhead will cause the queue spinlock to be slower than ticket spinlock
> with 2-4 contending tasks. The pending bit solves the performance
> problem with 2 contending tasks, leaving only the 3-4 task cases being a
> bit slower than the ticket spinlock which should be more than
> compensated by its superior performance with heavy contention and
> slightly better performance with no contention.

Note that this patch is not related to the pending bit, only to the 
trylock bit which is already in patch 1.  It serializes two 
previously-parallel checks for transitions.  This is why I thought it 
could already belong in patch 1.
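
In shorthand (my notation, borrowed from the state annotations in the 
patch, not its literal text):

	/*
	 * before: a single cmpxchg loop attempted either of
	 *
	 *   0,0,0 -> 0,0,1 ; trylock
	 *   p,y,x -> n,y,x ; prev = xchg(lock, node)
	 *
	 * after: the 0,0,0 -> 0,0,1 trylock is attempted first and, only
	 * when it fails, the tail is published with xchg_tail(); the queue
	 * head later claims the lock with n,0 -> 0,1.
	 */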

Paolo

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 05/11] qspinlock: Optimize for smaller NR_CPUS
  2014-06-15 12:47   ` Peter Zijlstra
@ 2014-06-18 15:57     ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-18 15:57 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

On Sun, Jun 15, 2014 at 02:47:02PM +0200, Peter Zijlstra wrote:
> From: Peter Zijlstra <peterz@infradead.org>
> 
> When we allow for a max NR_CPUS < 2^14 we can optimize the pending
> wait-acquire and the xchg_tail() operations.
> 
> By growing the pending bit to a byte, we reduce the tail to 16bit.
> This means we can use xchg16 for the tail part and do away with all
> the repeated compxchg() operations.
> 
> This in turn allows us to unconditionally acquire; the locked state
> as observed by the wait loops cannot change. And because both locked
> and pending are now a full byte we can use simple stores for the
> state transition, obviating one atomic operation entirely.

I have to ask - how much more performance do you get from this?

Is this extra atomic operation hurting that much?
> 
> All this is horribly broken on Alpha pre EV56 (and any other arch that
> cannot do single-copy atomic byte stores).
> 
> Signed-off-by: Peter Zijlstra <peterz@infradead.org>
> ---
>  include/asm-generic/qspinlock_types.h |   13 ++++
>  kernel/locking/qspinlock.c            |  103 ++++++++++++++++++++++++++++++----
>  2 files changed, 106 insertions(+), 10 deletions(-)
> 
> --- a/include/asm-generic/qspinlock_types.h
> +++ b/include/asm-generic/qspinlock_types.h
> @@ -38,6 +38,14 @@ typedef struct qspinlock {
>  /*
>   * Bitfields in the atomic value:
>   *
> + * When NR_CPUS < 16K
> + *  0- 7: locked byte
> + *     8: pending
> + *  9-15: not used
> + * 16-17: tail index
> + * 18-31: tail cpu (+1)
> + *
> + * When NR_CPUS >= 16K
>   *  0- 7: locked byte
>   *     8: pending
>   *  9-10: tail index
> @@ -50,7 +58,11 @@ typedef struct qspinlock {
>  #define _Q_LOCKED_MASK		_Q_SET_MASK(LOCKED)
>  
>  #define _Q_PENDING_OFFSET	(_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
> +#if CONFIG_NR_CPUS < (1U << 14)
> +#define _Q_PENDING_BITS		8
> +#else
>  #define _Q_PENDING_BITS		1
> +#endif
>  #define _Q_PENDING_MASK		_Q_SET_MASK(PENDING)
>  
>  #define _Q_TAIL_IDX_OFFSET	(_Q_PENDING_OFFSET + _Q_PENDING_BITS)
> @@ -61,6 +73,7 @@ typedef struct qspinlock {
>  #define _Q_TAIL_CPU_BITS	(32 - _Q_TAIL_CPU_OFFSET)
>  #define _Q_TAIL_CPU_MASK	_Q_SET_MASK(TAIL_CPU)
>  
> +#define _Q_TAIL_OFFSET		_Q_TAIL_IDX_OFFSET
>  #define _Q_TAIL_MASK		(_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK)
>  
>  #define _Q_LOCKED_VAL		(1U << _Q_LOCKED_OFFSET)
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -22,6 +22,7 @@
>  #include <linux/percpu.h>
>  #include <linux/hardirq.h>
>  #include <linux/mutex.h>
> +#include <asm/byteorder.h>
>  #include <asm/qspinlock.h>
>  
>  /*
> @@ -48,6 +49,9 @@
>   * We can further change the first spinner to spin on a bit in the lock word
>   * instead of its node; whereby avoiding the need to carry a node from lock to
>   * unlock, and preserving API.
> + *
> + * N.B. The current implementation only supports architectures that allow
> + *      atomic operations on smaller 8-bit and 16-bit data types.
>   */
>  
>  #include "mcs_spinlock.h"
> @@ -85,6 +89,87 @@ static inline struct mcs_spinlock *decod
>  
>  #define _Q_LOCKED_PENDING_MASK	(_Q_LOCKED_MASK | _Q_PENDING_MASK)
>  
> +/*
> + * By using the whole 2nd least significant byte for the pending bit, we
> + * can allow better optimization of the lock acquisition for the pending
> + * bit holder.
> + */
> +#if _Q_PENDING_BITS == 8
> +
> +struct __qspinlock {
> +	union {
> +		atomic_t val;
> +		struct {
> +#ifdef __LITTLE_ENDIAN
> +			u16	locked_pending;
> +			u16	tail;
> +#else
> +			u16	tail;
> +			u16	locked_pending;
> +#endif
> +		};
> +	};
> +};
> +
> +/**
> + * clear_pending_set_locked - take ownership and clear the pending bit.
> + * @lock: Pointer to queue spinlock structure
> + * @val : Current value of the queue spinlock 32-bit word
> + *
> + * *,1,0 -> *,0,1
> + *
> + * Lock stealing is not allowed if this function is used.
> + */
> +static __always_inline void
> +clear_pending_set_locked(struct qspinlock *lock, u32 val)
> +{
> +	struct __qspinlock *l = (void *)lock;
> +
> +	ACCESS_ONCE(l->locked_pending) = _Q_LOCKED_VAL;
> +}
> +
> +/*
> + * xchg_tail - Put in the new queue tail code word & retrieve previous one

Missing full stop.
> + * @lock : Pointer to queue spinlock structure
> + * @tail : The new queue tail code word
> + * Return: The previous queue tail code word
> + *
> + * xchg(lock, tail)
> + *
> + * p,*,* -> n,*,* ; prev = xchg(lock, node)
> + */
> +static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
> +{
> +	struct __qspinlock *l = (void *)lock;
> +
> +	return (u32)xchg(&l->tail, tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
> +}
> +
> +#else /* _Q_PENDING_BITS == 8 */
> +
> +/**
> + * clear_pending_set_locked - take ownership and clear the pending bit.
> + * @lock: Pointer to queue spinlock structure
> + * @val : Current value of the queue spinlock 32-bit word
> + *
> + * *,1,0 -> *,0,1
> + */
> +static __always_inline void
> +clear_pending_set_locked(struct qspinlock *lock, u32 val)
> +{
> +	u32 new, old;
> +
> +	for (;;) {
> +		new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
> +
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == val)
> +			break;
> +
> +		val = old;
> +	}
> +}
> +
>  /**
>   * xchg_tail - Put in the new queue tail code word & retrieve previous one
>   * @lock : Pointer to queue spinlock structure
> @@ -109,6 +194,7 @@ static __always_inline u32 xchg_tail(str
>  	}
>  	return old;
>  }
> +#endif /* _Q_PENDING_BITS == 8 */
>  
>  /**
>   * queue_spin_lock_slowpath - acquire the queue spinlock
> @@ -173,8 +259,13 @@ void queue_spin_lock_slowpath(struct qsp
>  	 * we're pending, wait for the owner to go away.
>  	 *
>  	 * *,1,1 -> *,1,0
> +	 *
> +	 * this wait loop must be a load-acquire such that we match the
> +	 * store-release that clears the locked bit and create lock
> +	 * sequentiality; this because not all clear_pending_set_locked()
> +	 * implementations imply full barriers.
>  	 */
> -	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
> +	while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)

lock->val.counter? Ugh, all to deal with the 'int' -> 'u32' (or 'u64')

Could you introduce a macro in atomic.h called 'atomic_read_raw' which
would do this? Like this:


diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index fef3a80..5a83750 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -160,6 +160,8 @@ static inline void atomic_or(int i, atomic_t *v)
 }
 #endif /* #ifndef CONFIG_ARCH_HAS_ATOMIC_OR */
 
+#define atomic_read_raw(v)	(v.counter)
+
 #include <asm-generic/atomic-long.h>
 #ifdef CONFIG_GENERIC_ATOMIC64
 #include <asm-generic/atomic64.h>
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index fc7fd8c..2833fe1 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -265,7 +265,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * sequentiality; this because not all clear_pending_set_locked()
 	 * implementations imply full barriers.
 	 */
-	while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
+	while ((val = smp_load_acquire(atomic_read_raw(&lock->val))) & _Q_LOCKED_MASK)
 		arch_mutex_cpu_relax();
 
 	/*

?
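
For completeness, my reading of how that would expand at the call site 
('.' and '->' bind tighter than the unary '&', so smp_load_acquire() still 
gets a pointer to the raw counter; the macro merely hides the .counter 
poking):

	while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
		arch_mutex_cpu_relax();
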
>  		cpu_relax();
>  
>  	/*
> @@ -182,15 +273,7 @@ void queue_spin_lock_slowpath(struct qsp
>  	 *
>  	 * *,1,0 -> *,0,1
>  	 */
> -	for (;;) {
> -		new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
> -
> -		old = atomic_cmpxchg(&lock->val, val, new);
> -		if (old == val)
> -			break;
> -
> -		val = old;
> -	}
> +	clear_pending_set_locked(lock, val);
>  	return;
>  
>  	/*
> 
> 

^ permalink raw reply related	[flat|nested] 192+ messages in thread

* Re: [PATCH 04/11] qspinlock: Extract out the exchange of tail code word
  2014-06-18 15:46           ` Waiman Long
@ 2014-06-18 16:02             ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-18 16:02 UTC (permalink / raw)
  To: Waiman Long
  Cc: Paolo Bonzini, Peter Zijlstra, tglx, mingo, linux-arch,
	linux-kernel, virtualization, xen-devel, kvm, paolo.bonzini,
	boris.ostrovsky, paulmck, riel, torvalds, raghavendra.kt,
	david.vrabel, oleg, gleb, scott.norton, chegu_vinod,
	Peter Zijlstra

> >>However, I *do* agree with you that it's simpler to just squash this patch
> >>into 01/11.
> >Uh, did I say that? Oh I said why not make it right the first time!
> >
> >I meant in terms of separating the slowpath (aka the bytelock on the pending
> >bit) from the queue (MCS code). Or renaming the function to be called
> >'complex' instead of 'slowpath' as it is getting quite hairy.
> >
> >The #1 patch is nice by itself - as it lays out the foundation of the
> >MCS-similar code - and if Ingo decides he does not want this pending
> >byte-lock bit business - it can be easily reverted or dropped.
> 
> The pending bit code is needed for performance parity with ticket spinlock
> for light load. My own measurement indicates that the queuing overhead will
> cause the queue spinlock to be slower than ticket spinlock with 2-4
> contending tasks. The pending bit solves the performance problem with 2

Aha!

> contending tasks, leaving only the 3-4 task cases being a bit slower than the
> ticket spinlock which should be more than compensated by its superior
> performance with heavy contention and slightly better performance with no
> contention.

That should be mentioned in the commit description as the rationale for
the patch "qspinlock: Add pending bit" and also in the code.

Thank you!

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 07/11] qspinlock: Use a simple write to grab the lock, if applicable
  2014-06-15 12:47   ` Peter Zijlstra
@ 2014-06-18 16:36     ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-18 16:36 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

On Sun, Jun 15, 2014 at 02:47:04PM +0200, Peter Zijlstra wrote:
> From: Waiman Long <Waiman.Long@hp.com>
> 
> Currently, atomic_cmpxchg() is used to get the lock. However, this is
> not really necessary if there is more than one task in the queue and
> the queue head doesn't need to reset the queue code word. For that case,

s/queue code word/tail {number,value}/ ?


> a simple write to set the lock bit is enough as the queue head will
> be the only one eligible to get the lock as long as it checks that
> both the lock and pending bits are not set. The current pending bit
> waiting code will ensure that the bit will not be set as soon as the
> queue code word (tail) in the lock is set.

Just use the same word as above.
> 
> With that change, there are some slight improvements in the performance
> of the queue spinlock in the 5M loop micro-benchmark run on a 4-socket
> Westmere-EX machine as shown in the tables below.
> 
> 		[Standalone/Embedded - same node]
>   # of tasks	Before patch	After patch	%Change
>   ----------	-----------	----------	-------
>        3	 2324/2321	2248/2265	 -3%/-2%
>        4	 2890/2896	2819/2831	 -2%/-2%
>        5	 3611/3595	3522/3512	 -2%/-2%
>        6	 4281/4276	4173/4160	 -3%/-3%
>        7	 5018/5001	4875/4861	 -3%/-3%
>        8	 5759/5750	5563/5568	 -3%/-3%
> 
> 		[Standalone/Embedded - different nodes]
>   # of tasks	Before patch	After patch	%Change
>   ----------	-----------	----------	-------
>        3	12242/12237	12087/12093	 -1%/-1%
>        4	10688/10696	10507/10521	 -2%/-2%
> 
> It was also found that this change produced a much bigger performance
> improvement in the newer IvyBridge-EX chip and essentially closed
> the performance gap between the ticket spinlock and queue spinlock.
> 
> The disk workload of the AIM7 benchmark was run on a 4-socket
> Westmere-EX machine with both ext4 and xfs RAM disks at 3000 users
> on a 3.14 based kernel. The results of the test runs were:
> 
>                 AIM7 XFS Disk Test
>   kernel                 JPM    Real Time   Sys Time    Usr Time
>   -----                  ---    ---------   --------    --------
>   ticketlock            5678233    3.17       96.61       5.81
>   qspinlock             5750799    3.13       94.83       5.97
> 
>                 AIM7 EXT4 Disk Test
>   kernel                 JPM    Real Time   Sys Time    Usr Time
>   -----                  ---    ---------   --------    --------
>   ticketlock            1114551   16.15      509.72       7.11
>   qspinlock             2184466    8.24      232.99       6.01
> 
> The ext4 filesystem run had a much higher spinlock contention than
> the xfs filesystem run.
> 
> The "ebizzy -m" test was also run with the following results:
> 
>   kernel               records/s  Real Time   Sys Time    Usr Time
>   -----                ---------  ---------   --------    --------
>   ticketlock             2075       10.00      216.35       3.49
>   qspinlock              3023       10.00      198.20       4.80
> 
> Signed-off-by: Waiman Long <Waiman.Long@hp.com>
> Signed-off-by: Peter Zijlstra <peterz@infradead.org>
> ---
>  kernel/locking/qspinlock.c |   59 ++++++++++++++++++++++++++++++++-------------
>  1 file changed, 43 insertions(+), 16 deletions(-)
> 
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -93,24 +93,33 @@ static inline struct mcs_spinlock *decod
>   * By using the whole 2nd least significant byte for the pending bit, we
>   * can allow better optimization of the lock acquisition for the pending
>   * bit holder.
> + *
> + * This internal structure is also used by the set_locked function which
> + * is not restricted to _Q_PENDING_BITS == 8.
>   */
> -#if _Q_PENDING_BITS == 8
> -
>  struct __qspinlock {
>  	union {
>  		atomic_t val;
> -		struct {
>  #ifdef __LITTLE_ENDIAN
> +		u8	 locked;
> +		struct {
>  			u16	locked_pending;
>  			u16	tail;
> +		};
>  #else
> +		struct {
>  			u16	tail;
>  			u16	locked_pending;
> -#endif
>  		};
> +		struct {
> +			u8	reserved[3];
> +			u8	locked;
> +		};
> +#endif
>  	};
>  };
>  
> +#if _Q_PENDING_BITS == 8
>  /**
>   * clear_pending_set_locked - take ownership and clear the pending bit.
>   * @lock: Pointer to queue spinlock structure
> @@ -197,6 +206,19 @@ static __always_inline u32 xchg_tail(str
>  #endif /* _Q_PENDING_BITS == 8 */
>  
>  /**
> + * set_locked - Set the lock bit and own the lock

Full stop missing.

> + * @lock: Pointer to queue spinlock structure

Ditto.
> + *
> + * *,*,0 -> *,0,1
> + */
> +static __always_inline void set_locked(struct qspinlock *lock)
> +{
> +	struct __qspinlock *l = (void *)lock;
> +
> +	ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
> +}
> +
> +/**
>   * queue_spin_lock_slowpath - acquire the queue spinlock
>   * @lock: Pointer to queue spinlock structure
>   * @val: Current value of the queue spinlock 32-bit word
> @@ -328,10 +350,13 @@ void queue_spin_lock_slowpath(struct qsp
>  	/*
>  	 * we're at the head of the waitqueue, wait for the owner & pending to
>  	 * go away.
> +	 * Load-acquired is used here because the set_locked()
> +	 * function below may not be a full memory barrier.
>  	 *
>  	 * *,x,y -> *,0,0
>  	 */
> -	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_PENDING_MASK)
> +	while ((val = smp_load_acquire(&lock->val.counter)) &
> +			_Q_LOCKED_PENDING_MASK)
>  		cpu_relax();
>  
>  	/*
> @@ -339,15 +364,19 @@ void queue_spin_lock_slowpath(struct qsp
>  	 *
>  	 * n,0,0 -> 0,0,1 : lock, uncontended
>  	 * *,0,0 -> *,0,1 : lock, contended
> +	 *
> +	 * If the queue head is the only one in the queue (lock value == tail),
> +	 * clear the tail code and grab the lock. Otherwise, we only need
> +	 * to grab the lock.
>  	 */
>  	for (;;) {
> -		new = _Q_LOCKED_VAL;
> -		if (val != tail)
> -			new |= val;
> -
> -		old = atomic_cmpxchg(&lock->val, val, new);
> -		if (old == val)
> +		if (val != tail) {
> +			set_locked(lock);
>  			break;
> +		}
> +		old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
> +		if (old == val)
> +			goto release;	/* No contention */
>  
>  		val = old;
>  	}
> @@ -355,12 +384,10 @@ void queue_spin_lock_slowpath(struct qsp
>  	/*
>  	 * contended path; wait for next, release.
>  	 */
> -	if (new != _Q_LOCKED_VAL) {
> -		while (!(next = ACCESS_ONCE(node->next)))
> -			cpu_relax();
> +	while (!(next = ACCESS_ONCE(node->next)))
> +		cpu_relax();
>  
> -		arch_mcs_spin_unlock_contended(&next->locked);
> -	}
> +	arch_mcs_spin_unlock_contended(&next->locked);
>  
>  release:
>  	/*
> 
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 07/11] qspinlock: Use a simple write to grab the lock, if applicable
@ 2014-06-18 16:36     ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-18 16:36 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, linux-arch, riel, gleb, kvm, boris.ostrovsky,
	scott.norton, raghavendra.kt, paolo.bonzini, linux-kernel,
	virtualization, Peter Zijlstra, chegu_vinod, david.vrabel, oleg,
	xen-devel, tglx, paulmck, torvalds, mingo

On Sun, Jun 15, 2014 at 02:47:04PM +0200, Peter Zijlstra wrote:
> From: Waiman Long <Waiman.Long@hp.com>
> 
> Currently, atomic_cmpxchg() is used to get the lock. However, this is
> not really necessary if there is more than one task in the queue and
> the queue head doesn't need to reset the queue code word. For that case,

s/queue code word/tail {number,value}/ ?


> a simple write to set the lock bit is enough as the queue head will
> be the only one eligible to get the lock as long as it checks that
> both the lock and pending bits are not set. The current pending bit
> waiting code will ensure that the bit will not be set as soon as the
> queue code word (tail) in the lock is set.

Just use the same word as above.
> 
> With that change, the are some slight improvement in the performance
> of the queue spinlock in the 5M loop micro-benchmark run on a 4-socket
> Westere-EX machine as shown in the tables below.
> 
> 		[Standalone/Embedded - same node]
>   # of tasks	Before patch	After patch	%Change
>   ----------	-----------	----------	-------
>        3	 2324/2321	2248/2265	 -3%/-2%
>        4	 2890/2896	2819/2831	 -2%/-2%
>        5	 3611/3595	3522/3512	 -2%/-2%
>        6	 4281/4276	4173/4160	 -3%/-3%
>        7	 5018/5001	4875/4861	 -3%/-3%
>        8	 5759/5750	5563/5568	 -3%/-3%
> 
> 		[Standalone/Embedded - different nodes]
>   # of tasks	Before patch	After patch	%Change
>   ----------	-----------	----------	-------
>        3	12242/12237	12087/12093	 -1%/-1%
>        4	10688/10696	10507/10521	 -2%/-2%
> 
> It was also found that this change produced a much bigger performance
> improvement in the newer IvyBridge-EX chip and was essentially to close
> the performance gap between the ticket spinlock and queue spinlock.
> 
> The disk workload of the AIM7 benchmark was run on a 4-socket
> Westmere-EX machine with both ext4 and xfs RAM disks at 3000 users
> on a 3.14 based kernel. The results of the test runs were:
> 
>                 AIM7 XFS Disk Test
>   kernel                 JPM    Real Time   Sys Time    Usr Time
>   -----                  ---    ---------   --------    --------
>   ticketlock            5678233    3.17       96.61       5.81
>   qspinlock             5750799    3.13       94.83       5.97
> 
>                 AIM7 EXT4 Disk Test
>   kernel                 JPM    Real Time   Sys Time    Usr Time
>   -----                  ---    ---------   --------    --------
>   ticketlock            1114551   16.15      509.72       7.11
>   qspinlock             2184466    8.24      232.99       6.01
> 
> The ext4 filesystem run had a much higher spinlock contention than
> the xfs filesystem run.
> 
> The "ebizzy -m" test was also run with the following results:
> 
>   kernel               records/s  Real Time   Sys Time    Usr Time
>   -----                ---------  ---------   --------    --------
>   ticketlock             2075       10.00      216.35       3.49
>   qspinlock              3023       10.00      198.20       4.80
> 
> Signed-off-by: Waiman Long <Waiman.Long@hp.com>
> Signed-off-by: Peter Zijlstra <peterz@infradead.org>
> ---
>  kernel/locking/qspinlock.c |   59 ++++++++++++++++++++++++++++++++-------------
>  1 file changed, 43 insertions(+), 16 deletions(-)
> 
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -93,24 +93,33 @@ static inline struct mcs_spinlock *decod
>   * By using the whole 2nd least significant byte for the pending bit, we
>   * can allow better optimization of the lock acquisition for the pending
>   * bit holder.
> + *
> + * This internal structure is also used by the set_locked function which
> + * is not restricted to _Q_PENDING_BITS == 8.
>   */
> -#if _Q_PENDING_BITS == 8
> -
>  struct __qspinlock {
>  	union {
>  		atomic_t val;
> -		struct {
>  #ifdef __LITTLE_ENDIAN
> +		u8	 locked;
> +		struct {
>  			u16	locked_pending;
>  			u16	tail;
> +		};
>  #else
> +		struct {
>  			u16	tail;
>  			u16	locked_pending;
> -#endif
>  		};
> +		struct {
> +			u8	reserved[3];
> +			u8	locked;
> +		};
> +#endif
>  	};
>  };
>  
> +#if _Q_PENDING_BITS == 8
>  /**
>   * clear_pending_set_locked - take ownership and clear the pending bit.
>   * @lock: Pointer to queue spinlock structure
> @@ -197,6 +206,19 @@ static __always_inline u32 xchg_tail(str
>  #endif /* _Q_PENDING_BITS == 8 */
>  
>  /**
> + * set_locked - Set the lock bit and own the lock

Full stop missing.

> + * @lock: Pointer to queue spinlock structure

Ditto.
> + *
> + * *,*,0 -> *,0,1
> + */
> +static __always_inline void set_locked(struct qspinlock *lock)
> +{
> +	struct __qspinlock *l = (void *)lock;
> +
> +	ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
> +}
> +
> +/**
>   * queue_spin_lock_slowpath - acquire the queue spinlock
>   * @lock: Pointer to queue spinlock structure
>   * @val: Current value of the queue spinlock 32-bit word
> @@ -328,10 +350,13 @@ void queue_spin_lock_slowpath(struct qsp
>  	/*
>  	 * we're at the head of the waitqueue, wait for the owner & pending to
>  	 * go away.
> +	 * Load-acquired is used here because the set_locked()
> +	 * function below may not be a full memory barrier.
>  	 *
>  	 * *,x,y -> *,0,0
>  	 */
> -	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_PENDING_MASK)
> +	while ((val = smp_load_acquire(&lock->val.counter)) &
> +			_Q_LOCKED_PENDING_MASK)
>  		cpu_relax();
>  
>  	/*
> @@ -339,15 +364,19 @@ void queue_spin_lock_slowpath(struct qsp
>  	 *
>  	 * n,0,0 -> 0,0,1 : lock, uncontended
>  	 * *,0,0 -> *,0,1 : lock, contended
> +	 *
> +	 * If the queue head is the only one in the queue (lock value == tail),
> +	 * clear the tail code and grab the lock. Otherwise, we only need
> +	 * to grab the lock.
>  	 */
>  	for (;;) {
> -		new = _Q_LOCKED_VAL;
> -		if (val != tail)
> -			new |= val;
> -
> -		old = atomic_cmpxchg(&lock->val, val, new);
> -		if (old == val)
> +		if (val != tail) {
> +			set_locked(lock);
>  			break;
> +		}
> +		old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
> +		if (old == val)
> +			goto release;	/* No contention */
>  
>  		val = old;
>  	}
> @@ -355,12 +384,10 @@ void queue_spin_lock_slowpath(struct qsp
>  	/*
>  	 * contended path; wait for next, release.
>  	 */
> -	if (new != _Q_LOCKED_VAL) {
> -		while (!(next = ACCESS_ONCE(node->next)))
> -			cpu_relax();
> +	while (!(next = ACCESS_ONCE(node->next)))
> +		cpu_relax();
>  
> -		arch_mcs_spin_unlock_contended(&next->locked);
> -	}
> +	arch_mcs_spin_unlock_contended(&next->locked);
>  
>  release:
>  	/*
> 
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 07/11] qspinlock: Use a simple write to grab the lock, if applicable
  2014-06-15 12:47   ` Peter Zijlstra
  (?)
@ 2014-06-18 16:36   ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-18 16:36 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, linux-arch, gleb, kvm, boris.ostrovsky,
	scott.norton, raghavendra.kt, paolo.bonzini, linux-kernel,
	virtualization, Peter Zijlstra, chegu_vinod, david.vrabel, oleg,
	xen-devel, tglx, paulmck, torvalds, mingo

On Sun, Jun 15, 2014 at 02:47:04PM +0200, Peter Zijlstra wrote:
> From: Waiman Long <Waiman.Long@hp.com>
> 
> Currently, atomic_cmpxchg() is used to get the lock. However, this is
> not really necessary if there is more than one task in the queue and
> the queue head don't need to reset the queue code word. For that case,

s/queue code word/tail {number,value}/ ?


> a simple write to set the lock bit is enough as the queue head will
> be the only one eligible to get the lock as long as it checks that
> both the lock and pending bits are not set. The current pending bit
> waiting code will ensure that the bit will not be set as soon as the
> queue code word (tail) in the lock is set.

Just use the same word as above.
> 
> With that change, the are some slight improvement in the performance
> of the queue spinlock in the 5M loop micro-benchmark run on a 4-socket
> Westere-EX machine as shown in the tables below.
> 
> 		[Standalone/Embedded - same node]
>   # of tasks	Before patch	After patch	%Change
>   ----------	-----------	----------	-------
>        3	 2324/2321	2248/2265	 -3%/-2%
>        4	 2890/2896	2819/2831	 -2%/-2%
>        5	 3611/3595	3522/3512	 -2%/-2%
>        6	 4281/4276	4173/4160	 -3%/-3%
>        7	 5018/5001	4875/4861	 -3%/-3%
>        8	 5759/5750	5563/5568	 -3%/-3%
> 
> 		[Standalone/Embedded - different nodes]
>   # of tasks	Before patch	After patch	%Change
>   ----------	-----------	----------	-------
>        3	12242/12237	12087/12093	 -1%/-1%
>        4	10688/10696	10507/10521	 -2%/-2%
> 
> It was also found that this change produced a much bigger performance
> improvement in the newer IvyBridge-EX chip and was essentially to close
> the performance gap between the ticket spinlock and queue spinlock.
> 
> The disk workload of the AIM7 benchmark was run on a 4-socket
> Westmere-EX machine with both ext4 and xfs RAM disks at 3000 users
> on a 3.14 based kernel. The results of the test runs were:
> 
>                 AIM7 XFS Disk Test
>   kernel                 JPM    Real Time   Sys Time    Usr Time
>   -----                  ---    ---------   --------    --------
>   ticketlock            5678233    3.17       96.61       5.81
>   qspinlock             5750799    3.13       94.83       5.97
> 
>                 AIM7 EXT4 Disk Test
>   kernel                 JPM    Real Time   Sys Time    Usr Time
>   -----                  ---    ---------   --------    --------
>   ticketlock            1114551   16.15      509.72       7.11
>   qspinlock             2184466    8.24      232.99       6.01
> 
> The ext4 filesystem run had a much higher spinlock contention than
> the xfs filesystem run.
> 
> The "ebizzy -m" test was also run with the following results:
> 
>   kernel               records/s  Real Time   Sys Time    Usr Time
>   -----                ---------  ---------   --------    --------
>   ticketlock             2075       10.00      216.35       3.49
>   qspinlock              3023       10.00      198.20       4.80
> 
> Signed-off-by: Waiman Long <Waiman.Long@hp.com>
> Signed-off-by: Peter Zijlstra <peterz@infradead.org>
> ---
>  kernel/locking/qspinlock.c |   59 ++++++++++++++++++++++++++++++++-------------
>  1 file changed, 43 insertions(+), 16 deletions(-)
> 
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -93,24 +93,33 @@ static inline struct mcs_spinlock *decod
>   * By using the whole 2nd least significant byte for the pending bit, we
>   * can allow better optimization of the lock acquisition for the pending
>   * bit holder.
> + *
> + * This internal structure is also used by the set_locked function which
> + * is not restricted to _Q_PENDING_BITS == 8.
>   */
> -#if _Q_PENDING_BITS == 8
> -
>  struct __qspinlock {
>  	union {
>  		atomic_t val;
> -		struct {
>  #ifdef __LITTLE_ENDIAN
> +		u8	 locked;
> +		struct {
>  			u16	locked_pending;
>  			u16	tail;
> +		};
>  #else
> +		struct {
>  			u16	tail;
>  			u16	locked_pending;
> -#endif
>  		};
> +		struct {
> +			u8	reserved[3];
> +			u8	locked;
> +		};
> +#endif
>  	};
>  };
>  
> +#if _Q_PENDING_BITS == 8
>  /**
>   * clear_pending_set_locked - take ownership and clear the pending bit.
>   * @lock: Pointer to queue spinlock structure
> @@ -197,6 +206,19 @@ static __always_inline u32 xchg_tail(str
>  #endif /* _Q_PENDING_BITS == 8 */
>  
>  /**
> + * set_locked - Set the lock bit and own the lock

Full stop missing.

> + * @lock: Pointer to queue spinlock structure

Ditto.
> + *
> + * *,*,0 -> *,0,1
> + */
> +static __always_inline void set_locked(struct qspinlock *lock)
> +{
> +	struct __qspinlock *l = (void *)lock;
> +
> +	ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
> +}
> +
> +/**
>   * queue_spin_lock_slowpath - acquire the queue spinlock
>   * @lock: Pointer to queue spinlock structure
>   * @val: Current value of the queue spinlock 32-bit word
> @@ -328,10 +350,13 @@ void queue_spin_lock_slowpath(struct qsp
>  	/*
>  	 * we're at the head of the waitqueue, wait for the owner & pending to
>  	 * go away.
> +	 * Load-acquired is used here because the set_locked()
> +	 * function below may not be a full memory barrier.
>  	 *
>  	 * *,x,y -> *,0,0
>  	 */
> -	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_PENDING_MASK)
> +	while ((val = smp_load_acquire(&lock->val.counter)) &
> +			_Q_LOCKED_PENDING_MASK)
>  		cpu_relax();
>  
>  	/*
> @@ -339,15 +364,19 @@ void queue_spin_lock_slowpath(struct qsp
>  	 *
>  	 * n,0,0 -> 0,0,1 : lock, uncontended
>  	 * *,0,0 -> *,0,1 : lock, contended
> +	 *
> +	 * If the queue head is the only one in the queue (lock value == tail),
> +	 * clear the tail code and grab the lock. Otherwise, we only need
> +	 * to grab the lock.
>  	 */
>  	for (;;) {
> -		new = _Q_LOCKED_VAL;
> -		if (val != tail)
> -			new |= val;
> -
> -		old = atomic_cmpxchg(&lock->val, val, new);
> -		if (old == val)
> +		if (val != tail) {
> +			set_locked(lock);
>  			break;
> +		}
> +		old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
> +		if (old == val)
> +			goto release;	/* No contention */
>  
>  		val = old;
>  	}
> @@ -355,12 +384,10 @@ void queue_spin_lock_slowpath(struct qsp
>  	/*
>  	 * contended path; wait for next, release.
>  	 */
> -	if (new != _Q_LOCKED_VAL) {
> -		while (!(next = ACCESS_ONCE(node->next)))
> -			cpu_relax();
> +	while (!(next = ACCESS_ONCE(node->next)))
> +		cpu_relax();
>  
> -		arch_mcs_spin_unlock_contended(&next->locked);
> -	}
> +	arch_mcs_spin_unlock_contended(&next->locked);
>  
>  release:
>  	/*
> 
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 08/11] qspinlock: Revert to test-and-set on hypervisors
  2014-06-15 12:47   ` Peter Zijlstra
@ 2014-06-18 16:40     ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-18 16:40 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

On Sun, Jun 15, 2014 at 02:47:05PM +0200, Peter Zijlstra wrote:
> When we detect a hypervisor (!paravirt, see later patches), revert to

Please spell out the name of the patches.

> a simple test-and-set lock to avoid the horrors of queue preemption.

Heheh.
> 
> Signed-off-by: Peter Zijlstra <peterz@infradead.org>
> ---
>  arch/x86/include/asm/qspinlock.h |   14 ++++++++++++++
>  include/asm-generic/qspinlock.h  |    7 +++++++
>  kernel/locking/qspinlock.c       |    3 +++
>  3 files changed, 24 insertions(+)
> 
> --- a/arch/x86/include/asm/qspinlock.h
> +++ b/arch/x86/include/asm/qspinlock.h
> @@ -1,6 +1,7 @@
>  #ifndef _ASM_X86_QSPINLOCK_H
>  #define _ASM_X86_QSPINLOCK_H
>  
> +#include <asm/cpufeature.h>
>  #include <asm-generic/qspinlock_types.h>
>  
>  #if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)
> @@ -20,6 +21,19 @@ static inline void queue_spin_unlock(str
>  
>  #endif /* !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE */
>  
> +#define virt_queue_spin_lock virt_queue_spin_lock
> +
> +static inline bool virt_queue_spin_lock(struct qspinlock *lock)
> +{
> +	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
> +		return false;
> +
> +	while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0)
> +		cpu_relax();
> +
> +	return true;
> +}
> +
>  #include <asm-generic/qspinlock.h>
>  
>  #endif /* _ASM_X86_QSPINLOCK_H */
> --- a/include/asm-generic/qspinlock.h
> +++ b/include/asm-generic/qspinlock.h
> @@ -98,6 +98,13 @@ static __always_inline void queue_spin_u
>  }
>  #endif
>  
> +#ifndef virt_queue_spin_lock
> +static __always_inline bool virt_queue_spin_lock(struct qspinlock *lock)
> +{
> +	return false;
> +}
> +#endif
> +
>  /*
>   * Initializier
>   */
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -247,6 +247,9 @@ void queue_spin_lock_slowpath(struct qsp
>  
>  	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
>  
> +	if (virt_queue_spin_lock(lock))
> +		return;
> +
>  	/*
>  	 * wait for in-progress pending->locked hand-overs
>  	 *
> 
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 09/11] pvqspinlock, x86: Rename paravirt_ticketlocks_enabled
  2014-06-15 12:47 ` [PATCH 09/11] pvqspinlock, x86: Rename paravirt_ticketlocks_enabled Peter Zijlstra
@ 2014-06-18 16:43     ` Konrad Rzeszutek Wilk
  2014-06-18 16:43     ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-18 16:43 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

On Sun, Jun 15, 2014 at 02:47:06PM +0200, Peter Zijlstra wrote:
> From: Waiman Long <Waiman.Long@hp.com>
> 
> This patch renames the paravirt_ticketlocks_enabled static key to a
> more generic paravirt_spinlocks_enabled name.
> 
> Signed-off-by: Waiman Long <Waiman.Long@hp.com>
> Signed-off-by: Peter Zijlstra <peterz@infradead.org>

Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

> ---
>  arch/x86/include/asm/spinlock.h      |    4 ++--
>  arch/x86/kernel/kvm.c                |    2 +-
>  arch/x86/kernel/paravirt-spinlocks.c |    4 ++--
>  arch/x86/xen/spinlock.c              |    2 +-
>  4 files changed, 6 insertions(+), 6 deletions(-)
> 
> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
> @@ -39,7 +39,7 @@
>  /* How long a lock should spin before we consider blocking */
>  #define SPIN_THRESHOLD	(1 << 15)
>  
> -extern struct static_key paravirt_ticketlocks_enabled;
> +extern struct static_key paravirt_spinlocks_enabled;
>  static __always_inline bool static_key_false(struct static_key *key);
>  
>  #ifdef CONFIG_QUEUE_SPINLOCK
> @@ -150,7 +150,7 @@ static inline void __ticket_unlock_slowp
>  static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
>  {
>  	if (TICKET_SLOWPATH_FLAG &&
> -	    static_key_false(&paravirt_ticketlocks_enabled)) {
> +	    static_key_false(&paravirt_spinlocks_enabled)) {
>  		arch_spinlock_t prev;
>  
>  		prev = *lock;
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -819,7 +819,7 @@ static __init int kvm_spinlock_init_jump
>  	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
>  		return 0;
>  
> -	static_key_slow_inc(&paravirt_ticketlocks_enabled);
> +	static_key_slow_inc(&paravirt_spinlocks_enabled);
>  	printk(KERN_INFO "KVM setup paravirtual spinlock\n");
>  
>  	return 0;
> --- a/arch/x86/kernel/paravirt-spinlocks.c
> +++ b/arch/x86/kernel/paravirt-spinlocks.c
> @@ -16,5 +16,5 @@ struct pv_lock_ops pv_lock_ops = {
>  };
>  EXPORT_SYMBOL(pv_lock_ops);
>  
> -struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;
> -EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
> +struct static_key paravirt_spinlocks_enabled = STATIC_KEY_INIT_FALSE;
> +EXPORT_SYMBOL(paravirt_spinlocks_enabled);
> --- a/arch/x86/xen/spinlock.c
> +++ b/arch/x86/xen/spinlock.c
> @@ -293,7 +293,7 @@ static __init int xen_init_spinlocks_jum
>  	if (!xen_domain())
>  		return 0;
>  
> -	static_key_slow_inc(&paravirt_ticketlocks_enabled);
> +	static_key_slow_inc(&paravirt_spinlocks_enabled);
>  	return 0;
>  }
>  early_initcall(xen_init_spinlocks_jump);
> 
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 10/11] qspinlock: Paravirt support
  2014-06-15 12:47   ` Peter Zijlstra
@ 2014-06-20 13:46     ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-20 13:46 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

On Sun, Jun 15, 2014 at 02:47:07PM +0200, Peter Zijlstra wrote:
> Add minimal paravirt support.
> 
> The code aims for minimal impact on the native case.

Woot!
> 
> On the lock side we add one jump label (asm_goto) and 4 paravirt
> callee saved calls that default to NOPs. The only effects are the
> extra NOPs and some pointless MOVs to accomodate the calling
> convention. No register spills happen because of this (x86_64).
> 
> On the unlock side we have one paravirt callee saved call, which
> defaults to the actual unlock sequence: "movb $0, (%rdi)" and a NOP.
> 
> The actual paravirt code comes in 3 parts;
> 
>  - init_node; this initializes the extra data members required for PV
>    state. PV state data is kept 1 cacheline ahead of the regular data.
> 
>  - link_and_wait_node/kick_node; these are paired with the regular MCS
>    queueing and are placed resp. before/after the paired MCS ops.
> 
>  - wait_head/queue_unlock; the interesting part here is finding the
>    head node to kick.
> 
> Tracking the head is done in two parts, firstly the pv_wait_head will
> store its cpu number in whichever node is pointed to by the tail part
> of the lock word. Secondly, pv_link_and_wait_node() will propagate the
> existing head from the old to the new tail node.

I dug into the code and I have some comments about it, but before
I post them I was wondering if you have any plans to run performance
tests against the PV ticketlock in normal and over-committed scenarios?

Looking at this with pen and paper, I see that, compared to the
PV ticketlock, the CPUs contending on the queue (they go through
pv_link_and_wait_node, then progress to pv_wait_head) go to sleep
twice and get woken up twice. With the PV ticketlock the contending
CPUs would only go to sleep once and be woken up once, when it was
their turn.
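
Schematically, for a single queued waiter (as I read it, so correct me
if I got the flow wrong):

  PV ticketlock                         PV qspinlock
  -------------                         ------------
  spin up to SPIN_THRESHOLD, halt()     pv_link_and_wait_node(): halt()
    <- kicked once, our ticket is up      <- kicked when we become queue head
  take the lock                         pv_wait_head(): halt() again
                                          <- kicked again by the unlocker
                                        take the lock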

That of course is the worst case scenario - where the CPU
that has the lock is taking forever to do its job and the
host is quite overcommitted.

Thanks!

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 11/11] qspinlock, kvm: Add paravirt support
  2014-06-15 12:47 ` [PATCH 11/11] qspinlock, kvm: Add paravirt support Peter Zijlstra
@ 2014-06-22 16:36     ` Raghavendra K T
  2014-06-22 16:36   ` Raghavendra K T
  1 sibling, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2014-06-22 16:36 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, konrad.wilk,
	boris.ostrovsky, paulmck, riel, torvalds, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod, Peter Zijlstra

On 06/15/2014 06:17 PM, Peter Zijlstra wrote:
> Signed-off-by: Peter Zijlstra<peterz@infradead.org>
> ---
[...]
> +
> +void kvm_wait(int *ptr, int val)
> +{
> +	unsigned long flags;
> +
> +	if (in_nmi())
> +		return;
> +
> +	/*
> +	 * Make sure an interrupt handler can't upset things in a
> +	 * partially setup state.
> +	 */

I am seeing a hang even with a 2 cpu guest (with the patches on top of 3.15-rc6).
Looking further with gdb, I see one cpu is stuck in native_halt with the
slowpath flag (_Q_LOCKED_SLOW) set when it was called.

(gdb) bt
#0  native_halt () at /test/master/arch/x86/include/asm/irqflags.h:55
#1  0xffffffff81033118 in halt (ptr=0xffffffff81eb0e58, val=524291) at 
/test/master/arch/x86/include/asm/paravirt.h:116
#2  kvm_wait (ptr=0xffffffff81eb0e58, val=524291) at 
arch/x86/kernel/kvm.c:835
#3  kvm_wait (ptr=0xffffffff81eb0e58, val=524291) at 
arch/x86/kernel/kvm.c:809
#4  0xffffffff810a2d8e in pv_wait (lock=0xffffffff81eb0e58) at 
/test/master/arch/x86/include/asm/paravirt.h:744
#5  __pv_wait_head (lock=0xffffffff81eb0e58) at 
kernel/locking/qspinlock.c:352

The value of the lock seems to be 524288 (0x80000 - so already unlocked?).
So apart from the races Waiman mentioned, are we also in need of an smp_mb()
here and/or in native_queue_unlock()?

Interestingly, I see the other cpu stuck at multi_cpu_stop().

(gdb) thr 1
[Switching to thread 1 (Thread 1)]#0  multi_cpu_stop 
(data=0xffff8802140d1da0) at kernel/stop_machine.c:192
192			if (msdata->state != curstate) {

Or is it that I am missing something?

Please let me know if the .config needs to be shared.

> +	local_irq_save(flags);
> +
> +	/*
> +	 * check again make sure it didn't become free while
> +	 * we weren't looking.
> +	 */
> +	if (ACCESS_ONCE(*ptr) != val)
> +		goto out;
> +
> +	/*
> +	 * halt until it's our turn and kicked. Note that we do safe halt
> +	 * for irq enabled case to avoid hang when lock info is overwritten
> +	 * in irq spinlock slowpath and no spurious interrupt occur to save us.
> +	 */
> +	if (arch_irqs_disabled_flags(flags))
> +		halt();
> +	else
> +		safe_halt();
> +
> +out:
> +	local_irq_restore(flags);
> +}
> +#endif /* QUEUE_SPINLOCK */


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
  2014-06-16 20:49     ` Konrad Rzeszutek Wilk
                       ` (4 preceding siblings ...)
  (?)
@ 2014-06-23 15:56     ` Peter Zijlstra
  2014-06-23 16:16         ` Konrad Rzeszutek Wilk
  2014-06-23 16:16       ` Konrad Rzeszutek Wilk
  -1 siblings, 2 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-06-23 15:56 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod

On Mon, Jun 16, 2014 at 04:49:18PM -0400, Konrad Rzeszutek Wilk wrote:
> > Index: linux-2.6/kernel/locking/mcs_spinlock.h
> > ===================================================================
> > --- linux-2.6.orig/kernel/locking/mcs_spinlock.h
> > +++ linux-2.6/kernel/locking/mcs_spinlock.h
> > @@ -17,6 +17,7 @@
> >  struct mcs_spinlock {
> >  	struct mcs_spinlock *next;
> >  	int locked; /* 1 if lock acquired */
> > +	int count;
> 
> This could use a comment.

like so?

	int count; /* nesting count, see qspinlock.c */


> > +static inline u32 encode_tail(int cpu, int idx)
> > +{
> > +	u32 tail;
> > +
> > +	tail  = (cpu + 1) << _Q_TAIL_CPU_OFFSET;
> > +	tail |= idx << _Q_TAIL_IDX_OFFSET; /* assume < 4 */
> 
> Should there an
> 
> ASSSERT (idx < 4)
> 
> just in case we screw up somehow (I can't figure out how, but
> that is partially why ASSERTS are added).

#ifdef CONFIG_DEBUG_SPINLOCK
	BUG_ON(idx > 3);
#endif

might do, I suppose.

> > +/**
> > + * queue_spin_lock_slowpath - acquire the queue spinlock
> > + * @lock: Pointer to queue spinlock structure
> > + * @val: Current value of the queue spinlock 32-bit word
> > + *
> > + * (queue tail, lock bit)
> 
> Except it is not a lock bit. It is a lock uint8_t.

It is indeed, although that's an accident of implementation. I could do
s/bit// and not mention the entire storage angle at all?

> Is the queue tail at this point the composite of 'cpu|idx'?

Yes, as per {en,de}code_tail() above.
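
For reference, the decode side is roughly the inverse (a sketch from
memory, so the exact mask/offset names may be off; see the full patch):

	static inline struct mcs_spinlock *decode_tail(u32 tail)
	{
		int cpu = (tail >> _Q_TAIL_CPU_OFFSET) - 1; /* undo the +1 from encode_tail() */
		int idx = (tail & _Q_TAIL_IDX_MASK) >> _Q_TAIL_IDX_OFFSET;

		return per_cpu_ptr(&mcs_nodes[idx], cpu);   /* back to the mcs node */
	}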

> > + *
> > + *              fast      :    slow                                  :    unlock
> > + *                        :                                          :
> > + * uncontended  (0,0)   --:--> (0,1) --------------------------------:--> (*,0)
> > + *                        :       | ^--------.                    /  :
> > + *                        :       v           \                   |  :
> > + * uncontended            :    (n,x) --+--> (n,0)                 |  :
> 
> So many CPUn come in right? Is 'n' for the number of CPUs?

Nope, 'n' for any one specific tail, in particular the first one to
arrive. This is the 'uncontended queue' case as per the label, so we
need a named value for the first, in order to distinguish between the
state to the right (same tail, but unlocked) and the state below
(different tail).

> > + *   queue                :       | ^--'                          |  :
> > + *                        :       v                               |  :
> > + * contended              :    (*,x) --+--> (*,0) -----> (*,1) ---'  :
> > + *   queue                :         ^--'                             :
> 
> And here um, what are the '*' for? Are they the four different
> types of handlers that can be nested? So task, sofitrq, hardisk, and
> nmi?

'*' as in wildcard, any tail, specifically not 'n'.

> > +void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> > +{
> > +	struct mcs_spinlock *prev, *next, *node;
> > +	u32 new, old, tail;
> > +	int idx;
> > +
> > +	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
> > +
> > +	node = this_cpu_ptr(&mcs_nodes[0]);
> > +	idx = node->count++;
> 
> If this is the first time we enter this, wouldn't idx end up
> being 1?

Nope, postfix ++ returns first and increments later.

> > +	tail = encode_tail(smp_processor_id(), idx);
> > +
> > +	node += idx;
> 
> Meaning we end up skipping the 'mcs_nodes[0]' one altogether - even
> on the first 'level' (task, softirq, hardirq, nmi)? Won't that
> cause us to blow past the array when we are nested at the nmi
> handler?

Seeing how it's all static storage, which is automagically initialized to
0, combined with the postfix ++ (as opposed to the prefix ++), we should
be getting 0 here.
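
IOW the task-context slowpath always uses mcs_nodes[0]; only a nested
slowpath (softirq/hardirq/nmi hitting another contended lock) moves on
to [1..3]. A sketch, assuming the release path drops the count again:

	node = this_cpu_ptr(&mcs_nodes[0]);
	idx = node->count++;			/* 0,1,2,3 for successively nested slowpaths */
	tail = encode_tail(smp_processor_id(), idx);
	node += idx;				/* each nesting level gets its own node */

	/* ... queue, spin, acquire ... */

	this_cpu_dec(mcs_nodes[0].count);	/* assumed release-side counterpart */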

> > +	node->locked = 0;
> > +	node->next = NULL;
> > +
> > +	/*
> > +	 * trylock || xchg(lock, node)
> > +	 *
> > +	 * 0,0 -> 0,1 ; trylock
> > +	 * p,x -> n,x ; prev = xchg(lock, node)
> 
> I looked at that for 10 seconds and I was not sure what you meant.
> Is this related to the MCS document you had pointed to? It would help
> if you mention that the comments follow the document. (But they
> don't seem to)
> 
> I presume what you mean is that if we are the next after the
> lock-holder we need only to update the 'next' (or the
> composite value of smp_processor_idx | idx) to point to us.
> 
> As in, swap the 'L' with 'I' (looking at the doc)

They are the 'tail','lock' tuples, so this composite atomic operation
completes either:

  0,0 -> 0,1  -- we had no tail, not locked; into: no tail, locked.

OR

  p,x -> n,x  -- tail was p; into: tail is n; preserving locked.

> > +	 */
> > +	for (;;) {
> > +		new = _Q_LOCKED_VAL;
> > +		if (val)
> 
> Could you add a comment here, like this:
> 
> /*
>  * N.B. Initially 'val' will have some value (as we are called
>  * after the _Q_LOCKED_VAL could not be set by queue_spin_lock).
>  * But on subsequent iterations, either the lock holder will
>  * decrement the val (queue_spin_unlock - to zero) and we
>  * needn't to record our status in the queue as we have set the
>  * Q_LOCKED_VAL (new) and are the lock holder. Or we are next
>  * in line and need to record our 'next' (aka, smp_processor_id() | idx)
>  * position. */
>  */

The idea was that:

  0,0 -> 0,1
  p,x -> n,x

Completely covers what this composite atomic does.

> > +			new = tail | (val & _Q_LOCKED_MASK);
> > +
> > +		old = atomic_cmpxchg(&lock->val, val, new);
> > +		if (old == val)
> > +			break;
> > +
> > +		val = old;
> > +	}
> > +
> > +	/*
> > +	 * we won the trylock; forget about queueing.
> > +	 */
> > +	if (new == _Q_LOCKED_VAL)
> > +		goto release;
> > +
> > +	/*
> > +	 * if there was a previous node; link it and wait.
> > +	 */
> > +	if (old & ~_Q_LOCKED_MASK) {
> > +		prev = decode_tail(old);
> > +		ACCESS_ONCE(prev->next) = node;
> > +
> > +		arch_mcs_spin_lock_contended(&node->locked);
> > +	}
> > +
> > +	/*
> > +	 * we're at the head of the waitqueue, wait for the owner to go away.
> > +	 *
> > +	 * *,x -> *,0
> > +	 */
> > +	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
> > +		cpu_relax();
> > +
> > +	/*
> > +	 * claim the lock:
> > +	 *
> > +	 * n,0 -> 0,1 : lock, uncontended
> > +	 * *,0 -> *,1 : lock, contended
> > +	 */
> > +	for (;;) {
> > +		new = _Q_LOCKED_VAL;
> > +		if (val != tail)
> > +			new |= val;
> 
> You lost me here. If we are at the head of the queue, and the owner
> has called queue_spin_unlock (hence made us get out of the 'val = atomic_read'
> loop, how can val != tail?

Remember:

> > +	tail = encode_tail(smp_processor_id(), idx);

So if val != tail, that means the tail pointer doesn't point to us
anymore; another cpu/idx queued itself and is now last.

> I suspect it has something to do with the comment, but I am still unsure
> what it means.
> 
> Could you help a bit in explaining it in English please?

(Refer to the state diagram; if we count states left->right,
top->bottom, then this is: 5->2 or 7->8.)

 n,0 -> 0,1:

   the lock is free and the tail points to the first queued; this means
   that unqueueing implies wiping the tail, at the same time, acquire
   the lock.

 *,0 -> *,1:

   the lock is free and the tail doesn't point to the first queued; this
   means that unqueueing doesn't touch the tail pointer but only sets
   the lock.

> > +
> > +		old = atomic_cmpxchg(&lock->val, val, new);
> > +		if (old == val)
> > +			break;
> > +
> > +		val = old;
> > +	}
> > +
> > +	/*
> > +	 * contended path; wait for next, release.
> > +	 */
> > +	if (new != _Q_LOCKED_VAL) {
> 
> Hm, wouldn't it be just easier to do a 'goto restart' where
> restart label points at the first loop statement? Ah never
> mind - we have already inserted ourselves in the previous's
> node.
> 
> But that is confusing - we have done: "prev->next = node;"
> 
> And then exited out of 'val = atomic_read(&lock->val))' which
> suggests that queue_spin_unlock has called us. How can we be
> contended again?

We're not contended again; we're in the 'contended queue' case, which
means that 'tail' didn't point to us anymore. In that case, we must kick
our next node such that it will drop out of
arch_mcs_spin_lock_contended() and go wait on the 'locked' state.

So what we do here is wait for 'node->next' to be set; it might still be
NULL if the other cpu is between:

  prev = xchg(lock->tail, node);

and:

  prev->next = node;

Once we observe the next node, we call arch_mcs_spin_unlock_contended()
on it, which sets its mcs_spinlock::locked and makes the new 'top of
queue' drop out of arch_mcs_spin_lock_contended and spin on the 'locked'
state as said above.
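
As a timeline, that window is (schematically, with A the queue head and
B the newcomer):

	CPU A (queue head)                        CPU B (new tail)
	------------------                        ----------------
	                                          prev = xchg(lock->tail, B);
	tail != A, so we must kick our next
	next = ACCESS_ONCE(A->next);   /* NULL */
	cpu_relax(); retry ...
	                                          prev->next = B;
	next = ACCESS_ONCE(A->next);   /* B */
	arch_mcs_spin_unlock_contended(&next->locked);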

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
  2014-06-17 20:03       ` Konrad Rzeszutek Wilk
@ 2014-06-23 16:12         ` Peter Zijlstra
  -1 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-06-23 16:12 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod

On Tue, Jun 17, 2014 at 04:03:29PM -0400, Konrad Rzeszutek Wilk wrote:
> > > +			new = tail | (val & _Q_LOCKED_MASK);
> > > +
> > > +		old = atomic_cmpxchg(&lock->val, val, new);
> > > +		if (old == val)
> > > +			break;
> > > +
> > > +		val = old;
> > > +	}
> > > +
> > > +	/*
> > > +	 * we won the trylock; forget about queueing.
> > > +	 */
> > > +	if (new == _Q_LOCKED_VAL)
> > > +		goto release;
> > > +
> > > +	/*
> > > +	 * if there was a previous node; link it and wait.
> > > +	 */
> > > +	if (old & ~_Q_LOCKED_MASK) {
> > > +		prev = decode_tail(old);
> > > +		ACCESS_ONCE(prev->next) = node;
> > > +
> > > +		arch_mcs_spin_lock_contended(&node->locked);
> 
> Could you add a comment here:
> 
> /* We are spinning forever until the previous node updates locked - which
> it does once it has updated lock->val with our tail number. */

That's incorrect -- or at least, I understand that to be incorrect. The
previous node will not have changed the tail to point to us. You always
change the tail to point to yourself, seeing how you add yourself to the
tail.

Is the existing comment any better if I s/wait./wait for it to release
us./ ?
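
(For reference, with that substitution the quoted hunk would read:)

	/*
	 * if there was a previous node; link it and wait for it to
	 * release us.
	 */
	if (old & ~_Q_LOCKED_MASK) {
		prev = decode_tail(old);
		ACCESS_ONCE(prev->next) = node;

		arch_mcs_spin_lock_contended(&node->locked);
	}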

> > > +	/*
> > > +	 * claim the lock:
> > > +	 *
> > > +	 * n,0 -> 0,1 : lock, uncontended
> > > +	 * *,0 -> *,1 : lock, contended
> > > +	 */
> > > +	for (;;) {
> > > +		new = _Q_LOCKED_VAL;
> > > +		if (val != tail)
> > > +			new |= val;
> > 
> ..snip..
> > 
> > Could you help a bit in explaining it in English please?
> 
> After looking at the assembler code I finally figured out how
> we can get here. And the 'contended' part threw me off. Somehow
> I imagined there are two or more CPUs stampeding here and
> trying to update the lock->val. But in reality the other CPUs
> are stuck in the arch_mcs_spin_lock_contended spinning on their
> local value.

Well, the lock as a whole is contended (there's more than one waiter), and the
point of MCS style locks is to make sure they're not actually pounding
on the same cacheline. So the whole thing is consistent.
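
(The point being that each waiter spins on its own node -- a minimal
sketch of the idea; the smp_load_acquire() spelling is illustrative, not
necessarily what mcs_spinlock.h uses:)

	/* spin on our own mcs_spinlock::locked, on our own cacheline,
	 * until the predecessor releases us */
	while (!smp_load_acquire(&node->locked))
		cpu_relax();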

> Perhaps you could add this comment.
> 
> /* Once queue_spin_unlock is called (which _subtracts_ _Q_LOCKED_VAL from
> the lock->val while still preserving the tail data), the winner gets to
> claim the ticket. 

There's no tickets :/

> Since we still need the other CPUs to continue and
> preserve the strict ordering in which they set up node->next, we:
>  1) update lock->val to the tail value (so tail CPU and its index) with
>     _Q_LOCKED_VAL.

We don't, we preserve the tail value, unless we're the tail, in which
case we clear the tail.

>  2) Once we are done, we poke the other CPU (the one that linked to
>     us) by writing to node->locked (below) so they can make progress and
>     loop on lock->val changing from _Q_LOCKED_MASK to zero.

_If_ there was another cpu, i.e. the tail didn't point to us.
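
(Sketch of the unlock side as described in the proposed comment above:
drop the locked value, leave any tail bits in place. The barrier is
assumed here; the exact form is whatever the patch uses:)

	static inline void queue_spin_unlock(struct qspinlock *lock)
	{
		smp_mb();	/* order the critical section before the release */
		atomic_sub(_Q_LOCKED_VAL, &lock->val);
	}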

---

I don't do well with natural language comments like that; they tend to
confuse me more than anything.


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
  2014-06-23 15:56     ` Peter Zijlstra
@ 2014-06-23 16:16         ` Konrad Rzeszutek Wilk
  2014-06-23 16:16       ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-23 16:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod

On Mon, Jun 23, 2014 at 05:56:50PM +0200, Peter Zijlstra wrote:
> On Mon, Jun 16, 2014 at 04:49:18PM -0400, Konrad Rzeszutek Wilk wrote:
> > > Index: linux-2.6/kernel/locking/mcs_spinlock.h
> > > ===================================================================
> > > --- linux-2.6.orig/kernel/locking/mcs_spinlock.h
> > > +++ linux-2.6/kernel/locking/mcs_spinlock.h
> > > @@ -17,6 +17,7 @@
> > >  struct mcs_spinlock {
> > >  	struct mcs_spinlock *next;
> > >  	int locked; /* 1 if lock acquired */
> > > +	int count;
> > 
> > This could use a comment.
> 
> like so?
> 
> 	int count; /* nesting count, see qspinlock.c */

/* nested level -  in user, softirq, hard irq or nmi context. */ ?

> 
> 
> > > +static inline u32 encode_tail(int cpu, int idx)
> > > +{
> > > +	u32 tail;
> > > +
> > > +	tail  = (cpu + 1) << _Q_TAIL_CPU_OFFSET;
> > > +	tail |= idx << _Q_TAIL_IDX_OFFSET; /* assume < 4 */
> > 
> > Should there be an
> > 
> > ASSERT (idx < 4)
> > 
> > just in case we screw up somehow (I can't figure out how, but
> > that is partially why ASSERTS are added).
> 
> #ifdef CONFIG_DEBUG_SPINLOCK
> 	BUG_ON(idx > 3);
> #endif
> 
> might do, I suppose.

<nods>
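
(Folding that in, encode_tail() would look something like this -- the
body is the quoted one, the debug check is the suggestion above, and the
trailing return is implied:)

	static inline u32 encode_tail(int cpu, int idx)
	{
		u32 tail;

#ifdef CONFIG_DEBUG_SPINLOCK
		BUG_ON(idx > 3);
#endif
		tail  = (cpu + 1) << _Q_TAIL_CPU_OFFSET;
		tail |= idx << _Q_TAIL_IDX_OFFSET; /* assume < 4 */

		return tail;
	}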
> 
> > > +/**
> > > + * queue_spin_lock_slowpath - acquire the queue spinlock
> > > + * @lock: Pointer to queue spinlock structure
> > > + * @val: Current value of the queue spinlock 32-bit word
> > > + *
> > > + * (queue tail, lock bit)
> > 
> > Except it is not a lock bit. It is a lock uint8_t.
> 
> It is indeed, although that's an accident of implementation. I could do
> s/bit// and not mention the entire storage angle at all?

I think giving as much detail as possible is good.

What you said about it being an 'accident of implementation' could be woven
in there?
> 
> > Is the queue tail at this point the composite of 'cpu|idx'?
> 
> Yes, as per {en,de}code_tail() above.
> 
> > > + *
> > > + *              fast      :    slow                                  :    unlock
> > > + *                        :                                          :
> > > + * uncontended  (0,0)   --:--> (0,1) --------------------------------:--> (*,0)
> > > + *                        :       | ^--------.                    /  :
> > > + *                        :       v           \                   |  :
> > > + * uncontended            :    (n,x) --+--> (n,0)                 |  :
> > 
> > So many CPUn come in right? Is 'n' for the number of CPUs?
> 
> Nope, 'n' for any one specific tail, in particular the first one to
> arrive. This is the 'uncontended queue' case as per the label, so we
> need a named value for the first, in order to distinguish between the
> state to the right (same tail, but unlocked) and the state below
> (different tail).
> 
> > > + *   queue                :       | ^--'                          |  :
> > > + *                        :       v                               |  :
> > > + * contended              :    (*,x) --+--> (*,0) -----> (*,1) ---'  :
> > > + *   queue                :         ^--'                             :
> > 
> > And here um, what are the '*' for? Are they the four different
> > types of handlers that can be nested? So task, softirq, hardirq, and
> > nmi?
> 
> '*' as in wildcard, any tail, specifically not 'n'.

Ah, thank you for the explanation! Would it be possible to include
that in the comment please?

> 
> > > +void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> > > +{
> > > +	struct mcs_spinlock *prev, *next, *node;
> > > +	u32 new, old, tail;
> > > +	int idx;
> > > +
> > > +	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
> > > +
> > > +	node = this_cpu_ptr(&mcs_nodes[0]);
> > > +	idx = node->count++;
> > 
> > If this is the first time we enter this, wouldn't idx end up
> > being 1?
> 
> Nope, postfix ++ returns first and increments later.

<blushes> Yes it does.
> 
> > > +	tail = encode_tail(smp_processor_id(), idx);
> > > +
> > > +	node += idx;
> > 
> > Meaning we end up skipping the 'mcs_nodes[0]' one altogether - even
> > on the first 'level' (task, softirq, hardirq, nmi)? Won't that
> > cause us to blow past the array when we are nested at the nmi
> > handler?
> 
> Seeing how it's all static storage, which is automagically initialized to
> 0, combined with the postfix ++ (as opposed to the prefix ++) we should
> be getting 0 here.

I've no idea what I was thinking, but thank you for setting me straight.
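
(For the record, the storage in question is a small per-cpu array, one
node per context level; the declaration below is assumed from the
discussion rather than copied from the patch:)

	/* task, softirq, hardirq, nmi: at most 4 nested slowpath users */
	static DEFINE_PER_CPU_ALIGNED(struct mcs_spinlock, mcs_nodes[4]);

	node = this_cpu_ptr(&mcs_nodes[0]);
	idx = node->count++;	/* 0 on the first, task-level, entry */
	node += idx;		/* so mcs_nodes[0] is used first */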

> 
> > > +	node->locked = 0;
> > > +	node->next = NULL;
> > > +
> > > +	/*
> > > +	 * trylock || xchg(lock, node)
> > > +	 *
> > > +	 * 0,0 -> 0,1 ; trylock
> > > +	 * p,x -> n,x ; prev = xchg(lock, node)
> > 
> > I looked at that for 10 seconds and I was not sure what you meant.
> > Is this related to the MCS document you had pointed to? It would help
> > if you mention that the comments follow the document. (But they
> > don't seem to)
> > 
> > I presume what you mean is that if we are the next after the
> > lock-holder we need only to update the 'next' (or the
> > composite value of smp_processor_idx | idx) to point to us.
> > 
> > As in, swap the 'L' with 'I' (looking at the doc)
> 
> They are the 'tail','lock' tuples, so this composite atomic operation
> completes either:
> 
>   0,0 -> 0,1  -- we had no tail, not locked; into: no tail, locked.
> 
> OR
> 
>   p,x -> n,x  -- tail was p; into: tail is n; preserving locked.

Oh this is good!
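
(Annotated onto the quoted trylock/xchg loop, with nothing changed but
the comments:)

	for (;;) {
		new = _Q_LOCKED_VAL;	/* 0,0 -> 0,1 : trylock */
		if (val)		/* p,x -> n,x : we become the new tail */
			new = tail | (val & _Q_LOCKED_MASK);

		old = atomic_cmpxchg(&lock->val, val, new);
		if (old == val)
			break;

		val = old;		/* raced; retry */
	}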
> 
> > > +	 */
> > > +	for (;;) {
> > > +		new = _Q_LOCKED_VAL;
> > > +		if (val)
> > 
> > Could you add a comment here, like this:
> > 
> > /*
> >  * N.B. Initially 'val' will have some value (as we are called
> >  * after the _Q_LOCKED_VAL could not be set by queue_spin_lock).
> >  * But on subsequent iterations, either the lock holder will
> >  * decrement the val (queue_spin_unlock - to zero) and we
> >  * needn't record our status in the queue as we have set the
> >  * Q_LOCKED_VAL (new) and are the lock holder. Or we are next
> >  * in line and need to record our 'next' (aka, smp_processor_id() | idx)
> >  * position.
> >  */
> 
> The idea was that:
> 
>   0,0 -> 0,1
>   p,x -> n,x
> 
> Completely covers what this composite atomic does.
> 
> > > +			new = tail | (val & _Q_LOCKED_MASK);
> > > +
> > > +		old = atomic_cmpxchg(&lock->val, val, new);
> > > +		if (old == val)
> > > +			break;
> > > +
> > > +		val = old;
> > > +	}
> > > +
> > > +	/*
> > > +	 * we won the trylock; forget about queueing.
> > > +	 */
> > > +	if (new == _Q_LOCKED_VAL)
> > > +		goto release;
> > > +
> > > +	/*
> > > +	 * if there was a previous node; link it and wait.
> > > +	 */
> > > +	if (old & ~_Q_LOCKED_MASK) {
> > > +		prev = decode_tail(old);
> > > +		ACCESS_ONCE(prev->next) = node;
> > > +
> > > +		arch_mcs_spin_lock_contended(&node->locked);
> > > +	}
> > > +
> > > +	/*
> > > +	 * we're at the head of the waitqueue, wait for the owner to go away.
> > > +	 *
> > > +	 * *,x -> *,0
> > > +	 */
> > > +	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
> > > +		cpu_relax();
> > > +
> > > +	/*
> > > +	 * claim the lock:
> > > +	 *
> > > +	 * n,0 -> 0,1 : lock, uncontended
> > > +	 * *,0 -> *,1 : lock, contended
> > > +	 */
> > > +	for (;;) {
> > > +		new = _Q_LOCKED_VAL;
> > > +		if (val != tail)
> > > +			new |= val;
> > 
> > You lost me here. If we are at the head of the queue, and the owner
> > has called queue_spin_unlock (hence made us get out of the 'val = atomic_read'
> > loop), how can val != tail?
> 
> Remember:
> 
> > > +	tail = encode_tail(smp_processor_id(), idx);
> 
> So if value != tail, that means the tail pointer doesn't point to us
> anymore, another cpu/idx queued itself and is now last.
> 
> > I suspect it has something to do with the comment, but I am still unsure
> > what it means.
> > 
> > Could you help a bit in explaining it in English please?
> 
> (Refer to the state diagram: if we count states left->right,
> top->bottom, then this is 5->2 or 7->8.)
> 
>  n,0 -> 0,1:
> 
>    the lock is free and the tail points to the first queued; this means
>    that unqueueing wipes the tail and, at the same time, acquires
>    the lock.
> 
>  *,0 -> *,1:
> 
>    the lock is free and the tail doesn't point to the first queued; this
>    means that unqueueing doesn't touch the tail pointer but only sets
>    the lock.
> 
> > > +
> > > +		old = atomic_cmpxchg(&lock->val, val, new);
> > > +		if (old == val)
> > > +			break;
> > > +
> > > +		val = old;
> > > +	}
> > > +
> > > +	/*
> > > +	 * contended path; wait for next, release.
> > > +	 */
> > > +	if (new != _Q_LOCKED_VAL) {
> > 
> > Hm, wouldn't it be just easier to do a 'goto restart' where
> > restart label points at the first loop statement? Ah never
> > mind - we have already inserted ourselves in the previous's
> > node.
> > 
> > But that is confusing - we have done: "prev->next = node;"
> > 
> > And then exited out of 'val = atomic_read(&lock->val))' which
> > suggests that queue_spin_unlock has called us. How can we be
> > contended again?
> 
> We're not contended again; we're in the 'contended queued' case, which
> means that 'tail' didn't point to us anymore; in that case, we must kick
> our next node such that it will now drop out of
> arch_mcs_spin_lock_contended() and go wait on the 'locked' state.

<nods>
> 
> So what we do here is wait for 'node->next' to be set; it might still be
> NULL if the other cpu is between:
> 
>   prev = xchg(lock->tail, node);
> 
> and:
> 
>   prev->next = node;
> 
> Once we observe the next node, we call arch_mcs_spin_unlock_contended()
> on it, which sets its mcs_spinlock::locked and makes the new 'top of
> queue' drop out of arch_mcs_spin_lock_contended and spin on the 'locked'
> state as said above.

Thank you for your detailed explanation!

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
@ 2014-06-23 16:16         ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-23 16:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, linux-arch, riel, gleb, kvm, boris.ostrovsky,
	scott.norton, raghavendra.kt, paolo.bonzini, linux-kernel,
	virtualization, chegu_vinod, david.vrabel, oleg, xen-devel, tglx,
	paulmck, torvalds, mingo

On Mon, Jun 23, 2014 at 05:56:50PM +0200, Peter Zijlstra wrote:
> On Mon, Jun 16, 2014 at 04:49:18PM -0400, Konrad Rzeszutek Wilk wrote:
> > > Index: linux-2.6/kernel/locking/mcs_spinlock.h
> > > ===================================================================
> > > --- linux-2.6.orig/kernel/locking/mcs_spinlock.h
> > > +++ linux-2.6/kernel/locking/mcs_spinlock.h
> > > @@ -17,6 +17,7 @@
> > >  struct mcs_spinlock {
> > >  	struct mcs_spinlock *next;
> > >  	int locked; /* 1 if lock acquired */
> > > +	int count;
> > 
> > This could use a comment.
> 
> like so?
> 
> 	int count; /* nesting count, see qspinlock.c */

/* nested level -  in user, softirq, hard irq or nmi context. */ ?

> 
> 
> > > +static inline u32 encode_tail(int cpu, int idx)
> > > +{
> > > +	u32 tail;
> > > +
> > > +	tail  = (cpu + 1) << _Q_TAIL_CPU_OFFSET;
> > > +	tail |= idx << _Q_TAIL_IDX_OFFSET; /* assume < 4 */
> > 
> > Should there an
> > 
> > ASSSERT (idx < 4)
> > 
> > just in case we screw up somehow (I can't figure out how, but
> > that is partially why ASSERTS are added).
> 
> #ifdef CONFIG_DEBUG_SPINLOCK
> 	BUG_ON(idx > 3);
> #endif
> 
> might do, I suppose.

<nods>
> 
> > > +/**
> > > + * queue_spin_lock_slowpath - acquire the queue spinlock
> > > + * @lock: Pointer to queue spinlock structure
> > > + * @val: Current value of the queue spinlock 32-bit word
> > > + *
> > > + * (queue tail, lock bit)
> > 
> > Except it is not a lock bit. It is a lock uint8_t.
> 
> It is indeed, although that's an accident of implementation. I could do
> s/bit// and not mention the entire storage angle at all?

I think giving as much details as possible is good.

What you said 'accident of implementation' is a could be woven
in there?
> 
> > Is the queue tail at this point the composite of 'cpu|idx'?
> 
> Yes, as per {en,de}code_tail() above.
> 
> > > + *
> > > + *              fast      :    slow                                  :    unlock
> > > + *                        :                                          :
> > > + * uncontended  (0,0)   --:--> (0,1) --------------------------------:--> (*,0)
> > > + *                        :       | ^--------.                    /  :
> > > + *                        :       v           \                   |  :
> > > + * uncontended            :    (n,x) --+--> (n,0)                 |  :
> > 
> > So many CPUn come in right? Is 'n' for the number of CPUs?
> 
> Nope, 'n' for any one specific tail, in particular the first one to
> arrive. This is the 'uncontended queue' case as per the label, so we
> need a named value for the first, in order to distinguish between the
> state to the right (same tail, but unlocked) and the state below
> (different tail).
> 
> > > + *   queue                :       | ^--'                          |  :
> > > + *                        :       v                               |  :
> > > + * contended              :    (*,x) --+--> (*,0) -----> (*,1) ---'  :
> > > + *   queue                :         ^--'                             :
> > 
> > And here um, what are the '*' for? Are they the four different
> > types of handlers that can be nested? So task, sofitrq, hardisk, and
> > nmi?
> 
> '*' as in wildcard, any tail, specifically not 'n'.

Ah, thank you for the explanation! Would it be possible to include
that in the comment please?

> 
> > > +void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> > > +{
> > > +	struct mcs_spinlock *prev, *next, *node;
> > > +	u32 new, old, tail;
> > > +	int idx;
> > > +
> > > +	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
> > > +
> > > +	node = this_cpu_ptr(&mcs_nodes[0]);
> > > +	idx = node->count++;
> > 
> > If this is the first time we enter this, wouldn't idx end up
> > being 1?
> 
> Nope, postfix ++ returns first and increments later.

<blushes> Yes it does.
> 
> > > +	tail = encode_tail(smp_processor_id(), idx);
> > > +
> > > +	node += idx;
> > 
> > Meaning we end up skipping the 'mcs_nodes[0]' one altogether - even
> > on the first 'level' (task, softirq, hardirq, nmi)? Won't that
> > cause us to blow past the array when we are nested at the nmi
> > handler?
> 
> Seeing how its all static storage, which is automagically initialized to
> 0, combined with the postfix ++ (as opposed to the prefix ++) we should
> be getting 0 here.

I've no idea what I was thinking, but thank you for setting me straight.

> 
> > > +	node->locked = 0;
> > > +	node->next = NULL;
> > > +
> > > +	/*
> > > +	 * trylock || xchg(lock, node)
> > > +	 *
> > > +	 * 0,0 -> 0,1 ; trylock
> > > +	 * p,x -> n,x ; prev = xchg(lock, node)
> > 
> > I looked at that for 10 seconds and I was not sure what you meant.
> > Is this related to the MCS document you had pointed to? It would help
> > if you mention that the comments follow the document. (But they
> > don't seem to)
> > 
> > I presume what you mean is that if we are the next after the
> > lock-holder we need only to update the 'next' (or the
> > composite value of smp_processor_idx | idx) to point to us.
> > 
> > As in, swap the 'L' with 'I' (looking at the doc)
> 
> They are the 'tail','lock' tuples, so this composite atomic operation
> completes either:
> 
>   0,0 -> 0,1  -- we had no tail, not locked; into: no tail, locked.
> 
> OR
> 
>   p,x -> n,x  -- tail was p; into: tail is n; preserving locked.

Oh this is good!
> 
> > > +	 */
> > > +	for (;;) {
> > > +		new = _Q_LOCKED_VAL;
> > > +		if (val)
> > 
> > Could you add a comment here, like this:
> > 
> > /*
> >  * N.B. Initially 'val' will have some value (as we are called
> >  * after the _Q_LOCKED_VAL could not be set by queue_spin_lock).
> >  * But on subsequent iterations, either the lock holder will
> >  * decrement the val (queue_spin_unlock - to zero) and we
> >  * needn't to record our status in the queue as we have set the
> >  * Q_LOCKED_VAL (new) and are the lock holder. Or we are next
> >  * in line and need to record our 'next' (aka, smp_processor_id() | idx)
> >  * position. */
> >  */
> 
> The idea was that:
> 
>   0,0 -> 0,1
>   p,x -> n,x
> 
> Completely covers what this composite atomic does.
> 
> > > +			new = tail | (val & _Q_LOCKED_MASK);
> > > +
> > > +		old = atomic_cmpxchg(&lock->val, val, new);
> > > +		if (old == val)
> > > +			break;
> > > +
> > > +		val = old;
> > > +	}
> > > +
> > > +	/*
> > > +	 * we won the trylock; forget about queueing.
> > > +	 */
> > > +	if (new == _Q_LOCKED_VAL)
> > > +		goto release;
> > > +
> > > +	/*
> > > +	 * if there was a previous node; link it and wait.
> > > +	 */
> > > +	if (old & ~_Q_LOCKED_MASK) {
> > > +		prev = decode_tail(old);
> > > +		ACCESS_ONCE(prev->next) = node;
> > > +
> > > +		arch_mcs_spin_lock_contended(&node->locked);
> > > +	}
> > > +
> > > +	/*
> > > +	 * we're at the head of the waitqueue, wait for the owner to go away.
> > > +	 *
> > > +	 * *,x -> *,0
> > > +	 */
> > > +	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
> > > +		cpu_relax();
> > > +
> > > +	/*
> > > +	 * claim the lock:
> > > +	 *
> > > +	 * n,0 -> 0,1 : lock, uncontended
> > > +	 * *,0 -> *,1 : lock, contended
> > > +	 */
> > > +	for (;;) {
> > > +		new = _Q_LOCKED_VAL;
> > > +		if (val != tail)
> > > +			new |= val;
> > 
> > You lost me here. If we are at the head of the queue, and the owner
> > has called queue_spin_unlock (hence made us get out of the 'val = atomic_read'
> > loop, how can val != tail?
> 
> Remember:
> 
> > > +	tail = encode_tail(smp_processor_id(), idx);
> 
> So if value != tail, that means the tail pointer doesn't point to us
> anymore, another cpu/idx queued itself and is now last.
> 
> > I suspect it has something to do with the comment, but I am still unsure
> > what it means.
> > 
> > Could you help a bit in explaining it in English please?
> 
> (refer to the state diagram, if we count states left->right,
> top->bottom, then this is: 5->2 or 7->8
> 
>  n,0 -> 0,1:
> 
>    the lock is free and the tail points to the first queued; this means
>    that unqueueing implies wiping the tail, at the same time, acquire
>    the lock.
> 
>  *,0 -> *,1:
> 
>    the lock is free and the tail doesn't point to the first queued; this
>    means that unqueueing doesn't touch the tail pointer but only sets
>    the lock.
> 
> > > +
> > > +		old = atomic_cmpxchg(&lock->val, val, new);
> > > +		if (old == val)
> > > +			break;
> > > +
> > > +		val = old;
> > > +	}
> > > +
> > > +	/*
> > > +	 * contended path; wait for next, release.
> > > +	 */
> > > +	if (new != _Q_LOCKED_VAL) {
> > 
> > Hm, wouldn't it be just easier to do a 'goto restart' where
> > restart label points at the first loop statement? Ah never
> > mind - we have already inserted ourselves in the previous's
> > node.
> > 
> > But that is confusing - we have done: "prev->next = node;"
> > 
> > And then exited out of 'val = atomic_read(&lock->val))' which
> > suggests that queue_spin_unlock has called us. How can we be
> > contended again?
> 
> We're not contended again; we're in the 'contended queued' case, which
> means that 'tail' didn't point to us anymore, in that case, we must kick
> our next node such that it will now drop out of
> arch_mcs_spin_lock_contended() and goes wait on the 'locked' state.

<nods>
> 
> So what we do here is wait for 'node->next' to be set; it might still be
> NULL if the other cpu is between:
> 
>   prev = xchg(lock->tail, node);
> 
> and:
> 
>   prev->next = node;
> 
> Once we observe the next node, we call arch_mcs_spin_unlock_contended()
> on it, which sets its mcs_spinlock::locked and makes the new 'top of
> queue' drop out of arch_mcs_spin_lock_contended and spin on the 'locked'
> state as said above.

Thank you for your detailed explanation!

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
  2014-06-23 15:56     ` Peter Zijlstra
  2014-06-23 16:16         ` Konrad Rzeszutek Wilk
@ 2014-06-23 16:16       ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-23 16:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, linux-arch, gleb, kvm, boris.ostrovsky,
	scott.norton, raghavendra.kt, paolo.bonzini, linux-kernel,
	virtualization, chegu_vinod, david.vrabel, oleg, xen-devel, tglx,
	paulmck, torvalds, mingo

On Mon, Jun 23, 2014 at 05:56:50PM +0200, Peter Zijlstra wrote:
> On Mon, Jun 16, 2014 at 04:49:18PM -0400, Konrad Rzeszutek Wilk wrote:
> > > Index: linux-2.6/kernel/locking/mcs_spinlock.h
> > > ===================================================================
> > > --- linux-2.6.orig/kernel/locking/mcs_spinlock.h
> > > +++ linux-2.6/kernel/locking/mcs_spinlock.h
> > > @@ -17,6 +17,7 @@
> > >  struct mcs_spinlock {
> > >  	struct mcs_spinlock *next;
> > >  	int locked; /* 1 if lock acquired */
> > > +	int count;
> > 
> > This could use a comment.
> 
> like so?
> 
> 	int count; /* nesting count, see qspinlock.c */

/* nested level -  in user, softirq, hard irq or nmi context. */ ?

> 
> 
> > > +static inline u32 encode_tail(int cpu, int idx)
> > > +{
> > > +	u32 tail;
> > > +
> > > +	tail  = (cpu + 1) << _Q_TAIL_CPU_OFFSET;
> > > +	tail |= idx << _Q_TAIL_IDX_OFFSET; /* assume < 4 */
> > 
> > Should there an
> > 
> > ASSSERT (idx < 4)
> > 
> > just in case we screw up somehow (I can't figure out how, but
> > that is partially why ASSERTS are added).
> 
> #ifdef CONFIG_DEBUG_SPINLOCK
> 	BUG_ON(idx > 3);
> #endif
> 
> might do, I suppose.

<nods>
> 
> > > +/**
> > > + * queue_spin_lock_slowpath - acquire the queue spinlock
> > > + * @lock: Pointer to queue spinlock structure
> > > + * @val: Current value of the queue spinlock 32-bit word
> > > + *
> > > + * (queue tail, lock bit)
> > 
> > Except it is not a lock bit. It is a lock uint8_t.
> 
> It is indeed, although that's an accident of implementation. I could do
> s/bit// and not mention the entire storage angle at all?

I think giving as much details as possible is good.

What you said 'accident of implementation' is a could be woven
in there?
> 
> > Is the queue tail at this point the composite of 'cpu|idx'?
> 
> Yes, as per {en,de}code_tail() above.
> 
> > > + *
> > > + *              fast      :    slow                                  :    unlock
> > > + *                        :                                          :
> > > + * uncontended  (0,0)   --:--> (0,1) --------------------------------:--> (*,0)
> > > + *                        :       | ^--------.                    /  :
> > > + *                        :       v           \                   |  :
> > > + * uncontended            :    (n,x) --+--> (n,0)                 |  :
> > 
> > So many CPUn come in right? Is 'n' for the number of CPUs?
> 
> Nope, 'n' for any one specific tail, in particular the first one to
> arrive. This is the 'uncontended queue' case as per the label, so we
> need a named value for the first, in order to distinguish between the
> state to the right (same tail, but unlocked) and the state below
> (different tail).
> 
> > > + *   queue                :       | ^--'                          |  :
> > > + *                        :       v                               |  :
> > > + * contended              :    (*,x) --+--> (*,0) -----> (*,1) ---'  :
> > > + *   queue                :         ^--'                             :
> > 
> > And here um, what are the '*' for? Are they the four different
> > types of handlers that can be nested? So task, sofitrq, hardisk, and
> > nmi?
> 
> '*' as in wildcard, any tail, specifically not 'n'.

Ah, thank you for the explanation! Would it be possible to include
that in the comment please?

> 
> > > +void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> > > +{
> > > +	struct mcs_spinlock *prev, *next, *node;
> > > +	u32 new, old, tail;
> > > +	int idx;
> > > +
> > > +	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
> > > +
> > > +	node = this_cpu_ptr(&mcs_nodes[0]);
> > > +	idx = node->count++;
> > 
> > If this is the first time we enter this, wouldn't idx end up
> > being 1?
> 
> Nope, postfix ++ returns first and increments later.

<blushes> Yes it does.
> 
> > > +	tail = encode_tail(smp_processor_id(), idx);
> > > +
> > > +	node += idx;
> > 
> > Meaning we end up skipping the 'mcs_nodes[0]' one altogether - even
> > on the first 'level' (task, softirq, hardirq, nmi)? Won't that
> > cause us to blow past the array when we are nested at the nmi
> > handler?
> 
> Seeing how its all static storage, which is automagically initialized to
> 0, combined with the postfix ++ (as opposed to the prefix ++) we should
> be getting 0 here.

I've no idea what I was thinking, but thank you for setting me straight.

> 
> > > +	node->locked = 0;
> > > +	node->next = NULL;
> > > +
> > > +	/*
> > > +	 * trylock || xchg(lock, node)
> > > +	 *
> > > +	 * 0,0 -> 0,1 ; trylock
> > > +	 * p,x -> n,x ; prev = xchg(lock, node)
> > 
> > I looked at that for 10 seconds and I was not sure what you meant.
> > Is this related to the MCS document you had pointed to? It would help
> > if you mention that the comments follow the document. (But they
> > don't seem to)
> > 
> > I presume what you mean is that if we are the next after the
> > lock-holder we need only to update the 'next' (or the
> > composite value of smp_processor_idx | idx) to point to us.
> > 
> > As in, swap the 'L' with 'I' (looking at the doc)
> 
> They are the 'tail','lock' tuples, so this composite atomic operation
> completes either:
> 
>   0,0 -> 0,1  -- we had no tail, not locked; into: no tail, locked.
> 
> OR
> 
>   p,x -> n,x  -- tail was p; into: tail is n; preserving locked.

Oh this is good!
> 
> > > +	 */
> > > +	for (;;) {
> > > +		new = _Q_LOCKED_VAL;
> > > +		if (val)
> > 
> > Could you add a comment here, like this:
> > 
> > /*
> >  * N.B. Initially 'val' will have some value (as we are called
> >  * after the _Q_LOCKED_VAL could not be set by queue_spin_lock).
> >  * But on subsequent iterations, either the lock holder will
> >  * decrement the val (queue_spin_unlock - to zero) and we
> >  * needn't to record our status in the queue as we have set the
> >  * Q_LOCKED_VAL (new) and are the lock holder. Or we are next
> >  * in line and need to record our 'next' (aka, smp_processor_id() | idx)
> >  * position. */
> >  */
> 
> The idea was that:
> 
>   0,0 -> 0,1
>   p,x -> n,x
> 
> Completely covers what this composite atomic does.
> 
> > > +			new = tail | (val & _Q_LOCKED_MASK);
> > > +
> > > +		old = atomic_cmpxchg(&lock->val, val, new);
> > > +		if (old == val)
> > > +			break;
> > > +
> > > +		val = old;
> > > +	}
> > > +
> > > +	/*
> > > +	 * we won the trylock; forget about queueing.
> > > +	 */
> > > +	if (new == _Q_LOCKED_VAL)
> > > +		goto release;
> > > +
> > > +	/*
> > > +	 * if there was a previous node; link it and wait.
> > > +	 */
> > > +	if (old & ~_Q_LOCKED_MASK) {
> > > +		prev = decode_tail(old);
> > > +		ACCESS_ONCE(prev->next) = node;
> > > +
> > > +		arch_mcs_spin_lock_contended(&node->locked);
> > > +	}
> > > +
> > > +	/*
> > > +	 * we're at the head of the waitqueue, wait for the owner to go away.
> > > +	 *
> > > +	 * *,x -> *,0
> > > +	 */
> > > +	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
> > > +		cpu_relax();
> > > +
> > > +	/*
> > > +	 * claim the lock:
> > > +	 *
> > > +	 * n,0 -> 0,1 : lock, uncontended
> > > +	 * *,0 -> *,1 : lock, contended
> > > +	 */
> > > +	for (;;) {
> > > +		new = _Q_LOCKED_VAL;
> > > +		if (val != tail)
> > > +			new |= val;
> > 
> > You lost me here. If we are at the head of the queue, and the owner
> > has called queue_spin_unlock (hence made us get out of the 'val = atomic_read'
> > loop, how can val != tail?
> 
> Remember:
> 
> > > +	tail = encode_tail(smp_processor_id(), idx);
> 
> So if value != tail, that means the tail pointer doesn't point to us
> anymore, another cpu/idx queued itself and is now last.
> 
> > I suspect it has something to do with the comment, but I am still unsure
> > what it means.
> > 
> > Could you help a bit in explaining it in English please?
> 
> (refer to the state diagram, if we count states left->right,
> top->bottom, then this is: 5->2 or 7->8
> 
>  n,0 -> 0,1:
> 
>    the lock is free and the tail points to the first queued; this means
>    that unqueueing implies wiping the tail, at the same time, acquire
>    the lock.
> 
>  *,0 -> *,1:
> 
>    the lock is free and the tail doesn't point to the first queued; this
>    means that unqueueing doesn't touch the tail pointer but only sets
>    the lock.
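
For reference, a self-contained sketch of how encode_tail()/decode_tail()
might pack a (cpu, idx) pair; the bit widths and the +1 bias are assumptions
of this sketch, not necessarily the kernel's exact layout:

#include <stdint.h>

#define TAIL_IDX_BITS	2u		/* task/softirq/hardirq/nmi */
#define TAIL_IDX_MASK	((1u << TAIL_IDX_BITS) - 1)

static inline uint32_t encode_tail(unsigned int cpu, unsigned int idx)
{
	/* cpu + 1 so that tail == 0 can mean "no tail" */
	return ((cpu + 1) << TAIL_IDX_BITS) | (idx & TAIL_IDX_MASK);
}

static inline void decode_tail(uint32_t tail, unsigned int *cpu, unsigned int *idx)
{
	*cpu = (tail >> TAIL_IDX_BITS) - 1;
	*idx = tail & TAIL_IDX_MASK;
}
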
> 
> > > +
> > > +		old = atomic_cmpxchg(&lock->val, val, new);
> > > +		if (old == val)
> > > +			break;
> > > +
> > > +		val = old;
> > > +	}
> > > +
> > > +	/*
> > > +	 * contended path; wait for next, release.
> > > +	 */
> > > +	if (new != _Q_LOCKED_VAL) {
> > 
> > Hm, wouldn't it be just easier to do a 'goto restart' where the
> > restart label points at the first loop statement? Ah never
> > mind - we have already inserted ourselves into the previous
> > node's 'next'.
> > 
> > But that is confusing - we have done: "prev->next = node;"
> > 
> > And then exited out of 'val = atomic_read(&lock->val)' which
> > suggests that queue_spin_unlock has released us. How can we be
> > contended again?
> 
> We're not contended again; we're in the 'contended queued' case, which
> means that 'tail' didn't point to us anymore; in that case, we must kick
> our next node such that it will now drop out of
> arch_mcs_spin_lock_contended() and go wait on the 'locked' state.

<nods>
> 
> So what we do here is wait for 'node->next' to be set; it might still be
> NULL if the other cpu is between:
> 
>   prev = xchg(lock->tail, node);
> 
> and:
> 
>   prev->next = node;
> 
> Once we observe the next node, we call arch_mcs_spin_unlock_contended()
> on it, which sets its mcs_spinlock::locked and makes the new 'top of
> queue' drop out of arch_mcs_spin_lock_contended and spin on the 'locked'
> state as said above.
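
A minimal user-space model of that hand-off, with node layout and helper
name as illustrative stand-ins for the kernel's mcs_spinlock:

#include <stdatomic.h>
#include <stddef.h>

struct mcs_node {
	struct mcs_node *_Atomic next;
	_Atomic int locked;
};

static void release_successor(struct mcs_node *node)
{
	struct mcs_node *next;

	/* the successor may still be between xchg(lock->tail, node)
	 * and prev->next = node, so its 'next' link can lag behind */
	while (!(next = atomic_load(&node->next)))
		;	/* cpu_relax() in the kernel */

	/* lets the successor drop out of its arch_mcs_spin_lock_contended()
	 * equivalent and go spin on the lock word's 'locked' state */
	atomic_store(&next->locked, 1);
}
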

Thank you for your detailed explanation!

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
  2014-06-23 16:12         ` Peter Zijlstra
@ 2014-06-23 16:20           ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-23 16:20 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod

On Mon, Jun 23, 2014 at 06:12:00PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 17, 2014 at 04:03:29PM -0400, Konrad Rzeszutek Wilk wrote:
> > > > +			new = tail | (val & _Q_LOCKED_MASK);
> > > > +
> > > > +		old = atomic_cmpxchg(&lock->val, val, new);
> > > > +		if (old == val)
> > > > +			break;
> > > > +
> > > > +		val = old;
> > > > +	}
> > > > +
> > > > +	/*
> > > > +	 * we won the trylock; forget about queueing.
> > > > +	 */
> > > > +	if (new == _Q_LOCKED_VAL)
> > > > +		goto release;
> > > > +
> > > > +	/*
> > > > +	 * if there was a previous node; link it and wait.
> > > > +	 */
> > > > +	if (old & ~_Q_LOCKED_MASK) {
> > > > +		prev = decode_tail(old);
> > > > +		ACCESS_ONCE(prev->next) = node;
> > > > +
> > > > +		arch_mcs_spin_lock_contended(&node->locked);
> > 
> > Could you add a comment here:
> > 
> > /* We are spinning forever until the previous node updates locked - which
> > it does once it has updated lock->val with our tail number. */
> 
> That's incorrect -- or at least, I understand that to be incorrect. The
> previous node will not have changed the tail to point to us. You always
> change the tail to point to yourself, seeing how you add yourself to the
> tail.
> 
> Is the existing comment any better if I s/wait./wait for it to release
> us./ ?

Yes!
> 
> > > > +	/*
> > > > +	 * claim the lock:
> > > > +	 *
> > > > +	 * n,0 -> 0,1 : lock, uncontended
> > > > +	 * *,0 -> *,1 : lock, contended
> > > > +	 */
> > > > +	for (;;) {
> > > > +		new = _Q_LOCKED_VAL;
> > > > +		if (val != tail)
> > > > +			new |= val;
> > > 
> > ..snip..
> > > 
> > > Could you help a bit in explaining it in English please?
> > 
> > After looking at the assembler code I finally figured out how
> > we can get here. And the 'contended' part threw me off. Somehow
> > I imagined there are two or more CPUs stampeding here and 
> > trying to update the lock->val. But in reality the other CPUs
> > are stuck in the arch_mcs_spin_lock_contended spinning on their
> > local value.
> 
> Well, the lock as a whole is contended (there's >1 waiters), and the
> point of MCS style locks is to make sure they're not actually pounding
> on the same cacheline. So the whole thing is consistent.
> 
> > Perhaps you could add this comment.
> > 
> > /* Once queue_spin_unlock is called (which _subtracts_ _Q_LOCKED_VAL from
> > the lock->val while still preserving the tail data), the winner gets to
> > claim the ticket. 
> 
> There's no tickets :/

s/ticket/be first in line/ ?

> 
> > Since we still need the other CPUs to continue and
> > preserve the strict ordering in which they set up node->next, we:
> >  1) update lock->val to the tail value (so tail CPU and its index) with
> >     _Q_LOCKED_VAL.
> 
> We don't, we preserve the tail value, unless we're the tail, in which
> case we clear the tail.
> 
> >  2) Once we are done, we poke the other CPU (the one that linked to
> >     us) by writing to node->locked (below) so they can make progress and
> >     loop on lock->val changing from _Q_LOCKED_MASK to zero.
> 
> _If_ there was another cpu, ie. the tail didn't point to us.

<nods>
> 
> ---
> 
> I don't do well with natural language comments like that; they tend to
> confuse me more than anything.
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
  2014-06-17 20:05     ` Konrad Rzeszutek Wilk
@ 2014-06-23 16:26       ` Peter Zijlstra
  -1 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-06-23 16:26 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod

On Tue, Jun 17, 2014 at 04:05:31PM -0400, Konrad Rzeszutek Wilk wrote:
> > + * The basic principle of a queue-based spinlock can best be understood
> > + * by studying a classic queue-based spinlock implementation called the
> > + * MCS lock. The paper below provides a good description for this kind
> > + * of lock.
> > + *
> > + * http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf
> > + *
> > + * This queue spinlock implementation is based on the MCS lock, however to make
> > + * it fit the 4 bytes we assume spinlock_t to be, and preserve its existing
> > + * API, we must modify it some.
> > + *
> > + * In particular; where the traditional MCS lock consists of a tail pointer
> > + * (8 bytes) and needs the next pointer (another 8 bytes) of its own node to
> > + * unlock the next pending (next->locked), we compress both these: {tail,
> > + * next->locked} into a single u32 value.
> > + *
> > + * Since a spinlock disables recursion of its own context and there is a limit
> > + * to the contexts that can nest; namely: task, softirq, hardirq, nmi, we can
> > + * encode the tail as an index indicating this context and a cpu number.
> > + *
> > + * We can further change the first spinner to spin on a bit in the lock word
> > + * instead of its node; whereby avoiding the need to carry a node from lock to
> > + * unlock, and preserving API.
> 
> You also made changes (compared to the MCS) in that the unlock path is not
> spinning waiting for the successor and that the job of passing the lock
> is not done in the unlock path either.
> 
> Instead all of that is now done in the path of the lock acquirer logic. 
> 
> Could you update the comment to say that please?

I _think_ I know what you mean.. So that is actually implied by the last
paragraph, but I suppose I can make it explicit; something like:

  *
  * Another way to look at it is:
  *
  *  lock(tail,locked)
  *    struct mcs_spinlock node;
  *    mcs_spin_lock(tail, &node);
  *    test-and-set locked;
  *    mcs_spin_unlock(tail, &node);
  *
  *  unlock(tail,locked)
  *    clear locked
  *
  * Where we have compressed (tail,locked) into a single u32 word.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 03/11] qspinlock: Add pending bit
  2014-06-17 20:36     ` Konrad Rzeszutek Wilk
@ 2014-06-23 16:35       ` Peter Zijlstra
  -1 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-06-23 16:35 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod

On Tue, Jun 17, 2014 at 04:36:15PM -0400, Konrad Rzeszutek Wilk wrote:
> On Sun, Jun 15, 2014 at 02:47:00PM +0200, Peter Zijlstra wrote:
> > Because the qspinlock needs to touch a second cacheline; add a pending
> > bit and allow a single in-word spinner before we punt to the second
> > cacheline.
> 
> Could you add this in the description please:
> 
> And by second cacheline we mean the local 'node'. That is the:
> mcs_nodes[0] and mcs_nodes[idx]

Those should be the very same cacheline :), but yes, I can add something
like that.

> Perhaps it might be better then to split this in the header file
> as this is trying to not be a slowpath code - but rather - a
> pre-slow-path-lets-try-if-we can do another cmpxchg in case
> the unlocker has just unlocked itself.
> 
> So something like:
> 
> diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
> index e8a7ae8..29cc9c7 100644
> --- a/include/asm-generic/qspinlock.h
> +++ b/include/asm-generic/qspinlock.h
> @@ -75,11 +75,21 @@ extern void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val);
>   */
>  static __always_inline void queue_spin_lock(struct qspinlock *lock)
>  {
> -	u32 val;
> +	u32 val, new, old;
>  
>  	val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
>  	if (likely(val == 0))
>  		return;
> +
> +	/* One more attempt - but if we fail mark it as pending. */
> +	if (val == _Q_LOCKED_VAL) {
> +		new = _Q_LOCKED_VAL | _Q_PENDING_VAL;
> +
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == _Q_LOCKED_VAL) /* YEEY! */
> +			return;
> +		val = old;
> +	}
>  	queue_spin_lock_slowpath(lock, val);
>  }

I think that's too big for an inline function.
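
For illustration, a user-space sketch of how that pending-bit step could sit
at the top of the slowpath instead of the inline fast path; bit positions and
the single-attempt policy are assumptions of this model:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define LOCKED_VAL  (1u << 0)	/* stand-in for _Q_LOCKED_VAL  */
#define PENDING_VAL (1u << 8)	/* stand-in for _Q_PENDING_VAL */

/* single in-word spinner: 0,0,1 -> 0,1,1 then 0,1,0 -> 0,0,1 */
static bool try_pending(_Atomic uint32_t *lock)
{
	uint32_t val = LOCKED_VAL;

	/* claim the pending bit only if the lock is held and otherwise idle */
	if (!atomic_compare_exchange_strong(lock, &val,
					    LOCKED_VAL | PENDING_VAL))
		return false;		/* fall through to the queueing path */

	/* wait for the owner to drop the locked bit: *,1,1 -> *,1,0 */
	while (atomic_load(lock) & LOCKED_VAL)
		;			/* cpu_relax() in the kernel */

	/* clear pending, set locked, preserve any tail: *,1,0 -> *,0,1
	 * (unsigned wrap-around makes this add/subtract the right bits) */
	atomic_fetch_add(lock, LOCKED_VAL - PENDING_VAL);
	return true;
}
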

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
  2014-06-23 16:26       ` Peter Zijlstra
@ 2014-06-23 16:45         ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-06-23 16:45 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod

On Mon, Jun 23, 2014 at 06:26:22PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 17, 2014 at 04:05:31PM -0400, Konrad Rzeszutek Wilk wrote:
> > > + * The basic principle of a queue-based spinlock can best be understood
> > > + * by studying a classic queue-based spinlock implementation called the
> > > + * MCS lock. The paper below provides a good description for this kind
> > > + * of lock.
> > > + *
> > > + * http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf
> > > + *
> > > + * This queue spinlock implementation is based on the MCS lock, however to make
> > > + * it fit the 4 bytes we assume spinlock_t to be, and preserve its existing
> > > + * API, we must modify it some.
> > > + *
> > > + * In particular; where the traditional MCS lock consists of a tail pointer
> > > + * (8 bytes) and needs the next pointer (another 8 bytes) of its own node to
> > > + * unlock the next pending (next->locked), we compress both these: {tail,
> > > + * next->locked} into a single u32 value.
> > > + *
> > > + * Since a spinlock disables recursion of its own context and there is a limit
> > > + * to the contexts that can nest; namely: task, softirq, hardirq, nmi, we can
> > > + * encode the tail as an index indicating this context and a cpu number.
> > > + *
> > > + * We can further change the first spinner to spin on a bit in the lock word
> > > + * instead of its node; whereby avoiding the need to carry a node from lock to
> > > + * unlock, and preserving API.
> > 
> > You also made changes (compared to the MCS) in that the unlock path is not
> > spinning waiting for the successor and that the job of passing the lock
> > is not done in the unlock path either.
> > 
> > Instead all of that is now done in the path of the lock acquirer logic. 
> > 
> > Could you update the comment to say that please?
> 
> I _think_ I know what you mean.. So that is actually implied by the last

You do :-)

> paragraph, but I suppose I can make it explicit; something like:
> 
>   *
>   * Another way to look at it is:
>   *
>   *  lock(tail,locked)
>   *    struct mcs_spinlock node;
>   *    mcs_spin_lock(tail, &node);
>   *    test-and-set locked;
>   *    mcs_spin_unlock(tail, &node);
>   *
>   *  unlock(tail,locked)
>   *    clear locked
>   *
>   * Where we have compressed (tail,locked) into a single u32 word.
> 
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 03/11] qspinlock: Add pending bit
  2014-06-17 21:07         ` Konrad Rzeszutek Wilk
@ 2014-06-24  8:24           ` Peter Zijlstra
  -1 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-06-24  8:24 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Waiman Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod

On Tue, Jun 17, 2014 at 05:07:29PM -0400, Konrad Rzeszutek Wilk wrote:
> > We are trying to make the fastpath as simple as possible as it may be
> > inlined. The complexity of the queue spinlock is in the slowpath.
> 
> Sure, but then it shouldn't be called slowpath anymore as it is not
> slow.

It's common terminology to call the inline part the fast path and the
out-of-line call on failure the slow path.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 04/11] qspinlock: Extract out the exchange of tail code word
  2014-06-18 11:37       ` Paolo Bonzini
@ 2014-06-24 10:47         ` Peter Zijlstra
  -1 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-06-24 10:47 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Konrad Rzeszutek Wilk, Waiman.Long, tglx, mingo, linux-arch,
	linux-kernel, virtualization, xen-devel, kvm, paolo.bonzini,
	boris.ostrovsky, paulmck, riel, torvalds, raghavendra.kt,
	david.vrabel, oleg, gleb, scott.norton, chegu_vinod

On Wed, Jun 18, 2014 at 01:37:45PM +0200, Paolo Bonzini wrote:
> However, I *do* agree with you that it's simpler to just squash this patch
> into 01/11.

So I explicitly broke out these optimizations into separate patches so
that we can see them independently and agree they're idempotent wrt the
state machine.

The initial patches by Waiman were totally unreadable, partly because
the optimizations made the code terribly complex.

Luckily Waiman then dropped the most horrible optimizations (the
optimization for the very large nr_cpus case, where we cannot have a
pending byte), so the end result isn't quite as complex as it used to
be.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 05/11] qspinlock: Optimize for smaller NR_CPUS
  2014-06-18 15:57     ` Konrad Rzeszutek Wilk
@ 2014-07-07 14:33       ` Peter Zijlstra
  -1 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-07-07 14:33 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod

On Wed, Jun 18, 2014 at 11:57:30AM -0400, Konrad Rzeszutek Wilk wrote:
> On Sun, Jun 15, 2014 at 02:47:02PM +0200, Peter Zijlstra wrote:
> > From: Peter Zijlstra <peterz@infradead.org>
> > 
> > When we allow for a max NR_CPUS < 2^14 we can optimize the pending
> > wait-acquire and the xchg_tail() operations.
> > 
> > By growing the pending bit to a byte, we reduce the tail to 16bit.
> > This means we can use xchg16 for the tail part and do away with all
> > the repeated cmpxchg() operations.
> > 
> > This in turn allows us to unconditionally acquire; the locked state
> > as observed by the wait loops cannot change. And because both locked
> > and pending are now a full byte we can use simple stores for the
> > state transition, obviating one atomic operation entirely.
> 
> I have to ask - how much more performance do you get from this?
> 
> Is this extra atomic operation hurting that much?

It's not extra, it's a cmpxchg loop vs an unconditional xchg.

And yes, it's somewhat tedious to show, but on 4 socket systems you can
really see it make a difference. I'll try and run some numbers, I need
to reinstall the box.

(there were numbers in the previous threads, but you're right, I
should've put some in the Changelog).
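
A sketch of what the tail update can become once the tail fits in 16 bits;
the little-endian union layout and the names are assumptions of this model,
not the patch's exact code:

#include <stdatomic.h>
#include <stdint.h>

/* lock word model for NR_CPUS < 2^14: 8 bit locked, 8 bit pending,
 * 16 bit tail (little-endian layout assumed) */
union qsl_model {
	_Atomic uint32_t val;
	struct {
		_Atomic uint16_t locked_pending;
		_Atomic uint16_t tail;
	};
};

/* cmpxchg-free tail update: one unconditional 16-bit exchange.
 * 'tail' is already shifted into the upper half of the word. */
static inline uint32_t xchg_tail(union qsl_model *lock, uint32_t tail)
{
	return (uint32_t)atomic_exchange(&lock->tail,
					 (uint16_t)(tail >> 16)) << 16;
}
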

> >  /**
> >   * queue_spin_lock_slowpath - acquire the queue spinlock
> > @@ -173,8 +259,13 @@ void queue_spin_lock_slowpath(struct qsp
> >  	 * we're pending, wait for the owner to go away.
> >  	 *
> >  	 * *,1,1 -> *,1,0
> > +	 *
> > +	 * this wait loop must be a load-acquire such that we match the
> > +	 * store-release that clears the locked bit and create lock
> > +	 * sequentiality; this because not all clear_pending_set_locked()
> > +	 * implementations imply full barriers.
> >  	 */
> > -	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
> > +	while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
> 
> lock->val.counter? Ugh, all to deal with the 'int' -> 'u32' (or 'u64')

No, to do atomic_t -> int.

> Could you introduce a macro in atomic.h called 'atomic_read_raw' which
> would do this? Like this:

That would be worse I think. It looks like a function returning an
rvalue whereas we really want an lvalue.
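
As a side note on the wait-loop hunk quoted above, a user-space sketch of
the acquire/release pairing it relies on (bit values are illustrative):

#include <stdatomic.h>
#include <stdint.h>

#define LOCKED_MASK 0xffu	/* stand-in for _Q_LOCKED_MASK */
#define LOCKED_VAL  1u		/* stand-in for _Q_LOCKED_VAL  */

/* the waiter's load-acquire pairs with the unlocker's store-release
 * below, ordering the critical sections even when the intermediate
 * state transitions are not full barriers */
static inline void wait_for_owner(_Atomic uint32_t *lock)
{
	while (atomic_load_explicit(lock, memory_order_acquire) & LOCKED_MASK)
		;	/* cpu_relax() in the kernel */
}

static inline void unlock_release(_Atomic uint32_t *lock)
{
	atomic_fetch_sub_explicit(lock, LOCKED_VAL, memory_order_release);
}
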

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 05/11] qspinlock: Optimize for smaller NR_CPUS
  2014-06-18 11:39     ` Paolo Bonzini
@ 2014-07-07 14:35       ` Peter Zijlstra
  -1 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-07-07 14:35 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, konrad.wilk,
	boris.ostrovsky, paulmck, riel, torvalds, raghavendra.kt,
	david.vrabel, oleg, gleb, scott.norton, chegu_vinod

On Wed, Jun 18, 2014 at 01:39:52PM +0200, Paolo Bonzini wrote:
> Il 15/06/2014 14:47, Peter Zijlstra ha scritto:
> >
> >-	for (;;) {
> >-		new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
> >-
> >-		old = atomic_cmpxchg(&lock->val, val, new);
> >-		if (old == val)
> >-			break;
> >-
> >-		val = old;
> >-	}
> >+	clear_pending_set_locked(lock, val);
> > 	return;
> 
> 
> Might as well add clear_pending_set_locked already in patch 3.

Patch 4, if anywhere.
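
For what it's worth, a sketch of what clear_pending_set_locked() can reduce
to in the small-NR_CPUS layout, assuming adjacent locked and pending bytes
on a little-endian word; names and layout are this model's assumptions:

#include <stdatomic.h>
#include <stdint.h>

#define LOCKED_VAL 1u	/* stand-in for _Q_LOCKED_VAL */

union qsl_model {
	_Atomic uint32_t val;
	struct {
		_Atomic uint16_t locked_pending; /* locked byte + pending byte */
		_Atomic uint16_t tail;
	};
};

/* *,1,0 -> *,0,1: clear pending and set locked with one plain store,
 * leaving the tail half of the word untouched */
static inline void clear_pending_set_locked(union qsl_model *lock)
{
	atomic_store_explicit(&lock->locked_pending, (uint16_t)LOCKED_VAL,
			      memory_order_relaxed);
}
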

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 07/11] qspinlock: Use a simple write to grab the lock, if applicable
  2014-06-18 16:36     ` Konrad Rzeszutek Wilk
@ 2014-07-07 14:51       ` Peter Zijlstra
  -1 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-07-07 14:51 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod

On Wed, Jun 18, 2014 at 12:36:15PM -0400, Konrad Rzeszutek Wilk wrote:
> On Sun, Jun 15, 2014 at 02:47:04PM +0200, Peter Zijlstra wrote:
> > From: Waiman Long <Waiman.Long@hp.com>
> > 
> > Currently, atomic_cmpxchg() is used to get the lock. However, this is
> > not really necessary if there is more than one task in the queue and
> > the queue head don't need to reset the queue code word. For that case,
> 
> s/queue code word/tail {number,value}/ ?
> 
> 
> > a simple write to set the lock bit is enough as the queue head will
> > be the only one eligible to get the lock as long as it checks that
> > both the lock and pending bits are not set. The current pending bit
> > waiting code will ensure that the bit will not be set as soon as the
> > queue code word (tail) in the lock is set.
> 
> Just use the same word as above.

I made that:

Currently, atomic_cmpxchg() is used to get the lock. However, this is
not really necessary if there is more than one task in the queue and
the queue head doesn't need to reset the queue tail.

For that case, a simple write to set the lock byte is enough as the
queue head will be the only one eligible to get the lock as long as it
checks that both the lock and pending bits are not set. The current
pending bit waiting code will ensure that the bit will not be set as
soon as the queue tail is set.
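
Purely as illustration, a toy user-space model (C11 atomics, made-up names,
and the lock byte treated as a separate object for simplicity -- not the
kernel code) of the two ways the queue head can take the lock once it has
observed both the lock and pending bits clear:

#include <stdatomic.h>
#include <stdint.h>

#define LOCKED_VAL	1u

/*
 * Head is also the queue tail: it must clear the tail and set the lock
 * byte in one go, and the cmpxchg can fail if a new waiter queued up
 * meanwhile.
 */
static int head_take_lock_cmpxchg(_Atomic uint32_t *lockword, uint32_t val)
{
	return atomic_compare_exchange_strong(lockword, &val, LOCKED_VAL);
}

/*
 * More waiters are queued behind us: the tail stays as it is, so a plain
 * store of the lock byte suffices -- nobody else may take the lock while
 * locked == 0, pending == 0 and the tail points into the queue.  (Ordering
 * against the earlier acquire load of the lock word is assumed, as in the
 * patch.)
 */
static void head_take_lock_store(_Atomic uint8_t *locked_byte)
{
	atomic_store_explicit(locked_byte, LOCKED_VAL, memory_order_relaxed);
}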

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 05/11] qspinlock: Optimize for smaller NR_CPUS
  2014-07-07 14:35       ` Peter Zijlstra
                         ` (2 preceding siblings ...)
  (?)
@ 2014-07-07 15:08       ` Paolo Bonzini
  2014-07-07 15:35           ` Peter Zijlstra
  2014-07-07 15:35         ` Peter Zijlstra
  -1 siblings, 2 replies; 192+ messages in thread
From: Paolo Bonzini @ 2014-07-07 15:08 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, konrad.wilk,
	boris.ostrovsky, paulmck, riel, torvalds, raghavendra.kt,
	david.vrabel, oleg, gleb, scott.norton, chegu_vinod

Il 07/07/2014 16:35, Peter Zijlstra ha scritto:
> On Wed, Jun 18, 2014 at 01:39:52PM +0200, Paolo Bonzini wrote:
>> Il 15/06/2014 14:47, Peter Zijlstra ha scritto:
>>>
>>> -	for (;;) {
>>> -		new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
>>> -
>>> -		old = atomic_cmpxchg(&lock->val, val, new);
>>> -		if (old == val)
>>> -			break;
>>> -
>>> -		val = old;
>>> -	}
>>> +	clear_pending_set_locked(lock, val);
>>> 	return;
>>
>>
>> Might as well add clear_pending_set_locked already in patch 3.
>
> Patch 4, if anywhere.
>

This code is added in patch 3:

+	/*
+	 * take ownership and clear the pending bit.
+	 *
+	 * *,1,0 -> *,0,1
+	 */
+	for (;;) {
+		new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
+
+		old = atomic_cmpxchg(&lock->val, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+	return;

Unlike the change in patch 4, clear_pending_set_locked doesn't change 
how qspinlock moves from a state to the next.
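
For reference, adding the helper already in patch 3 would just mean
factoring that loop out; roughly (a sketch assembled from the hunks quoted
in this thread, not the actual diff):

static __always_inline void
clear_pending_set_locked(struct qspinlock *lock, u32 val)
{
	u32 old, new;

	/* take ownership and clear the pending bit: *,1,0 -> *,0,1 */
	for (;;) {
		new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;

		old = atomic_cmpxchg(&lock->val, val, new);
		if (old == val)
			break;

		val = old;
	}
}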

Paolo

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 10/11] qspinlock: Paravirt support
  2014-06-16 22:08   ` Waiman Long
@ 2014-07-07 15:20       ` Peter Zijlstra
  2014-06-18 12:03     ` Paolo Bonzini
                         ` (2 subsequent siblings)
  3 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-07-07 15:20 UTC (permalink / raw)
  To: Waiman Long
  Cc: tglx, mingo, linux-arch, linux-kernel, virtualization, xen-devel,
	kvm, paolo.bonzini, konrad.wilk, boris.ostrovsky, paulmck, riel,
	torvalds, raghavendra.kt, david.vrabel, oleg, gleb, scott.norton,
	chegu_vinod

On Mon, Jun 16, 2014 at 06:08:21PM -0400, Waiman Long wrote:
> On 06/15/2014 08:47 AM, Peter Zijlstra wrote:
> >+struct pv_node {
> >+	struct mcs_spinlock	mcs;
> >+	struct mcs_spinlock	__offset[3];
> >+	int cpu, head;
> >+};
> 
> I am wondering why you need the separate cpu and head variables. I thought
> one would be enough here. The wait code puts the cpu number in head, then the
> kick_cpu code kicks the one in cpu, which is just the cpu # of the tail.

The @cpu is the current cpu; the @head is the encoded pointer to the
queue head. They aren't necessarily the same.

The @head thing is not unlike your next pointer, just backwards.

> >+#define INVALID_HEAD	-1
> >+#define NO_HEAD		nr_cpu_ids
> >+
> 
> I think it is better to use a constant like -2 for NO_HEAD instead of an
> external variable.

Sure..

> >+void __pv_init_node(struct mcs_spinlock *node)
> >+{
> >+	struct pv_node *pn = (struct pv_node *)node;
> >+
> >+	BUILD_BUG_ON(sizeof(struct pv_node)>  5*sizeof(struct mcs_spinlock));
> >+
> >+	pn->cpu = smp_processor_id();
> >+	pn->head = INVALID_HEAD;
> >+}
> >+
> >+static inline struct pv_node *pv_decode_tail(u32 tail)
> >+{
> >+	return (struct pv_node *)decode_tail(tail);
> >+}
> >+
> >+void __pv_link_and_wait_node(u32 old, struct mcs_spinlock *node)
> >+{
> >+	struct pv_node *ppn, *pn = (struct pv_node *)node;
> >+	unsigned int count;
> >+
> >+	if (!(old&  _Q_TAIL_MASK)) {
> >+		pn->head = NO_HEAD;
> >+		return;
> >+	}
> >+
> >+	ppn = pv_decode_tail(old);
> >+	ACCESS_ONCE(ppn->mcs.next) = node;
> >+
> >+	while (ppn->head == INVALID_HEAD)
> >+		cpu_relax();
> >+
> >+	pn->head = ppn->head;
> 
> A race can happen here as pn->head can be changed to the head cpu by the
> head waiter while being changed by this function at the same time. It is
> safer to use cmpxchg to make sure that there is no accidental overwriting of
> the head CPU number.

Ok, so I'm not entirely sure I see the race, although it's entirely
possible; this is far too fragile. But I couldn't get rid of the race
with cmpxchg/xchg either.

So the idea is 'simple'; have link_and_wait propagate the 'head'
'pointer' from the old to the new tail, and have wait_head set the
'head' pointer on the current tail every time the top waiter goes to
sleep.

There's the obvious race where both happen at the same time and you're
not sure which 'head' 'pointer' won. To solve that what I did was:

init:
  INVALID_HEAD

link_and_wait:
  INVALID_HEAD -> pprev->head , NO_HEAD

wait_head:
  !INVALID_HEAD -> new head

This way wait_head must wait for link_and_wait to finish before writing
the new head value. Furthermore, if we race such that we obtained the
'old' tail and link_and_wait propagated the 'old' head to the 'new'
tail, wait_head will detect this by verifying the tail pointer after
writing the new head.

We don't need atomics here afaict, but we have wait loops, which of
course suck arse for virt :/

I'm not too fond of this scheme; but I thought I'd try and get rid of
that O(n) loop you had for finding the head, since we simply cannot assume
a 'small' number of vcpus.
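
As a sketch of that handoff (my reading of the scheme, with toy types and
hypothetical names; not the actual patch):

struct pv_node {
	struct pv_node	*next;
	int		cpu;	/* this vcpu */
	int		head;	/* cpu of the queue head, or INVALID/NO_HEAD */
};

#define INVALID_HEAD	-1	/* not yet known */
#define NO_HEAD		-2	/* known: this node is the queue head itself */

static void pv_link_and_wait_node(struct pv_node *prev, struct pv_node *node)
{
	if (!prev) {			/* no previous tail: we are the head */
		node->head = NO_HEAD;
		return;
	}

	prev->next = node;

	while (prev->head == INVALID_HEAD)	/* wait for prev to learn it */
		cpu_relax();

	node->head = prev->head;	/* propagate the head to the new tail */
}

/*
 * wait_head side: before the top waiter halts, it stores its cpu into the
 * current tail node's ->head (only once that node has left INVALID_HEAD),
 * then re-reads the lock's tail; if the tail moved underneath it, it
 * repeats the store on the new tail, which is what closes the race
 * described above.
 */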

> >+void __pv_queue_unlock(struct qspinlock *lock)
> >+{
> >+	int val = atomic_read(&lock->val);
> >+
> >+	native_queue_unlock(lock);
> >+
> >+	if (val&  _Q_LOCKED_SLOW)
> >+		___pv_kick_head(lock);
> >+}
> >+
> 
> Again a race can happen here between the reading and writing of the lock
> value. I can't think of a good way to do that without using cmpxchg.

Indeed so, xchg it is I suppose :/

> >@@ -358,6 +533,7 @@ void queue_spin_lock_slowpath(struct qsp
> >  	 *
> >  	 * *,x,y ->  *,0,0
> >  	 */
> >+	pv_wait_head(lock);
> >  	while ((val = smp_load_acquire(&lock->val.counter))&
> >  			_Q_LOCKED_PENDING_MASK)
> >  		cpu_relax();
> >@@ -391,6 +567,7 @@ void queue_spin_lock_slowpath(struct qsp
> >  		cpu_relax();
> >
> >  	arch_mcs_spin_unlock_contended(&next->locked);
> >+	pv_kick_node(next);
> 
> pv_kick_node is an expensive operation and it can significantly slow down
> the locking operation if we have to do it for every subsequent task in the
> queue.

You might by now have noticed that I don't particularly care too much
about (para)virt performance :-) Also, I'm very much trying to get
'simple' things working before making them more complex.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 10/11] qspinlock: Paravirt support
  2014-06-18 12:03       ` Paolo Bonzini
@ 2014-07-07 15:20         ` Peter Zijlstra
  -1 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-07-07 15:20 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Waiman Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, konrad.wilk,
	boris.ostrovsky, paulmck, riel, torvalds, raghavendra.kt,
	david.vrabel, oleg, gleb, scott.norton, chegu_vinod

On Wed, Jun 18, 2014 at 02:03:12PM +0200, Paolo Bonzini wrote:
> Il 17/06/2014 00:08, Waiman Long ha scritto:
> >>+void __pv_queue_unlock(struct qspinlock *lock)
> >>+{
> >>+	int val = atomic_read(&lock->val);
> >>+
> >>+	native_queue_unlock(lock);
> >>+
> >>+	if (val & _Q_LOCKED_SLOW)
> >>+		___pv_kick_head(lock);
> >>+}
> >>+
> >
> >Again a race can happen here between the reading and writing of the lock
> >value. I can't think of a good way to do that without using cmpxchg.
> 
> Could you just use xchg on the locked byte?

I'll have to, indeed. This is racy.
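
A toy model of what the xchg buys (made-up names; it assumes the slow flag
lives in the locked byte, which is what makes a single-byte xchg sufficient):

#include <stdatomic.h>
#include <stdint.h>

#define LOCKED_VAL	1u
#define LOCKED_SLOW	2u	/* set by a waiter before it halts */

static void pv_unlock(_Atomic uint8_t *locked_byte, void (*kick_head)(void))
{
	/*
	 * Releasing the lock and sampling the slow flag is now one atomic
	 * operation: a waiter that set LOCKED_SLOW before the xchg is
	 * guaranteed to be seen here, and one that sets it afterwards is
	 * expected to re-check the (now free) lock before halting.
	 */
	uint8_t old = atomic_exchange(locked_byte, 0);

	if (old & LOCKED_SLOW)
		kick_head();
}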

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 11/11] qspinlock, kvm: Add paravirt support
  2014-06-22 16:36     ` Raghavendra K T
@ 2014-07-07 15:23       ` Peter Zijlstra
  -1 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-07-07 15:23 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, konrad.wilk,
	boris.ostrovsky, paulmck, riel, torvalds, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod

On Sun, Jun 22, 2014 at 10:06:18PM +0530, Raghavendra K T wrote:
> On 06/15/2014 06:17 PM, Peter Zijlstra wrote:
> >Signed-off-by: Peter Zijlstra<peterz@infradead.org>
> >---
> [...]
> >+
> >+void kvm_wait(int *ptr, int val)
> >+{
> >+	unsigned long flags;
> >+
> >+	if (in_nmi())
> >+		return;
> >+
> >+	/*
> >+	 * Make sure an interrupt handler can't upset things in a
> >+	 * partially setup state.
> >+	 */
> 
> I am seeing a hang with even a 2 cpu guest (with patches on top of 3.15-rc6).
> Looking further with gdb I see one cpu is stuck in native_halt with the
> slowpath flag (_Q_LOCKED_SLOW) set when it was called.

Like I said in 0/n, I think, I only booted the kernel in kvm; I didn't
actually do anything with it.

It took me most of the day to figure out how to get paravirt working at
all, and I didn't feel like spending many more hours trying to figure out
how to make the crap thing do actual work.

But I'll see what I can do after we can 'conceptually' agree on the
paravirt patch.
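
For the record, the shape the wait side needs so a kick cannot be lost (a
sketch only, not the posted patch): the re-check of *ptr happens with
interrupts disabled, so a kick delivered as an interrupt can only terminate
the halt, never slip in between the check and the halt.

void kvm_wait_sketch(int *ptr, int val)
{
	unsigned long flags;

	if (in_nmi())
		return;

	local_irq_save(flags);

	/*
	 * Only halt if the word still has the value we went to sleep on;
	 * with interrupts off here, a kick sent after this check breaks us
	 * out of the halt below instead of being lost.
	 */
	if (ACCESS_ONCE(*ptr) == val)
		safe_halt();	/* sti; hlt -- the kick IPI wakes us */

	local_irq_restore(flags);
}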

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 10/11] qspinlock: Paravirt support
  2014-06-20 13:46     ` Konrad Rzeszutek Wilk
  (?)
@ 2014-07-07 15:27     ` Peter Zijlstra
  2014-07-15 14:23       ` Konrad Rzeszutek Wilk
  2014-07-15 14:23         ` Konrad Rzeszutek Wilk
  -1 siblings, 2 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-07-07 15:27 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod

On Fri, Jun 20, 2014 at 09:46:08AM -0400, Konrad Rzeszutek Wilk wrote:
> I dug in the code and I have some comments about it, but before
> I post them I was wondering if you have any plans to run any performance
> tests against the PV ticketlock with normal and over-committed scenarios?

I can barely boot a guest.. I'm not sure I can make them do anything
much at all yet. All this virt crap is totally painful.


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 05/11] qspinlock: Optimize for smaller NR_CPUS
  2014-07-07 15:08       ` Paolo Bonzini
@ 2014-07-07 15:35           ` Peter Zijlstra
  2014-07-07 15:35         ` Peter Zijlstra
  1 sibling, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2014-07-07 15:35 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, konrad.wilk,
	boris.ostrovsky, paulmck, riel, torvalds, raghavendra.kt,
	david.vrabel, oleg, gleb, scott.norton, chegu_vinod

On Mon, Jul 07, 2014 at 05:08:17PM +0200, Paolo Bonzini wrote:
> Il 07/07/2014 16:35, Peter Zijlstra ha scritto:
> >On Wed, Jun 18, 2014 at 01:39:52PM +0200, Paolo Bonzini wrote:
> >>Il 15/06/2014 14:47, Peter Zijlstra ha scritto:
> >>>
> >>>-	for (;;) {
> >>>-		new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
> >>>-
> >>>-		old = atomic_cmpxchg(&lock->val, val, new);
> >>>-		if (old == val)
> >>>-			break;
> >>>-
> >>>-		val = old;
> >>>-	}
> >>>+	clear_pending_set_locked(lock, val);
> >>>	return;
> >>
> >>
> >>Might as well add clear_pending_set_locked already in patch 3.
> >
> >Patch 4, if anywhere.
> >
> 
> This code is added in patch 3:
> 
> +	/*
> +	 * take ownership and clear the pending bit.
> +	 *
> +	 * *,1,0 -> *,0,1
> +	 */
> +	for (;;) {
> +		new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
> +
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == val)
> +			break;
> +
> +		val = old;
> +	}
> +	return;
> 
> Unlike the change in patch 4, clear_pending_set_locked doesn't change how
> qspinlock moves from a state to the next.

True, but it's where we start to break things up into smaller functions. And the
only reason we break them out is that we'll get different
implementations depending on NR_CPUS.

So we can view the breakout of xchg_tail and clear_and_set_pending as
preparatory work for introducing the NR_CPUS thing.
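
Concretely, the sort of split this buys -- a sketch of the two shapes of
xchg_tail, assuming the patch-5 layout where the tail gets its own 16 bits
when NR_CPUS < 16K (names are assumptions, not the literal patches):

#if _Q_PENDING_BITS == 8	/* NR_CPUS < 16K: tail is its own halfword */
static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
{
	struct __qspinlock *l = (void *)lock;

	/* publish ourselves as the new tail with a single xchg */
	return (u32)xchg(&l->tail, tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
}
#else				/* generic: preserve locked+pending by hand */
static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
{
	u32 old, new, val = atomic_read(&lock->val);

	for (;;) {
		new = (val & _Q_LOCKED_PENDING_MASK) | tail;
		old = atomic_cmpxchg(&lock->val, val, new);
		if (old == val)
			break;

		val = old;
	}
	return old;	/* callers only look at the old tail bits */
}
#endif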

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 05/11] qspinlock: Optimize for smaller NR_CPUS
  2014-07-07 15:35           ` Peter Zijlstra
@ 2014-07-07 16:10             ` Paolo Bonzini
  -1 siblings, 0 replies; 192+ messages in thread
From: Paolo Bonzini @ 2014-07-07 16:10 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, konrad.wilk,
	boris.ostrovsky, paulmck, riel, torvalds, raghavendra.kt,
	david.vrabel, oleg, gleb, scott.norton, chegu_vinod

Il 07/07/2014 17:35, Peter Zijlstra ha scritto:
>> > Unlike the change in patch 4, clear_pending_set_locked doesn't change how
>> > qspinlock moves from a state to the next.
> True, but its where we start to break up into smaller functions. And the
> only reason we break them out is because we'll get different
> implementations depending on NR_CPUS.
>
> So we can view the breakout of xchg_tail and clear_and_set_pending as
> preparatory work for introducing the NR_CPUS thing.

That's a different reasoning, but it's fine by me too.

Paolo

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH 10/11] qspinlock: Paravirt support
  2014-07-07 15:27     ` Peter Zijlstra
@ 2014-07-15 14:23         ` Konrad Rzeszutek Wilk
  2014-07-15 14:23         ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-07-15 14:23 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Waiman.Long, tglx, mingo, linux-arch, linux-kernel,
	virtualization, xen-devel, kvm, paolo.bonzini, boris.ostrovsky,
	paulmck, riel, torvalds, raghavendra.kt, david.vrabel, oleg,
	gleb, scott.norton, chegu_vinod

On Mon, Jul 07, 2014 at 05:27:34PM +0200, Peter Zijlstra wrote:
> On Fri, Jun 20, 2014 at 09:46:08AM -0400, Konrad Rzeszutek Wilk wrote:
> > I dug in the code and I have some comments about it, but before
> > I post them I was wondering if you have any plans to run any performance
> > tests against the PV ticketlock with normal and over-committed scenarios?
> 
> I can barely boot a guest.. I'm not sure I can make them do anything
> much at all yet. All this virt crap is totally painful.

HA!

The reason I asked about that is that, from a pen-and-paper view, it looks
suboptimal in the worst-case scenario compared to the PV ticketlock.

The 'worst case scenario' is when we over-commit (more virtual CPUs than
there are physical CPUs) or have to delay guests (the sum of all virtual
CPUs > physical CPUs and all of the guests are compiling kernels).

In those cases the PV ticketlock goes to sleep and gets woken up
once the ticket holder has finished. In the PV qspinlock we do
wake up the first in queue, but we also wake the next one in queue
so it can progress further. And so on.

Perhaps a better mechanism is to just ditch the queue part and utilize
the byte part, and under KVM and Xen just do bytelocking (since we
have 8 bits). For the PV halt/waking we can stash in the 'struct mcs'
the current lock that each CPU is waiting for, and the unlocker
can iterate over all of those and wake them all up. Perhaps make
the iteration random. Anyhow, that is how the old PV bytelock under
Xen worked (before 3.11) and it worked pretty well (it didn't
do it randomly though - it always started with 'for_each_online_cpu').
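
A sketch of that stash-and-scan idea (hypothetical names, pen-and-paper
only, to match the description rather than any posted code):

static DEFINE_PER_CPU(struct qspinlock *, pv_waiting_on);

static void pv_wait(struct qspinlock *lock)
{
	__this_cpu_write(pv_waiting_on, lock);
	/* re-check the lock, then halt until kicked */
	__this_cpu_write(pv_waiting_on, NULL);
}

static void pv_kick_waiters(struct qspinlock *lock)
{
	int cpu;

	/* old Xen bytelock style: scan everybody and kick whoever is
	 * parked on this lock (optionally starting at a random cpu) */
	for_each_online_cpu(cpu) {
		if (ACCESS_ONCE(per_cpu(pv_waiting_on, cpu)) == lock)
			kick_cpu(cpu);	/* hypothetical wakeup-IPI helper */
	}
}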

Squashing the ticketlock concept into qspinlock for PV looks
scary.

And as I said - this is all pen-and-paper - so it might be that this
'wake-up-go-sleep-on-the-queue' kick is actually not that bad?

Lastly - thank you for taking a stab at this.

^ permalink raw reply	[flat|nested] 192+ messages in thread

end of thread, other threads:[~2014-07-15 14:24 UTC | newest]

Thread overview: 192+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-06-15 12:46 [PATCH 00/11] qspinlock with paravirt support Peter Zijlstra
2014-06-15 12:46 ` Peter Zijlstra
2014-06-15 12:46 ` [PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock Peter Zijlstra
2014-06-15 12:46 ` Peter Zijlstra
2014-06-15 12:46   ` Peter Zijlstra
2014-06-16 20:49   ` Konrad Rzeszutek Wilk
2014-06-16 20:49   ` Konrad Rzeszutek Wilk
2014-06-16 20:49     ` Konrad Rzeszutek Wilk
2014-06-17 20:03     ` Konrad Rzeszutek Wilk
2014-06-17 20:03       ` Konrad Rzeszutek Wilk
2014-06-17 20:03       ` Konrad Rzeszutek Wilk
2014-06-17 20:03       ` Konrad Rzeszutek Wilk
2014-06-23 16:12       ` Peter Zijlstra
2014-06-23 16:12       ` Peter Zijlstra
2014-06-23 16:12         ` Peter Zijlstra
2014-06-23 16:20         ` Konrad Rzeszutek Wilk
2014-06-23 16:20         ` Konrad Rzeszutek Wilk
2014-06-23 16:20           ` Konrad Rzeszutek Wilk
2014-06-17 20:03     ` Konrad Rzeszutek Wilk
2014-06-23 15:56     ` Peter Zijlstra
2014-06-23 15:56     ` Peter Zijlstra
2014-06-23 15:56     ` Peter Zijlstra
2014-06-23 16:16       ` Konrad Rzeszutek Wilk
2014-06-23 16:16         ` Konrad Rzeszutek Wilk
2014-06-23 16:16       ` Konrad Rzeszutek Wilk
2014-06-17 20:05   ` Konrad Rzeszutek Wilk
2014-06-17 20:05     ` Konrad Rzeszutek Wilk
2014-06-17 20:05     ` Konrad Rzeszutek Wilk
2014-06-17 20:05     ` Konrad Rzeszutek Wilk
2014-06-23 16:26     ` Peter Zijlstra
2014-06-23 16:26       ` Peter Zijlstra
2014-06-23 16:45       ` Konrad Rzeszutek Wilk
2014-06-23 16:45         ` Konrad Rzeszutek Wilk
2014-06-23 16:45       ` Konrad Rzeszutek Wilk
2014-06-23 16:26     ` Peter Zijlstra
2014-06-17 20:05   ` Konrad Rzeszutek Wilk
2014-06-15 12:46 ` [PATCH 02/11] qspinlock, x86: Enable x86-64 to use " Peter Zijlstra
2014-06-15 12:46 ` Peter Zijlstra
2014-06-15 12:46 ` Peter Zijlstra
2014-06-15 12:47 ` [PATCH 03/11] qspinlock: Add pending bit Peter Zijlstra
2014-06-15 12:47 ` Peter Zijlstra
2014-06-15 12:47   ` Peter Zijlstra
2014-06-17 20:36   ` Konrad Rzeszutek Wilk
2014-06-17 20:36   ` Konrad Rzeszutek Wilk
2014-06-17 20:36     ` Konrad Rzeszutek Wilk
2014-06-17 20:51     ` Waiman Long
2014-06-17 20:51     ` Waiman Long
2014-06-17 20:51       ` Waiman Long
2014-06-17 21:07       ` Konrad Rzeszutek Wilk
2014-06-17 21:07       ` Konrad Rzeszutek Wilk
2014-06-17 21:07         ` Konrad Rzeszutek Wilk
2014-06-17 21:10         ` Konrad Rzeszutek Wilk
2014-06-17 21:10           ` Konrad Rzeszutek Wilk
2014-06-17 22:25           ` Waiman Long
2014-06-17 22:25             ` Waiman Long
2014-06-17 22:25           ` Waiman Long
2014-06-17 21:10         ` Konrad Rzeszutek Wilk
2014-06-24  8:24         ` Peter Zijlstra
2014-06-24  8:24           ` Peter Zijlstra
2014-06-24  8:24         ` Peter Zijlstra
2014-06-18 11:29     ` Paolo Bonzini
2014-06-18 11:29     ` Paolo Bonzini
2014-06-18 11:29       ` Paolo Bonzini
2014-06-18 13:36       ` Konrad Rzeszutek Wilk
2014-06-18 13:36       ` Konrad Rzeszutek Wilk
2014-06-18 13:36         ` Konrad Rzeszutek Wilk
2014-06-23 16:35     ` Peter Zijlstra
2014-06-23 16:35     ` Peter Zijlstra
2014-06-23 16:35       ` Peter Zijlstra
2014-06-15 12:47 ` [PATCH 04/11] qspinlock: Extract out the exchange of tail code word Peter Zijlstra
2014-06-15 12:47 ` Peter Zijlstra
2014-06-17 20:55   ` Konrad Rzeszutek Wilk
2014-06-17 20:55     ` Konrad Rzeszutek Wilk
2014-06-18 11:37     ` Paolo Bonzini
2014-06-18 11:37     ` Paolo Bonzini
2014-06-18 11:37       ` Paolo Bonzini
2014-06-18 13:50       ` Konrad Rzeszutek Wilk
2014-06-18 13:50       ` Konrad Rzeszutek Wilk
2014-06-18 13:50         ` Konrad Rzeszutek Wilk
2014-06-18 15:46         ` Waiman Long
2014-06-18 15:46           ` Waiman Long
2014-06-18 15:49           ` Paolo Bonzini
2014-06-18 15:49             ` Paolo Bonzini
2014-06-18 15:49           ` Paolo Bonzini
2014-06-18 16:02           ` Konrad Rzeszutek Wilk
2014-06-18 16:02           ` Konrad Rzeszutek Wilk
2014-06-18 16:02             ` Konrad Rzeszutek Wilk
2014-06-18 15:46         ` Waiman Long
2014-06-24 10:47       ` Peter Zijlstra
2014-06-24 10:47         ` Peter Zijlstra
2014-06-24 10:47       ` Peter Zijlstra
2014-06-17 20:55   ` Konrad Rzeszutek Wilk
2014-06-15 12:47 ` Peter Zijlstra
2014-06-15 12:47 ` [PATCH 05/11] qspinlock: Optimize for smaller NR_CPUS Peter Zijlstra
2014-06-15 12:47 ` Peter Zijlstra
2014-06-15 12:47   ` Peter Zijlstra
2014-06-18 11:39   ` Paolo Bonzini
2014-06-18 11:39   ` Paolo Bonzini
2014-06-18 11:39     ` Paolo Bonzini
2014-07-07 14:35     ` Peter Zijlstra
2014-07-07 14:35     ` Peter Zijlstra
2014-07-07 14:35       ` Peter Zijlstra
2014-07-07 15:08       ` Paolo Bonzini
2014-07-07 15:08       ` Paolo Bonzini
2014-07-07 15:08       ` Paolo Bonzini
2014-07-07 15:35         ` Peter Zijlstra
2014-07-07 15:35           ` Peter Zijlstra
2014-07-07 16:10           ` Paolo Bonzini
2014-07-07 16:10             ` Paolo Bonzini
2014-07-07 16:10           ` Paolo Bonzini
2014-07-07 15:35         ` Peter Zijlstra
2014-06-18 15:57   ` Konrad Rzeszutek Wilk
2014-06-18 15:57     ` Konrad Rzeszutek Wilk
2014-07-07 14:33     ` Peter Zijlstra
2014-07-07 14:33     ` Peter Zijlstra
2014-07-07 14:33       ` Peter Zijlstra
2014-06-18 15:57   ` Konrad Rzeszutek Wilk
2014-06-15 12:47 ` [PATCH 06/11] qspinlock: Optimize pending bit Peter Zijlstra
2014-06-15 12:47 ` Peter Zijlstra
2014-06-15 12:47 ` Peter Zijlstra
2014-06-18 11:42   ` Paolo Bonzini
2014-06-18 11:42   ` Paolo Bonzini
2014-06-18 11:42     ` Paolo Bonzini
2014-06-15 12:47 ` [PATCH 07/11] qspinlock: Use a simple write to grab the lock, if applicable Peter Zijlstra
2014-06-15 12:47   ` Peter Zijlstra
2014-06-18 16:36   ` Konrad Rzeszutek Wilk
2014-06-18 16:36   ` Konrad Rzeszutek Wilk
2014-06-18 16:36     ` Konrad Rzeszutek Wilk
2014-07-07 14:51     ` Peter Zijlstra
2014-07-07 14:51     ` Peter Zijlstra
2014-07-07 14:51       ` Peter Zijlstra
2014-06-15 12:47 ` Peter Zijlstra
2014-06-15 12:47 ` [PATCH 08/11] qspinlock: Revert to test-and-set on hypervisors Peter Zijlstra
2014-06-15 12:47 ` Peter Zijlstra
2014-06-15 12:47   ` Peter Zijlstra
2014-06-16 21:57   ` Waiman Long
2014-06-16 21:57   ` Waiman Long
2014-06-16 21:57   ` Waiman Long
2014-06-18 16:40   ` Konrad Rzeszutek Wilk
2014-06-18 16:40   ` Konrad Rzeszutek Wilk
2014-06-18 16:40     ` Konrad Rzeszutek Wilk
2014-06-15 12:47 ` [PATCH 09/11] pvqspinlock, x86: Rename paravirt_ticketlocks_enabled Peter Zijlstra
2014-06-18 16:43   ` Konrad Rzeszutek Wilk
2014-06-18 16:43   ` Konrad Rzeszutek Wilk
2014-06-18 16:43     ` Konrad Rzeszutek Wilk
2014-06-15 12:47 ` Peter Zijlstra
2014-06-15 12:47 ` Peter Zijlstra
2014-06-15 12:47 ` [PATCH 10/11] qspinlock: Paravirt support Peter Zijlstra
2014-06-15 12:47 ` Peter Zijlstra
2014-06-15 12:47   ` Peter Zijlstra
2014-06-16 22:08   ` Waiman Long
2014-06-18 12:03     ` Paolo Bonzini
2014-06-18 12:03       ` Paolo Bonzini
2014-06-18 15:26       ` Waiman Long
2014-06-18 15:26       ` Waiman Long
2014-06-18 15:26         ` Waiman Long
2014-07-07 15:20       ` Peter Zijlstra
2014-07-07 15:20       ` Peter Zijlstra
2014-07-07 15:20         ` Peter Zijlstra
2014-06-18 12:03     ` Paolo Bonzini
2014-07-07 15:20     ` Peter Zijlstra
2014-07-07 15:20     ` Peter Zijlstra
2014-07-07 15:20       ` Peter Zijlstra
2014-06-16 22:08   ` Waiman Long
2014-06-17  0:53   ` Waiman Long
2014-06-17  0:53     ` Waiman Long
2014-06-17  0:53   ` Waiman Long
2014-06-18 12:04   ` Paolo Bonzini
2014-06-18 12:04   ` Paolo Bonzini
2014-06-18 12:04     ` Paolo Bonzini
2014-06-20 13:46   ` Konrad Rzeszutek Wilk
2014-06-20 13:46   ` Konrad Rzeszutek Wilk
2014-06-20 13:46     ` Konrad Rzeszutek Wilk
2014-07-07 15:27     ` Peter Zijlstra
2014-07-15 14:23       ` Konrad Rzeszutek Wilk
2014-07-15 14:23       ` Konrad Rzeszutek Wilk
2014-07-15 14:23         ` Konrad Rzeszutek Wilk
2014-07-07 15:27     ` Peter Zijlstra
2014-07-07 15:27     ` Peter Zijlstra
2014-06-15 12:47 ` [PATCH 11/11] qspinlock, kvm: Add paravirt support Peter Zijlstra
2014-06-22 16:36   ` Raghavendra K T
2014-06-22 16:36     ` Raghavendra K T
2014-07-07 15:23     ` Peter Zijlstra
2014-07-07 15:23       ` Peter Zijlstra
2014-07-07 15:23     ` Peter Zijlstra
2014-06-22 16:36   ` Raghavendra K T
2014-06-15 12:47 ` Peter Zijlstra
2014-06-15 12:47 ` Peter Zijlstra
2014-06-16 20:52 ` [PATCH 00/11] qspinlock with " Konrad Rzeszutek Wilk
2014-06-16 20:52   ` Konrad Rzeszutek Wilk
2014-06-16 20:52 ` Konrad Rzeszutek Wilk
2014-06-15 12:46 Peter Zijlstra
