* [PATCH 00/20] x86: ticket lock rewrite and paravirtualization
@ 2010-11-03 14:59 Jeremy Fitzhardinge
  2010-11-03 14:59 ` [PATCH 01/20] x86/ticketlock: clean up types and accessors Jeremy Fitzhardinge
                   ` (20 more replies)
  0 siblings, 21 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Hi all,

This series does two major things:

1. It converts the bulk of the implementation to C, and makes the
   "small ticket" and "large ticket" code common.  Only the actual
   size-dependent asm instructions are specific to the ticket size.
   The resulting generated asm is very similar to the current
   hand-written code.

   This results in a very large reduction in lines of code.

2. It gets rid of pv spinlocks and replaces them with pv ticket locks.
   Currently we have the notion of "pv spinlocks" where a pv-ops
   backend can completely replace the spinlock implementation with a
   new one.  This has two disadvantages:
    - it's a completely separate spinlock implementation, and
    - there's an extra layer of indirection in front of every
      spinlock operation.

   To replace it, this series introduces the notion of pv ticket
   locks.  In this design, the ticket lock fastpath is the standard
   ticketlock algorithm.  However, after an iteration threshold it
   falls into a slow path which invokes a pv-op to block the spinning
   CPU.  Conversely, on unlock it does the normal unlock, and then
   checks to see if it needs to do a special "kick" to wake the next
   CPU.

   The net result is that the pv-op calls are restricted to the slow
   paths, and the normal fast-paths are largely unaffected.

   There are still some overheads, however:
   - When locking, there are some extra tests to count the spin iterations.
     There are no extra instructions in the uncontended case, though.

   - When unlocking, there are two ways to detect when it is necessary
     to kick a blocked CPU:

      - with an unmodified struct spinlock, it can check to see if
        head == tail after unlock; if not, then there's someone else
        trying to lock, and we can do a kick.  Unfortunately this
        generates a very high level of redundant kicks, because the
        waiting CPU might not have blocked yet (which is the common
        case).

      - with a struct spinlock modified to include a "waiters" field
        that keeps count of blocked CPUs, the test is much tighter.
        But it does increase the size of each spinlock by 50%
        (doubled with padding).  (A rough sketch of the resulting
        lock/unlock shape follows this list.)
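
For reference, here is a minimal, self-contained userspace sketch of the
shape the series converges on.  It is purely illustrative: the names
(struct ticketlock, ticket_lock, the stub hooks) are invented for this
sketch, and only the fastpath/slowpath split and the unlock-time waiter
check mirror what the patches below actually do.

#include <stdint.h>

#define SPIN_THRESHOLD	(1 << 11)	/* spins before considering blocking */

struct ticketlock {
	volatile uint8_t head, tail;	/* the usual ticket pair */
	volatile uint8_t waiting;	/* count of blocked cpus ("waiters" field) */
};

/* pvop hooks: no-ops on bare metal; a pv backend would block/kick here */
static void lock_spinning(struct ticketlock *l, uint8_t ticket) { (void)l; (void)ticket; }
static void unlock_kick(struct ticketlock *l, uint8_t next) { (void)l; (void)next; }

static void ticket_lock(struct ticketlock *l)
{
	uint8_t me = __sync_fetch_and_add(&l->tail, 1);	/* grab a ticket (xadd) */

	for (;;) {
		unsigned int count = SPIN_THRESHOLD;

		do {
			if (l->head == me)
				return;		/* unchanged ticketlock fastpath */
		} while (--count);
		lock_spinning(l, me);		/* slowpath: block this cpu */
	}
}

static void ticket_unlock(struct ticketlock *l)
{
	uint8_t next = l->head + 1;

	__sync_fetch_and_add(&l->head, 1);	/* normal unlock */
	if (l->waiting)		/* or: head != tail, with an unmodified lock */
		unlock_kick(l, next);		/* kick the next (blocked) waiter */
}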

The series is very fine-grained, and I've left a lightly cleaned up
history of the various options I evaluated, since they're not all
obvious.

Jeremy Fitzhardinge (20):
  x86/ticketlock: clean up types and accessors
  x86/ticketlock: convert spin loop to C
  x86/ticketlock: Use C for __ticket_spin_unlock
  x86/ticketlock: make large and small ticket versions of spin_lock the
    same
  x86/ticketlock: make __ticket_spin_lock common
  x86/ticketlock: make __ticket_spin_trylock common
  x86/spinlocks: replace pv spinlocks with pv ticketlocks
  x86/ticketlock: collapse a layer of functions
  xen/pvticketlock: Xen implementation for PV ticket locks
  x86/pvticketlock: keep count of blocked cpus
  x86/pvticketlock: use callee-save for lock_spinning
  x86/pvticketlock: use callee-save for unlock_kick as well
  x86/pvticketlock: make sure unlock is seen by everyone before
    checking waiters
  x86/ticketlock: loosen ordering restraints on unlock
  x86/ticketlock: prevent compiler reordering into locked region
  x86/ticketlock: don't inline _spin_unlock when using paravirt
    spinlocks
  x86/ticketlock: clarify barrier in arch_spin_lock
  x86/ticketlock: remove .slock
  x86/ticketlocks: use overlapping read to eliminate mb()
  x86/ticketlock: rename ticketpair to head_tail

 arch/x86/Kconfig                      |    3 +
 arch/x86/include/asm/paravirt.h       |   30 +---
 arch/x86/include/asm/paravirt_types.h |    8 +-
 arch/x86/include/asm/spinlock.h       |  250 +++++++++++++++--------------
 arch/x86/include/asm/spinlock_types.h |   41 +++++-
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +--
 arch/x86/xen/spinlock.c               |  282 +++++----------------------------
 kernel/Kconfig.locks                  |    2 +-
 8 files changed, 221 insertions(+), 410 deletions(-)

-- 
1.7.2.3



* [PATCH 01/20] x86/ticketlock: clean up types and accessors
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
@ 2010-11-03 14:59 ` Jeremy Fitzhardinge
  2010-11-13  9:57   ` Américo Wang
  2010-11-03 14:59 ` [PATCH 02/20] x86/ticketlock: convert spin loop to C Jeremy Fitzhardinge
                   ` (19 subsequent siblings)
  20 siblings, 1 reply; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

A few cleanups to the way spinlocks are defined and accessed:
 - Define __ticket_t, which is the size of a spinlock ticket (i.e., enough
   bits to hold all the cpus).
 - Define struct arch_spinlock as a union containing the plain slock and
   the head and tail tickets.
 - Use head and tail to implement some of the spinlock predicates.
 - Make all ticket variables unsigned.
 - Use TICKET_SHIFT to form constants.

Most of this will be used in later patches.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h       |   24 ++++++++++--------------
 arch/x86/include/asm/spinlock_types.h |   20 ++++++++++++++++++--
 2 files changed, 28 insertions(+), 16 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 3089f70..d6d5784 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -56,11 +56,9 @@
  * much between them in performance though, especially as locks are out of line.
  */
 #if (NR_CPUS < 256)
-#define TICKET_SHIFT 8
-
 static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 {
-	short inc = 0x0100;
+	unsigned short inc = 1 << TICKET_SHIFT;
 
 	asm volatile (
 		LOCK_PREFIX "xaddw %w0, %1\n"
@@ -79,7 +77,7 @@ static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 {
-	int tmp, new;
+	unsigned int tmp, new;
 
 	asm volatile("movzwl %2, %0\n\t"
 		     "cmpb %h0,%b0\n\t"
@@ -104,12 +102,10 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 		     : "memory", "cc");
 }
 #else
-#define TICKET_SHIFT 16
-
 static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 {
-	int inc = 0x00010000;
-	int tmp;
+	unsigned inc = 1 << TICKET_SHIFT;
+	unsigned tmp;
 
 	asm volatile(LOCK_PREFIX "xaddl %0, %1\n"
 		     "movzwl %w0, %2\n\t"
@@ -129,8 +125,8 @@ static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 {
-	int tmp;
-	int new;
+	unsigned tmp;
+	unsigned new;
 
 	asm volatile("movl %2,%0\n\t"
 		     "movl %0,%1\n\t"
@@ -160,16 +156,16 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 
 static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
 {
-	int tmp = ACCESS_ONCE(lock->slock);
+	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
-	return !!(((tmp >> TICKET_SHIFT) ^ tmp) & ((1 << TICKET_SHIFT) - 1));
+	return !!(tmp.tail ^ tmp.head);
 }
 
 static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
 {
-	int tmp = ACCESS_ONCE(lock->slock);
+	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
-	return (((tmp >> TICKET_SHIFT) - tmp) & ((1 << TICKET_SHIFT) - 1)) > 1;
+	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
 }
 
 #ifndef CONFIG_PARAVIRT_SPINLOCKS
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index dcb48b2..4582640 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -5,11 +5,27 @@
 # error "please don't include this file directly"
 #endif
 
+#include <linux/types.h>
+
+#if (CONFIG_NR_CPUS < 256)
+typedef u8  __ticket_t;
+#else
+typedef u16 __ticket_t;
+#endif
+
+#define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
+#define TICKET_MASK	((1 << TICKET_SHIFT) - 1)
+
 typedef struct arch_spinlock {
-	unsigned int slock;
+	union {
+		unsigned int slock;
+		struct __raw_tickets {
+			__ticket_t head, tail;
+		} tickets;
+	};
 } arch_spinlock_t;
 
-#define __ARCH_SPIN_LOCK_UNLOCKED	{ 0 }
+#define __ARCH_SPIN_LOCK_UNLOCKED	{ { .slock = 0 } }
 
 typedef struct {
 	unsigned int lock;
-- 
1.7.2.3



* [PATCH 02/20] x86/ticketlock: convert spin loop to C
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
  2010-11-03 14:59 ` [PATCH 01/20] x86/ticketlock: clean up types and accessors Jeremy Fitzhardinge
@ 2010-11-03 14:59 ` Jeremy Fitzhardinge
  2010-11-03 15:11   ` Eric Dumazet
  2010-11-03 14:59 ` [PATCH 03/20] x86/ticketlock: Use C for __ticket_spin_unlock Jeremy Fitzhardinge
                   ` (18 subsequent siblings)
  20 siblings, 1 reply; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

The inner loop of __ticket_spin_lock isn't doing anything very special,
so reimplement it in C.

For the 8-bit ticket lock variant, we use a register union to get direct
access to the lower and upper bytes in the tickets, but unfortunately gcc
won't generate a direct comparison between the two halves of the register,
so the generated asm isn't quite as pretty as the hand-coded version.
However, benchmarking shows that this is actually a small improvement in
runtime performance on some benchmarks, and never a slowdown.

We also need a barrier at the end of the lock loop to make sure that the
compiler doesn't move any instructions from within the locked region into
the region where we don't yet own the lock.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h |   58 +++++++++++++++++++-------------------
 1 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index d6d5784..6711d36 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -58,21 +58,21 @@
 #if (NR_CPUS < 256)
 static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 {
-	unsigned short inc = 1 << TICKET_SHIFT;
-
-	asm volatile (
-		LOCK_PREFIX "xaddw %w0, %1\n"
-		"1:\t"
-		"cmpb %h0, %b0\n\t"
-		"je 2f\n\t"
-		"rep ; nop\n\t"
-		"movb %1, %b0\n\t"
-		/* don't need lfence here, because loads are in-order */
-		"jmp 1b\n"
-		"2:"
-		: "+Q" (inc), "+m" (lock->slock)
-		:
-		: "memory", "cc");
+	register union {
+		struct __raw_tickets tickets;
+		unsigned short slock;
+	} inc = { .slock = 1 << TICKET_SHIFT };
+
+	asm volatile (LOCK_PREFIX "xaddw %w0, %1\n"
+		      : "+Q" (inc), "+m" (lock->slock) : : "memory", "cc");
+
+	for (;;) {
+		if (inc.tickets.head == inc.tickets.tail)
+			return;
+		cpu_relax();
+		inc.tickets.head = ACCESS_ONCE(lock->tickets.head);
+	}
+	barrier();		/* make sure nothing creeps before the lock is taken */
 }
 
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
@@ -105,22 +105,22 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 {
 	unsigned inc = 1 << TICKET_SHIFT;
-	unsigned tmp;
+	__ticket_t tmp;
 
-	asm volatile(LOCK_PREFIX "xaddl %0, %1\n"
-		     "movzwl %w0, %2\n\t"
-		     "shrl $16, %0\n\t"
-		     "1:\t"
-		     "cmpl %0, %2\n\t"
-		     "je 2f\n\t"
-		     "rep ; nop\n\t"
-		     "movzwl %1, %2\n\t"
-		     /* don't need lfence here, because loads are in-order */
-		     "jmp 1b\n"
-		     "2:"
-		     : "+r" (inc), "+m" (lock->slock), "=&r" (tmp)
-		     :
-		     : "memory", "cc");
+	asm volatile(LOCK_PREFIX "xaddl %0, %1\n\t"
+		     : "+r" (inc), "+m" (lock->slock)
+		     : : "memory", "cc");
+
+	tmp = inc;
+	inc >>= TICKET_SHIFT;
+
+	for (;;) {
+		if ((__ticket_t)inc == tmp)
+			return;
+		cpu_relax();
+		tmp = ACCESS_ONCE(lock->tickets.head);
+	}
+	barrier();		/* make sure nothing creeps before the lock is taken */
 }
 
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
-- 
1.7.2.3



* [PATCH 03/20] x86/ticketlock: Use C for __ticket_spin_unlock
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
  2010-11-03 14:59 ` [PATCH 01/20] x86/ticketlock: clean up types and accessors Jeremy Fitzhardinge
  2010-11-03 14:59 ` [PATCH 02/20] x86/ticketlock: convert spin loop to C Jeremy Fitzhardinge
@ 2010-11-03 14:59 ` Jeremy Fitzhardinge
  2010-11-03 15:13   ` Eric Dumazet
  2010-11-13 10:05   ` Américo Wang
  2010-11-03 14:59 ` [PATCH 04/20] x86/ticketlock: make large and small ticket versions of spin_lock the same Jeremy Fitzhardinge
                   ` (17 subsequent siblings)
  20 siblings, 2 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

If we don't need to use a locked inc for unlock, then implement it in C.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h |   33 ++++++++++++++++++---------------
 1 files changed, 18 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 6711d36..082990a 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -33,9 +33,23 @@
  * On PPro SMP or if we are using OOSTORE, we use a locked operation to unlock
  * (PPro errata 66, 92)
  */
-# define UNLOCK_LOCK_PREFIX LOCK_PREFIX
+static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
+{
+	if (sizeof(lock->tickets.head) == sizeof(u8))
+		asm (LOCK_PREFIX "incb %0"
+		     : "+m" (lock->tickets.head) : : "memory");
+	else
+		asm (LOCK_PREFIX "incw %0"
+		     : "+m" (lock->tickets.head) : : "memory");
+
+}
 #else
-# define UNLOCK_LOCK_PREFIX
+static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
+{
+	barrier();
+	lock->tickets.head++;
+	barrier();
+}
 #endif
 
 /*
@@ -93,14 +107,6 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 
 	return tmp;
 }
-
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
-{
-	asm volatile(UNLOCK_LOCK_PREFIX "incb %0"
-		     : "+m" (lock->slock)
-		     :
-		     : "memory", "cc");
-}
 #else
 static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 {
@@ -144,15 +150,12 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 
 	return tmp;
 }
+#endif
 
 static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 {
-	asm volatile(UNLOCK_LOCK_PREFIX "incw %0"
-		     : "+m" (lock->slock)
-		     :
-		     : "memory", "cc");
+	__ticket_unlock_release(lock);
 }
-#endif
 
 static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
 {
-- 
1.7.2.3



* [PATCH 04/20] x86/ticketlock: make large and small ticket versions of spin_lock the same
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
                   ` (2 preceding siblings ...)
  2010-11-03 14:59 ` [PATCH 03/20] x86/ticketlock: Use C for __ticket_spin_unlock Jeremy Fitzhardinge
@ 2010-11-03 14:59 ` Jeremy Fitzhardinge
  2010-11-12 12:19   ` Srivatsa Vaddagiri
  2010-11-03 14:59 ` [PATCH 05/20] x86/ticketlock: make __ticket_spin_lock common Jeremy Fitzhardinge
                   ` (16 subsequent siblings)
  20 siblings, 1 reply; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Make the bulk of __ticket_spin_lock look identical for large and small
numbers of cpus.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h |   23 ++++++++---------------
 1 files changed, 8 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 082990a..7586d7a 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -72,19 +72,16 @@ static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
 #if (NR_CPUS < 256)
 static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 {
-	register union {
-		struct __raw_tickets tickets;
-		unsigned short slock;
-	} inc = { .slock = 1 << TICKET_SHIFT };
+	register struct __raw_tickets inc = { .tail = 1 };
 
 	asm volatile (LOCK_PREFIX "xaddw %w0, %1\n"
-		      : "+Q" (inc), "+m" (lock->slock) : : "memory", "cc");
+		      : "+r" (inc), "+m" (lock->tickets) : : "memory", "cc");
 
 	for (;;) {
-		if (inc.tickets.head == inc.tickets.tail)
+		if (inc.head == inc.tail)
 			return;
 		cpu_relax();
-		inc.tickets.head = ACCESS_ONCE(lock->tickets.head);
+		inc.head = ACCESS_ONCE(lock->tickets.head);
 	}
 	barrier();		/* make sure nothing creeps before the lock is taken */
 }
@@ -110,21 +107,17 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 #else
 static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 {
-	unsigned inc = 1 << TICKET_SHIFT;
-	__ticket_t tmp;
+	register struct __raw_tickets inc = { .tickets.tail = 1 };
 
 	asm volatile(LOCK_PREFIX "xaddl %0, %1\n\t"
-		     : "+r" (inc), "+m" (lock->slock)
+		     : "+r" (inc), "+m" (lock->tickets)
 		     : : "memory", "cc");
 
-	tmp = inc;
-	inc >>= TICKET_SHIFT;
-
 	for (;;) {
-		if ((__ticket_t)inc == tmp)
+		if (inc.head == inc.tail)
 			return;
 		cpu_relax();
-		tmp = ACCESS_ONCE(lock->tickets.head);
+		inc.head = ACCESS_ONCE(lock->tickets.head);
 	}
 	barrier();		/* make sure nothing creeps before the lock is taken */
 }
-- 
1.7.2.3



* [PATCH 05/20] x86/ticketlock: make __ticket_spin_lock common
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
                   ` (3 preceding siblings ...)
  2010-11-03 14:59 ` [PATCH 04/20] x86/ticketlock: make large and small ticket versions of spin_lock the same Jeremy Fitzhardinge
@ 2010-11-03 14:59 ` Jeremy Fitzhardinge
  2010-11-03 14:59 ` [PATCH 06/20] x86/ticketlock: make __ticket_spin_trylock common Jeremy Fitzhardinge
                   ` (15 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Aside from the particular form of the xadd instruction, they're identical.
So factor out the xadd and use common code for the rest.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h |   42 ++++++++++++++++++--------------------
 1 files changed, 20 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 7586d7a..4f9fa24 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -69,13 +69,27 @@ static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
  * save some instructions and make the code more elegant. There really isn't
  * much between them in performance though, especially as locks are out of line.
  */
-#if (NR_CPUS < 256)
-static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
+static __always_inline struct __raw_tickets __ticket_spin_claim(struct arch_spinlock *lock)
 {
-	register struct __raw_tickets inc = { .tail = 1 };
+	register struct __raw_tickets tickets = { .tail = 1 };
+
+	if (sizeof(lock->tickets.head) == sizeof(u8))
+		asm volatile (LOCK_PREFIX "xaddw %w0, %1\n"
+			      : "+r" (tickets), "+m" (lock->tickets)
+			      : : "memory", "cc");
+	else
+		asm volatile (LOCK_PREFIX "xaddl %0, %1\n"
+			     : "+r" (tickets), "+m" (lock->tickets)
+			     : : "memory", "cc");
 
-	asm volatile (LOCK_PREFIX "xaddw %w0, %1\n"
-		      : "+r" (inc), "+m" (lock->tickets) : : "memory", "cc");
+	return tickets;
+}
+
+static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
+{
+	register struct __raw_tickets inc;
+
+	inc = __ticket_spin_claim(lock);
 
 	for (;;) {
 		if (inc.head == inc.tail)
@@ -86,6 +100,7 @@ static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 	barrier();		/* make sure nothing creeps before the lock is taken */
 }
 
+#if (NR_CPUS < 256)
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 {
 	unsigned int tmp, new;
@@ -105,23 +120,6 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 	return tmp;
 }
 #else
-static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
-{
-	register struct __raw_tickets inc = { .tickets.tail = 1 };
-
-	asm volatile(LOCK_PREFIX "xaddl %0, %1\n\t"
-		     : "+r" (inc), "+m" (lock->tickets)
-		     : : "memory", "cc");
-
-	for (;;) {
-		if (inc.head == inc.tail)
-			return;
-		cpu_relax();
-		inc.head = ACCESS_ONCE(lock->tickets.head);
-	}
-	barrier();		/* make sure nothing creeps before the lock is taken */
-}
-
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 {
 	unsigned tmp;
-- 
1.7.2.3



* [PATCH 06/20] x86/ticketlock: make __ticket_spin_trylock common
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
                   ` (4 preceding siblings ...)
  2010-11-03 14:59 ` [PATCH 05/20] x86/ticketlock: make __ticket_spin_lock common Jeremy Fitzhardinge
@ 2010-11-03 14:59 ` Jeremy Fitzhardinge
  2010-11-13 10:17   ` Américo Wang
  2010-11-03 14:59 ` [PATCH 07/20] x86/spinlocks: replace pv spinlocks with pv ticketlocks Jeremy Fitzhardinge
                   ` (14 subsequent siblings)
  20 siblings, 1 reply; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Make trylock code common regardless of ticket size.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h       |   55 +++++++++-----------------------
 arch/x86/include/asm/spinlock_types.h |    3 ++
 2 files changed, 19 insertions(+), 39 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 4f9fa24..507f83c 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -100,48 +100,25 @@ static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 	barrier();		/* make sure nothing creeps before the lock is taken */
 }
 
-#if (NR_CPUS < 256)
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 {
-	unsigned int tmp, new;
-
-	asm volatile("movzwl %2, %0\n\t"
-		     "cmpb %h0,%b0\n\t"
-		     "leal 0x100(%" REG_PTR_MODE "0), %1\n\t"
-		     "jne 1f\n\t"
-		     LOCK_PREFIX "cmpxchgw %w1,%2\n\t"
-		     "1:"
-		     "sete %b1\n\t"
-		     "movzbl %b1,%0\n\t"
-		     : "=&a" (tmp), "=&q" (new), "+m" (lock->slock)
-		     :
-		     : "memory", "cc");
-
-	return tmp;
-}
-#else
-static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
-{
-	unsigned tmp;
-	unsigned new;
-
-	asm volatile("movl %2,%0\n\t"
-		     "movl %0,%1\n\t"
-		     "roll $16, %0\n\t"
-		     "cmpl %0,%1\n\t"
-		     "leal 0x00010000(%" REG_PTR_MODE "0), %1\n\t"
-		     "jne 1f\n\t"
-		     LOCK_PREFIX "cmpxchgl %1,%2\n\t"
-		     "1:"
-		     "sete %b1\n\t"
-		     "movzbl %b1,%0\n\t"
-		     : "=&a" (tmp), "=&q" (new), "+m" (lock->slock)
-		     :
-		     : "memory", "cc");
-
-	return tmp;
+	union {
+		struct __raw_tickets tickets;
+		__ticketpair_t slock;
+	} tmp, new;
+	int ret;
+
+	tmp.tickets = ACCESS_ONCE(lock->tickets);
+	if (tmp.tickets.head != tmp.tickets.tail)
+		return 0;
+
+	new.slock = tmp.slock + (1 << TICKET_SHIFT);
+
+	ret = cmpxchg(&lock->ticketpair, tmp.slock, new.slock) == tmp.slock;
+	barrier();		/* just make sure nothing creeps before lock is claimed */
+
+	return ret;
 }
-#endif
 
 static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 {
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 4582640..48dafc3 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -9,8 +9,10 @@
 
 #if (CONFIG_NR_CPUS < 256)
 typedef u8  __ticket_t;
+typedef u16 __ticketpair_t;
 #else
 typedef u16 __ticket_t;
+typedef u32 __ticketpair_t;
 #endif
 
 #define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
@@ -19,6 +21,7 @@ typedef u16 __ticket_t;
 typedef struct arch_spinlock {
 	union {
 		unsigned int slock;
+		__ticketpair_t ticketpair;
 		struct __raw_tickets {
 			__ticket_t head, tail;
 		} tickets;
-- 
1.7.2.3



* [PATCH 07/20] x86/spinlocks: replace pv spinlocks with pv ticketlocks
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
                   ` (5 preceding siblings ...)
  2010-11-03 14:59 ` [PATCH 06/20] x86/ticketlock: make __ticket_spin_trylock common Jeremy Fitzhardinge
@ 2010-11-03 14:59 ` Jeremy Fitzhardinge
  2010-11-03 14:59 ` [PATCH 08/20] x86/ticketlock: collapse a layer of functions Jeremy Fitzhardinge
                   ` (13 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow paths (long spin on lock, unlocking
a contended lock).

Ticket locks have a number of nice properties, but they also have some
surprising behaviours in virtual environments.  They enforce a strict
FIFO ordering on cpus trying to take a lock; however, if the hypervisor
scheduler does not schedule the cpus in the correct order, the system can
waste a huge amount of time spinning until the next cpu can take the lock.

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

To address this, we add two hooks:
 - __ticket_lock_spinning, which is called after the cpu has been
   spinning on the lock for a significant number of iterations but has
   failed to take the lock (presumably because the cpu holding the lock
   has been descheduled).  The lock_spinning pvop is expected to block
   the cpu until it has been kicked by the current lock holder.
 - __ticket_unlock_kick, which is called on releasing a contended lock
   (there are more cpus with tail tickets); it looks to see if the next
   cpu is blocked and wakes it if so.

When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub
functions causes all the extra code to go away.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/paravirt.h       |   30 +++-------------------
 arch/x86/include/asm/paravirt_types.h |    8 +----
 arch/x86/include/asm/spinlock.h       |   44 +++++++++++++++++++++++++++------
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +---------
 arch/x86/xen/spinlock.c               |    7 ++++-
 5 files changed, 50 insertions(+), 54 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 5653f43..79c77e5 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -722,36 +722,14 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
 
-static inline int arch_spin_is_locked(struct arch_spinlock *lock)
+static inline void __ticket_lock_spinning(struct arch_spinlock *lock, unsigned ticket)
 {
-	return PVOP_CALL1(int, pv_lock_ops.spin_is_locked, lock);
+	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static inline int arch_spin_is_contended(struct arch_spinlock *lock)
+static inline void ____ticket_unlock_kick(struct arch_spinlock *lock, unsigned ticket)
 {
-	return PVOP_CALL1(int, pv_lock_ops.spin_is_contended, lock);
-}
-#define arch_spin_is_contended	arch_spin_is_contended
-
-static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
-{
-	PVOP_VCALL1(pv_lock_ops.spin_lock, lock);
-}
-
-static __always_inline void arch_spin_lock_flags(struct arch_spinlock *lock,
-						  unsigned long flags)
-{
-	PVOP_VCALL2(pv_lock_ops.spin_lock_flags, lock, flags);
-}
-
-static __always_inline int arch_spin_trylock(struct arch_spinlock *lock)
-{
-	return PVOP_CALL1(int, pv_lock_ops.spin_trylock, lock);
-}
-
-static __always_inline void arch_spin_unlock(struct arch_spinlock *lock)
-{
-	PVOP_VCALL1(pv_lock_ops.spin_unlock, lock);
+	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
 
 #endif
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index db9ef55..968ee31 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -316,12 +316,8 @@ struct pv_mmu_ops {
 
 struct arch_spinlock;
 struct pv_lock_ops {
-	int (*spin_is_locked)(struct arch_spinlock *lock);
-	int (*spin_is_contended)(struct arch_spinlock *lock);
-	void (*spin_lock)(struct arch_spinlock *lock);
-	void (*spin_lock_flags)(struct arch_spinlock *lock, unsigned long flags);
-	int (*spin_trylock)(struct arch_spinlock *lock);
-	void (*spin_unlock)(struct arch_spinlock *lock);
+	void (*lock_spinning)(struct arch_spinlock *lock, unsigned ticket);
+	void (*unlock_kick)(struct arch_spinlock *lock, unsigned ticket);
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 507f83c..06815f3 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -52,6 +52,21 @@ static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
 }
 #endif
 
+/* How long a lock should spin before we consider blocking */
+#define SPIN_THRESHOLD	(1 << 11)
+
+#ifndef CONFIG_PARAVIRT_SPINLOCKS
+
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, unsigned ticket)
+{
+}
+
+static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock, unsigned ticket)
+{
+}
+
+#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
+
 /*
  * Ticket locks are conceptually two parts, one indicating the current head of
  * the queue, and the other indicating the current tail. The lock is acquired
@@ -85,6 +100,16 @@ static __always_inline struct __raw_tickets __ticket_spin_claim(struct arch_spin
 	return tickets;
 }
 
+/* 
+ * If a spinlock has someone waiting on it, then kick the appropriate
+ * waiting cpu.
+ */
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
+{
+	if (unlikely(lock->tickets.tail != next))
+		____ticket_unlock_kick(lock, next);
+}
+
 static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 {
 	register struct __raw_tickets inc;
@@ -92,10 +117,15 @@ static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 	inc = __ticket_spin_claim(lock);
 
 	for (;;) {
-		if (inc.head == inc.tail)
-			return;
-		cpu_relax();
-		inc.head = ACCESS_ONCE(lock->tickets.head);
+		unsigned count = SPIN_THRESHOLD;
+
+		do {
+			if (inc.head == inc.tail)
+				return;
+			cpu_relax();
+			inc.head = ACCESS_ONCE(lock->tickets.head);
+		} while (--count);
+		__ticket_lock_spinning(lock, inc.tail);
 	}
 	barrier();		/* make sure nothing creeps before the lock is taken */
 }
@@ -122,7 +152,9 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 
 static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 {
+	__ticket_t next = lock->tickets.head + 1;
 	__ticket_unlock_release(lock);
+	__ticket_unlock_kick(lock, next);
 }
 
 static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
@@ -139,8 +171,6 @@ static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
 	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
 }
 
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
-
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
 	return __ticket_spin_is_locked(lock);
@@ -173,8 +203,6 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 	arch_spin_lock(lock);
 }
 
-#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
-
 static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
 	while (arch_spin_is_locked(lock))
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 676b8c7..c2e010e 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -7,21 +7,10 @@
 
 #include <asm/paravirt.h>
 
-static inline void
-default_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
-{
-	arch_spin_lock(lock);
-}
-
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-	.spin_is_locked = __ticket_spin_is_locked,
-	.spin_is_contended = __ticket_spin_is_contended,
-
-	.spin_lock = __ticket_spin_lock,
-	.spin_lock_flags = default_spin_lock_flags,
-	.spin_trylock = __ticket_spin_trylock,
-	.spin_unlock = __ticket_spin_unlock,
+	.lock_spinning = paravirt_nop,
+	.unlock_kick = paravirt_nop,
 #endif
 };
 EXPORT_SYMBOL(pv_lock_ops);
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index e0500646..7e9eb52 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -121,6 +121,9 @@ struct xen_spinlock {
 	unsigned short spinners;	/* count of waiting cpus */
 };
 
+static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+
+#if 0
 static int xen_spin_is_locked(struct arch_spinlock *lock)
 {
 	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
@@ -148,7 +151,6 @@ static int xen_spin_trylock(struct arch_spinlock *lock)
 	return old == 0;
 }
 
-static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
 static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
 
 /*
@@ -338,6 +340,7 @@ static void xen_spin_unlock(struct arch_spinlock *lock)
 	if (unlikely(xl->spinners))
 		xen_spin_unlock_slow(xl);
 }
+#endif
 
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
@@ -373,12 +376,14 @@ void xen_uninit_lock_cpu(int cpu)
 
 void __init xen_init_spinlocks(void)
 {
+#if 0
 	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
 	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
 	pv_lock_ops.spin_lock = xen_spin_lock;
 	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
 	pv_lock_ops.spin_trylock = xen_spin_trylock;
 	pv_lock_ops.spin_unlock = xen_spin_unlock;
+#endif
 }
 
 #ifdef CONFIG_XEN_DEBUG_FS
-- 
1.7.2.3



* [PATCH 08/20] x86/ticketlock: collapse a layer of functions
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
                   ` (6 preceding siblings ...)
  2010-11-03 14:59 ` [PATCH 07/20] x86/spinlocks: replace pv spinlocks with pv ticketlocks Jeremy Fitzhardinge
@ 2010-11-03 14:59 ` Jeremy Fitzhardinge
  2010-11-03 14:59 ` [PATCH 09/20] xen/pvticketlock: Xen implementation for PV ticket locks Jeremy Fitzhardinge
                   ` (12 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Now that the paravirtualization layer doesn't exist at the spinlock
level any more, we can collapse the __ticket_ functions into the arch_
functions.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h |   35 +++++------------------------------
 1 files changed, 5 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 06815f3..a79dfee 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -110,7 +110,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __t
 		____ticket_unlock_kick(lock, next);
 }
 
-static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
+static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
 	register struct __raw_tickets inc;
 
@@ -130,7 +130,7 @@ static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 	barrier();		/* make sure nothing creeps before the lock is taken */
 }
 
-static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
+static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 {
 	union {
 		struct __raw_tickets tickets;
@@ -150,53 +150,28 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 	return ret;
 }
 
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
+static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 	__ticket_t next = lock->tickets.head + 1;
 	__ticket_unlock_release(lock);
 	__ticket_unlock_kick(lock, next);
 }
 
-static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
+static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
 	return !!(tmp.tail ^ tmp.head);
 }
 
-static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
+static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
 	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
 }
-
-static inline int arch_spin_is_locked(arch_spinlock_t *lock)
-{
-	return __ticket_spin_is_locked(lock);
-}
-
-static inline int arch_spin_is_contended(arch_spinlock_t *lock)
-{
-	return __ticket_spin_is_contended(lock);
-}
 #define arch_spin_is_contended	arch_spin_is_contended
 
-static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-	__ticket_spin_lock(lock);
-}
-
-static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
-{
-	return __ticket_spin_trylock(lock);
-}
-
-static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-	__ticket_spin_unlock(lock);
-}
-
 static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 						  unsigned long flags)
 {
-- 
1.7.2.3



* [PATCH 09/20] xen/pvticketlock: Xen implementation for PV ticket locks
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
                   ` (7 preceding siblings ...)
  2010-11-03 14:59 ` [PATCH 08/20] x86/ticketlock: collapse a layer of functions Jeremy Fitzhardinge
@ 2010-11-03 14:59 ` Jeremy Fitzhardinge
  2010-11-03 14:59 ` [PATCH 10/20] x86/pvticketlock: keep count of blocked cpus Jeremy Fitzhardinge
                   ` (11 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Replace the old Xen implementation of PV spinlocks with an implementation
of xen_lock_spinning and xen_unlock_kick.

xen_lock_spinning simply records the lock and the ticket it wants in
this cpu's lock_waiting entry, adds the cpu to the waiting_cpus set, and
blocks on an event channel until the channel becomes pending.

xen_unlock_kick searches the cpus in waiting_cpus looking for the one
which wants this lock with the next ticket, if any.  If found, it kicks
that cpu by making its event channel pending, which wakes it up.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/xen/spinlock.c |  281 ++++++-----------------------------------------
 1 files changed, 36 insertions(+), 245 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 7e9eb52..5feb897 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -19,32 +19,21 @@
 #ifdef CONFIG_XEN_DEBUG_FS
 static struct xen_spinlock_stats
 {
-	u64 taken;
 	u32 taken_slow;
-	u32 taken_slow_nested;
 	u32 taken_slow_pickup;
 	u32 taken_slow_spurious;
-	u32 taken_slow_irqenable;
 
-	u64 released;
 	u32 released_slow;
 	u32 released_slow_kicked;
 
 #define HISTO_BUCKETS	30
-	u32 histo_spin_total[HISTO_BUCKETS+1];
-	u32 histo_spin_spinning[HISTO_BUCKETS+1];
 	u32 histo_spin_blocked[HISTO_BUCKETS+1];
 
-	u64 time_total;
-	u64 time_spinning;
 	u64 time_blocked;
 } spinlock_stats;
 
 static u8 zero_stats;
 
-static unsigned lock_timeout = 1 << 10;
-#define TIMEOUT lock_timeout
-
 static inline void check_zero(void)
 {
 	if (unlikely(zero_stats)) {
@@ -73,22 +62,6 @@ static void __spin_time_accum(u64 delta, u32 *array)
 		array[HISTO_BUCKETS]++;
 }
 
-static inline void spin_time_accum_spinning(u64 start)
-{
-	u32 delta = xen_clocksource_read() - start;
-
-	__spin_time_accum(delta, spinlock_stats.histo_spin_spinning);
-	spinlock_stats.time_spinning += delta;
-}
-
-static inline void spin_time_accum_total(u64 start)
-{
-	u32 delta = xen_clocksource_read() - start;
-
-	__spin_time_accum(delta, spinlock_stats.histo_spin_total);
-	spinlock_stats.time_total += delta;
-}
-
 static inline void spin_time_accum_blocked(u64 start)
 {
 	u32 delta = xen_clocksource_read() - start;
@@ -105,214 +78,76 @@ static inline u64 spin_time_start(void)
 	return 0;
 }
 
-static inline void spin_time_accum_total(u64 start)
-{
-}
-static inline void spin_time_accum_spinning(u64 start)
-{
-}
 static inline void spin_time_accum_blocked(u64 start)
 {
 }
 #endif  /* CONFIG_XEN_DEBUG_FS */
 
-struct xen_spinlock {
-	unsigned char lock;		/* 0 -> free; 1 -> locked */
-	unsigned short spinners;	/* count of waiting cpus */
+struct xen_lock_waiting {
+	struct arch_spinlock *lock;
+	__ticket_t want;
 };
 
 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
+static cpumask_t waiting_cpus;
 
-#if 0
-static int xen_spin_is_locked(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	return xl->lock != 0;
-}
-
-static int xen_spin_is_contended(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	/* Not strictly true; this is only the count of contended
-	   lock-takers entering the slow path. */
-	return xl->spinners != 0;
-}
-
-static int xen_spin_trylock(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	u8 old = 1;
-
-	asm("xchgb %b0,%1"
-	    : "+q" (old), "+m" (xl->lock) : : "memory");
-
-	return old == 0;
-}
-
-static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
-
-/*
- * Mark a cpu as interested in a lock.  Returns the CPU's previous
- * lock of interest, in case we got preempted by an interrupt.
- */
-static inline struct xen_spinlock *spinning_lock(struct xen_spinlock *xl)
-{
-	struct xen_spinlock *prev;
-
-	prev = __get_cpu_var(lock_spinners);
-	__get_cpu_var(lock_spinners) = xl;
-
-	wmb();			/* set lock of interest before count */
-
-	asm(LOCK_PREFIX " incw %0"
-	    : "+m" (xl->spinners) : : "memory");
-
-	return prev;
-}
-
-/*
- * Mark a cpu as no longer interested in a lock.  Restores previous
- * lock of interest (NULL for none).
- */
-static inline void unspinning_lock(struct xen_spinlock *xl, struct xen_spinlock *prev)
-{
-	asm(LOCK_PREFIX " decw %0"
-	    : "+m" (xl->spinners) : : "memory");
-	wmb();			/* decrement count before restoring lock */
-	__get_cpu_var(lock_spinners) = prev;
-}
-
-static noinline int xen_spin_lock_slow(struct arch_spinlock *lock, bool irq_enable)
+static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
 {
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	struct xen_spinlock *prev;
 	int irq = __get_cpu_var(lock_kicker_irq);
-	int ret;
+	struct xen_lock_waiting *w = &__get_cpu_var(lock_waiting);
+	int cpu = smp_processor_id();
 	u64 start;
 
 	/* If kicker interrupts not initialized yet, just spin */
 	if (irq == -1)
-		return 0;
+		return;
 
 	start = spin_time_start();
 
-	/* announce we're spinning */
-	prev = spinning_lock(xl);
+	w->want = want;
+	w->lock = lock;
+
+	/* This uses set_bit, which is atomic and therefore a barrier */
+	cpumask_set_cpu(cpu, &waiting_cpus);
 
 	ADD_STATS(taken_slow, 1);
-	ADD_STATS(taken_slow_nested, prev != NULL);
-
-	do {
-		unsigned long flags;
-
-		/* clear pending */
-		xen_clear_irq_pending(irq);
-
-		/* check again make sure it didn't become free while
-		   we weren't looking  */
-		ret = xen_spin_trylock(lock);
-		if (ret) {
-			ADD_STATS(taken_slow_pickup, 1);
-
-			/*
-			 * If we interrupted another spinlock while it
-			 * was blocking, make sure it doesn't block
-			 * without rechecking the lock.
-			 */
-			if (prev != NULL)
-				xen_set_irq_pending(irq);
-			goto out;
-		}
 
-		flags = __raw_local_save_flags();
-		if (irq_enable) {
-			ADD_STATS(taken_slow_irqenable, 1);
-			raw_local_irq_enable();
-		}
+	/* clear pending */
+	xen_clear_irq_pending(irq);
 
-		/*
-		 * Block until irq becomes pending.  If we're
-		 * interrupted at this point (after the trylock but
-		 * before entering the block), then the nested lock
-		 * handler guarantees that the irq will be left
-		 * pending if there's any chance the lock became free;
-		 * xen_poll_irq() returns immediately if the irq is
-		 * pending.
-		 */
-		xen_poll_irq(irq);
+	/* Only check lock once pending cleared */
+	barrier();
 
-		raw_local_irq_restore(flags);
+	/* check again make sure it didn't become free while
+	   we weren't looking  */
+	if (ACCESS_ONCE(lock->tickets.head) == want) {
+		ADD_STATS(taken_slow_pickup, 1);
+		goto out;
+	}
 
-		ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
-	} while (!xen_test_irq_pending(irq)); /* check for spurious wakeups */
+	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
+	xen_poll_irq(irq);
+	ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
 
 	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
-	unspinning_lock(xl, prev);
+	cpumask_clear_cpu(cpu, &waiting_cpus);
+	w->lock = NULL;
 	spin_time_accum_blocked(start);
-
-	return ret;
 }
 
-static inline void __xen_spin_lock(struct arch_spinlock *lock, bool irq_enable)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	unsigned timeout;
-	u8 oldval;
-	u64 start_spin;
-
-	ADD_STATS(taken, 1);
-
-	start_spin = spin_time_start();
-
-	do {
-		u64 start_spin_fast = spin_time_start();
-
-		timeout = TIMEOUT;
-
-		asm("1: xchgb %1,%0\n"
-		    "   testb %1,%1\n"
-		    "   jz 3f\n"
-		    "2: rep;nop\n"
-		    "   cmpb $0,%0\n"
-		    "   je 1b\n"
-		    "   dec %2\n"
-		    "   jnz 2b\n"
-		    "3:\n"
-		    : "+m" (xl->lock), "=q" (oldval), "+r" (timeout)
-		    : "1" (1)
-		    : "memory");
-
-		spin_time_accum_spinning(start_spin_fast);
-
-	} while (unlikely(oldval != 0 &&
-			  (TIMEOUT == ~0 || !xen_spin_lock_slow(lock, irq_enable))));
-
-	spin_time_accum_total(start_spin);
-}
-
-static void xen_spin_lock(struct arch_spinlock *lock)
-{
-	__xen_spin_lock(lock, false);
-}
-
-static void xen_spin_lock_flags(struct arch_spinlock *lock, unsigned long flags)
-{
-	__xen_spin_lock(lock, !raw_irqs_disabled_flags(flags));
-}
-
-static noinline void xen_spin_unlock_slow(struct xen_spinlock *xl)
+static void xen_unlock_kick(struct arch_spinlock *lock, unsigned next)
 {
 	int cpu;
 
 	ADD_STATS(released_slow, 1);
 
-	for_each_online_cpu(cpu) {
-		/* XXX should mix up next cpu selection */
-		if (per_cpu(lock_spinners, cpu) == xl) {
+	for_each_cpu(cpu, &waiting_cpus) {
+		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
+
+		if (w->lock == lock && w->want == next) {
 			ADD_STATS(released_slow_kicked, 1);
 			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
 			break;
@@ -320,28 +155,6 @@ static noinline void xen_spin_unlock_slow(struct xen_spinlock *xl)
 	}
 }
 
-static void xen_spin_unlock(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	ADD_STATS(released, 1);
-
-	smp_wmb();		/* make sure no writes get moved after unlock */
-	xl->lock = 0;		/* release lock */
-
-	/*
-	 * Make sure unlock happens before checking for waiting
-	 * spinners.  We need a strong barrier to enforce the
-	 * write-read ordering to different memory locations, as the
-	 * CPU makes no implied guarantees about their ordering.
-	 */
-	mb();
-
-	if (unlikely(xl->spinners))
-		xen_spin_unlock_slow(xl);
-}
-#endif
-
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
 	BUG();
@@ -376,14 +189,8 @@ void xen_uninit_lock_cpu(int cpu)
 
 void __init xen_init_spinlocks(void)
 {
-#if 0
-	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
-	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
-	pv_lock_ops.spin_lock = xen_spin_lock;
-	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
-	pv_lock_ops.spin_trylock = xen_spin_trylock;
-	pv_lock_ops.spin_unlock = xen_spin_unlock;
-#endif
+	pv_lock_ops.lock_spinning = xen_lock_spinning;
+	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
 #ifdef CONFIG_XEN_DEBUG_FS
@@ -401,37 +208,21 @@ static int __init xen_spinlock_debugfs(void)
 
 	debugfs_create_u8("zero_stats", 0644, d_spin_debug, &zero_stats);
 
-	debugfs_create_u32("timeout", 0644, d_spin_debug, &lock_timeout);
-
-	debugfs_create_u64("taken", 0444, d_spin_debug, &spinlock_stats.taken);
 	debugfs_create_u32("taken_slow", 0444, d_spin_debug,
 			   &spinlock_stats.taken_slow);
-	debugfs_create_u32("taken_slow_nested", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_nested);
 	debugfs_create_u32("taken_slow_pickup", 0444, d_spin_debug,
 			   &spinlock_stats.taken_slow_pickup);
 	debugfs_create_u32("taken_slow_spurious", 0444, d_spin_debug,
 			   &spinlock_stats.taken_slow_spurious);
-	debugfs_create_u32("taken_slow_irqenable", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_irqenable);
 
-	debugfs_create_u64("released", 0444, d_spin_debug, &spinlock_stats.released);
 	debugfs_create_u32("released_slow", 0444, d_spin_debug,
 			   &spinlock_stats.released_slow);
 	debugfs_create_u32("released_slow_kicked", 0444, d_spin_debug,
 			   &spinlock_stats.released_slow_kicked);
 
-	debugfs_create_u64("time_spinning", 0444, d_spin_debug,
-			   &spinlock_stats.time_spinning);
 	debugfs_create_u64("time_blocked", 0444, d_spin_debug,
 			   &spinlock_stats.time_blocked);
-	debugfs_create_u64("time_total", 0444, d_spin_debug,
-			   &spinlock_stats.time_total);
 
-	xen_debugfs_create_u32_array("histo_total", 0444, d_spin_debug,
-				     spinlock_stats.histo_spin_total, HISTO_BUCKETS + 1);
-	xen_debugfs_create_u32_array("histo_spinning", 0444, d_spin_debug,
-				     spinlock_stats.histo_spin_spinning, HISTO_BUCKETS + 1);
 	xen_debugfs_create_u32_array("histo_blocked", 0444, d_spin_debug,
 				     spinlock_stats.histo_spin_blocked, HISTO_BUCKETS + 1);
 
-- 
1.7.2.3



* [PATCH 10/20] x86/pvticketlock: keep count of blocked cpus
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
                   ` (8 preceding siblings ...)
  2010-11-03 14:59 ` [PATCH 09/20] xen/pvticketlock: Xen implementation for PV ticket locks Jeremy Fitzhardinge
@ 2010-11-03 14:59 ` Jeremy Fitzhardinge
  2010-11-03 14:59 ` [PATCH 11/20] x86/pvticketlock: use callee-save for lock_spinning Jeremy Fitzhardinge
                   ` (10 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

When a CPU blocks by calling into __ticket_lock_spinning, keep a count in
the spinlock.  This allows __ticket_unlock_kick to more accurately tell
whether it has any work to do (in many cases, a spinlock may be contended,
but none of the waiters have gone into blocking).

This adds two locked instructions to the spinlock slow path (once a cpu
has already spun on the lock for SPIN_THRESHOLD iterations), and adds
another one or two bytes to struct arch_spinlock.

We need to make sure we increment the waiting counter before doing the
last-chance check of the lock to see if we picked it up in the meantime.
If we don't then there's a potential deadlock:

	lock holder		lock waiter

				clear event channel
				check lock for pickup (did not)
	release lock
	check waiting counter
		(=0, no kick)
				add waiting counter
				block (=deadlock)

Moving the "add waiting counter" earlier avoids the deadlock:

	lock holder		lock waiter

				clear event channel
				add waiting counter
				check lock for pickup (did not)
	release lock
	check waiting counter
		(=1, kick)
				block (and immediately wake)

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h       |   27 ++++++++++++++++++++++++++-
 arch/x86/include/asm/spinlock_types.h |    3 +++
 arch/x86/xen/spinlock.c               |    4 ++++
 3 files changed, 33 insertions(+), 1 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index a79dfee..3deabca 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -65,6 +65,31 @@ static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock, u
 {
 }
 
+static __always_inline bool __ticket_lock_waiters(const struct arch_spinlock *lock)
+{
+	return false;
+}
+#else
+static inline void __ticket_add_waiting(struct arch_spinlock *lock)
+{
+	if (sizeof(lock->waiting) == sizeof(u8))
+		asm (LOCK_PREFIX "addb $1, %0" : "+m" (lock->waiting) : : "memory");
+	else
+		asm (LOCK_PREFIX "addw $1, %0" : "+m" (lock->waiting) : : "memory");
+}
+
+static inline void __ticket_sub_waiting(struct arch_spinlock *lock)
+{
+	if (sizeof(lock->waiting) == sizeof(u8))
+		asm (LOCK_PREFIX "subb $1, %0" : "+m" (lock->waiting) : : "memory");
+	else
+		asm (LOCK_PREFIX "subw $1, %0" : "+m" (lock->waiting) : : "memory");
+}
+
+static __always_inline bool __ticket_lock_waiters(const struct arch_spinlock *lock)
+{
+	return ACCESS_ONCE(lock->waiting) != 0;
+}
 #endif	/* CONFIG_PARAVIRT_SPINLOCKS */
 
 /*
@@ -106,7 +131,7 @@ static __always_inline struct __raw_tickets __ticket_spin_claim(struct arch_spin
  */
 static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
-	if (unlikely(lock->tickets.tail != next))
+	if (unlikely(__ticket_lock_waiters(lock)))
 		____ticket_unlock_kick(lock, next);
 }
 
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 48dafc3..b396ed5 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -26,6 +26,9 @@ typedef struct arch_spinlock {
 			__ticket_t head, tail;
 		} tickets;
 	};
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+	__ticket_t waiting;
+#endif
 } arch_spinlock_t;
 
 #define __ARCH_SPIN_LOCK_UNLOCKED	{ { .slock = 0 } }
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 5feb897..b8f09d5 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -119,6 +119,8 @@ static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
 	/* Only check lock once pending cleared */
 	barrier();
 
+	__ticket_add_waiting(lock);
+
 	/* check again make sure it didn't become free while
 	   we weren't looking  */
 	if (ACCESS_ONCE(lock->tickets.head) == want) {
@@ -133,6 +135,8 @@ static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
 	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
+	__ticket_sub_waiting(lock);
+
 	cpumask_clear_cpu(cpu, &waiting_cpus);
 	w->lock = NULL;
 	spin_time_accum_blocked(start);
-- 
1.7.2.3


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 11/20] x86/pvticketlock: use callee-save for lock_spinning
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
                   ` (9 preceding siblings ...)
  2010-11-03 14:59 ` [PATCH 10/20] x86/pvticketlock: keep count of blocked cpus Jeremy Fitzhardinge
@ 2010-11-03 14:59 ` Jeremy Fitzhardinge
  2010-11-03 14:59 ` [PATCH 12/20] x86/pvticketlock: use callee-save for unlock_kick as well Jeremy Fitzhardinge
                   ` (9 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Although the lock_spinning calls in the spinlock code are on the
uncommon path, their presence can cause the compiler to generate many
more register save/restores in the function pre/postamble, which is in
the fast path.  To avoid this, convert it to use the pvops callee-save
calling convention, which defers all the save/restores until the actual
function is called, keeping the fastpath clean.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/paravirt.h       |    2 +-
 arch/x86/include/asm/paravirt_types.h |    2 +-
 arch/x86/kernel/paravirt-spinlocks.c  |    2 +-
 arch/x86/xen/spinlock.c               |    3 ++-
 4 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 79c77e5..dbd5914 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -724,7 +724,7 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 
 static inline void __ticket_lock_spinning(struct arch_spinlock *lock, unsigned ticket)
 {
-	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
+	PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
 static inline void ____ticket_unlock_kick(struct arch_spinlock *lock, unsigned ticket)
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 968ee31..9ad6d088 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -316,7 +316,7 @@ struct pv_mmu_ops {
 
 struct arch_spinlock;
 struct pv_lock_ops {
-	void (*lock_spinning)(struct arch_spinlock *lock, unsigned ticket);
+	struct paravirt_callee_save lock_spinning;
 	void (*unlock_kick)(struct arch_spinlock *lock, unsigned ticket);
 };
 
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index c2e010e..4251c1d 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -9,7 +9,7 @@
 
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-	.lock_spinning = paravirt_nop,
+	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
 	.unlock_kick = paravirt_nop,
 #endif
 };
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index b8f09d5..2bd51d8 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -141,6 +141,7 @@ out:
 	w->lock = NULL;
 	spin_time_accum_blocked(start);
 }
+PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
 
 static void xen_unlock_kick(struct arch_spinlock *lock, unsigned next)
 {
@@ -193,7 +194,7 @@ void xen_uninit_lock_cpu(int cpu)
 
 void __init xen_init_spinlocks(void)
 {
-	pv_lock_ops.lock_spinning = xen_lock_spinning;
+	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
 	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
-- 
1.7.2.3


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 12/20] x86/pvticketlock: use callee-save for unlock_kick as well
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
                   ` (10 preceding siblings ...)
  2010-11-03 14:59 ` [PATCH 11/20] x86/pvticketlock: use callee-save for lock_spinning Jeremy Fitzhardinge
@ 2010-11-03 14:59 ` Jeremy Fitzhardinge
  2010-11-03 14:59 ` [PATCH 13/20] x86/pvticketlock: make sure unlock is seen by everyone before checking waiters Jeremy Fitzhardinge
                   ` (8 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

The unlock code is typically inlined throughout the kernel, so it's useful
to make sure there's minimal register pressure overhead from the presence
of the unlock_kick pvop call.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/paravirt.h       |    2 +-
 arch/x86/include/asm/paravirt_types.h |    2 +-
 arch/x86/kernel/paravirt-spinlocks.c  |    2 +-
 arch/x86/xen/spinlock.c               |    3 ++-
 4 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index dbd5914..b6ff89a 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -729,7 +729,7 @@ static inline void __ticket_lock_spinning(struct arch_spinlock *lock, unsigned t
 
 static inline void ____ticket_unlock_kick(struct arch_spinlock *lock, unsigned ticket)
 {
-	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
+	PVOP_VCALLEE2(pv_lock_ops.unlock_kick, lock, ticket);
 }
 
 #endif
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 9ad6d088..bb02831 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -317,7 +317,7 @@ struct pv_mmu_ops {
 struct arch_spinlock;
 struct pv_lock_ops {
 	struct paravirt_callee_save lock_spinning;
-	void (*unlock_kick)(struct arch_spinlock *lock, unsigned ticket);
+	struct paravirt_callee_save unlock_kick;
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 4251c1d..efad219 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -10,7 +10,7 @@
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
 	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
-	.unlock_kick = paravirt_nop,
+	.unlock_kick = __PV_IS_CALLEE_SAVE(paravirt_nop),
 #endif
 };
 EXPORT_SYMBOL(pv_lock_ops);
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 2bd51d8..bf2f409 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -159,6 +159,7 @@ static void xen_unlock_kick(struct arch_spinlock *lock, unsigned next)
 		}
 	}
 }
+PV_CALLEE_SAVE_REGS_THUNK(xen_unlock_kick);
 
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
@@ -195,7 +196,7 @@ void xen_uninit_lock_cpu(int cpu)
 void __init xen_init_spinlocks(void)
 {
 	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
-	pv_lock_ops.unlock_kick = xen_unlock_kick;
+	pv_lock_ops.unlock_kick = PV_CALLEE_SAVE(xen_unlock_kick);
 }
 
 #ifdef CONFIG_XEN_DEBUG_FS
-- 
1.7.2.3


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 13/20] x86/pvticketlock: make sure unlock is seen by everyone before checking waiters
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
                   ` (11 preceding siblings ...)
  2010-11-03 14:59 ` [PATCH 12/20] x86/pvticketlock: use callee-save for unlock_kick as well Jeremy Fitzhardinge
@ 2010-11-03 14:59 ` Jeremy Fitzhardinge
  2010-11-03 14:59 ` [PATCH 14/20] x86/ticketlock: loosen ordering restraints on unlock Jeremy Fitzhardinge
                   ` (7 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

If we don't put a real memory barrier between the unlocking increment
of the queue head and the check for lock waiters, we can end up with a
deadlock as a result of the unlock write being reordered with respect
to the waiters read.  In effect, the check on the waiters count could
happen before the unlock, which leads to a deadlock:

	unlocker		locker
	check waiters (none)
				check lock head (timeout)
				add to waiters
				block
	release lock		=> deadlock

Adding a mb() barrier prevents the unlocker's waiter check from happening
before the lock release.

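As an illustration, a combined view of the unlock path with this patch
applied (a sketch only; the actual change below puts the mb() inside
__ticket_lock_waiters(), which the unlock path reaches via
__ticket_unlock_kick()):

	/*
	 * The store to 'head' and the load of 'waiting' are different memory
	 * locations, so only a full barrier prevents the load from being
	 * reordered (by the CPU) ahead of the store.
	 */
	static __always_inline void unlock_and_maybe_kick(arch_spinlock_t *lock)
	{
		__ticket_t next = lock->tickets.head + 1;

		__ticket_unlock_release(lock);		/* store: head++ */
		mb();					/* order the store before the load below */
		if (ACCESS_ONCE(lock->waiting) != 0)	/* load: any blocked waiters? */
			____ticket_unlock_kick(lock, next);
	}
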
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h |    7 +++++++
 1 files changed, 7 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 3deabca..b9a1aae 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -88,6 +88,13 @@ static inline void __ticket_sub_waiting(struct arch_spinlock *lock)
 
 static __always_inline bool __ticket_lock_waiters(const struct arch_spinlock *lock)
 {
+	/*
+	 * Make sure everyone sees the unlock write before we check the
+	 * waiting count.  The processor ordering doesn't guarantee this
+	 * because they're different memory locations.
+	 */
+	mb();
+
 	return ACCESS_ONCE(lock->waiting) != 0;
 }
 #endif	/* CONFIG_PARAVIRT_SPINLOCKS */
-- 
1.7.2.3


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 14/20] x86/ticketlock: loosen ordering restraints on unlock
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
                   ` (12 preceding siblings ...)
  2010-11-03 14:59 ` [PATCH 13/20] x86/pvticketlock: make sure unlock is seen by everyone before checking waiters Jeremy Fitzhardinge
@ 2010-11-03 14:59 ` Jeremy Fitzhardinge
  2010-11-03 14:59 ` [PATCH 15/20] x86/ticketlock: prevent compiler reordering into locked region Jeremy Fitzhardinge
                   ` (6 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

It's only necessary to prevent the compiler from reordering code out of
the locked region past the unlock.  Putting too many barriers in prevents
the compiler from generating the best code when PV spinlocks are enabled.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h |    8 +++++---
 1 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index b9a1aae..d6de5e7 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -46,9 +46,7 @@ static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
 #else
 static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
 {
-	barrier();
 	lock->tickets.head++;
-	barrier();
 }
 #endif
 
@@ -184,7 +182,11 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-	__ticket_t next = lock->tickets.head + 1;
+	__ticket_t next;
+
+	barrier();		/* prevent reordering out of locked region */
+
+	next = lock->tickets.head + 1;
 	__ticket_unlock_release(lock);
 	__ticket_unlock_kick(lock, next);
 }
-- 
1.7.2.3


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 15/20] x86/ticketlock: prevent compiler reordering into locked region
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
                   ` (13 preceding siblings ...)
  2010-11-03 14:59 ` [PATCH 14/20] x86/ticketlock: loosen ordering restraints on unlock Jeremy Fitzhardinge
@ 2010-11-03 14:59 ` Jeremy Fitzhardinge
  2010-11-03 14:59 ` [PATCH 16/20] x86/ticketlock: don't inline _spin_unlock when using paravirt spinlocks Jeremy Fitzhardinge
                   ` (5 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Add a barrier() at the end of __raw_spin_unlock() to prevent instructions
from after the locked region from being reordered into it.  In theory such
reordering shouldn't cause any problems, but in practice the system locks
up under load without this barrier...

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index d6de5e7..158b330 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -189,6 +189,8 @@ static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 	next = lock->tickets.head + 1;
 	__ticket_unlock_release(lock);
 	__ticket_unlock_kick(lock, next);
+
+	barrier();		/* prevent reordering into locked region */
 }
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
-- 
1.7.2.3


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 16/20] x86/ticketlock: don't inline _spin_unlock when using paravirt spinlocks
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
                   ` (14 preceding siblings ...)
  2010-11-03 14:59 ` [PATCH 15/20] x86/ticketlock: prevent compiler reordering into locked region Jeremy Fitzhardinge
@ 2010-11-03 14:59 ` Jeremy Fitzhardinge
  2010-11-03 14:59 ` [PATCH 17/20] x86/ticketlock: clarify barrier in arch_spin_lock Jeremy Fitzhardinge
                   ` (4 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

The code size expands somewhat, and it's probably better to just call
a function rather than inline it.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/Kconfig     |    3 +++
 kernel/Kconfig.locks |    2 +-
 2 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index cea0cd9..e67bf49 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -577,6 +577,9 @@ config PARAVIRT_SPINLOCKS
 
 	  If you are unsure how to answer this question, answer N.
 
+config ARCH_NOINLINE_SPIN_UNLOCK
+       def_bool PARAVIRT_SPINLOCKS
+
 config PARAVIRT_CLOCK
 	bool
 
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index 88c92fb..3216c22 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -125,7 +125,7 @@ config INLINE_SPIN_LOCK_IRQSAVE
 		 ARCH_INLINE_SPIN_LOCK_IRQSAVE
 
 config INLINE_SPIN_UNLOCK
-	def_bool !DEBUG_SPINLOCK && (!PREEMPT || ARCH_INLINE_SPIN_UNLOCK)
+	def_bool !DEBUG_SPINLOCK && (!PREEMPT || ARCH_INLINE_SPIN_UNLOCK) && !ARCH_NOINLINE_SPIN_UNLOCK
 
 config INLINE_SPIN_UNLOCK_BH
 	def_bool !DEBUG_SPINLOCK && ARCH_INLINE_SPIN_UNLOCK_BH
-- 
1.7.2.3


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 17/20] x86/ticketlock: clarify barrier in arch_spin_lock
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
                   ` (15 preceding siblings ...)
  2010-11-03 14:59 ` [PATCH 16/20] x86/ticketlock: don't inline _spin_unlock when using paravirt spinlocks Jeremy Fitzhardinge
@ 2010-11-03 14:59 ` Jeremy Fitzhardinge
  2010-11-03 14:59 ` [PATCH 18/20] x86/ticketlock: remove .slock Jeremy Fitzhardinge
                   ` (3 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Make sure the barrier in arch_spin_lock is definitely in the code path.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 158b330..c7455e1 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -151,13 +151,13 @@ static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 
 		do {
 			if (inc.head == inc.tail)
-				return;
+				goto out;
 			cpu_relax();
 			inc.head = ACCESS_ONCE(lock->tickets.head);
 		} while (--count);
 		__ticket_lock_spinning(lock, inc.tail);
 	}
-	barrier();		/* make sure nothing creeps before the lock is taken */
+out:	barrier();		/* make sure nothing creeps before the lock is taken */
 }
 
 static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
-- 
1.7.2.3


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 18/20] x86/ticketlock: remove .slock
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
                   ` (16 preceding siblings ...)
  2010-11-03 14:59 ` [PATCH 17/20] x86/ticketlock: clarify barrier in arch_spin_lock Jeremy Fitzhardinge
@ 2010-11-03 14:59 ` Jeremy Fitzhardinge
  2010-11-03 15:00 ` [PATCH 19/20] x86/ticketlocks: use overlapping read to eliminate mb() Jeremy Fitzhardinge
                   ` (2 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock_types.h |    3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index b396ed5..def8010 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -20,7 +20,6 @@ typedef u32 __ticketpair_t;
 
 typedef struct arch_spinlock {
 	union {
-		unsigned int slock;
 		__ticketpair_t ticketpair;
 		struct __raw_tickets {
 			__ticket_t head, tail;
@@ -31,7 +30,7 @@ typedef struct arch_spinlock {
 #endif
 } arch_spinlock_t;
 
-#define __ARCH_SPIN_LOCK_UNLOCKED	{ { .slock = 0 } }
+#define __ARCH_SPIN_LOCK_UNLOCKED	{ { .tickets = {0, 0} } }
 
 typedef struct {
 	unsigned int lock;
-- 
1.7.2.3


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 19/20] x86/ticketlocks: use overlapping read to eliminate mb()
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
                   ` (17 preceding siblings ...)
  2010-11-03 14:59 ` [PATCH 18/20] x86/ticketlock: remove .slock Jeremy Fitzhardinge
@ 2010-11-03 15:00 ` Jeremy Fitzhardinge
  2010-11-03 15:00 ` [PATCH 20/20] x86/ticketlock: rename ticketpair to head_tail Jeremy Fitzhardinge
  2010-11-12 22:12 ` [PATCH 00/20] x86: ticket lock rewrite and paravirtualization H. Peter Anvin
  20 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 15:00 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

When reading the 'waiting' counter, use a longer-than-necessary read
which also overlaps 'head'.  This read is guaranteed to be in-order
with respect to any unlock writes to 'head', thereby eliminating the
need for an explicit mb() to enforce the read-after-write ordering.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h       |   14 +++++++-------
 arch/x86/include/asm/spinlock_types.h |   24 ++++++++++++++++++++----
 2 files changed, 27 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index c7455e1..a16b6e4 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -86,14 +86,14 @@ static inline void __ticket_sub_waiting(struct arch_spinlock *lock)
 
 static __always_inline bool __ticket_lock_waiters(const struct arch_spinlock *lock)
 {
-	/*
-	 * Make sure everyone sees the unlock write before we check the
-	 * waiting count.  The processor ordering doesn't guarantee this
-	 * because they're different memory locations.
+	/* 
+	 * lock->waiting_head is a union element which aliases both
+	 * 'waiting' and 'head'.  Since the memory access overlaps
+	 * 'head', the read is forced to be in-order with respect to
+	 * unlock writes to 'head', eliminating the need for an
+	 * explicit mb(). (Intel memory ordering rules.)
 	 */
-	mb();
-
-	return ACCESS_ONCE(lock->waiting) != 0;
+	return ((__ticket_t)ACCESS_ONCE(lock->waiting_head)) != 0;
 }
 #endif	/* CONFIG_PARAVIRT_SPINLOCKS */
 
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index def8010..307ef0b 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -18,6 +18,24 @@ typedef u32 __ticketpair_t;
 #define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
 #define TICKET_MASK	((1 << TICKET_SHIFT) - 1)
 
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+typedef struct arch_spinlock {
+	union {
+		struct {
+			__ticket_t waiting;
+			union {
+				__ticketpair_t ticketpair;
+				struct __raw_tickets {
+					__ticket_t head, tail;
+				} tickets;
+			};
+		};
+		__ticketpair_t waiting_head; /* aliases waiting and head */
+	};
+} arch_spinlock_t;
+
+#define __ARCH_SPIN_LOCK_UNLOCKED	{ { { 0, { 0 } } } }
+#else
 typedef struct arch_spinlock {
 	union {
 		__ticketpair_t ticketpair;
@@ -25,12 +43,10 @@ typedef struct arch_spinlock {
 			__ticket_t head, tail;
 		} tickets;
 	};
-#ifdef CONFIG_PARAVIRT_SPINLOCKS
-	__ticket_t waiting;
-#endif
 } arch_spinlock_t;
 
-#define __ARCH_SPIN_LOCK_UNLOCKED	{ { .tickets = {0, 0} } }
+#define __ARCH_SPIN_LOCK_UNLOCKED	{ { 0 } }
+#endif
 
 typedef struct {
 	unsigned int lock;
-- 
1.7.2.3


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH 20/20] x86/ticketlock: rename ticketpair to head_tail
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
                   ` (18 preceding siblings ...)
  2010-11-03 15:00 ` [PATCH 19/20] x86/ticketlocks: use overlapping read to eliminate mb() Jeremy Fitzhardinge
@ 2010-11-03 15:00 ` Jeremy Fitzhardinge
  2010-11-12 22:12 ` [PATCH 00/20] x86: ticket lock rewrite and paravirtualization H. Peter Anvin
  20 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 15:00 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Make it clearer what fields head_tail is actually overlapping with.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h       |    2 +-
 arch/x86/include/asm/spinlock_types.h |    4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index a16b6e4..6850884 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -174,7 +174,7 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 
 	new.slock = tmp.slock + (1 << TICKET_SHIFT);
 
-	ret = cmpxchg(&lock->ticketpair, tmp.slock, new.slock) == tmp.slock;
+	ret = cmpxchg(&lock->head_tail, tmp.slock, new.slock) == tmp.slock;
 	barrier();		/* just make nothing creeps before lock is claimed */
 
 	return ret;
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 307ef0b..df031ec 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -24,7 +24,7 @@ typedef struct arch_spinlock {
 		struct {
 			__ticket_t waiting;
 			union {
-				__ticketpair_t ticketpair;
+				__ticketpair_t head_tail;
 				struct __raw_tickets {
 					__ticket_t head, tail;
 				} tickets;
@@ -38,7 +38,7 @@ typedef struct arch_spinlock {
 #else
 typedef struct arch_spinlock {
 	union {
-		__ticketpair_t ticketpair;
+		__ticketpair_t head_tail;
 		struct __raw_tickets {
 			__ticket_t head, tail;
 		} tickets;
-- 
1.7.2.3


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* Re: [PATCH 02/20] x86/ticketlock: convert spin loop to C
  2010-11-03 14:59 ` [PATCH 02/20] x86/ticketlock: convert spin loop to C Jeremy Fitzhardinge
@ 2010-11-03 15:11   ` Eric Dumazet
  2010-11-03 15:38     ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 43+ messages in thread
From: Eric Dumazet @ 2010-11-03 15:11 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Peter Zijlstra, Linux Kernel Mailing List, Nick Piggin,
	Jan Beulich, Avi Kivity, Xen-devel, H. Peter Anvin,
	Linux Virtualization, Srivatsa Vaddagiri, Jeremy Fitzhardinge

On Wednesday, 03 November 2010 at 10:59 -0400, Jeremy Fitzhardinge
wrote:
> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> 
> The inner loop of __ticket_spin_lock isn't doing anything very special,
> so reimplement it in C.
> 
> For the 8 bit ticket lock variant, we use a register union to get direct
> access to the lower and upper bytes in the tickets, but unfortunately gcc
> won't generate a direct comparison between the two halves of the register,
> so the generated asm isn't quite as pretty as the hand-coded version.
> However benchmarking shows that this is actually a small improvement in
> runtime performance on some benchmarks, and never a slowdown.
> 
> We also need to make sure there's a barrier at the end of the lock loop
> to make sure that the compiler doesn't move any instructions from within
> the locked region into the region where we don't yet own the lock.
> 
> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> ---
>  arch/x86/include/asm/spinlock.h |   58 +++++++++++++++++++-------------------
>  1 files changed, 29 insertions(+), 29 deletions(-)
> 
> diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
> index d6d5784..6711d36 100644
> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
> @@ -58,21 +58,21 @@
>  #if (NR_CPUS < 256)
>  static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
>  {
> -	unsigned short inc = 1 << TICKET_SHIFT;
> -
> -	asm volatile (
> -		LOCK_PREFIX "xaddw %w0, %1\n"
> -		"1:\t"
> -		"cmpb %h0, %b0\n\t"
> -		"je 2f\n\t"
> -		"rep ; nop\n\t"
> -		"movb %1, %b0\n\t"
> -		/* don't need lfence here, because loads are in-order */
> -		"jmp 1b\n"
> -		"2:"
> -		: "+Q" (inc), "+m" (lock->slock)
> -		:
> -		: "memory", "cc");
> +	register union {
> +		struct __raw_tickets tickets;
> +		unsigned short slock;
> +	} inc = { .slock = 1 << TICKET_SHIFT };
> +
> +	asm volatile (LOCK_PREFIX "xaddw %w0, %1\n"
> +		      : "+Q" (inc), "+m" (lock->slock) : : "memory", "cc");
> +
> +	for (;;) {
> +		if (inc.tickets.head == inc.tickets.tail)
> +			return;
> +		cpu_relax();
> +		inc.tickets.head = ACCESS_ONCE(lock->tickets.head);
> +	}
> +	barrier();		/* make sure nothing creeps before the lock is taken */

Isn't this barrier() never reached?


>  }
>  
>  static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
> @@ -105,22 +105,22 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
>  static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
>  {
>  	unsigned inc = 1 << TICKET_SHIFT;
> -	unsigned tmp;
> +	__ticket_t tmp;
>  
> -	asm volatile(LOCK_PREFIX "xaddl %0, %1\n"
> -		     "movzwl %w0, %2\n\t"
> -		     "shrl $16, %0\n\t"
> -		     "1:\t"
> -		     "cmpl %0, %2\n\t"
> -		     "je 2f\n\t"
> -		     "rep ; nop\n\t"
> -		     "movzwl %1, %2\n\t"
> -		     /* don't need lfence here, because loads are in-order */
> -		     "jmp 1b\n"
> -		     "2:"
> -		     : "+r" (inc), "+m" (lock->slock), "=&r" (tmp)
> -		     :
> -		     : "memory", "cc");
> +	asm volatile(LOCK_PREFIX "xaddl %0, %1\n\t"
> +		     : "+r" (inc), "+m" (lock->slock)
> +		     : : "memory", "cc");
> +
> +	tmp = inc;
> +	inc >>= TICKET_SHIFT;
> +
> +	for (;;) {
> +		if ((__ticket_t)inc == tmp)
> +			return;
> +		cpu_relax();
> +		tmp = ACCESS_ONCE(lock->tickets.head);
> +	}
> +	barrier();		/* make sure nothing creeps before the lock is taken */

same here

>  }
>  
>  static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)



^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 03/20] x86/ticketlock: Use C for __ticket_spin_unlock
  2010-11-03 14:59 ` [PATCH 03/20] x86/ticketlock: Use C for __ticket_spin_unlock Jeremy Fitzhardinge
@ 2010-11-03 15:13   ` Eric Dumazet
  2010-11-03 18:00     ` Jeremy Fitzhardinge
  2010-11-13 10:05   ` Américo Wang
  1 sibling, 1 reply; 43+ messages in thread
From: Eric Dumazet @ 2010-11-03 15:13 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Peter Zijlstra, Linux Kernel Mailing List, Nick Piggin,
	Jan Beulich, Avi Kivity, Xen-devel, H. Peter Anvin,
	Linux Virtualization, Srivatsa Vaddagiri, Jeremy Fitzhardinge

On Wednesday, 03 November 2010 at 10:59 -0400, Jeremy Fitzhardinge
wrote:
> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> 
> If we don't need to use a locked inc for unlock, then implement it in C.
> 
> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> ---
>  arch/x86/include/asm/spinlock.h |   33 ++++++++++++++++++---------------
>  1 files changed, 18 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
> index 6711d36..082990a 100644
> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
> @@ -33,9 +33,23 @@
>   * On PPro SMP or if we are using OOSTORE, we use a locked operation to unlock
>   * (PPro errata 66, 92)
>   */
> -# define UNLOCK_LOCK_PREFIX LOCK_PREFIX
> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
> +{
> +	if (sizeof(lock->tickets.head) == sizeof(u8))
> +		asm (LOCK_PREFIX "incb %0"
> +		     : "+m" (lock->tickets.head) : : "memory");
> +	else
> +		asm (LOCK_PREFIX "incw %0"
> +		     : "+m" (lock->tickets.head) : : "memory");
> +
> +}
>  #else
> -# define UNLOCK_LOCK_PREFIX
> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
> +{
> +	barrier();

technically speaking, it should be: smp_wmb()

> +	lock->tickets.head++;
> +	barrier();

Not sure you need this one.

> +}
>  #endif
>  


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 02/20] x86/ticketlock: convert spin loop to C
  2010-11-03 15:11   ` Eric Dumazet
@ 2010-11-03 15:38     ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 15:38 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Peter Zijlstra, Linux Kernel Mailing List, Nick Piggin,
	Jan Beulich, Avi Kivity, Xen-devel, H. Peter Anvin,
	Linux Virtualization, Srivatsa Vaddagiri, Jeremy Fitzhardinge

On 11/03/2010 11:11 AM, Eric Dumazet wrote:
> On Wednesday, 03 November 2010 at 10:59 -0400, Jeremy Fitzhardinge
> wrote:
>> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
>>
>> The inner loop of __ticket_spin_lock isn't doing anything very special,
>> so reimplement it in C.
>>
>> For the 8 bit ticket lock variant, we use a register union to get direct
>> access to the lower and upper bytes in the tickets, but unfortunately gcc
>> won't generate a direct comparison between the two halves of the register,
>> so the generated asm isn't quite as pretty as the hand-coded version.
>> However benchmarking shows that this is actually a small improvement in
>> runtime performance on some benchmarks, and never a slowdown.
>>
>> We also need to make sure there's a barrier at the end of the lock loop
>> to make sure that the compiler doesn't move any instructions from within
>> the locked region into the region where we don't yet own the lock.
>>
>> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
>> ---
>>  arch/x86/include/asm/spinlock.h |   58 +++++++++++++++++++-------------------
>>  1 files changed, 29 insertions(+), 29 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
>> index d6d5784..6711d36 100644
>> --- a/arch/x86/include/asm/spinlock.h
>> +++ b/arch/x86/include/asm/spinlock.h
>> @@ -58,21 +58,21 @@
>>  #if (NR_CPUS < 256)
>>  static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
>>  {
>> -	unsigned short inc = 1 << TICKET_SHIFT;
>> -
>> -	asm volatile (
>> -		LOCK_PREFIX "xaddw %w0, %1\n"
>> -		"1:\t"
>> -		"cmpb %h0, %b0\n\t"
>> -		"je 2f\n\t"
>> -		"rep ; nop\n\t"
>> -		"movb %1, %b0\n\t"
>> -		/* don't need lfence here, because loads are in-order */
>> -		"jmp 1b\n"
>> -		"2:"
>> -		: "+Q" (inc), "+m" (lock->slock)
>> -		:
>> -		: "memory", "cc");
>> +	register union {
>> +		struct __raw_tickets tickets;
>> +		unsigned short slock;
>> +	} inc = { .slock = 1 << TICKET_SHIFT };
>> +
>> +	asm volatile (LOCK_PREFIX "xaddw %w0, %1\n"
>> +		      : "+Q" (inc), "+m" (lock->slock) : : "memory", "cc");
>> +
>> +	for (;;) {
>> +		if (inc.tickets.head == inc.tickets.tail)
>> +			return;
>> +		cpu_relax();
>> +		inc.tickets.head = ACCESS_ONCE(lock->tickets.head);
>> +	}
>> +	barrier();		/* make sure nothing creeps before the lock is taken */
> Isn't this barrier() never reached?

Sorry, a later patch makes this clearer.  I should have folded it in.

    J

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 03/20] x86/ticketlock: Use C for __ticket_spin_unlock
  2010-11-03 15:13   ` Eric Dumazet
@ 2010-11-03 18:00     ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-03 18:00 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Peter Zijlstra, Linux Kernel Mailing List, Jan Beulich,
	Avi Kivity, Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

On 11/03/2010 11:13 AM, Eric Dumazet wrote:
> On Wednesday, 03 November 2010 at 10:59 -0400, Jeremy Fitzhardinge
> wrote:
>> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
>>
>> If we don't need to use a locked inc for unlock, then implement it in C.
>>
>> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
>> ---
>>  arch/x86/include/asm/spinlock.h |   33 ++++++++++++++++++---------------
>>  1 files changed, 18 insertions(+), 15 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
>> index 6711d36..082990a 100644
>> --- a/arch/x86/include/asm/spinlock.h
>> +++ b/arch/x86/include/asm/spinlock.h
>> @@ -33,9 +33,23 @@
>>   * On PPro SMP or if we are using OOSTORE, we use a locked operation to unlock
>>   * (PPro errata 66, 92)
>>   */
>> -# define UNLOCK_LOCK_PREFIX LOCK_PREFIX
>> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
>> +{
>> +	if (sizeof(lock->tickets.head) == sizeof(u8))
>> +		asm (LOCK_PREFIX "incb %0"
>> +		     : "+m" (lock->tickets.head) : : "memory");
>> +	else
>> +		asm (LOCK_PREFIX "incw %0"
>> +		     : "+m" (lock->tickets.head) : : "memory");
>> +
>> +}
>>  #else
>> -# define UNLOCK_LOCK_PREFIX
>> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
>> +{
>> +	barrier();
> technically speaking, it should be: smp_wmb()

Perhaps.  In practise it won't make a difference because it is defined
as barrier() unless OOSTORE is defined, in which case we need to do a
locked increment anyway.

Thanks,
    J

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 04/20] x86/ticketlock: make large and small ticket versions of spin_lock the same
  2010-11-03 14:59 ` [PATCH 04/20] x86/ticketlock: make large and small ticket versions of spin_lock the same Jeremy Fitzhardinge
@ 2010-11-12 12:19   ` Srivatsa Vaddagiri
  2010-11-12 16:27     ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 43+ messages in thread
From: Srivatsa Vaddagiri @ 2010-11-12 12:19 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Peter Zijlstra, Linux Kernel Mailing List, Nick Piggin,
	Jan Beulich, Avi Kivity, Xen-devel, H. Peter Anvin,
	Linux Virtualization, Jeremy Fitzhardinge

On Wed, Nov 03, 2010 at 10:59:45AM -0400, Jeremy Fitzhardinge wrote:
> Make the bulk of __ticket_spin_lock look identical for large and small
> number of cpus.

[snip]

>  #if (NR_CPUS < 256)
>  static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
>  {
> -	register union {
> -		struct __raw_tickets tickets;
> -		unsigned short slock;
> -	} inc = { .slock = 1 << TICKET_SHIFT };
> +	register struct __raw_tickets inc = { .tail = 1 };

[snip]

>  #else
>  static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
>  {
> -	unsigned inc = 1 << TICKET_SHIFT;
> -	__ticket_t tmp;
> +	register struct __raw_tickets inc = { .tickets.tail = 1 };

s/.tickets//?

Otherwise I get a compile error for NR_CPUS > 256, with just 4 patches applied.

- vatsa

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 04/20] x86/ticketlock: make large and small ticket versions of spin_lock the same
  2010-11-12 12:19   ` Srivatsa Vaddagiri
@ 2010-11-12 16:27     ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-12 16:27 UTC (permalink / raw)
  To: vatsa
  Cc: Peter Zijlstra, Linux Kernel Mailing List, Nick Piggin,
	Jan Beulich, Avi Kivity, Xen-devel, H. Peter Anvin,
	Linux Virtualization, Jeremy Fitzhardinge

On 11/12/2010 04:19 AM, Srivatsa Vaddagiri wrote:
> On Wed, Nov 03, 2010 at 10:59:45AM -0400, Jeremy Fitzhardinge wrote:
>> Make the bulk of __ticket_spin_lock look identical for large and small
>> number of cpus.
> [snip]
>
>>  #if (NR_CPUS < 256)
>>  static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
>>  {
>> -	register union {
>> -		struct __raw_tickets tickets;
>> -		unsigned short slock;
>> -	} inc = { .slock = 1 << TICKET_SHIFT };
>> +	register struct __raw_tickets inc = { .tail = 1 };
> [snip]
>
>>  #else
>>  static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
>>  {
>> -	unsigned inc = 1 << TICKET_SHIFT;
>> -	__ticket_t tmp;
>> +	register struct __raw_tickets inc = { .tickets.tail = 1 };
> s/.tickets//?
>
> Otherwise I get a compile error for NR_CPUS > 256, with just 4 patches applied.

Yeah, likely.  That's precisely why I wanted to make them the same ;).

    J

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 00/20] x86: ticket lock rewrite and paravirtualization
  2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
                   ` (19 preceding siblings ...)
  2010-11-03 15:00 ` [PATCH 20/20] x86/ticketlock: rename ticketpair to head_tail Jeremy Fitzhardinge
@ 2010-11-12 22:12 ` H. Peter Anvin
  2010-11-12 22:17   ` Jeremy Fitzhardinge
  20 siblings, 1 reply; 43+ messages in thread
From: H. Peter Anvin @ 2010-11-12 22:12 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Peter Zijlstra, Linux Kernel Mailing List, Nick Piggin,
	Jan Beulich, Avi Kivity, Xen-devel, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

On 11/03/2010 07:59 AM, Jeremy Fitzhardinge wrote:
> 
>       - with an unmodified struct spinlock, it can check to see if
>         head == tail after unlock; if not, then there's someone else
>         trying to lock, and we can do a kick.  Unfortunately this
>         generates very high level of redundant kicks, because the
>         waiting CPU might not have blocked yet (which is the common
>         case)
> 

How high is "very high" here -- most of the time (so that any mitigation
on the slow path is useless)?

	-hpa

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 00/20] x86: ticket lock rewrite and paravirtualization
  2010-11-12 22:12 ` [PATCH 00/20] x86: ticket lock rewrite and paravirtualization H. Peter Anvin
@ 2010-11-12 22:17   ` Jeremy Fitzhardinge
  2010-11-12 22:20     ` H. Peter Anvin
  0 siblings, 1 reply; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-12 22:17 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Jeremy Fitzhardinge, Peter Zijlstra, Linux Kernel Mailing List,
	Nick Piggin, Jan Beulich, Avi Kivity, Xen-devel,
	Linux Virtualization, Srivatsa Vaddagiri

On 11/12/2010 02:12 PM, H. Peter Anvin wrote:
> On 11/03/2010 07:59 AM, Jeremy Fitzhardinge wrote:
>>       - with an unmodified struct spinlock, it can check to see if
>>         head == tail after unlock; if not, then there's someone else
>>         trying to lock, and we can do a kick.  Unfortunately this
>>         generates very high level of redundant kicks, because the
>>         waiting CPU might not have blocked yet (which is the common
>>         case)
>>
> How high is "very high" here -- most of the time (so that any mitigation
> on the slow patch is useless)?

I'll need to remeasure, but I think around 90% of the slowpath entries
were spurious without this.  In other words, when spinlocks do contend,
most of the time it isn't very serious and the other cpu doesn't spend
much time spinning.

    J

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 00/20] x86: ticket lock rewrite and paravirtualization
  2010-11-12 22:17   ` Jeremy Fitzhardinge
@ 2010-11-12 22:20     ` H. Peter Anvin
  2010-11-15 20:00       ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 43+ messages in thread
From: H. Peter Anvin @ 2010-11-12 22:20 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Peter Zijlstra, Linux Kernel Mailing List, Nick Piggin,
	Jan Beulich, Avi Kivity, Xen-devel, Linux Virtualization,
	Srivatsa Vaddagiri

On 11/12/2010 02:17 PM, Jeremy Fitzhardinge wrote:
> On 11/12/2010 02:12 PM, H. Peter Anvin wrote:
>> On 11/03/2010 07:59 AM, Jeremy Fitzhardinge wrote:
>>>       - with an unmodified struct spinlock, it can check to see if
>>>         head == tail after unlock; if not, then there's someone else
>>>         trying to lock, and we can do a kick.  Unfortunately this
>>>         generates very high level of redundant kicks, because the
>>>         waiting CPU might not have blocked yet (which is the common
>>>         case)
>>>
>> How high is "very high" here -- most of the time (so that any mitigation
>> on the slow path is useless)?
> 
> I'll need to remeasure, but I think around 90% of the slowpath entries
> were spurious without this.  In other words, when spinlocks do contend,
> most of the time it isn't very serious and the other cpu doesn't spend
> much time spinning.
> 

90% of the slowpath entries is one thing, my real question is the
fraction of fastpath entries that get diverted to the slowpath.  It
affects where mitigation needs to happen.

	-hpa


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 01/20] x86/ticketlock: clean up types and accessors
  2010-11-03 14:59 ` [PATCH 01/20] x86/ticketlock: clean up types and accessors Jeremy Fitzhardinge
@ 2010-11-13  9:57   ` Américo Wang
  2010-11-15 19:36     ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 43+ messages in thread
From: Américo Wang @ 2010-11-13  9:57 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Peter Zijlstra, Linux Kernel Mailing List, Nick Piggin,
	Jan Beulich, Avi Kivity, Xen-devel, H. Peter Anvin,
	Linux Virtualization, Srivatsa Vaddagiri, Jeremy Fitzhardinge

On Wed, Nov 03, 2010 at 10:59:42AM -0400, Jeremy Fitzhardinge wrote:
...
> static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
> {
>-	int inc = 0x00010000;
>-	int tmp;
>+	unsigned inc = 1 << TICKET_SHIFT;
>+	unsigned tmp;

Please don't use old implicit-int.

> 
>-	return (((tmp >> TICKET_SHIFT) - tmp) & ((1 << TICKET_SHIFT) - 1)) > 1;
>+	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;


There is a type promotion here.


> }
> 
> #ifndef CONFIG_PARAVIRT_SPINLOCKS
>diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
>index dcb48b2..4582640 100644
>--- a/arch/x86/include/asm/spinlock_types.h
>+++ b/arch/x86/include/asm/spinlock_types.h
>@@ -5,11 +5,27 @@
> # error "please don't include this file directly"
> #endif
> 
>+#include <linux/types.h>
>+
>+#if (CONFIG_NR_CPUS < 256)
>+typedef u8  __ticket_t;
>+#else
>+typedef u16 __ticket_t;
>+#endif
>+
>+#define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
>+#define TICKET_MASK	((1 << TICKET_SHIFT) - 1)


So here you may need to cast the result to __ticket_t.


Thanks.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 03/20] x86/ticketlock: Use C for __ticket_spin_unlock
  2010-11-03 14:59 ` [PATCH 03/20] x86/ticketlock: Use C for __ticket_spin_unlock Jeremy Fitzhardinge
  2010-11-03 15:13   ` Eric Dumazet
@ 2010-11-13 10:05   ` Américo Wang
  2010-11-13 22:34     ` Paolo Bonzini
  2010-11-15 19:38     ` Jeremy Fitzhardinge
  1 sibling, 2 replies; 43+ messages in thread
From: Américo Wang @ 2010-11-13 10:05 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Peter Zijlstra, Linux Kernel Mailing List, Nick Piggin,
	Jan Beulich, Avi Kivity, Xen-devel, H. Peter Anvin,
	Linux Virtualization, Srivatsa Vaddagiri, Jeremy Fitzhardinge

On Wed, Nov 03, 2010 at 10:59:44AM -0400, Jeremy Fitzhardinge wrote:
>  * On PPro SMP or if we are using OOSTORE, we use a locked operation to unlock
>  * (PPro errata 66, 92)
>  */
>-# define UNLOCK_LOCK_PREFIX LOCK_PREFIX
>+static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
>+{
>+	if (sizeof(lock->tickets.head) == sizeof(u8))
>+		asm (LOCK_PREFIX "incb %0"
>+		     : "+m" (lock->tickets.head) : : "memory");
>+	else
>+		asm (LOCK_PREFIX "incw %0"
>+		     : "+m" (lock->tickets.head) : : "memory");

This 'if/else' really should be done with #ifdef, even though
the compiler may be smart enough to remove it.

>+
>+}
> #else
>-# define UNLOCK_LOCK_PREFIX
>+static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
>+{
>+	barrier();
>+	lock->tickets.head++;
>+	barrier();

The second barrier() is not needed.

Thanks.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 06/20] x86/ticketlock: make __ticket_spin_trylock common
  2010-11-03 14:59 ` [PATCH 06/20] x86/ticketlock: make __ticket_spin_trylock common Jeremy Fitzhardinge
@ 2010-11-13 10:17   ` Américo Wang
  2010-11-13 10:48     ` Eric Dumazet
  0 siblings, 1 reply; 43+ messages in thread
From: Américo Wang @ 2010-11-13 10:17 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Peter Zijlstra, Linux Kernel Mailing List, Nick Piggin,
	Jan Beulich, Avi Kivity, Xen-devel, H. Peter Anvin,
	Linux Virtualization, Srivatsa Vaddagiri, Jeremy Fitzhardinge

On Wed, Nov 03, 2010 at 10:59:47AM -0400, Jeremy Fitzhardinge wrote:

>+	union {
>+		struct __raw_tickets tickets;
>+		__ticketpair_t slock;
>+	} tmp, new;
>+	int ret;
>+
>+	tmp.tickets = ACCESS_ONCE(lock->tickets);
>+	if (tmp.tickets.head != tmp.tickets.tail)
>+		return 0;
>+
>+	new.slock = tmp.slock + (1 << TICKET_SHIFT);
>+
>+	ret = cmpxchg(&lock->ticketpair, tmp.slock, new.slock) == tmp.slock;
>+	barrier();		/* just make nothing creeps before lock is claimed */

This one should be smp_wmb(), right? It isn't protected by CONFIG_X86_OOSTORE.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 06/20] x86/ticketlock: make __ticket_spin_trylock common
  2010-11-13 10:17   ` Américo Wang
@ 2010-11-13 10:48     ` Eric Dumazet
  2010-11-15 19:39       ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 43+ messages in thread
From: Eric Dumazet @ 2010-11-13 10:48 UTC (permalink / raw)
  To: Américo Wang
  Cc: Jeremy Fitzhardinge, Peter Zijlstra, Linux Kernel Mailing List,
	Nick Piggin, Jan Beulich, Avi Kivity, Xen-devel, H. Peter Anvin,
	Linux Virtualization, Srivatsa Vaddagiri, Jeremy Fitzhardinge

On Saturday, 13 November 2010 at 18:17 +0800, Américo Wang wrote:
> On Wed, Nov 03, 2010 at 10:59:47AM -0400, Jeremy Fitzhardinge wrote:
> 
> >+	union {
> >+		struct __raw_tickets tickets;
> >+		__ticketpair_t slock;
> >+	} tmp, new;
> >+	int ret;
> >+
> >+	tmp.tickets = ACCESS_ONCE(lock->tickets);
> >+	if (tmp.tickets.head != tmp.tickets.tail)
> >+		return 0;
> >+
> >+	new.slock = tmp.slock + (1 << TICKET_SHIFT);
> >+
> >+	ret = cmpxchg(&lock->ticketpair, tmp.slock, new.slock) == tmp.slock;
> >+	barrier();		/* just make nothing creeps before lock is claimed */
> 
> This one should be smp_wmb(), right? It isn't protected by CONFIG_X86_OOSTORE.

cmpxchg() is a full memory barrier, no need for smp_wmb() or barrier()




^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 03/20] x86/ticketlock: Use C for __ticket_spin_unlock
  2010-11-13 10:05   ` Américo Wang
@ 2010-11-13 22:34     ` Paolo Bonzini
  2010-11-15 19:38     ` Jeremy Fitzhardinge
  1 sibling, 0 replies; 43+ messages in thread
From: Paolo Bonzini @ 2010-11-13 22:34 UTC (permalink / raw)
  To: Américo Wang
  Cc: Jeremy Fitzhardinge, Peter Zijlstra, Linux Kernel Mailing List,
	Nick Piggin, Jan Beulich, Avi Kivity, Xen-devel, H. Peter Anvin,
	Linux Virtualization, Srivatsa Vaddagiri, Jeremy Fitzhardinge

On 11/13/2010 11:05 AM, Américo Wang wrote:
> On Wed, Nov 03, 2010 at 10:59:44AM -0400, Jeremy Fitzhardinge wrote:
>>   * On PPro SMP or if we are using OOSTORE, we use a locked operation to unlock
>>   * (PPro errata 66, 92)
>>   */
>> -# define UNLOCK_LOCK_PREFIX LOCK_PREFIX
>> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
>> +{
>> +	if (sizeof(lock->tickets.head) == sizeof(u8))
>> +		asm (LOCK_PREFIX "incb %0"
>> +		     : "+m" (lock->tickets.head) : : "memory");
>> +	else
>> +		asm (LOCK_PREFIX "incw %0"
>> +		     : "+m" (lock->tickets.head) : : "memory");
>
> This 'if/else' really should be done with #ifdef, even though
> the compiler may be smart enough to remove it.

That's a sure path to bitrot.

Paolo

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 01/20] x86/ticketlock: clean up types and accessors
  2010-11-13  9:57   ` Américo Wang
@ 2010-11-15 19:36     ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-15 19:36 UTC (permalink / raw)
  To: Américo Wang
  Cc: Peter Zijlstra, Linux Kernel Mailing List, Jan Beulich,
	Avi Kivity, Xen-devel, H. Peter Anvin, Linux Virtualization,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

On 11/13/2010 01:57 AM, Américo Wang wrote:
> On Wed, Nov 03, 2010 at 10:59:42AM -0400, Jeremy Fitzhardinge wrote:
> ...
>> static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
>> {
>> -	int inc = 0x00010000;
>> -	int tmp;
>> +	unsigned inc = 1 << TICKET_SHIFT;
>> +	unsigned tmp;
> Please don't use old implicit-int.

I don't mind much either way, but I don't think I've seen anyone worry
about this before.

>> -	return (((tmp >> TICKET_SHIFT) - tmp) & ((1 << TICKET_SHIFT) - 1)) > 1;
>> +	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
>
> There is a type promotion here.

...

>> }
>>
>> #ifndef CONFIG_PARAVIRT_SPINLOCKS
>> diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
>> index dcb48b2..4582640 100644
>> --- a/arch/x86/include/asm/spinlock_types.h
>> +++ b/arch/x86/include/asm/spinlock_types.h
>> @@ -5,11 +5,27 @@
>> # error "please don't include this file directly"
>> #endif
>>
>> +#include <linux/types.h>
>> +
>> +#if (CONFIG_NR_CPUS < 256)
>> +typedef u8  __ticket_t;
>> +#else
>> +typedef u16 __ticket_t;
>> +#endif
>> +
>> +#define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
>> +#define TICKET_MASK	((1 << TICKET_SHIFT) - 1)
>
> So here you may need to cast the result to __ticket_t.

OK.

    J

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 03/20] x86/ticketlock: Use C for __ticket_spin_unlock
  2010-11-13 10:05   ` Américo Wang
  2010-11-13 22:34     ` Paolo Bonzini
@ 2010-11-15 19:38     ` Jeremy Fitzhardinge
  1 sibling, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-15 19:38 UTC (permalink / raw)
  To: Américo Wang
  Cc: Peter Zijlstra, Linux Kernel Mailing List, Nick Piggin,
	Jan Beulich, Avi Kivity, Xen-devel, H. Peter Anvin,
	Linux Virtualization, Srivatsa Vaddagiri, Jeremy Fitzhardinge

On 11/13/2010 02:05 AM, Américo Wang wrote:
> On Wed, Nov 03, 2010 at 10:59:44AM -0400, Jeremy Fitzhardinge wrote:
>>  * On PPro SMP or if we are using OOSTORE, we use a locked operation to unlock
>>  * (PPro errata 66, 92)
>>  */
>> -# define UNLOCK_LOCK_PREFIX LOCK_PREFIX
>> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
>> +{
>> +	if (sizeof(lock->tickets.head) == sizeof(u8))
>> +		asm (LOCK_PREFIX "incb %0"
>> +		     : "+m" (lock->tickets.head) : : "memory");
>> +	else
>> +		asm (LOCK_PREFIX "incw %0"
>> +		     : "+m" (lock->tickets.head) : : "memory");
> This 'if/else' really should be done with #ifdef, even though
> the compiler may be smart enough to remove it.

No, we depend on if/else with constant arguments doing the right thing
all over the kernel.  It is always preferable to use it instead of
#ifdef where possible, so that the two branches of code are always
subjected to compiler checking, even if they're not being used.

>> +
>> +}
>> #else
>> -# define UNLOCK_LOCK_PREFIX
>> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
>> +{
>> +	barrier();
>> +	lock->tickets.head++;
>> +	barrier();
> The second barrier() is not needed.

Agreed.  It gets removed in a later patch.

    J

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 06/20] x86/ticketlock: make __ticket_spin_trylock common
  2010-11-13 10:48     ` Eric Dumazet
@ 2010-11-15 19:39       ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-15 19:39 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Américo Wang, Peter Zijlstra, Linux Kernel Mailing List,
	Nick Piggin, Jan Beulich, Avi Kivity, Xen-devel, H. Peter Anvin,
	Linux Virtualization, Srivatsa Vaddagiri, Jeremy Fitzhardinge

On 11/13/2010 02:48 AM, Eric Dumazet wrote:
> Le samedi 13 novembre 2010 à 18:17 +0800, Américo Wang a écrit :
>> On Wed, Nov 03, 2010 at 10:59:47AM -0400, Jeremy Fitzhardinge wrote:
>>
>>> +	union {
>>> +		struct __raw_tickets tickets;
>>> +		__ticketpair_t slock;
>>> +	} tmp, new;
>>> +	int ret;
>>> +
>>> +	tmp.tickets = ACCESS_ONCE(lock->tickets);
>>> +	if (tmp.tickets.head != tmp.tickets.tail)
>>> +		return 0;
>>> +
>>> +	new.slock = tmp.slock + (1 << TICKET_SHIFT);
>>> +
>>> +	ret = cmpxchg(&lock->ticketpair, tmp.slock, new.slock) == tmp.slock;
>>> +	barrier();		/* just make nothing creeps before lock is claimed */
>> This one should be smp_wmb(), right? It isn't protected by CONFIG_X86_OOSTORE.
> cmpxchg() is a full memory barrier, no need for smp_wmb() or barrier()

Agreed.
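
So the trylock tail can just lose the barrier (sketch):

	new.slock = tmp.slock + (1 << TICKET_SHIFT);

	/* cmpxchg() is a full barrier, nothing creeps past it */
	return cmpxchg(&lock->ticketpair, tmp.slock, new.slock) == tmp.slock;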

    J


* Re: [PATCH 00/20] x86: ticket lock rewrite and paravirtualization
  2010-11-12 22:20     ` H. Peter Anvin
@ 2010-11-15 20:00       ` Jeremy Fitzhardinge
  2010-11-15 20:03         ` H. Peter Anvin
  0 siblings, 1 reply; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-15 20:00 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Peter Zijlstra, Linux Kernel Mailing List, Jan Beulich,
	Avi Kivity, Xen-devel, Linux Virtualization, Srivatsa Vaddagiri,
	Mathieu Desnoyers

On 11/12/2010 02:20 PM, H. Peter Anvin wrote:
> On 11/12/2010 02:17 PM, Jeremy Fitzhardinge wrote:
>> On 11/12/2010 02:12 PM, H. Peter Anvin wrote:
>>> On 11/03/2010 07:59 AM, Jeremy Fitzhardinge wrote:
>>>>       - with an unmodified struct spinlock, it can check to see if
>>>>         head == tail after unlock; if not, then there's someone else
>>>>         trying to lock, and we can do a kick.  Unfortunately this
>>>>         generates very high level of redundant kicks, because the
>>>>         waiting CPU might not have blocked yet (which is the common
>>>>         case)
>>>>
>>> How high is "very high" here -- most of the time (so that any mitigation
>>> on the slow path is useless)?
>> I'll need to remeasure, but I think around 90% of the slowpath entries
>> were spurious without this.  In other words, when spinlocks do contend,
>> most of the time it isn't very serious and the other cpu doesn't spend
>> much time spinning.
>>
> 90% of the slowpath entries is one thing, my real question is the
> fraction of fastpath entries that get diverted to the slowpath.  It
> affects where mitigation needs to happen.

There are two questions: how many unlock events *must* go into the
slowpath for correctness reasons (ie, because the corresponding lock
also went slowpath and got blocked there), and how many end up going
into the slowpath due to imperfect heuristics?

The number of lock events which go slowpath is very dependent on the
workload of the kernel in question and of the machine overall.  On a
system with no CPU overcommit it should be zero (assuming that in the
native case no Linux spinlock remains contended for so long that it will
trigger the slowpath).  On a very overcommitted system, it comes down to
the likelihood that a VCPU will get preempted while running in a
critical region: roughly Tcrit * Phz, where Tcrit is the critical
section time in seconds and Phz is the preemption rate of the VCPU
scheduler in Hz.  So, for example, a lock with a 10us critical section
and a 100Hz preemption rate has about a 0.1% chance of being preempted
mid-critical-section, possibly causing the other lockers to enter the
slow path.

On the unlock side, it needs to test whether the lock has any waiters in
a slowpath state.  A conservative test is whether there are any
outstanding tickets, but in my measurements 90% of CPUs which spun on a
lock ended up getting it without having to take the slowpath.  This led
me to investigate more precise tests; the current one is a count of
slowpath-entering CPUs waiting on the lock.
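
The unlock-side test with the count is then roughly (sketch only;
"waiting" and __ticket_unlock_kick() are stand-ins for whatever the
actual patches call them):

	static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
	{
		__ticket_unlock_release(lock);

		/* only bother kicking if someone has actually blocked */
		if (unlikely(ACCESS_ONCE(lock->waiting)))
			__ticket_unlock_kick(lock, ACCESS_ONCE(lock->tickets.head));
	}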

Another approach I discussed with PeterZ and Mathieu is to steal the LSB
of the ticket counters (halving the max CPU count) to use as a "there is
someone in slowpath waiting on this lock".  But I haven't spent the time
to work out an algorithm to maintain that flag (or flags, since there
are bits available) in a correct and efficient way.

    J


* Re: [PATCH 00/20] x86: ticket lock rewrite and paravirtualization
  2010-11-15 20:00       ` Jeremy Fitzhardinge
@ 2010-11-15 20:03         ` H. Peter Anvin
  2010-11-15 20:14           ` Peter Zijlstra
  0 siblings, 1 reply; 43+ messages in thread
From: H. Peter Anvin @ 2010-11-15 20:03 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Peter Zijlstra, Linux Kernel Mailing List, Jan Beulich,
	Avi Kivity, Xen-devel, Linux Virtualization, Srivatsa Vaddagiri,
	Mathieu Desnoyers

On 11/15/2010 12:00 PM, Jeremy Fitzhardinge wrote:
> 
> Another approach I discussed with PeterZ and Mathieu is to steal the LSB
> of the ticket counters (halving the max CPU count) to use as a "there is
> someone in slowpath waiting on this lock".  But I haven't spent the time
> to work out an algorithm to maintain that flag (or flags, since there
> are bits available) in a correct and efficient way.
> 

Definitely worth pondering.

	-hpa



* Re: [PATCH 00/20] x86: ticket lock rewrite and paravirtualization
  2010-11-15 20:03         ` H. Peter Anvin
@ 2010-11-15 20:14           ` Peter Zijlstra
  2010-11-15 21:02             ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 43+ messages in thread
From: Peter Zijlstra @ 2010-11-15 20:14 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Jeremy Fitzhardinge, Linux Kernel Mailing List, Jan Beulich,
	Avi Kivity, Xen-devel, Linux Virtualization, Srivatsa Vaddagiri,
	Mathieu Desnoyers

On Mon, 2010-11-15 at 12:03 -0800, H. Peter Anvin wrote:
> On 11/15/2010 12:00 PM, Jeremy Fitzhardinge wrote:
> > 
> > Another approach I discussed with PeterZ and Mathieu is to steal the LSB
> > of the ticket counters (halving the max CPU count) to use as a "there is
> > someone in slowpath waiting on this lock".  But I haven't spent the time
> > to work out an algorithm to maintain that flag (or flags, since there
> > are bits available) in a correct and efficient way.
> > 
> 
> Definitely worth pondering.

Right, so the idea was to make the ticket increment 2, which would leave
the LSB of both the head and tail available. I think that if one were to
set both (using a cmpxchg), the ticket fast-path wouldn't need any
changes since head==tail is still the correct condition for acquisition.

Then the unlock needs an added conditional:
  if (tail & 1) 
	unlock_slowpath()
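
In C, roughly (a sketch; __ticket_unlock_slowpath() is just a
placeholder name for the kick path, and the release is assumed to add
2 now):

	static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
	{
		__ticket_unlock_release(lock);		/* head += 2 in this scheme */

		if (unlikely(ACCESS_ONCE(lock->tickets.tail) & 1))
			__ticket_unlock_slowpath(lock);	/* a waiter has blocked */
	}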



* Re: [PATCH 00/20] x86: ticket lock rewrite and paravirtualization
  2010-11-15 20:14           ` Peter Zijlstra
@ 2010-11-15 21:02             ` Jeremy Fitzhardinge
  2010-11-15 21:08               ` Peter Zijlstra
  0 siblings, 1 reply; 43+ messages in thread
From: Jeremy Fitzhardinge @ 2010-11-15 21:02 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: H. Peter Anvin, Linux Kernel Mailing List, Jan Beulich,
	Avi Kivity, Xen-devel, Linux Virtualization, Srivatsa Vaddagiri,
	Mathieu Desnoyers

On 11/15/2010 12:14 PM, Peter Zijlstra wrote:
> On Mon, 2010-11-15 at 12:03 -0800, H. Peter Anvin wrote:
>> On 11/15/2010 12:00 PM, Jeremy Fitzhardinge wrote:
>>> Another approach I discussed with PeterZ and Mathieu is to steal the LSB
>>> of the ticket counters (halving the max CPU count) to use as a "there is
>>> someone in slowpath waiting on this lock".  But I haven't spent the time
>>> to work out an algorithm to maintain that flag (or flags, since there
>>> are bits available) in a correct and efficient way.
>>>
>> Definitely worth pondering.
> Right, so the idea was to make the ticket increment 2, which would leave
> the LSB of both the head and tail available. I think that if one were to
> set both (using a cmpxchg), the ticket fast-path wouldn't need any
> changes since head==tail is still the correct condition for acquisition.
>
> Then the unlock needs an added conditional:
>   if (tail & 1) 
> 	unlock_slowpath()

The tricky part is knowing how to clear the bit(s) on the last person
dropping out of the slow path, and making that race-free with respect to
new lockers entering the slow path.  I guess you could leave it in
slowpath state until you're the last unlocker (ie, you're unlocking into
the uncontended state), whereupon you also clear the bits; that would
probably need a cmpxchg to make it safe WRT new lockers entering the
slowpath.
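
Something like this, perhaps (utterly untested sketch, reusing the
union trick from the trylock patch):

	static inline void __ticket_clear_slowpath(arch_spinlock_t *lock)
	{
		union {
			struct __raw_tickets tickets;
			__ticketpair_t slock;
		} old, new;

		old.tickets = ACCESS_ONCE(lock->tickets);

		/* only clear when this unlock has left the lock uncontended */
		if ((old.tickets.head & ~1) != (old.tickets.tail & ~1))
			return;

		new = old;
		new.tickets.head &= ~1;
		new.tickets.tail &= ~1;

		/* if a new locker slipped in, this fails and the flags stay set */
		cmpxchg(&lock->ticketpair, old.slock, new.slock);
	}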

As a heuristic, it shouldn't be too bad performance-wise, since
(handwaving) if ticketholder N has entered the slowpath, then it's likely
that N+1 will as well.

    J



* Re: [PATCH 00/20] x86: ticket lock rewrite and paravirtualization
  2010-11-15 21:02             ` Jeremy Fitzhardinge
@ 2010-11-15 21:08               ` Peter Zijlstra
  0 siblings, 0 replies; 43+ messages in thread
From: Peter Zijlstra @ 2010-11-15 21:08 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: H. Peter Anvin, Linux Kernel Mailing List, Jan Beulich,
	Avi Kivity, Xen-devel, Linux Virtualization, Srivatsa Vaddagiri,
	Mathieu Desnoyers

On Mon, 2010-11-15 at 13:02 -0800, Jeremy Fitzhardinge wrote:
> 
> As a heuristic, it shouldn't be too bad performance-wise, since
> (handwaving) if ticketholder N has entered the slowpath, then it's likely
> that N+1 will as well.

Yes, esp. if the whole slow unlock path takes more cycles than you spin
for to begin with.

I think this approach is definitely worth trying.


end of thread

Thread overview: 43+ messages
2010-11-03 14:59 [PATCH 00/20] x86: ticket lock rewrite and paravirtualization Jeremy Fitzhardinge
2010-11-03 14:59 ` [PATCH 01/20] x86/ticketlock: clean up types and accessors Jeremy Fitzhardinge
2010-11-13  9:57   ` Américo Wang
2010-11-15 19:36     ` Jeremy Fitzhardinge
2010-11-03 14:59 ` [PATCH 02/20] x86/ticketlock: convert spin loop to C Jeremy Fitzhardinge
2010-11-03 15:11   ` Eric Dumazet
2010-11-03 15:38     ` Jeremy Fitzhardinge
2010-11-03 14:59 ` [PATCH 03/20] x86/ticketlock: Use C for __ticket_spin_unlock Jeremy Fitzhardinge
2010-11-03 15:13   ` Eric Dumazet
2010-11-03 18:00     ` Jeremy Fitzhardinge
2010-11-13 10:05   ` Américo Wang
2010-11-13 22:34     ` Paolo Bonzini
2010-11-15 19:38     ` Jeremy Fitzhardinge
2010-11-03 14:59 ` [PATCH 04/20] x86/ticketlock: make large and small ticket versions of spin_lock the same Jeremy Fitzhardinge
2010-11-12 12:19   ` Srivatsa Vaddagiri
2010-11-12 16:27     ` Jeremy Fitzhardinge
2010-11-03 14:59 ` [PATCH 05/20] x86/ticketlock: make __ticket_spin_lock common Jeremy Fitzhardinge
2010-11-03 14:59 ` [PATCH 06/20] x86/ticketlock: make __ticket_spin_trylock common Jeremy Fitzhardinge
2010-11-13 10:17   ` Américo Wang
2010-11-13 10:48     ` Eric Dumazet
2010-11-15 19:39       ` Jeremy Fitzhardinge
2010-11-03 14:59 ` [PATCH 07/20] x86/spinlocks: replace pv spinlocks with pv ticketlocks Jeremy Fitzhardinge
2010-11-03 14:59 ` [PATCH 08/20] x86/ticketlock: collapse a layer of functions Jeremy Fitzhardinge
2010-11-03 14:59 ` [PATCH 09/20] xen/pvticketlock: Xen implementation for PV ticket locks Jeremy Fitzhardinge
2010-11-03 14:59 ` [PATCH 10/20] x86/pvticketlock: keep count of blocked cpus Jeremy Fitzhardinge
2010-11-03 14:59 ` [PATCH 11/20] x86/pvticketlock: use callee-save for lock_spinning Jeremy Fitzhardinge
2010-11-03 14:59 ` [PATCH 12/20] x86/pvticketlock: use callee-save for unlock_kick as well Jeremy Fitzhardinge
2010-11-03 14:59 ` [PATCH 13/20] x86/pvticketlock: make sure unlock is seen by everyone before checking waiters Jeremy Fitzhardinge
2010-11-03 14:59 ` [PATCH 14/20] x86/ticketlock: loosen ordering restraints on unlock Jeremy Fitzhardinge
2010-11-03 14:59 ` [PATCH 15/20] x86/ticketlock: prevent compiler reordering into locked region Jeremy Fitzhardinge
2010-11-03 14:59 ` [PATCH 16/20] x86/ticketlock: don't inline _spin_unlock when using paravirt spinlocks Jeremy Fitzhardinge
2010-11-03 14:59 ` [PATCH 17/20] x86/ticketlock: clarify barrier in arch_spin_lock Jeremy Fitzhardinge
2010-11-03 14:59 ` [PATCH 18/20] x86/ticketlock: remove .slock Jeremy Fitzhardinge
2010-11-03 15:00 ` [PATCH 19/20] x86/ticketlocks: use overlapping read to eliminate mb() Jeremy Fitzhardinge
2010-11-03 15:00 ` [PATCH 20/20] x86/ticketlock: rename ticketpair to head_tail Jeremy Fitzhardinge
2010-11-12 22:12 ` [PATCH 00/20] x86: ticket lock rewrite and paravirtualization H. Peter Anvin
2010-11-12 22:17   ` Jeremy Fitzhardinge
2010-11-12 22:20     ` H. Peter Anvin
2010-11-15 20:00       ` Jeremy Fitzhardinge
2010-11-15 20:03         ` H. Peter Anvin
2010-11-15 20:14           ` Peter Zijlstra
2010-11-15 21:02             ` Jeremy Fitzhardinge
2010-11-15 21:08               ` Peter Zijlstra
