linux-kernel.vger.kernel.org archive mirror
* [PATCH RFC 00/12] X86 ticket lock cleanups and improvements
@ 2010-07-17  1:03 Jeremy Fitzhardinge
  2010-07-17  1:03 ` [PATCH RFC 03/12] x86/ticketlock: Use C for __ticket_spin_unlock Jeremy Fitzhardinge
                   ` (11 more replies)
  0 siblings, 12 replies; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-07-17  1:03 UTC (permalink / raw)
  To: Linux Kernel Mailing List
  Cc: Nick Piggin, Peter Zijlstra, Jan Beulich, Avi Kivity, Xen-devel

[ Sorry, resent with sensible threading and Nick's email corrected ]

Hi all,

This series does three things:

 - A general cleanup of the ticketlock implementation, including
   moving most of it into C and removing a pile of inline asm and ifdefs.

 - Convert the PV spinlock mechanism (enabled with
   CONFIG_PARAVIRT_SPINLOCKS) to a PV ticketlock mechanism.  The old
   way completely replaced the spinlock implementation, changing all
   the spinlock calls into indirect ones via paravirt-ops.  This was
   overkill, and caused noticeable performance regressions on some
   microarchitectures.

   The new scheme keeps the ticketlock algorithm, and uses the
   standard ticketlock code for both native and PV uses.  But it adds
   a couple of pvops hooks for the slow paths: one when we've been
   waiting a long time on a lock, and one when we're unlocking a lock
   which has people waiting on it.  (A rough sketch follows this list.)

 - A Xen implementation of these new pvop hooks, which shows how much
   simpler they make the backend code.
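
For concreteness, here's a rough sketch (not patch text; condensed from
patches 5-7 below, using the names introduced there) of where the two
hooks slot into the standard ticketlock paths:

	/* Sketch only: simplified from __ticket_spin_lock/unlock in this series */
	static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
	{
		register struct __raw_tickets inc = __ticket_spin_claim(lock);

		for (;;) {
			unsigned count = SPIN_THRESHOLD;

			do {
				if (inc.head == inc.tail)
					return;		/* got the lock */
				cpu_relax();
				inc.head = ACCESS_ONCE(lock->tickets.head);
			} while (--count);

			/* spun too long: ask the hypervisor to block us (pvop) */
			__ticket_lock_spinning(lock, inc.tail);
		}
	}

	static __always_inline void __ticket_spin_unlock(struct arch_spinlock *lock)
	{
		__ticket_t next = lock->tickets.head + 1;

		__ticket_unlock_release(lock);		/* normal unlock */
		__ticket_unlock_kick(lock, next);	/* pvop: kick a blocked waiter, if any */
	}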

I've benchmarked these changes with lmbench lat_mmap, which shows that
- at worst - these changes have no detrimental effect on performance
when run native.  In some cases there are surprising improvements
(running native with the pvop hooks enabled was noticeably faster than
without, for example).  (I also tried mmap-perf, but it seems to
hang indefinitely when I run it on 4 threads.)

The patches are against v2.6.33, but merge cleanly with current
linux-2.6.git.

Thanks,
	J

Jeremy Fitzhardinge (12):
  x86/ticketlock: clean up types and accessors
  x86/ticketlock: convert spin loop to C
  x86/ticketlock: Use C for __ticket_spin_unlock
  x86/ticketlock: make large and small ticket versions of spin_lock the
    same
  x86/ticketlock: make __ticket_spin_lock common
  x86/ticketlock: make __ticket_spin_trylock common
  x86/spinlocks: replace pv spinlocks with pv ticketlocks
  x86/ticketlock: collapse a layer of functions
  xen/pvticketlock: Xen implementation for PV ticket locks
  x86/pvticketlock: keep count of blocked cpus
  x86/pvticketlock: use callee-save for lock_spinning
  x86/pvticketlock: use callee-save for unlock_kick as well

 arch/x86/include/asm/paravirt.h       |   30 +---
 arch/x86/include/asm/paravirt_types.h |    8 +-
 arch/x86/include/asm/spinlock.h       |  241 ++++++++++++++--------------
 arch/x86/include/asm/spinlock_types.h |   26 +++-
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +--
 arch/x86/xen/spinlock.c               |  282 +++++----------------------------
 6 files changed, 192 insertions(+), 410 deletions(-)


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH RFC 10/12] x86/pvticketlock: keep count of blocked cpus
  2010-07-17  1:03 [PATCH RFC 00/12] X86 ticket lock cleanups and improvements Jeremy Fitzhardinge
                   ` (7 preceding siblings ...)
  2010-07-17  1:03 ` [PATCH RFC 05/12] x86/ticketlock: make __ticket_spin_lock common Jeremy Fitzhardinge
@ 2010-07-17  1:03 ` Jeremy Fitzhardinge
  2010-08-03  8:32   ` Peter Zijlstra
  2010-07-17  1:03 ` [PATCH RFC 12/12] x86/pvticketlock: use callee-save for unlock_kick as well Jeremy Fitzhardinge
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-07-17  1:03 UTC (permalink / raw)
  To: Linux Kernel Mailing List
  Cc: Nick Piggin, Peter Zijlstra, Jan Beulich, Avi Kivity, Xen-devel

When a CPU blocks by calling into __ticket_lock_spinning, keep a count of
blocked cpus in the spinlock.  This allows __ticket_unlock_kick to tell
more accurately whether it has any work to do (in many cases a spinlock
may be contended, but none of its waiters have actually blocked).

This adds two locked instructions to the spinlock slow path (taken only
once a cpu has already spun for SPIN_THRESHOLD iterations), and adds
another one or two bytes to struct arch_spinlock.

We need to make sure we increment the waiting counter before doing the
last-chance check of the lock to see if we picked it up in the meantime.
If we don't then there's a potential deadlock:

	lock holder		lock waiter

				clear event channel
				check lock for pickup (did not)
	release lock
	check waiting counter
		(=0, no kick)
				add waiting counter
				block (=deadlock)

Moving the "add waiting counter earler" avoids the deadlock:

	lock holder		lock waiter

				clear event channel
				add waiting counter
				check lock for pickup (did not)
	release lock
	check waiting counter
		(=1, kick)
				block (and immediately wake)
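
Condensed into code, the waiter side therefore looks like this (a sketch
only; the real sequence is in xen_lock_spinning() in the diff below, and
the names match it):

	xen_clear_irq_pending(irq);		/* clear event channel */
	barrier();				/* only check the lock once pending is cleared */

	__ticket_add_waiting(lock);		/* announce ourselves *before* the check */

	if (ACCESS_ONCE(lock->tickets.head) == want)
		goto out;			/* lock became free meanwhile: don't block */

	xen_poll_irq(irq);			/* block until kicked (or spurious wakeup) */
out:
	__ticket_sub_waiting(lock);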

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h       |   27 ++++++++++++++++++++++++++-
 arch/x86/include/asm/spinlock_types.h |    3 +++
 arch/x86/xen/spinlock.c               |    4 ++++
 3 files changed, 33 insertions(+), 1 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index a79dfee..3deabca 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -65,6 +65,31 @@ static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock, u
 {
 }
 
+static __always_inline bool __ticket_lock_waiters(const struct arch_spinlock *lock)
+{
+	return false;
+}
+#else
+static inline void __ticket_add_waiting(struct arch_spinlock *lock)
+{
+	if (sizeof(lock->waiting) == sizeof(u8))
+		asm (LOCK_PREFIX "addb $1, %0" : "+m" (lock->waiting) : : "memory");
+	else
+		asm (LOCK_PREFIX "addw $1, %0" : "+m" (lock->waiting) : : "memory");
+}
+
+static inline void __ticket_sub_waiting(struct arch_spinlock *lock)
+{
+	if (sizeof(lock->waiting) == sizeof(u8))
+		asm (LOCK_PREFIX "subb $1, %0" : "+m" (lock->waiting) : : "memory");
+	else
+		asm (LOCK_PREFIX "subw $1, %0" : "+m" (lock->waiting) : : "memory");
+}
+
+static __always_inline bool __ticket_lock_waiters(const struct arch_spinlock *lock)
+{
+	return ACCESS_ONCE(lock->waiting) != 0;
+}
 #endif	/* CONFIG_PARAVIRT_SPINLOCKS */
 
 /*
@@ -106,7 +131,7 @@ static __always_inline struct __raw_tickets __ticket_spin_claim(struct arch_spin
  */
 static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
-	if (unlikely(lock->tickets.tail != next))
+	if (unlikely(__ticket_lock_waiters(lock)))
 		____ticket_unlock_kick(lock, next);
 }
 
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 48dafc3..b396ed5 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -26,6 +26,9 @@ typedef struct arch_spinlock {
 			__ticket_t head, tail;
 		} tickets;
 	};
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+	__ticket_t waiting;
+#endif
 } arch_spinlock_t;
 
 #define __ARCH_SPIN_LOCK_UNLOCKED	{ { .slock = 0 } }
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index e60d5f1..2f81d5e 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -118,6 +118,8 @@ static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
 	/* Only check lock once pending cleared */
 	barrier();
 
+	__ticket_add_waiting(lock);
+
 	/* check again make sure it didn't become free while
 	   we weren't looking  */
 	if (ACCESS_ONCE(lock->tickets.head) == want) {
@@ -132,6 +134,8 @@ static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
 	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
+	__ticket_sub_waiting(lock);
+
 	cpumask_clear_cpu(cpu, &waiting_cpus);
 	w->lock = NULL;
 	spin_time_accum_blocked(start);
-- 
1.7.1.1



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC 12/12] x86/pvticketlock: use callee-save for unlock_kick as well
  2010-07-17  1:03 [PATCH RFC 00/12] X86 ticket lock cleanups and improvements Jeremy Fitzhardinge
                   ` (8 preceding siblings ...)
  2010-07-17  1:03 ` [PATCH RFC 10/12] x86/pvticketlock: keep count of blocked cpus Jeremy Fitzhardinge
@ 2010-07-17  1:03 ` Jeremy Fitzhardinge
  2010-07-17  1:03 ` [PATCH RFC 08/12] x86/ticketlock: collapse a layer of functions Jeremy Fitzhardinge
  2010-07-17  1:03 ` [PATCH RFC 02/12] x86/ticketlock: convert spin loop to C Jeremy Fitzhardinge
  11 siblings, 0 replies; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-07-17  1:03 UTC (permalink / raw)
  To: Linux Kernel Mailing List
  Cc: Nick Piggin, Peter Zijlstra, Jan Beulich, Avi Kivity, Xen-devel

The unlock code is typically inlined throughout the kernel, so it's useful
to make sure there's minimal register pressure overhead from the presence
of the unlock_kick pvop call.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/paravirt.h       |    2 +-
 arch/x86/include/asm/paravirt_types.h |    2 +-
 arch/x86/kernel/paravirt-spinlocks.c  |    2 +-
 arch/x86/xen/spinlock.c               |    3 ++-
 4 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index d02c5fe..487f965 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -738,7 +738,7 @@ static inline void __ticket_lock_spinning(struct arch_spinlock *lock, unsigned t
 
 static inline void ____ticket_unlock_kick(struct arch_spinlock *lock, unsigned ticket)
 {
-	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
+	PVOP_VCALLEE2(pv_lock_ops.unlock_kick, lock, ticket);
 }
 
 #endif
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index b8205c4..be6f2a4 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -321,7 +321,7 @@ struct pv_mmu_ops {
 struct arch_spinlock;
 struct pv_lock_ops {
 	struct paravirt_callee_save lock_spinning;
-	void (*unlock_kick)(struct arch_spinlock *lock, unsigned ticket);
+	struct paravirt_callee_save unlock_kick;
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 4251c1d..efad219 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -10,7 +10,7 @@
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
 	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
-	.unlock_kick = paravirt_nop,
+	.unlock_kick = __PV_IS_CALLEE_SAVE(paravirt_nop),
 #endif
 };
 EXPORT_SYMBOL(pv_lock_ops);
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 598e126..d07eff3 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -158,6 +158,7 @@ static void xen_unlock_kick(struct arch_spinlock *lock, unsigned next)
 		}
 	}
 }
+PV_CALLEE_SAVE_REGS_THUNK(xen_unlock_kick);
 
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
@@ -194,7 +195,7 @@ void xen_uninit_lock_cpu(int cpu)
 void __init xen_init_spinlocks(void)
 {
 	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
-	pv_lock_ops.unlock_kick = xen_unlock_kick;
+	pv_lock_ops.unlock_kick = PV_CALLEE_SAVE(xen_unlock_kick);
 }
 
 #ifdef CONFIG_XEN_DEBUG_FS
-- 
1.7.1.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC 08/12] x86/ticketlock: collapse a layer of functions
  2010-07-17  1:03 [PATCH RFC 00/12] X86 ticket lock cleanups and improvements Jeremy Fitzhardinge
                   ` (9 preceding siblings ...)
  2010-07-17  1:03 ` [PATCH RFC 12/12] x86/pvticketlock: use callee-save for unlock_kick as well Jeremy Fitzhardinge
@ 2010-07-17  1:03 ` Jeremy Fitzhardinge
  2010-07-17  1:03 ` [PATCH RFC 02/12] x86/ticketlock: convert spin loop to C Jeremy Fitzhardinge
  11 siblings, 0 replies; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-07-17  1:03 UTC (permalink / raw)
  To: Linux Kernel Mailing List
  Cc: Nick Piggin, Peter Zijlstra, Jan Beulich, Avi Kivity, Xen-devel

Now that the paravirtualization layer doesn't exist at the spinlock
level any more, we can collapse the __ticket_ functions into the arch_
functions.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h |   35 +++++------------------------------
 1 files changed, 5 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 06815f3..a79dfee 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -110,7 +110,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __t
 		____ticket_unlock_kick(lock, next);
 }
 
-static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
+static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
 	register struct __raw_tickets inc;
 
@@ -130,7 +130,7 @@ static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 	barrier();		/* make sure nothing creeps before the lock is taken */
 }
 
-static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
+static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 {
 	union {
 		struct __raw_tickets tickets;
@@ -150,53 +150,28 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 	return ret;
 }
 
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
+static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 	__ticket_t next = lock->tickets.head + 1;
 	__ticket_unlock_release(lock);
 	__ticket_unlock_kick(lock, next);
 }
 
-static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
+static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
 	return !!(tmp.tail ^ tmp.head);
 }
 
-static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
+static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
 	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
 }
-
-static inline int arch_spin_is_locked(arch_spinlock_t *lock)
-{
-	return __ticket_spin_is_locked(lock);
-}
-
-static inline int arch_spin_is_contended(arch_spinlock_t *lock)
-{
-	return __ticket_spin_is_contended(lock);
-}
 #define arch_spin_is_contended	arch_spin_is_contended
 
-static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-	__ticket_spin_lock(lock);
-}
-
-static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
-{
-	return __ticket_spin_trylock(lock);
-}
-
-static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-	__ticket_spin_unlock(lock);
-}
-
 static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 						  unsigned long flags)
 {
-- 
1.7.1.1



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC 03/12] x86/ticketlock: Use C for __ticket_spin_unlock
  2010-07-17  1:03 [PATCH RFC 00/12] X86 ticket lock cleanups and improvements Jeremy Fitzhardinge
@ 2010-07-17  1:03 ` Jeremy Fitzhardinge
  2010-07-20 15:38   ` [Xen-devel] " Konrad Rzeszutek Wilk
  2010-07-17  1:03 ` [PATCH RFC 06/12] x86/ticketlock: make __ticket_spin_trylock common Jeremy Fitzhardinge
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-07-17  1:03 UTC (permalink / raw)
  To: Linux Kernel Mailing List
  Cc: Nick Piggin, Peter Zijlstra, Jan Beulich, Avi Kivity, Xen-devel

If we don't need to use a locked inc for unlock, then implement it in C.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h |   33 ++++++++++++++++++---------------
 1 files changed, 18 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 6711d36..082990a 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -33,9 +33,23 @@
  * On PPro SMP or if we are using OOSTORE, we use a locked operation to unlock
  * (PPro errata 66, 92)
  */
-# define UNLOCK_LOCK_PREFIX LOCK_PREFIX
+static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
+{
+	if (sizeof(lock->tickets.head) == sizeof(u8))
+		asm (LOCK_PREFIX "incb %0"
+		     : "+m" (lock->tickets.head) : : "memory");
+	else
+		asm (LOCK_PREFIX "incw %0"
+		     : "+m" (lock->tickets.head) : : "memory");
+
+}
 #else
-# define UNLOCK_LOCK_PREFIX
+static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
+{
+	barrier();
+	lock->tickets.head++;
+	barrier();
+}
 #endif
 
 /*
@@ -93,14 +107,6 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 
 	return tmp;
 }
-
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
-{
-	asm volatile(UNLOCK_LOCK_PREFIX "incb %0"
-		     : "+m" (lock->slock)
-		     :
-		     : "memory", "cc");
-}
 #else
 static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 {
@@ -144,15 +150,12 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 
 	return tmp;
 }
+#endif
 
 static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 {
-	asm volatile(UNLOCK_LOCK_PREFIX "incw %0"
-		     : "+m" (lock->slock)
-		     :
-		     : "memory", "cc");
+	__ticket_unlock_release(lock);
 }
-#endif
 
 static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
 {
-- 
1.7.1.1



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC 07/12] x86/spinlocks: replace pv spinlocks with pv ticketlocks
  2010-07-17  1:03 [PATCH RFC 00/12] X86 ticket lock cleanups and improvements Jeremy Fitzhardinge
                   ` (3 preceding siblings ...)
  2010-07-17  1:03 ` [PATCH RFC 09/12] xen/pvticketlock: Xen implementation for PV ticket locks Jeremy Fitzhardinge
@ 2010-07-17  1:03 ` Jeremy Fitzhardinge
  2010-07-17  1:03 ` [PATCH RFC 04/12] x86/ticketlock: make large and small ticket versions of spin_lock the same Jeremy Fitzhardinge
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-07-17  1:03 UTC (permalink / raw)
  To: Linux Kernel Mailing List
  Cc: Nick Piggin, Peter Zijlstra, Jan Beulich, Avi Kivity, Xen-devel

Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow path (long spin on lock, unlocking
a contended lock).

Ticket locks have a number of nice properties, but they also have some
surprising behaviours in virtual environments.  They enforce a strict
FIFO ordering on cpus trying to take a lock; however, if the hypervisor
scheduler does not schedule the cpus in the correct order, the system can
waste a huge amount of time spinning until the next cpu can take the lock.

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

To address this, we add two hooks:
 - __ticket_lock_spinning(), which is called after the cpu has been
   spinning on the lock for a significant number of iterations but has
   failed to take the lock (presumably because the cpu holding the lock
   has been descheduled).  The lock_spinning pvop is expected to block
   the cpu until it has been kicked by the current lock holder.
 - __ticket_unlock_kick(), which is called when releasing a contended
   lock (there are more cpus with tail tickets); it looks to see if the
   next cpu is blocked and wakes it if so.

When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub
functions causes all the extra code to go away.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/paravirt.h       |   30 +++-------------------
 arch/x86/include/asm/paravirt_types.h |    8 +----
 arch/x86/include/asm/spinlock.h       |   44 +++++++++++++++++++++++++++------
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +---------
 arch/x86/xen/spinlock.c               |    7 ++++-
 5 files changed, 50 insertions(+), 54 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index dd59a85..6370e69 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -731,36 +731,14 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
 
-static inline int arch_spin_is_locked(struct arch_spinlock *lock)
+static inline void __ticket_lock_spinning(struct arch_spinlock *lock, unsigned ticket)
 {
-	return PVOP_CALL1(int, pv_lock_ops.spin_is_locked, lock);
+	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static inline int arch_spin_is_contended(struct arch_spinlock *lock)
+static inline void ____ticket_unlock_kick(struct arch_spinlock *lock, unsigned ticket)
 {
-	return PVOP_CALL1(int, pv_lock_ops.spin_is_contended, lock);
-}
-#define arch_spin_is_contended	arch_spin_is_contended
-
-static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
-{
-	PVOP_VCALL1(pv_lock_ops.spin_lock, lock);
-}
-
-static __always_inline void arch_spin_lock_flags(struct arch_spinlock *lock,
-						  unsigned long flags)
-{
-	PVOP_VCALL2(pv_lock_ops.spin_lock_flags, lock, flags);
-}
-
-static __always_inline int arch_spin_trylock(struct arch_spinlock *lock)
-{
-	return PVOP_CALL1(int, pv_lock_ops.spin_trylock, lock);
-}
-
-static __always_inline void arch_spin_unlock(struct arch_spinlock *lock)
-{
-	PVOP_VCALL1(pv_lock_ops.spin_unlock, lock);
+	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
 
 #endif
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index b1e70d5..48aed65 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -320,12 +320,8 @@ struct pv_mmu_ops {
 
 struct arch_spinlock;
 struct pv_lock_ops {
-	int (*spin_is_locked)(struct arch_spinlock *lock);
-	int (*spin_is_contended)(struct arch_spinlock *lock);
-	void (*spin_lock)(struct arch_spinlock *lock);
-	void (*spin_lock_flags)(struct arch_spinlock *lock, unsigned long flags);
-	int (*spin_trylock)(struct arch_spinlock *lock);
-	void (*spin_unlock)(struct arch_spinlock *lock);
+	void (*lock_spinning)(struct arch_spinlock *lock, unsigned ticket);
+	void (*unlock_kick)(struct arch_spinlock *lock, unsigned ticket);
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 507f83c..06815f3 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -52,6 +52,21 @@ static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
 }
 #endif
 
+/* How long a lock should spin before we consider blocking */
+#define SPIN_THRESHOLD	(1 << 11)
+
+#ifndef CONFIG_PARAVIRT_SPINLOCKS
+
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, unsigned ticket)
+{
+}
+
+static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock, unsigned ticket)
+{
+}
+
+#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
+
 /*
  * Ticket locks are conceptually two parts, one indicating the current head of
  * the queue, and the other indicating the current tail. The lock is acquired
@@ -85,6 +100,16 @@ static __always_inline struct __raw_tickets __ticket_spin_claim(struct arch_spin
 	return tickets;
 }
 
+/* 
+ * If a spinlock has someone waiting on it, then kick the appropriate
+ * waiting cpu.
+ */
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
+{
+	if (unlikely(lock->tickets.tail != next))
+		____ticket_unlock_kick(lock, next);
+}
+
 static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 {
 	register struct __raw_tickets inc;
@@ -92,10 +117,15 @@ static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 	inc = __ticket_spin_claim(lock);
 
 	for (;;) {
-		if (inc.head == inc.tail)
-			return;
-		cpu_relax();
-		inc.head = ACCESS_ONCE(lock->tickets.head);
+		unsigned count = SPIN_THRESHOLD;
+
+		do {
+			if (inc.head == inc.tail)
+				return;
+			cpu_relax();
+			inc.head = ACCESS_ONCE(lock->tickets.head);
+		} while (--count);
+		__ticket_lock_spinning(lock, inc.tail);
 	}
 	barrier();		/* make sure nothing creeps before the lock is taken */
 }
@@ -122,7 +152,9 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 
 static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 {
+	__ticket_t next = lock->tickets.head + 1;
 	__ticket_unlock_release(lock);
+	__ticket_unlock_kick(lock, next);
 }
 
 static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
@@ -139,8 +171,6 @@ static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
 	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
 }
 
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
-
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
 	return __ticket_spin_is_locked(lock);
@@ -173,8 +203,6 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 	arch_spin_lock(lock);
 }
 
-#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
-
 static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
 	while (arch_spin_is_locked(lock))
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 676b8c7..c2e010e 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -7,21 +7,10 @@
 
 #include <asm/paravirt.h>
 
-static inline void
-default_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
-{
-	arch_spin_lock(lock);
-}
-
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-	.spin_is_locked = __ticket_spin_is_locked,
-	.spin_is_contended = __ticket_spin_is_contended,
-
-	.spin_lock = __ticket_spin_lock,
-	.spin_lock_flags = default_spin_lock_flags,
-	.spin_trylock = __ticket_spin_trylock,
-	.spin_unlock = __ticket_spin_unlock,
+	.lock_spinning = paravirt_nop,
+	.unlock_kick = paravirt_nop,
 #endif
 };
 EXPORT_SYMBOL(pv_lock_ops);
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 24ded31..1cf5705 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -120,6 +120,9 @@ struct xen_spinlock {
 	unsigned short spinners;	/* count of waiting cpus */
 };
 
+static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+
+#if 0
 static int xen_spin_is_locked(struct arch_spinlock *lock)
 {
 	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
@@ -147,7 +150,6 @@ static int xen_spin_trylock(struct arch_spinlock *lock)
 	return old == 0;
 }
 
-static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
 static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
 
 /*
@@ -337,6 +339,7 @@ static void xen_spin_unlock(struct arch_spinlock *lock)
 	if (unlikely(xl->spinners))
 		xen_spin_unlock_slow(xl);
 }
+#endif
 
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
@@ -372,12 +375,14 @@ void xen_uninit_lock_cpu(int cpu)
 
 void __init xen_init_spinlocks(void)
 {
+#if 0
 	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
 	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
 	pv_lock_ops.spin_lock = xen_spin_lock;
 	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
 	pv_lock_ops.spin_trylock = xen_spin_trylock;
 	pv_lock_ops.spin_unlock = xen_spin_unlock;
+#endif
 }
 
 #ifdef CONFIG_XEN_DEBUG_FS
-- 
1.7.1.1



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC 11/12] x86/pvticketlock: use callee-save for lock_spinning
  2010-07-17  1:03 [PATCH RFC 00/12] X86 ticket lock cleanups and improvements Jeremy Fitzhardinge
  2010-07-17  1:03 ` [PATCH RFC 03/12] x86/ticketlock: Use C for __ticket_spin_unlock Jeremy Fitzhardinge
  2010-07-17  1:03 ` [PATCH RFC 06/12] x86/ticketlock: make __ticket_spin_trylock common Jeremy Fitzhardinge
@ 2010-07-17  1:03 ` Jeremy Fitzhardinge
  2010-07-17  1:03 ` [PATCH RFC 09/12] xen/pvticketlock: Xen implementation for PV ticket locks Jeremy Fitzhardinge
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-07-17  1:03 UTC (permalink / raw)
  To: Linux Kernel Mailing List
  Cc: Nick Piggin, Peter Zijlstra, Jan Beulich, Avi Kivity, Xen-devel

Although the lock_spinning calls in the spinlock code are on the
uncommon path, their presence can cause the compiler to generate many
more register save/restores in the function pre/postamble, which is in
the fast path.  To avoid this, convert it to use the pvops callee-save
calling convention, which defers all the save/restores until the actual
function is called, keeping the fastpath clean.
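
As a condensed recap (all of these appear in the diff hunks below), a
pvop converted to the callee-save convention changes in four places:

	/*
	 * 1. The op's slot in struct pv_lock_ops becomes:
	 *	struct paravirt_callee_save lock_spinning;
	 *
	 * 2. The native default in paravirt-spinlocks.c becomes:
	 *	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
	 *
	 * 3. The call site in paravirt.h uses the callee-save call macro:
	 *	PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
	 *
	 * 4. The Xen backend generates a register-saving thunk and registers it:
	 *	PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
	 *	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
	 */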

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/paravirt.h       |    2 +-
 arch/x86/include/asm/paravirt_types.h |    2 +-
 arch/x86/kernel/paravirt-spinlocks.c  |    2 +-
 arch/x86/xen/spinlock.c               |    3 ++-
 4 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 6370e69..d02c5fe 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -733,7 +733,7 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 
 static inline void __ticket_lock_spinning(struct arch_spinlock *lock, unsigned ticket)
 {
-	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
+	PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
 static inline void ____ticket_unlock_kick(struct arch_spinlock *lock, unsigned ticket)
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 48aed65..b8205c4 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -320,7 +320,7 @@ struct pv_mmu_ops {
 
 struct arch_spinlock;
 struct pv_lock_ops {
-	void (*lock_spinning)(struct arch_spinlock *lock, unsigned ticket);
+	struct paravirt_callee_save lock_spinning;
 	void (*unlock_kick)(struct arch_spinlock *lock, unsigned ticket);
 };
 
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index c2e010e..4251c1d 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -9,7 +9,7 @@
 
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-	.lock_spinning = paravirt_nop,
+	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
 	.unlock_kick = paravirt_nop,
 #endif
 };
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 2f81d5e..598e126 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -140,6 +140,7 @@ out:
 	w->lock = NULL;
 	spin_time_accum_blocked(start);
 }
+PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
 
 static void xen_unlock_kick(struct arch_spinlock *lock, unsigned next)
 {
@@ -192,7 +193,7 @@ void xen_uninit_lock_cpu(int cpu)
 
 void __init xen_init_spinlocks(void)
 {
-	pv_lock_ops.lock_spinning = xen_lock_spinning;
+	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
 	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
-- 
1.7.1.1



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC 09/12] xen/pvticketlock: Xen implementation for PV ticket locks
  2010-07-17  1:03 [PATCH RFC 00/12] X86 ticket lock cleanups and improvements Jeremy Fitzhardinge
                   ` (2 preceding siblings ...)
  2010-07-17  1:03 ` [PATCH RFC 11/12] x86/pvticketlock: use callee-save for lock_spinning Jeremy Fitzhardinge
@ 2010-07-17  1:03 ` Jeremy Fitzhardinge
  2010-09-26 11:39   ` Srivatsa Vaddagiri
  2010-07-17  1:03 ` [PATCH RFC 07/12] x86/spinlocks: replace pv spinlocks with pv ticketlocks Jeremy Fitzhardinge
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-07-17  1:03 UTC (permalink / raw)
  To: Linux Kernel Mailing List
  Cc: Nick Piggin, Peter Zijlstra, Jan Beulich, Avi Kivity, Xen-devel

Replace the old Xen implementation of PV spinlocks with an implementation
of xen_lock_spinning and xen_unlock_kick.

xen_lock_spinning simply records the lock and the ticket it wants in this
cpu's lock_waiting entry, adds the cpu to the waiting_cpus set, and blocks
on an event channel until the channel becomes pending.

xen_unlock_kick searches the cpus in waiting_cpus for the one waiting on
this lock with the next ticket, if any.  If found, it kicks that cpu by
making its event channel pending, which wakes it up.
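
The per-cpu state this uses is small; copied from the diff below, with
comments added:

	struct xen_lock_waiting {
		struct arch_spinlock *lock;	/* lock this cpu is blocked on (NULL if none) */
		__ticket_t want;		/* the ticket value it is waiting for */
	};

	static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;	/* per-cpu event channel irq */
	static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
	static cpumask_t waiting_cpus;		/* cpus currently blocked in xen_lock_spinning */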

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/xen/spinlock.c |  281 ++++++-----------------------------------------
 1 files changed, 36 insertions(+), 245 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 1cf5705..e60d5f1 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -18,32 +18,21 @@
 #ifdef CONFIG_XEN_DEBUG_FS
 static struct xen_spinlock_stats
 {
-	u64 taken;
 	u32 taken_slow;
-	u32 taken_slow_nested;
 	u32 taken_slow_pickup;
 	u32 taken_slow_spurious;
-	u32 taken_slow_irqenable;
 
-	u64 released;
 	u32 released_slow;
 	u32 released_slow_kicked;
 
 #define HISTO_BUCKETS	30
-	u32 histo_spin_total[HISTO_BUCKETS+1];
-	u32 histo_spin_spinning[HISTO_BUCKETS+1];
 	u32 histo_spin_blocked[HISTO_BUCKETS+1];
 
-	u64 time_total;
-	u64 time_spinning;
 	u64 time_blocked;
 } spinlock_stats;
 
 static u8 zero_stats;
 
-static unsigned lock_timeout = 1 << 10;
-#define TIMEOUT lock_timeout
-
 static inline void check_zero(void)
 {
 	if (unlikely(zero_stats)) {
@@ -72,22 +61,6 @@ static void __spin_time_accum(u64 delta, u32 *array)
 		array[HISTO_BUCKETS]++;
 }
 
-static inline void spin_time_accum_spinning(u64 start)
-{
-	u32 delta = xen_clocksource_read() - start;
-
-	__spin_time_accum(delta, spinlock_stats.histo_spin_spinning);
-	spinlock_stats.time_spinning += delta;
-}
-
-static inline void spin_time_accum_total(u64 start)
-{
-	u32 delta = xen_clocksource_read() - start;
-
-	__spin_time_accum(delta, spinlock_stats.histo_spin_total);
-	spinlock_stats.time_total += delta;
-}
-
 static inline void spin_time_accum_blocked(u64 start)
 {
 	u32 delta = xen_clocksource_read() - start;
@@ -104,214 +77,76 @@ static inline u64 spin_time_start(void)
 	return 0;
 }
 
-static inline void spin_time_accum_total(u64 start)
-{
-}
-static inline void spin_time_accum_spinning(u64 start)
-{
-}
 static inline void spin_time_accum_blocked(u64 start)
 {
 }
 #endif  /* CONFIG_XEN_DEBUG_FS */
 
-struct xen_spinlock {
-	unsigned char lock;		/* 0 -> free; 1 -> locked */
-	unsigned short spinners;	/* count of waiting cpus */
+struct xen_lock_waiting {
+	struct arch_spinlock *lock;
+	__ticket_t want;
 };
 
 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
+static cpumask_t waiting_cpus;
 
-#if 0
-static int xen_spin_is_locked(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	return xl->lock != 0;
-}
-
-static int xen_spin_is_contended(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	/* Not strictly true; this is only the count of contended
-	   lock-takers entering the slow path. */
-	return xl->spinners != 0;
-}
-
-static int xen_spin_trylock(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	u8 old = 1;
-
-	asm("xchgb %b0,%1"
-	    : "+q" (old), "+m" (xl->lock) : : "memory");
-
-	return old == 0;
-}
-
-static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
-
-/*
- * Mark a cpu as interested in a lock.  Returns the CPU's previous
- * lock of interest, in case we got preempted by an interrupt.
- */
-static inline struct xen_spinlock *spinning_lock(struct xen_spinlock *xl)
-{
-	struct xen_spinlock *prev;
-
-	prev = __get_cpu_var(lock_spinners);
-	__get_cpu_var(lock_spinners) = xl;
-
-	wmb();			/* set lock of interest before count */
-
-	asm(LOCK_PREFIX " incw %0"
-	    : "+m" (xl->spinners) : : "memory");
-
-	return prev;
-}
-
-/*
- * Mark a cpu as no longer interested in a lock.  Restores previous
- * lock of interest (NULL for none).
- */
-static inline void unspinning_lock(struct xen_spinlock *xl, struct xen_spinlock *prev)
-{
-	asm(LOCK_PREFIX " decw %0"
-	    : "+m" (xl->spinners) : : "memory");
-	wmb();			/* decrement count before restoring lock */
-	__get_cpu_var(lock_spinners) = prev;
-}
-
-static noinline int xen_spin_lock_slow(struct arch_spinlock *lock, bool irq_enable)
+static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
 {
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	struct xen_spinlock *prev;
 	int irq = __get_cpu_var(lock_kicker_irq);
-	int ret;
+	struct xen_lock_waiting *w = &__get_cpu_var(lock_waiting);
+	int cpu = smp_processor_id();
 	u64 start;
 
 	/* If kicker interrupts not initialized yet, just spin */
 	if (irq == -1)
-		return 0;
+		return;
 
 	start = spin_time_start();
 
-	/* announce we're spinning */
-	prev = spinning_lock(xl);
+	w->want = want;
+	w->lock = lock;
+
+	/* This uses set_bit, which is atomic and therefore a barrier */
+	cpumask_set_cpu(cpu, &waiting_cpus);
 
 	ADD_STATS(taken_slow, 1);
-	ADD_STATS(taken_slow_nested, prev != NULL);
-
-	do {
-		unsigned long flags;
-
-		/* clear pending */
-		xen_clear_irq_pending(irq);
-
-		/* check again make sure it didn't become free while
-		   we weren't looking  */
-		ret = xen_spin_trylock(lock);
-		if (ret) {
-			ADD_STATS(taken_slow_pickup, 1);
-
-			/*
-			 * If we interrupted another spinlock while it
-			 * was blocking, make sure it doesn't block
-			 * without rechecking the lock.
-			 */
-			if (prev != NULL)
-				xen_set_irq_pending(irq);
-			goto out;
-		}
 
-		flags = __raw_local_save_flags();
-		if (irq_enable) {
-			ADD_STATS(taken_slow_irqenable, 1);
-			raw_local_irq_enable();
-		}
+	/* clear pending */
+	xen_clear_irq_pending(irq);
 
-		/*
-		 * Block until irq becomes pending.  If we're
-		 * interrupted at this point (after the trylock but
-		 * before entering the block), then the nested lock
-		 * handler guarantees that the irq will be left
-		 * pending if there's any chance the lock became free;
-		 * xen_poll_irq() returns immediately if the irq is
-		 * pending.
-		 */
-		xen_poll_irq(irq);
+	/* Only check lock once pending cleared */
+	barrier();
 
-		raw_local_irq_restore(flags);
+	/* check again make sure it didn't become free while
+	   we weren't looking  */
+	if (ACCESS_ONCE(lock->tickets.head) == want) {
+		ADD_STATS(taken_slow_pickup, 1);
+		goto out;
+	}
 
-		ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
-	} while (!xen_test_irq_pending(irq)); /* check for spurious wakeups */
+	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
+	xen_poll_irq(irq);
+	ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
 
 	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
-	unspinning_lock(xl, prev);
+	cpumask_clear_cpu(cpu, &waiting_cpus);
+	w->lock = NULL;
 	spin_time_accum_blocked(start);
-
-	return ret;
 }
 
-static inline void __xen_spin_lock(struct arch_spinlock *lock, bool irq_enable)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	unsigned timeout;
-	u8 oldval;
-	u64 start_spin;
-
-	ADD_STATS(taken, 1);
-
-	start_spin = spin_time_start();
-
-	do {
-		u64 start_spin_fast = spin_time_start();
-
-		timeout = TIMEOUT;
-
-		asm("1: xchgb %1,%0\n"
-		    "   testb %1,%1\n"
-		    "   jz 3f\n"
-		    "2: rep;nop\n"
-		    "   cmpb $0,%0\n"
-		    "   je 1b\n"
-		    "   dec %2\n"
-		    "   jnz 2b\n"
-		    "3:\n"
-		    : "+m" (xl->lock), "=q" (oldval), "+r" (timeout)
-		    : "1" (1)
-		    : "memory");
-
-		spin_time_accum_spinning(start_spin_fast);
-
-	} while (unlikely(oldval != 0 &&
-			  (TIMEOUT == ~0 || !xen_spin_lock_slow(lock, irq_enable))));
-
-	spin_time_accum_total(start_spin);
-}
-
-static void xen_spin_lock(struct arch_spinlock *lock)
-{
-	__xen_spin_lock(lock, false);
-}
-
-static void xen_spin_lock_flags(struct arch_spinlock *lock, unsigned long flags)
-{
-	__xen_spin_lock(lock, !raw_irqs_disabled_flags(flags));
-}
-
-static noinline void xen_spin_unlock_slow(struct xen_spinlock *xl)
+static void xen_unlock_kick(struct arch_spinlock *lock, unsigned next)
 {
 	int cpu;
 
 	ADD_STATS(released_slow, 1);
 
-	for_each_online_cpu(cpu) {
-		/* XXX should mix up next cpu selection */
-		if (per_cpu(lock_spinners, cpu) == xl) {
+	for_each_cpu(cpu, &waiting_cpus) {
+		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
+
+		if (w->lock == lock && w->want == next) {
 			ADD_STATS(released_slow_kicked, 1);
 			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
 			break;
@@ -319,28 +154,6 @@ static noinline void xen_spin_unlock_slow(struct xen_spinlock *xl)
 	}
 }
 
-static void xen_spin_unlock(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	ADD_STATS(released, 1);
-
-	smp_wmb();		/* make sure no writes get moved after unlock */
-	xl->lock = 0;		/* release lock */
-
-	/*
-	 * Make sure unlock happens before checking for waiting
-	 * spinners.  We need a strong barrier to enforce the
-	 * write-read ordering to different memory locations, as the
-	 * CPU makes no implied guarantees about their ordering.
-	 */
-	mb();
-
-	if (unlikely(xl->spinners))
-		xen_spin_unlock_slow(xl);
-}
-#endif
-
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
 	BUG();
@@ -375,14 +188,8 @@ void xen_uninit_lock_cpu(int cpu)
 
 void __init xen_init_spinlocks(void)
 {
-#if 0
-	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
-	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
-	pv_lock_ops.spin_lock = xen_spin_lock;
-	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
-	pv_lock_ops.spin_trylock = xen_spin_trylock;
-	pv_lock_ops.spin_unlock = xen_spin_unlock;
-#endif
+	pv_lock_ops.lock_spinning = xen_lock_spinning;
+	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
 #ifdef CONFIG_XEN_DEBUG_FS
@@ -400,37 +207,21 @@ static int __init xen_spinlock_debugfs(void)
 
 	debugfs_create_u8("zero_stats", 0644, d_spin_debug, &zero_stats);
 
-	debugfs_create_u32("timeout", 0644, d_spin_debug, &lock_timeout);
-
-	debugfs_create_u64("taken", 0444, d_spin_debug, &spinlock_stats.taken);
 	debugfs_create_u32("taken_slow", 0444, d_spin_debug,
 			   &spinlock_stats.taken_slow);
-	debugfs_create_u32("taken_slow_nested", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_nested);
 	debugfs_create_u32("taken_slow_pickup", 0444, d_spin_debug,
 			   &spinlock_stats.taken_slow_pickup);
 	debugfs_create_u32("taken_slow_spurious", 0444, d_spin_debug,
 			   &spinlock_stats.taken_slow_spurious);
-	debugfs_create_u32("taken_slow_irqenable", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_irqenable);
 
-	debugfs_create_u64("released", 0444, d_spin_debug, &spinlock_stats.released);
 	debugfs_create_u32("released_slow", 0444, d_spin_debug,
 			   &spinlock_stats.released_slow);
 	debugfs_create_u32("released_slow_kicked", 0444, d_spin_debug,
 			   &spinlock_stats.released_slow_kicked);
 
-	debugfs_create_u64("time_spinning", 0444, d_spin_debug,
-			   &spinlock_stats.time_spinning);
 	debugfs_create_u64("time_blocked", 0444, d_spin_debug,
 			   &spinlock_stats.time_blocked);
-	debugfs_create_u64("time_total", 0444, d_spin_debug,
-			   &spinlock_stats.time_total);
 
-	xen_debugfs_create_u32_array("histo_total", 0444, d_spin_debug,
-				     spinlock_stats.histo_spin_total, HISTO_BUCKETS + 1);
-	xen_debugfs_create_u32_array("histo_spinning", 0444, d_spin_debug,
-				     spinlock_stats.histo_spin_spinning, HISTO_BUCKETS + 1);
 	xen_debugfs_create_u32_array("histo_blocked", 0444, d_spin_debug,
 				     spinlock_stats.histo_spin_blocked, HISTO_BUCKETS + 1);
 
-- 
1.7.1.1



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC 05/12] x86/ticketlock: make __ticket_spin_lock common
  2010-07-17  1:03 [PATCH RFC 00/12] X86 ticket lock cleanups and improvements Jeremy Fitzhardinge
                   ` (6 preceding siblings ...)
  2010-07-17  1:03 ` [PATCH RFC 01/12] x86/ticketlock: clean up types and accessors Jeremy Fitzhardinge
@ 2010-07-17  1:03 ` Jeremy Fitzhardinge
  2010-07-17  1:03 ` [PATCH RFC 10/12] x86/pvticketlock: keep count of blocked cpus Jeremy Fitzhardinge
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-07-17  1:03 UTC (permalink / raw)
  To: Linux Kernel Mailing List
  Cc: Nick Piggin, Peter Zijlstra, Jan Beulich, Avi Kivity, Xen-devel

Aside from the particular form of the xadd instruction, the large- and
small-ticket versions of __ticket_spin_lock are identical.  So factor
out the xadd and use common code for the rest.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h |   42 ++++++++++++++++++--------------------
 1 files changed, 20 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 7586d7a..4f9fa24 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -69,13 +69,27 @@ static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
  * save some instructions and make the code more elegant. There really isn't
  * much between them in performance though, especially as locks are out of line.
  */
-#if (NR_CPUS < 256)
-static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
+static __always_inline struct __raw_tickets __ticket_spin_claim(struct arch_spinlock *lock)
 {
-	register struct __raw_tickets inc = { .tail = 1 };
+	register struct __raw_tickets tickets = { .tail = 1 };
+
+	if (sizeof(lock->tickets.head) == sizeof(u8))
+		asm volatile (LOCK_PREFIX "xaddw %w0, %1\n"
+			      : "+r" (tickets), "+m" (lock->tickets)
+			      : : "memory", "cc");
+	else
+		asm volatile (LOCK_PREFIX "xaddl %0, %1\n"
+			     : "+r" (tickets), "+m" (lock->tickets)
+			     : : "memory", "cc");
 
-	asm volatile (LOCK_PREFIX "xaddw %w0, %1\n"
-		      : "+r" (inc), "+m" (lock->tickets) : : "memory", "cc");
+	return tickets;
+}
+
+static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
+{
+	register struct __raw_tickets inc;
+
+	inc = __ticket_spin_claim(lock);
 
 	for (;;) {
 		if (inc.head == inc.tail)
@@ -86,6 +100,7 @@ static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 	barrier();		/* make sure nothing creeps before the lock is taken */
 }
 
+#if (NR_CPUS < 256)
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 {
 	unsigned int tmp, new;
@@ -105,23 +120,6 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 	return tmp;
 }
 #else
-static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
-{
-	register struct __raw_tickets inc = { .tickets.tail = 1 };
-
-	asm volatile(LOCK_PREFIX "xaddl %0, %1\n\t"
-		     : "+r" (inc), "+m" (lock->tickets)
-		     : : "memory", "cc");
-
-	for (;;) {
-		if (inc.head == inc.tail)
-			return;
-		cpu_relax();
-		inc.head = ACCESS_ONCE(lock->tickets.head);
-	}
-	barrier();		/* make sure nothing creeps before the lock is taken */
-}
-
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 {
 	unsigned tmp;
-- 
1.7.1.1



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC 04/12] x86/ticketlock: make large and small ticket versions of spin_lock the same
  2010-07-17  1:03 [PATCH RFC 00/12] X86 ticket lock cleanups and improvements Jeremy Fitzhardinge
                   ` (4 preceding siblings ...)
  2010-07-17  1:03 ` [PATCH RFC 07/12] x86/spinlocks: replace pv spinlocks with pv ticketlocks Jeremy Fitzhardinge
@ 2010-07-17  1:03 ` Jeremy Fitzhardinge
  2010-07-17  1:03 ` [PATCH RFC 01/12] x86/ticketlock: clean up types and accessors Jeremy Fitzhardinge
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-07-17  1:03 UTC (permalink / raw)
  To: Linux Kernel Mailing List
  Cc: Nick Piggin, Peter Zijlstra, Jan Beulich, Avi Kivity, Xen-devel

Make the bulk of __ticket_spin_lock look identical for large and small
numbers of cpus.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h |   23 ++++++++---------------
 1 files changed, 8 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 082990a..7586d7a 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -72,19 +72,16 @@ static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
 #if (NR_CPUS < 256)
 static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 {
-	register union {
-		struct __raw_tickets tickets;
-		unsigned short slock;
-	} inc = { .slock = 1 << TICKET_SHIFT };
+	register struct __raw_tickets inc = { .tail = 1 };
 
 	asm volatile (LOCK_PREFIX "xaddw %w0, %1\n"
-		      : "+Q" (inc), "+m" (lock->slock) : : "memory", "cc");
+		      : "+r" (inc), "+m" (lock->tickets) : : "memory", "cc");
 
 	for (;;) {
-		if (inc.tickets.head == inc.tickets.tail)
+		if (inc.head == inc.tail)
 			return;
 		cpu_relax();
-		inc.tickets.head = ACCESS_ONCE(lock->tickets.head);
+		inc.head = ACCESS_ONCE(lock->tickets.head);
 	}
 	barrier();		/* make sure nothing creeps before the lock is taken */
 }
@@ -110,21 +107,17 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 #else
 static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 {
-	unsigned inc = 1 << TICKET_SHIFT;
-	__ticket_t tmp;
+	register struct __raw_tickets inc = { .tickets.tail = 1 };
 
 	asm volatile(LOCK_PREFIX "xaddl %0, %1\n\t"
-		     : "+r" (inc), "+m" (lock->slock)
+		     : "+r" (inc), "+m" (lock->tickets)
 		     : : "memory", "cc");
 
-	tmp = inc;
-	inc >>= TICKET_SHIFT;
-
 	for (;;) {
-		if ((__ticket_t)inc == tmp)
+		if (inc.head == inc.tail)
 			return;
 		cpu_relax();
-		tmp = ACCESS_ONCE(lock->tickets.head);
+		inc.head = ACCESS_ONCE(lock->tickets.head);
 	}
 	barrier();		/* make sure nothing creeps before the lock is taken */
 }
-- 
1.7.1.1



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC 01/12] x86/ticketlock: clean up types and accessors
  2010-07-17  1:03 [PATCH RFC 00/12] X86 ticket lock cleanups and improvements Jeremy Fitzhardinge
                   ` (5 preceding siblings ...)
  2010-07-17  1:03 ` [PATCH RFC 04/12] x86/ticketlock: make large and small ticket versions of spin_lock the same Jeremy Fitzhardinge
@ 2010-07-17  1:03 ` Jeremy Fitzhardinge
  2010-07-17  1:03 ` [PATCH RFC 05/12] x86/ticketlock: make __ticket_spin_lock common Jeremy Fitzhardinge
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-07-17  1:03 UTC (permalink / raw)
  To: Linux Kernel Mailing List
  Cc: Nick Piggin, Peter Zijlstra, Jan Beulich, Avi Kivity, Xen-devel

A few cleanups to the way spinlocks are defined and accessed:
 - define __ticket_t which is the size of a spinlock ticket (ie, enough
   bits to hold all the cpus)
 - Define struct arch_spinlock as a union containing plain slock and
   the head and tail tickets
 - Use head and tail to implement some of the spinlock predicates.
 - Make all ticket variables unsigned.
 - Use TICKET_SHIFT to form constants

Most of this will be used in later patches.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h       |   24 ++++++++++--------------
 arch/x86/include/asm/spinlock_types.h |   20 ++++++++++++++++++--
 2 files changed, 28 insertions(+), 16 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 3089f70..d6d5784 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -56,11 +56,9 @@
  * much between them in performance though, especially as locks are out of line.
  */
 #if (NR_CPUS < 256)
-#define TICKET_SHIFT 8
-
 static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 {
-	short inc = 0x0100;
+	unsigned short inc = 1 << TICKET_SHIFT;
 
 	asm volatile (
 		LOCK_PREFIX "xaddw %w0, %1\n"
@@ -79,7 +77,7 @@ static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 {
-	int tmp, new;
+	unsigned int tmp, new;
 
 	asm volatile("movzwl %2, %0\n\t"
 		     "cmpb %h0,%b0\n\t"
@@ -104,12 +102,10 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 		     : "memory", "cc");
 }
 #else
-#define TICKET_SHIFT 16
-
 static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 {
-	int inc = 0x00010000;
-	int tmp;
+	unsigned inc = 1 << TICKET_SHIFT;
+	unsigned tmp;
 
 	asm volatile(LOCK_PREFIX "xaddl %0, %1\n"
 		     "movzwl %w0, %2\n\t"
@@ -129,8 +125,8 @@ static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 {
-	int tmp;
-	int new;
+	unsigned tmp;
+	unsigned new;
 
 	asm volatile("movl %2,%0\n\t"
 		     "movl %0,%1\n\t"
@@ -160,16 +156,16 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 
 static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
 {
-	int tmp = ACCESS_ONCE(lock->slock);
+	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
-	return !!(((tmp >> TICKET_SHIFT) ^ tmp) & ((1 << TICKET_SHIFT) - 1));
+	return !!(tmp.tail ^ tmp.head);
 }
 
 static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
 {
-	int tmp = ACCESS_ONCE(lock->slock);
+	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
-	return (((tmp >> TICKET_SHIFT) - tmp) & ((1 << TICKET_SHIFT) - 1)) > 1;
+	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
 }
 
 #ifndef CONFIG_PARAVIRT_SPINLOCKS
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index dcb48b2..4582640 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -5,11 +5,27 @@
 # error "please don't include this file directly"
 #endif
 
+#include <linux/types.h>
+
+#if (CONFIG_NR_CPUS < 256)
+typedef u8  __ticket_t;
+#else
+typedef u16 __ticket_t;
+#endif
+
+#define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
+#define TICKET_MASK	((1 << TICKET_SHIFT) - 1)
+
 typedef struct arch_spinlock {
-	unsigned int slock;
+	union {
+		unsigned int slock;
+		struct __raw_tickets {
+			__ticket_t head, tail;
+		} tickets;
+	};
 } arch_spinlock_t;
 
-#define __ARCH_SPIN_LOCK_UNLOCKED	{ 0 }
+#define __ARCH_SPIN_LOCK_UNLOCKED	{ { .slock = 0 } }
 
 typedef struct {
 	unsigned int lock;
-- 
1.7.1.1



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC 02/12] x86/ticketlock: convert spin loop to C
  2010-07-17  1:03 [PATCH RFC 00/12] X86 ticket lock cleanups and improvements Jeremy Fitzhardinge
                   ` (10 preceding siblings ...)
  2010-07-17  1:03 ` [PATCH RFC 08/12] x86/ticketlock: collapse a layer of functions Jeremy Fitzhardinge
@ 2010-07-17  1:03 ` Jeremy Fitzhardinge
  2010-08-02 15:07   ` Peter Zijlstra
  11 siblings, 1 reply; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-07-17  1:03 UTC (permalink / raw)
  To: Linux Kernel Mailing List
  Cc: Nick Piggin, Peter Zijlstra, Jan Beulich, Avi Kivity, Xen-devel

The inner loop of __ticket_spin_lock isn't doing anything very special,
so reimplement it in C.

For the 8-bit ticket lock variant, we use a register union to get direct
access to the lower and upper bytes in the tickets, but unfortunately gcc
won't generate a direct comparison between the two halves of the register,
so the generated asm isn't quite as pretty as the hand-coded version.
However, benchmarking shows that this is actually a small improvement in
runtime performance on some benchmarks, and never a slowdown.

We also need to make sure there's a barrier at the end of the lock loop
to make sure that the compiler doesn't move any instructions from within
the locked region into the region where we don't yet own the lock.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h |   58 +++++++++++++++++++-------------------
 1 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index d6d5784..6711d36 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -58,21 +58,21 @@
 #if (NR_CPUS < 256)
 static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 {
-	unsigned short inc = 1 << TICKET_SHIFT;
-
-	asm volatile (
-		LOCK_PREFIX "xaddw %w0, %1\n"
-		"1:\t"
-		"cmpb %h0, %b0\n\t"
-		"je 2f\n\t"
-		"rep ; nop\n\t"
-		"movb %1, %b0\n\t"
-		/* don't need lfence here, because loads are in-order */
-		"jmp 1b\n"
-		"2:"
-		: "+Q" (inc), "+m" (lock->slock)
-		:
-		: "memory", "cc");
+	register union {
+		struct __raw_tickets tickets;
+		unsigned short slock;
+	} inc = { .slock = 1 << TICKET_SHIFT };
+
+	asm volatile (LOCK_PREFIX "xaddw %w0, %1\n"
+		      : "+Q" (inc), "+m" (lock->slock) : : "memory", "cc");
+
+	for (;;) {
+		if (inc.tickets.head == inc.tickets.tail)
+			return;
+		cpu_relax();
+		inc.tickets.head = ACCESS_ONCE(lock->tickets.head);
+	}
+	barrier();		/* make sure nothing creeps before the lock is taken */
 }
 
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
@@ -105,22 +105,22 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 {
 	unsigned inc = 1 << TICKET_SHIFT;
-	unsigned tmp;
+	__ticket_t tmp;
 
-	asm volatile(LOCK_PREFIX "xaddl %0, %1\n"
-		     "movzwl %w0, %2\n\t"
-		     "shrl $16, %0\n\t"
-		     "1:\t"
-		     "cmpl %0, %2\n\t"
-		     "je 2f\n\t"
-		     "rep ; nop\n\t"
-		     "movzwl %1, %2\n\t"
-		     /* don't need lfence here, because loads are in-order */
-		     "jmp 1b\n"
-		     "2:"
-		     : "+r" (inc), "+m" (lock->slock), "=&r" (tmp)
-		     :
-		     : "memory", "cc");
+	asm volatile(LOCK_PREFIX "xaddl %0, %1\n\t"
+		     : "+r" (inc), "+m" (lock->slock)
+		     : : "memory", "cc");
+
+	tmp = inc;
+	inc >>= TICKET_SHIFT;
+
+	for (;;) {
+		if ((__ticket_t)inc == tmp)
+			return;
+		cpu_relax();
+		tmp = ACCESS_ONCE(lock->tickets.head);
+	}
+	barrier();		/* make sure nothing creeps before the lock is taken */
 }
 
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
-- 
1.7.1.1



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC 06/12] x86/ticketlock: make __ticket_spin_trylock common
  2010-07-17  1:03 [PATCH RFC 00/12] X86 ticket lock cleanups and improvements Jeremy Fitzhardinge
  2010-07-17  1:03 ` [PATCH RFC 03/12] x86/ticketlock: Use C for __ticket_spin_unlock Jeremy Fitzhardinge
@ 2010-07-17  1:03 ` Jeremy Fitzhardinge
  2010-07-17  1:03 ` [PATCH RFC 11/12] x86/pvticketlock: use callee-save for lock_spinning Jeremy Fitzhardinge
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-07-17  1:03 UTC (permalink / raw)
  To: Linux Kernel Mailing List
  Cc: Nick Piggin, Peter Zijlstra, Jan Beulich, Avi Kivity, Xen-devel

Make trylock code common regardless of ticket size.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h       |   55 +++++++++-----------------------
 arch/x86/include/asm/spinlock_types.h |    3 ++
 2 files changed, 19 insertions(+), 39 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 4f9fa24..507f83c 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -100,48 +100,25 @@ static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 	barrier();		/* make sure nothing creeps before the lock is taken */
 }
 
-#if (NR_CPUS < 256)
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 {
-	unsigned int tmp, new;
-
-	asm volatile("movzwl %2, %0\n\t"
-		     "cmpb %h0,%b0\n\t"
-		     "leal 0x100(%" REG_PTR_MODE "0), %1\n\t"
-		     "jne 1f\n\t"
-		     LOCK_PREFIX "cmpxchgw %w1,%2\n\t"
-		     "1:"
-		     "sete %b1\n\t"
-		     "movzbl %b1,%0\n\t"
-		     : "=&a" (tmp), "=&q" (new), "+m" (lock->slock)
-		     :
-		     : "memory", "cc");
-
-	return tmp;
-}
-#else
-static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
-{
-	unsigned tmp;
-	unsigned new;
-
-	asm volatile("movl %2,%0\n\t"
-		     "movl %0,%1\n\t"
-		     "roll $16, %0\n\t"
-		     "cmpl %0,%1\n\t"
-		     "leal 0x00010000(%" REG_PTR_MODE "0), %1\n\t"
-		     "jne 1f\n\t"
-		     LOCK_PREFIX "cmpxchgl %1,%2\n\t"
-		     "1:"
-		     "sete %b1\n\t"
-		     "movzbl %b1,%0\n\t"
-		     : "=&a" (tmp), "=&q" (new), "+m" (lock->slock)
-		     :
-		     : "memory", "cc");
-
-	return tmp;
+	union {
+		struct __raw_tickets tickets;
+		__ticketpair_t slock;
+	} tmp, new;
+	int ret;
+
+	tmp.tickets = ACCESS_ONCE(lock->tickets);
+	if (tmp.tickets.head != tmp.tickets.tail)
+		return 0;
+
+	new.slock = tmp.slock + (1 << TICKET_SHIFT);
+
+	ret = cmpxchg(&lock->ticketpair, tmp.slock, new.slock) == tmp.slock;
+	barrier();		/* just make nothing creeps before lock is claimed */
+
+	return ret;
 }
-#endif
 
 static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 {
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 4582640..48dafc3 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -9,8 +9,10 @@
 
 #if (CONFIG_NR_CPUS < 256)
 typedef u8  __ticket_t;
+typedef u16 __ticketpair_t;
 #else
 typedef u16 __ticket_t;
+typedef u32 __ticketpair_t;
 #endif
 
 #define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
@@ -19,6 +21,7 @@ typedef u16 __ticket_t;
 typedef struct arch_spinlock {
 	union {
 		unsigned int slock;
+		__ticketpair_t ticketpair;
 		struct __raw_tickets {
 			__ticket_t head, tail;
 		} tickets;
-- 
1.7.1.1



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [Xen-devel] [PATCH RFC 03/12] x86/ticketlock: Use C for __ticket_spin_unlock
  2010-07-17  1:03 ` [PATCH RFC 03/12] x86/ticketlock: Use C for __ticket_spin_unlock Jeremy Fitzhardinge
@ 2010-07-20 15:38   ` Konrad Rzeszutek Wilk
  2010-07-20 16:17     ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 32+ messages in thread
From: Konrad Rzeszutek Wilk @ 2010-07-20 15:38 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Linux Kernel Mailing List, Nick Piggin, Peter Zijlstra,
	Xen-devel, Avi Kivity, Jan Beulich

> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
> @@ -33,9 +33,23 @@
>   * On PPro SMP or if we are using OOSTORE, we use a locked operation to unlock
>   * (PPro errata 66, 92)
>   */
> -# define UNLOCK_LOCK_PREFIX LOCK_PREFIX
> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
> +{
> +	if (sizeof(lock->tickets.head) == sizeof(u8))
> +		asm (LOCK_PREFIX "incb %0"
> +		     : "+m" (lock->tickets.head) : : "memory");
> +	else
> +		asm (LOCK_PREFIX "incw %0"
> +		     : "+m" (lock->tickets.head) : : "memory");

Should those be 'asm volatile' to make them barriers as well? Or do we
not have to worry about that on a Pentium Pro SMP?

> +
> +}
>  #else
> -# define UNLOCK_LOCK_PREFIX
> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
> +{
> +	barrier();
> +	lock->tickets.head++;
> +	barrier();
> +}

Got a question:
This extra barrier() (which I see gets removed in git tree) was
done b/c the function is inlined and hence the second barrier() inhibits
gcc from re-ordering __ticket_spin_unlock instructions? Which is a big
pre-requisite in patch 7 where this function expands to:


 static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 {
       __ticket_t next = lock->tickets.head + 1; // This code
is executed before the lock->tickets.head++ b/c of the 1st barrier?
Or would it be done regardless b/c gcc sees the data dependency here?

        __ticket_unlock_release(lock); <- expands to
"barrier();lock->tickets.head++;barrier()" 

+       __ticket_unlock_kick(lock, next);   <- so now the second barrier()
affects this code, so it won't re-order the lock->tickets.head++ to be called
after this function?


This barrier ("asm volatile("" : : : "memory")); from what I've been reading
says : "Don't re-order the instructions within this scope and starting
right below me."?  Or is it just within the full scope of the
function/code logic regardless of the 'inline' defined in one of them?

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Xen-devel] [PATCH RFC 03/12] x86/ticketlock: Use C for __ticket_spin_unlock
  2010-07-20 15:38   ` [Xen-devel] " Konrad Rzeszutek Wilk
@ 2010-07-20 16:17     ` Jeremy Fitzhardinge
  2010-08-06 17:47       ` H. Peter Anvin
  0 siblings, 1 reply; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-07-20 16:17 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Linux Kernel Mailing List, Nick Piggin, Peter Zijlstra,
	Xen-devel, Avi Kivity, Jan Beulich

On 07/20/2010 08:38 AM, Konrad Rzeszutek Wilk wrote:
>> --- a/arch/x86/include/asm/spinlock.h
>> +++ b/arch/x86/include/asm/spinlock.h
>> @@ -33,9 +33,23 @@
>>   * On PPro SMP or if we are using OOSTORE, we use a locked operation to unlock
>>   * (PPro errata 66, 92)
>>   */
>> -# define UNLOCK_LOCK_PREFIX LOCK_PREFIX
>> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
>> +{
>> +	if (sizeof(lock->tickets.head) == sizeof(u8))
>> +		asm (LOCK_PREFIX "incb %0"
>> +		     : "+m" (lock->tickets.head) : : "memory");
>> +	else
>> +		asm (LOCK_PREFIX "incw %0"
>> +		     : "+m" (lock->tickets.head) : : "memory");
>>     
> Should those be 'asm volatile' to make them barriers as well? Or do we
> not have to worry about that on a Pentium Pro SMP?
>   

"volatile" would be a compiler barrier, but it has no direct effect on,
or relevance to, the CPU.  It just cares about the LOCK_PREFIX.  The
"memory" clobber is probably unnecessary as well, since the constraints
already tell the compiler the most important information.  We can add
barriers separately as needed.

>> +
>> +}
>>  #else
>> -# define UNLOCK_LOCK_PREFIX
>> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
>> +{
>> +	barrier();
>> +	lock->tickets.head++;
>> +	barrier();
>> +}
>>     
> Got a question:
> This extra barrier() (which I see gets removed in git tree) was
> done b/c the function is inlined and hence the second barrier() inhibits
> gcc from re-ordering __ticket_spin_unlock instructions? Which is a big
> pre-requisite in patch 7 where this function expands to:
>   

Yes, I removed the barriers here so that the compiler can combine the
unlocking "inc" with getting "next" for unlock_kick.  There's no
constraint on what instructions the compiler can use to do the unlock so
long as it ends up writing a byte value to the right location.

>  static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
>  {
>        __ticket_t next = lock->tickets.head + 1; // This code
> is executed before the lock->tickets.head++ b/c of the 1st barrier?
> Or would it be done regardless b/c gcc sees the data dependency here?
>
>         __ticket_unlock_release(lock); <- expands to
> "barrier();lock->tickets.head++;barrier()" 
>
> +       __ticket_unlock_kick(lock, next);   <- so now the second barrier()
> affects this code, so it won't re-order the lock->tickets.head++ to be called
> after this function?
>
>
> This barrier ("asm volatile("" : : : "memory")); from what I've been reading
> says : "Don't re-order the instructions within this scope and starting
> right below me."?  Or is it just within the full scope of the
> function/code logic regardless of the 'inline' defined in one of them?
>   

The barrier affects the entire instruction stream, regardless of the
source from which it was generated.  So inlining will bring the
function's instructions into the caller's stream, and the compiler will
freely reorder them as it sees fit - and the barrier adds a restraint to
that.
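
To make this concrete, here is a minimal user-space sketch (illustrative
only, not the kernel's actual unlock path) of how barrier() constrains the
combined instruction stream once release() has been inlined:

/* barrier() in the kernel is an empty asm with a "memory" clobber, which
 * forbids the compiler from moving memory accesses across it. */
#define barrier() asm volatile("" : : : "memory")

static inline void release(unsigned char *head)
{
	barrier();	/* earlier accesses may not be moved below this */
	(*head)++;
	barrier();	/* later accesses may not be moved above this */
}

void unlock_example(unsigned char *head, int *shared)
{
	*shared = 0;	/* critical-section store */
	/* After inlining, this store and the increment sit in one
	 * instruction stream; the barriers still keep the store from
	 * being reordered past the head increment. */
	release(head);
}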

    J

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC 02/12] x86/ticketlock: convert spin loop to C
  2010-07-17  1:03 ` [PATCH RFC 02/12] x86/ticketlock: convert spin loop to C Jeremy Fitzhardinge
@ 2010-08-02 15:07   ` Peter Zijlstra
  2010-08-02 15:17     ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 32+ messages in thread
From: Peter Zijlstra @ 2010-08-02 15:07 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel

On Fri, 2010-07-16 at 18:03 -0700, Jeremy Fitzhardinge wrote:
> +       register union {
> +               struct __raw_tickets tickets;
> +               unsigned short slock;
> +       } inc = { .slock = 1 << TICKET_SHIFT };

  register arch_spinlock_t inc = { .tickets = { .head = 1, .tail = 0 } };

From a quick look you can basically replace all TICKET_SHIFT usage (1 <<
TICKET_SHIFT) with such a constant.

[ Also, does gcc really listen to the register hint these days? ]

> +       asm volatile (LOCK_PREFIX "xaddw %w0, %1\n"
> +                     : "+Q" (inc), "+m" (lock->slock) : : "memory", "cc");

 "+Q" (inc->slock)

> +       for (;;) {
> +               if (inc.tickets.head == inc.tickets.tail)
> +                       return;
> +               cpu_relax();
> +               inc.tickets.head = ACCESS_ONCE(lock->tickets.head);
> +       }
> +       barrier();              /* make sure nothing creeps before the lock is taken */
>  } 

How will it ever get to that barrier() ?

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC 02/12] x86/ticketlock: convert spin loop to C
  2010-08-02 15:07   ` Peter Zijlstra
@ 2010-08-02 15:17     ` Jeremy Fitzhardinge
  2010-08-06 12:43       ` Jan Beulich
  0 siblings, 1 reply; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-08-02 15:17 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel

  On 08/02/2010 08:07 AM, Peter Zijlstra wrote:
> On Fri, 2010-07-16 at 18:03 -0700, Jeremy Fitzhardinge wrote:
>> +       register union {
>> +               struct __raw_tickets tickets;
>> +               unsigned short slock;
>> +       } inc = { .slock = 1<<  TICKET_SHIFT };
>    register arch_spinlock_t inc = { .tickets = { .head = 1, .tail = 0 } };
>
> > From a quick look you can basically replace all TICKET_SHIFT usage (1<<
> TICKET_SHIFT) with such a constant.

Mostly.  In the later patch to convert trylock in to C, you need it to 
construct an argument for cmpxchg (which can only take a scalar, even if 
it does have a struct packed into it).

> [ Also, does gcc really listen to the register hint these days? ]

It doesn't make much difference in this case.  I think the only real 
effect is that it's illegal to take the address of a register variable.

>> +       asm volatile (LOCK_PREFIX "xaddw %w0, %1\n"
>> +                     : "+Q" (inc), "+m" (lock->slock) : : "memory", "cc");
>   "+Q" (inc->slock)
>
>> +       for (;;) {
>> +               if (inc.tickets.head == inc.tickets.tail)
>> +                       return;
>> +               cpu_relax();
>> +               inc.tickets.head = ACCESS_ONCE(lock->tickets.head);
>> +       }
>> +       barrier();              /* make sure nothing creeps before the lock is taken */
>>   }
> How will it ever get to that barrier() ?

The compiler treats this as being:

	for (;;) {
		if (inc.tickets.head == inc.tickets.tail)
			goto out;
		...
	}
out:	barrier();
}

(Which would probably be a reasonable way to clarify the code.)

Without the barrier there's a risk of locked-region code being scheduled 
before the for(;;) loop.

     J

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC 10/12] x86/pvticketlock: keep count of blocked cpus
  2010-07-17  1:03 ` [PATCH RFC 10/12] x86/pvticketlock: keep count of blocked cpus Jeremy Fitzhardinge
@ 2010-08-03  8:32   ` Peter Zijlstra
  2010-08-03  9:44     ` Nick Piggin
  2010-08-03 15:45     ` Jeremy Fitzhardinge
  0 siblings, 2 replies; 32+ messages in thread
From: Peter Zijlstra @ 2010-08-03  8:32 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel

On Fri, 2010-07-16 at 18:03 -0700, Jeremy Fitzhardinge wrote:
> @@ -26,6 +26,9 @@ typedef struct arch_spinlock {
>                         __ticket_t head, tail;
>                 } tickets;
>         };
> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
> +       __ticket_t waiting;
> +#endif
>  } arch_spinlock_t; 

This bloats spinlock_t from u32 to u64 on most distro configs I think,
since they'll have NR_CPUS=4096 or something large like that and
probably also want to have this PARAVIRT_SPINLOCKS thing.
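
For illustration, a stand-alone mock-up (a hypothetical user-space sketch,
not the kernel structure verbatim) of where the extra word comes from once
__ticket_t is u16 and a "waiting" field is appended:

#include <stdint.h>
#include <stdio.h>

typedef uint16_t __ticket_t;	/* CONFIG_NR_CPUS >= 256 */

typedef struct arch_spinlock {
	union {
		unsigned int slock;
		struct { __ticket_t head, tail; } tickets;
	};
	__ticket_t waiting;	/* the field added by this patch */
} arch_spinlock_t;

int main(void)
{
	/* prints 8 on typical x86: 4-byte union + 2-byte field + padding */
	printf("%zu\n", sizeof(arch_spinlock_t));
	return 0;
}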

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC 10/12] x86/pvticketlock: keep count of blocked cpus
  2010-08-03  8:32   ` Peter Zijlstra
@ 2010-08-03  9:44     ` Nick Piggin
  2010-08-03 15:45     ` Jeremy Fitzhardinge
  1 sibling, 0 replies; 32+ messages in thread
From: Nick Piggin @ 2010-08-03  9:44 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Jeremy Fitzhardinge, Linux Kernel Mailing List, Nick Piggin,
	Jan Beulich, Avi Kivity, Xen-devel

On Tue, Aug 03, 2010 at 10:32:40AM +0200, Peter Zijlstra wrote:
> On Fri, 2010-07-16 at 18:03 -0700, Jeremy Fitzhardinge wrote:
> > @@ -26,6 +26,9 @@ typedef struct arch_spinlock {
> >                         __ticket_t head, tail;
> >                 } tickets;
> >         };
> > +#ifdef CONFIG_PARAVIRT_SPINLOCKS
> > +       __ticket_t waiting;
> > +#endif
> >  } arch_spinlock_t; 
> 
> This bloats spinlock_t from u32 to u64 on most distro configs I think,
> since they'll have NR_CPUS=4096 or something large like that and
> probably also want to have this PARAVIRT_SPINLOCKS thing.

Which sucks for carefully packed data structures like dentry.
They'll tend to grow by 8 bytes per spinlock.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC 10/12] x86/pvticketlock: keep count of blocked cpus
  2010-08-03  8:32   ` Peter Zijlstra
  2010-08-03  9:44     ` Nick Piggin
@ 2010-08-03 15:45     ` Jeremy Fitzhardinge
  1 sibling, 0 replies; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-08-03 15:45 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux Kernel Mailing List, Nick Piggin, Jan Beulich, Avi Kivity,
	Xen-devel

  On 08/03/2010 01:32 AM, Peter Zijlstra wrote:
> On Fri, 2010-07-16 at 18:03 -0700, Jeremy Fitzhardinge wrote:
>> @@ -26,6 +26,9 @@ typedef struct arch_spinlock {
>>                          __ticket_t head, tail;
>>                  } tickets;
>>          };
>> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
>> +       __ticket_t waiting;
>> +#endif
>>   } arch_spinlock_t;
> This bloats spinlock_t from u32 to u64 on most distro configs I think,
> since they'll have NR_CPUS=4096 or something large like that and
> probably also want to have this PARAVIRT_SPINLOCKS thing.

Yes, it is very unfortunate.  In principle I could make it work with a 
single bit: set it when a cpu blocks and will need kicking; clear it 
when the lock becomes uncontended again (since everything is FIFO, if 
something blocks itself there's a good chance that everything afterwards 
will also decide to block).  But a bit takes as much space as a word 
(and it isn't obvious to me how to implement the "clearing when it 
becomes uncontended" part in a race-free way). (But see below for some 
handwaving)

I could store this out in a secondary structure, but it really needs to 
be efficient to access from the unlock fast-path (to determine whether 
it needs to do the slow-path kick), so something out of line isn't going 
to work.

Without the separate "waiting" counter, the previous code just checked 
to see if anyone else had a ticket on the lock.  This is too 
pessimistic, since it isn't all that uncommon to have someone else who 
has been waiting on the lock for a little while, but has not yet decided 
to block themselves, and it results in many spurious calls into the 
unlock slow path.

I was thinking of getting a bit in the ticket lock structure by stealing 
a counter bit: halve the supported number of cpus (128 for byte, 32k for 
word), add/sub 2, and use the LSB for the "needs kicking" flag.  And 
that actually gives us two bits to play with, which may be useful for 
dealing with the clearing race (perhaps can be done cleverly in the 
unlock itself, or something).  But I haven't thought this through in detail.
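
A rough sketch of that bit-stealing idea (hypothetical names and values,
not code from this series):

#include <stdbool.h>
#include <stdint.h>

typedef uint8_t __ticket_t;	/* byte tickets now cover 128 cpus, not 256 */

#define TICKET_LOCK_INC		2	/* tickets advance in steps of 2...  */
#define TICKET_SLOWPATH_FLAG	1	/* ...freeing the LSB as a flag bit  */

/* Set by a waiter that decides to block; tested on unlock to decide
 * whether the slow-path kick is needed at all. */
static inline bool ticket_needs_kick(__ticket_t t)
{
	return t & TICKET_SLOWPATH_FLAG;
}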

     J

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC 02/12] x86/ticketlock: convert spin loop to C
  2010-08-02 15:17     ` Jeremy Fitzhardinge
@ 2010-08-06 12:43       ` Jan Beulich
  2010-08-06 14:53         ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 32+ messages in thread
From: Jan Beulich @ 2010-08-06 12:43 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Peter Zijlstra
  Cc: Xen-devel, Avi Kivity, Nick Piggin, Linux Kernel Mailing List

>>> On 02.08.10 at 17:17, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
> On 08/02/2010 08:07 AM, Peter Zijlstra wrote:
>> On Fri, 2010-07-16 at 18:03 -0700, Jeremy Fitzhardinge wrote:
>>> +       asm volatile (LOCK_PREFIX "xaddw %w0, %1\n"
>>> +                     : "+Q" (inc), "+m" (lock->slock) : : "memory", "cc");
>>   "+Q" (inc->slock)
>>
>>> +       for (;;) {
>>> +               if (inc.tickets.head == inc.tickets.tail)
>>> +                       return;
>>> +               cpu_relax();
>>> +               inc.tickets.head = ACCESS_ONCE(lock->tickets.head);
>>> +       }
>>> +       barrier();              /* make sure nothing creeps before the lock 
> is taken */
>>>   }
>> How will it ever get to that barrier() ?
> 
> The compiler treats this as being:

You certainly mean "the compiler currently treats this as being:" - I
don't think there's a guarantee it'll always be doing so.

> 	for (;;) {
> 		if (inc.tickets.head == inc.tickets.tail)
> 			goto out;
> 		...
> 	}
> out:	barrier();
> }
> 
> (Which would probably be a reasonable way to clarify the code.)

I therefore think it needs to be written this way.

Jan


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC 02/12] x86/ticketlock: convert spin loop to C
  2010-08-06 12:43       ` Jan Beulich
@ 2010-08-06 14:53         ` Jeremy Fitzhardinge
  2010-08-06 20:17           ` H. Peter Anvin
  0 siblings, 1 reply; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-08-06 14:53 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Peter Zijlstra, Xen-devel, Avi Kivity, Nick Piggin,
	Linux Kernel Mailing List

  On 08/06/2010 05:43 AM, Jan Beulich wrote:
> You certainly mean "the compiler currently treats this as being:" - I
> don't think there's a guarantee it'll always be doing so.
>
>> 	for (;;) {
>> 		if (inc.tickets.head == inc.tickets.tail)
>> 			goto out;
>> 		...
>> 	}
>> out:	barrier();
>> }
>>
>> (Which would probably be a reasonable way to clarify the code.)
> I therefore think it needs to be written this way.

Agreed.

     J

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Xen-devel] [PATCH RFC 03/12] x86/ticketlock: Use C for __ticket_spin_unlock
  2010-07-20 16:17     ` Jeremy Fitzhardinge
@ 2010-08-06 17:47       ` H. Peter Anvin
  2010-08-06 20:03         ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 32+ messages in thread
From: H. Peter Anvin @ 2010-08-06 17:47 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Konrad Rzeszutek Wilk, Linux Kernel Mailing List, Nick Piggin,
	Peter Zijlstra, Xen-devel, Avi Kivity, Jan Beulich

On 07/20/2010 09:17 AM, Jeremy Fitzhardinge wrote:
> 
> "volatile" would be a compiler barrier, but it has no direct effect on,
> or relevance to, the CPU.  It just cares about the LOCK_PREFIX.  The
> "memory" clobber is probably unnecessary as well, since the constraints
> already tell the compiler the most important information.  We can add
> barriers separately as needed.
> 

You absolutely need volatile, since otherwise you're permitting the
compiler to split, re-execute or even drop the code.  Anything else
might work, by accident, but it's not clean.

	-hpa

-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Xen-devel] [PATCH RFC 03/12] x86/ticketlock: Use C for __ticket_spin_unlock
  2010-08-06 17:47       ` H. Peter Anvin
@ 2010-08-06 20:03         ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-08-06 20:03 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Konrad Rzeszutek Wilk, Linux Kernel Mailing List, Nick Piggin,
	Peter Zijlstra, Xen-devel, Avi Kivity, Jan Beulich

  On 08/06/2010 10:47 AM, H. Peter Anvin wrote:
> On 07/20/2010 09:17 AM, Jeremy Fitzhardinge wrote:
>> "volatile" would be a compiler barrier, but it has no direct effect on,
>> or relevance to, the CPU.  It just cares about the LOCK_PREFIX.  The
>> "memory" clobber is probably unnecessary as well, since the constraints
>> already tell the compiler the most important information.  We can add
>> barriers separately as needed.
>>
> You absolutely need volatile, since otherwise you're permitting the
> compiler to split, re-execute or even drop the code.  Anything else
> might work, by accident, but it's not clean.

I don't think so in this case.  The instructions in question are 
basically lock->waiters++/--; the only reason they need to be asm is 
that they're locked.  But I'm not relying on them for any kind of 
compiler or cpu ordering or barrier.  Where ordering is important, I 
have explicit barrier()s to enforce it.

     J

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC 02/12] x86/ticketlock: convert spin loop to C
  2010-08-06 14:53         ` Jeremy Fitzhardinge
@ 2010-08-06 20:17           ` H. Peter Anvin
  2010-08-06 20:33             ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 32+ messages in thread
From: H. Peter Anvin @ 2010-08-06 20:17 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Jan Beulich, Peter Zijlstra, Xen-devel, Avi Kivity, Nick Piggin,
	Linux Kernel Mailing List

On 08/06/2010 07:53 AM, Jeremy Fitzhardinge wrote:
> On 08/06/2010 05:43 AM, Jan Beulich wrote:
>> You certainly mean "the compiler currently treats this as being:" - I
>> don't think there's a guarantee it'll always be doing so.
>>
>>> for (;;) {
>>> if (inc.tickets.head == inc.tickets.tail)
>>> goto out;
>>> ...
>>> }
>>> out: barrier();
>>> }
>>>
>>> (Which would probably be a reasonable way to clarify the code.)
>> I therefore think it needs to be written this way.
>
> Agreed.
>

A call/return to an actual out-of-line function is a barrier (and will 
always be a barrier, as those are the fundamental ABI sequence points), but 
to an inline function it is not.
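
As an aside, a small illustrative sketch of that difference (generic C,
hypothetical function names):

extern void out_of_line(void);		/* body not visible to the compiler */
static inline void in_line(void) { }

void caller(int *p)
{
	*p = 1;
	out_of_line();	/* the store above must be emitted before the call,
			 * since the callee might read or write *p */
	*p = 2;
	in_line();	/* no such constraint: with the empty inline body,
			 * the compiler may drop this dead store */
	*p = 3;
}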

	-hpa

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC 02/12] x86/ticketlock: convert spin loop to C
  2010-08-06 20:17           ` H. Peter Anvin
@ 2010-08-06 20:33             ` Jeremy Fitzhardinge
  2010-08-06 21:09               ` H. Peter Anvin
  0 siblings, 1 reply; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-08-06 20:33 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Jan Beulich, Peter Zijlstra, Xen-devel, Avi Kivity, Nick Piggin,
	Linux Kernel Mailing List

  On 08/06/2010 01:17 PM, H. Peter Anvin wrote:
> On 08/06/2010 07:53 AM, Jeremy Fitzhardinge wrote:
>> On 08/06/2010 05:43 AM, Jan Beulich wrote:
>>> You certainly mean "the compiler currently treats this as being:" - I
>>> don't think there's a guarantee it'll always be doing so.
>>>
>>>> for (;;) {
>>>> if (inc.tickets.head == inc.tickets.tail)
>>>> goto out;
>>>> ...
>>>> }
>>>> out: barrier();
>>>> }
>>>>
>>>> (Which would probably be a reasonable way to clarify the code.)
>>> I therefore think it needs to be written this way.
>>
>> Agreed.
>>
>
> A call/return to an actual out-of-line function is a barrier (and will 
> always be a barrier, as it is the fundamental ABI sequence points), 
> but to an inline function it is not.

Yes. So the goto explicitly puts the barrier into the control flow which 
should stop the compiler from doing anything unexpected.

     J

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC 02/12] x86/ticketlock: convert spin loop to C
  2010-08-06 20:33             ` Jeremy Fitzhardinge
@ 2010-08-06 21:09               ` H. Peter Anvin
  2010-08-06 22:03                 ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 32+ messages in thread
From: H. Peter Anvin @ 2010-08-06 21:09 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Jan Beulich, Peter Zijlstra, Xen-devel, Avi Kivity, Nick Piggin,
	Linux Kernel Mailing List

On 08/06/2010 01:33 PM, Jeremy Fitzhardinge wrote:
> On 08/06/2010 01:17 PM, H. Peter Anvin wrote:
>> On 08/06/2010 07:53 AM, Jeremy Fitzhardinge wrote:
>>> On 08/06/2010 05:43 AM, Jan Beulich wrote:
>>>> You certainly mean "the compiler currently treats this as being:" - I
>>>> don't think there's a guarantee it'll always be doing so.
>>>>
>>>>> for (;;) {
>>>>> if (inc.tickets.head == inc.tickets.tail)
>>>>> goto out;
>>>>> ...
>>>>> }
>>>>> out: barrier();
>>>>> }
>>>>>
>>>>> (Which would probably be a reasonable way to clarify the code.)
>>>> I therefore think it needs to be written this way.
>>>
>>> Agreed.
>>>
>>
>> A call/return to an actual out-of-line function is a barrier (and will
>> always be a barrier, as it is the fundamental ABI sequence points),
>> but to an inline function it is not.
>
> Yes. So the goto explicitly puts the barrier into the control flow which
> should stop the compiler from doing anything unexpected.
>

In this particular case, though, I would somewhat expect the more 
conventional:

while (inc.tickets.head != inc.tickets.tail) {
	cpu_relax();
	inc.tickets.head = ACCESS_ONCE(lock->tickets.head);
}

	-hpa

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC 02/12] x86/ticketlock: convert spin loop to C
  2010-08-06 21:09               ` H. Peter Anvin
@ 2010-08-06 22:03                 ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-08-06 22:03 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Jan Beulich, Peter Zijlstra, Xen-devel, Avi Kivity, Nick Piggin,
	Linux Kernel Mailing List

  On 08/06/2010 02:09 PM, H. Peter Anvin wrote:
> On 08/06/2010 01:33 PM, Jeremy Fitzhardinge wrote:
>> On 08/06/2010 01:17 PM, H. Peter Anvin wrote:
>>> On 08/06/2010 07:53 AM, Jeremy Fitzhardinge wrote:
>>>> On 08/06/2010 05:43 AM, Jan Beulich wrote:
>>>>> You certainly mean "the compiler currently treats this as being:" - I
>>>>> don't think there's a guarantee it'll always be doing so.
>>>>>
>>>>>> for (;;) {
>>>>>> if (inc.tickets.head == inc.tickets.tail)
>>>>>> goto out;
>>>>>> ...
>>>>>> }
>>>>>> out: barrier();
>>>>>> }
>>>>>>
>>>>>> (Which would probably be a reasonable way to clarify the code.)
>>>>> I therefore think it needs to be written this way.
>>>>
>>>> Agreed.
>>>>
>>>
>>> A call/return to an actual out-of-line function is a barrier (and will
>>> always be a barrier, as it is the fundamental ABI sequence points),
>>> but to an inline function it is not.
>>
>> Yes. So the goto explicitly puts the barrier into the control flow which
>> should stop the compiler from doing anything unexpected.
>>
>
> In this particular case, though, I would somewhat expect the more 
> conventional:
>
> while (inc.tickets.head != inc.tickets.tail) {
>     cpu_relax();
>     inc.tickets.head = ACCESS_ONCE(lock->tickets.head);
> }

Yes, that makes sense for the plain spinlock version.  But the full 
code, including the pv-ticketlock spin timeout, ends up being:

static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
{
	register struct __raw_tickets inc;

	inc = __ticket_spin_claim(lock);

	for (;;) {
		unsigned count = SPIN_THRESHOLD;

		do {
			if (inc.head == inc.tail)
				goto out;
			cpu_relax();
			inc.head = ACCESS_ONCE(lock->tickets.head);
		} while (--count);
		__ticket_lock_spinning(lock, inc.tail);
	}
out:	barrier();		/* make sure nothing creeps before the lock is taken */
}

So the goto form is closer to the final form.  If it weren't for this, 
I'd also prefer the while() form.

(If you config PARAVIRT_SPINLOCKS=n, then __ticket_lock_spinning() 
becomes an empty inline, which causes gcc to collapse the whole thing 
into a simple infinite loop (ie, it eliminates "count" and the inner loop).)


     J

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC 09/12] xen/pvticketlock: Xen implementation for PV ticket locks
  2010-07-17  1:03 ` [PATCH RFC 09/12] xen/pvticketlock: Xen implementation for PV ticket locks Jeremy Fitzhardinge
@ 2010-09-26 11:39   ` Srivatsa Vaddagiri
  2010-09-26 22:34     ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 32+ messages in thread
From: Srivatsa Vaddagiri @ 2010-09-26 11:39 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Linux Kernel Mailing List, Nick Piggin, Peter Zijlstra,
	Jan Beulich, Avi Kivity, Xen-devel, suzuki

On Fri, Jul 16, 2010 at 06:03:04PM -0700, Jeremy Fitzhardinge wrote:
> Replace the old Xen implementation of PV spinlocks with an implementation
> of xen_lock_spinning and xen_unlock_kick.

I see that the old implementation took care of a spinlock() call being
interrupted by another spinlock (in interrupt handler), by saving/restoring 
old lock of interest. We don't seem to be doing that in this new version?
Won't that lead to loss of wakeup -> hang?

Also are you planning to push this series into mainline sometime soon?

- vatsa

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC 09/12] xen/pvticketlock: Xen implementation for PV ticket locks
  2010-09-26 11:39   ` Srivatsa Vaddagiri
@ 2010-09-26 22:34     ` Jeremy Fitzhardinge
  2011-01-18 16:27       ` Srivatsa Vaddagiri
  0 siblings, 1 reply; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2010-09-26 22:34 UTC (permalink / raw)
  To: vatsa
  Cc: Linux Kernel Mailing List, Nick Piggin, Peter Zijlstra,
	Jan Beulich, Avi Kivity, Xen-devel, suzuki,
	Konrad Rzeszutek Wilk

 On 09/26/2010 04:39 AM, Srivatsa Vaddagiri wrote:
> On Fri, Jul 16, 2010 at 06:03:04PM -0700, Jeremy Fitzhardinge wrote:
>> Replace the old Xen implementation of PV spinlocks with an implementation
>> of xen_lock_spinning and xen_unlock_kick.
> I see that the old implementation took care of a spinlock() call being
> interrupted by another spinlock (in interrupt handler), by saving/restoring 
> old lock of interest. We don't seem to be doing that in this new version?
> Won't that lead to loss of wakeup -> hang?

No, interrupts are disabled while waiting to take the lock, so it isn't
possible for an interrupt to come in.  With the old-style locks it was
reasonable to leave interrupts enabled while spinning, but with ticket
locks it isn't.

(I have some prototype patches to implement nested spinning of ticket
locks, by allowing the nested taker to steal the queue position of the
outer lock-taker, and switch its ticket with a later one.  But there's a
fundamental problem with the idea: each lock taker needs to take a
ticket.  If you don't allow nesting, then the max amount of tickets
needed = number of cpus-1; however, with nesting, the max number of
tickets = ncpus * max-nesting-depth, so the size of the ticket type must
be larger for a given number of cpus, or the max number of cpus must be
reduced.)
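
(Purely illustrative back-of-the-envelope numbers for that sizing
constraint, with a made-up nesting depth:)

#define NCPUS		256
#define NEST_DEPTH	4			/* hypothetical max depth     */
#define MAX_NO_NEST	(NCPUS - 1)		/* 255: fits a u8 ticket      */
#define MAX_WITH_NEST	(NCPUS * NEST_DEPTH)	/* 1024: needs a wider ticket */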

> Also are you planning to push this series into mainline sometime soon?
>

I was planning on sending it out for another round of review shortly; I
got no comments on it at all the first time around.

    J

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC 09/12] xen/pvticketlock: Xen implementation for PV ticket locks
  2010-09-26 22:34     ` Jeremy Fitzhardinge
@ 2011-01-18 16:27       ` Srivatsa Vaddagiri
  2011-01-19  1:28         ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 32+ messages in thread
From: Srivatsa Vaddagiri @ 2011-01-18 16:27 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Linux Kernel Mailing List, Nick Piggin, Peter Zijlstra,
	Jan Beulich, Avi Kivity, Xen-devel, suzuki,
	Konrad Rzeszutek Wilk

On Sun, Sep 26, 2010 at 03:34:55PM -0700, Jeremy Fitzhardinge wrote:
>  On 09/26/2010 04:39 AM, Srivatsa Vaddagiri wrote:
> > On Fri, Jul 16, 2010 at 06:03:04PM -0700, Jeremy Fitzhardinge wrote:
> >> Replace the old Xen implementation of PV spinlocks with an implementation
> >> of xen_lock_spinning and xen_unlock_kick.
> > I see that the old implementation took care of a spinlock() call being
> > interrupted by another spinlock (in interrupt handler), by saving/restoring 
> > old lock of interest. We don't seem to be doing that in this new version?
> > Won't that lead to loss of wakeup -> hang?

Sorry about coming back late on this, but as I was looking at the most recent
version of pv-ticketlocks, this came up in my mind again ..

> No, interrupts are disabled while waiting to take the lock, so it isn't
> possible for an interrupt to come in.

Where are we disabling interrupts? Is it in xen_poll_irq()?

>  With the old-style locks it was
> reasonable to leave interrupts enabled while spinning, but with ticket
> locks it isn't.
> 
>> (I have some prototype patches to implement nested spinning of ticket
> locks,

Hmm .. where is nested spinning allowed/possible? Process context will have
disabled interrupts/bh, keeping them from wanting the same (spin-)lock it is
trying to acquire?

> by allowing the nested taker to steal the queue position of the
> outer lock-taker, and switch its ticket with a later one.  But there's a
> fundamental problem with the idea: each lock taker needs to take a
> ticket.  If you don't allow nesting, then the max amount of tickets
> needed = number of cpus-1; however, with nesting, the max number of
> tickets = ncpus * max-nesting-depth, so the size of the ticket type must
> be larger for a given number of cpus, or the max number of cpus must be
> reduced.)

- vatsa

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC 09/12] xen/pvticketlock: Xen implementation for PV ticket locks
  2011-01-18 16:27       ` Srivatsa Vaddagiri
@ 2011-01-19  1:28         ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 32+ messages in thread
From: Jeremy Fitzhardinge @ 2011-01-19  1:28 UTC (permalink / raw)
  To: vatsa
  Cc: Linux Kernel Mailing List, Nick Piggin, Peter Zijlstra,
	Jan Beulich, Avi Kivity, Xen-devel, suzuki,
	Konrad Rzeszutek Wilk

On 01/18/2011 08:27 AM, Srivatsa Vaddagiri wrote:
>> No, interrupts are disabled while waiting to take the lock, so it isn't
>> possible for an interrupt to come in.
> Where are we disabling interrupts? Is it in xen_poll_irq()?

No, they're already disabled in the generic spinlock code. 
arch_spin_lock_flags() can re-enable them if it wants.

>>  With the old-style locks it was
>> reasonable to leave interrupts enabled while spinning, but with ticket
>> locks it isn't.
>>
>> (I haven some prototype patches to implement nested spinning of ticket
>> locks,
> Hmm ..where is nested spinning allowed/possible? Process context will
> disable interrupts/bh from wanting the same (spin-)lock it is trying to
> acquire?

If you're in an interrupt-enabled context at the time you're taking an
interrupt-safe spinlock (ie, using spin_lock_irq[save]), then it is (in
principle) valid to leave interrupts enabled until you actually acquire
the lock (obv you must avoid any window with the lock acquired and
interrupts enabled).

We did this with the old-style locks (both native and pv) - it seems
like it should be especially useful for interrupt latency if we end up
waiting for the lock a long time.  However, it can't be done with ticket
locks.  I also have no idea how often we ended up being able to do it in
practice anyway.
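
For context, a minimal sketch (an assumed shape, not the kernel's actual
old implementation) of how arch_spin_lock_flags() can spin with interrupts
enabled yet still take the lock with them disabled:

static inline void arch_spin_lock_flags_sketch(arch_spinlock_t *lock,
					       unsigned long flags)
{
	for (;;) {
		if (arch_spin_trylock(lock))
			return;			/* taken with irqs still off */
		if (flags & X86_EFLAGS_IF) {
			local_irq_enable();	/* wait with irqs enabled... */
			while (arch_spin_is_locked(lock))
				cpu_relax();
			local_irq_disable();	/* ...retry with irqs off    */
		} else {
			cpu_relax();
		}
	}
}

With a ticket lock this trylock-and-back-off pattern doesn't work: once a
waiter has taken its ticket it is committed to its queue position, so it
cannot drop out and retry with interrupts enabled.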

    J

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread

Thread overview: 32+ messages
2010-07-17  1:03 [PATCH RFC 00/12] X86 ticket lock cleanups and improvements Jeremy Fitzhardinge
2010-07-17  1:03 ` [PATCH RFC 03/12] x86/ticketlock: Use C for __ticket_spin_unlock Jeremy Fitzhardinge
2010-07-20 15:38   ` [Xen-devel] " Konrad Rzeszutek Wilk
2010-07-20 16:17     ` Jeremy Fitzhardinge
2010-08-06 17:47       ` H. Peter Anvin
2010-08-06 20:03         ` Jeremy Fitzhardinge
2010-07-17  1:03 ` [PATCH RFC 06/12] x86/ticketlock: make __ticket_spin_trylock common Jeremy Fitzhardinge
2010-07-17  1:03 ` [PATCH RFC 11/12] x86/pvticketlock: use callee-save for lock_spinning Jeremy Fitzhardinge
2010-07-17  1:03 ` [PATCH RFC 09/12] xen/pvticketlock: Xen implementation for PV ticket locks Jeremy Fitzhardinge
2010-09-26 11:39   ` Srivatsa Vaddagiri
2010-09-26 22:34     ` Jeremy Fitzhardinge
2011-01-18 16:27       ` Srivatsa Vaddagiri
2011-01-19  1:28         ` Jeremy Fitzhardinge
2010-07-17  1:03 ` [PATCH RFC 07/12] x86/spinlocks: replace pv spinlocks with pv ticketlocks Jeremy Fitzhardinge
2010-07-17  1:03 ` [PATCH RFC 04/12] x86/ticketlock: make large and small ticket versions of spin_lock the same Jeremy Fitzhardinge
2010-07-17  1:03 ` [PATCH RFC 01/12] x86/ticketlock: clean up types and accessors Jeremy Fitzhardinge
2010-07-17  1:03 ` [PATCH RFC 05/12] x86/ticketlock: make __ticket_spin_lock common Jeremy Fitzhardinge
2010-07-17  1:03 ` [PATCH RFC 10/12] x86/pvticketlock: keep count of blocked cpus Jeremy Fitzhardinge
2010-08-03  8:32   ` Peter Zijlstra
2010-08-03  9:44     ` Nick Piggin
2010-08-03 15:45     ` Jeremy Fitzhardinge
2010-07-17  1:03 ` [PATCH RFC 12/12] x86/pvticketlock: use callee-save for unlock_kick as well Jeremy Fitzhardinge
2010-07-17  1:03 ` [PATCH RFC 08/12] x86/ticketlock: collapse a layer of functions Jeremy Fitzhardinge
2010-07-17  1:03 ` [PATCH RFC 02/12] x86/ticketlock: convert spin loop to C Jeremy Fitzhardinge
2010-08-02 15:07   ` Peter Zijlstra
2010-08-02 15:17     ` Jeremy Fitzhardinge
2010-08-06 12:43       ` Jan Beulich
2010-08-06 14:53         ` Jeremy Fitzhardinge
2010-08-06 20:17           ` H. Peter Anvin
2010-08-06 20:33             ` Jeremy Fitzhardinge
2010-08-06 21:09               ` H. Peter Anvin
2010-08-06 22:03                 ` Jeremy Fitzhardinge
