* [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
From: Jeremy Fitzhardinge @ 2011-09-15  0:31 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

[ Changes since last posting:
  - fix bugs exposed by the cold light of testing
    - make the "slow flag" read in unlock cover the whole lock
      to force ordering WRT the unlock write
    - when kicking on unlock, only look for the CPU *we* released
      (ie, head value the unlock resulted in), rather than re-reading
      the new head and kicking on that basis
  - enable PV ticketlocks in Xen HVM guests
]

NOTE: this series is available in:
      git://github.com/jsgf/linux-xen.git upstream/pvticketlock-slowflag
and is based on the previously posted ticketlock cleanup series in
      git://github.com/jsgf/linux-xen.git upstream/ticketlock-cleanup

This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism.

Ticket locks have an inherent problem in a virtualized environment,
because the vCPUs are scheduled rather than running concurrently
(ignoring gang-scheduled vCPUs).  This can result in catastrophic performance
collapses when the vCPU scheduler doesn't schedule the correct "next"
vCPU, and ends up scheduling a vCPU which burns its entire timeslice
spinning.  (Note that this is not the same problem as lock-holder
preemption, which this series also addresses; that's also a problem,
but not catastrophic).

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

Currently we deal with this by having PV spinlocks, which add a layer
of indirection in front of all the spinlock functions and define a
completely new implementation for Xen (and for other pvops users, but
there are none at present).

PV ticketlocks keep the existing ticketlock implementation
(fastpath) as-is, but add a couple of pvops for the slow paths:

- If a CPU has been waiting on a spinlock for SPIN_THRESHOLD
  iterations, it calls out to the __ticket_lock_spinning() pvop,
  which allows a backend to block the vCPU rather than spinning.  This
  pvop can put the lock into "slowpath state".

- When releasing a lock, if it is in "slowpath state", call
  __ticket_unlock_kick() to kick the next vCPU in line awake.  If the
  lock is no longer contended, the unlock also clears the slowpath
  flag.  (Both hooks are sketched below.)
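
A rough sketch of the two hooks as they look with
CONFIG_PARAVIRT_SPINLOCKS=n (condensed from patches 2 and 8) - empty
stubs which compile away, so the native code is unaffected:

	/* With PARAVIRT_SPINLOCKS=y this becomes a pvop which lets the
	   backend (eg Xen) block the vCPU instead of spinning. */
	static __always_inline void
	__ticket_lock_spinning(arch_spinlock_t *lock, __ticket_t ticket)
	{
	}

	/* With PARAVIRT_SPINLOCKS=y this becomes a pvop which wakes
	   the vCPU waiting for "ticket", if any. */
	static inline void
	__ticket_unlock_kick(arch_spinlock_t *lock, __ticket_t ticket)
	{
	}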

The "slowpath state" is stored in the LSB of the within the lock
ticket.  This has the effect of reducing the max number of CPUs by
half (so, a "small ticket" can deal with 128 CPUs, and "large ticket"
32768).
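
Concretely, the increment and ticket width end up as follows (condensed
from patch 7's spinlock_types.h changes):

	/* Tickets are incremented by 2 when paravirtualized, leaving
	   the LSB of head/tail free for the slowpath flag. */
	#ifdef CONFIG_PARAVIRT_SPINLOCKS
	#define __TICKET_LOCK_INC	2
	#else
	#define __TICKET_LOCK_INC	1
	#endif

	/* 8-bit tickets give 256/2 = 128 distinct tickets (CPUs);
	   16-bit tickets give 65536/2 = 32768. */
	#if (CONFIG_NR_CPUS < (256 / __TICKET_LOCK_INC))
	typedef u8  __ticket_t;
	#else
	typedef u16 __ticket_t;
	#endif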

This series provides a Xen implementation, but it should be
straightforward to add a KVM implementation as well.

Overall, it results in a large reduction in code, it makes the native
and virtualized cases closer, and it removes a layer of indirection
around all the spinlock functions.

The fast path (taking an uncontended lock which isn't in "slowpath"
state) is optimal, identical to the non-paravirtualized case.

The inner part of ticket lock code becomes:
	inc = xadd(&lock->tickets, inc);
	inc.tail &= ~TICKET_SLOWPATH_FLAG;

	if (likely(inc.head == inc.tail))
		goto out;

	for (;;) {
		unsigned count = SPIN_THRESHOLD;

		do {
			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
				goto out;
			cpu_relax();
		} while (--count);
		__ticket_lock_spinning(lock, inc.tail);
	}
out:	barrier();

which results in:
	push   %rbp
	mov    %rsp,%rbp

	mov    $0x200,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f

	pop    %rbp
	retq   

	### SLOWPATH START
1:	and    $-2,%edx
	movzbl %dl,%esi

2:	mov    $0x800,%eax
	jmp    4f

3:	pause  
	sub    $0x1,%eax
	je     5f

4:	movzbl (%rdi),%ecx
	cmp    %cl,%dl
	jne    3b

	pop    %rbp
	retq   

5:	callq  *__ticket_lock_spinning
	jmp    2b
	### SLOWPATH END

With CONFIG_PARAVIRT_SPINLOCKS=n, the code changes only slightly: the
fastpath case is still straight through (taking the lock without
contention), and the spin loop is out of line:

	push   %rbp
	mov    %rsp,%rbp

	mov    $0x100,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f

	pop    %rbp
	retq   

	### SLOWPATH START
1:	pause  
	movzbl (%rdi),%eax
	cmp    %dl,%al
	jne    1b

	pop    %rbp
	retq   
	### SLOWPATH END

The unlock code is very straightforward:
	prev = *lock;
	__ticket_unlock_release(lock);
	if (unlikely(__ticket_in_slowpath(lock)))
		__ticket_unlock_slowpath(lock, prev);

which generates:
	push   %rbp
	mov    %rsp,%rbp

        movzwl (%rdi),%esi
	addb   $0x2,(%rdi)
        movzwl (%rdi),%eax
	testb  $0x1,%ah
	jne    1f

	pop    %rbp
	retq   

	### SLOWPATH START
1:	movzwl (%rdi),%edx
	movzbl %dh,%ecx
	mov    %edx,%eax
	and    $-2,%ecx	# clear TICKET_SLOWPATH_FLAG
	mov    %cl,%dh
	cmp    %dl,%cl	# test to see if lock is uncontended
	je     3f

2:	movzbl %dl,%esi
	callq  *__ticket_unlock_kick	# kick anyone waiting
	pop    %rbp
	retq   

3:	lock cmpxchg %dx,(%rdi)	# use cmpxchg to safely write back flag
	jmp    2b
	### SLOWPATH END

The unlock fastpath is pretty straightforward, but it is definitely
more complex than a simple "addb $1,(%rdi)", which is still generated
(and inlined) when PARAVIRT_SPINLOCKS is disabled.

Thoughts? Comments? Suggestions?

Thanks,
	J

Jeremy Fitzhardinge (9):
  x86/ticketlocks: remove obsolete comment
  x86/spinlocks: replace pv spinlocks with pv ticketlocks
  x86/ticketlock: don't inline _spin_unlock when using paravirt
    spinlocks
  x86/ticketlock: collapse a layer of functions
  xen/pvticketlock: Xen implementation for PV ticket locks
  x86/pvticketlock: use callee-save for lock_spinning
  x86/ticketlocks: when paravirtualizing ticket locks, increment by 2
  x86/ticketlock: add slowpath logic
  xen/pvticketlock: allow interrupts to be enabled while blocking

Stefano Stabellini (1):
  xen: enable PV ticketlocks on HVM Xen

 arch/x86/Kconfig                      |    3 +
 arch/x86/include/asm/paravirt.h       |   30 +---
 arch/x86/include/asm/paravirt_types.h |   10 +-
 arch/x86/include/asm/spinlock.h       |  160 ++++++++++++-----
 arch/x86/include/asm/spinlock_types.h |   16 ++-
 arch/x86/kernel/paravirt-spinlocks.c  |   16 +--
 arch/x86/xen/smp.c                    |    1 +
 arch/x86/xen/spinlock.c               |  315 ++++++++------------------------
 kernel/Kconfig.locks                  |    2 +-
 9 files changed, 217 insertions(+), 336 deletions(-)

-- 
1.7.6



* [PATCH 01/10] x86/ticketlocks: remove obsolete comment
From: Jeremy Fitzhardinge @ 2011-09-15  0:31 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

The note about partial registers is not really relevant now that we
rely on gcc to generate all the assembler.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h |    4 ----
 1 files changed, 0 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index f5695ee..972c260 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -49,10 +49,6 @@
  * issues and should be optimal for the uncontended case. Note the tail must be
  * in the high part, because a wide xadd increment of the low part would carry
  * up and contaminate the high part.
- *
- * With fewer than 2^8 possible CPUs, we can use x86's partial registers to
- * save some instructions and make the code more elegant. There really isn't
- * much between them in performance though, especially as locks are out of line.
  */
 static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 {
-- 
1.7.6



* [PATCH 02/10] x86/spinlocks: replace pv spinlocks with pv ticketlocks
From: Jeremy Fitzhardinge @ 2011-09-15  0:31 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow path (long spin on lock, unlocking
a contended lock).

Ticket locks have a number of nice properties, but they also have some
surprising behaviours in virtual environments.  They enforce a strict
FIFO ordering on cpus trying to take a lock; however, if the hypervisor
scheduler does not schedule the cpus in the correct order, the system can
waste a huge amount of time spinning until the next cpu can take the lock.

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

To address this, we add two hooks:
 - __ticket_lock_spinning(), which is called after the cpu has been
   spinning on the lock for a significant number of iterations but has
   failed to take the lock (presumably because the cpu holding the lock
   has been descheduled).  The lock_spinning pvop is expected to block
   the cpu until it has been kicked by the current lock holder.
 - __ticket_unlock_kick(), which is called on releasing a contended lock
   (there are more cpus with tail tickets); it looks to see if the next
   cpu is blocked and wakes it if so.

When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub
functions causes all the extra code to go away.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/paravirt.h       |   30 ++--------------
 arch/x86/include/asm/paravirt_types.h |   10 ++---
 arch/x86/include/asm/spinlock.h       |   59 ++++++++++++++++++++++++++-------
 arch/x86/include/asm/spinlock_types.h |    4 --
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +-------
 arch/x86/xen/spinlock.c               |    7 +++-
 6 files changed, 63 insertions(+), 62 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index a7d2db9..76cae7a 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -750,36 +750,14 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
 
-static inline int arch_spin_is_locked(struct arch_spinlock *lock)
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, __ticket_t ticket)
 {
-	return PVOP_CALL1(int, pv_lock_ops.spin_is_locked, lock);
+	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static inline int arch_spin_is_contended(struct arch_spinlock *lock)
+static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
 {
-	return PVOP_CALL1(int, pv_lock_ops.spin_is_contended, lock);
-}
-#define arch_spin_is_contended	arch_spin_is_contended
-
-static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
-{
-	PVOP_VCALL1(pv_lock_ops.spin_lock, lock);
-}
-
-static __always_inline void arch_spin_lock_flags(struct arch_spinlock *lock,
-						  unsigned long flags)
-{
-	PVOP_VCALL2(pv_lock_ops.spin_lock_flags, lock, flags);
-}
-
-static __always_inline int arch_spin_trylock(struct arch_spinlock *lock)
-{
-	return PVOP_CALL1(int, pv_lock_ops.spin_trylock, lock);
-}
-
-static __always_inline void arch_spin_unlock(struct arch_spinlock *lock)
-{
-	PVOP_VCALL1(pv_lock_ops.spin_unlock, lock);
+	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
 
 #endif
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 8e8b9a4..005e24d 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -327,13 +327,11 @@ struct pv_mmu_ops {
 };
 
 struct arch_spinlock;
+#include <asm/spinlock_types.h>
+
 struct pv_lock_ops {
-	int (*spin_is_locked)(struct arch_spinlock *lock);
-	int (*spin_is_contended)(struct arch_spinlock *lock);
-	void (*spin_lock)(struct arch_spinlock *lock);
-	void (*spin_lock_flags)(struct arch_spinlock *lock, unsigned long flags);
-	int (*spin_trylock)(struct arch_spinlock *lock);
-	void (*spin_unlock)(struct arch_spinlock *lock);
+	void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
+	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 972c260..860fc4b 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -37,6 +37,32 @@
 # define UNLOCK_LOCK_PREFIX
 #endif
 
+/* How long a lock should spin before we consider blocking */
+#define SPIN_THRESHOLD	(1 << 11)
+
+#ifndef CONFIG_PARAVIRT_SPINLOCKS
+
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, __ticket_t ticket)
+{
+}
+
+static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
+{
+}
+
+#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
+
+
+/* 
+ * If a spinlock has someone waiting on it, then kick the appropriate
+ * waiting cpu.
+ */
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
+{
+	if (unlikely(lock->tickets.tail != next))
+		____ticket_unlock_kick(lock, next);
+}
+
 /*
  * Ticket locks are conceptually two parts, one indicating the current head of
  * the queue, and the other indicating the current tail. The lock is acquired
@@ -50,19 +76,24 @@
  * in the high part, because a wide xadd increment of the low part would carry
  * up and contaminate the high part.
  */
-static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
+static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 {
 	register struct __raw_tickets inc = { .tail = 1 };
 
 	inc = xadd(&lock->tickets, inc);
 
 	for (;;) {
-		if (inc.head == inc.tail)
-			break;
-		cpu_relax();
-		inc.head = ACCESS_ONCE(lock->tickets.head);
+		unsigned count = SPIN_THRESHOLD;
+
+		do {
+			if (inc.head == inc.tail)
+				goto out;
+			cpu_relax();
+			inc.head = ACCESS_ONCE(lock->tickets.head);
+		} while (--count);
+		__ticket_lock_spinning(lock, inc.tail);
 	}
-	barrier();		/* make sure nothing creeps before the lock is taken */
+out:	barrier();		/* make sure nothing creeps before the lock is taken */
 }
 
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
@@ -80,7 +111,7 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 }
 
 #if (NR_CPUS < 256)
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
+static __always_inline void __ticket_unlock_release(arch_spinlock_t *lock)
 {
 	asm volatile(UNLOCK_LOCK_PREFIX "incb %0"
 		     : "+m" (lock->head_tail)
@@ -88,7 +119,7 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 		     : "memory", "cc");
 }
 #else
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
+static __always_inline void __ticket_unlock_release(arch_spinlock_t *lock)
 {
 	asm volatile(UNLOCK_LOCK_PREFIX "incw %0"
 		     : "+m" (lock->head_tail)
@@ -97,6 +128,14 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 }
 #endif
 
+static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
+{
+	__ticket_t next = lock->tickets.head + 1;
+
+	__ticket_unlock_release(lock);
+	__ticket_unlock_kick(lock, next);
+}
+
 static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
@@ -111,8 +150,6 @@ static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
 	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
 }
 
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
-
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
 	return __ticket_spin_is_locked(lock);
@@ -145,8 +182,6 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 	arch_spin_lock(lock);
 }
 
-#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
-
 static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
 	while (arch_spin_is_locked(lock))
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 8ebd5df..dbe223d 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -1,10 +1,6 @@
 #ifndef _ASM_X86_SPINLOCK_TYPES_H
 #define _ASM_X86_SPINLOCK_TYPES_H
 
-#ifndef __LINUX_SPINLOCK_TYPES_H
-# error "please don't include this file directly"
-#endif
-
 #include <linux/types.h>
 
 #if (CONFIG_NR_CPUS < 256)
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 676b8c7..c2e010e 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -7,21 +7,10 @@
 
 #include <asm/paravirt.h>
 
-static inline void
-default_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
-{
-	arch_spin_lock(lock);
-}
-
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-	.spin_is_locked = __ticket_spin_is_locked,
-	.spin_is_contended = __ticket_spin_is_contended,
-
-	.spin_lock = __ticket_spin_lock,
-	.spin_lock_flags = default_spin_lock_flags,
-	.spin_trylock = __ticket_spin_trylock,
-	.spin_unlock = __ticket_spin_unlock,
+	.lock_spinning = paravirt_nop,
+	.unlock_kick = paravirt_nop,
 #endif
 };
 EXPORT_SYMBOL(pv_lock_ops);
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index cc9b1e1..23af06a 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -121,6 +121,9 @@ struct xen_spinlock {
 	unsigned short spinners;	/* count of waiting cpus */
 };
 
+static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+
+#if 0
 static int xen_spin_is_locked(struct arch_spinlock *lock)
 {
 	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
@@ -148,7 +151,6 @@ static int xen_spin_trylock(struct arch_spinlock *lock)
 	return old == 0;
 }
 
-static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
 static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
 
 /*
@@ -338,6 +340,7 @@ static void xen_spin_unlock(struct arch_spinlock *lock)
 	if (unlikely(xl->spinners))
 		xen_spin_unlock_slow(xl);
 }
+#endif
 
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
@@ -373,12 +376,14 @@ void xen_uninit_lock_cpu(int cpu)
 
 void __init xen_init_spinlocks(void)
 {
+#if 0
 	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
 	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
 	pv_lock_ops.spin_lock = xen_spin_lock;
 	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
 	pv_lock_ops.spin_trylock = xen_spin_trylock;
 	pv_lock_ops.spin_unlock = xen_spin_unlock;
+#endif
 }
 
 #ifdef CONFIG_XEN_DEBUG_FS
-- 
1.7.6



* [PATCH 03/10] x86/ticketlock: don't inline _spin_unlock when using paravirt spinlocks
From: Jeremy Fitzhardinge @ 2011-09-15  0:31 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

The code size expands somewhat, and it's probably better to just call
a function rather than inline it.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/Kconfig     |    3 +++
 kernel/Kconfig.locks |    2 +-
 2 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 6a47bb2..1f03f82 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -585,6 +585,9 @@ config PARAVIRT_SPINLOCKS
 
 	  If you are unsure how to answer this question, answer N.
 
+config ARCH_NOINLINE_SPIN_UNLOCK
+       def_bool PARAVIRT_SPINLOCKS
+
 config PARAVIRT_CLOCK
 	bool
 
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index 5068e2a..584637b 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -125,7 +125,7 @@ config INLINE_SPIN_LOCK_IRQSAVE
 		 ARCH_INLINE_SPIN_LOCK_IRQSAVE
 
 config INLINE_SPIN_UNLOCK
-	def_bool !DEBUG_SPINLOCK && (!PREEMPT || ARCH_INLINE_SPIN_UNLOCK)
+	def_bool !DEBUG_SPINLOCK && (!PREEMPT || ARCH_INLINE_SPIN_UNLOCK) && !ARCH_NOINLINE_SPIN_UNLOCK
 
 config INLINE_SPIN_UNLOCK_BH
 	def_bool !DEBUG_SPINLOCK && ARCH_INLINE_SPIN_UNLOCK_BH
-- 
1.7.6



* [PATCH 04/10] x86/ticketlock: collapse a layer of functions
From: Jeremy Fitzhardinge @ 2011-09-15  0:31 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Now that the paravirtualization layer doesn't exist at the spinlock
level any more, we can collapse the __ticket_ functions into the arch_
functions.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h |   35 +++++------------------------------
 1 files changed, 5 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 860fc4b..98fe202 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -76,7 +76,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __t
  * in the high part, because a wide xadd increment of the low part would carry
  * up and contaminate the high part.
  */
-static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
+static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
 	register struct __raw_tickets inc = { .tail = 1 };
 
@@ -96,7 +96,7 @@ static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 out:	barrier();		/* make sure nothing creeps before the lock is taken */
 }
 
-static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
+static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 {
 	arch_spinlock_t old, new;
 
@@ -128,7 +128,7 @@ static __always_inline void __ticket_unlock_release(arch_spinlock_t *lock)
 }
 #endif
 
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
+static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 	__ticket_t next = lock->tickets.head + 1;
 
@@ -136,46 +136,21 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 	__ticket_unlock_kick(lock, next);
 }
 
-static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
+static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
 	return !!(tmp.tail ^ tmp.head);
 }
 
-static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
+static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
 	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
 }
-
-static inline int arch_spin_is_locked(arch_spinlock_t *lock)
-{
-	return __ticket_spin_is_locked(lock);
-}
-
-static inline int arch_spin_is_contended(arch_spinlock_t *lock)
-{
-	return __ticket_spin_is_contended(lock);
-}
 #define arch_spin_is_contended	arch_spin_is_contended
 
-static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-	__ticket_spin_lock(lock);
-}
-
-static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
-{
-	return __ticket_spin_trylock(lock);
-}
-
-static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-	__ticket_spin_unlock(lock);
-}
-
 static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 						  unsigned long flags)
 {
-- 
1.7.6



* [PATCH 05/10] xen/pvticketlock: Xen implementation for PV ticket locks
From: Jeremy Fitzhardinge @ 2011-09-15  0:31 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Replace the old Xen implementation of PV spinlocks with an
implementation of xen_lock_spinning and xen_unlock_kick.

xen_lock_spinning simply records the lock and the ticket it wants in
this cpu's lock_waiting entry, adds the cpu to the waiting_cpus set,
and blocks on an event channel until the channel becomes pending.

xen_unlock_kick searches the cpus in waiting_cpus looking for the one
which is waiting for this lock with the next ticket, if any.  If found,
it kicks that cpu by making its event channel pending, which wakes it up.

We need to make sure interrupts are disabled while we're relying on the
contents of the per-cpu lock_waiting values, otherwise an interrupt
handler could come in, try to take some other lock, block, and overwrite
our values.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/xen/spinlock.c |  287 +++++++----------------------------------------
 1 files changed, 43 insertions(+), 244 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 23af06a..f6133c5 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -19,32 +19,21 @@
 #ifdef CONFIG_XEN_DEBUG_FS
 static struct xen_spinlock_stats
 {
-	u64 taken;
 	u32 taken_slow;
-	u32 taken_slow_nested;
 	u32 taken_slow_pickup;
 	u32 taken_slow_spurious;
-	u32 taken_slow_irqenable;
 
-	u64 released;
 	u32 released_slow;
 	u32 released_slow_kicked;
 
 #define HISTO_BUCKETS	30
-	u32 histo_spin_total[HISTO_BUCKETS+1];
-	u32 histo_spin_spinning[HISTO_BUCKETS+1];
 	u32 histo_spin_blocked[HISTO_BUCKETS+1];
 
-	u64 time_total;
-	u64 time_spinning;
 	u64 time_blocked;
 } spinlock_stats;
 
 static u8 zero_stats;
 
-static unsigned lock_timeout = 1 << 10;
-#define TIMEOUT lock_timeout
-
 static inline void check_zero(void)
 {
 	if (unlikely(zero_stats)) {
@@ -73,22 +62,6 @@ static void __spin_time_accum(u64 delta, u32 *array)
 		array[HISTO_BUCKETS]++;
 }
 
-static inline void spin_time_accum_spinning(u64 start)
-{
-	u32 delta = xen_clocksource_read() - start;
-
-	__spin_time_accum(delta, spinlock_stats.histo_spin_spinning);
-	spinlock_stats.time_spinning += delta;
-}
-
-static inline void spin_time_accum_total(u64 start)
-{
-	u32 delta = xen_clocksource_read() - start;
-
-	__spin_time_accum(delta, spinlock_stats.histo_spin_total);
-	spinlock_stats.time_total += delta;
-}
-
 static inline void spin_time_accum_blocked(u64 start)
 {
 	u32 delta = xen_clocksource_read() - start;
@@ -105,214 +78,84 @@ static inline u64 spin_time_start(void)
 	return 0;
 }
 
-static inline void spin_time_accum_total(u64 start)
-{
-}
-static inline void spin_time_accum_spinning(u64 start)
-{
-}
 static inline void spin_time_accum_blocked(u64 start)
 {
 }
 #endif  /* CONFIG_XEN_DEBUG_FS */
 
-struct xen_spinlock {
-	unsigned char lock;		/* 0 -> free; 1 -> locked */
-	unsigned short spinners;	/* count of waiting cpus */
+struct xen_lock_waiting {
+	struct arch_spinlock *lock;
+	__ticket_t want;
 };
 
 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
+static cpumask_t waiting_cpus;
 
-#if 0
-static int xen_spin_is_locked(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	return xl->lock != 0;
-}
-
-static int xen_spin_is_contended(struct arch_spinlock *lock)
+static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 {
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	/* Not strictly true; this is only the count of contended
-	   lock-takers entering the slow path. */
-	return xl->spinners != 0;
-}
-
-static int xen_spin_trylock(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	u8 old = 1;
-
-	asm("xchgb %b0,%1"
-	    : "+q" (old), "+m" (xl->lock) : : "memory");
-
-	return old == 0;
-}
-
-static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
-
-/*
- * Mark a cpu as interested in a lock.  Returns the CPU's previous
- * lock of interest, in case we got preempted by an interrupt.
- */
-static inline struct xen_spinlock *spinning_lock(struct xen_spinlock *xl)
-{
-	struct xen_spinlock *prev;
-
-	prev = __this_cpu_read(lock_spinners);
-	__this_cpu_write(lock_spinners, xl);
-
-	wmb();			/* set lock of interest before count */
-
-	asm(LOCK_PREFIX " incw %0"
-	    : "+m" (xl->spinners) : : "memory");
-
-	return prev;
-}
-
-/*
- * Mark a cpu as no longer interested in a lock.  Restores previous
- * lock of interest (NULL for none).
- */
-static inline void unspinning_lock(struct xen_spinlock *xl, struct xen_spinlock *prev)
-{
-	asm(LOCK_PREFIX " decw %0"
-	    : "+m" (xl->spinners) : : "memory");
-	wmb();			/* decrement count before restoring lock */
-	__this_cpu_write(lock_spinners, prev);
-}
-
-static noinline int xen_spin_lock_slow(struct arch_spinlock *lock, bool irq_enable)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	struct xen_spinlock *prev;
 	int irq = __this_cpu_read(lock_kicker_irq);
-	int ret;
+	struct xen_lock_waiting *w = &__get_cpu_var(lock_waiting);
+	int cpu = smp_processor_id();
 	u64 start;
+	unsigned long flags;
 
 	/* If kicker interrupts not initialized yet, just spin */
 	if (irq == -1)
-		return 0;
+		return;
 
 	start = spin_time_start();
 
-	/* announce we're spinning */
-	prev = spinning_lock(xl);
+	/* Make sure interrupts are disabled to ensure that these
+	   per-cpu values are not overwritten. */
+	local_irq_save(flags);
+
+	w->want = want;
+	w->lock = lock;
+
+	/* This uses set_bit, which is atomic and therefore a barrier */
+	cpumask_set_cpu(cpu, &waiting_cpus);
 
 	ADD_STATS(taken_slow, 1);
-	ADD_STATS(taken_slow_nested, prev != NULL);
-
-	do {
-		unsigned long flags;
-
-		/* clear pending */
-		xen_clear_irq_pending(irq);
-
-		/* check again make sure it didn't become free while
-		   we weren't looking  */
-		ret = xen_spin_trylock(lock);
-		if (ret) {
-			ADD_STATS(taken_slow_pickup, 1);
-
-			/*
-			 * If we interrupted another spinlock while it
-			 * was blocking, make sure it doesn't block
-			 * without rechecking the lock.
-			 */
-			if (prev != NULL)
-				xen_set_irq_pending(irq);
-			goto out;
-		}
 
-		flags = arch_local_save_flags();
-		if (irq_enable) {
-			ADD_STATS(taken_slow_irqenable, 1);
-			raw_local_irq_enable();
-		}
+	/* clear pending */
+	xen_clear_irq_pending(irq);
 
-		/*
-		 * Block until irq becomes pending.  If we're
-		 * interrupted at this point (after the trylock but
-		 * before entering the block), then the nested lock
-		 * handler guarantees that the irq will be left
-		 * pending if there's any chance the lock became free;
-		 * xen_poll_irq() returns immediately if the irq is
-		 * pending.
-		 */
-		xen_poll_irq(irq);
+	/* Only check lock once pending cleared */
+	barrier();
 
-		raw_local_irq_restore(flags);
+	/* check again make sure it didn't become free while
+	   we weren't looking  */
+	if (ACCESS_ONCE(lock->tickets.head) == want) {
+		ADD_STATS(taken_slow_pickup, 1);
+		goto out;
+	}
 
-		ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
-	} while (!xen_test_irq_pending(irq)); /* check for spurious wakeups */
+	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
+	xen_poll_irq(irq);
+	ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
 
 	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
-	unspinning_lock(xl, prev);
-	spin_time_accum_blocked(start);
-
-	return ret;
-}
-
-static inline void __xen_spin_lock(struct arch_spinlock *lock, bool irq_enable)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	unsigned timeout;
-	u8 oldval;
-	u64 start_spin;
-
-	ADD_STATS(taken, 1);
-
-	start_spin = spin_time_start();
-
-	do {
-		u64 start_spin_fast = spin_time_start();
-
-		timeout = TIMEOUT;
+	cpumask_clear_cpu(cpu, &waiting_cpus);
+	w->lock = NULL;
 
-		asm("1: xchgb %1,%0\n"
-		    "   testb %1,%1\n"
-		    "   jz 3f\n"
-		    "2: rep;nop\n"
-		    "   cmpb $0,%0\n"
-		    "   je 1b\n"
-		    "   dec %2\n"
-		    "   jnz 2b\n"
-		    "3:\n"
-		    : "+m" (xl->lock), "=q" (oldval), "+r" (timeout)
-		    : "1" (1)
-		    : "memory");
+	local_irq_restore(flags);
 
-		spin_time_accum_spinning(start_spin_fast);
-
-	} while (unlikely(oldval != 0 &&
-			  (TIMEOUT == ~0 || !xen_spin_lock_slow(lock, irq_enable))));
-
-	spin_time_accum_total(start_spin);
-}
-
-static void xen_spin_lock(struct arch_spinlock *lock)
-{
-	__xen_spin_lock(lock, false);
-}
-
-static void xen_spin_lock_flags(struct arch_spinlock *lock, unsigned long flags)
-{
-	__xen_spin_lock(lock, !raw_irqs_disabled_flags(flags));
+	spin_time_accum_blocked(start);
 }
 
-static noinline void xen_spin_unlock_slow(struct xen_spinlock *xl)
+static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
 	int cpu;
 
 	ADD_STATS(released_slow, 1);
 
-	for_each_online_cpu(cpu) {
-		/* XXX should mix up next cpu selection */
-		if (per_cpu(lock_spinners, cpu) == xl) {
+	for_each_cpu(cpu, &waiting_cpus) {
+		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
+
+		if (w->lock == lock && w->want == next) {
 			ADD_STATS(released_slow_kicked, 1);
 			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
 			break;
@@ -320,28 +163,6 @@ static noinline void xen_spin_unlock_slow(struct xen_spinlock *xl)
 	}
 }
 
-static void xen_spin_unlock(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	ADD_STATS(released, 1);
-
-	smp_wmb();		/* make sure no writes get moved after unlock */
-	xl->lock = 0;		/* release lock */
-
-	/*
-	 * Make sure unlock happens before checking for waiting
-	 * spinners.  We need a strong barrier to enforce the
-	 * write-read ordering to different memory locations, as the
-	 * CPU makes no implied guarantees about their ordering.
-	 */
-	mb();
-
-	if (unlikely(xl->spinners))
-		xen_spin_unlock_slow(xl);
-}
-#endif
-
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
 	BUG();
@@ -376,14 +197,8 @@ void xen_uninit_lock_cpu(int cpu)
 
 void __init xen_init_spinlocks(void)
 {
-#if 0
-	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
-	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
-	pv_lock_ops.spin_lock = xen_spin_lock;
-	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
-	pv_lock_ops.spin_trylock = xen_spin_trylock;
-	pv_lock_ops.spin_unlock = xen_spin_unlock;
-#endif
+	pv_lock_ops.lock_spinning = xen_lock_spinning;
+	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
 #ifdef CONFIG_XEN_DEBUG_FS
@@ -401,37 +216,21 @@ static int __init xen_spinlock_debugfs(void)
 
 	debugfs_create_u8("zero_stats", 0644, d_spin_debug, &zero_stats);
 
-	debugfs_create_u32("timeout", 0644, d_spin_debug, &lock_timeout);
-
-	debugfs_create_u64("taken", 0444, d_spin_debug, &spinlock_stats.taken);
 	debugfs_create_u32("taken_slow", 0444, d_spin_debug,
 			   &spinlock_stats.taken_slow);
-	debugfs_create_u32("taken_slow_nested", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_nested);
 	debugfs_create_u32("taken_slow_pickup", 0444, d_spin_debug,
 			   &spinlock_stats.taken_slow_pickup);
 	debugfs_create_u32("taken_slow_spurious", 0444, d_spin_debug,
 			   &spinlock_stats.taken_slow_spurious);
-	debugfs_create_u32("taken_slow_irqenable", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_irqenable);
 
-	debugfs_create_u64("released", 0444, d_spin_debug, &spinlock_stats.released);
 	debugfs_create_u32("released_slow", 0444, d_spin_debug,
 			   &spinlock_stats.released_slow);
 	debugfs_create_u32("released_slow_kicked", 0444, d_spin_debug,
 			   &spinlock_stats.released_slow_kicked);
 
-	debugfs_create_u64("time_spinning", 0444, d_spin_debug,
-			   &spinlock_stats.time_spinning);
 	debugfs_create_u64("time_blocked", 0444, d_spin_debug,
 			   &spinlock_stats.time_blocked);
-	debugfs_create_u64("time_total", 0444, d_spin_debug,
-			   &spinlock_stats.time_total);
 
-	xen_debugfs_create_u32_array("histo_total", 0444, d_spin_debug,
-				     spinlock_stats.histo_spin_total, HISTO_BUCKETS + 1);
-	xen_debugfs_create_u32_array("histo_spinning", 0444, d_spin_debug,
-				     spinlock_stats.histo_spin_spinning, HISTO_BUCKETS + 1);
 	xen_debugfs_create_u32_array("histo_blocked", 0444, d_spin_debug,
 				     spinlock_stats.histo_spin_blocked, HISTO_BUCKETS + 1);
 
-- 
1.7.6



* [PATCH 06/10] x86/pvticketlock: use callee-save for lock_spinning
From: Jeremy Fitzhardinge @ 2011-09-15  0:31 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Although the lock_spinning calls in the spinlock code are on the
uncommon path, their presence can cause the compiler to generate many
more register save/restores in the function pre/postamble, which is in
the fast path.  To avoid this, convert it to use the pvops callee-save
calling convention, which defers all the save/restores until the actual
function is called, keeping the fastpath clean.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/paravirt.h       |    2 +-
 arch/x86/include/asm/paravirt_types.h |    2 +-
 arch/x86/kernel/paravirt-spinlocks.c  |    2 +-
 arch/x86/xen/spinlock.c               |    3 ++-
 4 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 76cae7a..50281c7 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -752,7 +752,7 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 
 static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, __ticket_t ticket)
 {
-	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
+	PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
 static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 005e24d..5e0c138 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -330,7 +330,7 @@ struct arch_spinlock;
 #include <asm/spinlock_types.h>
 
 struct pv_lock_ops {
-	void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
+	struct paravirt_callee_save lock_spinning;
 	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
 };
 
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index c2e010e..4251c1d 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -9,7 +9,7 @@
 
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-	.lock_spinning = paravirt_nop,
+	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
 	.unlock_kick = paravirt_nop,
 #endif
 };
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index f6133c5..7a04950 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -145,6 +145,7 @@ out:
 
 	spin_time_accum_blocked(start);
 }
+PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
 
 static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
@@ -197,7 +198,7 @@ void xen_uninit_lock_cpu(int cpu)
 
 void __init xen_init_spinlocks(void)
 {
-	pv_lock_ops.lock_spinning = xen_lock_spinning;
+	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
 	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
-- 
1.7.6



* [PATCH 07/10] x86/ticketlocks: when paravirtualizing ticket locks, increment by 2
From: Jeremy Fitzhardinge @ 2011-09-15  0:31 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Increment ticket head/tail by 2 rather than 1 to leave the LSB free
to store an "is in slowpath state" bit.  This halves the number
of possible CPUs for a given ticket size, but this shouldn't matter
in practice - kernels built for 32k+ CPU systems are probably
specially built for the hardware rather than being a generic distro
kernel.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h       |   16 ++++++++--------
 arch/x86/include/asm/spinlock_types.h |   10 +++++++++-
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 98fe202..40c90aa 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -78,7 +78,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __t
  */
 static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
-	register struct __raw_tickets inc = { .tail = 1 };
+	register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
 
 	inc = xadd(&lock->tickets, inc);
 
@@ -104,7 +104,7 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 	if (old.tickets.head != old.tickets.tail)
 		return 0;
 
-	new.head_tail = old.head_tail + (1 << TICKET_SHIFT);
+	new.head_tail = old.head_tail + (TICKET_LOCK_INC << TICKET_SHIFT);
 
 	/* cmpxchg is a full barrier, so nothing can move before it */
 	return cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) == old.head_tail;
@@ -113,24 +113,24 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 #if (NR_CPUS < 256)
 static __always_inline void __ticket_unlock_release(arch_spinlock_t *lock)
 {
-	asm volatile(UNLOCK_LOCK_PREFIX "incb %0"
+	asm volatile(UNLOCK_LOCK_PREFIX "addb %1, %0"
 		     : "+m" (lock->head_tail)
-		     :
+		     : "i" (TICKET_LOCK_INC)
 		     : "memory", "cc");
 }
 #else
 static __always_inline void __ticket_unlock_release(arch_spinlock_t *lock)
 {
-	asm volatile(UNLOCK_LOCK_PREFIX "incw %0"
+	asm volatile(UNLOCK_LOCK_PREFIX "addw %1, %0"
 		     : "+m" (lock->head_tail)
-		     :
+		     : "i" (TICKET_LOCK_INC)
 		     : "memory", "cc");
 }
 #endif
 
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-	__ticket_t next = lock->tickets.head + 1;
+	__ticket_t next = lock->tickets.head + TICKET_LOCK_INC;
 
 	__ticket_unlock_release(lock);
 	__ticket_unlock_kick(lock, next);
@@ -147,7 +147,7 @@ static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
-	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
+	return ((tmp.tail - tmp.head) & TICKET_MASK) > TICKET_LOCK_INC;
 }
 #define arch_spin_is_contended	arch_spin_is_contended
 
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index dbe223d..aa9a205 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -3,7 +3,13 @@
 
 #include <linux/types.h>
 
-#if (CONFIG_NR_CPUS < 256)
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#define __TICKET_LOCK_INC	2
+#else
+#define __TICKET_LOCK_INC	1
+#endif
+
+#if (CONFIG_NR_CPUS < (256 / __TICKET_LOCK_INC))
 typedef u8  __ticket_t;
 typedef u16 __ticketpair_t;
 #else
@@ -11,6 +17,8 @@ typedef u16 __ticket_t;
 typedef u32 __ticketpair_t;
 #endif
 
+#define TICKET_LOCK_INC	((__ticket_t)__TICKET_LOCK_INC)
+
 #define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
 #define TICKET_MASK	((__ticket_t)((1 << TICKET_SHIFT) - 1))
 
-- 
1.7.6




* [PATCH 08/10] x86/ticketlock: add slowpath logic
From: Jeremy Fitzhardinge @ 2011-09-15  0:31 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, Srivatsa Vaddagiri

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Maintain a flag in the LSB of the ticket lock tail which indicates
whether anyone is in the lock slowpath and may need kicking when
the current holder unlocks.  The flag is set when the first locker
enters the slowpath, and cleared when unlocking to an empty queue (ie,
no contention).

In the specific implementation of lock_spinning(), make sure to set
the slowpath flag on the lock just before blocking.  We must do
this before the last-chance pickup test to prevent a deadlock
with the unlocker:

Unlocker			Locker
				test for lock pickup
					-> fail
unlock
test slowpath
	-> false
				set slowpath flags
				block

Whereas this works in any ordering:

Unlocker			Locker
				set slowpath flags
				test for lock pickup
					-> fail
				block
unlock
test slowpath
	-> true, kick
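
In xen_lock_spinning(), that ordering ends up looking roughly like this
(just a sketch; the relevant hunk is in the xen/spinlock.c part of this
patch):

	/* Mark entry to the slowpath *before* the last-chance test,
	   so the unlocker cannot miss us. */
	__ticket_enter_slowpath(lock);

	/* Check again in case the lock became free while we weren't
	   looking. */
	if (ACCESS_ONCE(lock->tickets.head) == want)
		goto out;	/* picked it up, don't block */

	/* Block until kicked by the unlocker. */
	xen_poll_irq(irq);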

If the unlocker finds that the lock has the slowpath flag set but it is
actually uncontended (ie, head == tail, so nobody is waiting), then it
clears the slowpath flag.

Note on memory access ordering:
When unlocking a ticketlock with PV callbacks enabled, unlock
first "add"s to the lock head, then checks to see if the slowpath
flag is set in the lock tail.

However, because reads are not ordered with respect to writes in
different memory locations, the CPU could perform the read before
updating head to release the lock.

This would deadlock with another CPU in the lock slowpath, as it will
set the slowpath flag before checking to see if the lock has been
released in the interim.

A heavyweight fix would be to stick a full mfence between the two.
However, a lighter-weight fix is to simply make sure the flag test
loads both head and tail of the lock in a single operation, thereby
making sure that it overlaps with the memory written by the unlock,
forcing the CPU to maintain ordering.

Note: this code relies on gcc making sure that unlikely() code is out
of line from the fastpath, which only happens when OPTIMIZE_SIZE=n.  If
it doesn't, the generated code isn't too bad, but it's definitely
suboptimal.

(Thanks to Srivatsa Vaddagiri for providing a bugfix to the original
version of this change, which has been folded in.)

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
---
 arch/x86/include/asm/paravirt.h       |    2 +-
 arch/x86/include/asm/spinlock.h       |   92 ++++++++++++++++++++++++++------
 arch/x86/include/asm/spinlock_types.h |    2 +
 arch/x86/kernel/paravirt-spinlocks.c  |    1 +
 arch/x86/xen/spinlock.c               |    4 ++
 5 files changed, 82 insertions(+), 19 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 50281c7..13b3d8b 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -755,7 +755,7 @@ static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, _
 	PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
 {
 	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 40c90aa..c1f6981 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -40,29 +40,56 @@
 /* How long a lock should spin before we consider blocking */
 #define SPIN_THRESHOLD	(1 << 11)
 
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
 
-static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, __ticket_t ticket)
+/*
+ * Return true if someone is in the slowpath on this lock.  This
+ * should only be used by the current lock-holder.
+ */
+static inline bool __ticket_in_slowpath(arch_spinlock_t *lock)
 {
+	/*
+	 * This deliberately reads both head and tail as a single
+	 * memory operation, and then tests the flag in tail.  This is
+	 * to guarantee that this read is ordered after the "add" to
+	 * head which does the unlock.  If we were to only read "tail"
+	 * to test the flag, then the CPU would be free to reorder the
+	 * read to before the write to "head" (since it is a different
+	 * memory location), which could cause a deadlock with someone
+	 * setting the flag before re-checking the lock availability.
+	 */
+	return ACCESS_ONCE(lock->head_tail) & (TICKET_SLOWPATH_FLAG << TICKET_SHIFT);
 }
 
-static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
+static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
 {
+	if (sizeof(lock->tickets.tail) == sizeof(u8))
+		asm (LOCK_PREFIX "orb %1, %0"
+		     : "+m" (lock->tickets.tail)
+		     : "i" (TICKET_SLOWPATH_FLAG) : "memory");
+	else
+		asm (LOCK_PREFIX "orw %1, %0"
+		     : "+m" (lock->tickets.tail)
+		     : "i" (TICKET_SLOWPATH_FLAG) : "memory");
 }
 
-#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
+#else  /* !CONFIG_PARAVIRT_SPINLOCKS */
+static inline bool __ticket_in_slowpath(arch_spinlock_t *lock)
+{
+	return false;
+}
 
+static __always_inline void __ticket_lock_spinning(arch_spinlock_t *lock, __ticket_t ticket)
+{
+}
 
-/* 
- * If a spinlock has someone waiting on it, then kick the appropriate
- * waiting cpu.
- */
-static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
+static inline void __ticket_unlock_kick(arch_spinlock_t *lock, __ticket_t ticket)
 {
-	if (unlikely(lock->tickets.tail != next))
-		____ticket_unlock_kick(lock, next);
 }
 
+#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
+
+
 /*
  * Ticket locks are conceptually two parts, one indicating the current head of
  * the queue, and the other indicating the current tail. The lock is acquired
@@ -76,20 +103,22 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __t
  * in the high part, because a wide xadd increment of the low part would carry
  * up and contaminate the high part.
  */
-static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
+static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
 {
 	register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
 
 	inc = xadd(&lock->tickets, inc);
+	if (likely(inc.head == inc.tail))
+		goto out;
 
+	inc.tail &= ~TICKET_SLOWPATH_FLAG;
 	for (;;) {
 		unsigned count = SPIN_THRESHOLD;
 
 		do {
-			if (inc.head == inc.tail)
+			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
 				goto out;
 			cpu_relax();
-			inc.head = ACCESS_ONCE(lock->tickets.head);
 		} while (--count);
 		__ticket_lock_spinning(lock, inc.tail);
 	}
@@ -101,7 +130,7 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 	arch_spinlock_t old, new;
 
 	old.tickets = ACCESS_ONCE(lock->tickets);
-	if (old.tickets.head != old.tickets.tail)
+	if (old.tickets.head != (old.tickets.tail & ~TICKET_SLOWPATH_FLAG))
 		return 0;
 
 	new.head_tail = old.head_tail + (TICKET_LOCK_INC << TICKET_SHIFT);
@@ -128,12 +157,39 @@ static __always_inline void __ticket_unlock_release(arch_spinlock_t *lock)
 }
 #endif
 
-static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
+static inline void __ticket_unlock_slowpath(arch_spinlock_t *lock,
+					    arch_spinlock_t old)
 {
-	__ticket_t next = lock->tickets.head + TICKET_LOCK_INC;
+	arch_spinlock_t new;
+
+	BUILD_BUG_ON(((__ticket_t)NR_CPUS) != NR_CPUS);
+
+	/* Perform the unlock on the "before" copy */
+	old.tickets.head += TICKET_LOCK_INC;
+
+	/* Clear the slowpath flag */
+	new.head_tail = old.head_tail & ~(TICKET_SLOWPATH_FLAG << TICKET_SHIFT);
+
+	/*
+	 * If the lock is uncontended, clear the flag - use cmpxchg in
+	 * case it changes behind our back though.
+	 */
+	if (new.tickets.head != new.tickets.tail ||
+	    cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) != old.head_tail) {
+		/*
+		 * Lock still has someone queued for it, so wake up an
+		 * appropriate waiter.
+		 */
+		__ticket_unlock_kick(lock, old.tickets.head);
+	}
+}
 
+static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
+{
+	arch_spinlock_t prev = *lock;
 	__ticket_unlock_release(lock);
-	__ticket_unlock_kick(lock, next);
+	if (unlikely(__ticket_in_slowpath(lock)))
+		__ticket_unlock_slowpath(lock, prev);
 }
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index aa9a205..407f7f7 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -5,8 +5,10 @@
 
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 #define __TICKET_LOCK_INC	2
+#define TICKET_SLOWPATH_FLAG	((__ticket_t)1)
 #else
 #define __TICKET_LOCK_INC	1
+#define TICKET_SLOWPATH_FLAG	((__ticket_t)0)
 #endif
 
 #if (CONFIG_NR_CPUS < (256 / __TICKET_LOCK_INC))
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 4251c1d..0883c48 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -15,3 +15,4 @@ struct pv_lock_ops pv_lock_ops = {
 };
 EXPORT_SYMBOL(pv_lock_ops);
 
+
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 7a04950..c939723 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -124,6 +124,10 @@ static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	/* Only check lock once pending cleared */
 	barrier();
 
+	/* Mark entry to slowpath before doing the pickup test to make
+	   sure we don't deadlock with an unlocker. */
+	__ticket_enter_slowpath(lock);
+
 	/* check again make sure it didn't become free while
 	   we weren't looking  */
 	if (ACCESS_ONCE(lock->tickets.head) == want) {
-- 
1.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 09/10] xen/pvticketlock: allow interrupts to be enabled while blocking
  2011-09-15  0:31 [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (7 preceding siblings ...)
  2011-09-15  0:31 ` [PATCH 08/10] x86/ticketlock: add slowpath logic Jeremy Fitzhardinge
@ 2011-09-15  0:31 ` Jeremy Fitzhardinge
  2011-09-15  0:31 ` [PATCH 10/10] xen: enable PV ticketlocks on HVM Xen Jeremy Fitzhardinge
  2011-09-27  9:34   ` Stephan Diestelhorst
  10 siblings, 0 replies; 42+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-15  0:31 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

If interrupts were enabled when taking the spinlock, we can leave them
enabled while blocking to get the lock.

If we can enable interrupts while waiting for the lock to become
available, then an interrupt may arrive before we enter the poll, and
its handler may take a spinlock which ends up going into the slow
state (invalidating the per-cpu "lock" and "want" values).  When that
handler returns, the event channel will still be pending, so the poll
returns immediately and we fall back out to the main spinlock loop.
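
The (lock, want) publication protocol this relies on, in outline (the
full version is in the diff below; kick() is shorthand for sending the
wakeup IPI with xen_send_IPI_one()):

	/* Writer: CPU about to block; never expose a stale pair */
	w->lock = NULL;		/* invalidate before touching want */
	smp_wmb();
	w->want = want;
	smp_wmb();
	w->lock = lock;		/* publish only once want is correct */

	/* Reader: unlocker scanning for a CPU to wake */
	if (ACCESS_ONCE(w->lock) == lock && ACCESS_ONCE(w->want) == next)
		kick(cpu);
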
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/xen/spinlock.c |   48 ++++++++++++++++++++++++++++++++++++++++------
 1 files changed, 41 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index c939723..7366b39 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -106,11 +106,28 @@ static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 
 	start = spin_time_start();
 
-	/* Make sure interrupts are disabled to ensure that these
-	   per-cpu values are not overwritten. */
+	/*
+	 * Make sure an interrupt handler can't upset things in a
+	 * partially setup state.
+	 */
 	local_irq_save(flags);
 
+	/*
+	 * We don't really care if we're overwriting some other
+	 * (lock,want) pair, as that would mean that we're currently
+	 * in an interrupt context, and the outer context had
+	 * interrupts enabled.  That has already kicked the VCPU out
+	 * of xen_poll_irq(), so it will just return spuriously and
+	 * retry with newly setup (lock,want).
+	 *
+	 * The ordering protocol on this is that the "lock" pointer
+	 * may only be set non-NULL if the "want" ticket is correct.
+	 * If we're updating "want", we must first clear "lock".
+	 */
+	w->lock = NULL;
+	smp_wmb();
 	w->want = want;
+	smp_wmb();
 	w->lock = lock;
 
 	/* This uses set_bit, which atomic and therefore a barrier */
@@ -124,21 +141,36 @@ static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	/* Only check lock once pending cleared */
 	barrier();
 
-	/* Mark entry to slowpath before doing the pickup test to make
-	   sure we don't deadlock with an unlocker. */
+	/*
+	 * Mark entry to slowpath before doing the pickup test to make
+	 * sure we don't deadlock with an unlocker.
+	 */
 	__ticket_enter_slowpath(lock);
 
-	/* check again make sure it didn't become free while
-	   we weren't looking  */
+	/*
+	 * check again make sure it didn't become free while
+	 * we weren't looking 
+	 */
 	if (ACCESS_ONCE(lock->tickets.head) == want) {
 		ADD_STATS(taken_slow_pickup, 1);
 		goto out;
 	}
 
+	/* Allow interrupts while blocked */
+	local_irq_restore(flags);
+
+	/*
+	 * If an interrupt happens here, it will leave the wakeup irq
+	 * pending, which will cause xen_poll_irq() to return
+	 * immediately.
+	 */
+
 	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
 	xen_poll_irq(irq);
 	ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
 
+	local_irq_save(flags);
+
 	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
@@ -160,7 +192,9 @@ static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 	for_each_cpu(cpu, &waiting_cpus) {
 		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
 
-		if (w->lock == lock && w->want == next) {
+		/* Make sure we read lock before want */
+		if (ACCESS_ONCE(w->lock) == lock &&
+		    ACCESS_ONCE(w->want) == next) {
 			ADD_STATS(released_slow_kicked, 1);
 			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
 			break;
-- 
1.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 10/10] xen: enable PV ticketlocks on HVM Xen
  2011-09-15  0:31 [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (8 preceding siblings ...)
  2011-09-15  0:31 ` [PATCH 09/10] xen/pvticketlock: allow interrupts to be enabled while blocking Jeremy Fitzhardinge
@ 2011-09-15  0:31 ` Jeremy Fitzhardinge
  2011-09-27  9:34   ` Stephan Diestelhorst
  10 siblings, 0 replies; 42+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-15  0:31 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Stefano Stabellini, Jeremy Fitzhardinge

From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/xen/smp.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index e79dbb9..bf958ce 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -552,4 +552,5 @@ void __init xen_hvm_smp_init(void)
 	smp_ops.cpu_die = xen_hvm_cpu_die;
 	smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
 	smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
+	xen_init_spinlocks();
 }
-- 
1.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-09-15  0:31 [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks Jeremy Fitzhardinge
  2011-09-15  0:31 ` [PATCH 01/10] x86/ticketlocks: remove obsolete comment Jeremy Fitzhardinge
@ 2011-09-27  9:34   ` Stephan Diestelhorst
  2011-09-15  0:31 ` [PATCH 03/10] x86/ticketlock: don't inline _spin_unlock when using paravirt spinlocks Jeremy Fitzhardinge
                     ` (8 subsequent siblings)
  10 siblings, 0 replies; 42+ messages in thread
From: Stephan Diestelhorst @ 2011-09-27  9:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Jeremy Fitzhardinge, H. Peter Anvin, Marcelo Tosatti,
	Nick Piggin, KVM, Peter Zijlstra, the arch/x86 maintainers,
	Linux Kernel Mailing List, Andi Kleen, Avi Kivity,
	Jeremy Fitzhardinge, Ingo Molnar, Linus Torvalds

On Wednesday 14 September 2011, 17:31:32 Jeremy Fitzhardinge wrote:
> This series replaces the existing paravirtualized spinlock mechanism
> with a paravirtualized ticketlock mechanism.
[...] 
> The unlock code is very straightforward:
> 	prev = *lock;
> 	__ticket_unlock_release(lock);
> 	if (unlikely(__ticket_in_slowpath(lock)))
> 		__ticket_unlock_slowpath(lock, prev);
> 
> which generates:
> 	push   %rbp
> 	mov    %rsp,%rbp
> 
>     movzwl (%rdi),%esi
> 	addb   $0x2,(%rdi)
>     movzwl (%rdi),%eax
> 	testb  $0x1,%ah
> 	jne    1f
> 
> 	pop    %rbp
> 	retq   
> 
> 	### SLOWPATH START
> 1:	movzwl (%rdi),%edx
> 	movzbl %dh,%ecx
> 	mov    %edx,%eax
> 	and    $-2,%ecx	# clear TICKET_SLOWPATH_FLAG
> 	mov    %cl,%dh
> 	cmp    %dl,%cl	# test to see if lock is uncontended
> 	je     3f
> 
> 2:	movzbl %dl,%esi
> 	callq  *__ticket_unlock_kick	# kick anyone waiting
> 	pop    %rbp
> 	retq   
> 
> 3:	lock cmpxchg %dx,(%rdi)	# use cmpxchg to safely write back flag
> 	jmp    2b
> 	### SLOWPATH END
[...]
> Thoughts? Comments? Suggestions?

You have a nasty data race in your code that can cause a losing
acquirer to sleep forever, because its setting the TICKET_SLOWPATH flag
can race with the lock holder releasing the lock.

I used the code for the slow path from the GIT repo.

Let me try to point out an interleaving:

Lock is held by one thread, contains 0x0200.

_Lock holder_                   _Acquirer_
                                mov    $0x200,%eax
                                lock xadd %ax,(%rdi)
                                // ax:= 0x0200, lock:= 0x0400
                                ...
                                // this guy spins for a while, reading
                                // the lock
                                ...
//trying to free the lock
movzwl (%rdi),%esi (esi:=0x0400)
addb   $0x2,(%rdi) (LOCAL copy of lock is now: 0x0402)
movzwl (%rdi),%eax (local forwarding from previous store: eax := 0x0402)
testb  $0x1,%ah    (no wakeup of anybody)
jne    1f

                                callq  *__ticket_lock_spinning
                                  ...
                                  // __ticket_enter_slowpath(lock)
                                  lock or (%rdi), $0x100
                                  // (global view of lock := 0x0500)
						...
                                  ACCESS_ONCE(lock->tickets.head) == want
                                  // (reads 0x00)
						...
                                  xen_poll_irq(irq); // goes to sleep
...
[addb   $0x2,(%rdi)]
// (becomes globally visible only now! global view of lock := 0x0502)
...

Your code is reusing the (just about) safe version of unlocking a
spinlock without understanding the effect that close has on later
memory ordering. It may work on CPUs that cannot do narrow -> wide
store to load forwarding and have to make the addb store visible
globally. This is an implementation artifact of specific uarches, and
you mustn't rely on it, since our specified memory model allows looser
behaviour.

Since you want to get that addb out to global memory before the second
read, either use a LOCK prefix for it, add an MFENCE between addb and
movzwl, or use a LOCKed instruction that will have a fencing effect
(e.g., to top-of-stack) between addb and movzwl.
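
For instance (sketch only, reusing the macros from the patch), either
of these would give the required ordering:

	/* (a) make the release itself a locked RMW, so the head store is
	 *     globally visible before the flag is read */
	asm volatile(LOCK_PREFIX "addb %1, %0"
		     : "+m" (lock->tickets.head)
		     : "i" (TICKET_LOCK_INC) : "memory");

	/* (b) keep the plain add, but fence before the flag read */
	__ticket_unlock_release(lock);
	smp_mb();
	/* ... then the existing __ticket_in_slowpath() test ... */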

Stephan
-- 
Stephan Diestelhorst, AMD Operating System Research Center
stephan.diestelhorst@amd.com
Tel. +49 (0)351 448 356 719

Advanced Micro Devices GmbH
Einsteinring 24
85609 Aschheim
Germany

Geschaeftsfuehrer: Alberto Bozzo 
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632, WEEE-Reg-Nr: DE 12919551



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-09-27  9:34   ` Stephan Diestelhorst
@ 2011-09-27 16:44     ` Jeremy Fitzhardinge
  -1 siblings, 0 replies; 42+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-27 16:44 UTC (permalink / raw)
  To: Stephan Diestelhorst
  Cc: xen-devel, H. Peter Anvin, Marcelo Tosatti, Nick Piggin, KVM,
	Peter Zijlstra, the arch/x86 maintainers,
	Linux Kernel Mailing List, Andi Kleen, Avi Kivity,
	Jeremy Fitzhardinge, Ingo Molnar, Linus Torvalds

On 09/27/2011 02:34 AM, Stephan Diestelhorst wrote:
> On Wednesday 14 September 2011, 17:31:32 Jeremy Fitzhardinge wrote:
>> This series replaces the existing paravirtualized spinlock mechanism
>> with a paravirtualized ticketlock mechanism.
> [...] 
>> The unlock code is very straightforward:
>> 	prev = *lock;
>> 	__ticket_unlock_release(lock);
>> 	if (unlikely(__ticket_in_slowpath(lock)))
>> 		__ticket_unlock_slowpath(lock, prev);
>>
>> which generates:
>> 	push   %rbp
>> 	mov    %rsp,%rbp
>>
>>     movzwl (%rdi),%esi
>> 	addb   $0x2,(%rdi)
>>     movzwl (%rdi),%eax
>> 	testb  $0x1,%ah
>> 	jne    1f
>>
>> 	pop    %rbp
>> 	retq   
>>
>> 	### SLOWPATH START
>> 1:	movzwl (%rdi),%edx
>> 	movzbl %dh,%ecx
>> 	mov    %edx,%eax
>> 	and    $-2,%ecx	# clear TICKET_SLOWPATH_FLAG
>> 	mov    %cl,%dh
>> 	cmp    %dl,%cl	# test to see if lock is uncontended
>> 	je     3f
>>
>> 2:	movzbl %dl,%esi
>> 	callq  *__ticket_unlock_kick	# kick anyone waiting
>> 	pop    %rbp
>> 	retq   
>>
>> 3:	lock cmpxchg %dx,(%rdi)	# use cmpxchg to safely write back flag
>> 	jmp    2b
>> 	### SLOWPATH END
> [...]
>> Thoughts? Comments? Suggestions?
> You have a nasty data race in your code that can cause a losing
> acquirer to sleep forever, because its setting the TICKET_SLOWPATH flag
> can race with the lock holder releasing the lock.
>
> I used the code for the slow path from the GIT repo.
>
> Let me try to point out an interleaving:
>
> Lock is held by one thread, contains 0x0200.
>
> _Lock holder_                   _Acquirer_
>                                 mov    $0x200,%eax
>                                 lock xadd %ax,(%rdi)
>                                 // ax:= 0x0200, lock:= 0x0400
>                                 ...
>                                 // this guy spins for a while, reading
>                                 // the lock
>                                 ...
> //trying to free the lock
> movzwl (%rdi),%esi (esi:=0x0400)
> addb   $0x2,(%rdi) (LOCAL copy of lock is now: 0x0402)
> movzwl (%rdi),%eax (local forwarding from previous store: eax := 0x0402)
> testb  $0x1,%ah    (no wakeup of anybody)
> jne    1f
>
>                                 callq  *__ticket_lock_spinning
>                                   ...
>                                   // __ticket_enter_slowpath(lock)
>                                   lock or (%rdi), $0x100
>                                   // (global view of lock := 0x0500)
> 						...
>                                   ACCESS_ONCE(lock->tickets.head) == want
>                                   // (reads 0x00)
> 						...
>                                   xen_poll_irq(irq); // goes to sleep
> ...
> [addb   $0x2,(%rdi)]
> // (becomes globally visible only now! global view of lock := 0x0502)
> ...
>
> Your code is reusing the (just about) safe version of unlocking a
> spinlock without understanding the effect that close has on later
> memory ordering. It may work on CPUs that cannot do narrow -> wide
> store to load forwarding and have to make the addb store visible
> globally. This is an implementation artifact of specific uarches, and
> you mustn't rely on it, since our specified memory model allows looser
> behaviour.

Ah, thanks for this observation.  I've seen this bug before when I
didn't pay attention to the unlock W vs flag R ordering at all, and I
was hoping the aliasing would be sufficient - and certainly this seems
to have been OK on my Intel systems.  But you're saying that it will
fail on current AMD systems?  Have you tested this, or is this just from
code analysis (which I agree with after reviewing the ordering rules in
the Intel manual)?

> Since you want to get that addb out to global memory before the second
> read, either use a LOCK prefix for it, add an MFENCE between addb and
> movzwl, or use a LOCKed instruction that will have a fencing effect
> (e.g., to top-of-stack)between addb and movzwl.

Hm.  I don't really want to do any of those because it will probably
have a significant effect on the unlock performance; I was really trying
to avoid adding any more locked instructions.  A previous version of the
code had an mfence in here, but I hit on the idea of using aliasing to
get the ordering I want - but overlooked the possible effect of store
forwarding.

I guess it comes down to throwing myself on the efficiency of some kind
of fence instruction.  I guess an lfence would be sufficient; is that
any more efficient than a full mfence?  At least I can make it so that
it's only present when pv ticket locks are actually in use, so it won't
affect the native case.
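
Roughly what I have in mind (sketch, assuming a
paravirt_ticketlocks_enabled jump-label key that is only enabled when
the pv backend registers itself):

	static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
	{
		if (TICKET_SLOWPATH_FLAG &&
		    unlikely(arch_static_branch(&paravirt_ticketlocks_enabled))) {
			arch_spinlock_t prev = *lock;

			__ticket_unlock_release(lock);
			smp_mb();	/* order head store before flag read */
			if (unlikely(__ticket_in_slowpath(lock)))
				__ticket_unlock_slowpath(lock, prev);
		} else
			__ticket_unlock_release(lock);
	}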

Could you give me a pointer to AMD's description of the ordering rules?

Thanks,
    J

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-09-27 16:44     ` Jeremy Fitzhardinge
  (?)
@ 2011-09-28 13:58     ` Stephan Diestelhorst
  2011-09-28 16:44       ` Jeremy Fitzhardinge
  -1 siblings, 1 reply; 42+ messages in thread
From: Stephan Diestelhorst @ 2011-09-28 13:58 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: xen-devel, H. Peter Anvin, Marcelo Tosatti, Nick Piggin, KVM,
	Peter Zijlstra, the arch/x86 maintainers,
	Linux Kernel Mailing List, Andi Kleen, Avi Kivity,
	Jeremy Fitzhardinge, Ingo Molnar, Linus Torvalds

On Tuesday 27 September 2011, 12:44:02 Jeremy Fitzhardinge wrote:
> On 09/27/2011 02:34 AM, Stephan Diestelhorst wrote:
> > On Wednesday 14 September 2011, 17:31:32 Jeremy Fitzhardinge wrote:
> >> This series replaces the existing paravirtualized spinlock mechanism
> >> with a paravirtualized ticketlock mechanism.
> > [...] 
> >> The unlock code is very straightforward:
> >> 	prev = *lock;
> >> 	__ticket_unlock_release(lock);
> >> 	if (unlikely(__ticket_in_slowpath(lock)))
> >> 		__ticket_unlock_slowpath(lock, prev);
> >>
> >> which generates:
> >> 	push   %rbp
> >> 	mov    %rsp,%rbp
> >>
> >>     movzwl (%rdi),%esi
> >> 	addb   $0x2,(%rdi)
> >>     movzwl (%rdi),%eax
> >> 	testb  $0x1,%ah
> >> 	jne    1f
> >>
> >> 	pop    %rbp
> >> 	retq   
> >>
> >> 	### SLOWPATH START
> >> 1:	movzwl (%rdi),%edx
> >> 	movzbl %dh,%ecx
> >> 	mov    %edx,%eax
> >> 	and    $-2,%ecx	# clear TICKET_SLOWPATH_FLAG
> >> 	mov    %cl,%dh
> >> 	cmp    %dl,%cl	# test to see if lock is uncontended
> >> 	je     3f
> >>
> >> 2:	movzbl %dl,%esi
> >> 	callq  *__ticket_unlock_kick	# kick anyone waiting
> >> 	pop    %rbp
> >> 	retq   
> >>
> >> 3:	lock cmpxchg %dx,(%rdi)	# use cmpxchg to safely write back flag
> >> 	jmp    2b
> >> 	### SLOWPATH END
> > [...]
> >> Thoughts? Comments? Suggestions?
> > You have a nasty data race in your code that can cause a losing
> > acquirer to sleep forever, because its setting the TICKET_SLOWPATH flag
> > can race with the lock holder releasing the lock.
> >
> > I used the code for the slow path from the GIT repo.
> >
> > Let me try to point out an interleaving:
> >
> > Lock is held by one thread, contains 0x0200.
> >
> > _Lock holder_                   _Acquirer_
> >                                 mov    $0x200,%eax
> >                                 lock xadd %ax,(%rdi)
> >                                 // ax:= 0x0200, lock:= 0x0400
> >                                 ...
> >                                 // this guy spins for a while, reading
> >                                 // the lock
> >                                 ...
> > //trying to free the lock
> > movzwl (%rdi),%esi (esi:=0x0400)
> > addb   $0x2,(%rdi) (LOCAL copy of lock is now: 0x0402)
> > movzwl (%rdi),%eax (local forwarding from previous store: eax := 0x0402)
> > testb  $0x1,%ah    (no wakeup of anybody)
> > jne    1f
> >
> >                                 callq  *__ticket_lock_spinning
> >                                   ...
> >                                   // __ticket_enter_slowpath(lock)
> >                                   lock or (%rdi), $0x100
> >                                   // (global view of lock := 0x0500)
> > 						...
> >                                   ACCESS_ONCE(lock->tickets.head) == want
> >                                   // (reads 0x00)
> > 						...
> >                                   xen_poll_irq(irq); // goes to sleep
> > ...
> > [addb   $0x2,(%rdi)]
> > // (becomes globally visible only now! global view of lock := 0x0502)
> > ...
> >
> > Your code is reusing the (just about) safe version of unlocking a
> > spinlock without understanding the effect that close has on later
> > memory ordering. It may work on CPUs that cannot do narrow -> wide
> > store to load forwarding and have to make the addb store visible
> > globally. This is an implementation artifact of specific uarches, and
> > you mustn't rely on it, since our specified memory model allows looser
> > behaviour.
> 
> Ah, thanks for this observation.  I've seen this bug before when I
> didn't pay attention to the unlock W vs flag R ordering at all, and I
> was hoping the aliasing would be sufficient - and certainly this seems
> to have been OK on my Intel systems.  But you're saying that it will
> fail on current AMD systems?

I have tested this and have not seen it fail on publicly released AMD
systems. But as I have tried to point out, this does not mean it is
safe to do in software, because future microarchitectures may have more
capable forwarding engines.

> Have you tested this, or is this just from code analysis (which I
> agree with after reviewing the ordering rules in the Intel manual).

We have found a similar issue in Novell's PV ticket lock implementation
during internal product testing.

> > Since you want to get that addb out to global memory before the second
> > read, either use a LOCK prefix for it, add an MFENCE between addb and
> > movzwl, or use a LOCKed instruction that will have a fencing effect
> > (e.g., to top-of-stack)between addb and movzwl.
> 
> Hm.  I don't really want to do any of those because it will probably
> have a significant effect on the unlock performance; I was really trying
> to avoid adding any more locked instructions.  A previous version of the
> code had an mfence in here, but I hit on the idea of using aliasing to
> get the ordering I want - but overlooked the possible effect of store
> forwarding.

Well, I'd be curious about the actual performance impact. If the store
needs to commit to memory due to aliasing anyways, this would slow down
execution, too. After all it is better to write working than fast code,
no? ;-)

> I guess it comes down to throwing myself on the efficiency of some kind
> of fence instruction.  I guess an lfence would be sufficient; is that
> any more efficient than a full mfence?

An lfence should not be sufficient, since that essentially is a NOP on
WB memory. You really want a full fence here, since the store needs to
be published before reading the lock with the next load.

> At least I can make it so that its only present when pv ticket locks
> are actually in use, so it won't affect the native case.

That would be a good thing, indeed. Of course, always relative to an
actual performance comparison.

> Could you give me a pointer to AMD's description of the ordering rules?

They should be in "AMD64 Architecture Programmer's Manual Volume 2:
System Programming", Section 7.2 Multiprocessor Memory Access Ordering.

http://developer.amd.com/documentation/guides/pages/default.aspx#manuals

Let me know if you have some clarifying suggestions. We are currently
revising these documents...

Cheers,
  Stephan

-- 
Stephan Diestelhorst, AMD Operating System Research Center
stephan.diestelhorst@amd.com
Tel. +49 (0)351 448 356 719

Advanced Micro Devices GmbH
Einsteinring 24
85609 Aschheim
Germany

Geschaeftsfuehrer: Alberto Bozzo 
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632, WEEE-Reg-Nr: DE 12919551



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-09-27 16:44     ` Jeremy Fitzhardinge
  (?)
  (?)
@ 2011-09-28 15:38     ` Linus Torvalds
  2011-09-28 15:55         ` Jan Beulich
  -1 siblings, 1 reply; 42+ messages in thread
From: Linus Torvalds @ 2011-09-28 15:38 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Stephan Diestelhorst, xen-devel, H. Peter Anvin, Marcelo Tosatti,
	Nick Piggin, KVM, Peter Zijlstra, the arch/x86 maintainers,
	Linux Kernel Mailing List, Andi Kleen, Avi Kivity,
	Jeremy Fitzhardinge, Ingo Molnar

On Tue, Sep 27, 2011 at 9:44 AM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
>
> I guess it comes down to throwing myself on the efficiency of some kind
> of fence instruction.  I guess an lfence would be sufficient; is that
> any more efficient than a full mfence?  At least I can make it so that
> its only present when pv ticket locks are actually in use, so it won't
> affect the native case.

Please don't play with fences, just do the final "addb" as a locked instruction.

In fact, don't even use an addb, this whole thing is disgusting:

  movzwl (%rdi),%esi (esi:=0x0400)
  addb   $0x2,(%rdi) (LOCAL copy of lock is now: 0x0402)
  movzwl (%rdi),%eax (local forwarding from previous store: eax := 0x0402)

just use "lock xaddw" there too.

The fact that the PV unlock is going to be much more expensive than a
regular native unlock is just a fact of life. It comes from
fundamentally caring about the old/new value, and has nothing to do
with aliasing. You care about the other bits, and it doesn't matter
where in memory they are.

The native unlock can do a simple "addb" (or incb), but that doesn't
mean the PV unlock can. There are no ordering issues with the final
unlock in the native case, because the native unlock is like the honey
badger: it don't care. It only cares that the store make it out *some*
day, but it doesn't care about what order the upper/lower bits get
updated. You do. So you have to use a locked access.

Good catch by Stephan.

                             Linus

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-09-28 15:38     ` Linus Torvalds
@ 2011-09-28 15:55         ` Jan Beulich
  0 siblings, 0 replies; 42+ messages in thread
From: Jan Beulich @ 2011-09-28 15:55 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Linus Torvalds
  Cc: Stephan Diestelhorst, Jeremy Fitzhardinge, Ingo Molnar,
	Andi Kleen, Peter Zijlstra, Nick Piggin,
	the arch/x86 maintainers, xen-devel, Avi Kivity, Marcelo Tosatti,
	KVM, Linux Kernel Mailing List, H. Peter Anvin

>>> On 28.09.11 at 17:38, Linus Torvalds <torvalds@linux-foundation.org> wrote:
> On Tue, Sep 27, 2011 at 9:44 AM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
>>
>> I guess it comes down to throwing myself on the efficiency of some kind
>> of fence instruction.  I guess an lfence would be sufficient; is that
>> any more efficient than a full mfence?  At least I can make it so that
>> its only present when pv ticket locks are actually in use, so it won't
>> affect the native case.
> 
> Please don't play with fences, just do the final "addb" as a locked 
> instruction.
> 
> In fact, don't even use an addb, this whole thing is disgusting:
> 
>   movzwl (%rdi),%esi (esi:=0x0400)
>   addb   $0x2,(%rdi) (LOCAL copy of lock is now: 0x0402)
>   movzwl (%rdi),%eax (local forwarding from previous store: eax := 0x0402)
> 
> just use "lock xaddw" there too.

I'm afraid that's not possible, as that might carry from the low 8 bits
into the upper 8 ones, which must be avoided.
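
(Example: with head == 0xfe, a word-wide xadd of 0x0002 leaves head at
0x00 but the carry also increments the tail byte, corrupting the
ticket state.)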

Jan


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-09-28 15:55         ` Jan Beulich
  (?)
@ 2011-09-28 16:10         ` Linus Torvalds
  2011-09-28 16:47           ` Jeremy Fitzhardinge
  -1 siblings, 1 reply; 42+ messages in thread
From: Linus Torvalds @ 2011-09-28 16:10 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Jeremy Fitzhardinge, Stephan Diestelhorst, Jeremy Fitzhardinge,
	Ingo Molnar, Andi Kleen, Peter Zijlstra, Nick Piggin,
	the arch/x86 maintainers, xen-devel, Avi Kivity, Marcelo Tosatti,
	KVM, Linux Kernel Mailing List, H. Peter Anvin

On Wed, Sep 28, 2011 at 8:55 AM, Jan Beulich <JBeulich@suse.com> wrote:
>
>> just use "lock xaddw" there too.
>
> I'm afraid that's not possible, as that might carry from the low 8 bits
> into the upper 8 ones, which must be avoided.

Oh damn, you're right. So I guess the "right" way to do things is with
cmpxchg, but some nasty mfence setup could do it too.

                          Linus

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-09-28 13:58     ` [Xen-devel] " Stephan Diestelhorst
@ 2011-09-28 16:44       ` Jeremy Fitzhardinge
  2011-09-28 18:13         ` Stephan Diestelhorst
  0 siblings, 1 reply; 42+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-28 16:44 UTC (permalink / raw)
  To: Stephan Diestelhorst
  Cc: xen-devel, H. Peter Anvin, Marcelo Tosatti, Nick Piggin, KVM,
	Peter Zijlstra, the arch/x86 maintainers,
	Linux Kernel Mailing List, Andi Kleen, Avi Kivity,
	Jeremy Fitzhardinge, Ingo Molnar, Linus Torvalds, Jan Beulich

On 09/28/2011 06:58 AM, Stephan Diestelhorst wrote:
> I have tested this and have not seen it fail on publicly released AMD
> systems. But as I have tried to point out, this does not mean it is
> safe to do in software, because future microarchtectures may have more
> capable forwarding engines.

Sure.

>> Have you tested this, or is this just from code analysis (which I
>> agree with after reviewing the ordering rules in the Intel manual).
> We have found a similar issue in Novell's PV ticket lock implementation
> during internal product testing.

Jan may have picked it up from an earlier set of my patches.

>>> Since you want to get that addb out to global memory before the second
>>> read, either use a LOCK prefix for it, add an MFENCE between addb and
>>> movzwl, or use a LOCKed instruction that will have a fencing effect
>>> (e.g., to top-of-stack)between addb and movzwl.
>> Hm.  I don't really want to do any of those because it will probably
>> have a significant effect on the unlock performance; I was really trying
>> to avoid adding any more locked instructions.  A previous version of the
>> code had an mfence in here, but I hit on the idea of using aliasing to
>> get the ordering I want - but overlooked the possible effect of store
>> forwarding.
> Well, I'd be curious about the actual performance impact. If the store
> needs to commit to memory due to aliasing anyways, this would slow down
> execution, too. After all it is better to write working than fast code,
> no? ;-)

Rule of thumb is that AMD tends to do things like lock and fence more
efficiently than Intel - at least historically.  I don't know if that's
still true for current Intel microarchitectures.

>> I guess it comes down to throwing myself on the efficiency of some kind
>> of fence instruction.  I guess an lfence would be sufficient; is that
>> any more efficient than a full mfence?
> An lfence should not be sufficient, since that essentially is a NOP on
> WB memory. You really want a full fence here, since the store needs to
> be published before reading the lock with the next load.

The Intel manual reads:

    Reads cannot pass earlier LFENCE and MFENCE instructions.
    Writes cannot pass earlier LFENCE, SFENCE, and MFENCE instructions.
    LFENCE instructions cannot pass earlier reads.

Which I interpreted as meaning that an lfence would prevent forwarding. 
But I guess it doesn't say "lfence instructions cannot pass earlier
writes", which means that the lfence could logically happen before the
write, thereby allowing forwarding?  Or should I be reading this some
other way?

>> Could you give me a pointer to AMD's description of the ordering rules?
> They should be in "AMD64 Architecture Programmer's Manual Volume 2:
> System Programming", Section 7.2 Multiprocessor Memory Access Ordering.
>
> http://developer.amd.com/documentation/guides/pages/default.aspx#manuals
>
> Let me know if you have some clarifying suggestions. We are currently
> revising these documents...

I find the English descriptions of these kinds of things frustrating to
read because of ambiguities in the precise meaning of words like "pass",
"ahead", "behind" in these contexts.  I find the prose useful to get an
overview, but when I have a specific question I wonder if something more
formal would be useful.
I guess it's implied that anything that is not prohibited by the
ordering rules is allowed, but it wouldn't hurt to say it explicitly.
That said, the AMD description seems clearer and more explicit than the
Intel manual (esp since it specifically discusses the problem here).

Thanks,
    J

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-09-28 16:10         ` Linus Torvalds
@ 2011-09-28 16:47           ` Jeremy Fitzhardinge
  2011-09-28 17:22             ` Linus Torvalds
  0 siblings, 1 reply; 42+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-28 16:47 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Jan Beulich, Stephan Diestelhorst, Jeremy Fitzhardinge,
	Ingo Molnar, Andi Kleen, Peter Zijlstra, Nick Piggin,
	the arch/x86 maintainers, xen-devel, Avi Kivity, Marcelo Tosatti,
	KVM, Linux Kernel Mailing List, H. Peter Anvin

On 09/28/2011 09:10 AM, Linus Torvalds wrote:
> On Wed, Sep 28, 2011 at 8:55 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>> just use "lock xaddw" there too.
>> I'm afraid that's not possible, as that might carry from the low 8 bits
>> into the upper 8 ones, which must be avoided.
> Oh damn, you're right. So I guess the "right" way to do things is with
> cmpxchg, but some nasty mfence setup could do it too.

Could do something like:

	if (ticket->head >= 254)
		prev = xadd(&ticket->head_tail, 0xff02);
	else
		prev = xadd(&ticket->head_tail, 0x0002);

to compensate for the overflow.
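
(The extra 0xff00 cancels the carry: with head == 0xfe, the low byte
wraps to 0x00 and carries one into the high byte, where 0xff plus that
carry takes the tail byte all the way around, leaving it unchanged.)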

    J

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-09-28 16:47           ` Jeremy Fitzhardinge
@ 2011-09-28 17:22             ` Linus Torvalds
  2011-09-28 17:24               ` H. Peter Anvin
  0 siblings, 1 reply; 42+ messages in thread
From: Linus Torvalds @ 2011-09-28 17:22 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Jan Beulich, Stephan Diestelhorst, Jeremy Fitzhardinge,
	Ingo Molnar, Andi Kleen, Peter Zijlstra, Nick Piggin,
	the arch/x86 maintainers, xen-devel, Avi Kivity, Marcelo Tosatti,
	KVM, Linux Kernel Mailing List, H. Peter Anvin

On Wed, Sep 28, 2011 at 9:47 AM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
>
> Could do something like:
>
>        if (ticket->head >= 254)
>                prev = xadd(&ticket->head_tail, 0xff02);
>        else
>                prev = xadd(&ticket->head_tail, 0x0002);
>
> to compensate for the overflow.

Oh wow. You have an even more twisted mind than I do.

I guess that will work, exactly because we control "head" and thus can
know about the overflow in the low byte. But boy is that ugly ;)

But at least you wouldn't need to do the loop with cmpxchg. So it's
twisted and ugly, but might be practical.

                   Linus

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-09-28 17:22             ` Linus Torvalds
@ 2011-09-28 17:24               ` H. Peter Anvin
  2011-09-28 17:50                 ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 42+ messages in thread
From: H. Peter Anvin @ 2011-09-28 17:24 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Jeremy Fitzhardinge, Jan Beulich, Stephan Diestelhorst,
	Jeremy Fitzhardinge, Ingo Molnar, Andi Kleen, Peter Zijlstra,
	Nick Piggin, the arch/x86 maintainers, xen-devel, Avi Kivity,
	Marcelo Tosatti, KVM, Linux Kernel Mailing List

On 09/28/2011 10:22 AM, Linus Torvalds wrote:
> On Wed, Sep 28, 2011 at 9:47 AM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
>>
>> Could do something like:
>>
>>        if (ticket->head >= 254)
>>                prev = xadd(&ticket->head_tail, 0xff02);
>>        else
>>                prev = xadd(&ticket->head_tail, 0x0002);
>>
>> to compensate for the overflow.
> 
> Oh wow. You havge an even more twisted mind than I do.
> 
> I guess that will work, exactly because we control "head" and thus can
> know about the overflow in the low byte. But boy is that ugly ;)
> 
> But at least you wouldn't need to do the loop with cmpxchg. So it's
> twisted and ugly, but migth be practical.
> 

I suspect it should be coded as -254 in order to use a short immediate
if that is even possible...

	-hpa

-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-09-28 17:24               ` H. Peter Anvin
@ 2011-09-28 17:50                 ` Jeremy Fitzhardinge
  2011-09-28 18:08                   ` Stephan Diestelhorst
  0 siblings, 1 reply; 42+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-28 17:50 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Jan Beulich, Stephan Diestelhorst,
	Jeremy Fitzhardinge, Ingo Molnar, Andi Kleen, Peter Zijlstra,
	Nick Piggin, the arch/x86 maintainers, xen-devel, Avi Kivity,
	Marcelo Tosatti, KVM, Linux Kernel Mailing List

On 09/28/2011 10:24 AM, H. Peter Anvin wrote:
> On 09/28/2011 10:22 AM, Linus Torvalds wrote:
>> On Wed, Sep 28, 2011 at 9:47 AM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
>>> Could do something like:
>>>
>>>        if (ticket->head >= 254)
>>>                prev = xadd(&ticket->head_tail, 0xff02);
>>>        else
>>>                prev = xadd(&ticket->head_tail, 0x0002);
>>>
>>> to compensate for the overflow.
>> Oh wow. You have an even more twisted mind than I do.
>>
>> I guess that will work, exactly because we control "head" and thus can
>> know about the overflow in the low byte. But boy is that ugly ;)
>>
>> But at least you wouldn't need to do the loop with cmpxchg. So it's
>> twisted and ugly, but might be practical.
>>
> I suspect it should be coded as -254 in order to use a short immediate
> if that is even possible...

I'm about to test:

static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	if (TICKET_SLOWPATH_FLAG && unlikely(arch_static_branch(&paravirt_ticketlocks_enabled))) {
		arch_spinlock_t prev;
		__ticketpair_t inc = TICKET_LOCK_INC;

		if (lock->tickets.head >= (1 << TICKET_SHIFT) - TICKET_LOCK_INC)
			inc += -1 << TICKET_SHIFT;

		prev.head_tail = xadd(&lock->head_tail, inc);

		if (prev.tickets.tail & TICKET_SLOWPATH_FLAG)
			__ticket_unlock_slowpath(lock, prev);
	} else
		__ticket_unlock_release(lock);
}

Which, frankly, is not something I particularly want to put my name to.

It makes gcc go into paroxysms of trickiness:

 4a8:   80 3f fe                cmpb   $0xfe,(%rdi)
 4ab:   19 f6                   sbb    %esi,%esi
 4ad:   66 81 e6 00 01          and    $0x100,%si
 4b2:   66 81 ee fe 00          sub    $0xfe,%si
 4b7:   f0 66 0f c1 37          lock xadd %si,(%rdi)

...which is pretty neat, actually.

    J

^ permalink raw reply	[flat|nested] 42+ messages in thread
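
For reference, a reading of that generated sequence (raw byte encodings
omitted; it assumes %rdi holds the lock pointer and head is the low byte
of the lock word): the cmp/sbb pair turns the "head >= 254" test into a
carry-flag mask, so the compensated increment is selected without a
branch before the single atomic xadd.

 4a8:	cmpb   $0xfe,(%rdi)	# CF is set iff head < 254
 4ab:	sbb    %esi,%esi	# esi = -CF: all ones if head < 254, else 0
 4ad:	and    $0x100,%si	# si  = 0x100 if head < 254, else 0
 4b2:	sub    $0xfe,%si	# si  = 0x0002 (normal) or 0xff02 (compensated)
 4b7:	lock xadd %si,(%rdi)	# advance head and fetch old head/tail in one op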

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-09-28 17:50                 ` Jeremy Fitzhardinge
@ 2011-09-28 18:08                   ` Stephan Diestelhorst
  2011-09-28 18:27                     ` Jeremy Fitzhardinge
  2011-09-28 18:49                     ` Linus Torvalds
  0 siblings, 2 replies; 42+ messages in thread
From: Stephan Diestelhorst @ 2011-09-28 18:08 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: H. Peter Anvin, Linus Torvalds, Jan Beulich, Jeremy Fitzhardinge,
	Ingo Molnar, Andi Kleen, Peter Zijlstra, Nick Piggin,
	the arch/x86 maintainers, xen-devel, Avi Kivity, Marcelo Tosatti,
	KVM, Linux Kernel Mailing List

On Wednesday 28 September 2011 19:50:08 Jeremy Fitzhardinge wrote:
> On 09/28/2011 10:24 AM, H. Peter Anvin wrote:
> > On 09/28/2011 10:22 AM, Linus Torvalds wrote:
> >> On Wed, Sep 28, 2011 at 9:47 AM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
> >>> Could do something like:
> >>>
> >>>        if (ticket->head >= 254)
> >>>                prev = xadd(&ticket->head_tail, 0xff02);
> >>>        else
> >>>                prev = xadd(&ticket->head_tail, 0x0002);
> >>>
> >>> to compensate for the overflow.
> >> Oh wow. You have an even more twisted mind than I do.
> >>
> >> I guess that will work, exactly because we control "head" and thus can
> >> know about the overflow in the low byte. But boy is that ugly ;)
> >>
> >> But at least you wouldn't need to do the loop with cmpxchg. So it's
> >> twisted and ugly, but might be practical.
> >>
> > I suspect it should be coded as -254 in order to use a short immediate
> > if that is even possible...
> 
> I'm about to test:
> 
> static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
> {
> 	if (TICKET_SLOWPATH_FLAG && unlikely(arch_static_branch(&paravirt_ticketlocks_enabled))) {
> 		arch_spinlock_t prev;
> 		__ticketpair_t inc = TICKET_LOCK_INC;
> 
> 		if (lock->tickets.head >= (1 << TICKET_SHIFT) - TICKET_LOCK_INC)
> 			inc += -1 << TICKET_SHIFT;
> 
> 		prev.head_tail = xadd(&lock->head_tail, inc);
> 
> 		if (prev.tickets.tail & TICKET_SLOWPATH_FLAG)
> 			__ticket_unlock_slowpath(lock, prev);
> 	} else
> 		__ticket_unlock_release(lock);
> }
> 
> Which, frankly, is not something I particularly want to put my name to.

I must have missed the part when this turned into the propose-the-
craziest-way-that-this-still-works contest :)

What is wrong with converting the original addb into a lock addb? The
crazy wrap around tricks add a conditional and lots of headache. The
lock addb/w is clean. We are paying an atomic in both cases, so I just
don't see the benefit of the second solution.

Stephan
-- 
Stephan Diestelhorst, AMD Operating System Research Center
stephan.diestelhorst@amd.com, Tel. +49 (0)351 448 356 719

Advanced Micro Devices GmbH
Einsteinring 24
85609 Aschheim
Germany

Geschaeftsfuehrer: Alberto Bozzo;
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632, WEEE-Reg-Nr: DE 12919551 


^ permalink raw reply	[flat|nested] 42+ messages in thread
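
For concreteness, a minimal user-space sketch of the shape Stephan is
suggesting (the struct layout and the lck/unlock names are made up for
the sketch; this is not the posted patch): with the lock prefix the byte
add is itself a full barrier, so the read of the flag that follows
cannot be satisfied before the release is globally visible.

#include <stdio.h>

/* assumed small-ticket-style layout: head byte and flag byte share a word */
struct {
	unsigned char head;
	unsigned char flag;
} lck;

static void unlock(void)
{
	/* release the ticket; the lock prefix doubles as a full memory barrier */
	asm volatile("lock addb %1, %0"
		     : "+m" (lck.head)
		     : "r" ((char)2)
		     : "memory");
	if (lck.flag)	/* only now is it safe to look for sleeping waiters */
		printf("would kick the next waiter here\n");
}

int main(void)
{
	unlock();
	printf("head=%u flag=%u\n", lck.head, lck.flag);
	return 0;
}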

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-09-28 16:44       ` Jeremy Fitzhardinge
@ 2011-09-28 18:13         ` Stephan Diestelhorst
  0 siblings, 0 replies; 42+ messages in thread
From: Stephan Diestelhorst @ 2011-09-28 18:13 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: xen-devel, H. Peter Anvin, Marcelo Tosatti, Nick Piggin, KVM,
	Peter Zijlstra, the arch/x86 maintainers,
	Linux Kernel Mailing List, Andi Kleen, Avi Kivity,
	Jeremy Fitzhardinge, Ingo Molnar, Linus Torvalds, Jan Beulich

On Wednesday 28 September 2011 18:44:25 Jeremy Fitzhardinge wrote:
> On 09/28/2011 06:58 AM, Stephan Diestelhorst wrote:
> >> I guess it comes down to throwing myself on the efficiency of some kind
> >> of fence instruction.  I guess an lfence would be sufficient; is that
> >> any more efficient than a full mfence?
> > An lfence should not be sufficient, since that essentially is a NOP on
> > WB memory. You really want a full fence here, since the store needs to
> > be published before reading the lock with the next load.
> 
> The Intel manual reads:
> 
>     Reads cannot pass earlier LFENCE and MFENCE instructions.
>     Writes cannot pass earlier LFENCE, SFENCE, and MFENCE instructions.
>     LFENCE instructions cannot pass earlier reads.
> 
> Which I interpreted as meaning that an lfence would prevent forwarding. 
> But I guess it doesn't say "lfence instructions cannot pass earlier
> writes", which means that the lfence could logically happen before the
> write, thereby allowing forwarding?  Or should I be reading this some
> other way?

Indeed. You are reading this the right way. 

> >> Could you give me a pointer to AMD's description of the ordering rules?
> > They should be in "AMD64 Architecture Programmer's Manual Volume 2:
> > System Programming", Section 7.2 Multiprocessor Memory Access Ordering.
> >
> > http://developer.amd.com/documentation/guides/pages/default.aspx#manuals
> >
> > Let me know if you have some clarifying suggestions. We are currently
> > revising these documents...
> 
> I find the English descriptions of these kinds of things frustrating to
> read because of ambiguities in the precise meaning of words like "pass",
> "ahead", "behind" in these contexts.  I find the prose useful to get an
> overview, but when I have a specific question I wonder if something more
> formal would be useful.

It would be, and some have started this effort:

http://www.cl.cam.ac.uk/~pes20/weakmemory/

But I am not sure whether that particular nasty forwarding case is
captured properly in their model. It is on my list of things to check.

> I guess it's implied that anything that is not prohibited by the
> ordering rules is allowed, but it wouldn't hurt to say it explicitly.
> That said, the AMD description seems clearer and more explicit than the
> Intel manual (esp since it specifically discusses the problem here).

Thanks! Glad you like it :)

Stephan
-- 
Stephan Diestelhorst, AMD Operating System Research Center
stephan.diestelhorst@amd.com, Tel. +49 (0)351 448 356 719

Advanced Micro Devices GmbH
Einsteinring 24
85609 Aschheim
Germany

Geschaeftsfuehrer: Alberto Bozzo;
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632, WEEE-Reg-Nr: DE 12919551 


^ permalink raw reply	[flat|nested] 42+ messages in thread
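
The reordering at issue is the one case x86 does allow: a store followed
by a load from a different location.  Below is a minimal store-buffering
litmus sketch (C11 atomics, with relaxed accesses standing in for the
plain addb and the flag read; the t0/t1 names are invented and nothing
here is from the patch): without a full fence between the store and the
load, both threads may read the other's old value, which is exactly why
an lfence is not enough here and an mfence or a locked instruction is.

/* build: cc -O2 -pthread sb-litmus.c  (the file name is arbitrary) */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_int x, y;	/* both start at 0 */
int r0, r1;

static void *t0(void *arg)
{
	(void)arg;
	atomic_store_explicit(&x, 1, memory_order_relaxed);   /* "release the lock" */
	/* atomic_thread_fence(memory_order_seq_cst);  <- the fence in question */
	r0 = atomic_load_explicit(&y, memory_order_relaxed);  /* "check for waiters" */
	return NULL;
}

static void *t1(void *arg)
{
	(void)arg;
	atomic_store_explicit(&y, 1, memory_order_relaxed);
	r1 = atomic_load_explicit(&x, memory_order_relaxed);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, t0, NULL);
	pthread_create(&b, NULL, t1, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* r0 == 0 && r1 == 0 is allowed unless the fences are present */
	printf("r0=%d r1=%d\n", r0, r1);
	return 0;
}

A single run will usually print 1s; the point is only that the 0/0
outcome stays architecturally possible until a full barrier separates
each store from the following load.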

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-09-28 18:08                   ` Stephan Diestelhorst
@ 2011-09-28 18:27                     ` Jeremy Fitzhardinge
  2011-09-28 18:49                     ` Linus Torvalds
  1 sibling, 0 replies; 42+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-28 18:27 UTC (permalink / raw)
  To: Stephan Diestelhorst
  Cc: H. Peter Anvin, Linus Torvalds, Jan Beulich, Jeremy Fitzhardinge,
	Ingo Molnar, Andi Kleen, Peter Zijlstra, Nick Piggin,
	the arch/x86 maintainers, xen-devel, Avi Kivity, Marcelo Tosatti,
	KVM, Linux Kernel Mailing List

On 09/28/2011 11:08 AM, Stephan Diestelhorst wrote:
> On Wednesday 28 September 2011 19:50:08 Jeremy Fitzhardinge wrote:
>> On 09/28/2011 10:24 AM, H. Peter Anvin wrote:
>>> On 09/28/2011 10:22 AM, Linus Torvalds wrote:
>>>> On Wed, Sep 28, 2011 at 9:47 AM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
>>>>> Could do something like:
>>>>>
>>>>>        if (ticket->head >= 254)
>>>>>                prev = xadd(&ticket->head_tail, 0xff02);
>>>>>        else
>>>>>                prev = xadd(&ticket->head_tail, 0x0002);
>>>>>
>>>>> to compensate for the overflow.
>>>> Oh wow. You have an even more twisted mind than I do.
>>>>
>>>> I guess that will work, exactly because we control "head" and thus can
>>>> know about the overflow in the low byte. But boy is that ugly ;)
>>>>
>>>> But at least you wouldn't need to do the loop with cmpxchg. So it's
>>>> twisted and ugly, but might be practical.
>>>>
>>> I suspect it should be coded as -254 in order to use a short immediate
>>> if that is even possible...
>> I'm about to test:
>>
>> static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
>> {
>> 	if (TICKET_SLOWPATH_FLAG && unlikely(arch_static_branch(&paravirt_ticketlocks_enabled))) {
>> 		arch_spinlock_t prev;
>> 		__ticketpair_t inc = TICKET_LOCK_INC;
>>
>> 		if (lock->tickets.head >= (1 << TICKET_SHIFT) - TICKET_LOCK_INC)
>> 			inc += -1 << TICKET_SHIFT;
>>
>> 		prev.head_tail = xadd(&lock->head_tail, inc);
>>
>> 		if (prev.tickets.tail & TICKET_SLOWPATH_FLAG)
>> 			__ticket_unlock_slowpath(lock, prev);
>> 	} else
>> 		__ticket_unlock_release(lock);
>> }
>>
>> Which, frankly, is not something I particularly want to put my name to.
> I must have missed the part when this turned into the propose-the-
> craziest-way-that-this-still-works contest :)
>
> What is wrong with converting the original addb into a lock addb? The
> crazy wrap around tricks add a conditional and lots of headache. The
> lock addb/w is clean. We are paying an atomic in both cases, so I just
> don't see the benefit of the second solution.

Well, it does end up generating surprisingly nice code.  And to be
honest, being able to do the unlock and atomically fetch the flag as one
operation makes it much easier to reason about.

I'll do a locked add variant as well to see how it turns out.

Do you think locked add is better than unlocked + mfence?

Thanks,
    J

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-09-28 18:08                   ` Stephan Diestelhorst
  2011-09-28 18:27                     ` Jeremy Fitzhardinge
@ 2011-09-28 18:49                     ` Linus Torvalds
  2011-09-28 19:06                       ` Jeremy Fitzhardinge
  2011-10-06 14:04                       ` Stephan Diestelhorst
  1 sibling, 2 replies; 42+ messages in thread
From: Linus Torvalds @ 2011-09-28 18:49 UTC (permalink / raw)
  To: Stephan Diestelhorst
  Cc: Jeremy Fitzhardinge, H. Peter Anvin, Jan Beulich,
	Jeremy Fitzhardinge, Ingo Molnar, Andi Kleen, Peter Zijlstra,
	Nick Piggin, the arch/x86 maintainers, xen-devel, Avi Kivity,
	Marcelo Tosatti, KVM, Linux Kernel Mailing List

On Wed, Sep 28, 2011 at 11:08 AM, Stephan Diestelhorst
<stephan.diestelhorst@amd.com> wrote:
>
> I must have missed the part when this turned into the propose-the-
> craziest-way-that-this-still-works contest :)

So doing it just with the "lock addb" probably works fine, but I have
to say that I personally shudder at the "surround the locked addb by
reads from the word, in order to approximate an atomic read of the
upper bits".

Because what you get is not really an "atomic read of the upper bits",
it's a "ok, we'll get the worst case of somebody modifying the upper
bits at the same time".

Which certainly should *work*, but from a conceptual standpoint, isn't
it just *much* nicer to say "we actually know *exactly* what the upper
bits were".

But I don't care all *that* deeply. I do agree that the xaddw trick is
pretty tricky. I just happen to think that it's actually *less* tricky
than "read the upper bits separately and depend on subtle ordering
issues with another writer that happens at the same time on another
CPU".

So I can live with either form - as long as it works. I think it might
be easier to argue that the xaddw is guaranteed to work, because all
values at all points are unarguably atomic (yeah, we read the lower
bits nonatomically, but as the owner of the lock we know that nobody
else can write them).

                                 Linus

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-09-28 18:49                     ` Linus Torvalds
@ 2011-09-28 19:06                       ` Jeremy Fitzhardinge
  2011-10-06 14:04                       ` Stephan Diestelhorst
  1 sibling, 0 replies; 42+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-28 19:06 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Stephan Diestelhorst, H. Peter Anvin, Jan Beulich,
	Jeremy Fitzhardinge, Ingo Molnar, Andi Kleen, Peter Zijlstra,
	Nick Piggin, the arch/x86 maintainers, xen-devel, Avi Kivity,
	Marcelo Tosatti, KVM, Linux Kernel Mailing List

On 09/28/2011 11:49 AM, Linus Torvalds wrote:
> But I don't care all *that* deeply. I do agree that the xaddw trick is
> pretty tricky. I just happen to think that it's actually *less* tricky
> than "read the upper bits separately and depend on subtle ordering
> issues with another writer that happens at the same time on another
> CPU".
>
> So I can live with either form - as long as it works. I think it might
> be easier to argue that the xaddw is guaranteed to work, because all
> values at all points are unarguably atomic (yeah, we read the lower
> bits nonatomically, but as the owner of the lock we know that nobody
> else can write them).

Exactly.  I just did a locked add variant, and while the code looks a
little simpler, it definitely has more actual complexity to analyze.

    J

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-09-28 18:49                     ` Linus Torvalds
  2011-09-28 19:06                       ` Jeremy Fitzhardinge
@ 2011-10-06 14:04                       ` Stephan Diestelhorst
  2011-10-06 17:40                         ` Jeremy Fitzhardinge
  1 sibling, 1 reply; 42+ messages in thread
From: Stephan Diestelhorst @ 2011-10-06 14:04 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Jeremy Fitzhardinge, H. Peter Anvin, Jan Beulich,
	Jeremy Fitzhardinge, Ingo Molnar, Andi Kleen, Peter Zijlstra,
	Nick Piggin, the arch/x86 maintainers, xen-devel, Avi Kivity,
	Marcelo Tosatti, KVM, Linux Kernel Mailing List

On Wednesday 28 September 2011, 14:49:56 Linus Torvalds wrote:
> On Wed, Sep 28, 2011 at 11:08 AM, Stephan Diestelhorst
> <stephan.diestelhorst@amd.com> wrote:
> >
> > I must have missed the part when this turned into the propose-the-
> > craziest-way-that-this-still-works contest :)
> 
> So doing it just with the "lock addb" probably works fine, but I have
> to say that I personally shudder at the "surround the locked addb by
> reads from the word, in order to approximate an atomic read of the
> upper bits".
> 
> Because what you get is not really an "atomic read of the upper bits",
> it's a "ok, we'll get the worst case of somebody modifying the upper
> bits at the same time".
> 
> Which certainly should *work*, but from a conceptual standpoint, isn't
> it just *much* nicer to say "we actually know *exactly* what the upper
> bits were".

Well, we really do NOT want atomicity here. What we really rather want
is sequentiality: free the lock, make the update visible, and THEN
check if someone has gone sleeping on it.

Atomicity only conveniently enforces that the three do not happen in a
different order (with the store becoming visible after the checking
load).

This does not have to be atomic, since spurious wakeups are not a
problem, in particular not with the FIFO-ness of ticket locks.

For that the fence, additional atomic etc. would be IMHO much cleaner
than the crazy overflow logic.

> But I don't care all *that* deeply. I do agree that the xaddw trick is
> pretty tricky. I just happen to think that it's actually *less* tricky
> than "read the upper bits separately and depend on subtle ordering
> issues with another writer that happens at the same time on another
> CPU".

Fair enough :)

Stephan
-- 
Stephan Diestelhorst, AMD Operating System Research Center
stephan.diestelhorst@amd.com
Tel. +49 (0)351 448 356 719

Advanced Micro Devices GmbH
Einsteinring 24
85609 Aschheim
Germany

Geschaeftsfuehrer: Alberto Bozzo
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632, WEEE-Reg-Nr: DE 12919551



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-10-06 14:04                       ` Stephan Diestelhorst
@ 2011-10-06 17:40                         ` Jeremy Fitzhardinge
  2011-10-06 18:09                           ` Jeremy Fitzhardinge
  2011-10-10 11:00                             ` Stephan Diestelhorst
  0 siblings, 2 replies; 42+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-06 17:40 UTC (permalink / raw)
  To: Stephan Diestelhorst
  Cc: Linus Torvalds, H. Peter Anvin, Jan Beulich, Jeremy Fitzhardinge,
	Ingo Molnar, Andi Kleen, Peter Zijlstra, Nick Piggin,
	the arch/x86 maintainers, xen-devel, Avi Kivity, Marcelo Tosatti,
	KVM, Linux Kernel Mailing List

[-- Attachment #1: Type: text/plain, Size: 1553 bytes --]

On 10/06/2011 07:04 AM, Stephan Diestelhorst wrote:
> On Wednesday 28 September 2011, 14:49:56 Linus Torvalds wrote:
>> Which certainly should *work*, but from a conceptual standpoint, isn't
>> it just *much* nicer to say "we actually know *exactly* what the upper
>> bits were".
> Well, we really do NOT want atomicity here. What we really rather want
> is sequentiality: free the lock, make the update visible, and THEN
> check if someone has gone sleeping on it.
>
> Atomicity only conveniently enforces that the three do not happen in a
> different order (with the store becoming visible after the checking
> load).
>
> This does not have to be atomic, since spurious wakeups are not a
> problem, in particular not with the FIFO-ness of ticket locks.
>
> For that the fence, additional atomic etc. would be IMHO much cleaner
> than the crazy overflow logic.

All things being equal I'd prefer lock-xadd just because it's easier to
analyze the concurrency for, crazy overflow tests or no.  But if
add+mfence turned out to be a performance win, then that would obviously
tip the scales.

However, it looks like locked xadd also has better performance:  on
my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
than locked xadd, so that pretty much settles it unless you think
there'd be a dramatic difference on an AMD system.

(On Nehalem it was much less dramatic 2% difference, but still in favour
of locked xadd.)

This is with a dumb-as-rocks run-it-in-a-loop-with-"time" benchmark, but
the results are not very subtle.

    J

[-- Attachment #2: add-barrier.c --]
[-- Type: text/x-csrc, Size: 285 bytes --]

#include <stdio.h>
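
/*
 * add+mfence variant: plain byte add, then a full fence, then a read of
 * the adjacent flag byte.
 */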

struct {
	unsigned char flag;
	unsigned char val;
} l;

int main(int argc, char **argv)
{
	int i;

	for (i = 0; i < 100000000; i++) {
		l.val += 2;
		asm volatile("mfence" : : : "memory");
		if (l.flag)
			break;
		asm volatile("" : : : "memory");
	}

	return 0;
}

[-- Attachment #3: locked-xadd.c --]
[-- Type: text/x-csrc, Size: 422 bytes --]

#include <stdio.h>
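
/*
 * Overflow-compensated locked xadd variant: one atomic op advances val
 * and returns the previous lock word in inc, so the old flag bit can be
 * tested without a separate read.
 */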

union {
	struct {
		unsigned char val;
		unsigned char flag;
	};
	unsigned short lock;
} l = { 0,0 };

int main(int argc, char **argv)
{
	int i;

	for (i = 0; i < 100000000; i++) {
		unsigned short inc = 2;
		if (l.val >= (0x100 - 2))
			inc += -1 << 8;
		asm volatile("lock; xadd %1,%0" : "+m" (l.lock), "+r" (inc) : );
		if (inc & 0x100)
			break;
		asm volatile("" : : : "memory");
	}

	return 0;
}

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-10-06 17:40                         ` Jeremy Fitzhardinge
@ 2011-10-06 18:09                           ` Jeremy Fitzhardinge
  2011-10-10  7:32                             ` Ingo Molnar
  2011-10-10 11:00                             ` Stephan Diestelhorst
  1 sibling, 1 reply; 42+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-06 18:09 UTC (permalink / raw)
  To: Stephan Diestelhorst
  Cc: Linus Torvalds, H. Peter Anvin, Jan Beulich, Jeremy Fitzhardinge,
	Ingo Molnar, Andi Kleen, Peter Zijlstra, Nick Piggin,
	the arch/x86 maintainers, xen-devel, Avi Kivity, Marcelo Tosatti,
	KVM, Linux Kernel Mailing List, Konrad Rzeszutek Wilk

On 10/06/2011 10:40 AM, Jeremy Fitzhardinge wrote:
> However, it looks like locked xadd also has better performance:  on
> my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
> than locked xadd, so that pretty much settles it unless you think
> there'd be a dramatic difference on an AMD system.

Konrad measures add+mfence is about 65% slower on AMD Phenom as well.

    J

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-10-06 18:09                           ` Jeremy Fitzhardinge
@ 2011-10-10  7:32                             ` Ingo Molnar
  2011-10-10 19:51                               ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 42+ messages in thread
From: Ingo Molnar @ 2011-10-10  7:32 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Stephan Diestelhorst, Linus Torvalds, H. Peter Anvin,
	Jan Beulich, Jeremy Fitzhardinge, Andi Kleen, Peter Zijlstra,
	Nick Piggin, the arch/x86 maintainers, xen-devel, Avi Kivity,
	Marcelo Tosatti, KVM, Linux Kernel Mailing List,
	Konrad Rzeszutek Wilk


* Jeremy Fitzhardinge <jeremy@goop.org> wrote:

> On 10/06/2011 10:40 AM, Jeremy Fitzhardinge wrote:
> > However, it looks like locked xadd also has better performance:  on
> > my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
> > than locked xadd, so that pretty much settles it unless you think
> > there'd be a dramatic difference on an AMD system.
> 
> Konrad measures add+mfence is about 65% slower on AMD Phenom as well.

xadd also results in smaller/tighter code, right?

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-10-06 17:40                         ` Jeremy Fitzhardinge
@ 2011-10-10 11:00                             ` Stephan Diestelhorst
  2011-10-10 11:00                             ` Stephan Diestelhorst
  1 sibling, 0 replies; 42+ messages in thread
From: Stephan Diestelhorst @ 2011-10-10 11:00 UTC (permalink / raw)
  To: xen-devel
  Cc: Jeremy Fitzhardinge, Jeremy Fitzhardinge, Nick Piggin, KVM,
	Peter Zijlstra, the arch/x86 maintainers,
	Linux Kernel Mailing List, Marcelo Tosatti, Andi Kleen,
	Avi Kivity, Jan Beulich, H. Peter Anvin, Ingo Molnar,
	Linus Torvalds

On Thursday 06 October 2011, 13:40:01 Jeremy Fitzhardinge wrote:
> On 10/06/2011 07:04 AM, Stephan Diestelhorst wrote:
> > On Wednesday 28 September 2011, 14:49:56 Linus Torvalds wrote:
> >> Which certainly should *work*, but from a conceptual standpoint, isn't
> >> it just *much* nicer to say "we actually know *exactly* what the upper
> >> bits were".
> > Well, we really do NOT want atomicity here. What we really rather want
> > is sequentiality: free the lock, make the update visible, and THEN
> > check if someone has gone sleeping on it.
> >
> > Atomicity only conveniently enforces that the three do not happen in a
> > different order (with the store becoming visible after the checking
> > load).
> >
> > This does not have to be atomic, since spurious wakeups are not a
> > problem, in particular not with the FIFO-ness of ticket locks.
> >
> > For that the fence, additional atomic etc. would be IMHO much cleaner
> > than the crazy overflow logic.
> 
> All things being equal I'd prefer lock-xadd just because it's easier to
> analyze the concurrency for, crazy overflow tests or no.  But if
> add+mfence turned out to be a performance win, then that would obviously
> tip the scales.
> 
> However, it looks like locked xadd also has better performance:  on
> my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
> than locked xadd, so that pretty much settles it unless you think
> there'd be a dramatic difference on an AMD system.

Indeed, the fences are usually slower than locked RMWs, in particular,
if you do not need to add an instruction. I originally missed that
amazing stunt the GCC pulled off by replacing the branch with carry
flag magic. It seems that two twisted minds have found each other
here :)

One of my concerns was adding a branch in here... so that is settled,
and if everybody else feels like this is easier to reason about...
go ahead :) (I'll keep my itch to myself then.)

Stephan
-- 
Stephan Diestelhorst, AMD Operating System Research Center
stephan.diestelhorst@amd.com, Tel. +49 (0)351 448 356 719

Advanced Micro Devices GmbH
Einsteinring 24
85609 Aschheim
Germany

Geschaeftsfuehrer: Alberto Bozzo;
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632, WEEE-Reg-Nr: DE 12919551 


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-10-10 11:00                             ` Stephan Diestelhorst
@ 2011-10-10 14:01                               ` Stephan Diestelhorst
  -1 siblings, 0 replies; 42+ messages in thread
From: Stephan Diestelhorst @ 2011-10-10 14:01 UTC (permalink / raw)
  To: xen-devel
  Cc: Jeremy Fitzhardinge, Jeremy Fitzhardinge, Andi Kleen, Nick Piggin,
	KVM, Peter Zijlstra, the arch/x86 maintainers,
	Linux Kernel Mailing List, Marcelo Tosatti, Avi Kivity, Jan Beulich,
	H. Peter Anvin, Linus Torvalds, Ingo Molnar

[-- Attachment #1: Type: text/plain, Size: 3111 bytes --]

On Monday 10 October 2011, 07:00:50 Stephan Diestelhorst wrote:
> On Thursday 06 October 2011, 13:40:01 Jeremy Fitzhardinge wrote:
> > On 10/06/2011 07:04 AM, Stephan Diestelhorst wrote:
> > > On Wednesday 28 September 2011, 14:49:56 Linus Torvalds wrote:
> > >> Which certainly should *work*, but from a conceptual standpoint, isn't
> > >> it just *much* nicer to say "we actually know *exactly* what the upper
> > >> bits were".
> > > Well, we really do NOT want atomicity here. What we really rather want
> > > is sequentiality: free the lock, make the update visible, and THEN
> > > check if someone has gone sleeping on it.
> > >
> > > Atomicity only conveniently enforces that the three do not happen in a
> > > different order (with the store becoming visible after the checking
> > > load).
> > >
> > > This does not have to be atomic, since spurious wakeups are not a
> > > problem, in particular not with the FIFO-ness of ticket locks.
> > >
> > > For that the fence, additional atomic etc. would be IMHO much cleaner
> > > than the crazy overflow logic.
> > 
> > All things being equal I'd prefer lock-xadd just because it's easier to
> > analyze the concurrency for, crazy overflow tests or no.  But if
> > add+mfence turned out to be a performance win, then that would obviously
> > tip the scales.
> > 
> > However, it looks like locked xadd also has better performance:  on
> > my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
> > than locked xadd, so that pretty much settles it unless you think
> > there'd be a dramatic difference on an AMD system.
> 
> Indeed, the fences are usually slower than locked RMWs, in particular,
> if you do not need to add an instruction. I originally missed that
> amazing stunt the GCC pulled off by replacing the branch with carry
> flag magic. It seems that two twisted minds have found each other
> here :)
> 
> One of my concerns was adding a branch in here... so that is settled,
> and if everybody else feels like this is easier to reason about...
> go ahead :) (I'll keep my itch to myself then.)

Just that I can't... if performance is a concern, adding the LOCK
prefix to the addb outperforms the xadd significantly:

With mean over 100 runs... this comes out as follows
(on my Phenom II)

locked-add   0.648500 s   80%
add-rmwtos   0.707700 s   88%
locked-xadd  0.807600 s  100%
add-barrier  1.270000 s  157%

With huge read contention added in (as cheaply as possible):
locked-add.openmp  0.640700 s  84%
add-rmwtos.openmp  0.658400 s  86%
locked-xadd.openmp 0.763800 s 100%

And the numbers for write contention are crazy, but also favour the
locked-add version:
locked-add.openmp  0.571400 s  71%
add-rmwtos.openmp  0.699900 s  87%
locked-xadd.openmp 0.800200 s 100%

Stephan
-- 
Stephan Diestelhorst, AMD Operating System Research Center
stephan.diestelhorst@amd.com, Tel. +49 (0)351 448 356 719

Advanced Micro Devices GmbH
Einsteinring 24
85609 Aschheim
Germany

Geschaeftsfuehrer: Alberto Bozzo;
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632, WEEE-Reg-Nr: DE 12919551 

[-- Attachment #2: add-rmwtos.c --]
[-- Type: text/x-csrc, Size: 341 bytes --]

#include <stdio.h>
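
/*
 * add + locked-RMW-to-stack variant: plain byte add, then
 * "lock or $0x0,(%rsp)" as a full barrier in place of mfence, then a
 * read of the flag byte.
 */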

struct {
	unsigned char flag;
	unsigned char val;
} l;

int main(int argc, char **argv)
{
	int i;

	{
		{
			for (i = 0; i < 100000000; i++) {
				l.val += 2;
				asm volatile("lock or $0x0,(%%rsp)" : : : "memory");
				if (l.flag)
					break;
				asm volatile("" : : : "memory");
			}
			l.flag = 1;
		}
	}
	return 0;
}

[-- Attachment #3: add-rmwtos.openmp.c --]
[-- Type: text/x-csrc, Size: 531 bytes --]

#include <stdio.h>

struct {
	unsigned char flag;
	unsigned char val;
} l;

int main(int argc, char **argv)
{
	int i;

#   pragma omp sections 
	{
#       pragma omp section
		{
			for (i = 0; i < 100000000; i++) {
				l.val += 2;
				asm volatile("lock or $0x0,(%%rsp)" : : : "memory");
				if (l.flag)
					break;
				asm volatile("" : : : "memory");
			}
			l.flag = 1;
		}
#       pragma omp section
		while(!l.flag)
			asm volatile("":::"memory");
			//asm volatile("lock orb $0x0, %0"::"m"(l.flag):"memory");
	}
	return 0;
}

[-- Attachment #4: locked-add.c --]
[-- Type: text/x-csrc, Size: 339 bytes --]

#include <stdio.h>
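
/*
 * lock addb variant: the locked byte add is itself a full barrier, so
 * the following read of the flag byte needs no separate fence.
 */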

struct {
	unsigned char flag;
	unsigned char val;
} l;

int main(int argc, char **argv)
{
	int i;
	{
		{
			for (i = 0; i < 100000000; i++) {
				asm volatile("lock addb %1, %0":"+m"(l.val):"r"((char)2):"memory");
				if (l.flag)
					break;
				asm volatile("" : : : "memory");
			}
			l.flag = 1;
		}
	}
	return 0;
}

[-- Attachment #5: locked-xadd.openmp.c --]
[-- Type: text/x-csrc, Size: 667 bytes --]

#include <stdio.h>

union {
	struct {
		unsigned char val;
		unsigned char flag;
	};
	unsigned short lock;
} l = { 0,0 };

int main(int argc, char **argv)
{
	int i;
#   pragma omp sections 
	{
#       pragma omp section
	    {

			for (i = 0; i < 100000000; i++) {
				unsigned short inc = 2;
				if (l.val >= (0x100 - 2))
					inc += -1 << 8;
				asm volatile("lock; xadd %1,%0" : "+m" (l.lock), "+r" (inc) : );
				if (inc & 0x100)
					break;
				asm volatile("" : : : "memory");
			}
			l.flag = 1;
		}
#       pragma omp section
	    while(!l.flag)
		    asm volatile("":::"memory");
			//asm volatile("lock orb $0x0, %0"::"m"(l.flag):"memory");
	}
	return 0;
}

[-- Attachment #6: locked-add.openmp.c --]
[-- Type: text/x-csrc, Size: 529 bytes --]

#include <stdio.h>

struct {
	unsigned char flag;
	unsigned char val;
} l;

int main(int argc, char **argv)
{
	int i;
#   pragma omp sections 
	{
#       pragma omp section
		{
			for (i = 0; i < 100000000; i++) {
				asm volatile("lock addb %1, %0":"+m"(l.val):"r"((char)2):"memory");
				if (l.flag)
					break;
				asm volatile("" : : : "memory");
			}
			l.flag = 1;
		}
#       pragma omp section
		while(!l.flag)
			asm volatile("":::"memory");
			//asm volatile("lock orb $0x0, %0"::"m"(l.flag):"memory");
	}
	return 0;
}

[-- Attachment #7: locked-xadd.c --]
[-- Type: text/x-csrc, Size: 471 bytes --]

#include <stdio.h>

union {
	struct {
		unsigned char val;
		unsigned char flag;
	};
	unsigned short lock;
} l = { 0,0 };

int main(int argc, char **argv)
{
	int i;
	{
	    {

			for (i = 0; i < 100000000; i++) {
				unsigned short inc = 2;
				if (l.val >= (0x100 - 2))
					inc += -1 << 8;
				asm volatile("lock; xadd %1,%0" : "+m" (l.lock), "+r" (inc) : );
				if (inc & 0x100)
					break;
				asm volatile("" : : : "memory");
			}
			l.flag = 1;
		}
	}
	return 0;
}

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-10-10 14:01                               ` Stephan Diestelhorst
  (?)
@ 2011-10-10 19:44                               ` Jeremy Fitzhardinge
  -1 siblings, 0 replies; 42+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-10 19:44 UTC (permalink / raw)
  To: Stephan Diestelhorst
  Cc: xen-devel, Jeremy Fitzhardinge, Andi Kleen, Nick Piggin, KVM,
	Peter Zijlstra, the arch/x86 maintainers, Linux Kernel Mailing List,
	Marcelo Tosatti, Avi Kivity, Jan Beulich, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar

On 10/10/2011 07:01 AM, Stephan Diestelhorst wrote:
> On Monday 10 October 2011, 07:00:50 Stephan Diestelhorst wrote:
>> On Thursday 06 October 2011, 13:40:01 Jeremy Fitzhardinge wrote:
>>> On 10/06/2011 07:04 AM, Stephan Diestelhorst wrote:
>>>> On Wednesday 28 September 2011, 14:49:56 Linus Torvalds wrote:
>>>>> Which certainly should *work*, but from a conceptual standpoint, isn't
>>>>> it just *much* nicer to say "we actually know *exactly* what the upper
>>>>> bits were".
>>>> Well, we really do NOT want atomicity here. What we really rather want
>>>> is sequentiality: free the lock, make the update visible, and THEN
>>>> check if someone has gone sleeping on it.
>>>>
>>>> Atomicity only conveniently enforces that the three do not happen in a
>>>> different order (with the store becoming visible after the checking
>>>> load).
>>>>
>>>> This does not have to be atomic, since spurious wakeups are not a
>>>> problem, in particular not with the FIFO-ness of ticket locks.
>>>>
>>>> For that the fence, additional atomic etc. would be IMHO much cleaner
>>>> than the crazy overflow logic.
>>> All things being equal I'd prefer lock-xadd just because it's easier to
>>> analyze the concurrency for, crazy overflow tests or no.  But if
>>> add+mfence turned out to be a performance win, then that would obviously
>>> tip the scales.
>>>
>>> However, it looks like locked xadd also has better performance:  on
>>> my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
>>> than locked xadd, so that pretty much settles it unless you think
>>> there'd be a dramatic difference on an AMD system.
>> Indeed, the fences are usually slower than locked RMWs, in particular,
>> if you do not need to add an instruction. I originally missed that
>> amazing stunt the GCC pulled off by replacing the branch with carry
>> flag magic. It seems that two twisted minds have found each other
>> here :)
>>
>> One of my concerns was adding a branch in here... so that is settled,
>> and if everybody else feels like this is easier to reason about...
>> go ahead :) (I'll keep my itch to myself then.)
> Just that I can't... if performance is a concern, adding the LOCK
> prefix to the addb outperforms the xadd significantly:

Hm, yes.  So using the lock prefix on add instead of the mfence?  Hm.

    J

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
  2011-10-10  7:32                             ` Ingo Molnar
@ 2011-10-10 19:51                               ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 42+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-10 19:51 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Stephan Diestelhorst, Linus Torvalds, H. Peter Anvin,
	Jan Beulich, Jeremy Fitzhardinge, Andi Kleen, Peter Zijlstra,
	Nick Piggin, the arch/x86 maintainers, xen-devel, Avi Kivity,
	Marcelo Tosatti, KVM, Linux Kernel Mailing List,
	Konrad Rzeszutek Wilk

On 10/10/2011 12:32 AM, Ingo Molnar wrote:
> * Jeremy Fitzhardinge <jeremy@goop.org> wrote:
>
>> On 10/06/2011 10:40 AM, Jeremy Fitzhardinge wrote:
>>> However, it looks like locked xadd also has better performance:  on
>>> my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
>>> than locked xadd, so that pretty much settles it unless you think
>>> there'd be a dramatic difference on an AMD system.
>> Konrad measures add+mfence is about 65% slower on AMD Phenom as well.
> xadd also results in smaller/tighter code, right?

Not particularly, mostly because of the overflow-into-the-high-part
compensation.  But it's only a couple of extra instructions, and no
conditionals, so I don't think it would have any concrete effect.

But, as Stephan points out, perhaps locked add is preferable to locked
xadd, since it has the same barrier effect as mfence but
(significantly!) better performance than either mfence or locked xadd...

    J

^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread, other threads:[~2011-10-10 19:51 UTC | newest]

Thread overview: 42+ messages
2011-09-15  0:31 [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks Jeremy Fitzhardinge
2011-09-15  0:31 ` [PATCH 01/10] x86/ticketlocks: remove obsolete comment Jeremy Fitzhardinge
2011-09-15  0:31 ` [PATCH 02/10] x86/spinlocks: replace pv spinlocks with pv ticketlocks Jeremy Fitzhardinge
2011-09-15  0:31 ` [PATCH 03/10] x86/ticketlock: don't inline _spin_unlock when using paravirt spinlocks Jeremy Fitzhardinge
2011-09-15  0:31 ` [PATCH 04/10] x86/ticketlock: collapse a layer of functions Jeremy Fitzhardinge
2011-09-15  0:31 ` [PATCH 05/10] xen/pvticketlock: Xen implementation for PV ticket locks Jeremy Fitzhardinge
2011-09-15  0:31 ` [PATCH 06/10] x86/pvticketlock: use callee-save for lock_spinning Jeremy Fitzhardinge
2011-09-15  0:31 ` [PATCH 07/10] x86/ticketlocks: when paravirtualizing ticket locks, increment by 2 Jeremy Fitzhardinge
2011-09-15  0:31   ` Jeremy Fitzhardinge
2011-09-15  0:31 ` [PATCH 08/10] x86/ticketlock: add slowpath logic Jeremy Fitzhardinge
2011-09-15  0:31 ` [PATCH 09/10] xen/pvticketlock: allow interrupts to be enabled while blocking Jeremy Fitzhardinge
2011-09-15  0:31 ` [PATCH 10/10] xen: enable PV ticketlocks on HVM Xen Jeremy Fitzhardinge
2011-09-27  9:34 ` [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks Stephan Diestelhorst
2011-09-27  9:34   ` Stephan Diestelhorst
2011-09-27  9:34   ` Stephan Diestelhorst
2011-09-27 16:44   ` [Xen-devel] " Jeremy Fitzhardinge
2011-09-27 16:44     ` Jeremy Fitzhardinge
2011-09-28 13:58     ` [Xen-devel] " Stephan Diestelhorst
2011-09-28 16:44       ` Jeremy Fitzhardinge
2011-09-28 18:13         ` Stephan Diestelhorst
2011-09-28 15:38     ` Linus Torvalds
2011-09-28 15:55       ` Jan Beulich
2011-09-28 15:55         ` Jan Beulich
2011-09-28 16:10         ` Linus Torvalds
2011-09-28 16:47           ` Jeremy Fitzhardinge
2011-09-28 17:22             ` Linus Torvalds
2011-09-28 17:24               ` H. Peter Anvin
2011-09-28 17:50                 ` Jeremy Fitzhardinge
2011-09-28 18:08                   ` Stephan Diestelhorst
2011-09-28 18:27                     ` Jeremy Fitzhardinge
2011-09-28 18:49                     ` Linus Torvalds
2011-09-28 19:06                       ` Jeremy Fitzhardinge
2011-10-06 14:04                       ` Stephan Diestelhorst
2011-10-06 17:40                         ` Jeremy Fitzhardinge
2011-10-06 18:09                           ` Jeremy Fitzhardinge
2011-10-10  7:32                             ` Ingo Molnar
2011-10-10 19:51                               ` Jeremy Fitzhardinge
2011-10-10 11:00                           ` Stephan Diestelhorst
2011-10-10 11:00                             ` Stephan Diestelhorst
2011-10-10 14:01                             ` Stephan Diestelhorst
2011-10-10 14:01                               ` Stephan Diestelhorst
2011-10-10 19:44                               ` Jeremy Fitzhardinge
