* [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks
@ 2011-09-02  0:54 Jeremy Fitzhardinge
  2011-09-02  0:54 ` [PATCH 01/13] x86/spinlocks: replace pv spinlocks with pv ticketlocks Jeremy Fitzhardinge
                   ` (14 more replies)
  0 siblings, 15 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02  0:54 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism.

Ticket locks have an inherent problem in a virtualized environment, because
the vCPUs are scheduled rather than running concurrently (ignoring
gang scheduled vCPUs).  This can result in catastrophic performance
collapses when the vCPU scheduler doesn't schedule the correct "next"
vCPU, and ends up scheduling a vCPU which burns its entire timeslice
spinning.  (Note that this is not the same problem as lock-holder
preemption, which this series also addresses; that's also a problem,
but not catastrophic).

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

Currently we deal with this by having PV spinlocks, which add a layer
of indirection in front of all the spinlock functions and define a
completely new implementation for Xen (and for other pvops users, but
there are none at present).

PV ticketlocks keep the existing ticketlock implementation
(fastpath) as-is, but add a couple of pvops for the slow paths:

- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
  iterations, then call out to the __ticket_lock_spinning() pvop,
  which allows a backend to block the vCPU rather than spinning.  This
  pvop can set the lock into "slowpath state".

- When releasing a lock, if it is in "slowpath state", then call
  __ticket_unlock_kick() to kick the next vCPU in line awake.  If the
  lock is no longer in contention, it also clears the slowpath flag
  (see the sketch below).
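
To make the unlock side concrete, here is a minimal sketch of what such
a slowpath helper can look like, using names from later in this series
(__ticket_unlock_kick, TICKET_SLOWPATH_FLAG); for simplicity it keeps
the flag only in the lock tail, and it illustrates the behaviour
described above rather than reproducing the exact code from patch 6:

	/* Sketch: clear the flag when the queue is empty, else kick. */
	static void __ticket_unlock_slowpath(struct arch_spinlock *lock)
	{
		arch_spinlock_t old, new;

		old = ACCESS_ONCE(*lock);
		new = old;

		/* head has already been advanced by the unlock fastpath */
		if (old.tickets.head == (old.tickets.tail & ~TICKET_SLOWPATH_FLAG)) {
			/* nobody waiting: drop the slowpath flag; a racing
			   locker will simply set it again from its own
			   slowpath, so a failed cmpxchg is harmless */
			new.tickets.tail &= ~TICKET_SLOWPATH_FLAG;
			cmpxchg(&lock->head_tail, old.head_tail, new.head_tail);
		} else {
			/* still contended: wake the next ticket holder */
			__ticket_unlock_kick(lock, old.tickets.head);
		}
	}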

The "slowpath state" is stored in the LSB of the within the lock
ticket.  This has the effect of reducing the max number of CPUs by
half (so, a "small ticket" can deal with 128 CPUs, and "large ticket"
32768).
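
Concretely, the ticket type selection and increment end up looking
something like this (a condensed sketch of the spinlock_types.h changes
made in patches 5 and 6; the exact flag definition there may differ):

	#ifdef CONFIG_PARAVIRT_SPINLOCKS
	#define __TICKET_LOCK_INC	2	/* tickets step by 2, freeing bit 0 */
	#else
	#define __TICKET_LOCK_INC	1
	#endif

	#if (CONFIG_NR_CPUS < (256 / __TICKET_LOCK_INC))
	typedef u8  __ticket_t;			/* "small ticket": 128 CPUs with PV */
	typedef u16 __ticketpair_t;
	#else
	typedef u16 __ticket_t;			/* "large ticket": 32768 CPUs with PV */
	typedef u32 __ticketpair_t;
	#endif

	#define TICKET_LOCK_INC		((__ticket_t)__TICKET_LOCK_INC)
	#define TICKET_SLOWPATH_FLAG	((__ticket_t)1)	/* the LSB used as the slowpath flag */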

This series provides a Xen implementation, but it should be
straightforward to add a KVM implementation as well.

Overall, it results in a large reduction in code, makes the native
and virtualized cases closer, and removes a layer of indirection
around all the spinlock functions.  The downside is that it adds a
few instructions to the fastpath in the native case.

Most of the heavy lifting code is in the slowpaths, but it does have
an effect on the fastpath code.  The inner part of ticket lock code
becomes:
	inc = xadd(&lock->tickets, inc);
	inc.tail &= ~TICKET_SLOWPATH_FLAG;

	for (;;) {
		unsigned count = SPIN_THRESHOLD;

		do {
			if (inc.head == inc.tail)
				goto out;
			cpu_relax();
			inc.head = ACCESS_ONCE(lock->tickets.head);
		} while (--count);
		__ticket_lock_spinning(lock, inc.tail);
	}

which results in:

      pushq   %rbp
      movq    %rsp,%rbp

      movl    $512, %ecx
      lock; xaddw %cx, (%rdi)	# claim ticket

      movzbl  %ch, %edx
      movl    $2048, %eax	# set spin count
      andl    $-2, %edx		# mask off TICKET_SLOWPATH_FLAG
      movzbl  %dl, %esi

1:    cmpb    %dl, %cl		# compare head and tail
      je      2f   		# got it!

      ### BEGIN SLOWPATH
      rep; nop			# pause
      decl    %eax		# dec count
      movb    (%rdi), %cl	# re-fetch head
      jne     1b      		# try again

      call    *pv_lock_ops	# call __ticket_lock_spinning
      movl    $2048, %eax	# reload spin count
      jmp     1b
      ### END SLOWPATH

2:    popq    %rbp
      ret

With CONFIG_PARAVIRT_SPINLOCKS=n, the same code generates asm
identical to the current ticketlock code:

	pushq   %rbp
        movq    %rsp, %rbp

        movl    $256, %eax
	lock; xaddw %ax, (%rdi)

	movzbl  %ah, %edx

1:	cmpb    %dl, %al	# compare head and tail
	je	2f   		# got it!

	### BEGIN SLOWPATH
	rep; nop		# pause
	movb    (%rdi), %al	# reload head
	jmp	1b		# loop
	### END SLOWPATH

2:	popq	%rbp
	ret

so the pv ticketlocks add 3 extra instructions to the fastpath, one of
which really doesn't need to be there (setting up the arg for the
slowpath function):
      movl    $2048, %eax	# set spin count
      andl    $-2, %edx		# mask off SLOW_PATH_FLAG
      movzbl  %dl, %esi		# set up __ticket_lock_spinning arg

The unlock code is very straightforward:
	__ticket_unlock_release(lock);
	if (unlikely(__ticket_in_slowpath(lock)))
		__ticket_unlock_slowpath(lock);
which generates:
      addb $2, (%rdi)
      testb    $1, 1(%rdi)
      je       1f
      call     __ticket_unlock_slowpath
1:

which, while straightforward, is more complex than the original "incb (%rdi)".
(I'm not sure whether it's worth inlining this or not.)

Thoughts? Comments? Suggestions?

Thanks,
	J

Jeremy Fitzhardinge (12):
  x86/spinlocks: replace pv spinlocks with pv ticketlocks
  x86/ticketlock: collapse a layer of functions
  xen/pvticketlock: Xen implementation for PV ticket locks
  x86/pvticketlock: use callee-save for lock_spinning
  x86/ticketlocks: when paravirtualizing ticket locks, increment by 2
  x86/ticketlock: add slowpath logic
  x86/ticketlocks: tidy up __ticket_unlock_kick()
  xen/pvticketlock: disable interrupts while blocking
  x86/pvticketlocks: we only need to kick if there's waiters
  xen/pvticket: allow interrupts to be enabled while blocking
  x86/pvticketlock: make sure unlock_kick pvop call is inlined
  x86/pvticketlock: use __ticket_t for pvop args

Srivatsa Vaddagiri (1):
  x86/ticketlock: only do kick after doing unlock

 arch/x86/include/asm/paravirt.h       |   30 +---
 arch/x86/include/asm/paravirt_types.h |    8 +-
 arch/x86/include/asm/spinlock.h       |  118 ++++++++-----
 arch/x86/include/asm/spinlock_types.h |   16 ++-
 arch/x86/kernel/paravirt-spinlocks.c  |   40 +++--
 arch/x86/xen/spinlock.c               |  305 ++++++++-------------------------
 6 files changed, 186 insertions(+), 331 deletions(-)

-- 
1.7.6



* [PATCH 01/13] x86/spinlocks: replace pv spinlocks with pv ticketlocks
  2011-09-02  0:54 [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks Jeremy Fitzhardinge
@ 2011-09-02  0:54 ` Jeremy Fitzhardinge
  2011-09-02  0:54   ` Jeremy Fitzhardinge
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02  0:54 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow path (long spin on lock, unlocking
a contended lock).

Ticket locks have a number of nice properties, but they also have some
surprising behaviours in virtual environments.  They enforce a strict
FIFO ordering on cpus trying to take a lock; however, if the hypervisor
scheduler does not schedule the cpus in the correct order, the system can
waste a huge amount of time spinning until the next cpu can take the lock.

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

To address this, we add two hooks:
 - __ticket_lock_spinning(), which is called after the cpu has been
   spinning on the lock for a significant number of iterations but has
   failed to take the lock (presumably because the cpu holding the lock
   has been descheduled).  The lock_spinning pvop is expected to block
   the cpu until it has been kicked by the current lock holder.
 - __ticket_unlock_kick(), which is called when releasing a contended
   lock (there are more cpus with tail tickets); it checks whether the
   next cpu is blocked and wakes it if so.

When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub
functions causes all the extra code to go away.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/paravirt.h       |   30 ++--------------
 arch/x86/include/asm/paravirt_types.h |    8 +---
 arch/x86/include/asm/spinlock.h       |   59 ++++++++++++++++++++++++++-------
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +-------
 arch/x86/xen/spinlock.c               |    7 +++-
 5 files changed, 61 insertions(+), 58 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index ebbc4d8..d88a813 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -741,36 +741,14 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
 
-static inline int arch_spin_is_locked(struct arch_spinlock *lock)
+static inline void __ticket_lock_spinning(struct arch_spinlock *lock, unsigned ticket)
 {
-	return PVOP_CALL1(int, pv_lock_ops.spin_is_locked, lock);
+	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static inline int arch_spin_is_contended(struct arch_spinlock *lock)
+static inline void ____ticket_unlock_kick(struct arch_spinlock *lock, unsigned ticket)
 {
-	return PVOP_CALL1(int, pv_lock_ops.spin_is_contended, lock);
-}
-#define arch_spin_is_contended	arch_spin_is_contended
-
-static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
-{
-	PVOP_VCALL1(pv_lock_ops.spin_lock, lock);
-}
-
-static __always_inline void arch_spin_lock_flags(struct arch_spinlock *lock,
-						  unsigned long flags)
-{
-	PVOP_VCALL2(pv_lock_ops.spin_lock_flags, lock, flags);
-}
-
-static __always_inline int arch_spin_trylock(struct arch_spinlock *lock)
-{
-	return PVOP_CALL1(int, pv_lock_ops.spin_trylock, lock);
-}
-
-static __always_inline void arch_spin_unlock(struct arch_spinlock *lock)
-{
-	PVOP_VCALL1(pv_lock_ops.spin_unlock, lock);
+	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
 
 #endif
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 8288509..e9101c3 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -321,12 +321,8 @@ struct pv_mmu_ops {
 
 struct arch_spinlock;
 struct pv_lock_ops {
-	int (*spin_is_locked)(struct arch_spinlock *lock);
-	int (*spin_is_contended)(struct arch_spinlock *lock);
-	void (*spin_lock)(struct arch_spinlock *lock);
-	void (*spin_lock_flags)(struct arch_spinlock *lock, unsigned long flags);
-	int (*spin_trylock)(struct arch_spinlock *lock);
-	void (*spin_unlock)(struct arch_spinlock *lock);
+	void (*lock_spinning)(struct arch_spinlock *lock, unsigned ticket);
+	void (*unlock_kick)(struct arch_spinlock *lock, unsigned ticket);
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 3549e6c..f5d9236 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -38,6 +38,32 @@
 # define UNLOCK_LOCK_PREFIX
 #endif
 
+/* How long a lock should spin before we consider blocking */
+#define SPIN_THRESHOLD	(1 << 11)
+
+#ifndef CONFIG_PARAVIRT_SPINLOCKS
+
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, unsigned ticket)
+{
+}
+
+static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock, unsigned ticket)
+{
+}
+
+#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
+
+
+/* 
+ * If a spinlock has someone waiting on it, then kick the appropriate
+ * waiting cpu.
+ */
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
+{
+	if (unlikely(lock->tickets.tail != next))
+		____ticket_unlock_kick(lock, next);
+}
+
 /*
  * Ticket locks are conceptually two parts, one indicating the current head of
  * the queue, and the other indicating the current tail. The lock is acquired
@@ -55,19 +81,24 @@
  * save some instructions and make the code more elegant. There really isn't
  * much between them in performance though, especially as locks are out of line.
  */
-static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
+static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 {
 	register struct __raw_tickets inc = { .tail = 1 };
 
 	inc = xadd(&lock->tickets, inc);
 
 	for (;;) {
-		if (inc.head == inc.tail)
-			break;
-		cpu_relax();
-		inc.head = ACCESS_ONCE(lock->tickets.head);
+		unsigned count = SPIN_THRESHOLD;
+
+		do {
+			if (inc.head == inc.tail)
+				goto out;
+			cpu_relax();
+			inc.head = ACCESS_ONCE(lock->tickets.head);
+		} while (--count);
+		__ticket_lock_spinning(lock, inc.tail);
 	}
-	barrier();		/* make sure nothing creeps before the lock is taken */
+out:	barrier();		/* make sure nothing creeps before the lock is taken */
 }
 
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
@@ -85,7 +116,7 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 }
 
 #if (NR_CPUS < 256)
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
+static __always_inline void __ticket_unlock_release(arch_spinlock_t *lock)
 {
 	asm volatile(UNLOCK_LOCK_PREFIX "incb %0"
 		     : "+m" (lock->head_tail)
@@ -93,7 +124,7 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 		     : "memory", "cc");
 }
 #else
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
+static __always_inline void __ticket_unlock_release(arch_spinlock_t *lock)
 {
 	asm volatile(UNLOCK_LOCK_PREFIX "incw %0"
 		     : "+m" (lock->head_tail)
@@ -102,6 +133,14 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 }
 #endif
 
+static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
+{
+	__ticket_t next = lock->tickets.head + 1;
+
+	__ticket_unlock_release(lock);
+	__ticket_unlock_kick(lock, next);
+}
+
 static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
@@ -116,8 +155,6 @@ static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
 	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
 }
 
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
-
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
 	return __ticket_spin_is_locked(lock);
@@ -150,8 +187,6 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 	arch_spin_lock(lock);
 }
 
-#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
-
 static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
 	while (arch_spin_is_locked(lock))
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 676b8c7..c2e010e 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -7,21 +7,10 @@
 
 #include <asm/paravirt.h>
 
-static inline void
-default_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
-{
-	arch_spin_lock(lock);
-}
-
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-	.spin_is_locked = __ticket_spin_is_locked,
-	.spin_is_contended = __ticket_spin_is_contended,
-
-	.spin_lock = __ticket_spin_lock,
-	.spin_lock_flags = default_spin_lock_flags,
-	.spin_trylock = __ticket_spin_trylock,
-	.spin_unlock = __ticket_spin_unlock,
+	.lock_spinning = paravirt_nop,
+	.unlock_kick = paravirt_nop,
 #endif
 };
 EXPORT_SYMBOL(pv_lock_ops);
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index cc9b1e1..23af06a 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -121,6 +121,9 @@ struct xen_spinlock {
 	unsigned short spinners;	/* count of waiting cpus */
 };
 
+static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+
+#if 0
 static int xen_spin_is_locked(struct arch_spinlock *lock)
 {
 	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
@@ -148,7 +151,6 @@ static int xen_spin_trylock(struct arch_spinlock *lock)
 	return old == 0;
 }
 
-static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
 static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
 
 /*
@@ -338,6 +340,7 @@ static void xen_spin_unlock(struct arch_spinlock *lock)
 	if (unlikely(xl->spinners))
 		xen_spin_unlock_slow(xl);
 }
+#endif
 
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
@@ -373,12 +376,14 @@ void xen_uninit_lock_cpu(int cpu)
 
 void __init xen_init_spinlocks(void)
 {
+#if 0
 	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
 	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
 	pv_lock_ops.spin_lock = xen_spin_lock;
 	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
 	pv_lock_ops.spin_trylock = xen_spin_trylock;
 	pv_lock_ops.spin_unlock = xen_spin_unlock;
+#endif
 }
 
 #ifdef CONFIG_XEN_DEBUG_FS
-- 
1.7.6



* [PATCH 02/13] x86/ticketlock: collapse a layer of functions
  2011-09-02  0:54 [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks Jeremy Fitzhardinge
@ 2011-09-02  0:54   ` Jeremy Fitzhardinge
  2011-09-02  0:54   ` Jeremy Fitzhardinge
                     ` (13 subsequent siblings)
  14 siblings, 0 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02  0:54 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Now that the paravirtualization layer doesn't exist at the spinlock
level any more, we can collapse the __ticket_ functions into the arch_
functions.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h |   35 +++++------------------------------
 1 files changed, 5 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index f5d9236..c1d9617 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -81,7 +81,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __t
  * save some instructions and make the code more elegant. There really isn't
  * much between them in performance though, especially as locks are out of line.
  */
-static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
+static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
 	register struct __raw_tickets inc = { .tail = 1 };
 
@@ -101,7 +101,7 @@ static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 out:	barrier();		/* make sure nothing creeps before the lock is taken */
 }
 
-static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
+static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 {
 	arch_spinlock_t old, new;
 
@@ -133,7 +133,7 @@ static __always_inline void __ticket_unlock_release(arch_spinlock_t *lock)
 }
 #endif
 
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
+static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 	__ticket_t next = lock->tickets.head + 1;
 
@@ -141,46 +141,21 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 	__ticket_unlock_kick(lock, next);
 }
 
-static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
+static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
 	return !!(tmp.tail ^ tmp.head);
 }
 
-static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
+static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
 	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
 }
-
-static inline int arch_spin_is_locked(arch_spinlock_t *lock)
-{
-	return __ticket_spin_is_locked(lock);
-}
-
-static inline int arch_spin_is_contended(arch_spinlock_t *lock)
-{
-	return __ticket_spin_is_contended(lock);
-}
 #define arch_spin_is_contended	arch_spin_is_contended
 
-static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-	__ticket_spin_lock(lock);
-}
-
-static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
-{
-	return __ticket_spin_trylock(lock);
-}
-
-static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-	__ticket_spin_unlock(lock);
-}
-
 static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 						  unsigned long flags)
 {
-- 
1.7.6



* [PATCH 03/13] xen/pvticketlock: Xen implementation for PV ticket locks
  2011-09-02  0:54 [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks Jeremy Fitzhardinge
@ 2011-09-02  0:54   ` Jeremy Fitzhardinge
  2011-09-02  0:54   ` Jeremy Fitzhardinge
                     ` (13 subsequent siblings)
  14 siblings, 0 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02  0:54 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Replace the old Xen implementation of PV spinlocks with an implementation
of xen_lock_spinning and xen_unlock_kick.

xen_lock_spinning simply registers the cpu in its entry in lock_waiting,
adds itself to the waiting_cpus set, and blocks on an event channel
until the channel becomes pending.

xen_unlock_kick searches the cpus in waiting_cpus for the one waiting
on this lock with the next ticket, if any.  If found, it kicks that
cpu by making its event channel pending, which wakes it up.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/xen/spinlock.c |  281 ++++++-----------------------------------------
 1 files changed, 36 insertions(+), 245 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 23af06a..c1bd84c 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -19,32 +19,21 @@
 #ifdef CONFIG_XEN_DEBUG_FS
 static struct xen_spinlock_stats
 {
-	u64 taken;
 	u32 taken_slow;
-	u32 taken_slow_nested;
 	u32 taken_slow_pickup;
 	u32 taken_slow_spurious;
-	u32 taken_slow_irqenable;
 
-	u64 released;
 	u32 released_slow;
 	u32 released_slow_kicked;
 
 #define HISTO_BUCKETS	30
-	u32 histo_spin_total[HISTO_BUCKETS+1];
-	u32 histo_spin_spinning[HISTO_BUCKETS+1];
 	u32 histo_spin_blocked[HISTO_BUCKETS+1];
 
-	u64 time_total;
-	u64 time_spinning;
 	u64 time_blocked;
 } spinlock_stats;
 
 static u8 zero_stats;
 
-static unsigned lock_timeout = 1 << 10;
-#define TIMEOUT lock_timeout
-
 static inline void check_zero(void)
 {
 	if (unlikely(zero_stats)) {
@@ -73,22 +62,6 @@ static void __spin_time_accum(u64 delta, u32 *array)
 		array[HISTO_BUCKETS]++;
 }
 
-static inline void spin_time_accum_spinning(u64 start)
-{
-	u32 delta = xen_clocksource_read() - start;
-
-	__spin_time_accum(delta, spinlock_stats.histo_spin_spinning);
-	spinlock_stats.time_spinning += delta;
-}
-
-static inline void spin_time_accum_total(u64 start)
-{
-	u32 delta = xen_clocksource_read() - start;
-
-	__spin_time_accum(delta, spinlock_stats.histo_spin_total);
-	spinlock_stats.time_total += delta;
-}
-
 static inline void spin_time_accum_blocked(u64 start)
 {
 	u32 delta = xen_clocksource_read() - start;
@@ -105,214 +78,76 @@ static inline u64 spin_time_start(void)
 	return 0;
 }
 
-static inline void spin_time_accum_total(u64 start)
-{
-}
-static inline void spin_time_accum_spinning(u64 start)
-{
-}
 static inline void spin_time_accum_blocked(u64 start)
 {
 }
 #endif  /* CONFIG_XEN_DEBUG_FS */
 
-struct xen_spinlock {
-	unsigned char lock;		/* 0 -> free; 1 -> locked */
-	unsigned short spinners;	/* count of waiting cpus */
+struct xen_lock_waiting {
+	struct arch_spinlock *lock;
+	__ticket_t want;
 };
 
 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
+static cpumask_t waiting_cpus;
 
-#if 0
-static int xen_spin_is_locked(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	return xl->lock != 0;
-}
-
-static int xen_spin_is_contended(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	/* Not strictly true; this is only the count of contended
-	   lock-takers entering the slow path. */
-	return xl->spinners != 0;
-}
-
-static int xen_spin_trylock(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	u8 old = 1;
-
-	asm("xchgb %b0,%1"
-	    : "+q" (old), "+m" (xl->lock) : : "memory");
-
-	return old == 0;
-}
-
-static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
-
-/*
- * Mark a cpu as interested in a lock.  Returns the CPU's previous
- * lock of interest, in case we got preempted by an interrupt.
- */
-static inline struct xen_spinlock *spinning_lock(struct xen_spinlock *xl)
-{
-	struct xen_spinlock *prev;
-
-	prev = __this_cpu_read(lock_spinners);
-	__this_cpu_write(lock_spinners, xl);
-
-	wmb();			/* set lock of interest before count */
-
-	asm(LOCK_PREFIX " incw %0"
-	    : "+m" (xl->spinners) : : "memory");
-
-	return prev;
-}
-
-/*
- * Mark a cpu as no longer interested in a lock.  Restores previous
- * lock of interest (NULL for none).
- */
-static inline void unspinning_lock(struct xen_spinlock *xl, struct xen_spinlock *prev)
-{
-	asm(LOCK_PREFIX " decw %0"
-	    : "+m" (xl->spinners) : : "memory");
-	wmb();			/* decrement count before restoring lock */
-	__this_cpu_write(lock_spinners, prev);
-}
-
-static noinline int xen_spin_lock_slow(struct arch_spinlock *lock, bool irq_enable)
+static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
 {
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	struct xen_spinlock *prev;
 	int irq = __this_cpu_read(lock_kicker_irq);
-	int ret;
+	struct xen_lock_waiting *w = &__get_cpu_var(lock_waiting);
+	int cpu = smp_processor_id();
 	u64 start;
 
 	/* If kicker interrupts not initialized yet, just spin */
 	if (irq == -1)
-		return 0;
+		return;
 
 	start = spin_time_start();
 
-	/* announce we're spinning */
-	prev = spinning_lock(xl);
+	w->want = want;
+	w->lock = lock;
+
+	/* This uses set_bit, which is atomic and therefore a barrier */
+	cpumask_set_cpu(cpu, &waiting_cpus);
 
 	ADD_STATS(taken_slow, 1);
-	ADD_STATS(taken_slow_nested, prev != NULL);
-
-	do {
-		unsigned long flags;
-
-		/* clear pending */
-		xen_clear_irq_pending(irq);
-
-		/* check again make sure it didn't become free while
-		   we weren't looking  */
-		ret = xen_spin_trylock(lock);
-		if (ret) {
-			ADD_STATS(taken_slow_pickup, 1);
-
-			/*
-			 * If we interrupted another spinlock while it
-			 * was blocking, make sure it doesn't block
-			 * without rechecking the lock.
-			 */
-			if (prev != NULL)
-				xen_set_irq_pending(irq);
-			goto out;
-		}
 
-		flags = arch_local_save_flags();
-		if (irq_enable) {
-			ADD_STATS(taken_slow_irqenable, 1);
-			raw_local_irq_enable();
-		}
+	/* clear pending */
+	xen_clear_irq_pending(irq);
 
-		/*
-		 * Block until irq becomes pending.  If we're
-		 * interrupted at this point (after the trylock but
-		 * before entering the block), then the nested lock
-		 * handler guarantees that the irq will be left
-		 * pending if there's any chance the lock became free;
-		 * xen_poll_irq() returns immediately if the irq is
-		 * pending.
-		 */
-		xen_poll_irq(irq);
+	/* Only check lock once pending cleared */
+	barrier();
 
-		raw_local_irq_restore(flags);
+	/* check again make sure it didn't become free while
+	   we weren't looking  */
+	if (ACCESS_ONCE(lock->tickets.head) == want) {
+		ADD_STATS(taken_slow_pickup, 1);
+		goto out;
+	}
 
-		ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
-	} while (!xen_test_irq_pending(irq)); /* check for spurious wakeups */
+	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
+	xen_poll_irq(irq);
+	ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
 
 	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
-	unspinning_lock(xl, prev);
+	cpumask_clear_cpu(cpu, &waiting_cpus);
+	w->lock = NULL;
 	spin_time_accum_blocked(start);
-
-	return ret;
 }
 
-static inline void __xen_spin_lock(struct arch_spinlock *lock, bool irq_enable)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	unsigned timeout;
-	u8 oldval;
-	u64 start_spin;
-
-	ADD_STATS(taken, 1);
-
-	start_spin = spin_time_start();
-
-	do {
-		u64 start_spin_fast = spin_time_start();
-
-		timeout = TIMEOUT;
-
-		asm("1: xchgb %1,%0\n"
-		    "   testb %1,%1\n"
-		    "   jz 3f\n"
-		    "2: rep;nop\n"
-		    "   cmpb $0,%0\n"
-		    "   je 1b\n"
-		    "   dec %2\n"
-		    "   jnz 2b\n"
-		    "3:\n"
-		    : "+m" (xl->lock), "=q" (oldval), "+r" (timeout)
-		    : "1" (1)
-		    : "memory");
-
-		spin_time_accum_spinning(start_spin_fast);
-
-	} while (unlikely(oldval != 0 &&
-			  (TIMEOUT == ~0 || !xen_spin_lock_slow(lock, irq_enable))));
-
-	spin_time_accum_total(start_spin);
-}
-
-static void xen_spin_lock(struct arch_spinlock *lock)
-{
-	__xen_spin_lock(lock, false);
-}
-
-static void xen_spin_lock_flags(struct arch_spinlock *lock, unsigned long flags)
-{
-	__xen_spin_lock(lock, !raw_irqs_disabled_flags(flags));
-}
-
-static noinline void xen_spin_unlock_slow(struct xen_spinlock *xl)
+static void xen_unlock_kick(struct arch_spinlock *lock, unsigned next)
 {
 	int cpu;
 
 	ADD_STATS(released_slow, 1);
 
-	for_each_online_cpu(cpu) {
-		/* XXX should mix up next cpu selection */
-		if (per_cpu(lock_spinners, cpu) == xl) {
+	for_each_cpu(cpu, &waiting_cpus) {
+		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
+
+		if (w->lock == lock && w->want == next) {
 			ADD_STATS(released_slow_kicked, 1);
 			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
 			break;
@@ -320,28 +155,6 @@ static noinline void xen_spin_unlock_slow(struct xen_spinlock *xl)
 	}
 }
 
-static void xen_spin_unlock(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	ADD_STATS(released, 1);
-
-	smp_wmb();		/* make sure no writes get moved after unlock */
-	xl->lock = 0;		/* release lock */
-
-	/*
-	 * Make sure unlock happens before checking for waiting
-	 * spinners.  We need a strong barrier to enforce the
-	 * write-read ordering to different memory locations, as the
-	 * CPU makes no implied guarantees about their ordering.
-	 */
-	mb();
-
-	if (unlikely(xl->spinners))
-		xen_spin_unlock_slow(xl);
-}
-#endif
-
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
 	BUG();
@@ -376,14 +189,8 @@ void xen_uninit_lock_cpu(int cpu)
 
 void __init xen_init_spinlocks(void)
 {
-#if 0
-	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
-	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
-	pv_lock_ops.spin_lock = xen_spin_lock;
-	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
-	pv_lock_ops.spin_trylock = xen_spin_trylock;
-	pv_lock_ops.spin_unlock = xen_spin_unlock;
-#endif
+	pv_lock_ops.lock_spinning = xen_lock_spinning;
+	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
 #ifdef CONFIG_XEN_DEBUG_FS
@@ -401,37 +208,21 @@ static int __init xen_spinlock_debugfs(void)
 
 	debugfs_create_u8("zero_stats", 0644, d_spin_debug, &zero_stats);
 
-	debugfs_create_u32("timeout", 0644, d_spin_debug, &lock_timeout);
-
-	debugfs_create_u64("taken", 0444, d_spin_debug, &spinlock_stats.taken);
 	debugfs_create_u32("taken_slow", 0444, d_spin_debug,
 			   &spinlock_stats.taken_slow);
-	debugfs_create_u32("taken_slow_nested", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_nested);
 	debugfs_create_u32("taken_slow_pickup", 0444, d_spin_debug,
 			   &spinlock_stats.taken_slow_pickup);
 	debugfs_create_u32("taken_slow_spurious", 0444, d_spin_debug,
 			   &spinlock_stats.taken_slow_spurious);
-	debugfs_create_u32("taken_slow_irqenable", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_irqenable);
 
-	debugfs_create_u64("released", 0444, d_spin_debug, &spinlock_stats.released);
 	debugfs_create_u32("released_slow", 0444, d_spin_debug,
 			   &spinlock_stats.released_slow);
 	debugfs_create_u32("released_slow_kicked", 0444, d_spin_debug,
 			   &spinlock_stats.released_slow_kicked);
 
-	debugfs_create_u64("time_spinning", 0444, d_spin_debug,
-			   &spinlock_stats.time_spinning);
 	debugfs_create_u64("time_blocked", 0444, d_spin_debug,
 			   &spinlock_stats.time_blocked);
-	debugfs_create_u64("time_total", 0444, d_spin_debug,
-			   &spinlock_stats.time_total);
 
-	xen_debugfs_create_u32_array("histo_total", 0444, d_spin_debug,
-				     spinlock_stats.histo_spin_total, HISTO_BUCKETS + 1);
-	xen_debugfs_create_u32_array("histo_spinning", 0444, d_spin_debug,
-				     spinlock_stats.histo_spin_spinning, HISTO_BUCKETS + 1);
 	xen_debugfs_create_u32_array("histo_blocked", 0444, d_spin_debug,
 				     spinlock_stats.histo_spin_blocked, HISTO_BUCKETS + 1);
 
-- 
1.7.6



* [PATCH 04/13] x86/pvticketlock: use callee-save for lock_spinning
  2011-09-02  0:54 [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (2 preceding siblings ...)
  2011-09-02  0:54   ` Jeremy Fitzhardinge
@ 2011-09-02  0:54 ` Jeremy Fitzhardinge
  2011-09-02  0:54 ` [PATCH 05/13] x86/ticketlocks: when paravirtualizing ticket locks, increment by 2 Jeremy Fitzhardinge
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02  0:54 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Although the lock_spinning calls in the spinlock code are on the
uncommon path, their presence can cause the compiler to generate many
more register save/restores in the function pre/postamble, which is in
the fast path.  To avoid this, convert it to using the pvops callee-save
calling convention, which defers all the save/restores until the actual
function is called, keeping the fastpath clean.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/paravirt.h       |    2 +-
 arch/x86/include/asm/paravirt_types.h |    2 +-
 arch/x86/kernel/paravirt-spinlocks.c  |    2 +-
 arch/x86/xen/spinlock.c               |    3 ++-
 4 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index d88a813..622f3d6 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -743,7 +743,7 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 
 static inline void __ticket_lock_spinning(struct arch_spinlock *lock, unsigned ticket)
 {
-	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
+	PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
 static inline void ____ticket_unlock_kick(struct arch_spinlock *lock, unsigned ticket)
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index e9101c3..97faa3b 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -321,7 +321,7 @@ struct pv_mmu_ops {
 
 struct arch_spinlock;
 struct pv_lock_ops {
-	void (*lock_spinning)(struct arch_spinlock *lock, unsigned ticket);
+	struct paravirt_callee_save lock_spinning;
 	void (*unlock_kick)(struct arch_spinlock *lock, unsigned ticket);
 };
 
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index c2e010e..4251c1d 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -9,7 +9,7 @@
 
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-	.lock_spinning = paravirt_nop,
+	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
 	.unlock_kick = paravirt_nop,
 #endif
 };
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index c1bd84c..b133865 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -137,6 +137,7 @@ out:
 	w->lock = NULL;
 	spin_time_accum_blocked(start);
 }
+PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
 
 static void xen_unlock_kick(struct arch_spinlock *lock, unsigned next)
 {
@@ -189,7 +190,7 @@ void xen_uninit_lock_cpu(int cpu)
 
 void __init xen_init_spinlocks(void)
 {
-	pv_lock_ops.lock_spinning = xen_lock_spinning;
+	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
 	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
-- 
1.7.6



* [PATCH 05/13] x86/ticketlocks: when paravirtualizing ticket locks, increment by 2
  2011-09-02  0:54 [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (3 preceding siblings ...)
  2011-09-02  0:54 ` [PATCH 04/13] x86/pvticketlock: use callee-save for lock_spinning Jeremy Fitzhardinge
@ 2011-09-02  0:54 ` Jeremy Fitzhardinge
  2011-09-02  0:54 ` [PATCH 06/13] x86/ticketlock: add slowpath logic Jeremy Fitzhardinge
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02  0:54 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Increment ticket head/tail by 2 rather than 1 to leave the LSB free
to store an "is in slowpath state" bit.  This halves the number
of possible CPUs for a given ticket size, but this shouldn't matter
in practice - kernels built for 32k+ CPU systems are probably
specially built for the hardware rather than being generic distro
kernels.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h       |   16 ++++++++--------
 arch/x86/include/asm/spinlock_types.h |   10 +++++++++-
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index c1d9617..6028b01 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -83,7 +83,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __t
  */
 static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
-	register struct __raw_tickets inc = { .tail = 1 };
+	register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
 
 	inc = xadd(&lock->tickets, inc);
 
@@ -109,7 +109,7 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 	if (old.tickets.head != old.tickets.tail)
 		return 0;
 
-	new.head_tail = old.head_tail + (1 << TICKET_SHIFT);
+	new.head_tail = old.head_tail + (TICKET_LOCK_INC << TICKET_SHIFT);
 
 	/* cmpxchg is a full barrier, so nothing can move before it */
 	return cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) == old.head_tail;
@@ -118,24 +118,24 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 #if (NR_CPUS < 256)
 static __always_inline void __ticket_unlock_release(arch_spinlock_t *lock)
 {
-	asm volatile(UNLOCK_LOCK_PREFIX "incb %0"
+	asm volatile(UNLOCK_LOCK_PREFIX "addb %1, %0"
 		     : "+m" (lock->head_tail)
-		     :
+		     : "i" (TICKET_LOCK_INC)
 		     : "memory", "cc");
 }
 #else
 static __always_inline void __ticket_unlock_release(arch_spinlock_t *lock)
 {
-	asm volatile(UNLOCK_LOCK_PREFIX "incw %0"
+	asm volatile(UNLOCK_LOCK_PREFIX "addw %1, %0"
 		     : "+m" (lock->head_tail)
-		     :
+		     : "i" (TICKET_LOCK_INC)
 		     : "memory", "cc");
 }
 #endif
 
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-	__ticket_t next = lock->tickets.head + 1;
+	__ticket_t next = lock->tickets.head + TICKET_LOCK_INC;
 
 	__ticket_unlock_release(lock);
 	__ticket_unlock_kick(lock, next);
@@ -152,7 +152,7 @@ static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
-	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
+	return ((tmp.tail - tmp.head) & TICKET_MASK) > TICKET_LOCK_INC;
 }
 #define arch_spin_is_contended	arch_spin_is_contended
 
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 72e154e..0553c0b 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -7,7 +7,13 @@
 
 #include <linux/types.h>
 
-#if (CONFIG_NR_CPUS < 256)
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#define __TICKET_LOCK_INC	2
+#else
+#define __TICKET_LOCK_INC	1
+#endif
+
+#if (CONFIG_NR_CPUS < (256 / __TICKET_LOCK_INC))
 typedef u8  __ticket_t;
 typedef u16 __ticketpair_t;
 #else
@@ -15,6 +21,8 @@ typedef u16 __ticket_t;
 typedef u32 __ticketpair_t;
 #endif
 
+#define TICKET_LOCK_INC	((__ticket_t)__TICKET_LOCK_INC)
+
 #define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
 #define TICKET_MASK	((__ticket_t)((1 << TICKET_SHIFT) - 1))
 
-- 
1.7.6


^ permalink raw reply related	[flat|nested] 86+ messages in thread

* [PATCH 06/13] x86/ticketlock: add slowpath logic
  2011-09-02  0:54 [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (4 preceding siblings ...)
  2011-09-02  0:54 ` [PATCH 05/13] x86/ticketlocks: when paravirtualizing ticket locks, increment by 2 Jeremy Fitzhardinge
@ 2011-09-02  0:54 ` Jeremy Fitzhardinge
  2011-09-02 18:46   ` Eric Northup
  2011-09-02  0:55 ` [PATCH 07/13] x86/ticketlocks: tidy up __ticket_unlock_kick() Jeremy Fitzhardinge
                   ` (8 subsequent siblings)
  14 siblings, 1 reply; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02  0:54 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Maintain a flag in both LSBs of the ticket lock which indicates whether
anyone is in the lock slowpath and may need kicking when the current
holder unlocks.  The flags are set when the first locker enters
the slowpath, and cleared when unlocking to an empty queue.

In the specific implementation of lock_spinning(), make sure to set
the slowpath flags on the lock just before blocking.  We must do
this before the last-chance pickup test to prevent a deadlock
with the unlocker:

Unlocker			Locker
				test for lock pickup
					-> fail
test slowpath + unlock
	-> false
				set slowpath flags
				block

Whereas this works in any ordering:

Unlocker			Locker
				set slowpath flags
				test for lock pickup
					-> fail
				block
test slowpath + unlock
	-> true, kick
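
Condensed into C, the locker side of the safe ordering is roughly the
following - a sketch of the xen_lock_spinning() change below, not the
exact code:

	__ticket_enter_slowpath(lock);		/* set the flag first */

	/* last-chance pickup test: did the lock become free while we
	   were setting up? */
	if (ACCESS_ONCE(lock->tickets.head) == want)
		goto out;			/* yes - don't block */

	xen_poll_irq(irq);			/* otherwise block until kicked */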

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h       |   41 +++++++++++++++++++++++++++++---
 arch/x86/include/asm/spinlock_types.h |    2 +
 arch/x86/kernel/paravirt-spinlocks.c  |   37 +++++++++++++++++++++++++++++
 arch/x86/xen/spinlock.c               |    4 +++
 4 files changed, 80 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 6028b01..2135a48 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -41,7 +41,38 @@
 /* How long a lock should spin before we consider blocking */
 #define SPIN_THRESHOLD	(1 << 11)
 
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
+/* Only defined when CONFIG_PARAVIRT_SPINLOCKS defined, but may as
+ * well leave the prototype always visible.  */
+extern void __ticket_unlock_release_slowpath(struct arch_spinlock *lock);
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+
+/*
+ * Return true if someone is in the slowpath on this lock.  This
+ * should only be used by the current lock-holder.
+ */
+static inline bool __ticket_in_slowpath(struct arch_spinlock *lock)
+{
+	return !!(lock->tickets.tail & TICKET_SLOWPATH_FLAG);
+}
+
+static inline void __ticket_enter_slowpath(struct arch_spinlock *lock)
+{
+	if (sizeof(lock->tickets.tail) == sizeof(u8))
+		asm (LOCK_PREFIX "orb %1, %0"
+		     : "+m" (lock->tickets.tail)
+		     : "i" (TICKET_SLOWPATH_FLAG) : "memory");
+	else
+		asm (LOCK_PREFIX "orw %1, %0"
+		     : "+m" (lock->tickets.tail)
+		     : "i" (TICKET_SLOWPATH_FLAG) : "memory");
+}
+
+#else  /* !CONFIG_PARAVIRT_SPINLOCKS */
+static inline bool __ticket_in_slowpath(struct arch_spinlock *lock)
+{
+	return false;
+}
 
 static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, unsigned ticket)
 {
@@ -86,6 +117,7 @@ static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 	register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
 
 	inc = xadd(&lock->tickets, inc);
+	inc.tail &= ~TICKET_SLOWPATH_FLAG;
 
 	for (;;) {
 		unsigned count = SPIN_THRESHOLD;
@@ -135,10 +167,11 @@ static __always_inline void __ticket_unlock_release(arch_spinlock_t *lock)
 
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-	__ticket_t next = lock->tickets.head + TICKET_LOCK_INC;
 
-	__ticket_unlock_release(lock);
-	__ticket_unlock_kick(lock, next);
+	if (unlikely(__ticket_in_slowpath(lock)))
+		__ticket_unlock_release_slowpath(lock);
+	else
+		__ticket_unlock_release(lock);
 }
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 0553c0b..7b383e2 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -9,8 +9,10 @@
 
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 #define __TICKET_LOCK_INC	2
+#define TICKET_SLOWPATH_FLAG	((__ticket_t)1)
 #else
 #define __TICKET_LOCK_INC	1
+#define TICKET_SLOWPATH_FLAG	((__ticket_t)0)
 #endif
 
 #if (CONFIG_NR_CPUS < (256 / __TICKET_LOCK_INC))
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 4251c1d..21b6986 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -15,3 +15,40 @@ struct pv_lock_ops pv_lock_ops = {
 };
 EXPORT_SYMBOL(pv_lock_ops);
 
+
+/*
+ * If we're unlocking and we're leaving the lock uncontended (there's
+ * nobody else waiting for the lock), then we can clear the slowpath
+ * bits.  However, we need to be careful about this because someone
+ * may just be entering as we leave, and enter the slowpath.
+ */
+void __ticket_unlock_release_slowpath(struct arch_spinlock *lock)
+{
+	struct arch_spinlock old, new;
+
+	BUILD_BUG_ON(((__ticket_t)NR_CPUS) != NR_CPUS);
+
+	old = ACCESS_ONCE(*lock);
+
+	new = old;
+	new.tickets.head += TICKET_LOCK_INC;
+
+	/* Clear the slowpath flag */
+	new.tickets.tail &= ~TICKET_SLOWPATH_FLAG;
+
+	/*
+	 * If there's currently people waiting or someone snuck in
+	 * since we read the lock above, then do a normal unlock and
+	 * kick.  If we managed to unlock with no queued waiters, then
+	 * we can clear the slowpath flag.
+	 */
+	if (new.tickets.head != new.tickets.tail ||
+	    cmpxchg(&lock->head_tail,
+		    old.head_tail, new.head_tail) != old.head_tail) {
+		/* still people waiting */
+		__ticket_unlock_release(lock);
+	}
+
+	__ticket_unlock_kick(lock, new.tickets.head);
+}
+EXPORT_SYMBOL(__ticket_unlock_release_slowpath);
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index b133865..4c46144 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -119,6 +119,10 @@ static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
 	/* Only check lock once pending cleared */
 	barrier();
 
+	/* Mark entry to slowpath before doing the pickup test to make
+	   sure we don't deadlock with an unlocker. */
+	__ticket_enter_slowpath(lock);
+
 	/* check again make sure it didn't become free while
 	   we weren't looking  */
 	if (ACCESS_ONCE(lock->tickets.head) == want) {
-- 
1.7.6


^ permalink raw reply related	[flat|nested] 86+ messages in thread

* [PATCH 07/13] x86/ticketlocks: tidy up __ticket_unlock_kick()
  2011-09-02  0:54 [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (5 preceding siblings ...)
  2011-09-02  0:54 ` [PATCH 06/13] x86/ticketlock: add slowpath logic Jeremy Fitzhardinge
@ 2011-09-02  0:55 ` Jeremy Fitzhardinge
  2011-09-02  0:55 ` [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking Jeremy Fitzhardinge
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02  0:55 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

__ticket_unlock_kick() is now only called from known slowpaths, so there's
no need for it to do any checking of its own.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/paravirt.h |    2 +-
 arch/x86/include/asm/spinlock.h |   14 --------------
 2 files changed, 1 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 622f3d6..b89699a 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -746,7 +746,7 @@ static inline void __ticket_lock_spinning(struct arch_spinlock *lock, unsigned t
 	PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static inline void ____ticket_unlock_kick(struct arch_spinlock *lock, unsigned ticket)
+static inline void __ticket_unlock_kick(struct arch_spinlock *lock, unsigned ticket)
 {
 	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 2135a48..365d787 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -78,23 +78,9 @@ static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, u
 {
 }
 
-static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock, unsigned ticket)
-{
-}
-
 #endif	/* CONFIG_PARAVIRT_SPINLOCKS */
 
 
-/* 
- * If a spinlock has someone waiting on it, then kick the appropriate
- * waiting cpu.
- */
-static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
-{
-	if (unlikely(lock->tickets.tail != next))
-		____ticket_unlock_kick(lock, next);
-}
-
 /*
  * Ticket locks are conceptually two parts, one indicating the current head of
  * the queue, and the other indicating the current tail. The lock is acquired
-- 
1.7.6


^ permalink raw reply related	[flat|nested] 86+ messages in thread

* [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-02  0:54 [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (6 preceding siblings ...)
  2011-09-02  0:55 ` [PATCH 07/13] x86/ticketlocks: tidy up __ticket_unlock_kick() Jeremy Fitzhardinge
@ 2011-09-02  0:55 ` Jeremy Fitzhardinge
  2011-09-02 14:47   ` Peter Zijlstra
  2011-09-02  0:55 ` [PATCH 09/13] x86/pvticketlocks: we only need to kick if there's waiters Jeremy Fitzhardinge
                   ` (6 subsequent siblings)
  14 siblings, 1 reply; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02  0:55 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

We need to make sure interrupts are disabled while we're relying on the
contents of the per-cpu lock_waiting values, otherwise an interrupt
handler could come in, try to take some other lock, block, and overwrite
our values.
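
The kind of interleaving this guards against looks roughly like this
(illustrative, using the names from this file, not a real trace):

	xen_lock_spinning(lockA, wantA)
	  w->want = wantA;
	  w->lock = lockA;
	    --> interrupt
	    xen_lock_spinning(lockB, wantB)
	      w->want = wantB;		/* clobbers the outer values */
	      w->lock = lockB;
	      xen_poll_irq(irq);
	    <-- interrupt returns
	  xen_poll_irq(irq);		/* blocked under the wrong (lock,want),
					   so the kick for lockA may never find us */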

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/xen/spinlock.c |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 4c46144..2ed5d05 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -98,6 +98,7 @@ static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
 	struct xen_lock_waiting *w = &__get_cpu_var(lock_waiting);
 	int cpu = smp_processor_id();
 	u64 start;
+	unsigned long flags;
 
 	/* If kicker interrupts not initialized yet, just spin */
 	if (irq == -1)
@@ -105,6 +106,10 @@ static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
 
 	start = spin_time_start();
 
+	/* Make sure interrupts are disabled to ensure that these
+	   per-cpu values are not overwritten. */
+	local_irq_save(flags);
+
 	w->want = want;
 	w->lock = lock;
 
@@ -139,6 +144,9 @@ static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
 out:
 	cpumask_clear_cpu(cpu, &waiting_cpus);
 	w->lock = NULL;
+
+	local_irq_restore(flags);
+
 	spin_time_accum_blocked(start);
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
-- 
1.7.6


^ permalink raw reply related	[flat|nested] 86+ messages in thread

* [PATCH 09/13] x86/pvticketlocks: we only need to kick if there's waiters
  2011-09-02  0:54 [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (7 preceding siblings ...)
  2011-09-02  0:55 ` [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking Jeremy Fitzhardinge
@ 2011-09-02  0:55 ` Jeremy Fitzhardinge
  2011-09-02  0:55 ` [PATCH 10/13] xen/pvticket: allow interrupts to be enabled while blocking Jeremy Fitzhardinge
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02  0:55 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

If we're releasing the lock into an uncontended state, there's nobody
waiting on it, so there's no need to kick anyone.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/kernel/paravirt-spinlocks.c |    3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 21b6986..71b8557 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -47,8 +47,7 @@ void __ticket_unlock_release_slowpath(struct arch_spinlock *lock)
 		    old.head_tail, new.head_tail) != old.head_tail) {
 		/* still people waiting */
 		__ticket_unlock_release(lock);
+		__ticket_unlock_kick(lock, new.tickets.head);
 	}
-
-	__ticket_unlock_kick(lock, new.tickets.head);
 }
 EXPORT_SYMBOL(__ticket_unlock_release_slowpath);
-- 
1.7.6


^ permalink raw reply related	[flat|nested] 86+ messages in thread

* [PATCH 10/13] xen/pvticket: allow interrupts to be enabled while blocking
  2011-09-02  0:54 [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (8 preceding siblings ...)
  2011-09-02  0:55 ` [PATCH 09/13] x86/pvticketlocks: we only need to kick if there's waiters Jeremy Fitzhardinge
@ 2011-09-02  0:55 ` Jeremy Fitzhardinge
  2011-09-02 14:48   ` Peter Zijlstra
  2011-09-02  0:55 ` [PATCH 11/13] x86/ticketlock: only do kick after doing unlock Jeremy Fitzhardinge
                   ` (4 subsequent siblings)
  14 siblings, 1 reply; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02  0:55 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/xen/spinlock.c |   28 +++++++++++++++++++++++++---
 1 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 2ed5d05..7b89439 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -106,11 +106,26 @@ static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
 
 	start = spin_time_start();
 
-	/* Make sure interrupts are disabled to ensure that these
-	   per-cpu values are not overwritten. */
+	/* Make sure an interrupt handler can't upset things in a
+	   partially setup state. */
 	local_irq_save(flags);
 
+	/*
+	 * We don't really care if we're overwriting some other
+	 * (lock,want) pair, as that would mean that we're currently
+	 * in an interrupt context, and the outer context had
+	 * interrupts enabled.  That has already kicked the VCPU out
+	 * of xen_poll_irq(), so it will just return spuriously and
+	 * retry with newly setup (lock,want).
+	 *
+	 * The ordering protocol on this is that the "lock" pointer
+	 * may only be set non-NULL if the "want" ticket is correct.
+	 * If we're updating "want", we must first clear "lock".
+	 */
+	w->lock = NULL;
+	smp_wmb();
 	w->want = want;
+	smp_wmb();
 	w->lock = lock;
 
 	/* This uses set_bit, which atomic and therefore a barrier */
@@ -135,10 +150,15 @@ static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
 		goto out;
 	}
 
+	/* Allow interrupts while blocked */
+	local_irq_restore(flags);
+
 	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
 	xen_poll_irq(irq);
 	ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
 
+	local_irq_save(flags);
+
 	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
@@ -160,7 +180,9 @@ static void xen_unlock_kick(struct arch_spinlock *lock, unsigned next)
 	for_each_cpu(cpu, &waiting_cpus) {
 		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
 
-		if (w->lock == lock && w->want == next) {
+		/* Make sure we read lock before want */
+		if (ACCESS_ONCE(w->lock) == lock &&
+		    ACCESS_ONCE(w->want) == next) {
 			ADD_STATS(released_slow_kicked, 1);
 			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
 			break;
-- 
1.7.6


^ permalink raw reply related	[flat|nested] 86+ messages in thread

* [PATCH 11/13] x86/ticketlock: only do kick after doing unlock
  2011-09-02  0:54 [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (9 preceding siblings ...)
  2011-09-02  0:55 ` [PATCH 10/13] xen/pvticket: allow interrupts to be enabled while blocking Jeremy Fitzhardinge
@ 2011-09-02  0:55 ` Jeremy Fitzhardinge
  2011-09-02 14:49   ` Peter Zijlstra
  2011-09-02  0:55 ` [PATCH 12/13] x86/pvticketlock: make sure unlock_kick pvop call is inlined Jeremy Fitzhardinge
                   ` (3 subsequent siblings)
  14 siblings, 1 reply; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02  0:55 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

From: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>

We must release the lock before checking to see if the lock is in
slowpath or else there's a potential race where the lock enters the
slow path after the unlocker has checked the slowpath flag, but before
it has actually unlocked.
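
In the notation of the earlier slowpath patch, the race being closed is
(illustrative ordering, not a trace):

Unlocker			Locker
test slowpath
	-> false
				set slowpath flag
				last-chance pickup test
					-> fail
				block
release lock, no kick
				(blocked on a now-free lock)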

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h      |    7 +++----
 arch/x86/kernel/paravirt-spinlocks.c |   23 ++++++-----------------
 2 files changed, 9 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 365d787..bf6dff4 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -43,7 +43,7 @@
 
 /* Only defined when CONFIG_PARAVIRT_SPINLOCKS defined, but may as
  * well leave the prototype always visible.  */
-extern void __ticket_unlock_release_slowpath(struct arch_spinlock *lock);
+extern void __ticket_unlock_slowpath(struct arch_spinlock *lock);
 
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 
@@ -154,10 +154,9 @@ static __always_inline void __ticket_unlock_release(arch_spinlock_t *lock)
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 
+	__ticket_unlock_release(lock);
 	if (unlikely(__ticket_in_slowpath(lock)))
-		__ticket_unlock_release_slowpath(lock);
-	else
-		__ticket_unlock_release(lock);
+		__ticket_unlock_slowpath(lock);
 }
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 71b8557..674718f 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -22,32 +22,21 @@ EXPORT_SYMBOL(pv_lock_ops);
  * bits.  However, we need to be careful about this because someone
  * may just be entering as we leave, and enter the slowpath.
  */
-void __ticket_unlock_release_slowpath(struct arch_spinlock *lock)
+void __ticket_unlock_slowpath(struct arch_spinlock *lock)
 {
 	struct arch_spinlock old, new;
 
 	BUILD_BUG_ON(((__ticket_t)NR_CPUS) != NR_CPUS);
 
 	old = ACCESS_ONCE(*lock);
-
 	new = old;
-	new.tickets.head += TICKET_LOCK_INC;
 
 	/* Clear the slowpath flag */
 	new.tickets.tail &= ~TICKET_SLOWPATH_FLAG;
+	if (new.tickets.head == new.tickets.tail)
+		cmpxchg(&lock->head_tail, old.head_tail, new.head_tail);
 
-	/*
-	 * If there's currently people waiting or someone snuck in
-	 * since we read the lock above, then do a normal unlock and
-	 * kick.  If we managed to unlock with no queued waiters, then
-	 * we can clear the slowpath flag.
-	 */
-	if (new.tickets.head != new.tickets.tail ||
-	    cmpxchg(&lock->head_tail,
-		    old.head_tail, new.head_tail) != old.head_tail) {
-		/* still people waiting */
-		__ticket_unlock_release(lock);
-		__ticket_unlock_kick(lock, new.tickets.head);
-	}
+	/* Wake up an appropriate waiter */
+ 	__ticket_unlock_kick(lock, new.tickets.head);
 }
-EXPORT_SYMBOL(__ticket_unlock_release_slowpath);
+EXPORT_SYMBOL(__ticket_unlock_slowpath);
-- 
1.7.6


^ permalink raw reply related	[flat|nested] 86+ messages in thread

* [PATCH 12/13] x86/pvticketlock: make sure unlock_kick pvop call is inlined
  2011-09-02  0:54 [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (10 preceding siblings ...)
  2011-09-02  0:55 ` [PATCH 11/13] x86/ticketlock: only do kick after doing unlock Jeremy Fitzhardinge
@ 2011-09-02  0:55 ` Jeremy Fitzhardinge
  2011-09-02  0:55 ` [PATCH 13/13] x86/pvticketlock: use __ticket_t for pvop args Jeremy Fitzhardinge
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02  0:55 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Otherwise the generated code for raw_spin_lock will look awful.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/paravirt.h |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index b89699a..a6f2651 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -741,12 +741,12 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
 
-static inline void __ticket_lock_spinning(struct arch_spinlock *lock, unsigned ticket)
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, unsigned ticket)
 {
 	PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static inline void __ticket_unlock_kick(struct arch_spinlock *lock, unsigned ticket)
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, unsigned ticket)
 {
 	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
-- 
1.7.6


^ permalink raw reply related	[flat|nested] 86+ messages in thread

* [PATCH 13/13] x86/pvticketlock: use __ticket_t for pvop args
  2011-09-02  0:54 [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (11 preceding siblings ...)
  2011-09-02  0:55 ` [PATCH 12/13] x86/pvticketlock: make sure unlock_kick pvop call is inlined Jeremy Fitzhardinge
@ 2011-09-02  0:55 ` Jeremy Fitzhardinge
  2011-09-02 11:22   ` Stefano Stabellini
  2011-09-02 15:38 ` Linus Torvalds
  14 siblings, 0 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02  0:55 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Use __ticket_t for the ticket argument to the pvops, to prevent
unnecessary zero-extension in the calling code.
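
Concretely, this is the widening visible in the cover letter's fastpath
listing as "movzbl %dl, %esi # set up __ticket_lock_spinning arg": with
an unsigned int argument the 8- or 16-bit ticket has to be zero-extended
before the call, whereas with a __ticket_t argument the caller can pass
the ticket register as-is.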

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/paravirt.h       |    6 ++++--
 arch/x86/include/asm/spinlock_types.h |    4 ----
 arch/x86/xen/spinlock.c               |    4 ++--
 3 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index a6f2651..932a682 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -741,12 +741,14 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
 
-static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, unsigned ticket)
+#include <asm/spinlock_types.h>
+
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, __ticket_t ticket)
 {
 	PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, unsigned ticket)
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
 {
 	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 7b383e2..62ea99e 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -1,10 +1,6 @@
 #ifndef _ASM_X86_SPINLOCK_TYPES_H
 #define _ASM_X86_SPINLOCK_TYPES_H
 
-#ifndef __LINUX_SPINLOCK_TYPES_H
-# error "please don't include this file directly"
-#endif
-
 #include <linux/types.h>
 
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 7b89439..91b0940 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -92,7 +92,7 @@ static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
 static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
 static cpumask_t waiting_cpus;
 
-static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
+static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 {
 	int irq = __this_cpu_read(lock_kicker_irq);
 	struct xen_lock_waiting *w = &__get_cpu_var(lock_waiting);
@@ -171,7 +171,7 @@ out:
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
 
-static void xen_unlock_kick(struct arch_spinlock *lock, unsigned next)
+static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
 	int cpu;
 
-- 
1.7.6


^ permalink raw reply related	[flat|nested] 86+ messages in thread

* Re: [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks
  2011-09-02  0:54 [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks Jeremy Fitzhardinge
@ 2011-09-02 11:22   ` Stefano Stabellini
  2011-09-02  0:54   ` Jeremy Fitzhardinge
                     ` (13 subsequent siblings)
  14 siblings, 0 replies; 86+ messages in thread
From: Stefano Stabellini @ 2011-09-02 11:22 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: H. Peter Anvin, Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

On Fri, 2 Sep 2011, Jeremy Fitzhardinge wrote:
> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> 
> This series replaces the existing paravirtualized spinlock mechanism
> with a paravirtualized ticketlock mechanism.
> 
> Ticket locks have an inherent problem in a virtualized case, because
> the vCPUs are scheduled rather than running concurrently (ignoring
> gang scheduled vCPUs).  This can result in catastrophic performance
> collapses when the vCPU scheduler doesn't schedule the correct "next"
> vCPU, and ends up scheduling a vCPU which burns its entire timeslice
> spinning.  (Note that this is not the same problem as lock-holder
> preemption, which this series also addresses; that's also a problem,
> but not catastrophic).
> 
> (See Thomas Friebel's talk "Prevent Guests from Spinning Around"
> http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)
> 
> Currently we deal with this by having PV spinlocks, which adds a layer
> of indirection in front of all the spinlock functions, and defining a
> completely new implementation for Xen (and for other pvops users, but
> there are none at present).
> 
> PV ticketlocks keeps the existing ticketlock implemenentation
> (fastpath) as-is, but adds a couple of pvops for the slow paths:
> 
> - If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
>   iterations, then call out to the __ticket_lock_spinning() pvop,
>   which allows a backend to block the vCPU rather than spinning.  This
>   pvop can set the lock into "slowpath state".
> 
> - When releasing a lock, if it is in "slowpath state", the call
>   __ticket_unlock_kick() to kick the next vCPU in line awake.  If the
>   lock is no longer in contention, it also clears the slowpath flag.
> 
> The "slowpath state" is stored in the LSB of the within the lock
> ticket.  This has the effect of reducing the max number of CPUs by
> half (so, a "small ticket" can deal with 128 CPUs, and "large ticket"
> 32768).
> 
> This series provides a Xen implementation, but it should be
> straightforward to add a KVM implementation as well.
> 
> Overall, it results in a large reduction in code, it makes the native
> and virtualized cases closer, and it removes a layer of indirection
> around all the spinlock functions.  The downside is that it does add a
> few instructions into the fastpath in the native case.
> 
> Most of the heavy lifting code is in the slowpaths, but it does have
> an effect on the fastpath code.  The inner part of ticket lock code
> becomes:
> 	inc = xadd(&lock->tickets, inc);
> 	inc.tail &= ~TICKET_SLOWPATH_FLAG;
> 
> 	for (;;) {
> 		unsigned count = SPIN_THRESHOLD;
> 
> 		do {
> 			if (inc.head == inc.tail)
> 				goto out;
> 			cpu_relax();
> 			inc.head = ACCESS_ONCE(lock->tickets.head);
> 		} while (--count);
> 		__ticket_lock_spinning(lock, inc.tail);
> 	}
> 
> which results in:
> 
>       pushq   %rbp
>       movq    %rsp,%rbp
> 
>       movl    $512, %ecx
>       lock; xaddw %cx, (%rdi)	# claim ticket
> 
>       movzbl  %ch, %edx
>       movl    $2048, %eax	# set spin count
>       andl    $-2, %edx		# mask off TICKET_SLOWPATH_FLAG
>       movzbl  %dl, %esi
> 
> 1:    cmpb    %dl, %cl		# compare head and tail
>       je      2f   		# got it!
> 
>       ### BEGIN SLOWPATH
>       rep; nop			# pause
>       decl    %eax		# dec count
>       movb    (%rdi), %cl	# re-fetch head
>       jne     1b      		# try again
> 
>       call    *pv_lock_ops	# call __ticket_lock_spinning
>       movl    $2048, %eax	# reload spin count
>       jmp     1b
>       ### END SLOWPATH
> 
> 2:    popq    %rbp
>       ret
> 
> with CONFIG_PARAVIRT_SPINLOCKS=n, the same code generates asm identical
> to the current ticketlock code:
> 
> 	pushq   %rbp
>         movq    %rsp, %rbp
> 
>         movl    $256, %eax
> 	lock; xaddw %ax, (%rdi)
> 
> 	movzbl  %ah, %edx
> 
> 1:	cmpb    %dl, %al	# compare head and tail
> 	je	2f   		# got it!
> 
> 	### BEGIN SLOWPATH
> 	rep; nop		# pause
> 	movb    (%rdi), %al	# reload head
> 	jmp	1b		# loop
> 	### END SLOWPATH
> 
> 2:	popq	%rbp
> 	ret
> 
> so the pv ticketlocks add 3 extra instructions to the fastpath, one of
> which really doesn't need to be there (setting up the arg for the
> slowpath function):
>       movl    $2048, %eax	# set spin count
>       andl    $-2, %edx		# mask off SLOW_PATH_FLAG
>       movzbl  %dl, %esi		# set up __ticket_lock_spinning arg
> 
> The unlock code is very straightforward:
> 	__ticket_unlock_release(lock);
> 	if (unlikely(__ticket_in_slowpath(lock)))
> 		__ticket_unlock_slowpath(lock);
> which generates:
>       addb $2, (%rdi)
>       testb    $1, 1(%rdi)
>       je       1f
>       call     __ticket_unlock_slowpath
> 1:
> 
> which, while simple, is more complex than the simple "incb (%rdi)".
> (I'm not sure whether its worth inlining this or not.)
> 
> Thoughts? Comments? Suggestions?
> 
> Thanks,
> 	J
> 
> Jeremy Fitzhardinge (12):
>   x86/spinlocks: replace pv spinlocks with pv ticketlocks
>   x86/ticketlock: collapse a layer of functions
>   xen/pvticketlock: Xen implementation for PV ticket locks
>   x86/pvticketlock: use callee-save for lock_spinning
>   x86/ticketlocks: when paravirtualizing ticket locks, increment by 2
>   x86/ticketlock: add slowpath logic
>   x86/ticketlocks: tidy up __ticket_unlock_kick()
>   xen/pvticketlock: disable interrupts while blocking
>   x86/pvticketlocks: we only need to kick if there's waiters
>   xen/pvticket: allow interrupts to be enabled while blocking
>   x86/pvticketlock: make sure unlock_kick pvop call is inlined
>   x86/pvticketlock: use __ticket_t for pvop args
> 
> Srivatsa Vaddagiri (1):
>   x86/ticketlock: only do kick after doing unlock
> 
>  arch/x86/include/asm/paravirt.h       |   30 +---
>  arch/x86/include/asm/paravirt_types.h |    8 +-
>  arch/x86/include/asm/spinlock.h       |  118 ++++++++-----
>  arch/x86/include/asm/spinlock_types.h |   16 ++-
>  arch/x86/kernel/paravirt-spinlocks.c  |   40 +++--
>  arch/x86/xen/spinlock.c               |  305 ++++++++-------------------------
>  6 files changed, 186 insertions(+), 331 deletions(-)

do you have a git tree somewhere with this series?

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-02  0:55 ` [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking Jeremy Fitzhardinge
@ 2011-09-02 14:47   ` Peter Zijlstra
  2011-09-02 19:29       ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 86+ messages in thread
From: Peter Zijlstra @ 2011-09-02 14:47 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

On Thu, 2011-09-01 at 17:55 -0700, Jeremy Fitzhardinge wrote:
> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> 
> We need to make sure interrupts are disabled while we're relying on the
> contents of the per-cpu lock_waiting values, otherwise an interrupt
> handler could come in, try to take some other lock, block, and overwrite
> our values.

Would this make it illegal to take a spinlock from NMI context?

I know that its generally considered bad form, but there's at least one
spinlock that's only taken from NMI context and thus hasn't got any
deadlock potential.

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 10/13] xen/pvticket: allow interrupts to be enabled while blocking
  2011-09-02  0:55 ` [PATCH 10/13] xen/pvticket: allow interrupts to be enabled while blocking Jeremy Fitzhardinge
@ 2011-09-02 14:48   ` Peter Zijlstra
  2011-09-02 19:30     ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 86+ messages in thread
From: Peter Zijlstra @ 2011-09-02 14:48 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

On Thu, 2011-09-01 at 17:55 -0700, Jeremy Fitzhardinge wrote:
> +       /* Make sure an interrupt handler can't upset things in a
> +          partially setup state. */
>         local_irq_save(flags);
>  
> +       /*
> +        * We don't really care if we're overwriting some other
> +        * (lock,want) pair, as that would mean that we're currently
> +        * in an interrupt context, and the outer context had
> +        * interrupts enabled.  That has already kicked the VCPU out
> +        * of xen_poll_irq(), so it will just return spuriously and
> +        * retry with newly setup (lock,want).
> +        *
> +        * The ordering protocol on this is that the "lock" pointer
> +        * may only be set non-NULL if the "want" ticket is correct.
> +        * If we're updating "want", we must first clear "lock".
> +        */
> +       w->lock = NULL; 

I mean, I don't much care about Xen code, but that's two different
comment styles.

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 11/13] x86/ticketlock: only do kick after doing unlock
  2011-09-02  0:55 ` [PATCH 11/13] x86/ticketlock: only do kick after doing unlock Jeremy Fitzhardinge
@ 2011-09-02 14:49   ` Peter Zijlstra
  2011-09-02 19:34       ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 86+ messages in thread
From: Peter Zijlstra @ 2011-09-02 14:49 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

On Thu, 2011-09-01 at 17:55 -0700, Jeremy Fitzhardinge wrote:
> From: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
> 
> We must release the lock before checking to see if the lock is in
> slowpath or else there's a potential race where the lock enters the
> slow path after the unlocker has checked the slowpath flag, but before
> it has actually unlocked.
> 
> Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> 

Wouldn't it be much better to fold it back so that this bug never
happens when you bisect?

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks
  2011-09-02  0:54 [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (13 preceding siblings ...)
  2011-09-02 11:22   ` Stefano Stabellini
@ 2011-09-02 15:38 ` Linus Torvalds
  2011-09-02 20:07   ` Jeremy Fitzhardinge
  14 siblings, 1 reply; 86+ messages in thread
From: Linus Torvalds @ 2011-09-02 15:38 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: H. Peter Anvin, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

On Thu, Sep 1, 2011 at 5:54 PM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
>
> The inner part of ticket lock code becomes:
>        inc = xadd(&lock->tickets, inc);
>        inc.tail &= ~TICKET_SLOWPATH_FLAG;
>
>        for (;;) {
>                unsigned count = SPIN_THRESHOLD;
>
>                do {
>                        if (inc.head == inc.tail)
>                                goto out;
>                        cpu_relax();
>                        inc.head = ACCESS_ONCE(lock->tickets.head);
>                } while (--count);
>                __ticket_lock_spinning(lock, inc.tail);
>        }

Hmm. It strikes me that I don't think you should touch the
TICKET_SLOWPATH_FLAG in the fastpath at all.

Can't you just do this:

   inc = xadd(&lock->tickets, inc);
   if (likely(inc.head == inc.tail))
     goto out;

   ### SLOWPATH ###
   inc.tail &= ~TICKET_SLOWPATH_FLAG;
   for (;;) {
      .. as before ..

which might alleviate the problem with the fastpath being polluted by
all those silly slowpath things.  Hmm?

(This assumes that TICKET_SLOWPATH_FLAG is never set in inc.head, so
if it's set that equality check will fail. I didn't actually check if
that assumption was correct)

                         Linus

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 06/13] x86/ticketlock: add slowpath logic
  2011-09-02  0:54 ` [PATCH 06/13] x86/ticketlock: add slowpath logic Jeremy Fitzhardinge
@ 2011-09-02 18:46   ` Eric Northup
  2011-09-02 19:32     ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 86+ messages in thread
From: Eric Northup @ 2011-09-02 18:46 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: H. Peter Anvin, Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

On Thu, Sep 1, 2011 at 5:54 PM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
>
> Maintain a flag in both LSBs of the ticket lock which indicates whether
> anyone is in the lock slowpath and may need kicking when the current
> holder unlocks.  The flags are set when the first locker enters
> the slowpath, and cleared when unlocking to an empty queue.

Are there actually two flags maintained?  I only see the one in the
ticket tail getting set/cleared/tested.

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-02 14:47   ` Peter Zijlstra
@ 2011-09-02 19:29       ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02 19:29 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

On 09/02/2011 07:47 AM, Peter Zijlstra wrote:
> On Thu, 2011-09-01 at 17:55 -0700, Jeremy Fitzhardinge wrote:
>> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
>>
>> We need to make sure interrupts are disabled while we're relying on the
>> contents of the per-cpu lock_waiting values, otherwise an interrupt
>> handler could come in, try to take some other lock, block, and overwrite
>> our values.
> Would this make it illegal to take a spinlock from NMI context?

That would be problematic.  But a Xen domain wouldn't be getting NMIs -
at least not standard x86 ones - so that's moot.

> I know that its generally considered bad form, but there's at least one
> spinlock that's only taken from NMI context and thus hasn't got any
> deadlock potential.

Which one?

    J

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 10/13] xen/pvticket: allow interrupts to be enabled while blocking
  2011-09-02 14:48   ` Peter Zijlstra
@ 2011-09-02 19:30     ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02 19:30 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

On 09/02/2011 07:48 AM, Peter Zijlstra wrote:
> On Thu, 2011-09-01 at 17:55 -0700, Jeremy Fitzhardinge wrote:
>> +       /* Make sure an interrupt handler can't upset things in a
>> +          partially setup state. */
>>         local_irq_save(flags);
>>  
>> +       /*
>> +        * We don't really care if we're overwriting some other
>> +        * (lock,want) pair, as that would mean that we're currently
>> +        * in an interrupt context, and the outer context had
>> +        * interrupts enabled.  That has already kicked the VCPU out
>> +        * of xen_poll_irq(), so it will just return spuriously and
>> +        * retry with newly setup (lock,want).
>> +        *
>> +        * The ordering protocol on this is that the "lock" pointer
>> +        * may only be set non-NULL if the "want" ticket is correct.
>> +        * If we're updating "want", we must first clear "lock".
>> +        */
>> +       w->lock = NULL; 
> I mean, I don't much care about Xen code, but that's two different
> comment styles.

Yeah, that's the "two line comment style" next to "big block comment"
style - but you're right they look pretty bad juxtaposed like that.

    J


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 06/13] x86/ticketlock: add slowpath logic
  2011-09-02 18:46   ` Eric Northup
@ 2011-09-02 19:32     ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02 19:32 UTC (permalink / raw)
  To: Eric Northup
  Cc: H. Peter Anvin, Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

On 09/02/2011 11:46 AM, Eric Northup wrote:
> On Thu, Sep 1, 2011 at 5:54 PM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
>> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
>>
>> Maintain a flag in both LSBs of the ticket lock which indicates whether
>> anyone is in the lock slowpath and may need kicking when the current
>> holder unlocks.  The flags are set when the first locker enters
>> the slowpath, and cleared when unlocking to an empty queue.
> Are there actually two flags maintained?  I only see the one in the
> ticket tail getting set/cleared/tested.

Yeah, there's only one flag, so there's a spare bit in the other half.

    j


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 11/13] x86/ticketlock: only do kick after doing unlock
  2011-09-02 14:49   ` Peter Zijlstra
@ 2011-09-02 19:34       ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02 19:34 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Srivatsa Vaddagiri, Jeremy Fitzhardinge

On 09/02/2011 07:49 AM, Peter Zijlstra wrote:
> On Thu, 2011-09-01 at 17:55 -0700, Jeremy Fitzhardinge wrote:
>> From: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
>>
>> We must release the lock before checking to see if the lock is in
>> slowpath or else there's a potential race where the lock enters the
>> slow path after the unlocker has checked the slowpath flag, but before
>> it has actually unlocked.
>>
>> Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
>> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> 
> Wouldn't it be much better to fold it back so that this bug never
> happens when you bisect?

Yes indeed.

    J


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks
  2011-09-02 15:38 ` Linus Torvalds
@ 2011-09-02 20:07   ` Jeremy Fitzhardinge
  2011-09-02 20:27     ` Linus Torvalds
  0 siblings, 1 reply; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02 20:07 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: H. Peter Anvin, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

On 09/02/2011 08:38 AM, Linus Torvalds wrote:
> On Thu, Sep 1, 2011 at 5:54 PM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
>> The inner part of ticket lock code becomes:
>>        inc = xadd(&lock->tickets, inc);
>>        inc.tail &= ~TICKET_SLOWPATH_FLAG;
>>
>>        for (;;) {
>>                unsigned count = SPIN_THRESHOLD;
>>
>>                do {
>>                        if (inc.head == inc.tail)
>>                                goto out;
>>                        cpu_relax();
>>                        inc.head = ACCESS_ONCE(lock->tickets.head);
>>                } while (--count);
>>                __ticket_lock_spinning(lock, inc.tail);
>>        }
> Hmm. It strikes me that I don't think you should touch the
> TICKET_SLOWPATH_FLAG in the fastpath at all.
>
> Can't you just do this:
>
>    inc = xadd(&lock->tickets, inc);
>    if (likely(inc.head == inc.tail))
>      goto out;
>
>    ### SLOWPATH ###
>    inc.tail &= ~TICKET_SLOWPATH_FLAG;
>    for (;;) {
>       .. as before ..
>
> which might alleviate the problem with the fastpath being polluted by
> all those silly slowpath things.  Hmm?
>
> (This assumes that TICKET_SLOWPATH_FLAG is never set in inc.head, so
> if it's set that equality check will fail. I didn't actually check if
> that assumption was correct)

Yes, nice idea.  That ends up making the overall code slightly longer,
but the fastpath becomes identical to the non-pv case:

	mov    $512,%ecx
	lock xadd %cx,(%rdi)
	movzbl %ch,%edx
	cmp    %cl,%dl
	je     2f

	### SLOWPATH START
	and    $-2,%edx
	mov    $8192,%eax
	movzbl %dl,%esi
1:	cmp    %dl,%cl
	je     2f
	pause  
	dec    %eax
	mov    (%rdi),%cl
	jne    1b
	callq  __ticket_lock_spinning
	mov    $8192,%eax
	jmp    1b
	### SLOWPATH ENDS

2:


It's especially nice that it also moves the spin counter and arg setup
into the slowpath code.

And that entire piece of slowpath code can be moved out into its own
function, so the fastpath becomes:

	mov    $512,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%esi
	cmp    %al,%sil
	je     1f

	movzbl %sil,%esi
	callq  __ticket_lock_slow
1:
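
In C terms that split is roughly (a sketch, with __ticket_lock_slow
being the out-of-line helper named above):

	static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
	{
		register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };

		inc = xadd(&lock->tickets, inc);
		if (likely(inc.head == inc.tail))
			return;			/* same as the non-pv fastpath */

		__ticket_lock_slow(lock, inc.tail);	/* masks the flag, spins, blocks */
	}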

I don't know whether that fastpath code is small enough to consider
inlining everywhere?

    J

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks
  2011-09-02 20:07   ` Jeremy Fitzhardinge
@ 2011-09-02 20:27     ` Linus Torvalds
  2011-09-02 21:42       ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 86+ messages in thread
From: Linus Torvalds @ 2011-09-02 20:27 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: H. Peter Anvin, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

On Fri, Sep 2, 2011 at 1:07 PM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
>
> I don't know whether that fastpath code is small enough to consider
> inlining everywhere?

No.

There's no point in inlining something that ends up containing a
conditional function call: gcc will have to effectively save/restore
registers around that thing anyway, so you lose a lot of the
advantages of inlining. So I think it's better done as an out-of-line
function, which I thought we did for spinlocks anyway.

Also, do you run with CONFIG_OPTIMIZE_SIZE? Without that, gcc should
be smart enough to make a "likely()" case be a fall-through.

                          Linus

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-02 19:29       ` Jeremy Fitzhardinge
@ 2011-09-02 20:47         ` Peter Zijlstra
  -1 siblings, 0 replies; 86+ messages in thread
From: Peter Zijlstra @ 2011-09-02 20:47 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

On Fri, 2011-09-02 at 12:29 -0700, Jeremy Fitzhardinge wrote:
> 
> > I know that its generally considered bad form, but there's at least one
> > spinlock that's only taken from NMI context and thus hasn't got any
> > deadlock potential.
> 
> Which one? 

arch/x86/kernel/traps.c:nmi_reason_lock

It serializes NMI access to the NMI reason port across CPUs.
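
Roughly (from memory, not an exact quote of traps.c):

	raw_spin_lock(&nmi_reason_lock);
	reason = get_nmi_reason();	/* read the legacy reason port */
	if (reason & NMI_REASON_MASK) {
		/* ... handle SERR / IOCHK ... */
		raw_spin_unlock(&nmi_reason_lock);
		return;
	}
	raw_spin_unlock(&nmi_reason_lock);
	/* otherwise it's an unknown NMI */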

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks
  2011-09-02 20:27     ` Linus Torvalds
@ 2011-09-02 21:42       ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02 21:42 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: H. Peter Anvin, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

On 09/02/2011 01:27 PM, Linus Torvalds wrote:
> On Fri, Sep 2, 2011 at 1:07 PM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
>> I don't know whether that fastpath code is small enough to consider
>> inlining everywhere?
> No.
>
> There's no point in inlining something that ends up containing a
> conditional function call: gcc will have to effectively save/restore
> registers around that thing anyway, so you lose a lot of the
> advantages of inlining. So I think it's better done as an out-of-line
> function, which I thought we did for spinlocks anyway.

Yes, the lock function is currently out-of-line.

I should also make sure that unlock is also out of line when
paravirtualized.

> Also, do you run with CONFIG_OPTIMIZE_SIZE? Without that, gcc should
> be smart enough to make a "likely()" case be a fall-through.

Ah, I was wondering why I'd never seen likely/unlikely do anything
useful.  With OPTIMIZE_SIZE=n, there's no point in explicitly moving the
slowpath out to a separate function.

So the only downside with this variant is that it breaks my design
criterion of making the generated code look identical to the original
code when CONFIG_PARAVIRT_SPINLOCKS=n.  But I don't know if that's an
actual downside in practice.

    J

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-02 20:47         ` Peter Zijlstra
  (?)
@ 2011-09-02 21:50         ` Jeremy Fitzhardinge
  2011-09-02 22:37             ` Peter Zijlstra
                             ` (4 more replies)
  -1 siblings, 5 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-02 21:50 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, Stefano Stabellini

On 09/02/2011 01:47 PM, Peter Zijlstra wrote:
> On Fri, 2011-09-02 at 12:29 -0700, Jeremy Fitzhardinge wrote:
>>> I know that its generally considered bad form, but there's at least one
>>> spinlock that's only taken from NMI context and thus hasn't got any
>>> deadlock potential.
>> Which one? 
> arch/x86/kernel/traps.c:nmi_reason_lock
>
> It serializes NMI access to the NMI reason port across CPUs.

Ah, OK.  Well, that will never happen in a PV Xen guest.  But PV
ticketlocks are equally applicable to an HVM Xen domain (and KVM guest),
so I guess there's at least some chance there could be a virtual
emulated NMI.  Maybe?  Does qemu do that kind of thing?

But, erm, does that even make sense?  I'm assuming the NMI reason port
tells the CPU why it got an NMI.  If multiple CPUs can get NMIs and
there's only a single reason port, then doesn't that mean that either 1)
they all got the NMI for the same reason, or 2) having a single port is
inherently racy?  How does the locking actually work there?

    J

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-02 21:50         ` Jeremy Fitzhardinge
@ 2011-09-02 22:37             ` Peter Zijlstra
  2011-09-02 23:14           ` Andi Kleen
                               ` (3 subsequent siblings)
  4 siblings, 0 replies; 86+ messages in thread
From: Peter Zijlstra @ 2011-09-02 22:37 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, Stefano Stabellini

On Fri, 2011-09-02 at 14:50 -0700, Jeremy Fitzhardinge wrote:
> On 09/02/2011 01:47 PM, Peter Zijlstra wrote:
> > On Fri, 2011-09-02 at 12:29 -0700, Jeremy Fitzhardinge wrote:
> >>> I know that its generally considered bad form, but there's at least one
> >>> spinlock that's only taken from NMI context and thus hasn't got any
> >>> deadlock potential.
> >> Which one? 
> > arch/x86/kernel/traps.c:nmi_reason_lock
> >
> > It serializes NMI access to the NMI reason port across CPUs.
> 
> Ah, OK.  Well, that will never happen in a PV Xen guest.  But PV
> ticketlocks are equally applicable to an HVM Xen domain (and KVM guest),
> so I guess there's at least some chance there could be a virtual
> emulated NMI.  Maybe?  Does qemu do that kind of thing?

Afaik qemu/kvm can do the whole NMI thing, yes.

> But, erm, does that even make sense?  I'm assuming the NMI reason port
> tells the CPU why it got an NMI.  If multiple CPUs can get NMIs and
> there's only a single reason port, then doesn't that mean that either 1)
> they all got the NMI for the same reason, or 2) having a single port is
> inherently racy?  How does the locking actually work there?

I really wouldn't know, the whole NMI thing is a bit of a trainwreck
architecturally. Maybe the x86 maintainers or Linus knows more on this
aspect of it.



^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-02 21:50         ` Jeremy Fitzhardinge
  2011-09-02 22:37             ` Peter Zijlstra
@ 2011-09-02 23:14           ` Andi Kleen
  2011-09-05 11:52           ` Stefano Stabellini
                             ` (2 subsequent siblings)
  4 siblings, 0 replies; 86+ messages in thread
From: Andi Kleen @ 2011-09-02 23:14 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Peter Zijlstra, H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, Stefano Stabellini


> But, erm, does that even make sense?  I'm assuming the NMI reason port
> tells the CPU why it got an NMI.  If multiple CPUs can get NMIs and
> there's only a single reason port, then doesn't that mean that either 1)
> they all got the NMI for the same reason, or 2) having a single port is
> inherently racy?  How does the locking actually work there?

All the code to determine NMI reasons is inherently racy,
and each new NMI user makes it worse.

-Andi


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-02 21:50         ` Jeremy Fitzhardinge
  2011-09-02 22:37             ` Peter Zijlstra
  2011-09-02 23:14           ` Andi Kleen
@ 2011-09-05 11:52           ` Stefano Stabellini
  2011-09-05 12:05           ` Avi Kivity
  2011-09-06 15:14           ` Don Zickus
  4 siblings, 0 replies; 86+ messages in thread
From: Stefano Stabellini @ 2011-09-05 11:52 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Peter Zijlstra, H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, Stefano Stabellini, Keir Fraser

CC'ing Keir.

On Fri, 2 Sep 2011, Jeremy Fitzhardinge wrote:
> On 09/02/2011 01:47 PM, Peter Zijlstra wrote:
> > On Fri, 2011-09-02 at 12:29 -0700, Jeremy Fitzhardinge wrote:
> >>> I know that its generally considered bad form, but there's at least one
> >>> spinlock that's only taken from NMI context and thus hasn't got any
> >>> deadlock potential.
> >> Which one? 
> > arch/x86/kernel/traps.c:nmi_reason_lock
> >
> > It serializes NMI access to the NMI reason port across CPUs.
> 
> Ah, OK.  Well, that will never happen in a PV Xen guest.  But PV
> ticketlocks are equally applicable to an HVM Xen domain (and KVM guest),
> so I guess there's at least some chance there could be a virtual
> emulated NMI.  Maybe?  Does qemu do that kind of thing?

Xen knows how to inject NMIs to HVM guests, even though I am not sure
if it is actually done in practice at the moment.

However digging into the implementation details, it looks like virtual
NMIs are not injected if blocking-by-STI (or blocking-by-MOV-SS), so we
should be fine, even though I don't know if you actually want to rely on
this:

/*
 * We can only inject an NMI if no blocking by MOV SS (also, depending on
 * implementation, if no blocking by STI). If pin-based 'virtual NMIs'
 * control is specified then the NMI-blocking interruptibility flag is
 * also checked. The 'virtual NMI pending' control (available only in
 * conjunction with 'virtual NMIs') causes a VM exit when all these checks
 * succeed. It will exit immediately after VM entry if the checks succeed
 * at that point.
 * 
 * Because a processor may or may not check blocking-by-STI when injecting
 * a virtual NMI, it will be necessary to convert that to block-by-MOV-SS
 * before specifying the 'virtual NMI pending' control. Otherwise we could
 * enter an infinite loop where we check blocking-by-STI in software and
 * thus delay delivery of a virtual NMI, but the processor causes immediate
 * VM exit because it does not check blocking-by-STI.
 */

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-02 21:50         ` Jeremy Fitzhardinge
                             ` (2 preceding siblings ...)
  2011-09-05 11:52           ` Stefano Stabellini
@ 2011-09-05 12:05           ` Avi Kivity
  2011-09-06 15:14           ` Don Zickus
  4 siblings, 0 replies; 86+ messages in thread
From: Avi Kivity @ 2011-09-05 12:05 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Peter Zijlstra, H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Marcelo Tosatti, KVM, Andi Kleen, Xen Devel, Jeremy Fitzhardinge,
	Stefano Stabellini

On 09/03/2011 12:50 AM, Jeremy Fitzhardinge wrote:
> On 09/02/2011 01:47 PM, Peter Zijlstra wrote:
> >  On Fri, 2011-09-02 at 12:29 -0700, Jeremy Fitzhardinge wrote:
> >>>  I know that its generally considered bad form, but there's at least one
> >>>  spinlock that's only taken from NMI context and thus hasn't got any
> >>>  deadlock potential.
> >>  Which one?
> >  arch/x86/kernel/traps.c:nmi_reason_lock
> >
> >  It serializes NMI access to the NMI reason port across CPUs.
>
> Ah, OK.  Well, that will never happen in a PV Xen guest.  But PV
> ticketlocks are equally applicable to an HVM Xen domain (and KVM guest),
> so I guess there's at least some chance there could be a virtual
> emulated NMI.  Maybe?  Does qemu do that kind of thing?

kvm does.  In fact, I want to use 'hlt' for blocking vcpus in the 
slowpath and an NMI IPI for waking them up.  That's not going to work in 
an NMI, but I guess I can replace the 'hlt' with a 'pause' if we're in 
an NMI, and just spin there.

btw, doing a race-free NMI wakeup is going to be interesting - we'll 
have to look at %rip and see if it is just before our hlt, since there's no
magic sti; hlt sequence we can use.
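
Purely to illustrate that fixup (all names invented, nothing from an
actual kvm patch):

	/* waiter: block until the lock holder sends us an NMI IPI */
	static void pv_wait(void)
	{
		asm volatile(".globl pv_halt_insn\n"
			     "pv_halt_insn: hlt" ::: "memory");
	}

	extern char pv_halt_insn[];

	/* NMI path: if we interrupted the hlt itself, the wakeup raced
	 * with going to sleep, so step over the one-byte hlt */
	static void pv_wakeup_fixup(struct pt_regs *regs)
	{
		if (regs->ip == (unsigned long)pv_halt_insn)
			regs->ip += 1;
	}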

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-02 21:50         ` Jeremy Fitzhardinge
                             ` (3 preceding siblings ...)
  2011-09-05 12:05           ` Avi Kivity
@ 2011-09-06 15:14           ` Don Zickus
  2011-09-06 18:07             ` Jeremy Fitzhardinge
  4 siblings, 1 reply; 86+ messages in thread
From: Don Zickus @ 2011-09-06 15:14 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Peter Zijlstra, H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, Stefano Stabellini

On Fri, Sep 02, 2011 at 02:50:53PM -0700, Jeremy Fitzhardinge wrote:
> On 09/02/2011 01:47 PM, Peter Zijlstra wrote:
> > On Fri, 2011-09-02 at 12:29 -0700, Jeremy Fitzhardinge wrote:
> >>> I know that its generally considered bad form, but there's at least one
> >>> spinlock that's only taken from NMI context and thus hasn't got any
> >>> deadlock potential.
> >> Which one? 
> > arch/x86/kernel/traps.c:nmi_reason_lock
> >
> > It serializes NMI access to the NMI reason port across CPUs.
> 
> Ah, OK.  Well, that will never happen in a PV Xen guest.  But PV
> ticketlocks are equally applicable to an HVM Xen domain (and KVM guest),
> so I guess there's at least some chance there could be a virtual
> emulated NMI.  Maybe?  Does qemu do that kind of thing?
> 
> But, erm, does that even make sense?  I'm assuming the NMI reason port
> tells the CPU why it got an NMI.  If multiple CPUs can get NMIs and
> there's only a single reason port, then doesn't that mean that either 1)
> they all got the NMI for the same reason, or 2) having a single port is
> inherently racy?  How does the locking actually work there?

The reason port is for an external/system NMI.  All the IPI-NMI don't need
to access this register to process their handlers, ie perf.  I think in
general the IOAPIC is configured to deliver the external NMI to one cpu,
usually the bsp cpu.  However, there has been a slow movement to free the
bsp cpu from exceptions like this to allow one to eventually hot-swap the
bsp cpu.  The spin locks in that code were an attempt to be more abstract
about who really gets the external NMI.  Of course SGI's box is setup to
deliver an external NMI to all cpus to dump the stack when the system
isn't behaving.

This is a very low usage NMI (in fact almost all cases lead to loud
console messages).

Hope that clears up some of the confusion.

Cheers,
Don

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-06 15:14           ` Don Zickus
@ 2011-09-06 18:07             ` Jeremy Fitzhardinge
  2011-09-06 18:27               ` Don Zickus
  0 siblings, 1 reply; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-06 18:07 UTC (permalink / raw)
  To: Don Zickus
  Cc: Peter Zijlstra, H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, Stefano Stabellini

On 09/06/2011 08:14 AM, Don Zickus wrote:
> On Fri, Sep 02, 2011 at 02:50:53PM -0700, Jeremy Fitzhardinge wrote:
>> On 09/02/2011 01:47 PM, Peter Zijlstra wrote:
>>> On Fri, 2011-09-02 at 12:29 -0700, Jeremy Fitzhardinge wrote:
>>>>> I know that its generally considered bad form, but there's at least one
>>>>> spinlock that's only taken from NMI context and thus hasn't got any
>>>>> deadlock potential.
>>>> Which one? 
>>> arch/x86/kernel/traps.c:nmi_reason_lock
>>>
>>> It serializes NMI access to the NMI reason port across CPUs.
>> Ah, OK.  Well, that will never happen in a PV Xen guest.  But PV
>> ticketlocks are equally applicable to an HVM Xen domain (and KVM guest),
>> so I guess there's at least some chance there could be a virtual
>> emulated NMI.  Maybe?  Does qemu do that kind of thing?
>>
>> But, erm, does that even make sense?  I'm assuming the NMI reason port
>> tells the CPU why it got an NMI.  If multiple CPUs can get NMIs and
>> there's only a single reason port, then doesn't that mean that either 1)
>> they all got the NMI for the same reason, or 2) having a single port is
>> inherently racy?  How does the locking actually work there?
> The reason port is for an external/system NMI.  All the IPI-NMI don't need
> to access this register to process their handlers, ie perf.  I think in
> general the IOAPIC is configured to deliver the external NMI to one cpu,
> usually the bsp cpu.  However, there has been a slow movement to free the
> bsp cpu from exceptions like this to allow one to eventually hot-swap the
> bsp cpu.  The spin locks in that code were an attempt to be more abstract
> about who really gets the external NMI.  Of course SGI's box is setup to
> deliver an external NMI to all cpus to dump the stack when the system
> isn't behaving.
>
> This is a very low usage NMI (in fact almost all cases lead to loud
> console messages).
>
> Hope that clears up some of the confusion.

Hm, not really.

What does it mean if two CPUs go down that path?  Should one do some NMI
processing while the other waits around for it to finish, and then do
some NMI processing on its own?

It sounds like that could only happen if you reroute NMI from one CPU to
another while the first CPU is actually in the middle of processing an
NMI - in which case, shouldn't the code doing the re-routing be taking
the spinlock?

Or perhaps a spinlock isn't the right primitive to use at all?  Couldn't
the second CPU just set a flag/counter (using something like an atomic
add/cmpxchg/etc) to make the first CPU process the second NMI?
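
Something along these lines, say (hypothetical names, just to
illustrate the flag/counter idea):

	static atomic_t ext_nmi_pending;

	void external_nmi(struct pt_regs *regs)
	{
		if (atomic_inc_return(&ext_nmi_pending) != 1)
			return;		/* the first CPU will pick this up too */

		do {
			handle_reason_port(regs);	/* hypothetical worker */
		} while (atomic_dec_return(&ext_nmi_pending));
	}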

But on the other hand, I don't really care if you can say that this path
will never be called in a virtual machine.

    J

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-06 18:07             ` Jeremy Fitzhardinge
@ 2011-09-06 18:27               ` Don Zickus
  2011-09-06 18:50                   ` Jeremy Fitzhardinge
  2011-09-07  4:13                 ` Avi Kivity
  0 siblings, 2 replies; 86+ messages in thread
From: Don Zickus @ 2011-09-06 18:27 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Peter Zijlstra, H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, Stefano Stabellini

On Tue, Sep 06, 2011 at 11:07:26AM -0700, Jeremy Fitzhardinge wrote:
> >> But, erm, does that even make sense?  I'm assuming the NMI reason port
> >> tells the CPU why it got an NMI.  If multiple CPUs can get NMIs and
> >> there's only a single reason port, then doesn't that mean that either 1)
> >> they all got the NMI for the same reason, or 2) having a single port is
> >> inherently racy?  How does the locking actually work there?
> > The reason port is for an external/system NMI.  All the IPI-NMI don't need
> > to access this register to process their handlers, ie perf.  I think in
> > general the IOAPIC is configured to deliver the external NMI to one cpu,
> > usually the bsp cpu.  However, there has been a slow movement to free the
> > bsp cpu from exceptions like this to allow one to eventually hot-swap the
> > bsp cpu.  The spin locks in that code were an attempt to be more abstract
> > about who really gets the external NMI.  Of course SGI's box is setup to
> > deliver an external NMI to all cpus to dump the stack when the system
> > isn't behaving.
> >
> > This is a very low usage NMI (in fact almost all cases lead to loud
> > console messages).
> >
> > Hope that clears up some of the confusion.
> 
> Hm, not really.
> 
> What does it mean if two CPUs go down that path?  Should one do some NMI
> processing while the other waits around for it to finish, and then do
> some NMI processing on its own?

Well the time the second one gets to the external NMI it should have been
cleared by the first cpu, which would of course lead to the second cpu
causing a 'Dazed and confused' message.  But on most x86 machines only one
cpu should be routed the external NMI.  Though there is an SGI box that is
designed to send an external NMI to all of its cpus.

> 
> It sounds like that could only happen if you reroute NMI from one CPU to
> another while the first CPU is actually in the middle of processing an
> NMI - in which case, shouldn't the code doing the re-routing be taking
> the spinlock?

Perhaps, but like I said it is a slow transition because most people don't
have the hardware to test this (nor does it work due to other
limitations).

> 
> Or perhaps a spinlock isn't the right primitive to use at all?  Couldn't
> the second CPU just set a flag/counter (using something like an atomic
> add/cmpxchg/etc) to make the first CPU process the second NMI?

Might be a smarter approach.  Like I said it is hard to test without
functioning hardware. :-(

> 
> But on the other hand, I don't really care if you can say that this path
> will never be called in a virtual machine.

Does virtual machines support hot remove of cpus?  Probably not
considering bare-metal barely supports it.

Cheers,
Don

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-06 18:27               ` Don Zickus
@ 2011-09-06 18:50                   ` Jeremy Fitzhardinge
  2011-09-07  4:13                 ` Avi Kivity
  1 sibling, 0 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-06 18:50 UTC (permalink / raw)
  To: Don Zickus
  Cc: Peter Zijlstra, H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, Stefano Stabellini

On 09/06/2011 11:27 AM, Don Zickus wrote:
>> But on the other hand, I don't really care if you can say that this path
>> will never be called in a virtual machine.
> Does virtual machines support hot remove of cpus?  Probably not
> considering bare-metal barely supports it.

The only reason you'd want to is to add/remove VCPUs as a mechanism of
resource control, so if you were removing a VCPU it wouldn't matter much
which one you choose.  In other words, there's no reason you'd ever need
to remove the BSP in favour of one of the other CPUs.

Anyway, I'm not going to lose any sleep over this issue.

    J

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks
  2011-09-02 11:22   ` Stefano Stabellini
  (?)
@ 2011-09-06 19:33   ` Jeremy Fitzhardinge
  -1 siblings, 0 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-06 19:33 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: H. Peter Anvin, Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

On 09/02/2011 04:22 AM, Stefano Stabellini wrote:
> do you have a git tree somewhere with this series? 

git://github.com/jsgf/linux-xen.git upstream/pvticketlock-slowflag

    J

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-06 18:27               ` Don Zickus
  2011-09-06 18:50                   ` Jeremy Fitzhardinge
@ 2011-09-07  4:13                 ` Avi Kivity
  2011-09-07 13:44                     ` Don Zickus
  1 sibling, 1 reply; 86+ messages in thread
From: Avi Kivity @ 2011-09-07  4:13 UTC (permalink / raw)
  To: Don Zickus
  Cc: Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Andi Kleen, Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

On 09/06/2011 09:27 PM, Don Zickus wrote:
> On Tue, Sep 06, 2011 at 11:07:26AM -0700, Jeremy Fitzhardinge wrote:
> >  >>  But, erm, does that even make sense?  I'm assuming the NMI reason port
> >  >>  tells the CPU why it got an NMI.  If multiple CPUs can get NMIs and
> >  >>  there's only a single reason port, then doesn't that mean that either 1)
> >  >>  they all got the NMI for the same reason, or 2) having a single port is
> >  >>  inherently racy?  How does the locking actually work there?
> >  >  The reason port is for an external/system NMI.  All the IPI-NMI don't need
> >  >  to access this register to process their handlers, ie perf.  I think in
> >  >  general the IOAPIC is configured to deliver the external NMI to one cpu,
> >  >  usually the bsp cpu.  However, there has been a slow movement to free the
> >  >  bsp cpu from exceptions like this to allow one to eventually hot-swap the
> >  >  bsp cpu.  The spin locks in that code were an attempt to be more abstract
> >  >  about who really gets the external NMI.  Of course SGI's box is setup to
> >  >  deliver an external NMI to all cpus to dump the stack when the system
> >  >  isn't behaving.
> >  >
> >  >  This is a very low usage NMI (in fact almost all cases lead to loud
> >  >  console messages).
> >  >
> >  >  Hope that clears up some of the confusion.
> >
> >  Hm, not really.
> >
> >  What does it mean if two CPUs go down that path?  Should one do some NMI
> >  processing while the other waits around for it to finish, and then do
> >  some NMI processing on its own?
>
> Well the time the second one gets to the external NMI it should have been
> cleared by the first cpu, which would of course lead to the second cpu
> causing a 'Dazed and confused' message.  But on most x86 machines only one
> cpu should be routed the external NMI.  Though there is an SGI box that is
> designed to send an external NMI to all of its cpus.

Is there a way to tell whether an NMI was internally or externally 
generated?

I don't think so, especially as two or more NMIs can be coalesced.  So 
any NMI received on this first cpu has to check the NMI reason port?

> >
> >  But on the other hand, I don't really care if you can say that this path
> >  will never be called in a virtual machine.
>
> Does virtual machines support hot remove of cpus?  Probably not
> considering bare-metal barely supports it.
>

They do.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-07  4:13                 ` Avi Kivity
@ 2011-09-07 13:44                     ` Don Zickus
  0 siblings, 0 replies; 86+ messages in thread
From: Don Zickus @ 2011-09-07 13:44 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Andi Kleen, Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

On Wed, Sep 07, 2011 at 07:13:58AM +0300, Avi Kivity wrote:
> On 09/06/2011 09:27 PM, Don Zickus wrote:
> >On Tue, Sep 06, 2011 at 11:07:26AM -0700, Jeremy Fitzhardinge wrote:
> >>  >>  But, erm, does that even make sense?  I'm assuming the NMI reason port
> >>  >>  tells the CPU why it got an NMI.  If multiple CPUs can get NMIs and
> >>  >>  there's only a single reason port, then doesn't that mean that either 1)
> >>  >>  they all got the NMI for the same reason, or 2) having a single port is
> >>  >>  inherently racy?  How does the locking actually work there?
> >>  >  The reason port is for an external/system NMI.  All the IPI-NMI don't need
> >>  >  to access this register to process their handlers, ie perf.  I think in
> >>  >  general the IOAPIC is configured to deliver the external NMI to one cpu,
> >>  >  usually the bsp cpu.  However, there has been a slow movement to free the
> >>  >  bsp cpu from exceptions like this to allow one to eventually hot-swap the
> >>  >  bsp cpu.  The spin locks in that code were an attempt to be more abstract
> >>  >  about who really gets the external NMI.  Of course SGI's box is setup to
> >>  >  deliver an external NMI to all cpus to dump the stack when the system
> >>  >  isn't behaving.
> >>  >
> >>  >  This is a very low usage NMI (in fact almost all cases lead to loud
> >>  >  console messages).
> >>  >
> >>  >  Hope that clears up some of the confusion.
> >>
> >>  Hm, not really.
> >>
> >>  What does it mean if two CPUs go down that path?  Should one do some NMI
> >>  processing while the other waits around for it to finish, and then do
> >>  some NMI processing on its own?
> >
> >Well the time the second one gets to the external NMI it should have been
> >cleared by the first cpu, which would of course lead to the second cpu
> >causing a 'Dazed and confused' message.  But on most x86 machines only one
> >cpu should be routed the external NMI.  Though there is an SGI box that is
> >designed to send an external NMI to all of its cpus.
> 
> Is there a way to tell whether an NMI was internally or externally
> generated?
> 
> I don't think so, especially as two or more NMIs can be coalesced.
> So any NMI received on this first cpu has to check the NMI reason
> port?

Well we cheat and execute all the nmi handlers first.  If they come back
as handled, we skip the check for the external NMI.

But you are right, other than checking the reason port, there isn't a way
to determine if an NMI is internally or externally generated.
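
The flow is roughly, sketching rather than quoting the code (both
helper names below are made up for illustration):

	if (run_registered_nmi_handlers(regs))	/* perf, watchdog, ... */
		return;				/* handled: skip the reason port */
	check_nmi_reason_port(regs);		/* SERR/IOCHK, else "Dazed and confused" */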

> 
> >>
> >>  But on the other hand, I don't really care if you can say that this path
> >>  will never be called in a virtual machine.
> >
> >Does virtual machines support hot remove of cpus?  Probably not
> >considering bare-metal barely supports it.
> >
> 
> They do.

But vcpus probably don't have the notion of a bsp cpu, so perhaps virtual
machines can get away with it easier?  (I don't know enough about the hot
cpu remove code to really explain it, just enough to know it can cause
problems and people are trying to address it).

Cheers,
Don

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-07 13:44                     ` Don Zickus
@ 2011-09-07 15:11                       ` Avi Kivity
  -1 siblings, 0 replies; 86+ messages in thread
From: Avi Kivity @ 2011-09-07 15:11 UTC (permalink / raw)
  To: Don Zickus
  Cc: Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Andi Kleen, Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

On 09/07/2011 04:44 PM, Don Zickus wrote:
> >
> >  Is there a way to tell whether an NMI was internally or externally
> >  generated?
> >
> >  I don't think so, especially as two or more NMIs can be coalesced.
> >  So any NMI received on this first cpu has to check the NMI reason
> >  port?
>
> Well we cheat and execute all the nmi handlers first.  If they come back
> as handled, we skip the check for the external NMI.

And hope that no other NMI was generated while we're handling this one.  
It's a little... fragile?

> But you are right, other than checking the reason port, there isn't a way
> to determine if an NMI is internally or externally generated.

Ouch.

>
> >
> >  >>
> >  >>   But on the other hand, I don't really care if you can say that this path
> >  >>   will never be called in a virtual machine.
> >  >
> >  >Does virtual machines support hot remove of cpus?  Probably not
> >  >considering bare-metal barely supports it.
> >  >
> >
> >  They do.
>
> But vcpus probably don't have the notion of a bsp cpu, so perhaps virtual
> machines can get away with it easier?  (I don't know enough about the hot
> cpu remove code to really explain it, just enough to know it can cause
> problems and people are trying to address it).
>

The concept of a bsp exists in exactly the same way as on real hardware.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-07 15:11                       ` Avi Kivity
@ 2011-09-07 15:56                         ` Don Zickus
  -1 siblings, 0 replies; 86+ messages in thread
From: Don Zickus @ 2011-09-07 15:56 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Andi Kleen, Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

On Wed, Sep 07, 2011 at 06:11:14PM +0300, Avi Kivity wrote:
> On 09/07/2011 04:44 PM, Don Zickus wrote:
> >>
> >>  Is there a way to tell whether an NMI was internally or externally
> >>  generated?
> >>
> >>  I don't think so, especially as two or more NMIs can be coalesced.
> >>  So any NMI received on this first cpu has to check the NMI reason
> >>  port?
> >
> >Well we cheat and execute all the nmi handlers first.  If they come back
> >as handled, we skip the check for the external NMI.
> 
> And hope that no other NMI was generated while we're handling this
> one.  It's a little... fragile?

No.  If another NMI is generated while we are processing the current one
it should get latched.  Upon completion of the current one, the cpu should
jump right back into the nmi exception routine again.  The only downside
is when multiple NMIs come in during the processing of the current one.
Only one can be latched, so the others get dropped.  But we are addressing
that.

Cheers,
Don

> 
> >But you are right, other than checking the reason port, there isn't a way
> >to determine if an NMI is internally or externally generated.
> 
> Ouch.
> 
> >
> >>
> >>  >>
> >>  >>   But on the other hand, I don't really care if you can say that this path
> >>  >>   will never be called in a virtual machine.
> >>  >
> >>  >Does virtual machines support hot remove of cpus?  Probably not
> >>  >considering bare-metal barely supports it.
> >>  >
> >>
> >>  They do.
> >
> >But vcpus probably don't have the notion of a bsp cpu, so perhaps virtual
> >machines can get away with it easier?  (I don't know enough about the hot
> >cpu remove code to really explain it, just enough to know it can cause
> >problems and people are trying to address it).
> >
> 
> The concept of a bsp exists in exactly the same way as on real hardware.
> 
> -- 
> error compiling committee.c: too many arguments to function
> 

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-07 15:56                         ` Don Zickus
  (?)
@ 2011-09-07 16:25                         ` Avi Kivity
  2011-09-07 16:52                           ` Don Zickus
  -1 siblings, 1 reply; 86+ messages in thread
From: Avi Kivity @ 2011-09-07 16:25 UTC (permalink / raw)
  To: Don Zickus
  Cc: Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Andi Kleen, Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

On 09/07/2011 06:56 PM, Don Zickus wrote:
> >
> >  And hope that no other NMI was generated while we're handling this
> >  one.  It's a little... fragile?
>
> No.  If another NMI is generated while we are processing the current one
> it should get latched.  Upon completion of the current one, the cpu should
> jump right back into the nmi exception routine again.  The only downside
> is when multiple NMIs come in during the processing of the current one.
> Only one can be latched, so the others get dropped.

Ah, yes, I remember now.

> But we are addressing
> that.
>

May I ask how?  Detecting a back-to-back NMI?

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-07 16:25                         ` Avi Kivity
@ 2011-09-07 16:52                           ` Don Zickus
  2011-09-07 17:09                             ` Avi Kivity
  0 siblings, 1 reply; 86+ messages in thread
From: Don Zickus @ 2011-09-07 16:52 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Andi Kleen, Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

On Wed, Sep 07, 2011 at 07:25:24PM +0300, Avi Kivity wrote:
> On 09/07/2011 06:56 PM, Don Zickus wrote:
> >>
> >>  And hope that no other NMI was generated while we're handling this
> >>  one.  It's a little... fragile?
> >
> >No.  If another NMI is generated while we are processing the current one
> >it should get latched.  Upon completion of the current one, the cpu should
> >jump right back into the nmi exception routine again.  The only downside
> >is when multiple NMIs come in during the processing of the current one.
> >Only one can be latched, so the others get dropped.
> 
> Ah, yes, I remember now.
> 
> >But we are addressing
> >that.
> >
> 
> May I ask how?  Detecting a back-to-back NMI?

Pretty boring actually.  Currently we execute an NMI handler until one of
them returns handled.  Then we stop.  This may cause us to miss an NMI in
the case of multiple NMIs at once.  Now we are changing it to execute
_all_ the handlers to make sure we didn't miss one.  But then the downside
here is we accidentally handle an NMI that was latched.  This would cause
a 'Dazed and confused' message as that NMI was already handled by the
previous NMI.

We are working on an algorithm to detect this condition and flag it
(nothing complicated).  But it may never be perfect.

On the other hand, what else are we going to do with an edge-triggered
shared interrupt line?

Cheers,
Don

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-07 16:52                           ` Don Zickus
@ 2011-09-07 17:09                             ` Avi Kivity
  2011-09-07 17:17                               ` Jeremy Fitzhardinge
                                                 ` (2 more replies)
  0 siblings, 3 replies; 86+ messages in thread
From: Avi Kivity @ 2011-09-07 17:09 UTC (permalink / raw)
  To: Don Zickus
  Cc: Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Andi Kleen, Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

On 09/07/2011 07:52 PM, Don Zickus wrote:
> >
> >  May I ask how?  Detecting a back-to-back NMI?
>
> Pretty boring actually.  Currently we execute an NMI handler until one of
> them returns handled.  Then we stop.  This may cause us to miss an NMI in
> the case of multiple NMIs at once.  Now we are changing it to execute
> _all_ the handlers to make sure we didn't miss one.

That's going to be pretty bad for kvm - those handlers become a lot more 
expensive since they involve reading MSRs.  Even worse if we start using 
NMIs as a wakeup for pv spinlocks as provided by this patchset.

> But then the downside
> here is we accidentally handle an NMI that was latched.  This would cause
> a 'Dazed and confused' message as that NMI was already handled by the
> previous NMI.
>
> We are working on an algorithm to detect this condition and flag it
> (nothing complicated).  But it may never be perfect.
>
> On the other hand, what else are we going to do with an edge-triggered
> shared interrupt line?
>

How about, during NMI, save %rip to a per-cpu variable.  Handle just one 
cause.  If, on the next NMI, we hit the same %rip, assume back-to-back 
NMI has occurred and now handle all causes.
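
Something like this sketch (names invented for illustration):

	static DEFINE_PER_CPU(unsigned long, last_nmi_rip);

	/* in the NMI handler: */
	bool b2b = (regs->ip == __this_cpu_read(last_nmi_rip));
	__this_cpu_write(last_nmi_rip, regs->ip);

	if (b2b)
		handle_all_causes(regs);	/* back-to-back: check everything */
	else
		handle_one_cause(regs);		/* stop at the first handled cause */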

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-07 17:09                             ` Avi Kivity
@ 2011-09-07 17:17                               ` Jeremy Fitzhardinge
  2011-09-07 17:41                                 ` Avi Kivity
  2011-09-07 17:21                                 ` Don Zickus
  2011-09-13 18:40                               ` Don Zickus
  2 siblings, 1 reply; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-07 17:17 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Don Zickus, Peter Zijlstra, H. Peter Anvin, Linus Torvalds,
	Ingo Molnar, the arch/x86 maintainers, Linux Kernel Mailing List,
	Nick Piggin, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, Stefano Stabellini

On 09/07/2011 10:09 AM, Avi Kivity wrote:
> On 09/07/2011 07:52 PM, Don Zickus wrote:
>> >
>> >  May I ask how?  Detecting a back-to-back NMI?
>>
>> Pretty boring actually.  Currently we execute an NMI handler until
>> one of
>> them returns handled.  Then we stop.  This may cause us to miss an
>> NMI in
>> the case of multiple NMIs at once.  Now we are changing it to execute
>> _all_ the handlers to make sure we didn't miss one.
>
> That's going to be pretty bad for kvm - those handlers become a lot
> more expensive since they involve reading MSRs.

How often are you going to get NMIs in a kvm guest?

>   Even worse if we start using NMIs as a wakeup for pv spinlocks as
> provided by this patchset.

Hm, I'm interested to know what you're thinking in more detail.  Can you
leave an NMI pending before you block in the same way you can with
"sti;halt" with normal interrupts?

I was thinking you might want to do something with monitor/mwait to
implement the blocking/kick ops. (Handwave)

    J

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-07 17:09                             ` Avi Kivity
@ 2011-09-07 17:21                                 ` Don Zickus
  2011-09-07 17:21                                 ` Don Zickus
  2011-09-13 18:40                               ` Don Zickus
  2 siblings, 0 replies; 86+ messages in thread
From: Don Zickus @ 2011-09-07 17:21 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Andi Kleen, Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

On Wed, Sep 07, 2011 at 08:09:37PM +0300, Avi Kivity wrote:
> On 09/07/2011 07:52 PM, Don Zickus wrote:
> >>
> >>  May I ask how?  Detecting a back-to-back NMI?
> >
> >Pretty boring actually.  Currently we execute an NMI handler until one of
> >them returns handled.  Then we stop.  This may cause us to miss an NMI in
> >the case of multiple NMIs at once.  Now we are changing it to execute
> >_all_ the handlers to make sure we didn't miss one.
> 
> That's going to be pretty bad for kvm - those handlers become a lot
> more expensive since they involve reading MSRs.  Even worse if we
> start using NMIs as a wakeup for pv spinlocks as provided by this
> patchset.

Oh.

> 
> >But then the downside
> >here is we accidentally handle an NMI that was latched.  This would cause
> >a 'Dazed and confused' message as that NMI was already handled by the
> >previous NMI.
> >
> >We are working on an algorithm to detect this condition and flag it
> >(nothing complicated).  But it may never be perfect.
> >
> >On the other hand, what else are we going to do with an edge-triggered
> >shared interrupt line?
> >
> 
> How about, during NMI, save %rip to a per-cpu variable.  Handle just
> one cause.  If, on the next NMI, we hit the same %rip, assume
> back-to-back NMI has occurred and now handle all causes.

I had a similar idea a couple of months ago while debugging a continuous
flow of back-to-back NMIs from a stress-test perf application and I
couldn't get it to work.  But let me try it again, because it does make
sense as an optimization.

Thanks,
Don

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-07 17:17                               ` Jeremy Fitzhardinge
@ 2011-09-07 17:41                                 ` Avi Kivity
  2011-09-07 19:09                                   ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 86+ messages in thread
From: Avi Kivity @ 2011-09-07 17:41 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Don Zickus, Peter Zijlstra, H. Peter Anvin, Linus Torvalds,
	Ingo Molnar, the arch/x86 maintainers, Linux Kernel Mailing List,
	Nick Piggin, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, Stefano Stabellini

On 09/07/2011 08:17 PM, Jeremy Fitzhardinge wrote:
> On 09/07/2011 10:09 AM, Avi Kivity wrote:
> >  On 09/07/2011 07:52 PM, Don Zickus wrote:
> >>  >
> >>  >   May I ask how?  Detecting a back-to-back NMI?
> >>
> >>  Pretty boring actually.  Currently we execute an NMI handler until
> >>  one of
> >>  them returns handled.  Then we stop.  This may cause us to miss an
> >>  NMI in
> >>  the case of multiple NMIs at once.  Now we are changing it to execute
> >>  _all_ the handlers to make sure we didn't miss one.
> >
> >  That's going to be pretty bad for kvm - those handlers become a lot
> >  more expensive since they involve reading MSRs.
>
> How often are you going to get NMIs in a kvm guest?

We'll soon have the perf-based watchdog firing every 60s worth of 
instructions or so.  But if we implement your new kick pvop using NMI 
then it can be _very_ often.

>
> >    Even worse if we start using NMIs as a wakeup for pv spinlocks as
> >  provided by this patchset.
>
> Hm, I'm interested to know what you're thinking in more detail.  Can you
> leave an NMI pending before you block in the same way you can with
> "sti;halt" with normal interrupts?

Nope.  But you can do

    if (regs->rip in critical section)
            regs->rip = after_halt;

and effectively emulate it.  The critical section is something like

     critical_section_start:
         if (woken_up)
             goto critical_section_end;
         hlt
     critical_section_end:
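
So the kick handler on the guest side only needs a check along these lines
(sketch only; the labels are the ones bracketing the hlt above):

     extern char critical_section_start[], critical_section_end[];

     if (regs->ip >= (unsigned long)critical_section_start &&
         regs->ip <  (unsigned long)critical_section_end)
             regs->ip = (unsigned long)critical_section_end;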

>
> I was thinking you might want to do something with monitor/mwait to
> implement the blocking/kick ops. (Handwave)
>

monitor/mwait are incredibly expensive to virtualize since they require 
write-protecting a page, IPIs flying everywhere and flushing tlbs, not 
to mention my lovely hugepages being broken up mercilessly.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-07 17:21                                 ` Don Zickus
  (?)
@ 2011-09-07 17:41                                 ` Avi Kivity
  -1 siblings, 0 replies; 86+ messages in thread
From: Avi Kivity @ 2011-09-07 17:41 UTC (permalink / raw)
  To: Don Zickus
  Cc: Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Andi Kleen, Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

On 09/07/2011 08:21 PM, Don Zickus wrote:
> >
> >  How about, during NMI, save %rip to a per-cpu variable.  Handle just
> >  one cause.  If, on the next NMI, we hit the same %rip, assume
> >  back-to-back NMI has occurred and now handle all causes.
>
> I had a similar idea a couple of months ago while debugging a continuous
> flow of back-to-back NMIs from a stress-test perf application and I
> couldn't get it to work.  But let me try it again, because it does make
> sense as an optimization.
>
>

Great, thanks.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-07 17:41                                 ` Avi Kivity
@ 2011-09-07 19:09                                   ` Jeremy Fitzhardinge
  2011-09-08  7:51                                     ` Avi Kivity
  0 siblings, 1 reply; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-07 19:09 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Don Zickus, Peter Zijlstra, H. Peter Anvin, Linus Torvalds,
	Ingo Molnar, the arch/x86 maintainers, Linux Kernel Mailing List,
	Nick Piggin, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, Stefano Stabellini

On 09/07/2011 10:41 AM, Avi Kivity wrote:
>> Hm, I'm interested to know what you're thinking in more detail.  Can you
>> leave an NMI pending before you block in the same way you can with
>> "sti;halt" with normal interrupts?
>
>
> Nope.  But you can do
>
>    if (regs->rip in critical section)
>            regs->rip = after_halt;
>
> and effectively emulate it.  The critical section is something like
>
>     critical_section_start:
>         if (woken_up)
>             goto critical_section_end;
>         hlt
>     critical_section_end:

Hm.  It's a pity you have to deliver an actual interrupt to implement
the kick though.

>>
>> I was thinking you might want to do something with monitor/mwait to
>> implement the blocking/kick ops. (Handwave)
>>
>
> monitor/mwait are incredibly expensive to virtualize since they
> require write-protecting a page, IPIs flying everywhere and flushing
> tlbs, not to mention my lovely hugepages being broken up mercilessly.

Or what about a futex-like hypercall?

    J


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-07 19:09                                   ` Jeremy Fitzhardinge
@ 2011-09-08  7:51                                     ` Avi Kivity
  2011-09-08 17:29                                         ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 86+ messages in thread
From: Avi Kivity @ 2011-09-08  7:51 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Don Zickus, Peter Zijlstra, H. Peter Anvin, Linus Torvalds,
	Ingo Molnar, the arch/x86 maintainers, Linux Kernel Mailing List,
	Nick Piggin, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, Stefano Stabellini

On 09/07/2011 10:09 PM, Jeremy Fitzhardinge wrote:
> On 09/07/2011 10:41 AM, Avi Kivity wrote:
> >>  Hm, I'm interested to know what you're thinking in more detail.  Can you
> >>  leave an NMI pending before you block in the same way you can with
> >>  "sti;halt" with normal interrupts?
> >
> >
> >  Nope.  But you can do
> >
> >     if (regs->rip in critical section)
> >             regs->rip = after_halt;
> >
> >  and effectively emulate it.  The critical section is something like
> >
> >      critical_section_start:
> >          if (woken_up)
> >              goto critical_section_end;
> >          hlt
> >      critical_section_end:
>
> Hm.  It's a pity you have to deliver an actual interrupt to implement
> the kick though.

I don't think it's that expensive, especially compared to the 
double-context-switch and vmexit of the spinner going to sleep.  On AMD 
we do have to take an extra vmexit (on IRET) though.

> >>
> >>  I was thinking you might want to do something with monitor/mwait to
> >>  implement the blocking/kick ops. (Handwave)
> >>
> >
> >  monitor/mwait are incredibly expensive to virtualize since they
> >  require write-protecting a page, IPIs flying everywhere and flushing
> >  tlbs, not to mention my lovely hugepages being broken up mercilessly.
>
> Or what about a futex-like hypercall?
>

Well we could have a specialized sleep/wakeup hypercall pair like Xen, 
but I'd like to avoid it if at all possible.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-08  7:51                                     ` Avi Kivity
@ 2011-09-08 17:29                                         ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 86+ messages in thread
From: Jeremy Fitzhardinge @ 2011-09-08 17:29 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Don Zickus, Peter Zijlstra, H. Peter Anvin, Linus Torvalds,
	Ingo Molnar, the arch/x86 maintainers, Linux Kernel Mailing List,
	Nick Piggin, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, Stefano Stabellini

On 09/08/2011 12:51 AM, Avi Kivity wrote:
> On 09/07/2011 10:09 PM, Jeremy Fitzhardinge wrote:
>> On 09/07/2011 10:41 AM, Avi Kivity wrote:
>> >>  Hm, I'm interested to know what you're thinking in more detail. 
>> Can you
>> >>  leave an NMI pending before you block in the same way you can with
>> >>  "sti;halt" with normal interrupts?
>> >
>> >
>> >  Nope.  But you can do
>> >
>> >     if (regs->rip in critical section)
>> >             regs->rip = after_halt;
>> >
>> >  and effectively emulate it.  The critical section is something like
>> >
>> >      critical_section_start:
>> >          if (woken_up)
>> >              goto critical_section_end;
>> >          hlt
>> >      critical_section_end:
>>
>> Hm.  It's a pity you have to deliver an actual interrupt to implement
>> the kick though.
>
> I don't think it's that expensive, especially compared to the
> double-context-switch and vmexit of the spinner going to sleep.  On
> AMD we do have to take an extra vmexit (on IRET) though.

Fair enough - so if the vcpu blocks itself, it ends up being rescheduled
in the NMI handler, which then returns to the lock slowpath.  And if it's
a normal hlt, then you can also take interrupts if they're enabled while
spinning.

And if you get nested NMIs (since you can get multiple spurious kicks,
or from other NMI sources), then one NMI will get latched and any others
will get dropped?

> Well we could have a specialized sleep/wakeup hypercall pair like Xen,
> but I'd like to avoid it if at all possible.

Yeah, that's something that just falls out of the existing event channel
machinery, so it isn't something that I specifically added.  But it does
mean that you simply end up with a hypercall returning on kick, with no
real complexities.
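
(For reference, the blocking side of the Xen backend in this series boils
down to roughly the following; 'w' is the per-cpu "what I'm waiting for"
record, and the recheck helper is named here only for illustration:)

    w->want = want;                     /* ticket we are waiting for */
    w->lock = lock;

    xen_clear_irq_pending(irq);         /* don't lose a kick that already fired */

    /* recheck so we don't sleep if we were handed the lock meanwhile */
    if (!ticket_is_now_ours(lock, want))
        xen_poll_irq(irq);              /* SCHEDOP_poll: block until irq pends */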

    J

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-08 17:29                                         ` Jeremy Fitzhardinge
@ 2011-09-11  9:59                                           ` Avi Kivity
  -1 siblings, 0 replies; 86+ messages in thread
From: Avi Kivity @ 2011-09-11  9:59 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Don Zickus, Peter Zijlstra, H. Peter Anvin, Linus Torvalds,
	Ingo Molnar, the arch/x86 maintainers, Linux Kernel Mailing List,
	Nick Piggin, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, Stefano Stabellini

On 09/08/2011 08:29 PM, Jeremy Fitzhardinge wrote:
> >  I don't think it's that expensive, especially compared to the
> >  double-context-switch and vmexit of the spinner going to sleep.  On
> >  AMD we do have to take an extra vmexit (on IRET) though.
>
> Fair enough - so if the vcpu blocks itself, it ends up being rescheduled
> in the NMI handler, which then returns to the lock slowpath.  And if it's
> a normal hlt, then you can also take interrupts if they're enabled while
> spinning.

Yes.  To be clear, just execute 'hlt' and inherit the interrupt enable 
flag from the environment.

> And if you get nested NMIs (since you can get multiple spurious kicks,
> or from other NMI sources), then one NMI will get latched and any others
> will get dropped?

While we're in the NMI handler, any further NMIs will be collapsed and 
queued (so one NMI can be in service and just one other queued behind 
it).  We can detect this condition by checking %rip on stack.

>
> >  Well we could have a specialized sleep/wakeup hypercall pair like Xen,
> >  but I'd like to avoid it if at all possible.
>
> Yeah, that's something that just falls out of the existing event channel
> machinery, so it isn't something that I specifically added.  But it does
> mean that you simply end up with a hypercall returning on kick, with no
> real complexities.

It also has to return on interrupt, NMI, INIT etc.  "No real 
complexities" is a meaningless phrase on x86, though it is fertile 
ground for math puns.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-07 17:09                             ` Avi Kivity
  2011-09-07 17:17                               ` Jeremy Fitzhardinge
  2011-09-07 17:21                                 ` Don Zickus
@ 2011-09-13 18:40                               ` Don Zickus
  2011-09-13 19:03                                 ` Andi Kleen
  2 siblings, 1 reply; 86+ messages in thread
From: Don Zickus @ 2011-09-13 18:40 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Andi Kleen, Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

On Wed, Sep 07, 2011 at 08:09:37PM +0300, Avi Kivity wrote:
> >But then the downside
> >here is we accidentally handle an NMI that was latched.  This would cause
> >a 'Dazed and confused' message as that NMI was already handled by the
> >previous NMI.
> >
> >We are working on an algorithm to detect this condition and flag it
> >(nothing complicated).  But it may never be perfect.
> >
> >On the other hand, what else are we going to do with an edge-triggered
> >shared interrupt line?
> >
> 
> How about, during NMI, save %rip to a per-cpu variable.  Handle just
> one cause.  If, on the next NMI, we hit the same %rip, assume
> back-to-back NMI has occurred and now handle all causes.

So I got around to implementing this and it seems to work great.  The back
to back NMIs are detected properly using the %rip and that info is passed to
the NMI notifier.  That info is used to determine if only the first
handler to report 'handled' is executed or _all_ the handlers are
executed.

I think all the 'unknown' NMIs I generated with various perf runs have
disappeared.  I'll post a new version of my nmi notifier rewrite soon.
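
In sketch form the dispatch loop ends up looking something like this (the
type and list names here are only illustrative, not the code I'll post):

    struct nmi_handler *a;
    int handled = 0;

    list_for_each_entry_rcu(a, &nmi_handler_list, list) {
        handled += a->fn(type, regs);
        /* normal NMI: stop at the first handler that claims it;
         * detected back-to-back NMI: run every registered handler */
        if (handled && !b2b)
            break;
    }
    return handled;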

Thanks for the great suggestion Avi!

Cheers,
Don

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-13 18:40                               ` Don Zickus
@ 2011-09-13 19:03                                 ` Andi Kleen
  2011-09-13 19:21                                     ` Don Zickus
  2011-09-13 19:27                                   ` Don Zickus
  0 siblings, 2 replies; 86+ messages in thread
From: Andi Kleen @ 2011-09-13 19:03 UTC (permalink / raw)
  To: Don Zickus
  Cc: Avi Kivity, Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Andi Kleen, Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

> So I got around to implementing this and it seems to work great.  The back
> to back NMIs are detected properly using the %rip and that info is passed to
> the NMI notifier.  That info is used to determine if only the first
> handler to report 'handled' is executed or _all_ the handlers are
> executed.
> 
> I think all the 'unknown' NMIs I generated with various perf runs have
> disappeared.  I'll post a new version of my nmi notifier rewrite soon.

This will fail when the system is idle.

-Andi


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-13 19:03                                 ` Andi Kleen
@ 2011-09-13 19:21                                     ` Don Zickus
  2011-09-13 19:27                                   ` Don Zickus
  1 sibling, 0 replies; 86+ messages in thread
From: Don Zickus @ 2011-09-13 19:21 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Avi Kivity, Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

On Tue, Sep 13, 2011 at 09:03:20PM +0200, Andi Kleen wrote:
> > So I got around to implementing this and it seems to work great.  The back
> > to back NMIs are detected properly using the %rip and that info is passed to
> > the NMI notifier.  That info is used to determine if only the first
> > handler to report 'handled' is executed or _all_ the handlers are
> > executed.
> > 
> > I think all the 'unknown' NMIs I generated with various perf runs have
> > disappeared.  I'll post a new version of my nmi notifier rewrite soon.
> 
> This will fail when the system is idle.

Heh.  I don't think I explained what I was doing properly.

I had a bunch of perf runs going simultaneously on my Core2quad.  Probably
generated around 40,000 NMIs in a few minutes.  I think around a thousand or
so were detected as back-to-back NMIs.

With the current NMI detection algorithm in perf to determine back-to-back
NMIs, I can usually use the above scenario to generate an 'unknown' NMI.
Actually numerous ones before the box freezes. It is a false positive.

With my current code and Avi's idea, all those disappeared as they are now
'swallowed' by the algorithm.  That seemed like a positive.

However, being cautious, I decided to instrument lkdtm to inject NMIs to
force unknown NMI conditions
(apic->send_IPI_mask(cpumask_of(smp_processor_id()), NMI_VECTOR))

I tried to inject the NMIs while my trace_printk buffer was flooding my
screen with 'back-to-back NMIs detected'.  I did this 4 or 5 times and
every single one of them was detected as an 'unknown' NMI.  So this was
good too, in the sense it was not swallowing 'real' unknown NMIs.

Does that make sense?

Or are you saying an NMI in an idle system will have the same %rip thus
falsely detecting a back-to-back NMI?

Cheers,
Don

> 
> -Andi
> 

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-13 19:03                                 ` Andi Kleen
  2011-09-13 19:21                                     ` Don Zickus
@ 2011-09-13 19:27                                   ` Don Zickus
  1 sibling, 0 replies; 86+ messages in thread
From: Don Zickus @ 2011-09-13 19:27 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Avi Kivity, Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

On Tue, Sep 13, 2011 at 09:03:20PM +0200, Andi Kleen wrote:
> > So I got around to implementing this and it seems to work great.  The back
> > to back NMIs are detected properly using the %rip and that info is passed to
> > the NMI notifier.  That info is used to determine if only the first
> > handler to report 'handled' is executed or _all_ the handlers are
> > executed.
> > 
> > I think all the 'unknown' NMIs I generated with various perf runs have
> > disappeared.  I'll post a new version of my nmi notifier rewrite soon.
> 
> This will fail when the system is idle.

Oh one other thing I forgot to mention is that an NMI handler has to
return a value greater than 1, meaning that it handled multiple NMI events
during this NMI, to even enable the 'NMI swallowing' algorithm.
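
Roughly, as a sketch (the flag name is made up):

    static DEFINE_PER_CPU(bool, swallow_nmi);

    /* after running the handlers for this NMI */
    if (handled > 1)
        __this_cpu_write(swallow_nmi, true);

    /* ... later, on an NMI that nobody claims ... */
    if (__this_cpu_read(swallow_nmi))
        __this_cpu_write(swallow_nmi, false);   /* quietly swallow it */
    else
        unknown_nmi_error(reason, regs);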

Cheers,
Don

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-13 19:21                                     ` Don Zickus
  (?)
@ 2011-09-13 19:58                                     ` Andi Kleen
  2011-09-13 20:53                                       ` Don Zickus
  -1 siblings, 1 reply; 86+ messages in thread
From: Andi Kleen @ 2011-09-13 19:58 UTC (permalink / raw)
  To: Don Zickus
  Cc: Andi Kleen, Avi Kivity, Jeremy Fitzhardinge, Peter Zijlstra,
	H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Marcelo Tosatti, KVM, Xen Devel, Jeremy Fitzhardinge,
	Stefano Stabellini

> Or are you saying an NMI in an idle system will have the same %rip thus
> falsely detecting a back-to-back NMI?

Yup.

Another problem is very long running instructions, like WBINVD and some others.
If there's a high frequency NMI it may well hit multiple times in a single
 instance.

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only.

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-13 19:58                                     ` Andi Kleen
@ 2011-09-13 20:53                                       ` Don Zickus
  2011-09-13 21:04                                         ` Andi Kleen
  0 siblings, 1 reply; 86+ messages in thread
From: Don Zickus @ 2011-09-13 20:53 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Avi Kivity, Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

On Tue, Sep 13, 2011 at 09:58:38PM +0200, Andi Kleen wrote:
> > Or are you saying an NMI in an idle system will have the same %rip thus
> > falsely detecting a back-to-back NMI?
> 
> Yup.

Hmm.  That sucks.  Is there another register that can be used in
conjunction to separate it, like sp or something?  Or can we assume
that an idle cpu isn't doing much for local NMI IPIs and that the only
NMIs that would interrupt it would be external ones?

> 
> Another problem is very long running instructions, like WBINVD and some others.
> If there's a high frequency NMI it may well hit multiple times in a single
>  instance.

I thought NMIs happen on instruction boundaries, maybe not.

Honestly, our current implementation would probably be tripped up with
those examples too, so I don't think I am making things worse (assuming
the only high frequency NMI is coming from perf).

Cheers,
Don

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-13 20:53                                       ` Don Zickus
@ 2011-09-13 21:04                                         ` Andi Kleen
  0 siblings, 0 replies; 86+ messages in thread
From: Andi Kleen @ 2011-09-13 21:04 UTC (permalink / raw)
  To: Don Zickus
  Cc: Andi Kleen, Avi Kivity, Jeremy Fitzhardinge, Peter Zijlstra,
	H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Marcelo Tosatti, KVM, Xen Devel, Jeremy Fitzhardinge,
	Stefano Stabellini

On Tue, Sep 13, 2011 at 04:53:18PM -0400, Don Zickus wrote:
> On Tue, Sep 13, 2011 at 09:58:38PM +0200, Andi Kleen wrote:
> > > Or are you saying an NMI in an idle system will have the same %rip thus
> > > falsely detecting a back-to-back NMI?
> > 
> > Yup.
> 
> Hmm.  That sucks.  Is there another register that can be used in
> conjunction to seperate it, like sp or something?  Or we can we assume

Not that I know of.

> than an idle cpu isn't doing much for local NMI IPIs and that the only
> NMIs that would interrupt it would be external ones?

There can still be activity on the "idle" system, e.g. SMM or some 
hypervisor in the background. If you profile events those might trigger 
samples, but they will all be accounted to the MWAIT.

> > Another problem is very long running instructions, like WBINVD and some others.
> > If there's a high frequency NMI it may well hit multiple times in a single
> >  instance.
> 
> I thought NMIs happen on instruction boundaries, maybe not.

Yes, but there may be multiple queued up when the instruction has finished
executing. So you get multiple at the boundary.

> Honestly, our current implementation would probably be tripped up with
> those examples too, so I don't think I am making things worse (assuming
> the only high frequency NMI is coming from perf).

I wish perf/oprofile would just stop using NMIs.  The interrupt-off regions
are getting smaller and smaller and they can be profiled in a limited way
using PEBS anyway. Or maybe make it a knob that defaults to off.

-Andi


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-13 19:21                                     ` Don Zickus
  (?)
  (?)
@ 2011-09-14  7:00                                     ` Avi Kivity
  2011-09-14 12:49                                         ` Don Zickus
  2011-09-14 14:49                                       ` Andi Kleen
  -1 siblings, 2 replies; 86+ messages in thread
From: Avi Kivity @ 2011-09-14  7:00 UTC (permalink / raw)
  To: Don Zickus
  Cc: Andi Kleen, Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

On 09/13/2011 10:21 PM, Don Zickus wrote:
> Or are you saying an NMI in an idle system will have the same %rip thus
> falsely detecting a back-to-back NMI?
>
>

That's easy to avoid - insert an instruction zeroing the last nmi_rip 
somewhere before or after hlt.  It's always okay to execute such an 
instruction (outside the nmi handler itself), since nmi_rip is meant to 
detect a "no instructions executed" condition.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-14  7:00                                     ` Avi Kivity
@ 2011-09-14 12:49                                         ` Don Zickus
  2011-09-14 14:49                                       ` Andi Kleen
  1 sibling, 0 replies; 86+ messages in thread
From: Don Zickus @ 2011-09-14 12:49 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Andi Kleen, Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

On Wed, Sep 14, 2011 at 10:00:07AM +0300, Avi Kivity wrote:
> On 09/13/2011 10:21 PM, Don Zickus wrote:
> >Or are you saying an NMI in an idle system will have the same %rip thus
> >falsely detecting a back-to-back NMI?
> >
> >
> 
> That's easy to avoid - insert an instruction zeroing the last
> nmi_rip somewhere before or after hlt.  It's always okay to execute
> such an instruction (outside the nmi handler itself), since nmi_rip
> is meant to detect a "no instructions executed" condition.

Ah. Like a touch_nmi_watchdog() type of thing.  Interesting.  I'll poke
around the idle code.  Need to instrument a reproducer first.

Thanks,
Don

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-14  7:00                                     ` Avi Kivity
  2011-09-14 12:49                                         ` Don Zickus
@ 2011-09-14 14:49                                       ` Andi Kleen
  2011-09-14 15:01                                         ` Avi Kivity
  1 sibling, 1 reply; 86+ messages in thread
From: Andi Kleen @ 2011-09-14 14:49 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Don Zickus, Andi Kleen, Jeremy Fitzhardinge, Peter Zijlstra,
	H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Marcelo Tosatti, KVM, Xen Devel, Jeremy Fitzhardinge,
	Stefano Stabellini

On Wed, Sep 14, 2011 at 10:00:07AM +0300, Avi Kivity wrote:
> On 09/13/2011 10:21 PM, Don Zickus wrote:
> >Or are you saying an NMI in an idle system will have the same %rip thus
> >falsely detecting a back-to-back NMI?
> >
> >
> 
> That's easy to avoid - insert an instruction zeroing the last nmi_rip 
> somewhere before or after hlt.  It's always okay to execute such an 
> instruction (outside the nmi handler itself), since nmi_rip is meant to 
> detect a "no instructions executed" condition.

At least for classic hlt there is no simple "after hlt" because it's all
interrupt handlers and exceptions and everything else that can interrupt
combined.

It may work with newer MWAIT.

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only.

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-14 14:49                                       ` Andi Kleen
@ 2011-09-14 15:01                                         ` Avi Kivity
  2011-09-14 17:28                                             ` Andi Kleen
  0 siblings, 1 reply; 86+ messages in thread
From: Avi Kivity @ 2011-09-14 15:01 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Don Zickus, Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

On 09/14/2011 05:49 PM, Andi Kleen wrote:
> On Wed, Sep 14, 2011 at 10:00:07AM +0300, Avi Kivity wrote:
> >  On 09/13/2011 10:21 PM, Don Zickus wrote:
> >  >Or are you saying an NMI in an idle system will have the same %rip thus
> >  >falsely detecting a back-to-back NMI?
> >  >
> >  >
> >
> >  That's easy to avoid - insert an instruction zeroing the last nmi_rip
> >  somewhere before or after hlt.  It's always okay to execute such an
> >  instruction (outside the nmi handler itself), since nmi_rip is meant to
> >  detect a "no instructions executed" condition.
>
> At least for classic hlt there is no simple "after hlt" because it's all
> interrupt handlers and exceptions and everything else that can interrupt
> combined.

If an NMI hits in an interrupt handler, or in the "after hlt" section 
before the write-to-last-nmi-rip, then we'll see that %rip has changed.  
If it hits after the write-to-last-nmi-rip instruction (or in the hlt 
itself), then we'll also see that %rip has changed, due to the effect of 
that instruction.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-14 15:01                                         ` Avi Kivity
@ 2011-09-14 17:28                                             ` Andi Kleen
  0 siblings, 0 replies; 86+ messages in thread
From: Andi Kleen @ 2011-09-14 17:28 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Andi Kleen, Don Zickus, Jeremy Fitzhardinge, Peter Zijlstra,
	H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Marcelo Tosatti, KVM, Xen Devel, Jeremy Fitzhardinge,
	Stefano Stabellini

> If an NMI hits in an interrupt handler, or in the "after hlt" section 
> before the write-to-last-nmi-rip, then we'll see that %rip has changed.  
> If it hits after the write-to-last-nmi-rip instruction (or in the hlt 
> itself), then we'll also see that %rip has changed, due to the effect of 
> that instruction.

It won't handle multiple NMIs in halt. I assume that's reasonably common.

-Andi

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-14 17:28                                             ` Andi Kleen
  (?)
@ 2011-09-14 19:26                                             ` Avi Kivity
  2011-09-14 19:34                                                 ` Andi Kleen
  -1 siblings, 1 reply; 86+ messages in thread
From: Avi Kivity @ 2011-09-14 19:26 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Don Zickus, Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

On 09/14/2011 08:28 PM, Andi Kleen wrote:
> >  If an NMI hits in an interrupt handler, or in the "after hlt" section
> >  before the write-to-last-nmi-rip, then we'll see that %rip has changed.
> >  If it hits after the write-to-last-nmi-rip instruction (or in the hlt
> >  itself), then we'll also see that %rip has changed, due to the effect of
> >  that instruction.
>
> It won't handle multiple NMIs in halt. I assume that's reasonable common.
>

Why not?

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-14 19:26                                             ` Avi Kivity
@ 2011-09-14 19:34                                                 ` Andi Kleen
  0 siblings, 0 replies; 86+ messages in thread
From: Andi Kleen @ 2011-09-14 19:34 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Andi Kleen, Don Zickus, Jeremy Fitzhardinge, Peter Zijlstra,
	H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Marcelo Tosatti, KVM, Xen Devel, Jeremy Fitzhardinge,
	Stefano Stabellini

On Wed, Sep 14, 2011 at 10:26:21PM +0300, Avi Kivity wrote:
> On 09/14/2011 08:28 PM, Andi Kleen wrote:
> >>  If an NMI hits in an interrupt handler, or in the "after hlt" section
> >>  before the write-to-last-nmi-rip, then we'll see that %rip has changed.
> >>  If it hits after the write-to-last-nmi-rip instruction (or in the hlt
> >>  itself), then we'll also see that %rip has changed, due to the effect of
> >>  that instruction.
> >
> >It won't handle multiple NMIs in halt. I assume that's reasonable common.
> >
> 
> Why not?

They all have the same original RIPs and there is no way to distinguish
them.

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only.

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
  2011-09-14 19:34                                                 ` Andi Kleen
  (?)
@ 2011-09-14 19:56                                                 ` Avi Kivity
  -1 siblings, 0 replies; 86+ messages in thread
From: Avi Kivity @ 2011-09-14 19:56 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Don Zickus, Jeremy Fitzhardinge, Peter Zijlstra, H. Peter Anvin,
	Linus Torvalds, Ingo Molnar, the arch/x86 maintainers,
	Linux Kernel Mailing List, Nick Piggin, Marcelo Tosatti, KVM,
	Xen Devel, Jeremy Fitzhardinge, Stefano Stabellini

On 09/14/2011 10:34 PM, Andi Kleen wrote:
> On Wed, Sep 14, 2011 at 10:26:21PM +0300, Avi Kivity wrote:
> >  On 09/14/2011 08:28 PM, Andi Kleen wrote:
> >  >>   If an NMI hits in an interrupt handler, or in the "after hlt" section
> >  >>   before the write-to-last-nmi-rip, then we'll see that %rip has changed.
> >  >>   If it hits after the write-to-last-nmi-rip instruction (or in the hlt
> >  >>   itself), then we'll also see that %rip has changed, due to the effect of
> >  >>   that instruction.
> >  >
> >  >It won't handle multiple NMIs in halt. I assume that's reasonable common.
> >  >
> >
> >  Why not?
>
> They all have the same original RIPs and there is no way to distinguish
> them.
>

That's how we detect multiple NMIs.

1. First NMI is posted
2. NMI handler starts
3. 2nd NMI posted, queued
4. First NMI source handled
5. IRET
6. Queued NMI hits the core
7. back-to-back NMI detected (same rip)
8. Second (and third...) NMI source handled
9. Execution continues.


-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


^ permalink raw reply	[flat|nested] 86+ messages in thread

Thread overview: 86+ messages
2011-09-02  0:54 [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks Jeremy Fitzhardinge
2011-09-02  0:54 ` [PATCH 01/13] x86/spinlocks: replace pv spinlocks with pv ticketlocks Jeremy Fitzhardinge
2011-09-02  0:54 ` [PATCH 02/13] x86/ticketlock: collapse a layer of functions Jeremy Fitzhardinge
2011-09-02  0:54   ` Jeremy Fitzhardinge
2011-09-02  0:54 ` [PATCH 03/13] xen/pvticketlock: Xen implementation for PV ticket locks Jeremy Fitzhardinge
2011-09-02  0:54   ` Jeremy Fitzhardinge
2011-09-02  0:54 ` [PATCH 04/13] x86/pvticketlock: use callee-save for lock_spinning Jeremy Fitzhardinge
2011-09-02  0:54 ` [PATCH 05/13] x86/ticketlocks: when paravirtualizing ticket locks, increment by 2 Jeremy Fitzhardinge
2011-09-02  0:54 ` [PATCH 06/13] x86/ticketlock: add slowpath logic Jeremy Fitzhardinge
2011-09-02 18:46   ` Eric Northup
2011-09-02 19:32     ` Jeremy Fitzhardinge
2011-09-02  0:55 ` [PATCH 07/13] x86/ticketlocks: tidy up __ticket_unlock_kick() Jeremy Fitzhardinge
2011-09-02  0:55 ` [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking Jeremy Fitzhardinge
2011-09-02 14:47   ` Peter Zijlstra
2011-09-02 19:29     ` Jeremy Fitzhardinge
2011-09-02 19:29       ` Jeremy Fitzhardinge
2011-09-02 20:47       ` Peter Zijlstra
2011-09-02 20:47         ` Peter Zijlstra
2011-09-02 21:50         ` Jeremy Fitzhardinge
2011-09-02 22:37           ` Peter Zijlstra
2011-09-02 22:37             ` Peter Zijlstra
2011-09-02 23:14           ` Andi Kleen
2011-09-05 11:52           ` Stefano Stabellini
2011-09-05 12:05           ` Avi Kivity
2011-09-06 15:14           ` Don Zickus
2011-09-06 18:07             ` Jeremy Fitzhardinge
2011-09-06 18:27               ` Don Zickus
2011-09-06 18:50                 ` Jeremy Fitzhardinge
2011-09-06 18:50                   ` Jeremy Fitzhardinge
2011-09-07  4:13                 ` Avi Kivity
2011-09-07 13:44                   ` Don Zickus
2011-09-07 13:44                     ` Don Zickus
2011-09-07 15:11                     ` Avi Kivity
2011-09-07 15:11                       ` Avi Kivity
2011-09-07 15:56                       ` Don Zickus
2011-09-07 15:56                         ` Don Zickus
2011-09-07 16:25                         ` Avi Kivity
2011-09-07 16:52                           ` Don Zickus
2011-09-07 17:09                             ` Avi Kivity
2011-09-07 17:17                               ` Jeremy Fitzhardinge
2011-09-07 17:41                                 ` Avi Kivity
2011-09-07 19:09                                   ` Jeremy Fitzhardinge
2011-09-08  7:51                                     ` Avi Kivity
2011-09-08 17:29                                       ` Jeremy Fitzhardinge
2011-09-08 17:29                                         ` Jeremy Fitzhardinge
2011-09-11  9:59                                         ` Avi Kivity
2011-09-11  9:59                                           ` Avi Kivity
2011-09-07 17:21                               ` Don Zickus
2011-09-07 17:21                                 ` Don Zickus
2011-09-07 17:41                                 ` Avi Kivity
2011-09-13 18:40                               ` Don Zickus
2011-09-13 19:03                                 ` Andi Kleen
2011-09-13 19:21                                   ` Don Zickus
2011-09-13 19:21                                     ` Don Zickus
2011-09-13 19:58                                     ` Andi Kleen
2011-09-13 20:53                                       ` Don Zickus
2011-09-13 21:04                                         ` Andi Kleen
2011-09-14  7:00                                     ` Avi Kivity
2011-09-14 12:49                                       ` Don Zickus
2011-09-14 12:49                                         ` Don Zickus
2011-09-14 14:49                                       ` Andi Kleen
2011-09-14 15:01                                         ` Avi Kivity
2011-09-14 17:28                                           ` Andi Kleen
2011-09-14 17:28                                             ` Andi Kleen
2011-09-14 19:26                                             ` Avi Kivity
2011-09-14 19:34                                               ` Andi Kleen
2011-09-14 19:34                                                 ` Andi Kleen
2011-09-14 19:56                                                 ` Avi Kivity
2011-09-13 19:27                                   ` Don Zickus
2011-09-02  0:55 ` [PATCH 09/13] x86/pvticketlocks: we only need to kick if there's waiters Jeremy Fitzhardinge
2011-09-02  0:55 ` [PATCH 10/13] xen/pvticket: allow interrupts to be enabled while blocking Jeremy Fitzhardinge
2011-09-02 14:48   ` Peter Zijlstra
2011-09-02 19:30     ` Jeremy Fitzhardinge
2011-09-02  0:55 ` [PATCH 11/13] x86/ticketlock: only do kick after doing unlock Jeremy Fitzhardinge
2011-09-02 14:49   ` Peter Zijlstra
2011-09-02 19:34     ` Jeremy Fitzhardinge
2011-09-02 19:34       ` Jeremy Fitzhardinge
2011-09-02  0:55 ` [PATCH 12/13] x86/pvticketlock: make sure unlock_kick pvop call is inlined Jeremy Fitzhardinge
2011-09-02  0:55 ` [PATCH 13/13] x86/pvticketlock: use __ticket_t for pvop args Jeremy Fitzhardinge
2011-09-02 11:22 ` [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks Stefano Stabellini
2011-09-02 11:22   ` Stefano Stabellini
2011-09-06 19:33   ` Jeremy Fitzhardinge
2011-09-02 15:38 ` Linus Torvalds
2011-09-02 20:07   ` Jeremy Fitzhardinge
2011-09-02 20:27     ` Linus Torvalds
2011-09-02 21:42       ` Jeremy Fitzhardinge
