linux-kernel.vger.kernel.org archive mirror
* [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
@ 2012-05-02 10:06 Raghavendra K T
  2012-05-02 10:06 ` [PATCH RFC V8 1/17] x86/spinlock: Replace pv spinlocks with pv ticketlocks Raghavendra K T
                   ` (17 more replies)
  0 siblings, 18 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-02 10:06 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Avi Kivity
  Cc: Attilio Rao, Srivatsa Vaddagiri, Linus Torvalds, Virtualization,
	Xen Devel, linux-doc, KVM, Andi Kleen, Raghavendra K T,
	Stefano Stabellini, Stephan Diestelhorst, LKML


This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism. The series provides
implementations for both Xen and KVM (targeted for the 3.5 window).

Note: This depends on the debugfs changes patch that should be in Xen / linux-next:
   https://lkml.org/lkml/2012/3/30/687

Changes in V8:
 - Rebased patches to 3.4-rc4
 - Combined the KVM changes with ticketlock + Xen changes (Ingo)
 - Removed CAP_PV_UNHALT since it is redundant (Avi). But note that we
    need a newer qemu which uses the KVM_GET_SUPPORTED_CPUID ioctl.
 - Rewrite GET_MP_STATE condition (Avi)
 - Make pv_unhalt = bool (Avi)
 - Moved the pv_unhalt reset code from vcpu_block to vcpu_run (Gleb)
 - Documentation changes (Rob Landley)
 - Added a printk to indicate that paravirt spinlocks are enabled (Nikunj)
 - Moved the kick hypercall out of CONFIG_PARAVIRT_SPINLOCK
   so that it can be used for other optimizations such as
   flush_tlb_ipi_others etc. (Nikunj)

Ticket locks have an inherent problem in a virtualized case, because
the vCPUs are scheduled rather than running concurrently (ignoring
gang scheduled vCPUs).  This can result in catastrophic performance
collapses when the vCPU scheduler doesn't schedule the correct "next"
vCPU, and ends up scheduling a vCPU which burns its entire timeslice
spinning.  (Note that this is not the same problem as lock-holder
preemption, which this series also addresses; that's also a problem,
but not catastrophic).

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

Currently we deal with this by having PV spinlocks, which adds a layer
of indirection in front of all the spinlock functions, and defining a
completely new implementation for Xen (and for other pvops users, but
there are none at present).

PV ticketlocks keep the existing ticketlock implementation
(fastpath) as-is, but add a couple of pvops for the slow paths:

- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
  iterations, then call out to the __ticket_lock_spinning() pvop,
  which allows a backend to block the vCPU rather than spinning.  This
  pvop can set the lock into "slowpath state".

- When releasing a lock, if it is in "slowpath state", call
  __ticket_unlock_kick() to kick the next vCPU in line awake.  If the
  lock is no longer in contention, it also clears the slowpath flag.

The "slowpath state" is stored in the LSB of the lock's tail ticket.
This has the effect of halving the maximum number of CPUs (so a
"small ticket" can deal with 128 CPUs, and a "large ticket" with
32768).
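As a rough sketch of this encoding (a simplified model, not the kernel
code; only the TICKET_SLOWPATH_FLAG and TICKET_LOCK_INC names come from
the series): since bit 0 of the tail carries the flag, real tickets
advance by 2, and comparisons must mask the flag off:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Simplified model of the "small ticket" (8-bit) layout when
 * CONFIG_PARAVIRT_SPINLOCKS is enabled: bit 0 of the tail is the
 * slowpath flag, so ticket numbers advance by TICKET_LOCK_INC == 2
 * and only 128 distinct tickets (CPUs) remain.
 */
typedef uint8_t ticket_t;

#define TICKET_SLOWPATH_FLAG	((ticket_t)1)
#define TICKET_LOCK_INC		((ticket_t)2)

/* Strip the flag bit to recover the ticket number for comparisons. */
static ticket_t ticket_number(ticket_t tail)
{
	return tail & ~TICKET_SLOWPATH_FLAG;
}

/* Nonzero if the lock has been put into "slowpath state". */
static int in_slowpath(ticket_t tail)
{
	return tail & TICKET_SLOWPATH_FLAG;
}
```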

For KVM, one hypercall is introduced in the hypervisor that allows a
vCPU to kick another vCPU out of halt state.  The blocking of a vCPU
is done using halt() in the (lock_spinning) slowpath.
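A toy, single-threaded model of that kick/halt protocol (illustration
only; the function names here are hypothetical stand-ins -- the real
guest code lives in arch/x86/kernel/kvm.c and the hypercall this
series adds is KVM_HC_KICK_CPU):

```c
#include <assert.h>
#include <stdbool.h>

/* A "halted" flag stands in for the vCPU being blocked in halt(). */
#define NR_VCPUS 4

static bool halted[NR_VCPUS];

/* Guest side: the lock_spinning slowpath ends by executing halt(). */
static void vcpu_block(int cpu)
{
	halted[cpu] = true;
}

/* Hypervisor side: the kick hypercall pulls the target vCPU out of
 * halt state so it can recheck whether its ticket is now at the head. */
static void kick_cpu(int cpu)
{
	halted[cpu] = false;
}
```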

Overall, it results in a large reduction in code, it makes the native
and virtualized cases closer, and it removes a layer of indirection
around all the spinlock functions.

The fast path (taking an uncontended lock which isn't in "slowpath"
state) is optimal, identical to the non-paravirtualized case.

The inner part of ticket lock code becomes:
	inc = xadd(&lock->tickets, inc);
	inc.tail &= ~TICKET_SLOWPATH_FLAG;

	if (likely(inc.head == inc.tail))
		goto out;
	for (;;) {
		unsigned count = SPIN_THRESHOLD;
		do {
			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
				goto out;
			cpu_relax();
		} while (--count);
		__ticket_lock_spinning(lock, inc.tail);
	}
out:	barrier();
which results in:
	push   %rbp
	mov    %rsp,%rbp

	mov    $0x200,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f	# Slowpath if lock in contention

	pop    %rbp
	retq   

	### SLOWPATH START
1:	and    $-2,%edx
	movzbl %dl,%esi

2:	mov    $0x800,%eax
	jmp    4f

3:	pause  
	sub    $0x1,%eax
	je     5f

4:	movzbl (%rdi),%ecx
	cmp    %cl,%dl
	jne    3b

	pop    %rbp
	retq   

5:	callq  *__ticket_lock_spinning
	jmp    2b
	### SLOWPATH END

With CONFIG_PARAVIRT_SPINLOCKS=n, the code changes only slightly: the
fastpath case is straight through (taking the lock without
contention), and the spin loop is out of line:

	push   %rbp
	mov    %rsp,%rbp

	mov    $0x100,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f

	pop    %rbp
	retq   

	### SLOWPATH START
1:	pause  
	movzbl (%rdi),%eax
	cmp    %dl,%al
	jne    1b

	pop    %rbp
	retq   
	### SLOWPATH END

The unlock code is complicated by the need to both add to the lock's
"head" and fetch the slowpath flag from "tail".  This version of the
patch uses a locked add to do this, followed by a test to see if the
slowflag is set.  The lock prefix acts as a full memory barrier, so we
can be sure that other CPUs will have seen the unlock before we read
the flag (without the barrier the read could be fetched from the
store queue before it hits memory, which could result in a deadlock).

Since this is all unnecessary complication if you're not using PV
ticket locks, the patch also uses the jump-label machinery to use the
standard "add"-based unlock in the non-PV case.

	if (TICKET_SLOWPATH_FLAG &&
	    static_key_false(&paravirt_ticketlocks_enabled)) {
		arch_spinlock_t prev;
		prev = *lock;
		add_smp(&lock->tickets.head, TICKET_LOCK_INC);

		/* add_smp() is a full mb() */
		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
			__ticket_unlock_slowpath(lock, prev);
	} else
		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
which generates:
	push   %rbp
	mov    %rsp,%rbp

	nop5	# replaced by 5-byte jmp 2f when PV enabled

	# non-PV unlock
	addb   $0x2,(%rdi)

1:	pop    %rbp
	retq   

### PV unlock ###
2:	movzwl (%rdi),%esi	# Fetch prev

	lock addb $0x2,(%rdi)	# Do unlock

	testb  $0x1,0x1(%rdi)	# Test flag
	je     1b		# Finished if not set

### Slow path ###
	add    $2,%sil		# Add "head" in old lock state
	mov    %esi,%edx
	and    $0xfe,%dh	# clear slowflag for comparison
	movzbl %dh,%eax
	cmp    %dl,%al		# If head == tail (uncontended)
	je     4f		# clear slowpath flag

	# Kick next CPU waiting for lock
3:	movzbl %sil,%esi
	callq  *pv_lock_ops.kick

	pop    %rbp
	retq   

	# Lock no longer contended - clear slowflag
4:	mov    %esi,%eax
	lock cmpxchg %dx,(%rdi)	# cmpxchg to clear flag
	cmp    %si,%ax
	jne    3b		# If clear failed, then kick

	pop    %rbp
	retq   

So when not using PV ticketlocks, the unlock sequence just has a
5-byte nop added to it, and the PV case is reasonably straightforward
aside from requiring a "lock add".
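The unlock decision above can be modelled in plain C (a
single-threaded illustration under simplifying assumptions -- the real
code uses add_smp(), a locked cmpxchg to clear the flag, and the pv
kick op; kick_next() here is a hypothetical stub):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of the PV unlock path: bump head by
 * TICKET_LOCK_INC, then test the slowpath flag; if set, compare
 * against the pre-unlock state (prev) to decide between clearing the
 * flag (uncontended) and kicking the next waiter. */
struct tickets { uint8_t head, tail; };

#define TICKET_SLOWPATH_FLAG	1
#define TICKET_LOCK_INC		2

static int kicked;			/* records that a kick happened */

static void kick_next(struct tickets *t) { (void)t; kicked = 1; }

static void pv_unlock(struct tickets *lock)
{
	struct tickets prev = *lock;	/* fetch state before unlocking */

	lock->head += TICKET_LOCK_INC;	/* the "lock addb" unlock */

	if (lock->tail & TICKET_SLOWPATH_FLAG) {
		/* Account for the unlock in the old snapshot, as the
		 * asm does with "add $2,%sil". */
		prev.head += TICKET_LOCK_INC;
		if ((prev.tail & ~TICKET_SLOWPATH_FLAG) == prev.head)
			lock->tail &= ~TICKET_SLOWPATH_FLAG; /* uncontended */
		else
			kick_next(lock);	/* wake next waiter */
	}
}
```

In the real code the flag-clearing is a cmpxchg that may race with a
new locker, falling back to a kick on failure; the toy model above
cannot exhibit that race.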

TODO: 1) Remove CONFIG_PARAVIRT_SPINLOCK ?
      2) Experiments on further optimization possibilities. (discussed in V6)
      3) Use kvm_irq_delivery_to_apic() in kvm hypercall (suggested by Gleb)
      4) Any cleanups for e.g. Xen/KVM common code for debugfs.

PS: The TODOs are not blockers for merging the current series.

Results:
=======
Various results based on V6 of the patch series are posted at the following links:
 
 https://lkml.org/lkml/2012/3/21/161
 https://lkml.org/lkml/2012/3/21/198

 kvm results:
 https://lkml.org/lkml/2012/3/23/50
 https://lkml.org/lkml/2012/4/5/73

Benchmarking on the current set of patches will be posted soon.

Thoughts? Comments? Suggestions?  It would be nice to see
Acked-by/Reviewed-by/Tested-by tags for the patch series.
 
Jeremy Fitzhardinge (9):
  x86/spinlock: Replace pv spinlocks with pv ticketlocks
  x86/ticketlock: Collapse a layer of functions
  xen: Defer spinlock setup until boot CPU setup
  xen/pvticketlock: Xen implementation for PV ticket locks
  xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv
    ticketlocks
  x86/pvticketlock: Use callee-save for lock_spinning
  x86/pvticketlock: When paravirtualizing ticket locks, increment by 2
  x86/ticketlock: Add slowpath logic
  xen/pvticketlock: Allow interrupts to be enabled while blocking

Srivatsa Vaddagiri (3): 
  Add a hypercall to KVM hypervisor to support pv-ticketlocks
  Added configuration support to enable debug information for KVM Guests
  Paravirtual ticketlock support for linux guests running on KVM hypervisor

Raghavendra K T (3):
  x86/ticketlock: Don't inline _spin_unlock when using paravirt
    spinlocks
  Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration
  Add documentation on Hypercalls and features used for PV spinlock

Andrew Jones (1):
  Split out rate limiting from jump_label.h

Stefano Stabellini (1):
  xen: Enable PV ticketlocks on HVM Xen
---
PS: Had to trim down the recipient list because the LKML archive does not
support lists of more than 20 recipients, though many more people should
have been in the To/Cc list.

Ticketlock links:
V7 : https://lkml.org/lkml/2012/4/19/335 
V6 : https://lkml.org/lkml/2012/3/21/161

KVM patch links:
 V6: https://lkml.org/lkml/2012/4/23/123

 V5 kernel changes:
 https://lkml.org/lkml/2012/3/23/50
 Qemu changes for V5:
 http://lists.gnu.org/archive/html/qemu-devel/2012-03/msg04455.html 

 V4 kernel changes:
 https://lkml.org/lkml/2012/1/14/66
 Qemu changes for V4:
 http://www.mail-archive.com/kvm@vger.kernel.org/msg66450.html

 V3 kernel Changes:
 https://lkml.org/lkml/2011/11/30/62
 Qemu patch for V3:
 http://lists.gnu.org/archive/html/qemu-devel/2011-12/msg00397.html

 V2 kernel changes : 
 https://lkml.org/lkml/2011/10/23/207

 Previous discussions : (posted by Srivatsa V).
 https://lkml.org/lkml/2010/7/26/24
 https://lkml.org/lkml/2011/1/19/212

Ticketlock change history:
Changes in V7:
 - Rebased patches to 3.4-rc3
 - Added jumplabel split patch (originally from Andrew Jones, rebased to
    3.4-rc3)
 - jumplabel changes from Ingo and Jason taken and now using static_key_*
    instead of static_branch.
 - using UNINLINE_SPIN_UNLOCK (which was split out as per suggestion from Linus)
 - This patch series is rebased on the debugfs patch (which should already be
    in Xen/linux-next https://lkml.org/lkml/2012/3/23/51)

Changes in V6 posting: (Raghavendra K T)
 - Rebased to linux-3.3-rc6.
 - used function+enum in place of macro (better type checking) 
 - use cmpxchg while resetting zero status to avoid a possible race
	[suggested by Dave Hansen for KVM patches ]

KVM patch Change history:
Changes in V6:
- Rebased to 3.4-rc3
- Removed debugfs changes patch which should now be in Xen/linux-next.
  (https://lkml.org/lkml/2012/3/30/687)
- Removed PV_UNHALT_MSR since currently we don't need guest communication,
  and folded pv_unhalt into GET_MP_STATE (Marcelo, Avi [long back])
- Take jumplabel changes from Ingo/Jason into use (static_key_slow_inc usage)
- Added inline to spinlock_init in non PARAVIRT case
- Moved arch-specific code to arch/x86 and added stubs for other archs (Marcelo)
- Added more comments on pv_unhalt usage etc. (Marcelo)

Changes in V5:
- rebased to 3.3-rc6
- added PV_UNHALT_MSR that would help in live migration (Avi)
- removed PV_LOCK_KICK vcpu request and pv_unhalt flag (re)added.
- Changed hypercall documentation (Alex).
- mode_t changed to umode_t in debugfs.
- MSR related documentation added.
- rename PV_LOCK_KICK to PV_UNHALT. 
- host and guest patches not mixed. (Marcelo, Alex)
- kvm_kick_cpu now takes cpu so it can be used by flush_tlb_ipi_other 
   paravirtualization (Nikunj)
- coding style changes in variable declaration etc. (Srikar)

Changes in V4:
- rebased to 3.2.0 pre.
- use APIC ID for kicking the vcpu and use kvm_apic_match_dest for matching (Avi)
- fold vcpu->kicked flag into vcpu->requests (KVM_REQ_PVLOCK_KICK) and related
  changes in the UNHALT path to make pv ticket spinlock migration friendly (Avi, Marcelo)
- Added Documentation for CPUID, Hypercall (KVM_HC_KICK_CPU)
  and capability (KVM_CAP_PVLOCK_KICK) (Avi)
- Removed unneeded kvm_arch_vcpu_ioctl_set_mpstate call. (Marcelo)
- cumulative variable type changed (int ==> u32) in add_stat (Konrad)
- remove unneeded kvm_guest_init for !CONFIG_KVM_GUEST case

Changes in V3:
- rebased to 3.2-rc1
- use halt() instead of a wait-for-kick hypercall.
- modify the kick hypercall to wake up a halted vcpu.
- hook kvm_spinlock_init to smp_prepare_cpus call (moved the call out of head##.c).
- fix the potential race when zero_stat is read.
- export debugfs_create_32 and add documentation to API.
- use static inline and enum instead of ADDSTAT macro. 
- add barrier() after setting kick_vcpu.
- empty static inline function for kvm_spinlock_init.
- combine patches one and two to reduce overhead.
- make KVM_DEBUGFS depend on DEBUGFS.
- include debugfs header unconditionally.

Changes in V2:
- rebased patches to -rc9
- synchronization related changes based on Jeremy's changes 
 (Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>) pointed by 
 Stephan Diestelhorst <stephan.diestelhorst@amd.com>
- enabling 32 bit guests
- split patches into two more chunks

 Documentation/virtual/kvm/cpuid.txt      |    4 +
 Documentation/virtual/kvm/hypercalls.txt |   60 +++++
 arch/x86/Kconfig                         |   10 +
 arch/x86/include/asm/kvm_host.h          |    4 +
 arch/x86/include/asm/kvm_para.h          |   16 +-
 arch/x86/include/asm/paravirt.h          |   32 +--
 arch/x86/include/asm/paravirt_types.h    |   10 +-
 arch/x86/include/asm/spinlock.h          |  128 +++++++----
 arch/x86/include/asm/spinlock_types.h    |   16 +-
 arch/x86/kernel/kvm.c                    |  256 ++++++++++++++++++++
 arch/x86/kernel/paravirt-spinlocks.c     |   18 +-
 arch/x86/kvm/cpuid.c                     |    3 +-
 arch/x86/kvm/x86.c                       |   44 ++++-
 arch/x86/xen/smp.c                       |    3 +-
 arch/x86/xen/spinlock.c                  |  387 ++++++++++--------------------
 include/linux/jump_label.h               |   26 +--
 include/linux/jump_label_ratelimit.h     |   34 +++
 include/linux/kvm_para.h                 |    1 +
 include/linux/perf_event.h               |    1 +
 kernel/jump_label.c                      |    1 +
 20 files changed, 673 insertions(+), 381 deletions(-)


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [PATCH RFC V8 1/17]  x86/spinlock: Replace pv spinlocks with pv ticketlocks
  2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
@ 2012-05-02 10:06 ` Raghavendra K T
  2012-05-02 10:06 ` [PATCH RFC V8 2/17] x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks Raghavendra K T
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-02 10:06 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Avi Kivity, X86, Gleb Natapov, Ingo Molnar,
	Marcelo Tosatti
  Cc: Attilio Rao, Srivatsa Vaddagiri, linux-doc, Virtualization,
	Xen Devel, Linus Torvalds, KVM, Andi Kleen, Raghavendra K T,
	Stefano Stabellini, Stephan Diestelhorst, LKML

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow paths (long spin on lock, unlocking
a contended lock).

Ticket locks have a number of nice properties, but they also have some
surprising behaviours in virtual environments.  They enforce a strict
FIFO ordering on cpus trying to take a lock; however, if the hypervisor
scheduler does not schedule the cpus in the correct order, the system can
waste a huge amount of time spinning until the next cpu can take the lock.

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

To address this, we add two hooks:
 - __ticket_lock_spinning(), which is called after the cpu has been
   spinning on the lock for a significant number of iterations but has
   failed to take the lock (presumably because the cpu holding the lock
   has been descheduled).  The lock_spinning pvop is expected to block
   the cpu until it has been kicked by the current lock holder.
 - __ticket_unlock_kick(), which is called on releasing a contended lock
   (when there are more cpus waiting with tail tickets); it looks to see
   if the next cpu is blocked and wakes it if so.

When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub
functions causes all the extra code to go away.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Attilio Rao <attilio.rao@citrix.com> 
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/include/asm/paravirt.h       |   32 ++++----------------
 arch/x86/include/asm/paravirt_types.h |   10 ++----
 arch/x86/include/asm/spinlock.h       |   53 ++++++++++++++++++++++++++------
 arch/x86/include/asm/spinlock_types.h |    4 --
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +--------
 arch/x86/xen/spinlock.c               |    8 ++++-
 6 files changed, 61 insertions(+), 61 deletions(-)
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index aa0f913..4bcd146 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -751,36 +751,16 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
 
-static inline int arch_spin_is_locked(struct arch_spinlock *lock)
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
+							__ticket_t ticket)
 {
-	return PVOP_CALL1(int, pv_lock_ops.spin_is_locked, lock);
+	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static inline int arch_spin_is_contended(struct arch_spinlock *lock)
+static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
+							__ticket_t ticket)
 {
-	return PVOP_CALL1(int, pv_lock_ops.spin_is_contended, lock);
-}
-#define arch_spin_is_contended	arch_spin_is_contended
-
-static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
-{
-	PVOP_VCALL1(pv_lock_ops.spin_lock, lock);
-}
-
-static __always_inline void arch_spin_lock_flags(struct arch_spinlock *lock,
-						  unsigned long flags)
-{
-	PVOP_VCALL2(pv_lock_ops.spin_lock_flags, lock, flags);
-}
-
-static __always_inline int arch_spin_trylock(struct arch_spinlock *lock)
-{
-	return PVOP_CALL1(int, pv_lock_ops.spin_trylock, lock);
-}
-
-static __always_inline void arch_spin_unlock(struct arch_spinlock *lock)
-{
-	PVOP_VCALL1(pv_lock_ops.spin_unlock, lock);
+	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
 
 #endif
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 8e8b9a4..005e24d 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -327,13 +327,11 @@ struct pv_mmu_ops {
 };
 
 struct arch_spinlock;
+#include <asm/spinlock_types.h>
+
 struct pv_lock_ops {
-	int (*spin_is_locked)(struct arch_spinlock *lock);
-	int (*spin_is_contended)(struct arch_spinlock *lock);
-	void (*spin_lock)(struct arch_spinlock *lock);
-	void (*spin_lock_flags)(struct arch_spinlock *lock, unsigned long flags);
-	int (*spin_trylock)(struct arch_spinlock *lock);
-	void (*spin_unlock)(struct arch_spinlock *lock);
+	void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
+	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 76bfa2c..3e47608 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -37,6 +37,35 @@
 # define UNLOCK_LOCK_PREFIX
 #endif
 
+/* How long a lock should spin before we consider blocking */
+#define SPIN_THRESHOLD	(1 << 11)
+
+#ifndef CONFIG_PARAVIRT_SPINLOCKS
+
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
+							__ticket_t ticket)
+{
+}
+
+static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
+							 __ticket_t ticket)
+{
+}
+
+#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
+
+
+/*
+ * If a spinlock has someone waiting on it, then kick the appropriate
+ * waiting cpu.
+ */
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
+							__ticket_t next)
+{
+	if (unlikely(lock->tickets.tail != next))
+		____ticket_unlock_kick(lock, next);
+}
+
 /*
  * Ticket locks are conceptually two parts, one indicating the current head of
  * the queue, and the other indicating the current tail. The lock is acquired
@@ -50,19 +79,24 @@
  * in the high part, because a wide xadd increment of the low part would carry
  * up and contaminate the high part.
  */
-static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
+static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 {
 	register struct __raw_tickets inc = { .tail = 1 };
 
 	inc = xadd(&lock->tickets, inc);
 
 	for (;;) {
-		if (inc.head == inc.tail)
-			break;
-		cpu_relax();
-		inc.head = ACCESS_ONCE(lock->tickets.head);
+		unsigned count = SPIN_THRESHOLD;
+
+		do {
+			if (inc.head == inc.tail)
+				goto out;
+			cpu_relax();
+			inc.head = ACCESS_ONCE(lock->tickets.head);
+		} while (--count);
+		__ticket_lock_spinning(lock, inc.tail);
 	}
-	barrier();		/* make sure nothing creeps before the lock is taken */
+out:	barrier();	/* make sure nothing creeps before the lock is taken */
 }
 
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
@@ -81,7 +115,10 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 
 static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 {
+	__ticket_t next = lock->tickets.head + 1;
+
 	__add(&lock->tickets.head, 1, UNLOCK_LOCK_PREFIX);
+	__ticket_unlock_kick(lock, next);
 }
 
 static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
@@ -98,8 +135,6 @@ static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
 	return (__ticket_t)(tmp.tail - tmp.head) > 1;
 }
 
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
-
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
 	return __ticket_spin_is_locked(lock);
@@ -132,8 +167,6 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 	arch_spin_lock(lock);
 }
 
-#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
-
 static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
 	while (arch_spin_is_locked(lock))
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index ad0ad07..83fd3c7 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -1,10 +1,6 @@
 #ifndef _ASM_X86_SPINLOCK_TYPES_H
 #define _ASM_X86_SPINLOCK_TYPES_H
 
-#ifndef __LINUX_SPINLOCK_TYPES_H
-# error "please don't include this file directly"
-#endif
-
 #include <linux/types.h>
 
 #if (CONFIG_NR_CPUS < 256)
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 676b8c7..c2e010e 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -7,21 +7,10 @@
 
 #include <asm/paravirt.h>
 
-static inline void
-default_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
-{
-	arch_spin_lock(lock);
-}
-
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-	.spin_is_locked = __ticket_spin_is_locked,
-	.spin_is_contended = __ticket_spin_is_contended,
-
-	.spin_lock = __ticket_spin_lock,
-	.spin_lock_flags = default_spin_lock_flags,
-	.spin_trylock = __ticket_spin_trylock,
-	.spin_unlock = __ticket_spin_unlock,
+	.lock_spinning = paravirt_nop,
+	.unlock_kick = paravirt_nop,
 #endif
 };
 EXPORT_SYMBOL(pv_lock_ops);
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 00461a4..f1f4540 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -138,6 +138,9 @@ struct xen_spinlock {
 	xen_spinners_t spinners;	/* count of waiting cpus */
 };
 
+static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+
+#if 0
 static int xen_spin_is_locked(struct arch_spinlock *lock)
 {
 	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
@@ -165,7 +168,6 @@ static int xen_spin_trylock(struct arch_spinlock *lock)
 	return old == 0;
 }
 
-static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
 static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
 
 /*
@@ -353,6 +355,7 @@ static void xen_spin_unlock(struct arch_spinlock *lock)
 	if (unlikely(xl->spinners))
 		xen_spin_unlock_slow(xl);
 }
+#endif
 
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
@@ -389,13 +392,14 @@ void xen_uninit_lock_cpu(int cpu)
 void __init xen_init_spinlocks(void)
 {
 	BUILD_BUG_ON(sizeof(struct xen_spinlock) > sizeof(arch_spinlock_t));
-
+#if 0
 	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
 	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
 	pv_lock_ops.spin_lock = xen_spin_lock;
 	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
 	pv_lock_ops.spin_trylock = xen_spin_trylock;
 	pv_lock_ops.spin_unlock = xen_spin_unlock;
+#endif
 }
 
 #ifdef CONFIG_XEN_DEBUG_FS



* [PATCH RFC V8 2/17]  x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks
  2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
  2012-05-02 10:06 ` [PATCH RFC V8 1/17] x86/spinlock: Replace pv spinlocks with pv ticketlocks Raghavendra K T
@ 2012-05-02 10:06 ` Raghavendra K T
  2012-05-02 10:06 ` [PATCH RFC V8 3/17] x86/ticketlock: Collapse a layer of functions Raghavendra K T
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-02 10:06 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Avi Kivity
  Cc: Attilio Rao, Srivatsa Vaddagiri, Linus Torvalds, Virtualization,
	Xen Devel, linux-doc, KVM, Andi Kleen, Raghavendra K T,
	Stefano Stabellini, Stephan Diestelhorst, LKML

From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com> 

The code size expands somewhat, and it's better to just call
a function rather than inline it.

Thanks to Jeremy for the original version of the ARCH_NOINLINE_SPIN_UNLOCK
config patch, which has been simplified here.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/Kconfig |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 1d14cc6..35eb2e4 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -597,6 +597,7 @@ config PARAVIRT
 config PARAVIRT_SPINLOCKS
 	bool "Paravirtualization layer for spinlocks"
 	depends on PARAVIRT && SMP && EXPERIMENTAL
+	select UNINLINE_SPIN_UNLOCK
 	---help---
 	  Paravirtualized spinlocks allow a pvops backend to replace the
 	  spinlock implementation with something virtualization-friendly



* [PATCH RFC V8 3/17]  x86/ticketlock: Collapse a layer of functions
  2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
  2012-05-02 10:06 ` [PATCH RFC V8 1/17] x86/spinlock: Replace pv spinlocks with pv ticketlocks Raghavendra K T
  2012-05-02 10:06 ` [PATCH RFC V8 2/17] x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks Raghavendra K T
@ 2012-05-02 10:06 ` Raghavendra K T
  2012-05-02 10:07 ` [PATCH RFC V8 4/17] xen: Defer spinlock setup until boot CPU setup Raghavendra K T
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-02 10:06 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Avi Kivity, X86, Gleb Natapov, Ingo Molnar,
	Marcelo Tosatti
  Cc: Attilio Rao, Srivatsa Vaddagiri, linux-doc, Virtualization,
	Xen Devel, Linus Torvalds, KVM, Andi Kleen, Raghavendra K T,
	Stefano Stabellini, Stephan Diestelhorst, LKML

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Now that the paravirtualization layer doesn't exist at the spinlock
level any more, we can collapse the __ticket_ functions into the arch_
functions.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Tested-by: Attilio Rao <attilio.rao@citrix.com> 
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/include/asm/spinlock.h |   35 +++++------------------------------
 1 files changed, 5 insertions(+), 30 deletions(-)
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 3e47608..ee4bbd4 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -79,7 +79,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
  * in the high part, because a wide xadd increment of the low part would carry
  * up and contaminate the high part.
  */
-static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
+static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
 	register struct __raw_tickets inc = { .tail = 1 };
 
@@ -99,7 +99,7 @@ static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 out:	barrier();	/* make sure nothing creeps before the lock is taken */
 }
 
-static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
+static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 {
 	arch_spinlock_t old, new;
 
@@ -113,7 +113,7 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 	return cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) == old.head_tail;
 }
 
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
+static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 	__ticket_t next = lock->tickets.head + 1;
 
@@ -121,46 +121,21 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 	__ticket_unlock_kick(lock, next);
 }
 
-static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
+static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
 	return tmp.tail != tmp.head;
 }
 
-static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
+static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
 	return (__ticket_t)(tmp.tail - tmp.head) > 1;
 }
-
-static inline int arch_spin_is_locked(arch_spinlock_t *lock)
-{
-	return __ticket_spin_is_locked(lock);
-}
-
-static inline int arch_spin_is_contended(arch_spinlock_t *lock)
-{
-	return __ticket_spin_is_contended(lock);
-}
 #define arch_spin_is_contended	arch_spin_is_contended
 
-static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-	__ticket_spin_lock(lock);
-}
-
-static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
-{
-	return __ticket_spin_trylock(lock);
-}
-
-static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-	__ticket_spin_unlock(lock);
-}
-
 static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 						  unsigned long flags)
 {



* [PATCH RFC V8 4/17]  xen: Defer spinlock setup until boot CPU setup
  2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
                   ` (2 preceding siblings ...)
  2012-05-02 10:06 ` [PATCH RFC V8 3/17] x86/ticketlock: Collapse a layer of functions Raghavendra K T
@ 2012-05-02 10:07 ` Raghavendra K T
  2012-05-02 10:07 ` [PATCH RFC V8 5/17] xen/pvticketlock: Xen implementation for PV ticket locks Raghavendra K T
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-02 10:07 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Avi Kivity
  Cc: Attilio Rao, Srivatsa Vaddagiri, Linus Torvalds, Virtualization,
	Xen Devel, linux-doc, KVM, Andi Kleen, Raghavendra K T,
	Stefano Stabellini, Stephan Diestelhorst, LKML

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

There's no need to do it at very early init, and doing it there
makes it impossible to use the jump_label machinery.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/xen/smp.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 0503c0c..7dc400a 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -222,6 +222,7 @@ static void __init xen_smp_prepare_boot_cpu(void)
 
 	xen_filter_cpu_maps();
 	xen_setup_vcpu_info_placement();
+	xen_init_spinlocks();
 }
 
 static void __init xen_smp_prepare_cpus(unsigned int max_cpus)
@@ -551,7 +552,6 @@ void __init xen_smp_init(void)
 {
 	smp_ops = xen_smp_ops;
 	xen_fill_possible_map();
-	xen_init_spinlocks();
 }
 
 static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)



* [PATCH RFC V8 5/17]  xen/pvticketlock: Xen implementation for PV ticket locks
  2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
                   ` (3 preceding siblings ...)
  2012-05-02 10:07 ` [PATCH RFC V8 4/17] xen: Defer spinlock setup until boot CPU setup Raghavendra K T
@ 2012-05-02 10:07 ` Raghavendra K T
  2012-05-02 10:07 ` [PATCH RFC V8 6/17] xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks Raghavendra K T
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-02 10:07 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Avi Kivity, X86, Gleb Natapov, Ingo Molnar,
	Marcelo Tosatti
  Cc: Attilio Rao, Srivatsa Vaddagiri, linux-doc, Virtualization,
	Xen Devel, Linus Torvalds, KVM, Andi Kleen, Raghavendra K T,
	Stefano Stabellini, Stephan Diestelhorst, LKML

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Replace the old Xen implementation of PV spinlocks with an implementation
of xen_lock_spinning and xen_unlock_kick.

xen_lock_spinning simply records the lock and the ticket the cpu wants
in its per-cpu lock_waiting entry, adds the cpu to the waiting_cpus set,
and blocks on an event channel until the channel becomes pending.

xen_unlock_kick searches the cpus in waiting_cpus looking for the one
which wants this lock with the next ticket, if any.  If found, it kicks
that cpu by making its event channel pending, which wakes it up.

We need to make sure interrupts are disabled while we're relying on the
contents of the per-cpu lock_waiting values, otherwise an interrupt
handler could come in, try to take some other lock, block, and overwrite
our values.

Raghu: use function + enum instead of macro, cmpxchg for zero status reset

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/xen/spinlock.c |  348 +++++++++++------------------------------------
 1 files changed, 78 insertions(+), 270 deletions(-)
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index f1f4540..4e98a07 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -16,45 +16,44 @@
 #include "xen-ops.h"
 #include "debugfs.h"
 
-#ifdef CONFIG_XEN_DEBUG_FS
-static struct xen_spinlock_stats
-{
-	u64 taken;
-	u32 taken_slow;
-	u32 taken_slow_nested;
-	u32 taken_slow_pickup;
-	u32 taken_slow_spurious;
-	u32 taken_slow_irqenable;
+enum xen_contention_stat {
+	TAKEN_SLOW,
+	TAKEN_SLOW_PICKUP,
+	TAKEN_SLOW_SPURIOUS,
+	RELEASED_SLOW,
+	RELEASED_SLOW_KICKED,
+	NR_CONTENTION_STATS
+};
 
-	u64 released;
-	u32 released_slow;
-	u32 released_slow_kicked;
 
+#ifdef CONFIG_XEN_DEBUG_FS
 #define HISTO_BUCKETS	30
-	u32 histo_spin_total[HISTO_BUCKETS+1];
-	u32 histo_spin_spinning[HISTO_BUCKETS+1];
+static struct xen_spinlock_stats
+{
+	u32 contention_stats[NR_CONTENTION_STATS];
 	u32 histo_spin_blocked[HISTO_BUCKETS+1];
-
-	u64 time_total;
-	u64 time_spinning;
 	u64 time_blocked;
 } spinlock_stats;
 
 static u8 zero_stats;
 
-static unsigned lock_timeout = 1 << 10;
-#define TIMEOUT lock_timeout
-
 static inline void check_zero(void)
 {
-	if (unlikely(zero_stats)) {
-		memset(&spinlock_stats, 0, sizeof(spinlock_stats));
-		zero_stats = 0;
+	u8 ret;
+	u8 old = ACCESS_ONCE(zero_stats);
+	if (unlikely(old)) {
+		ret = cmpxchg(&zero_stats, old, 0);
+		/* This ensures only one fellow resets the stat */
+		if (ret == old)
+			memset(&spinlock_stats, 0, sizeof(spinlock_stats));
 	}
 }
 
-#define ADD_STATS(elem, val)			\
-	do { check_zero(); spinlock_stats.elem += (val); } while(0)
+static inline void add_stats(enum xen_contention_stat var, u32 val)
+{
+	check_zero();
+	spinlock_stats.contention_stats[var] += val;
+}
 
 static inline u64 spin_time_start(void)
 {
@@ -73,22 +72,6 @@ static void __spin_time_accum(u64 delta, u32 *array)
 		array[HISTO_BUCKETS]++;
 }
 
-static inline void spin_time_accum_spinning(u64 start)
-{
-	u32 delta = xen_clocksource_read() - start;
-
-	__spin_time_accum(delta, spinlock_stats.histo_spin_spinning);
-	spinlock_stats.time_spinning += delta;
-}
-
-static inline void spin_time_accum_total(u64 start)
-{
-	u32 delta = xen_clocksource_read() - start;
-
-	__spin_time_accum(delta, spinlock_stats.histo_spin_total);
-	spinlock_stats.time_total += delta;
-}
-
 static inline void spin_time_accum_blocked(u64 start)
 {
 	u32 delta = xen_clocksource_read() - start;
@@ -98,19 +81,15 @@ static inline void spin_time_accum_blocked(u64 start)
 }
 #else  /* !CONFIG_XEN_DEBUG_FS */
 #define TIMEOUT			(1 << 10)
-#define ADD_STATS(elem, val)	do { (void)(val); } while(0)
+static inline void add_stats(enum xen_contention_stat var, u32 val)
+{
+}
 
 static inline u64 spin_time_start(void)
 {
 	return 0;
 }
 
-static inline void spin_time_accum_total(u64 start)
-{
-}
-static inline void spin_time_accum_spinning(u64 start)
-{
-}
 static inline void spin_time_accum_blocked(u64 start)
 {
 }
@@ -133,230 +112,83 @@ typedef u16 xen_spinners_t;
 	asm(LOCK_PREFIX " decw %0" : "+m" ((xl)->spinners) : : "memory");
 #endif
 
-struct xen_spinlock {
-	unsigned char lock;		/* 0 -> free; 1 -> locked */
-	xen_spinners_t spinners;	/* count of waiting cpus */
+struct xen_lock_waiting {
+	struct arch_spinlock *lock;
+	__ticket_t want;
 };
 
 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
+static cpumask_t waiting_cpus;
 
-#if 0
-static int xen_spin_is_locked(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	return xl->lock != 0;
-}
-
-static int xen_spin_is_contended(struct arch_spinlock *lock)
+static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 {
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	/* Not strictly true; this is only the count of contended
-	   lock-takers entering the slow path. */
-	return xl->spinners != 0;
-}
-
-static int xen_spin_trylock(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	u8 old = 1;
-
-	asm("xchgb %b0,%1"
-	    : "+q" (old), "+m" (xl->lock) : : "memory");
-
-	return old == 0;
-}
-
-static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
-
-/*
- * Mark a cpu as interested in a lock.  Returns the CPU's previous
- * lock of interest, in case we got preempted by an interrupt.
- */
-static inline struct xen_spinlock *spinning_lock(struct xen_spinlock *xl)
-{
-	struct xen_spinlock *prev;
-
-	prev = __this_cpu_read(lock_spinners);
-	__this_cpu_write(lock_spinners, xl);
-
-	wmb();			/* set lock of interest before count */
-
-	inc_spinners(xl);
-
-	return prev;
-}
-
-/*
- * Mark a cpu as no longer interested in a lock.  Restores previous
- * lock of interest (NULL for none).
- */
-static inline void unspinning_lock(struct xen_spinlock *xl, struct xen_spinlock *prev)
-{
-	dec_spinners(xl);
-	wmb();			/* decrement count before restoring lock */
-	__this_cpu_write(lock_spinners, prev);
-}
-
-static noinline int xen_spin_lock_slow(struct arch_spinlock *lock, bool irq_enable)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	struct xen_spinlock *prev;
 	int irq = __this_cpu_read(lock_kicker_irq);
-	int ret;
+	struct xen_lock_waiting *w = &__get_cpu_var(lock_waiting);
+	int cpu = smp_processor_id();
 	u64 start;
+	unsigned long flags;
 
 	/* If kicker interrupts not initialized yet, just spin */
 	if (irq == -1)
-		return 0;
+		return;
 
 	start = spin_time_start();
 
-	/* announce we're spinning */
-	prev = spinning_lock(xl);
-
-	ADD_STATS(taken_slow, 1);
-	ADD_STATS(taken_slow_nested, prev != NULL);
-
-	do {
-		unsigned long flags;
-
-		/* clear pending */
-		xen_clear_irq_pending(irq);
-
-		/* check again make sure it didn't become free while
-		   we weren't looking  */
-		ret = xen_spin_trylock(lock);
-		if (ret) {
-			ADD_STATS(taken_slow_pickup, 1);
-
-			/*
-			 * If we interrupted another spinlock while it
-			 * was blocking, make sure it doesn't block
-			 * without rechecking the lock.
-			 */
-			if (prev != NULL)
-				xen_set_irq_pending(irq);
-			goto out;
-		}
+	/*
+	 * Make sure an interrupt handler can't upset things in a
+	 * partially setup state.
+	 */
+	local_irq_save(flags);
 
-		flags = arch_local_save_flags();
-		if (irq_enable) {
-			ADD_STATS(taken_slow_irqenable, 1);
-			raw_local_irq_enable();
-		}
+	w->want = want;
+	smp_wmb();
+	w->lock = lock;
 
-		/*
-		 * Block until irq becomes pending.  If we're
-		 * interrupted at this point (after the trylock but
-		 * before entering the block), then the nested lock
-		 * handler guarantees that the irq will be left
-		 * pending if there's any chance the lock became free;
-		 * xen_poll_irq() returns immediately if the irq is
-		 * pending.
-		 */
-		xen_poll_irq(irq);
+	/* This uses set_bit, which is atomic and therefore a barrier */
+	cpumask_set_cpu(cpu, &waiting_cpus);
+	add_stats(TAKEN_SLOW, 1);
 
-		raw_local_irq_restore(flags);
+	/* clear pending */
+	xen_clear_irq_pending(irq);
 
-		ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
-	} while (!xen_test_irq_pending(irq)); /* check for spurious wakeups */
+	/* Only check lock once pending cleared */
+	barrier();
 
+	/* check again make sure it didn't become free while
+	   we weren't looking  */
+	if (ACCESS_ONCE(lock->tickets.head) == want) {
+		add_stats(TAKEN_SLOW_PICKUP, 1);
+		goto out;
+	}
+	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
+	xen_poll_irq(irq);
+	add_stats(TAKEN_SLOW_SPURIOUS, !xen_test_irq_pending(irq));
 	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
-
 out:
-	unspinning_lock(xl, prev);
+	cpumask_clear_cpu(cpu, &waiting_cpus);
+	w->lock = NULL;
+	local_irq_restore(flags);
 	spin_time_accum_blocked(start);
-
-	return ret;
 }
 
-static inline void __xen_spin_lock(struct arch_spinlock *lock, bool irq_enable)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	unsigned timeout;
-	u8 oldval;
-	u64 start_spin;
-
-	ADD_STATS(taken, 1);
-
-	start_spin = spin_time_start();
-
-	do {
-		u64 start_spin_fast = spin_time_start();
-
-		timeout = TIMEOUT;
-
-		asm("1: xchgb %1,%0\n"
-		    "   testb %1,%1\n"
-		    "   jz 3f\n"
-		    "2: rep;nop\n"
-		    "   cmpb $0,%0\n"
-		    "   je 1b\n"
-		    "   dec %2\n"
-		    "   jnz 2b\n"
-		    "3:\n"
-		    : "+m" (xl->lock), "=q" (oldval), "+r" (timeout)
-		    : "1" (1)
-		    : "memory");
-
-		spin_time_accum_spinning(start_spin_fast);
-
-	} while (unlikely(oldval != 0 &&
-			  (TIMEOUT == ~0 || !xen_spin_lock_slow(lock, irq_enable))));
-
-	spin_time_accum_total(start_spin);
-}
-
-static void xen_spin_lock(struct arch_spinlock *lock)
-{
-	__xen_spin_lock(lock, false);
-}
-
-static void xen_spin_lock_flags(struct arch_spinlock *lock, unsigned long flags)
-{
-	__xen_spin_lock(lock, !raw_irqs_disabled_flags(flags));
-}
-
-static noinline void xen_spin_unlock_slow(struct xen_spinlock *xl)
+static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
 	int cpu;
 
-	ADD_STATS(released_slow, 1);
+	add_stats(RELEASED_SLOW, 1);
+
+	for_each_cpu(cpu, &waiting_cpus) {
+		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
 
-	for_each_online_cpu(cpu) {
-		/* XXX should mix up next cpu selection */
-		if (per_cpu(lock_spinners, cpu) == xl) {
-			ADD_STATS(released_slow_kicked, 1);
+		if (w->lock == lock && w->want == next) {
+			add_stats(RELEASED_SLOW_KICKED, 1);
 			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
 			break;
 		}
 	}
 }
 
-static void xen_spin_unlock(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	ADD_STATS(released, 1);
-
-	smp_wmb();		/* make sure no writes get moved after unlock */
-	xl->lock = 0;		/* release lock */
-
-	/*
-	 * Make sure unlock happens before checking for waiting
-	 * spinners.  We need a strong barrier to enforce the
-	 * write-read ordering to different memory locations, as the
-	 * CPU makes no implied guarantees about their ordering.
-	 */
-	mb();
-
-	if (unlikely(xl->spinners))
-		xen_spin_unlock_slow(xl);
-}
-#endif
-
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
 	BUG();
@@ -391,15 +223,8 @@ void xen_uninit_lock_cpu(int cpu)
 
 void __init xen_init_spinlocks(void)
 {
-	BUILD_BUG_ON(sizeof(struct xen_spinlock) > sizeof(arch_spinlock_t));
-#if 0
-	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
-	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
-	pv_lock_ops.spin_lock = xen_spin_lock;
-	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
-	pv_lock_ops.spin_trylock = xen_spin_trylock;
-	pv_lock_ops.spin_unlock = xen_spin_unlock;
-#endif
+	pv_lock_ops.lock_spinning = xen_lock_spinning;
+	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
 #ifdef CONFIG_XEN_DEBUG_FS
@@ -417,42 +242,25 @@ static int __init xen_spinlock_debugfs(void)
 
 	debugfs_create_u8("zero_stats", 0644, d_spin_debug, &zero_stats);
 
-	debugfs_create_u32("timeout", 0644, d_spin_debug, &lock_timeout);
-
-	debugfs_create_u64("taken", 0444, d_spin_debug, &spinlock_stats.taken);
 	debugfs_create_u32("taken_slow", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow);
-	debugfs_create_u32("taken_slow_nested", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_nested);
+			   &spinlock_stats.contention_stats[TAKEN_SLOW]);
 	debugfs_create_u32("taken_slow_pickup", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_pickup);
+			   &spinlock_stats.contention_stats[TAKEN_SLOW_PICKUP]);
 	debugfs_create_u32("taken_slow_spurious", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_spurious);
-	debugfs_create_u32("taken_slow_irqenable", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_irqenable);
+			   &spinlock_stats.contention_stats[TAKEN_SLOW_SPURIOUS]);
 
-	debugfs_create_u64("released", 0444, d_spin_debug, &spinlock_stats.released);
 	debugfs_create_u32("released_slow", 0444, d_spin_debug,
-			   &spinlock_stats.released_slow);
+			   &spinlock_stats.contention_stats[RELEASED_SLOW]);
 	debugfs_create_u32("released_slow_kicked", 0444, d_spin_debug,
-			   &spinlock_stats.released_slow_kicked);
+			   &spinlock_stats.contention_stats[RELEASED_SLOW_KICKED]);
 
-	debugfs_create_u64("time_spinning", 0444, d_spin_debug,
-			   &spinlock_stats.time_spinning);
 	debugfs_create_u64("time_blocked", 0444, d_spin_debug,
 			   &spinlock_stats.time_blocked);
-	debugfs_create_u64("time_total", 0444, d_spin_debug,
-			   &spinlock_stats.time_total);
 
-	debugfs_create_u32_array("histo_total", 0444, d_spin_debug,
-				     spinlock_stats.histo_spin_total, HISTO_BUCKETS + 1);
-	debugfs_create_u32_array("histo_spinning", 0444, d_spin_debug,
-				     spinlock_stats.histo_spin_spinning, HISTO_BUCKETS + 1);
 	debugfs_create_u32_array("histo_blocked", 0444, d_spin_debug,
 				     spinlock_stats.histo_spin_blocked, HISTO_BUCKETS + 1);
 
 	return 0;
 }
 fs_initcall(xen_spinlock_debugfs);
-
 #endif	/* CONFIG_XEN_DEBUG_FS */



* [PATCH RFC V8 6/17]  xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks
  2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
                   ` (4 preceding siblings ...)
  2012-05-02 10:07 ` [PATCH RFC V8 5/17] xen/pvticketlock: Xen implementation for PV ticket locks Raghavendra K T
@ 2012-05-02 10:07 ` Raghavendra K T
  2012-05-02 10:07 ` [PATCH RFC V8 7/17] x86/pvticketlock: Use callee-save for lock_spinning Raghavendra K T
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-02 10:07 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Avi Kivity
  Cc: Attilio Rao, Srivatsa Vaddagiri, Linus Torvalds, Virtualization,
	Xen Devel, linux-doc, KVM, Andi Kleen, Raghavendra K T,
	Stefano Stabellini, Stephan Diestelhorst, LKML

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/xen/spinlock.c |   14 ++++++++++++++
 1 files changed, 14 insertions(+), 0 deletions(-)
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 4e98a07..c4886dc 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -221,12 +221,26 @@ void xen_uninit_lock_cpu(int cpu)
 	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
 }
 
+static bool xen_pvspin __initdata = true;
+
 void __init xen_init_spinlocks(void)
 {
+	if (!xen_pvspin) {
+		printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
+		return;
+	}
+
 	pv_lock_ops.lock_spinning = xen_lock_spinning;
 	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
+static __init int xen_parse_nopvspin(char *arg)
+{
+	xen_pvspin = false;
+	return 0;
+}
+early_param("xen_nopvspin", xen_parse_nopvspin);
+
 #ifdef CONFIG_XEN_DEBUG_FS
 
 static struct dentry *d_spin_debug;



* [PATCH RFC V8 7/17]  x86/pvticketlock: Use callee-save for lock_spinning
  2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
                   ` (5 preceding siblings ...)
  2012-05-02 10:07 ` [PATCH RFC V8 6/17] xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks Raghavendra K T
@ 2012-05-02 10:07 ` Raghavendra K T
  2012-05-02 10:07 ` [PATCH RFC V8 8/17] x86/pvticketlock: When paravirtualizing ticket locks, increment by 2 Raghavendra K T
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-02 10:07 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Avi Kivity, X86, Gleb Natapov, Ingo Molnar,
	Marcelo Tosatti
  Cc: Attilio Rao, Srivatsa Vaddagiri, linux-doc, Virtualization,
	Xen Devel, Linus Torvalds, KVM, Andi Kleen, Raghavendra K T,
	Stefano Stabellini, Stephan Diestelhorst, LKML

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Although the lock_spinning calls in the spinlock code are on the
uncommon path, their presence can cause the compiler to generate many
more register save/restores in the function pre/postamble, which is in
the fast path.  To avoid this, convert it to use the pvops callee-save
calling convention, which defers all the save/restores until the actual
function is called, keeping the fastpath clean.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Attilio Rao <attilio.rao@citrix.com> 
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/include/asm/paravirt.h       |    2 +-
 arch/x86/include/asm/paravirt_types.h |    2 +-
 arch/x86/kernel/paravirt-spinlocks.c  |    2 +-
 arch/x86/xen/spinlock.c               |    3 ++-
 4 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 4bcd146..9769096 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -754,7 +754,7 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
 							__ticket_t ticket)
 {
-	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
+	PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
 static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 005e24d..5e0c138 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -330,7 +330,7 @@ struct arch_spinlock;
 #include <asm/spinlock_types.h>
 
 struct pv_lock_ops {
-	void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
+	struct paravirt_callee_save lock_spinning;
 	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
 };
 
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index c2e010e..4251c1d 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -9,7 +9,7 @@
 
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-	.lock_spinning = paravirt_nop,
+	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
 	.unlock_kick = paravirt_nop,
 #endif
 };
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index c4886dc..c47a8d1 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -171,6 +171,7 @@ out:
 	local_irq_restore(flags);
 	spin_time_accum_blocked(start);
 }
+PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
 
 static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
@@ -230,7 +231,7 @@ void __init xen_init_spinlocks(void)
 		return;
 	}
 
-	pv_lock_ops.lock_spinning = xen_lock_spinning;
+	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
 	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 



* [PATCH RFC V8 8/17]  x86/pvticketlock: When paravirtualizing ticket locks, increment by 2
  2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
                   ` (6 preceding siblings ...)
  2012-05-02 10:07 ` [PATCH RFC V8 7/17] x86/pvticketlock: Use callee-save for lock_spinning Raghavendra K T
@ 2012-05-02 10:07 ` Raghavendra K T
  2012-05-02 10:08 ` [PATCH RFC V8 9/17] Split out rate limiting from jump_label.h Raghavendra K T
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-02 10:07 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Avi Kivity
  Cc: Attilio Rao, Srivatsa Vaddagiri, Linus Torvalds, Virtualization,
	Xen Devel, linux-doc, KVM, Andi Kleen, Raghavendra K T,
	Stefano Stabellini, Stephan Diestelhorst, LKML

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Increment ticket head/tails by 2 rather than 1 to leave the LSB free
to store a "is in slowpath state" bit.  This halves the number
of possible CPUs for a given ticket size, but this shouldn't matter
in practice - kernels built for 32k+ CPU systems are probably
specially built for the hardware rather than a generic distro
kernel.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Tested-by: Attilio Rao <attilio.rao@citrix.com> 
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/include/asm/spinlock.h       |   10 +++++-----
 arch/x86/include/asm/spinlock_types.h |   10 +++++++++-
 2 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index ee4bbd4..60b7e83 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -81,7 +81,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
  */
 static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
-	register struct __raw_tickets inc = { .tail = 1 };
+	register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
 
 	inc = xadd(&lock->tickets, inc);
 
@@ -107,7 +107,7 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 	if (old.tickets.head != old.tickets.tail)
 		return 0;
 
-	new.head_tail = old.head_tail + (1 << TICKET_SHIFT);
+	new.head_tail = old.head_tail + (TICKET_LOCK_INC << TICKET_SHIFT);
 
 	/* cmpxchg is a full barrier, so nothing can move before it */
 	return cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) == old.head_tail;
@@ -115,9 +115,9 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-	__ticket_t next = lock->tickets.head + 1;
+	__ticket_t next = lock->tickets.head + TICKET_LOCK_INC;
 
-	__add(&lock->tickets.head, 1, UNLOCK_LOCK_PREFIX);
+	__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
 	__ticket_unlock_kick(lock, next);
 }
 
@@ -132,7 +132,7 @@ static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
-	return (__ticket_t)(tmp.tail - tmp.head) > 1;
+	return (__ticket_t)(tmp.tail - tmp.head) > TICKET_LOCK_INC;
 }
 #define arch_spin_is_contended	arch_spin_is_contended
 
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 83fd3c7..e96fcbd 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -3,7 +3,13 @@
 
 #include <linux/types.h>
 
-#if (CONFIG_NR_CPUS < 256)
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#define __TICKET_LOCK_INC	2
+#else
+#define __TICKET_LOCK_INC	1
+#endif
+
+#if (CONFIG_NR_CPUS < (256 / __TICKET_LOCK_INC))
 typedef u8  __ticket_t;
 typedef u16 __ticketpair_t;
 #else
@@ -11,6 +17,8 @@ typedef u16 __ticket_t;
 typedef u32 __ticketpair_t;
 #endif
 
+#define TICKET_LOCK_INC	((__ticket_t)__TICKET_LOCK_INC)
+
 #define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
 
 typedef struct arch_spinlock {



* [PATCH RFC V8 9/17]  Split out rate limiting from jump_label.h
  2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
                   ` (7 preceding siblings ...)
  2012-05-02 10:07 ` [PATCH RFC V8 8/17] x86/pvticketlock: When paravirtualizing ticket locks, increment by 2 Raghavendra K T
@ 2012-05-02 10:08 ` Raghavendra K T
  2012-05-02 10:08 ` [PATCH RFC V8 10/17] x86/ticketlock: Add slowpath logic Raghavendra K T
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-02 10:08 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Avi Kivity, X86, Gleb Natapov, Ingo Molnar,
	Marcelo Tosatti
  Cc: Attilio Rao, Srivatsa Vaddagiri, linux-doc, Virtualization,
	Xen Devel, Linus Torvalds, KVM, Andi Kleen, Raghavendra K T,
	Stefano Stabellini, Stephan Diestelhorst, LKML

From: Andrew Jones <drjones@redhat.com>

Commit b202952075f62603bea9bfb6ebc6b0420db11949 introduced rate limiting
for jump label disabling. The changes were made in the jump label code
in order to be more widely available and to keep things tidier. This is
all fine, except now jump_label.h includes linux/workqueue.h, which
makes it impossible to include jump_label.h from anything that
workqueue.h needs. For example, it's now impossible to include
jump_label.h from asm/spinlock.h, which is done in proposed
pv-ticketlock patches. This patch splits out the rate limiting related
changes from jump_label.h into a new file, jump_label_ratelimit.h, to
resolve the issue.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 include/linux/jump_label.h           |   26 +-------------------------
 include/linux/jump_label_ratelimit.h |   34 ++++++++++++++++++++++++++++++++++
 include/linux/perf_event.h           |    1 +
 kernel/jump_label.c                  |    1 +
 4 files changed, 37 insertions(+), 25 deletions(-)
diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index c513a40..8195227 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -49,7 +49,6 @@
 
 #include <linux/types.h>
 #include <linux/compiler.h>
-#include <linux/workqueue.h>
 
 #if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL)
 
@@ -62,12 +61,6 @@ struct static_key {
 #endif
 };
 
-struct static_key_deferred {
-	struct static_key key;
-	unsigned long timeout;
-	struct delayed_work work;
-};
-
 # include <asm/jump_label.h>
 # define HAVE_JUMP_LABEL
 #endif	/* CC_HAVE_ASM_GOTO && CONFIG_JUMP_LABEL */
@@ -126,10 +119,7 @@ extern void arch_jump_label_transform_static(struct jump_entry *entry,
 extern int jump_label_text_reserved(void *start, void *end);
 extern void static_key_slow_inc(struct static_key *key);
 extern void static_key_slow_dec(struct static_key *key);
-extern void static_key_slow_dec_deferred(struct static_key_deferred *key);
 extern void jump_label_apply_nops(struct module *mod);
-extern void
-jump_label_rate_limit(struct static_key_deferred *key, unsigned long rl);
 
 #define STATIC_KEY_INIT_TRUE ((struct static_key) \
 	{ .enabled = ATOMIC_INIT(1), .entries = (void *)1 })
@@ -148,10 +138,6 @@ static __always_inline void jump_label_init(void)
 {
 }
 
-struct static_key_deferred {
-	struct static_key  key;
-};
-
 static __always_inline bool static_key_false(struct static_key *key)
 {
 	if (unlikely(atomic_read(&key->enabled)) > 0)
@@ -184,11 +170,6 @@ static inline void static_key_slow_dec(struct static_key *key)
 	atomic_dec(&key->enabled);
 }
 
-static inline void static_key_slow_dec_deferred(struct static_key_deferred *key)
-{
-	static_key_slow_dec(&key->key);
-}
-
 static inline int jump_label_text_reserved(void *start, void *end)
 {
 	return 0;
@@ -202,12 +183,6 @@ static inline int jump_label_apply_nops(struct module *mod)
 	return 0;
 }
 
-static inline void
-jump_label_rate_limit(struct static_key_deferred *key,
-		unsigned long rl)
-{
-}
-
 #define STATIC_KEY_INIT_TRUE ((struct static_key) \
 		{ .enabled = ATOMIC_INIT(1) })
 #define STATIC_KEY_INIT_FALSE ((struct static_key) \
@@ -218,6 +193,7 @@ jump_label_rate_limit(struct static_key_deferred *key,
 #define STATIC_KEY_INIT STATIC_KEY_INIT_FALSE
 #define jump_label_enabled static_key_enabled
 
+static inline int atomic_read(const atomic_t *v);
 static inline bool static_key_enabled(struct static_key *key)
 {
 	return (atomic_read(&key->enabled) > 0);
diff --git a/include/linux/jump_label_ratelimit.h b/include/linux/jump_label_ratelimit.h
new file mode 100644
index 0000000..1137883
--- /dev/null
+++ b/include/linux/jump_label_ratelimit.h
@@ -0,0 +1,34 @@
+#ifndef _LINUX_JUMP_LABEL_RATELIMIT_H
+#define _LINUX_JUMP_LABEL_RATELIMIT_H
+
+#include <linux/jump_label.h>
+#include <linux/workqueue.h>
+
+#if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL)
+struct static_key_deferred {
+	struct static_key key;
+	unsigned long timeout;
+	struct delayed_work work;
+};
+#endif
+
+#ifdef HAVE_JUMP_LABEL
+extern void static_key_slow_dec_deferred(struct static_key_deferred *key);
+extern void
+jump_label_rate_limit(struct static_key_deferred *key, unsigned long rl);
+
+#else	/* !HAVE_JUMP_LABEL */
+struct static_key_deferred {
+	struct static_key  key;
+};
+static inline void static_key_slow_dec_deferred(struct static_key_deferred *key)
+{
+	static_key_slow_dec(&key->key);
+}
+static inline void
+jump_label_rate_limit(struct static_key_deferred *key,
+		unsigned long rl)
+{
+}
+#endif	/* HAVE_JUMP_LABEL */
+#endif	/* _LINUX_JUMP_LABEL_RATELIMIT_H */
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index ddbb6a9..a0e6118 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -605,6 +605,7 @@ struct perf_guest_info_callbacks {
 #include <linux/cpu.h>
 #include <linux/irq_work.h>
 #include <linux/static_key.h>
+#include <linux/jump_label_ratelimit.h>
 #include <linux/atomic.h>
 #include <linux/sysfs.h>
 #include <asm/local.h>
diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index 4304919..e17f8d6 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -13,6 +13,7 @@
 #include <linux/sort.h>
 #include <linux/err.h>
 #include <linux/static_key.h>
+#include <linux/jump_label_ratelimit.h>
 
 #ifdef HAVE_JUMP_LABEL
 


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH RFC V8 10/17]  x86/ticketlock: Add slowpath logic
  2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
                   ` (8 preceding siblings ...)
  2012-05-02 10:08 ` [PATCH RFC V8 9/17] Split out rate limiting from jump_label.h Raghavendra K T
@ 2012-05-02 10:08 ` Raghavendra K T
  2012-05-02 10:08 ` [PATCH RFC V8 11/17] xen/pvticketlock: Allow interrupts to be enabled while blocking Raghavendra K T
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-02 10:08 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Avi Kivity
  Cc: Attilio Rao, Srivatsa Vaddagiri, Linus Torvalds, Virtualization,
	Xen Devel, linux-doc, KVM, Andi Kleen, Raghavendra K T,
	Stefano Stabellini, Stephan Diestelhorst, LKML

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Maintain a flag in the LSB of the ticket lock tail which indicates
whether anyone is in the lock slowpath and may need kicking when
the current holder unlocks.  The flag is set when the first locker
enters the slowpath, and cleared when unlocking to an empty queue
(i.e., no contention).

In the specific implementation of lock_spinning(), make sure to set
the slowpath flag on the lock just before blocking.  We must do
this before the last-chance pickup test to prevent a deadlock
with the unlocker:

Unlocker			Locker
				test for lock pickup
					-> fail
unlock
test slowpath
	-> false
				set slowpath flags
				block

Whereas this works in any ordering:

Unlocker			Locker
				set slowpath flags
				test for lock pickup
					-> fail
				block
unlock
test slowpath
	-> true, kick

If the unlocker finds that the lock has the slowpath flag set but it is
actually uncontended (i.e., head == tail, so nobody is waiting), then it
clears the slowpath flag.

The unlock code uses a locked add to update the head counter.  This also
acts as a full memory barrier, so it's safe to subsequently
read back the slowpath flag state, knowing that the updated lock is visible
to the other CPUs.  If it were an unlocked add, then the flag read may
just be forwarded from the store buffer before it was visible to the other
CPUs, which could result in a deadlock.

Unfortunately this means we need to do a locked instruction when
unlocking with PV ticketlocks.  However, if PV ticketlocks are not
enabled, then the old non-locked "add" is the only unlocking code.

Note: this code relies on gcc making sure that unlikely() code is out of
line of the fastpath, which only happens when OPTIMIZE_SIZE=n.  If it
doesn't, the generated code isn't too bad, but it's definitely suboptimal.

Thanks to Srivatsa Vaddagiri for providing a bugfix to the original
version of this change, which has been folded in.
Thanks to Stephan Diestelhorst for commenting on some code which relied
on an inaccurate reading of the x86 memory ordering rules.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stephan Diestelhorst <stephan.diestelhorst@amd.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/include/asm/paravirt.h       |    2 +-
 arch/x86/include/asm/spinlock.h       |   86 +++++++++++++++++++++++---------
 arch/x86/include/asm/spinlock_types.h |    2 +
 arch/x86/kernel/paravirt-spinlocks.c  |    3 +
 arch/x86/xen/spinlock.c               |    6 ++
 5 files changed, 74 insertions(+), 25 deletions(-)
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 9769096..af49670 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -757,7 +757,7 @@ static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
 	PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
 							__ticket_t ticket)
 {
 	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 60b7e83..e6881fd 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -1,11 +1,14 @@
 #ifndef _ASM_X86_SPINLOCK_H
 #define _ASM_X86_SPINLOCK_H
 
+#include <linux/jump_label.h>
 #include <linux/atomic.h>
 #include <asm/page.h>
 #include <asm/processor.h>
 #include <linux/compiler.h>
 #include <asm/paravirt.h>
+#include <asm/bitops.h>
+
 /*
  * Your basic SMP spinlocks, allowing only a single CPU anywhere
  *
@@ -40,32 +43,28 @@
 /* How long a lock should spin before we consider blocking */
 #define SPIN_THRESHOLD	(1 << 11)
 
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
+extern struct static_key paravirt_ticketlocks_enabled;
+static __always_inline bool static_key_false(struct static_key *key);
 
-static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
-							__ticket_t ticket)
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+
+static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
 {
+	set_bit(0, (volatile unsigned long *)&lock->tickets.tail);
 }
 
-static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
-							 __ticket_t ticket)
+#else  /* !CONFIG_PARAVIRT_SPINLOCKS */
+static __always_inline void __ticket_lock_spinning(arch_spinlock_t *lock,
+							__ticket_t ticket)
 {
 }
-
-#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
-
-
-/*
- * If a spinlock has someone waiting on it, then kick the appropriate
- * waiting cpu.
- */
-static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
-							__ticket_t next)
+static inline void __ticket_unlock_kick(arch_spinlock_t *lock,
+							__ticket_t ticket)
 {
-	if (unlikely(lock->tickets.tail != next))
-		____ticket_unlock_kick(lock, next);
 }
 
+#endif /* CONFIG_PARAVIRT_SPINLOCKS */
+
 /*
  * Ticket locks are conceptually two parts, one indicating the current head of
  * the queue, and the other indicating the current tail. The lock is acquired
@@ -79,20 +78,22 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
  * in the high part, because a wide xadd increment of the low part would carry
  * up and contaminate the high part.
  */
-static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
+static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
 {
 	register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
 
 	inc = xadd(&lock->tickets, inc);
+	if (likely(inc.head == inc.tail))
+		goto out;
 
+	inc.tail &= ~TICKET_SLOWPATH_FLAG;
 	for (;;) {
 		unsigned count = SPIN_THRESHOLD;
 
 		do {
-			if (inc.head == inc.tail)
+			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
 				goto out;
 			cpu_relax();
-			inc.head = ACCESS_ONCE(lock->tickets.head);
 		} while (--count);
 		__ticket_lock_spinning(lock, inc.tail);
 	}
@@ -104,7 +105,7 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 	arch_spinlock_t old, new;
 
 	old.tickets = ACCESS_ONCE(lock->tickets);
-	if (old.tickets.head != old.tickets.tail)
+	if (old.tickets.head != (old.tickets.tail & ~TICKET_SLOWPATH_FLAG))
 		return 0;
 
 	new.head_tail = old.head_tail + (TICKET_LOCK_INC << TICKET_SHIFT);
@@ -113,12 +114,49 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 	return cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) == old.head_tail;
 }
 
+static inline void __ticket_unlock_slowpath(arch_spinlock_t *lock,
+					    arch_spinlock_t old)
+{
+	arch_spinlock_t new;
+
+	BUILD_BUG_ON(((__ticket_t)NR_CPUS) != NR_CPUS);
+
+	/* Perform the unlock on the "before" copy */
+	old.tickets.head += TICKET_LOCK_INC;
+
+	/* Clear the slowpath flag */
+	new.head_tail = old.head_tail & ~(TICKET_SLOWPATH_FLAG << TICKET_SHIFT);
+
+	/*
+	 * If the lock is uncontended, clear the flag - use cmpxchg in
+	 * case it changes behind our back though.
+	 */
+	if (new.tickets.head != new.tickets.tail ||
+	    cmpxchg(&lock->head_tail, old.head_tail,
+					new.head_tail) != old.head_tail) {
+		/*
+		 * Lock still has someone queued for it, so wake up an
+		 * appropriate waiter.
+		 */
+		__ticket_unlock_kick(lock, old.tickets.head);
+	}
+}
+
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-	__ticket_t next = lock->tickets.head + TICKET_LOCK_INC;
+	if (TICKET_SLOWPATH_FLAG &&
+	    static_key_false(&paravirt_ticketlocks_enabled)) {
+		arch_spinlock_t prev;
+
+		prev = *lock;
+		add_smp(&lock->tickets.head, TICKET_LOCK_INC);
+
+		/* add_smp() is a full mb() */
 
-	__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
-	__ticket_unlock_kick(lock, next);
+		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
+			__ticket_unlock_slowpath(lock, prev);
+	} else
+		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
 }
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index e96fcbd..4f1bea1 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -5,8 +5,10 @@
 
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 #define __TICKET_LOCK_INC	2
+#define TICKET_SLOWPATH_FLAG	((__ticket_t)1)
 #else
 #define __TICKET_LOCK_INC	1
+#define TICKET_SLOWPATH_FLAG	((__ticket_t)0)
 #endif
 
 #if (CONFIG_NR_CPUS < (256 / __TICKET_LOCK_INC))
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 4251c1d..bbb6c73 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -4,6 +4,7 @@
  */
 #include <linux/spinlock.h>
 #include <linux/module.h>
+#include <linux/jump_label.h>
 
 #include <asm/paravirt.h>
 
@@ -15,3 +16,5 @@ struct pv_lock_ops pv_lock_ops = {
 };
 EXPORT_SYMBOL(pv_lock_ops);
 
+struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;
+EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index c47a8d1..d4abaf9 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -155,6 +155,10 @@ static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	/* Only check lock once pending cleared */
 	barrier();
 
+	/* Mark entry to slowpath before doing the pickup test to make
+	   sure we don't deadlock with an unlocker. */
+	__ticket_enter_slowpath(lock);
+
 	/* check again make sure it didn't become free while
 	   we weren't looking  */
 	if (ACCESS_ONCE(lock->tickets.head) == want) {
@@ -231,6 +235,8 @@ void __init xen_init_spinlocks(void)
 		return;
 	}
 
+	static_key_slow_inc(&paravirt_ticketlocks_enabled);
+
 	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
 	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }



* [PATCH RFC V8 11/17]  xen/pvticketlock: Allow interrupts to be enabled while blocking
  2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
                   ` (9 preceding siblings ...)
  2012-05-02 10:08 ` [PATCH RFC V8 10/17] x86/ticketlock: Add slowpath logic Raghavendra K T
@ 2012-05-02 10:08 ` Raghavendra K T
  2012-05-02 10:08 ` [PATCH RFC V8 12/17] xen: Enable PV ticketlocks on HVM Xen Raghavendra K T
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-02 10:08 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Avi Kivity, X86, Gleb Natapov, Ingo Molnar,
	Marcelo Tosatti
  Cc: Attilio Rao, Srivatsa Vaddagiri, linux-doc, Virtualization,
	Xen Devel, Linus Torvalds, KVM, Andi Kleen, Raghavendra K T,
	Stefano Stabellini, Stephan Diestelhorst, LKML

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

If interrupts were enabled when taking the spinlock, we can leave them
enabled while blocking to get the lock.

If we enable interrupts while waiting for the lock to become
available, an interrupt may arrive before we enter the poll, and its
handler may take a spinlock which itself ends up going into the slow
state (invalidating the per-cpu "lock" and "want" values).  In that
case the event channel will remain pending when the handler returns,
so the poll will return immediately and we drop back out to the main
spinlock loop to retry.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/xen/spinlock.c |   46 ++++++++++++++++++++++++++++++++++++++++------
 1 files changed, 40 insertions(+), 6 deletions(-)
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index d4abaf9..3bf93d5 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -140,7 +140,20 @@ static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	 * partially setup state.
 	 */
 	local_irq_save(flags);
-
+	/*
+	 * We don't really care if we're overwriting some other
+	 * (lock,want) pair, as that would mean that we're currently
+	 * in an interrupt context, and the outer context had
+	 * interrupts enabled.  That has already kicked the VCPU out
+	 * of xen_poll_irq(), so it will just return spuriously and
+	 * retry with newly setup (lock,want).
+	 *
+	 * The ordering protocol on this is that the "lock" pointer
+	 * may only be set non-NULL if the "want" ticket is correct.
+	 * If we're updating "want", we must first clear "lock".
+	 */
+	w->lock = NULL;
+	smp_wmb();
 	w->want = want;
 	smp_wmb();
 	w->lock = lock;
@@ -155,24 +168,43 @@ static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	/* Only check lock once pending cleared */
 	barrier();
 
-	/* Mark entry to slowpath before doing the pickup test to make
-	   sure we don't deadlock with an unlocker. */
+	/*
+	 * Mark entry to slowpath before doing the pickup test to make
+	 * sure we don't deadlock with an unlocker.
+	 */
 	__ticket_enter_slowpath(lock);
 
-	/* check again make sure it didn't become free while
-	   we weren't looking  */
+	/*
+	 * check again make sure it didn't become free while
+	 * we weren't looking
+	 */
 	if (ACCESS_ONCE(lock->tickets.head) == want) {
 		add_stats(TAKEN_SLOW_PICKUP, 1);
 		goto out;
 	}
+
+	/* Allow interrupts while blocked */
+	local_irq_restore(flags);
+
+	/*
+	 * If an interrupt happens here, it will leave the wakeup irq
+	 * pending, which will cause xen_poll_irq() to return
+	 * immediately.
+	 */
+
 	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
 	xen_poll_irq(irq);
 	add_stats(TAKEN_SLOW_SPURIOUS, !xen_test_irq_pending(irq));
+
+	local_irq_save(flags);
+
 	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 out:
 	cpumask_clear_cpu(cpu, &waiting_cpus);
 	w->lock = NULL;
+
 	local_irq_restore(flags);
+
 	spin_time_accum_blocked(start);
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
@@ -186,7 +218,9 @@ static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 	for_each_cpu(cpu, &waiting_cpus) {
 		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
 
-		if (w->lock == lock && w->want == next) {
+		/* Make sure we read lock before want */
+		if (ACCESS_ONCE(w->lock) == lock &&
+		    ACCESS_ONCE(w->want) == next) {
 			add_stats(RELEASED_SLOW_KICKED, 1);
 			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
 			break;



* [PATCH RFC V8 12/17]  xen: Enable PV ticketlocks on HVM Xen
  2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
                   ` (10 preceding siblings ...)
  2012-05-02 10:08 ` [PATCH RFC V8 11/17] xen/pvticketlock: Allow interrupts to be enabled while blocking Raghavendra K T
@ 2012-05-02 10:08 ` Raghavendra K T
  2012-05-02 10:08 ` [PATCH RFC V8 13/17] kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks Raghavendra K T
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-02 10:08 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Avi Kivity
  Cc: Attilio Rao, Srivatsa Vaddagiri, Linus Torvalds, Virtualization,
	Xen Devel, linux-doc, KVM, Andi Kleen, Raghavendra K T,
	Stefano Stabellini, Stephan Diestelhorst, LKML

From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/xen/smp.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 7dc400a..6a7a3da 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -589,4 +589,5 @@ void __init xen_hvm_smp_init(void)
 	smp_ops.cpu_die = xen_hvm_cpu_die;
 	smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
 	smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
+	xen_init_spinlocks();
 }



* [PATCH RFC V8 13/17] kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks
  2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
                   ` (11 preceding siblings ...)
  2012-05-02 10:08 ` [PATCH RFC V8 12/17] xen: Enable PV ticketlocks on HVM Xen Raghavendra K T
@ 2012-05-02 10:08 ` Raghavendra K T
  2012-05-02 10:09 ` [PATCH RFC V8 14/17] kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration Raghavendra K T
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-02 10:08 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Avi Kivity, X86, Gleb Natapov, Ingo Molnar,
	Marcelo Tosatti
  Cc: Attilio Rao, Srivatsa Vaddagiri, linux-doc, Virtualization,
	Xen Devel, Linus Torvalds, KVM, Andi Kleen, Raghavendra K T,
	Stefano Stabellini, Stephan Diestelhorst, LKML

From: srivatsa vaddagiri <vatsa@linux.vnet.ibm.com>

KVM_HC_KICK_CPU allows the calling vcpu to kick another vcpu out of
halt state.

The presence of this hypercall is indicated to the guest via
KVM_FEATURE_PV_UNHALT.

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Suzuki Poulose <suzuki@in.ibm.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/include/asm/kvm_host.h |    4 ++++
 arch/x86/include/asm/kvm_para.h |    2 ++
 arch/x86/kvm/cpuid.c            |    3 ++-
 arch/x86/kvm/x86.c              |   37 +++++++++++++++++++++++++++++++++++++
 include/linux/kvm_para.h        |    1 +
 5 files changed, 46 insertions(+), 1 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e216ba0..e187a9b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -481,6 +481,10 @@ struct kvm_vcpu_arch {
 		u64 length;
 		u64 status;
 	} osvw;
+	/* pv related host specific info */
+	struct {
+		bool pv_unhalted;
+	} pv;
 };
 
 struct kvm_lpage_info {
diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
index 734c376..5b647ea 100644
--- a/arch/x86/include/asm/kvm_para.h
+++ b/arch/x86/include/asm/kvm_para.h
@@ -16,12 +16,14 @@
 #define KVM_FEATURE_CLOCKSOURCE		0
 #define KVM_FEATURE_NOP_IO_DELAY	1
 #define KVM_FEATURE_MMU_OP		2
+
 /* This indicates that the new set of kvmclock msrs
  * are available. The use of 0x11 and 0x12 is deprecated
  */
 #define KVM_FEATURE_CLOCKSOURCE2        3
 #define KVM_FEATURE_ASYNC_PF		4
 #define KVM_FEATURE_STEAL_TIME		5
+#define KVM_FEATURE_PV_UNHALT		6
 
 /* The last 8 bits are used to indicate how to interpret the flags field
  * in pvclock structure. If no bits are set, all flags are ignored.
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 9fed5be..7c93806 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -408,7 +408,8 @@ static int do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 			     (1 << KVM_FEATURE_NOP_IO_DELAY) |
 			     (1 << KVM_FEATURE_CLOCKSOURCE2) |
 			     (1 << KVM_FEATURE_ASYNC_PF) |
-			     (1 << KVM_FEATURE_CLOCKSOURCE_STABLE_BIT);
+			     (1 << KVM_FEATURE_CLOCKSOURCE_STABLE_BIT) |
+			     (1 << KVM_FEATURE_PV_UNHALT);
 
 		if (sched_info_on())
 			entry->eax |= (1 << KVM_FEATURE_STEAL_TIME);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 91a5e98..f188cdc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4993,6 +4993,36 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+/*
+ * kvm_pv_kick_cpu_op:  Kick a vcpu.
+ *
+ * @apicid - apicid of vcpu to be kicked.
+ */
+static void kvm_pv_kick_cpu_op(struct kvm *kvm, int apicid)
+{
+	struct kvm_vcpu *vcpu = NULL;
+	int i;
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		if (!kvm_apic_present(vcpu))
+			continue;
+
+		if (kvm_apic_match_dest(vcpu, 0, 0, apicid, 0))
+			break;
+	}
+	if (vcpu) {
+		/*
+		 * Setting unhalt flag here can result in spurious runnable
+		 * state when unhalt reset does not happen in vcpu_block.
+		 * But that is harmless since that should soon result in halt.
+		 */
+		vcpu->arch.pv.pv_unhalted = true;
+		/* We need everybody see unhalt before vcpu unblocks */
+		smp_wmb();
+		kvm_vcpu_kick(vcpu);
+	}
+}
+
 int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 {
 	unsigned long nr, a0, a1, a2, a3, ret;
@@ -5026,6 +5056,10 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 	case KVM_HC_VAPIC_POLL_IRQ:
 		ret = 0;
 		break;
+	case KVM_HC_KICK_CPU:
+		kvm_pv_kick_cpu_op(vcpu->kvm, a0);
+		ret = 0;
+		break;
 	default:
 		ret = -KVM_ENOSYS;
 		break;
@@ -5409,6 +5443,7 @@ static int __vcpu_run(struct kvm_vcpu *vcpu)
 			{
 				switch(vcpu->arch.mp_state) {
 				case KVM_MP_STATE_HALTED:
+					vcpu->arch.pv.pv_unhalted = false;
 					vcpu->arch.mp_state =
 						KVM_MP_STATE_RUNNABLE;
 				case KVM_MP_STATE_RUNNABLE:
@@ -6128,6 +6163,7 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 	BUG_ON(vcpu->kvm == NULL);
 	kvm = vcpu->kvm;
 
+	vcpu->arch.pv.pv_unhalted = false;
 	vcpu->arch.emulate_ctxt.ops = &emulate_ops;
 	if (!irqchip_in_kernel(kvm) || kvm_vcpu_is_bsp(vcpu))
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
@@ -6394,6 +6430,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
 		!vcpu->arch.apf.halted)
 		|| !list_empty_careful(&vcpu->async_pf.done)
 		|| vcpu->arch.mp_state == KVM_MP_STATE_SIPI_RECEIVED
+		|| vcpu->arch.pv.pv_unhalted
 		|| atomic_read(&vcpu->arch.nmi_queued) ||
 		(kvm_arch_interrupt_allowed(vcpu) &&
 		 kvm_cpu_has_interrupt(vcpu));
diff --git a/include/linux/kvm_para.h b/include/linux/kvm_para.h
index ff476dd..38226e1 100644
--- a/include/linux/kvm_para.h
+++ b/include/linux/kvm_para.h
@@ -19,6 +19,7 @@
 #define KVM_HC_MMU_OP			2
 #define KVM_HC_FEATURES			3
 #define KVM_HC_PPC_MAP_MAGIC_PAGE	4
+#define KVM_HC_KICK_CPU			5
 
 /*
  * hypercalls use architecture specific



* [PATCH RFC V8 14/17] kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration
  2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
                   ` (12 preceding siblings ...)
  2012-05-02 10:08 ` [PATCH RFC V8 13/17] kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks Raghavendra K T
@ 2012-05-02 10:09 ` Raghavendra K T
  2012-05-02 10:09 ` [PATCH RFC V8 15/17] kvm guest : Add configuration support to enable debug information for KVM Guests Raghavendra K T
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-02 10:09 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Avi Kivity
  Cc: Attilio Rao, Srivatsa Vaddagiri, Linus Torvalds, Virtualization,
	Xen Devel, linux-doc, KVM, Andi Kleen, Raghavendra K T,
	Stefano Stabellini, Stephan Diestelhorst, LKML

From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>

During migration, any vcpu that got kicked but did not become runnable
(still in halted state) should be runnable after migration.

Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/kvm/x86.c |    7 ++++++-
 1 files changed, 6 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f188cdc..5b09b67 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5691,7 +5691,12 @@ int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
 int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
 				    struct kvm_mp_state *mp_state)
 {
-	mp_state->mp_state = vcpu->arch.mp_state;
+	if (vcpu->arch.mp_state == KVM_MP_STATE_HALTED &&
+					vcpu->arch.pv.pv_unhalted)
+		mp_state->mp_state = KVM_MP_STATE_RUNNABLE;
+	else
+		mp_state->mp_state = vcpu->arch.mp_state;
+
 	return 0;
 }
 



* [PATCH RFC V8 15/17] kvm guest : Add configuration support to enable debug information for KVM Guests
  2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
                   ` (13 preceding siblings ...)
  2012-05-02 10:09 ` [PATCH RFC V8 14/17] kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration Raghavendra K T
@ 2012-05-02 10:09 ` Raghavendra K T
  2012-05-02 10:09 ` [PATCH RFC V8 16/17] kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor Raghavendra K T
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-02 10:09 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Avi Kivity, X86, Gleb Natapov, Ingo Molnar,
	Marcelo Tosatti
  Cc: Attilio Rao, Srivatsa Vaddagiri, linux-doc, Virtualization,
	Xen Devel, Linus Torvalds, KVM, Andi Kleen, Raghavendra K T,
	Stefano Stabellini, Stephan Diestelhorst, LKML

From: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Suzuki Poulose <suzuki@in.ibm.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/Kconfig |    9 +++++++++
 1 files changed, 9 insertions(+), 0 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 35eb2e4..a9ec0da 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -584,6 +584,15 @@ config KVM_GUEST
 	  This option enables various optimizations for running under the KVM
 	  hypervisor.
 
+config KVM_DEBUG_FS
+	bool "Enable debug information for KVM Guests in debugfs"
+	depends on KVM_GUEST && DEBUG_FS
+	default n
+	---help---
+	  This option enables collection of various statistics for KVM guest.
+   	  Statistics are displayed in debugfs filesystem. Enabling this option
+	  may incur significant overhead.
+
 source "arch/x86/lguest/Kconfig"
 
 config PARAVIRT



* [PATCH RFC V8 16/17] kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor
  2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
                   ` (14 preceding siblings ...)
  2012-05-02 10:09 ` [PATCH RFC V8 15/17] kvm guest : Add configuration support to enable debug information for KVM Guests Raghavendra K T
@ 2012-05-02 10:09 ` Raghavendra K T
  2012-05-02 10:09 ` [PATCH RFC V8 17/17] Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock Raghavendra K T
  2012-05-07  8:29 ` [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Ingo Molnar
  17 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-02 10:09 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Avi Kivity
  Cc: Attilio Rao, Srivatsa Vaddagiri, Linus Torvalds, Virtualization,
	Xen Devel, linux-doc, KVM, Andi Kleen, Raghavendra K T,
	Stefano Stabellini, Stephan Diestelhorst, LKML

From: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>

During smp_boot_cpus, a paravirtualized KVM guest detects whether the
hypervisor has the required feature (KVM_FEATURE_PV_UNHALT) to support
pv-ticketlocks.  If so, support for pv-ticketlocks is registered via
pv_lock_ops.

Use the KVM_HC_KICK_CPU hypercall to wake up a waiting/halted vcpu.

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Suzuki Poulose <suzuki@in.ibm.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/include/asm/kvm_para.h |   14 ++-
 arch/x86/kernel/kvm.c           |  256 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 268 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
index 5b647ea..77266d3 100644
--- a/arch/x86/include/asm/kvm_para.h
+++ b/arch/x86/include/asm/kvm_para.h
@@ -195,10 +195,20 @@ void kvm_async_pf_task_wait(u32 token);
 void kvm_async_pf_task_wake(u32 token);
 u32 kvm_read_and_reset_pf_reason(void);
 extern void kvm_disable_steal_time(void);
-#else
-#define kvm_guest_init() do { } while (0)
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+void __init kvm_spinlock_init(void);
+#else /* !CONFIG_PARAVIRT_SPINLOCKS */
+static inline void kvm_spinlock_init(void)
+{
+}
+#endif /* CONFIG_PARAVIRT_SPINLOCKS */
+
+#else /* CONFIG_KVM_GUEST */
+#define kvm_guest_init() do {} while (0)
 #define kvm_async_pf_task_wait(T) do {} while(0)
 #define kvm_async_pf_task_wake(T) do {} while(0)
+
 static inline u32 kvm_read_and_reset_pf_reason(void)
 {
 	return 0;
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index b8ba6e4..7c46567 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -33,6 +33,7 @@
 #include <linux/sched.h>
 #include <linux/slab.h>
 #include <linux/kprobes.h>
+#include <linux/debugfs.h>
 #include <asm/timer.h>
 #include <asm/cpu.h>
 #include <asm/traps.h>
@@ -368,6 +369,7 @@ static void __init kvm_smp_prepare_boot_cpu(void)
 #endif
 	kvm_guest_cpu_init();
 	native_smp_prepare_boot_cpu();
+	kvm_spinlock_init();
 }
 
 static void __cpuinit kvm_guest_cpu_online(void *dummy)
@@ -450,3 +452,257 @@ static __init int activate_jump_labels(void)
 	return 0;
 }
 arch_initcall(activate_jump_labels);
+
+/* Kick a cpu by its apicid. Used to wake up a halted vcpu */
+void kvm_kick_cpu(int cpu)
+{
+	int apicid;
+
+	apicid = per_cpu(x86_cpu_to_apicid, cpu);
+	kvm_hypercall1(KVM_HC_KICK_CPU, apicid);
+}
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+
+enum kvm_contention_stat {
+	TAKEN_SLOW,
+	TAKEN_SLOW_PICKUP,
+	RELEASED_SLOW,
+	RELEASED_SLOW_KICKED,
+	NR_CONTENTION_STATS
+};
+
+#ifdef CONFIG_KVM_DEBUG_FS
+#define HISTO_BUCKETS	30
+
+static struct kvm_spinlock_stats
+{
+	u32 contention_stats[NR_CONTENTION_STATS];
+	u32 histo_spin_blocked[HISTO_BUCKETS+1];
+	u64 time_blocked;
+} spinlock_stats;
+
+static u8 zero_stats;
+
+static inline void check_zero(void)
+{
+	u8 ret;
+	u8 old;
+
+	old = ACCESS_ONCE(zero_stats);
+	if (unlikely(old)) {
+		ret = cmpxchg(&zero_stats, old, 0);
+		/* This ensures only one fellow resets the stat */
+		if (ret == old)
+			memset(&spinlock_stats, 0, sizeof(spinlock_stats));
+	}
+}
+
+static inline void add_stats(enum kvm_contention_stat var, u32 val)
+{
+	check_zero();
+	spinlock_stats.contention_stats[var] += val;
+}
+
+
+static inline u64 spin_time_start(void)
+{
+	return sched_clock();
+}
+
+static void __spin_time_accum(u64 delta, u32 *array)
+{
+	unsigned index;
+
+	index = ilog2(delta);
+	check_zero();
+
+	if (index < HISTO_BUCKETS)
+		array[index]++;
+	else
+		array[HISTO_BUCKETS]++;
+}
+
+static inline void spin_time_accum_blocked(u64 start)
+{
+	u32 delta;
+
+	delta = sched_clock() - start;
+	__spin_time_accum(delta, spinlock_stats.histo_spin_blocked);
+	spinlock_stats.time_blocked += delta;
+}
+
+static struct dentry *d_spin_debug;
+static struct dentry *d_kvm_debug;
+
+struct dentry *kvm_init_debugfs(void)
+{
+	d_kvm_debug = debugfs_create_dir("kvm", NULL);
+	if (!d_kvm_debug)
+		printk(KERN_WARNING "Could not create 'kvm' debugfs directory\n");
+
+	return d_kvm_debug;
+}
+
+static int __init kvm_spinlock_debugfs(void)
+{
+	struct dentry *d_kvm;
+
+	d_kvm = kvm_init_debugfs();
+	if (d_kvm == NULL)
+		return -ENOMEM;
+
+	d_spin_debug = debugfs_create_dir("spinlocks", d_kvm);
+
+	debugfs_create_u8("zero_stats", 0644, d_spin_debug, &zero_stats);
+
+	debugfs_create_u32("taken_slow", 0444, d_spin_debug,
+		   &spinlock_stats.contention_stats[TAKEN_SLOW]);
+	debugfs_create_u32("taken_slow_pickup", 0444, d_spin_debug,
+		   &spinlock_stats.contention_stats[TAKEN_SLOW_PICKUP]);
+
+	debugfs_create_u32("released_slow", 0444, d_spin_debug,
+		   &spinlock_stats.contention_stats[RELEASED_SLOW]);
+	debugfs_create_u32("released_slow_kicked", 0444, d_spin_debug,
+		   &spinlock_stats.contention_stats[RELEASED_SLOW_KICKED]);
+
+	debugfs_create_u64("time_blocked", 0444, d_spin_debug,
+			   &spinlock_stats.time_blocked);
+
+	debugfs_create_u32_array("histo_blocked", 0444, d_spin_debug,
+		     spinlock_stats.histo_spin_blocked, HISTO_BUCKETS + 1);
+
+	return 0;
+}
+fs_initcall(kvm_spinlock_debugfs);
+#else  /* !CONFIG_KVM_DEBUG_FS */
+#define TIMEOUT			(1 << 10)
+static inline void add_stats(enum kvm_contention_stat var, u32 val)
+{
+}
+
+static inline u64 spin_time_start(void)
+{
+	return 0;
+}
+
+static inline void spin_time_accum_blocked(u64 start)
+{
+}
+#endif  /* CONFIG_KVM_DEBUG_FS */
+
+struct kvm_lock_waiting {
+	struct arch_spinlock *lock;
+	__ticket_t want;
+};
+
+/* cpus 'waiting' on a spinlock to become available */
+static cpumask_t waiting_cpus;
+
+/* Track spinlock on which a cpu is waiting */
+static DEFINE_PER_CPU(struct kvm_lock_waiting, lock_waiting);
+
+static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
+{
+	struct kvm_lock_waiting *w;
+	int cpu;
+	u64 start;
+	unsigned long flags;
+
+	w = &__get_cpu_var(lock_waiting);
+	cpu = smp_processor_id();
+	start = spin_time_start();
+
+	/*
+	 * Make sure an interrupt handler can't upset things in a
+	 * partially setup state.
+	 */
+	local_irq_save(flags);
+
+	/*
+	 * The ordering protocol on this is that the "lock" pointer
+	 * may only be set non-NULL if the "want" ticket is correct.
+	 * If we're updating "want", we must first clear "lock".
+	 */
+	w->lock = NULL;
+	smp_wmb();
+	w->want = want;
+	smp_wmb();
+	w->lock = lock;
+
+	add_stats(TAKEN_SLOW, 1);
+
+	/*
+	 * This uses set_bit, which is atomic but we should not rely on its
+	 * reordering guarantees, so a barrier is needed after this call.
+	 */
+	cpumask_set_cpu(cpu, &waiting_cpus);
+
+	barrier();
+
+	/*
+	 * Mark entry to slowpath before doing the pickup test to make
+	 * sure we don't deadlock with an unlocker.
+	 */
+	__ticket_enter_slowpath(lock);
+
+	/*
+	 * Check again to make sure it didn't become free while
+	 * we weren't looking.
+	 */
+	if (ACCESS_ONCE(lock->tickets.head) == want) {
+		add_stats(TAKEN_SLOW_PICKUP, 1);
+		goto out;
+	}
+
+	/* Allow interrupts while blocked */
+	local_irq_restore(flags);
+
+	/* halt until it's our turn and kicked. */
+	halt();
+
+	local_irq_save(flags);
+out:
+	cpumask_clear_cpu(cpu, &waiting_cpus);
+	w->lock = NULL;
+	local_irq_restore(flags);
+	spin_time_accum_blocked(start);
+}
+PV_CALLEE_SAVE_REGS_THUNK(kvm_lock_spinning);
+
+/* Kick vcpu waiting on @lock->head to reach value @ticket */
+static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
+{
+	int cpu;
+
+	add_stats(RELEASED_SLOW, 1);
+	for_each_cpu(cpu, &waiting_cpus) {
+		const struct kvm_lock_waiting *w = &per_cpu(lock_waiting, cpu);
+		if (ACCESS_ONCE(w->lock) == lock &&
+		    ACCESS_ONCE(w->want) == ticket) {
+			add_stats(RELEASED_SLOW_KICKED, 1);
+			kvm_kick_cpu(cpu);
+			break;
+		}
+	}
+}
+
+/*
+ * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
+ */
+void __init kvm_spinlock_init(void)
+{
+	if (!kvm_para_available())
+		return;
+	/* Does host kernel support KVM_FEATURE_PV_UNHALT? */
+	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
+		return;
+
+	printk(KERN_INFO "KVM setup paravirtual spinlock\n");
+
+	static_key_slow_inc(&paravirt_ticketlocks_enabled);
+
+	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
+	pv_lock_ops.unlock_kick = kvm_unlock_kick;
+}
+#endif	/* CONFIG_PARAVIRT_SPINLOCKS */


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH RFC V8 17/17] Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
  2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
                   ` (15 preceding siblings ...)
  2012-05-02 10:09 ` [PATCH RFC V8 16/17] kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor Raghavendra K T
@ 2012-05-02 10:09 ` Raghavendra K T
  2012-05-30 11:54   ` Jan Kiszka
  2012-05-07  8:29 ` [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Ingo Molnar
  17 siblings, 1 reply; 53+ messages in thread
From: Raghavendra K T @ 2012-05-02 10:09 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Avi Kivity, X86, Gleb Natapov, Ingo Molnar,
	Marcelo Tosatti
  Cc: Attilio Rao, Srivatsa Vaddagiri, linux-doc, Virtualization,
	Xen Devel, Linus Torvalds, KVM, Andi Kleen, Raghavendra K T,
	Stefano Stabellini, Stephan Diestelhorst, LKML

From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com> 

The KVM_HC_KICK_CPU hypercall is added to wake up a halted vcpu in a
paravirtual-spinlock-enabled guest.

KVM_FEATURE_PV_UNHALT lets the guest check whether pv spinlocks can be
enabled in the guest.

Thanks to Alex for the KVM_HC_FEATURES inputs and to Vatsa for rewriting KVM_HC_KICK_CPU.

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 Documentation/virtual/kvm/cpuid.txt      |    4 ++
 Documentation/virtual/kvm/hypercalls.txt |   60 ++++++++++++++++++++++++++++++
 2 files changed, 64 insertions(+), 0 deletions(-)
diff --git a/Documentation/virtual/kvm/cpuid.txt b/Documentation/virtual/kvm/cpuid.txt
index 8820685..062dff9 100644
--- a/Documentation/virtual/kvm/cpuid.txt
+++ b/Documentation/virtual/kvm/cpuid.txt
@@ -39,6 +39,10 @@ KVM_FEATURE_CLOCKSOURCE2           ||     3 || kvmclock available at msrs
 KVM_FEATURE_ASYNC_PF               ||     4 || async pf can be enabled by
                                    ||       || writing to msr 0x4b564d02
 ------------------------------------------------------------------------------
+KVM_FEATURE_PV_UNHALT              ||     6 || guest checks this feature bit
+                                   ||       || before enabling paravirtualized
+                                   ||       || spinlock support.
+------------------------------------------------------------------------------
 KVM_FEATURE_CLOCKSOURCE_STABLE_BIT ||    24 || host will warn if no guest-side
                                    ||       || per-cpu warps are expected in
                                    ||       || kvmclock.
diff --git a/Documentation/virtual/kvm/hypercalls.txt b/Documentation/virtual/kvm/hypercalls.txt
new file mode 100644
index 0000000..bc3f14a
--- /dev/null
+++ b/Documentation/virtual/kvm/hypercalls.txt
@@ -0,0 +1,60 @@
+KVM Hypercalls Documentation
+============================
+The template for each hypercall is:
+1. Hypercall name, value.
+2. Architecture(s)
+3. Status (deprecated, obsolete, active)
+4. Purpose
+
+1. KVM_HC_VAPIC_POLL_IRQ
+------------------------
+Value: 1
+Architecture: x86
+Purpose: None
+
+2. KVM_HC_MMU_OP
+------------------------
+Value: 2
+Architecture: x86
+Status: deprecated
+Purpose: Support MMU operations such as writing to PTE,
+flushing TLB, release PT.
+
+3. KVM_HC_FEATURES
+------------------------
+Value: 3
+Architecture: PPC
+Status: active
+Purpose: Expose hypercall availability to the guest. On x86 platforms, cpuid
+is used to enumerate which hypercalls are available. On PPC, either a device
+tree based lookup (which is also what ePAPR dictates) or a KVM-specific
+enumeration mechanism (this hypercall) can be used.
+
+4. KVM_HC_PPC_MAP_MAGIC_PAGE
+------------------------
+Value: 4
+Architecture: PPC
+Status: active
+Purpose: To enable communication between the hypervisor and guest, there is a
+shared page that contains parts of supervisor-visible register state.
+The guest can map this shared page to access its supervisor registers through
+memory using this hypercall.
+
+5. KVM_HC_KICK_CPU
+------------------------
+Value: 5
+Architecture: x86
+Status: active
+Purpose: Hypercall used to wake up a vcpu from HLT state
+
+Usage example: A vcpu of a paravirtualized guest that is busy-waiting in guest
+kernel mode for an event to occur (e.g. a spinlock to become available) can
+execute the HLT instruction once it has busy-waited for more than a threshold
+time interval. Executing HLT causes the hypervisor to put the vcpu to sleep
+until an appropriate event occurs. Another vcpu of the same guest can wake up
+the sleeping vcpu by issuing the KVM_HC_KICK_CPU hypercall, specifying the
+APIC ID of the vcpu to be woken up.
+
+TODO:
+1. more information on input and output needed?
+2. Add more detail to purpose of hypercalls.


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
                   ` (16 preceding siblings ...)
  2012-05-02 10:09 ` [PATCH RFC V8 17/17] Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock Raghavendra K T
@ 2012-05-07  8:29 ` Ingo Molnar
  2012-05-07  8:32   ` Avi Kivity
  17 siblings, 1 reply; 53+ messages in thread
From: Ingo Molnar @ 2012-05-07  8:29 UTC (permalink / raw)
  To: Raghavendra K T, Linus Torvalds, Andrew Morton, Avi Kivity
  Cc: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Avi Kivity, Attilio Rao, Srivatsa Vaddagiri, Linus Torvalds,
	Virtualization, Xen Devel, linux-doc, KVM, Andi Kleen,
	Stefano Stabellini, Stephan Diestelhorst, LKML, Peter Zijlstra,
	Thomas Gleixner


* Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com> wrote:

> This series replaces the existing paravirtualized spinlock mechanism
> with a paravirtualized ticketlock mechanism. The series provides
> implementation for both Xen and KVM.(targeted for 3.5 window)
> 
> Note: This needs debugfs changes patch that should be in Xen / linux-next
>    https://lkml.org/lkml/2012/3/30/687
> 
> Changes in V8:
>  - Rebased patches to 3.4-rc4
>  - Combined the KVM changes with ticketlock + Xen changes (Ingo)
>  - Removed CAP_PV_UNHALT since it is redundant (Avi). But note that we
>     need newer qemu which uses KVM_GET_SUPPORTED_CPUID ioctl.
>  - Rewrite GET_MP_STATE condition (Avi)
>  - Make pv_unhalt = bool (Avi)
>  - Move out reset pv_unhalt code to vcpu_run from vcpu_block (Gleb)
>  - Documentation changes (Rob Landley)
>  - Have a printk to recognize that paravirt spinlock is enabled (Nikunj)
>  - Move out kick hypercall out of CONFIG_PARAVIRT_SPINLOCK now
>    so that it can be used for other optimizations such as 
>    flush_tlb_ipi_others etc. (Nikunj)
> 
> Ticket locks have an inherent problem in a virtualized case, because
> the vCPUs are scheduled rather than running concurrently (ignoring
> gang scheduled vCPUs).  This can result in catastrophic performance
> collapses when the vCPU scheduler doesn't schedule the correct "next"
> vCPU, and ends up scheduling a vCPU which burns its entire timeslice
> spinning.  (Note that this is not the same problem as lock-holder
> preemption, which this series also addresses; that's also a problem,
> but not catastrophic).
> 
> (See Thomas Friebel's talk "Prevent Guests from Spinning Around"
> http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)
> 
> Currently we deal with this by having PV spinlocks, which adds a layer
> of indirection in front of all the spinlock functions, and defining a
> completely new implementation for Xen (and for other pvops users, but
> there are none at present).
> 
> PV ticketlocks keep the existing ticketlock implementation
> (fastpath) as-is, but add a couple of pvops for the slow paths:
> 
> - If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
>   iterations, then call out to the __ticket_lock_spinning() pvop,
>   which allows a backend to block the vCPU rather than spinning.  This
>   pvop can set the lock into "slowpath state".
> 
> - When releasing a lock, if it is in "slowpath state", then call
>   __ticket_unlock_kick() to kick the next vCPU in line awake.  If the
>   lock is no longer in contention, it also clears the slowpath flag.
> 
> The "slowpath state" is stored in the LSB of the lock's tail ticket.
> This has the effect of reducing the max number of CPUs by
> half (so, a "small ticket" can deal with 128 CPUs, and "large ticket"
> 32768).
> 
> For KVM, one hypercall is introduced in the hypervisor that allows a vcpu
> to kick another vcpu out of halt state.
> Blocking of the vcpu is done using halt() in the (lock_spinning) slowpath.
> 
> Overall, it results in a large reduction in code, it makes the native
> and virtualized cases closer, and it removes a layer of indirection
> around all the spinlock functions.
> 
> The fast path (taking an uncontended lock which isn't in "slowpath"
> state) is optimal, identical to the non-paravirtualized case.
> 
> The inner part of ticket lock code becomes:
> 	inc = xadd(&lock->tickets, inc);
> 	inc.tail &= ~TICKET_SLOWPATH_FLAG;
> 
> 	if (likely(inc.head == inc.tail))
> 		goto out;
> 	for (;;) {
> 		unsigned count = SPIN_THRESHOLD;
> 		do {
> 			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
> 				goto out;
> 			cpu_relax();
> 		} while (--count);
> 		__ticket_lock_spinning(lock, inc.tail);
> 	}
> out:	barrier();
> which results in:
> 	push   %rbp
> 	mov    %rsp,%rbp
> 
> 	mov    $0x200,%eax
> 	lock xadd %ax,(%rdi)
> 	movzbl %ah,%edx
> 	cmp    %al,%dl
> 	jne    1f	# Slowpath if lock in contention
> 
> 	pop    %rbp
> 	retq   
> 
> 	### SLOWPATH START
> 1:	and    $-2,%edx
> 	movzbl %dl,%esi
> 
> 2:	mov    $0x800,%eax
> 	jmp    4f
> 
> 3:	pause  
> 	sub    $0x1,%eax
> 	je     5f
> 
> 4:	movzbl (%rdi),%ecx
> 	cmp    %cl,%dl
> 	jne    3b
> 
> 	pop    %rbp
> 	retq   
> 
> 5:	callq  *__ticket_lock_spinning
> 	jmp    2b
> 	### SLOWPATH END
> 
> with CONFIG_PARAVIRT_SPINLOCKS=n, the code has changed slightly, where
> the fastpath case is straight through (taking the lock without
> contention), and the spin loop is out of line:
> 
> 	push   %rbp
> 	mov    %rsp,%rbp
> 
> 	mov    $0x100,%eax
> 	lock xadd %ax,(%rdi)
> 	movzbl %ah,%edx
> 	cmp    %al,%dl
> 	jne    1f
> 
> 	pop    %rbp
> 	retq   
> 
> 	### SLOWPATH START
> 1:	pause  
> 	movzbl (%rdi),%eax
> 	cmp    %dl,%al
> 	jne    1b
> 
> 	pop    %rbp
> 	retq   
> 	### SLOWPATH END
> 
> The unlock code is complicated by the need to both add to the lock's
> "head" and fetch the slowpath flag from "tail".  This version of the
> patch uses a locked add to do this, followed by a test to see if the
> slowflag is set.  The lock prefix acts as a full memory barrier, so we
> can be sure that other CPUs will have seen the unlock before we read
> the flag (without the barrier the read could be fetched from the
> store queue before it hits memory, which could result in a deadlock).
> 
> This is all unnecessary complication if you're not using PV ticket
> locks, so it also uses the jump-label machinery to use the standard
> "add"-based unlock in the non-PV case.
> 
> 	if (TICKET_SLOWPATH_FLAG &&
> 	     static_key_false(&paravirt_ticketlocks_enabled)) {
> 		arch_spinlock_t prev;
> 		prev = *lock;
> 		add_smp(&lock->tickets.head, TICKET_LOCK_INC);
> 
> 		/* add_smp() is a full mb() */
> 		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
> 			__ticket_unlock_slowpath(lock, prev);
> 	} else
> 		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
> which generates:
> 	push   %rbp
> 	mov    %rsp,%rbp
> 
> 	nop5	# replaced by 5-byte jmp 2f when PV enabled
> 
> 	# non-PV unlock
> 	addb   $0x2,(%rdi)
> 
> 1:	pop    %rbp
> 	retq   
> 
> ### PV unlock ###
> 2:	movzwl (%rdi),%esi	# Fetch prev
> 
> 	lock addb $0x2,(%rdi)	# Do unlock
> 
> 	testb  $0x1,0x1(%rdi)	# Test flag
> 	je     1b		# Finished if not set
> 
> ### Slow path ###
> 	add    $2,%sil		# Add "head" in old lock state
> 	mov    %esi,%edx
> 	and    $0xfe,%dh	# clear slowflag for comparison
> 	movzbl %dh,%eax
> 	cmp    %dl,%al		# If head == tail (uncontended)
> 	je     4f		# clear slowpath flag
> 
> 	# Kick next CPU waiting for lock
> 3:	movzbl %sil,%esi
> 	callq  *pv_lock_ops.kick
> 
> 	pop    %rbp
> 	retq   
> 
> 	# Lock no longer contended - clear slowflag
> 4:	mov    %esi,%eax
> 	lock cmpxchg %dx,(%rdi)	# cmpxchg to clear flag
> 	cmp    %si,%ax
> 	jne    3b		# If clear failed, then kick
> 
> 	pop    %rbp
> 	retq   
> 
> So when not using PV ticketlocks, the unlock sequence just has a
> 5-byte nop added to it, and the PV case is reasonably straightforward
> aside from requiring a "lock add".
> 
> TODO: 1) Remove CONFIG_PARAVIRT_SPINLOCK ?
>       2) Experiments on further optimization possibilities. (discussed in V6)
>       3) Use kvm_irq_delivery_to_apic() in kvm hypercall (suggested by Gleb)
>       4) Any cleanups for e.g. Xen/KVM common code for debugfs.
> 
> PS: TODOs are no blockers for the current series merge.
> 
> Results:
> =======
> various form of results based on V6 of the patch series are posted in following links
>  
>  https://lkml.org/lkml/2012/3/21/161
>  https://lkml.org/lkml/2012/3/21/198
> 
>  kvm results:
>  https://lkml.org/lkml/2012/3/23/50
>  https://lkml.org/lkml/2012/4/5/73
> 
> Benchmarking on the current set of patches will be posted soon.
> 
> Thoughts? Comments? Suggestions? It would be nice to see
> Acked-by/Reviewed-by/Tested-by for the patch series.
>  
> Jeremy Fitzhardinge (9):
>   x86/spinlock: Replace pv spinlocks with pv ticketlocks
>   x86/ticketlock: Collapse a layer of functions
>   xen: Defer spinlock setup until boot CPU setup
>   xen/pvticketlock: Xen implementation for PV ticket locks
>   xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv
>     ticketlocks
>   x86/pvticketlock: Use callee-save for lock_spinning
>   x86/pvticketlock: When paravirtualizing ticket locks, increment by 2
>   x86/ticketlock: Add slowpath logic
>   xen/pvticketlock: Allow interrupts to be enabled while blocking
> 
> Srivatsa Vaddagiri (3): 
>   Add a hypercall to KVM hypervisor to support pv-ticketlocks
>   Added configuration support to enable debug information for KVM Guests
>   Paravirtual ticketlock support for linux guests running on KVM hypervisor
> 
> Raghavendra K T (3):
>   x86/ticketlock: Don't inline _spin_unlock when using paravirt
>     spinlocks
>   Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration
>   Add documentation on Hypercalls and features used for PV spinlock
> 
> Andrew Jones (1):
>   Split out rate limiting from jump_label.h
> 
> Stefano Stabellini (1):
>  xen: Enable PV ticketlocks on HVM Xen
> ---
> PS: Had to trim down the recipient list because the LKML archive does not
> support lists > 20. Though many more people should have been in the To/CC list.
> 
> Ticketlock links:
> V7 : https://lkml.org/lkml/2012/4/19/335 
> V6 : https://lkml.org/lkml/2012/3/21/161
> 
> KVM patch links:
>  V6: https://lkml.org/lkml/2012/4/23/123
> 
>  V5 kernel changes:
>  https://lkml.org/lkml/2012/3/23/50
>  Qemu changes for V5:
>  http://lists.gnu.org/archive/html/qemu-devel/2012-03/msg04455.html 
> 
>  V4 kernel changes:
>  https://lkml.org/lkml/2012/1/14/66
>  Qemu changes for V4:
>  http://www.mail-archive.com/kvm@vger.kernel.org/msg66450.html
> 
>  V3 kernel Changes:
>  https://lkml.org/lkml/2011/11/30/62
>  Qemu patch for V3:
>  http://lists.gnu.org/archive/html/qemu-devel/2011-12/msg00397.html
> 
>  V2 kernel changes : 
>  https://lkml.org/lkml/2011/10/23/207
> 
>  Previous discussions : (posted by Srivatsa V).
>  https://lkml.org/lkml/2010/7/26/24
>  https://lkml.org/lkml/2011/1/19/212
> 
> Ticketlock change history:
> Changes in V7:
>  - Rebased patches to 3.4-rc3
>  - Added jumplabel split patch (originally from Andrew Jones rebased to
>     3.4-rc3)
>  - jumplabel changes from Ingo and Jason taken and now using static_key_*
>     instead of static_branch.
>  - using UNINLINE_SPIN_UNLOCK (which was split out as per a suggestion from Linus)
>  - This patch series is rebased on the debugfs patch (that should already be in
>     Xen/linux-next https://lkml.org/lkml/2012/3/23/51)
> 
> Changes in V6 posting: (Raghavendra K T)
>  - Rebased to linux-3.3-rc6.
>  - used function+enum in place of macro (better type checking) 
>  - use cmpxchg while resetting zero status for possible race
> 	[suggested by Dave Hansen for KVM patches]
> 
> KVM patch Change history:
> Changes in V6:
> - Rebased to 3.4-rc3
> - Removed debugfs changes patch which should now be in Xen/linux-next.
>   (https://lkml.org/lkml/2012/3/30/687)
> - Removed PV_UNHALT_MSR since currently we don't need guest communication,
>   and made pv_unhalt folded to GET_MP_STATE (Marcello, Avi[long back])
> - Take jumplabel changes from Ingo/Jason into use (static_key_slow_inc usage)
> - Added inline to spinlock_init in non PARAVIRT case
> - Move arch specific code to arch/x86 and add stubs to other archs (Marcello)
> - Added more comments on pv_unhalt usage etc (Marcello)
> 
> Changes in V5:
> - rebased to 3.3-rc6
> - added PV_UNHALT_MSR that would help in live migration (Avi)
> - removed PV_LOCK_KICK vcpu request and pv_unhalt flag (re)added.
> - Changed hypercall documentation (Alex).
> - mode_t changed to umode_t in debugfs.
> - MSR related documentation added.
> - rename PV_LOCK_KICK to PV_UNHALT. 
> - host and guest patches not mixed. (Marcelo, Alex)
> - kvm_kick_cpu now takes cpu so it can be used by flush_tlb_ipi_other 
>    paravirtualization (Nikunj)
> - coding style changes in variable declaration etc (Srikar)
> 
> Changes in V4:
> - rebased to 3.2.0 pre.
> - use APIC ID for kicking the vcpu and use kvm_apic_match_dest for matching (Avi)
> - fold vcpu->kicked flag into vcpu->requests (KVM_REQ_PVLOCK_KICK) and related 
>   changes for UNHALT path to make pv ticket spinlock migration friendly (Avi, Marcelo)
> - Added Documentation for CPUID, Hypercall (KVM_HC_KICK_CPU)
>   and capability (KVM_CAP_PVLOCK_KICK) (Avi)
> - Remove unneeded kvm_arch_vcpu_ioctl_set_mpstate call. (Marcello)
> - cumulative variable type changed (int ==> u32) in add_stat (Konrad)
> - remove unneeded kvm_guest_init for !CONFIG_KVM_GUEST case
> 
> Changes in V3:
> - rebased to 3.2-rc1
> - use halt() instead of a wait-for-kick hypercall.
> - modify kick hypercall to wake up a halted vcpu.
> - hook kvm_spinlock_init to smp_prepare_cpus call (moved the call out of head##.c).
> - fix the potential race when zero_stat is read.
> - export debugfs_create_32 and add documentation to API.
> - use static inline and enum instead of ADDSTAT macro. 
> - add barrier() after setting kick_vcpu.
> - empty static inline function for kvm_spinlock_init.
> - combine patches one and two to reduce overhead.
> - make KVM_DEBUGFS depend on DEBUGFS.
> - include debugfs header unconditionally.
> 
> Changes in V2:
> - rebased patches to -rc9
> - synchronization related changes based on Jeremy's changes 
>  (Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>) pointed out by
>  Stephan Diestelhorst <stephan.diestelhorst@amd.com>
> - enabling 32 bit guests
> - split patches into two more chunks
> 
>  Documentation/virtual/kvm/cpuid.txt      |    4 +
>  Documentation/virtual/kvm/hypercalls.txt |   60 +++++
>  arch/x86/Kconfig                         |   10 +
>  arch/x86/include/asm/kvm_host.h          |    4 +
>  arch/x86/include/asm/kvm_para.h          |   16 +-
>  arch/x86/include/asm/paravirt.h          |   32 +--
>  arch/x86/include/asm/paravirt_types.h    |   10 +-
>  arch/x86/include/asm/spinlock.h          |  128 +++++++----
>  arch/x86/include/asm/spinlock_types.h    |   16 +-
>  arch/x86/kernel/kvm.c                    |  256 ++++++++++++++++++++
>  arch/x86/kernel/paravirt-spinlocks.c     |   18 +-
>  arch/x86/kvm/cpuid.c                     |    3 +-
>  arch/x86/kvm/x86.c                       |   44 ++++-
>  arch/x86/xen/smp.c                       |    3 +-
>  arch/x86/xen/spinlock.c                  |  387 ++++++++++--------------------
>  include/linux/jump_label.h               |   26 +--
>  include/linux/jump_label_ratelimit.h     |   34 +++
>  include/linux/kvm_para.h                 |    1 +
>  include/linux/perf_event.h               |    1 +
>  kernel/jump_label.c                      |    1 +
>  20 files changed, 673 insertions(+), 381 deletions(-)

This is looking pretty good and complete now - any objections 
from anyone to trying this out in a separate x86 topic tree?

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07  8:29 ` [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Ingo Molnar
@ 2012-05-07  8:32   ` Avi Kivity
  2012-05-07 10:58     ` Raghavendra K T
  0 siblings, 1 reply; 53+ messages in thread
From: Avi Kivity @ 2012-05-07  8:32 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Raghavendra K T, Linus Torvalds, Andrew Morton,
	Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Srivatsa Vaddagiri, Virtualization, Xen Devel,
	linux-doc, KVM, Andi Kleen, Stefano Stabellini,
	Stephan Diestelhorst, LKML, Peter Zijlstra, Thomas Gleixner

On 05/07/2012 11:29 AM, Ingo Molnar wrote:
> This is looking pretty good and complete now - any objections 
> from anyone to trying this out in a separate x86 topic tree?

No objections, instead an

Acked-by: Avi Kivity <avi@redhat.com>

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07  8:32   ` Avi Kivity
@ 2012-05-07 10:58     ` Raghavendra K T
  2012-05-07 12:06       ` Avi Kivity
  0 siblings, 1 reply; 53+ messages in thread
From: Raghavendra K T @ 2012-05-07 10:58 UTC (permalink / raw)
  To: Avi Kivity, Ingo Molnar
  Cc: Linus Torvalds, Andrew Morton, Jeremy Fitzhardinge,
	Greg Kroah-Hartman, Konrad Rzeszutek Wilk, H. Peter Anvin,
	Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar, Attilio Rao,
	Srivatsa Vaddagiri, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

On 05/07/2012 02:02 PM, Avi Kivity wrote:
> On 05/07/2012 11:29 AM, Ingo Molnar wrote:
>> This is looking pretty good and complete now - any objections
>> from anyone to trying this out in a separate x86 topic tree?
>
> No objections, instead an
>
> Acked-by: Avi Kivity<avi@redhat.com>
>

Thank you.

Here is a benchmark result with the patches.

3 guests with 8VCPU, 8GB RAM, 1 used for kernbench
(kernbench -f -H -M -o 20) other for cpuhog (shell script while
true with an instruction)

unpinned scenario
1x: no hogs
2x: 8hogs in one guest
3x: 8hogs each in two guest

BASE: 3.4-rc4 vanilla with CONFIG_PARAVIRT_SPINLOCK=n
BASE+patch: 3.4-rc4 + debugfs + pv patches with CONFIG_PARAVIRT_SPINLOCK=y

Machine : IBM xSeries with Intel(R) Xeon(R) x5570 2.93GHz CPU (Non PLE) 
with 8 core , 64GB RAM

(Less is better. Below is time elapsed in sec for x86_64_defconfig (3+3 
runs)).

		 BASE                    BASE+patch            %improvement
		 mean (sd)               mean (sd)
case 1x:	 66.0566 (74.0304) 	 61.3233 (68.8299) 	7.16552
case 2x:	 1253.2 (1795.74) 	 131.606 (137.358) 	89.4984
case 3x:	 3431.04 (5297.26) 	 134.964 (149.861) 	96.0664


Will be working on further analysis with other benchmarks 
(pgbench/sysbench/ebizzy...) and further optimization.


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 10:58     ` Raghavendra K T
@ 2012-05-07 12:06       ` Avi Kivity
  2012-05-07 13:20         ` Raghavendra K T
  2012-05-13 17:59         ` Raghavendra K T
  0 siblings, 2 replies; 53+ messages in thread
From: Avi Kivity @ 2012-05-07 12:06 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: Ingo Molnar, Linus Torvalds, Andrew Morton, Jeremy Fitzhardinge,
	Greg Kroah-Hartman, Konrad Rzeszutek Wilk, H. Peter Anvin,
	Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar, Attilio Rao,
	Srivatsa Vaddagiri, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

On 05/07/2012 01:58 PM, Raghavendra K T wrote:
> On 05/07/2012 02:02 PM, Avi Kivity wrote:
>> On 05/07/2012 11:29 AM, Ingo Molnar wrote:
>>> This is looking pretty good and complete now - any objections
>>> from anyone to trying this out in a separate x86 topic tree?
>>
>> No objections, instead an
>>
>> Acked-by: Avi Kivity<avi@redhat.com>
>>
>
> Thank you.
>
> Here is a benchmark result with the patches.
>
> 3 guests with 8VCPU, 8GB RAM, 1 used for kernbench
> (kernbench -f -H -M -o 20) other for cpuhog (shell script while
> true with an instruction)
>
> unpinned scenario
> 1x: no hogs
> 2x: 8hogs in one guest
> 3x: 8hogs each in two guest
>
> BASE: 3.4-rc4 vanilla with CONFIG_PARAVIRT_SPINLOCK=n
> BASE+patch: 3.4-rc4 + debugfs + pv patches with
> CONFIG_PARAVIRT_SPINLOCK=y
>
> Machine : IBM xSeries with Intel(R) Xeon(R) x5570 2.93GHz CPU (Non
> PLE) with 8 core , 64GB RAM
>
> (Less is better. Below is time elapsed in sec for x86_64_defconfig
> (3+3 runs)).
>
>          BASE                    BASE+patch            %improvement
>          mean (sd)               mean (sd)
> case 1x:     66.0566 (74.0304)      61.3233 (68.8299)     7.16552
> case 2x:     1253.2 (1795.74)      131.606 (137.358)     89.4984
> case 3x:     3431.04 (5297.26)      134.964 (149.861)     96.0664
>

You're calculating the improvement incorrectly.  In the last case, it's
not 96%, rather it's 2400% (25x).  Similarly the second case is about
900% faster.

-- 
error compiling committee.c: too many arguments to function



* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 12:06       ` Avi Kivity
@ 2012-05-07 13:20         ` Raghavendra K T
  2012-05-07 13:22           ` Avi Kivity
  2012-05-13 17:59         ` Raghavendra K T
  1 sibling, 1 reply; 53+ messages in thread
From: Raghavendra K T @ 2012-05-07 13:20 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Ingo Molnar, Linus Torvalds, Andrew Morton, Jeremy Fitzhardinge,
	Greg Kroah-Hartman, Konrad Rzeszutek Wilk, H. Peter Anvin,
	Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar, Attilio Rao,
	Srivatsa Vaddagiri, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

On 05/07/2012 05:36 PM, Avi Kivity wrote:
> On 05/07/2012 01:58 PM, Raghavendra K T wrote:
>> On 05/07/2012 02:02 PM, Avi Kivity wrote:
>>> On 05/07/2012 11:29 AM, Ingo Molnar wrote:
>>>> This is looking pretty good and complete now - any objections
>>>> from anyone to trying this out in a separate x86 topic tree?
>>>
>>> No objections, instead an
>>>
>>> Acked-by: Avi Kivity<avi@redhat.com>
>>>
[...]
>>
>> (Less is better. Below is time elapsed in sec for x86_64_defconfig
>> (3+3 runs)).
>>
>>           BASE                    BASE+patch            %improvement
>>           mean (sd)               mean (sd)
>> case 1x:     66.0566 (74.0304)      61.3233 (68.8299)     7.16552
>> case 2x:     1253.2 (1795.74)      131.606 (137.358)     89.4984
>> case 3x:     3431.04 (5297.26)      134.964 (149.861)     96.0664
>>
>
> You're calculating the improvement incorrectly.  In the last case, it's
> not 96%, rather it's 2400% (25x).  Similarly the second case is about
> 900% faster.
>

You are right;
my %improvement was intended to read like:
1) base takes 100 sec ==> patch takes 93 sec
2) base takes 100 sec ==> patch takes 11 sec
3) base takes 100 sec ==> patch takes 4 sec

The above is more confusing (and incorrect!).

Better is what you suggested, which boils down to 10x and 25x
improvement in case 2 and case 3. And IMO this *really* conveys the
magnitude of the improvement with the patches.

I'll change the script to report it that way :).



* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 13:20         ` Raghavendra K T
@ 2012-05-07 13:22           ` Avi Kivity
  2012-05-07 13:38             ` Raghavendra K T
  0 siblings, 1 reply; 53+ messages in thread
From: Avi Kivity @ 2012-05-07 13:22 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: Ingo Molnar, Linus Torvalds, Andrew Morton, Jeremy Fitzhardinge,
	Greg Kroah-Hartman, Konrad Rzeszutek Wilk, H. Peter Anvin,
	Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar, Attilio Rao,
	Srivatsa Vaddagiri, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

On 05/07/2012 04:20 PM, Raghavendra K T wrote:
> On 05/07/2012 05:36 PM, Avi Kivity wrote:
>> On 05/07/2012 01:58 PM, Raghavendra K T wrote:
>>> On 05/07/2012 02:02 PM, Avi Kivity wrote:
>>>> On 05/07/2012 11:29 AM, Ingo Molnar wrote:
>>>>> This is looking pretty good and complete now - any objections
>>>>> from anyone to trying this out in a separate x86 topic tree?
>>>>
>>>> No objections, instead an
>>>>
>>>> Acked-by: Avi Kivity<avi@redhat.com>
>>>>
> [...]
>>>
>>> (Less is better. Below is time elapsed in sec for x86_64_defconfig
>>> (3+3 runs)).
>>>
>>>           BASE                    BASE+patch            %improvement
>>>           mean (sd)               mean (sd)
>>> case 1x:     66.0566 (74.0304)      61.3233 (68.8299)     7.16552
>>> case 2x:     1253.2 (1795.74)      131.606 (137.358)     89.4984
>>> case 3x:     3431.04 (5297.26)      134.964 (149.861)     96.0664
>>>
>>
>> You're calculating the improvement incorrectly.  In the last case, it's
>> not 96%, rather it's 2400% (25x).  Similarly the second case is about
>> 900% faster.
>>
>
> You are right,
> my %improvement was intended to be like
> if
> 1) base takes 100 sec ==> patch takes 93 sec
> 2) base takes 100 sec ==> patch takes 11 sec
> 3) base takes 100 sec ==> patch takes 4 sec
>
> The above is more confusing (and incorrect!).
>
> Better is what you told which boils to 10x and 25x improvement in case
> 2 and case 3. And IMO, this *really* gives the feeling of magnitude of
> improvement with patches.
>
> I ll change script to report that way :).
>

btw, this is on non-PLE hardware, right?  What are the numbers for PLE?

-- 
error compiling committee.c: too many arguments to function



* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 13:22           ` Avi Kivity
@ 2012-05-07 13:38             ` Raghavendra K T
  2012-05-07 13:46               ` Srivatsa Vaddagiri
  0 siblings, 1 reply; 53+ messages in thread
From: Raghavendra K T @ 2012-05-07 13:38 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Ingo Molnar, Linus Torvalds, Andrew Morton, Jeremy Fitzhardinge,
	Greg Kroah-Hartman, Konrad Rzeszutek Wilk, H. Peter Anvin,
	Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar, Attilio Rao,
	Srivatsa Vaddagiri, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

On 05/07/2012 06:52 PM, Avi Kivity wrote:
> On 05/07/2012 04:20 PM, Raghavendra K T wrote:
>> On 05/07/2012 05:36 PM, Avi Kivity wrote:
>>> On 05/07/2012 01:58 PM, Raghavendra K T wrote:
>>>> On 05/07/2012 02:02 PM, Avi Kivity wrote:
>>>>> On 05/07/2012 11:29 AM, Ingo Molnar wrote:
>>>>>> This is looking pretty good and complete now - any objections
>>>>>> from anyone to trying this out in a separate x86 topic tree?
>>>>>
>>>>> No objections, instead an
>>>>>
>>>>> Acked-by: Avi Kivity<avi@redhat.com>
>>>>>
>> [...]
>>>>
>>>> (Less is better. Below is time elapsed in sec for x86_64_defconfig
>>>> (3+3 runs)).
>>>>
>>>>            BASE                    BASE+patch            %improvement
>>>>            mean (sd)               mean (sd)
>>>> case 1x:     66.0566 (74.0304)      61.3233 (68.8299)     7.16552
>>>> case 2x:     1253.2 (1795.74)      131.606 (137.358)     89.4984
>>>> case 3x:     3431.04 (5297.26)      134.964 (149.861)     96.0664
>>>>
>>>
>>> You're calculating the improvement incorrectly.  In the last case, it's
>>> not 96%, rather it's 2400% (25x).  Similarly the second case is about
>>> 900% faster.
>>>
>>
>> You are right,
>> my %improvement was intended to be like
>> if
>> 1) base takes 100 sec ==>  patch takes 93 sec
>> 2) base takes 100 sec ==>  patch takes 11 sec
>> 3) base takes 100 sec ==>  patch takes 4 sec
>>
>> The above is more confusing (and incorrect!).
>>
>> Better is what you told which boils to 10x and 25x improvement in case
>> 2 and case 3. And IMO, this *really* gives the feeling of magnitude of
>> improvement with patches.
>>
>> I ll change script to report that way :).
>>
>
> btw, this is on non-PLE hardware, right?  What are the numbers for PLE?
>
Sure.
I'll get hold of a PLE machine and come up with the numbers soon, but
I expect the improvement to be around 1-3%, as it was in the last
version.



* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 13:38             ` Raghavendra K T
@ 2012-05-07 13:46               ` Srivatsa Vaddagiri
  2012-05-07 13:49                 ` Avi Kivity
  2012-05-07 13:56                 ` Raghavendra K T
  0 siblings, 2 replies; 53+ messages in thread
From: Srivatsa Vaddagiri @ 2012-05-07 13:46 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: Avi Kivity, Ingo Molnar, Linus Torvalds, Andrew Morton,
	Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

* Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com> [2012-05-07 19:08:51]:

> I 'll get hold of a PLE mc  and come up with the numbers soon. but I
> 'll expect the improvement around 1-3% as it was in last version.

Deferring preemption (when vcpu is holding lock) may give us better than 1-3% 
results on PLE hardware. Something worth trying IMHO.

- vatsa



* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 13:46               ` Srivatsa Vaddagiri
@ 2012-05-07 13:49                 ` Avi Kivity
  2012-05-07 13:53                   ` Raghavendra K T
                                     ` (2 more replies)
  2012-05-07 13:56                 ` Raghavendra K T
  1 sibling, 3 replies; 53+ messages in thread
From: Avi Kivity @ 2012-05-07 13:49 UTC (permalink / raw)
  To: Srivatsa Vaddagiri
  Cc: Raghavendra K T, Ingo Molnar, Linus Torvalds, Andrew Morton,
	Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

On 05/07/2012 04:46 PM, Srivatsa Vaddagiri wrote:
> * Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com> [2012-05-07 19:08:51]:
>
> > I 'll get hold of a PLE mc  and come up with the numbers soon. but I
> > 'll expect the improvement around 1-3% as it was in last version.
>
> Deferring preemption (when vcpu is holding lock) may give us better than 1-3% 
> results on PLE hardware. Something worth trying IMHO.

Is the improvement so low, because PLE is interfering with the patch, or
because PLE already does a good job?

-- 
error compiling committee.c: too many arguments to function



* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 13:49                 ` Avi Kivity
@ 2012-05-07 13:53                   ` Raghavendra K T
  2012-05-07 13:58                     ` Avi Kivity
  2012-05-07 13:55                   ` Srivatsa Vaddagiri
  2012-05-07 23:15                   ` Jeremy Fitzhardinge
  2 siblings, 1 reply; 53+ messages in thread
From: Raghavendra K T @ 2012-05-07 13:53 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Srivatsa Vaddagiri, Ingo Molnar, Linus Torvalds, Andrew Morton,
	Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

On 05/07/2012 07:19 PM, Avi Kivity wrote:
> On 05/07/2012 04:46 PM, Srivatsa Vaddagiri wrote:
>> * Raghavendra K T<raghavendra.kt@linux.vnet.ibm.com>  [2012-05-07 19:08:51]:
>>
>>> I 'll get hold of a PLE mc  and come up with the numbers soon. but I
>>> 'll expect the improvement around 1-3% as it was in last version.
>>
>> Deferring preemption (when vcpu is holding lock) may give us better than 1-3%
>> results on PLE hardware. Something worth trying IMHO.
>
> Is the improvement so low, because PLE is interfering with the patch, or
> because PLE already does a good job?
>

It is because PLE already does a good job (of not burning CPU). The
1-3% improvement is because the patchset at least knows who is next to
hold the lock, which PLE lacks.



* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 13:49                 ` Avi Kivity
  2012-05-07 13:53                   ` Raghavendra K T
@ 2012-05-07 13:55                   ` Srivatsa Vaddagiri
  2012-05-07 23:15                   ` Jeremy Fitzhardinge
  2 siblings, 0 replies; 53+ messages in thread
From: Srivatsa Vaddagiri @ 2012-05-07 13:55 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Raghavendra K T, Ingo Molnar, Linus Torvalds, Andrew Morton,
	Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

* Avi Kivity <avi@redhat.com> [2012-05-07 16:49:25]:

> > Deferring preemption (when vcpu is holding lock) may give us better than 1-3% 
> > results on PLE hardware. Something worth trying IMHO.
> 
> Is the improvement so low, because PLE is interfering with the patch, or
> because PLE already does a good job?

I think it's the latter (PLE already doing a good job).

- vatsa



* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 13:46               ` Srivatsa Vaddagiri
  2012-05-07 13:49                 ` Avi Kivity
@ 2012-05-07 13:56                 ` Raghavendra K T
  1 sibling, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-07 13:56 UTC (permalink / raw)
  To: Srivatsa Vaddagiri
  Cc: Avi Kivity, Ingo Molnar, Linus Torvalds, Andrew Morton,
	Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

On 05/07/2012 07:16 PM, Srivatsa Vaddagiri wrote:
> * Raghavendra K T<raghavendra.kt@linux.vnet.ibm.com>  [2012-05-07 19:08:51]:
>
>> I 'll get hold of a PLE mc  and come up with the numbers soon. but I
>> 'll expect the improvement around 1-3% as it was in last version.
>
> Deferring preemption (when vcpu is holding lock) may give us better than 1-3%
> results on PLE hardware. Something worth trying IMHO.
>

Yes, sure. I'll take this up, along with any further scalability
improvements possible.



* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 13:53                   ` Raghavendra K T
@ 2012-05-07 13:58                     ` Avi Kivity
  2012-05-07 14:47                       ` Raghavendra K T
  0 siblings, 1 reply; 53+ messages in thread
From: Avi Kivity @ 2012-05-07 13:58 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: Srivatsa Vaddagiri, Ingo Molnar, Linus Torvalds, Andrew Morton,
	Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

On 05/07/2012 04:53 PM, Raghavendra K T wrote:
>> Is the improvement so low, because PLE is interfering with the patch, or
>> because PLE already does a good job?
>>
>
>
> It is because PLE already does a good job (of not burning cpu). The
> 1-3% improvement is because, patchset knows atleast who is next to hold
> lock, which is lacking in PLE.
>

Not good.  Solving a problem in software that is already solved by
hardware?  It's okay if there are no costs involved, but here we're
introducing a new ABI that we'll have to maintain for a long time.

-- 
error compiling committee.c: too many arguments to function



* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 13:58                     ` Avi Kivity
@ 2012-05-07 14:47                       ` Raghavendra K T
  2012-05-07 14:52                         ` Avi Kivity
  0 siblings, 1 reply; 53+ messages in thread
From: Raghavendra K T @ 2012-05-07 14:47 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Srivatsa Vaddagiri, Ingo Molnar, Linus Torvalds, Andrew Morton,
	Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

On 05/07/2012 07:28 PM, Avi Kivity wrote:
> On 05/07/2012 04:53 PM, Raghavendra K T wrote:
>>> Is the improvement so low, because PLE is interfering with the patch, or
>>> because PLE already does a good job?
>>>
>>
>>
>> It is because PLE already does a good job (of not burning cpu). The
>> 1-3% improvement is because, patchset knows atleast who is next to hold
>> lock, which is lacking in PLE.
>>
>
> Not good.  Solving a problem in software that is already solved by
> hardware?  It's okay if there are no costs involved, but here we're
> introducing a new ABI that we'll have to maintain for a long time.
>

Hmm, agreed that being a step ahead of mighty hardware (with just an
improvement of 1-3%) is no good for the long term (where PLE is the
future).

Having said that, it is hard for me to resist saying:
the bottleneck is somewhere else on PLE machines, and IMHO the answer
would be the combination of paravirt-spinlock + pv-flush-tlb.

But I need to come up with good numbers to argue in favour of the claim.

PS: Nikunj had experimented that pv-flush-tlb + paravirt-spinlock is a
win on PLE, where either one alone could not prove the benefit.



* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 14:47                       ` Raghavendra K T
@ 2012-05-07 14:52                         ` Avi Kivity
  2012-05-07 14:54                           ` Avi Kivity
                                             ` (3 more replies)
  0 siblings, 4 replies; 53+ messages in thread
From: Avi Kivity @ 2012-05-07 14:52 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: Srivatsa Vaddagiri, Ingo Molnar, Linus Torvalds, Andrew Morton,
	Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

On 05/07/2012 05:47 PM, Raghavendra K T wrote:
>> Not good.  Solving a problem in software that is already solved by
>> hardware?  It's okay if there are no costs involved, but here we're
>> introducing a new ABI that we'll have to maintain for a long time.
>>
>
>
> Hmm agree that being a step ahead of mighty hardware (and just an
> improvement of 1-3%) is no good for long term (where PLE is future).
>

PLE is the present, not the future.  It was introduced on later Nehalems
and is present on all Westmeres.  Two more processor generations have
passed meanwhile.  The AMD equivalent was also introduced around that
timeframe.

> Having said that, it is hard for me to resist saying :
>  bottleneck is somewhere else on PLE m/c and IMHO answer would be
> combination of paravirt-spinlock + pv-flush-tb.
>
> But I need to come up with good number to argue in favour of the claim.
>
> PS: Nikunj had experimented that pv-flush tlb + paravirt-spinlock is a
> win on PLE where only one of them alone could not prove the benefit.
>

I'd like to see those numbers, then.

Ingo, please hold on the kvm-specific patches, meanwhile.

-- 
error compiling committee.c: too many arguments to function



* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 14:52                         ` Avi Kivity
@ 2012-05-07 14:54                           ` Avi Kivity
  2012-05-07 17:25                           ` Ingo Molnar
                                             ` (2 subsequent siblings)
  3 siblings, 0 replies; 53+ messages in thread
From: Avi Kivity @ 2012-05-07 14:54 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: Srivatsa Vaddagiri, Ingo Molnar, Linus Torvalds, Andrew Morton,
	Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

On 05/07/2012 05:52 PM, Avi Kivity wrote:
> > Having said that, it is hard for me to resist saying :
> >  bottleneck is somewhere else on PLE m/c and IMHO answer would be
> > combination of paravirt-spinlock + pv-flush-tb.
> >
> > But I need to come up with good number to argue in favour of the claim.
> >
> > PS: Nikunj had experimented that pv-flush tlb + paravirt-spinlock is a
> > win on PLE where only one of them alone could not prove the benefit.
> >
>
> I'd like to see those numbers, then.
>

Note: it's probably best to try very wide guests, where the overhead of
iterating on all vcpus begins to show.

-- 
error compiling committee.c: too many arguments to function



* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 14:52                         ` Avi Kivity
  2012-05-07 14:54                           ` Avi Kivity
@ 2012-05-07 17:25                           ` Ingo Molnar
  2012-05-07 20:42                             ` Thomas Gleixner
  2012-05-15 11:26                             ` [Xen-devel] " Jan Beulich
  2012-05-08  5:25                           ` Raghavendra K T
  2012-05-13 18:45                           ` Raghavendra K T
  3 siblings, 2 replies; 53+ messages in thread
From: Ingo Molnar @ 2012-05-07 17:25 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Raghavendra K T, Srivatsa Vaddagiri, Linus Torvalds,
	Andrew Morton, Jeremy Fitzhardinge, Greg Kroah-Hartman,
	Konrad Rzeszutek Wilk, H. Peter Anvin, Marcelo Tosatti, X86,
	Gleb Natapov, Ingo Molnar, Attilio Rao, Virtualization,
	Xen Devel, linux-doc, KVM, Andi Kleen, Stefano Stabellini,
	Stephan Diestelhorst, LKML, Peter Zijlstra, Thomas Gleixner


* Avi Kivity <avi@redhat.com> wrote:

> > PS: Nikunj had experimented that pv-flush tlb + 
> > paravirt-spinlock is a win on PLE where only one of them 
> > alone could not prove the benefit.
> 
> I'd like to see those numbers, then.
> 
> Ingo, please hold on the kvm-specific patches, meanwhile.

I'll hold off on the whole thing - frankly, we don't want this 
kind of Xen-only complexity. If KVM can make use of PLE then Xen 
ought to be able to do it as well.

If both Xen and KVM makes good use of it then that's a different 
matter.

Thanks,

	Ingo


* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 17:25                           ` Ingo Molnar
@ 2012-05-07 20:42                             ` Thomas Gleixner
  2012-05-08  6:46                               ` Nikunj A Dadhania
  2012-05-15 11:26                             ` [Xen-devel] " Jan Beulich
  1 sibling, 1 reply; 53+ messages in thread
From: Thomas Gleixner @ 2012-05-07 20:42 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Avi Kivity, Raghavendra K T, Srivatsa Vaddagiri, Linus Torvalds,
	Andrew Morton, Jeremy Fitzhardinge, Greg Kroah-Hartman,
	Konrad Rzeszutek Wilk, H. Peter Anvin, Marcelo Tosatti, X86,
	Gleb Natapov, Ingo Molnar, Attilio Rao, Virtualization,
	Xen Devel, linux-doc, KVM, Andi Kleen, Stefano Stabellini,
	Stephan Diestelhorst, LKML, Peter Zijlstra

On Mon, 7 May 2012, Ingo Molnar wrote:
> * Avi Kivity <avi@redhat.com> wrote:
> 
> > > PS: Nikunj had experimented that pv-flush tlb + 
> > > paravirt-spinlock is a win on PLE where only one of them 
> > > alone could not prove the benefit.
> > 
> > I'd like to see those numbers, then.
> > 
> > Ingo, please hold on the kvm-specific patches, meanwhile.
> 
> I'll hold off on the whole thing - frankly, we don't want this 
> kind of Xen-only complexity. If KVM can make use of PLE then Xen 
> ought to be able to do it as well.
> 
> If both Xen and KVM makes good use of it then that's a different 
> matter.

Aside of that, it's kinda strange that a dude named "Nikunj" is
referenced in the argument chain, but I can't find him on the CC list.

Thanks,

	tglx


* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 13:49                 ` Avi Kivity
  2012-05-07 13:53                   ` Raghavendra K T
  2012-05-07 13:55                   ` Srivatsa Vaddagiri
@ 2012-05-07 23:15                   ` Jeremy Fitzhardinge
  2012-05-08  1:13                     ` Raghavendra K T
  2012-05-08  9:08                     ` Avi Kivity
  2 siblings, 2 replies; 53+ messages in thread
From: Jeremy Fitzhardinge @ 2012-05-07 23:15 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Srivatsa Vaddagiri, Raghavendra K T, Ingo Molnar, Linus Torvalds,
	Andrew Morton, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

On 05/07/2012 06:49 AM, Avi Kivity wrote:
> On 05/07/2012 04:46 PM, Srivatsa Vaddagiri wrote:
>> * Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com> [2012-05-07 19:08:51]:
>>
>>> I 'll get hold of a PLE mc  and come up with the numbers soon. but I
>>> 'll expect the improvement around 1-3% as it was in last version.
>> Deferring preemption (when vcpu is holding lock) may give us better than 1-3% 
>> results on PLE hardware. Something worth trying IMHO.
> Is the improvement so low, because PLE is interfering with the patch, or
> because PLE already does a good job?

How does PLE help with ticket scheduling on unlock?  I thought it would
just help with the actual spin loops.

    J


* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 23:15                   ` Jeremy Fitzhardinge
@ 2012-05-08  1:13                     ` Raghavendra K T
  2012-05-08  9:08                     ` Avi Kivity
  1 sibling, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-08  1:13 UTC (permalink / raw)
  To: Jeremy Fitzhardinge, Avi Kivity
  Cc: Srivatsa Vaddagiri, Ingo Molnar, Linus Torvalds, Andrew Morton,
	Greg Kroah-Hartman, Konrad Rzeszutek Wilk, H. Peter Anvin,
	Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar, Attilio Rao,
	Virtualization, Xen Devel, linux-doc, KVM, Andi Kleen,
	Stefano Stabellini, Stephan Diestelhorst, LKML, Peter Zijlstra,
	Thomas Gleixner

On 05/08/2012 04:45 AM, Jeremy Fitzhardinge wrote:
> On 05/07/2012 06:49 AM, Avi Kivity wrote:
>> On 05/07/2012 04:46 PM, Srivatsa Vaddagiri wrote:
>>> * Raghavendra K T<raghavendra.kt@linux.vnet.ibm.com>  [2012-05-07 19:08:51]:
>>>
>>>> I 'll get hold of a PLE mc  and come up with the numbers soon. but I
>>>> 'll expect the improvement around 1-3% as it was in last version.
>>> Deferring preemption (when vcpu is holding lock) may give us better than 1-3%
>>> results on PLE hardware. Something worth trying IMHO.
>> Is the improvement so low, because PLE is interfering with the patch, or
>> because PLE already does a good job?
>
> How does PLE help with ticket scheduling on unlock?  I thought it would
> just help with the actual spin loops.

Hmm. This strikes a chord with me. I think I should replace the
"while 1" hog with some *real job* to measure the over-commit case. I
hope to see greater improvements because of the fairness and scheduling
of the patch set.

Maybe all along I was measuring something equal to the 1x case.



* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 14:52                         ` Avi Kivity
  2012-05-07 14:54                           ` Avi Kivity
  2012-05-07 17:25                           ` Ingo Molnar
@ 2012-05-08  5:25                           ` Raghavendra K T
  2012-05-13 18:45                           ` Raghavendra K T
  3 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-08  5:25 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Srivatsa Vaddagiri, Ingo Molnar, Linus Torvalds, Andrew Morton,
	Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

On 05/07/2012 08:22 PM, Avi Kivity wrote:
> On 05/07/2012 05:47 PM, Raghavendra K T wrote:
>>> Not good.  Solving a problem in software that is already solved by
>>> hardware?  It's okay if there are no costs involved, but here we're
>>> introducing a new ABI that we'll have to maintain for a long time.
>>>
>>
>>
>> Hmm agree that being a step ahead of mighty hardware (and just an
>> improvement of 1-3%) is no good for long term (where PLE is future).
>>
>
> PLE is the present, not the future.  It was introduced on later Nehalems
> and is present on all Westmeres.  Two more processor generations have
> passed meanwhile.  The AMD equivalent was also introduced around that
> timeframe.
>
>> Having said that, it is hard for me to resist saying :
>>   bottleneck is somewhere else on PLE m/c and IMHO answer would be
>> combination of paravirt-spinlock + pv-flush-tb.
>>
>> But I need to come up with good number to argue in favour of the claim.
>>
>> PS: Nikunj had experimented that pv-flush tlb + paravirt-spinlock is a
>> win on PLE where only one of them alone could not prove the benefit.
>>
>
> I'd like to see those numbers, then.
>
> Ingo, please hold on the kvm-specific patches, meanwhile.
>


Hmm. I think I got the facts wrong when saying 1-3% improvement on PLE.

Going by what I had posted in https://lkml.org/lkml/2012/4/5/73 (with
the correct calculation):

         BASE                    BASE+patch           %improvement
         mean (sd)               mean (sd)
   1x    70.475 (85.6979)        63.5033 (72.7041)    15.7
   2x    110.971 (132.829)       105.099 (128.738)     5.56
   3x    150.265 (184.766)       138.341 (172.69)      8.62

It was around 12% with the optimization patch posted separately with
that (that one needs more experiments, though).

But anyway, I will come up with results for the current patch series.



* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 20:42                             ` Thomas Gleixner
@ 2012-05-08  6:46                               ` Nikunj A Dadhania
  0 siblings, 0 replies; 53+ messages in thread
From: Nikunj A Dadhania @ 2012-05-08  6:46 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar
  Cc: Avi Kivity, Raghavendra K T, Srivatsa Vaddagiri, Linus Torvalds,
	Andrew Morton, Jeremy Fitzhardinge, Greg Kroah-Hartman,
	Konrad Rzeszutek Wilk, H. Peter Anvin, Marcelo Tosatti, X86,
	Gleb Natapov, Ingo Molnar, Attilio Rao, Virtualization,
	Xen Devel, linux-doc, KVM, Andi Kleen, Stefano Stabellini,
	Stephan Diestelhorst, LKML, Peter Zijlstra

On Mon, 7 May 2012 22:42:30 +0200 (CEST), Thomas Gleixner <tglx@linutronix.de> wrote:
> On Mon, 7 May 2012, Ingo Molnar wrote:
> > * Avi Kivity <avi@redhat.com> wrote:
> > 
> > > > PS: Nikunj had experimented that pv-flush tlb + 
> > > > paravirt-spinlock is a win on PLE where only one of them 
> > > > alone could not prove the benefit.
> > > 
I do not have PLE numbers yet for pv-flush and pv-spinlock.

I have seen that on non-PLE, with the pv-flush and pv-spinlock patches,
kernbench, ebizzy, specjbb, hackbench and dbench all improved.

I am currently chasing a race on the pv-flush path; it is causing
file-system corruption. I will post these numbers along with my v2 post.

> > > I'd like to see those numbers, then.
> > > 
> > > Ingo, please hold on the kvm-specific patches, meanwhile.
> > 
> > I'll hold off on the whole thing - frankly, we don't want this 
> > kind of Xen-only complexity. If KVM can make use of PLE then Xen 
> > ought to be able to do it as well.
> > 
> > If both Xen and KVM makes good use of it then that's a different 
> > matter.
> 
> Aside of that, it's kinda strange that a dude named "Nikunj" is
> referenced in the argument chain, but I can't find him on the CC list.
> 
/me waves my hand

Regards
Nikunj


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 23:15                   ` Jeremy Fitzhardinge
  2012-05-08  1:13                     ` Raghavendra K T
@ 2012-05-08  9:08                     ` Avi Kivity
  1 sibling, 0 replies; 53+ messages in thread
From: Avi Kivity @ 2012-05-08  9:08 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Srivatsa Vaddagiri, Raghavendra K T, Ingo Molnar, Linus Torvalds,
	Andrew Morton, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

On 05/08/2012 02:15 AM, Jeremy Fitzhardinge wrote:
> On 05/07/2012 06:49 AM, Avi Kivity wrote:
> > On 05/07/2012 04:46 PM, Srivatsa Vaddagiri wrote:
> >> * Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com> [2012-05-07 19:08:51]:
> >>
> >>> I 'll get hold of a PLE mc  and come up with the numbers soon. but I
> >>> 'll expect the improvement around 1-3% as it was in last version.
> >> Deferring preemption (when vcpu is holding lock) may give us better than 1-3% 
> >> results on PLE hardware. Something worth trying IMHO.
> > Is the improvement so low, because PLE is interfering with the patch, or
> > because PLE already does a good job?
>
> How does PLE help with ticket scheduling on unlock?  I thought it would
> just help with the actual spin loops.

PLE yields to a random vcpu, hoping it is the lock holder.  This
patchset wakes up the right vcpu.  For small vcpu counts the difference
is a few bad wakeups (and even a bad wakeup sometimes works, since it
can put the spinner to sleep for a bit).  I expect that large vcpu
counts would show a greater difference.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 12:06       ` Avi Kivity
  2012-05-07 13:20         ` Raghavendra K T
@ 2012-05-13 17:59         ` Raghavendra K T
  1 sibling, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-13 17:59 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Ingo Molnar, Linus Torvalds, Andrew Morton, Jeremy Fitzhardinge,
	Greg Kroah-Hartman, Konrad Rzeszutek Wilk, H. Peter Anvin,
	Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar, Attilio Rao,
	Srivatsa Vaddagiri, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

On 05/07/2012 05:36 PM, Avi Kivity wrote:
> On 05/07/2012 01:58 PM, Raghavendra K T wrote:
>> On 05/07/2012 02:02 PM, Avi Kivity wrote:
>>> On 05/07/2012 11:29 AM, Ingo Molnar wrote:
>> (Less is better. Below is time elapsed in sec for x86_64_defconfig
>> (3+3 runs)).
>>
>>           BASE                    BASE+patch            %improvement
>>           mean (sd)               mean (sd)
>> case 1x:     66.0566 (74.0304)      61.3233 (68.8299)     7.16552
>> case 2x:     1253.2 (1795.74)      131.606 (137.358)     89.4984
>> case 3x:     3431.04 (5297.26)      134.964 (149.861)     96.0664
>>
>
> You're calculating the improvement incorrectly.  In the last case, it's
> not 96%, rather it's 2400% (25x).  Similarly the second case is about
> 900% faster.
>

The speedup calculation is clear.

I think my confusion was mostly due to the two types of benchmarks.

I always did

|(patched - base)| * 100 / base

So, for
(1) lower-is-better benchmarks, the improvement calculation would be

|(patched - base)| * 100 / patched

e.g. for kernbench,

suppose base    = 150 sec
        patched = 100 sec
improvement = 50% w.r.t the patched kernel (equivalently, the patched
time is 33% below base)


(2) for higher-is-better benchmarks, the improvement calculation
would be

|(patched - base)| * 100 / base

e.g. for pgbench/ebizzy...

     base    = 100 tps (transactions per sec)
     patched = 150 tps

improvement = 50% w.r.t the base kernel (equivalently, base is 33%
below patched)


Is this what is generally done? I just wanted to be on the same page
before publishing benchmark results other than kernbench.


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 14:52                         ` Avi Kivity
                                             ` (2 preceding siblings ...)
  2012-05-08  5:25                           ` Raghavendra K T
@ 2012-05-13 18:45                           ` Raghavendra K T
  2012-05-14  4:57                             ` Nikunj A Dadhania
                                               ` (2 more replies)
  3 siblings, 3 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-13 18:45 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Srivatsa Vaddagiri, Ingo Molnar, Linus Torvalds, Andrew Morton,
	Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner, Nikunj A. Dadhania

On 05/07/2012 08:22 PM, Avi Kivity wrote:

I could not come up with pv-flush results (also, Nikunj had clarified
that his results were on non-PLE hardware).

> I'd like to see those numbers, then.
>
> Ingo, please hold on the kvm-specific patches, meanwhile.
>

3 guests with 8GB RAM each: 1 used for kernbench
(kernbench -f -H -M -o 20), the others for cpu hogs (a shell script
running hackbench in a "while true" loop).

1x: no hogs
2x: 8 hogs in one guest
3x: 8 hogs each in two guests

kernbench on PLE:
Machine : IBM xSeries with Intel(R) Xeon(R) X7560 2.27GHz CPU with 32
cores, 8 online cpus and 4*64GB RAM.

The average is taken over 4 iterations with 3 runs each (4*3=12), and
the stdev is calculated over the mean reported in each run.


A): 8 vcpu guest

                  BASE                    BASE+patch             %improvement w.r.t
                  mean (sd)               mean (sd)              patched kernel time
case 1*1x:	61.7075  (1.17872)	60.93     (1.475625)    1.27605
case 1*2x:	107.2125 (1.3821349)	97.506675 (1.3461878)   9.95401
case 1*3x:	144.3515 (1.8203927)	138.9525  (0.58309319)  3.8855


B): 16 vcpu guest
                  BASE                    BASE+patch             %improvement w.r.t
                  mean (sd)               mean (sd)              patched kernel time
case 2*1x:	70.524   (1.5941395)	69.68866  (1.9392529)   1.19867
case 2*2x:	133.0738 (1.4558653)	124.8568  (1.4544986)   6.58114
case 2*3x:	206.0094 (1.3437359)	181.4712  (2.9134116)   13.5218

C): 32 vcpu guest
                  BASE                    BASE+patch             %improvement w.r.t
                  mean (sd)               mean (sd)              patched kernel time
case 4*1x:	100.61046 (2.7603485)	 85.48734  (2.6035035)  17.6905

It seems that while we do not see any improvement in the low-contention
case, the benefit becomes evident with overcommit and large guests. I am
continuing the analysis with other benchmarks (now with pgbench, to
check whether it shows acceptable improvement/degradation in the
low-contention case).

Avi,
Can the patch series go ahead for inclusion into the tree, for the
following reasons:

The patch series brings fairness with ticketlocks (and hence
predictability, since under contention a vcpu trying to acquire the
lock is sure to get its turn within at most the total number of vcpus
contending for the lock), which is very much desired irrespective of
the small benefit/degradation (if any) in low-contention scenarios.

Of course, ticketlocks had the undesirable side effect of aggravating
the lock-holder preemption (LHP) problem, and the series addresses that
by sleeping in the hypervisor instead of burning cpu time, which
improves scheduling.

Finally, a less famous point: it brings an almost PLE-equivalent
capability to all non-PLE hardware (TBH, I always preferred compiling
my experimental kernels in my pv guest, which saves more than 30
minutes for each run).

It would be nice to see results from anybody who benefited from, or
suffered with, the patchset.


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-13 18:45                           ` Raghavendra K T
@ 2012-05-14  4:57                             ` Nikunj A Dadhania
  2012-05-14  9:01                               ` Raghavendra K T
  2012-05-14  7:38                             ` Jeremy Fitzhardinge
  2012-05-16  3:19                             ` Raghavendra K T
  2 siblings, 1 reply; 53+ messages in thread
From: Nikunj A Dadhania @ 2012-05-14  4:57 UTC (permalink / raw)
  To: Raghavendra K T, Avi Kivity
  Cc: Srivatsa Vaddagiri, Ingo Molnar, Linus Torvalds, Andrew Morton,
	Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner

On Mon, 14 May 2012 00:15:30 +0530, Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com> wrote:
> On 05/07/2012 08:22 PM, Avi Kivity wrote:
> 
> I could not come with pv-flush results (also Nikunj had clarified that
> the result was on NOn PLE
> 
Did you see any issues on PLE?

Regards,
Nikunj


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-13 18:45                           ` Raghavendra K T
  2012-05-14  4:57                             ` Nikunj A Dadhania
@ 2012-05-14  7:38                             ` Jeremy Fitzhardinge
  2012-05-14  8:11                               ` Raghavendra K T
  2012-05-16  3:19                             ` Raghavendra K T
  2 siblings, 1 reply; 53+ messages in thread
From: Jeremy Fitzhardinge @ 2012-05-14  7:38 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: Avi Kivity, Srivatsa Vaddagiri, Ingo Molnar, Linus Torvalds,
	Andrew Morton, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner, Nikunj A. Dadhania

On 05/13/2012 11:45 AM, Raghavendra K T wrote:
> On 05/07/2012 08:22 PM, Avi Kivity wrote:
>
> I could not come with pv-flush results (also Nikunj had clarified that
> the result was on NOn PLE
>
>> I'd like to see those numbers, then.
>>
>> Ingo, please hold on the kvm-specific patches, meanwhile.
>>
>
> 3 guests 8GB RAM, 1 used for kernbench
> (kernbench -f -H -M -o 20) other for cpuhog (shell script with  while
> true do hackbench)
>
> 1x: no hogs
> 2x: 8hogs in one guest
> 3x: 8hogs each in two guest
>
> kernbench on PLE:
> Machine : IBM xSeries with Intel(R) Xeon(R)  X7560 2.27GHz CPU with 32
> core, with 8 online cpus and 4*64GB RAM.
>
> The average is taken over 4 iterations with 3 run each (4*3=12). and
> stdev is calculated over mean reported in each run.
>
>
> A): 8 vcpu guest
>
>                  BASE                    BASE+patch %improvement w.r.t
>                  mean (sd)               mean (sd)             
> patched kernel time
> case 1*1x:    61.7075  (1.17872)    60.93     (1.475625)    1.27605
> case 1*2x:    107.2125 (1.3821349)    97.506675 (1.3461878)   9.95401
> case 1*3x:    144.3515 (1.8203927)    138.9525  (0.58309319)  3.8855
>
>
> B): 16 vcpu guest
>                  BASE                    BASE+patch %improvement w.r.t
>                  mean (sd)               mean (sd)             
> patched kernel time
> case 2*1x:    70.524   (1.5941395)    69.68866  (1.9392529)   1.19867
> case 2*2x:    133.0738 (1.4558653)    124.8568  (1.4544986)   6.58114
> case 2*3x:    206.0094 (1.3437359)    181.4712  (2.9134116)   13.5218
>
> B): 32 vcpu guest
>                  BASE                    BASE+patch %improvementw.r.t
>                  mean (sd)               mean (sd)             
> patched kernel time
> case 4*1x:    100.61046 (2.7603485)     85.48734  (2.6035035)  17.6905

What does the "4*1x" notation mean? Do these workloads have overcommit
of the PCPU resources?

When I measured it, even quite small amounts of overcommit lead to large
performance drops with non-pv ticket locks (on the order of 10%
improvements when there were 5 busy VCPUs on a 4 cpu system).  I never
tested it on larger machines, but I guess that represents around 25%
overcommit, or 40 busy VCPUs on a 32-PCPU system.

    J

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-14  7:38                             ` Jeremy Fitzhardinge
@ 2012-05-14  8:11                               ` Raghavendra K T
  0 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-14  8:11 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Avi Kivity, Srivatsa Vaddagiri, Ingo Molnar, Linus Torvalds,
	Andrew Morton, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner, Nikunj A. Dadhania

On 05/14/2012 01:08 PM, Jeremy Fitzhardinge wrote:
> On 05/13/2012 11:45 AM, Raghavendra K T wrote:
>> On 05/07/2012 08:22 PM, Avi Kivity wrote:
>>
>> I could not come with pv-flush results (also Nikunj had clarified that
>> the result was on NOn PLE
>>
>>> I'd like to see those numbers, then.
>>>
>>> Ingo, please hold on the kvm-specific patches, meanwhile.
>>>
>>
>> 3 guests 8GB RAM, 1 used for kernbench
>> (kernbench -f -H -M -o 20) other for cpuhog (shell script with  while
>> true do hackbench)
>>
>> 1x: no hogs
>> 2x: 8hogs in one guest
>> 3x: 8hogs each in two guest
>>
>> kernbench on PLE:
>> Machine : IBM xSeries with Intel(R) Xeon(R)  X7560 2.27GHz CPU with 32
>> core, with 8 online cpus and 4*64GB RAM.
>>
>> The average is taken over 4 iterations with 3 run each (4*3=12). and
>> stdev is calculated over mean reported in each run.
>>
>>
>> A): 8 vcpu guest
>>
>>                   BASE                    BASE+patch %improvement w.r.t
>>                   mean (sd)               mean (sd)
>> patched kernel time
>> case 1*1x:    61.7075  (1.17872)    60.93     (1.475625)    1.27605
>> case 1*2x:    107.2125 (1.3821349)    97.506675 (1.3461878)   9.95401
>> case 1*3x:    144.3515 (1.8203927)    138.9525  (0.58309319)  3.8855
>>
>>
>> B): 16 vcpu guest
>>                   BASE                    BASE+patch %improvement w.r.t
>>                   mean (sd)               mean (sd)
>> patched kernel time
>> case 2*1x:    70.524   (1.5941395)    69.68866  (1.9392529)   1.19867
>> case 2*2x:    133.0738 (1.4558653)    124.8568  (1.4544986)   6.58114
>> case 2*3x:    206.0094 (1.3437359)    181.4712  (2.9134116)   13.5218
>>
>> B): 32 vcpu guest
>>                   BASE                    BASE+patch %improvementw.r.t
>>                   mean (sd)               mean (sd)
>> patched kernel time
>> case 4*1x:    100.61046 (2.7603485)     85.48734  (2.6035035)  17.6905
>
> What does the "4*1x" notation mean? Do these workloads have overcommit
> of the PCPU resources?
>
> When I measured it, even quite small amounts of overcommit lead to large
> performance drops with non-pv ticket locks (on the order of 10%
> improvements when there were 5 busy VCPUs on a 4 cpu system).  I never
> tested it on larger machines, but I guess that represents around 25%
> overcommit, or 40 busy VCPUs on a 32-PCPU system.

All the above measurements are on a PLE machine. It is a single
32-vcpu guest on an 8-pcpu host.

(PS: One problem I saw in my kernbench run itself is that the number of
threads spawned = 20 instead of 2 * number of vcpus. I will correct
that during the next measurement.)

"even quite small amounts of overcommit lead to large performance drops
with non-pv ticket locks":

This is very much true on non PLE machine. probably compilation takes
even a day vs just one hour. ( with just 1:3x overcommit I had got 25 x
speedup).


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-14  4:57                             ` Nikunj A Dadhania
@ 2012-05-14  9:01                               ` Raghavendra K T
  0 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-14  9:01 UTC (permalink / raw)
  To: Nikunj A Dadhania
  Cc: Avi Kivity, Srivatsa Vaddagiri, Ingo Molnar, Linus Torvalds,
	Andrew Morton, Jeremy Fitzhardinge, Greg Kroah-Hartman,
	Konrad Rzeszutek Wilk, H. Peter Anvin, Marcelo Tosatti, X86,
	Gleb Natapov, Ingo Molnar, Attilio Rao, Virtualization,
	Xen Devel, linux-doc, KVM, Andi Kleen, Stefano Stabellini,
	Stephan Diestelhorst, LKML, Peter Zijlstra, Thomas Gleixner

On 05/14/2012 10:27 AM, Nikunj A Dadhania wrote:
> On Mon, 14 May 2012 00:15:30 +0530, Raghavendra K T<raghavendra.kt@linux.vnet.ibm.com>  wrote:
>> On 05/07/2012 08:22 PM, Avi Kivity wrote:
>>
>> I could not come with pv-flush results (also Nikunj had clarified that
>> the result was on NOn PLE
>>
> Did you see any issues on PLE?
>

No, I did not see issues in the setup, but I have not had time to check
that out yet.


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Xen-devel] [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-07 17:25                           ` Ingo Molnar
  2012-05-07 20:42                             ` Thomas Gleixner
@ 2012-05-15 11:26                             ` Jan Beulich
  1 sibling, 0 replies; 53+ messages in thread
From: Jan Beulich @ 2012-05-15 11:26 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Stephan Diestelhorst, Peter Zijlstra, Attilio Rao,
	Stefano Stabellini, Andi Kleen, Jeremy Fitzhardinge, X86,
	Thomas Gleixner, Andrew Morton, Linus Torvalds, Raghavendra K T,
	Srivatsa Vaddagiri, Virtualization, Xen Devel,
	Konrad Rzeszutek Wilk, Avi Kivity, Gleb Natapov, Ingo Molnar,
	Marcelo Tosatti, Greg Kroah-Hartman, KVM, linux-doc, LKML,
	H. Peter Anvin

>>> On 07.05.12 at 19:25, Ingo Molnar <mingo@kernel.org> wrote:

(apologies for the late reply, the mail just now made it to my inbox
via xen-devel)

> I'll hold off on the whole thing - frankly, we don't want this 
> kind of Xen-only complexity. If KVM can make use of PLE then Xen 
> ought to be able to do it as well.

It does - for fully virtualized guests. For para-virtualized ones,
it can't (as the hardware feature is an extension to VMX/SVM).

> If both Xen and KVM makes good use of it then that's a different 
> matter.

I saw in a later reply that you're now tending towards trying it
out at least - thanks.

Jan


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-13 18:45                           ` Raghavendra K T
  2012-05-14  4:57                             ` Nikunj A Dadhania
  2012-05-14  7:38                             ` Jeremy Fitzhardinge
@ 2012-05-16  3:19                             ` Raghavendra K T
  2012-05-30 11:26                               ` Raghavendra K T
  2 siblings, 1 reply; 53+ messages in thread
From: Raghavendra K T @ 2012-05-16  3:19 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Srivatsa Vaddagiri, Ingo Molnar, Linus Torvalds, Andrew Morton,
	Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner, Nikunj A. Dadhania

On 05/14/2012 12:15 AM, Raghavendra K T wrote:
> On 05/07/2012 08:22 PM, Avi Kivity wrote:
>
> I could not come with pv-flush results (also Nikunj had clarified that
> the result was on NOn PLE
>
>> I'd like to see those numbers, then.
>>
>> Ingo, please hold on the kvm-specific patches, meanwhile.
>>
>
> 3 guests 8GB RAM, 1 used for kernbench
> (kernbench -f -H -M -o 20) other for cpuhog (shell script with while
> true do hackbench)
>
> 1x: no hogs
> 2x: 8hogs in one guest
> 3x: 8hogs each in two guest
>
> kernbench on PLE:
> Machine : IBM xSeries with Intel(R) Xeon(R) X7560 2.27GHz CPU with 32
> core, with 8 online cpus and 4*64GB RAM.
>
> The average is taken over 4 iterations with 3 run each (4*3=12). and
> stdev is calculated over mean reported in each run.
>
>
> A): 8 vcpu guest
>
> BASE BASE+patch %improvement w.r.t
> mean (sd) mean (sd) patched kernel time
> case 1*1x: 61.7075 (1.17872) 60.93 (1.475625) 1.27605
> case 1*2x: 107.2125 (1.3821349) 97.506675 (1.3461878) 9.95401
> case 1*3x: 144.3515 (1.8203927) 138.9525 (0.58309319) 3.8855
>
>
> B): 16 vcpu guest
> BASE BASE+patch %improvement w.r.t
> mean (sd) mean (sd) patched kernel time
> case 2*1x: 70.524 (1.5941395) 69.68866 (1.9392529) 1.19867
> case 2*2x: 133.0738 (1.4558653) 124.8568 (1.4544986) 6.58114
> case 2*3x: 206.0094 (1.3437359) 181.4712 (2.9134116) 13.5218
>
> B): 32 vcpu guest
> BASE BASE+patch %improvementw.r.t
> mean (sd) mean (sd) patched kernel time
> case 4*1x: 100.61046 (2.7603485) 85.48734 (2.6035035) 17.6905
>
> It seems while we do not see any improvement in low contention case,
> the benefit becomes evident with overcommit and large guests. I am
> continuing analysis with other benchmarks (now with pgbench to check if
> it has acceptable improvement/degradation in low contenstion case).

Here are the results for pgbench and sysbench. These results are on a
single guest.

Machine : IBM xSeries with Intel(R) Xeon(R) X7560 2.27GHz CPU with 32
cores, 8 online cpus and 4*64GB RAM.

Guest config: 8GB RAM

pgbench
==========

   unit=tps (higher is better)
   pgbench based on pgsql 9.2-dev:
	http://www.postgresql.org/ftp/snapshot/dev/ (link given by Attilio)

   tool used to collect the benchmark:
git://git.postgresql.org/git/pgbench-tools.git
   config: MAX_WORKER=16 SCALE=32 run for NRCLIENTS = 1, 8, 64

Average taken over 10 iterations.

      8 vcpu guest	

      N  base	   patch	improvement
      1  5271       5235    	-0.687679
      8  37953      38202    	0.651798
      64 37546      37774    	0.60359


      16 vcpu guest	

      N  base	   patch	improvement
      1  5229       5239  	0.190876
      8  34908      36048    	3.16245
      64 51796      52852   	1.99803

sysbench
==========
sysbench 0.4.12 configured for the postgres driver, run with
sysbench --num-threads=8/16/32 --max-requests=100000 --test=oltp
--oltp-table-size=500000 --db-driver=pgsql --oltp-read-only run
and analysed with ministat, where
x patch
+ base

8 vcpu guest
---------------
1) num_threads = 8
     N           Min           Max        Median           Avg        Stddev
x  10       20.7805         21.55       20.9667      21.03502    0.22682186
+  10        21.025       22.3122      21.29535      21.41793    0.39542349
Difference at 98.0% confidence
	1.82035% +/- 1.74892%

2) num_threads = 16
     N           Min           Max        Median           Avg        Stddev
x  10       20.8786       21.3967       21.1566      21.14441    0.15490983
+  10       21.3992       21.9437      21.46235      21.58724     0.2089425
Difference at 98.0% confidence
	2.09431% +/- 0.992732%

3) num_threads = 32
     N           Min           Max        Median           Avg        Stddev
x  10       21.1329       21.3726      21.33415       21.2893    0.08324195
+  10       21.5692       21.8966       21.6441      21.65679   0.093430003
Difference at 98.0% confidence
	1.72617% +/- 0.474343%


16 vcpu guest
---------------
1) num_threads = 8
     N           Min           Max        Median           Avg        Stddev
x  10       23.5314       25.6118      24.76145      24.64517    0.74856264
+  10       22.2675       26.6204       22.9131      23.50554      1.345386
No difference proven at 98.0% confidence

2) num_threads = 16
     N           Min           Max        Median           Avg        Stddev
x  10       12.0095       12.2305      12.15575      12.13926   0.070872722
+  10        11.413       11.6986       11.4817        11.493   0.080007819
Difference at 98.0% confidence
	-5.32372% +/- 0.710561%

3) num_threads = 32
     N           Min           Max        Median           Avg        Stddev
x  10       12.1378       12.3567      12.21675      12.22703     0.0670695
+  10        11.573       11.7438       11.6306      11.64905   0.062780221
Difference at 98.0% confidence
	-4.72707% +/- 0.606349%


32 vcpu guest
---------------
1) num_threads = 8
     N           Min           Max        Median           Avg        Stddev
x  10       30.5602       41.4756      37.45155      36.43752     3.5490215
+  10       21.1183       49.2599      22.60845      29.61119     11.269393
No difference proven at 98.0% confidence

2) num_threads = 16
     N           Min           Max        Median           Avg        Stddev
x  10       12.2556       12.9023       12.4968      12.55764    0.25330459
+  10       11.7627       11.9959       11.8419      11.86256   0.088563903
Difference at 98.0% confidence
	-5.53512% +/- 1.72448%

3) num_threads = 32
     N           Min           Max        Median           Avg        Stddev
x  10       16.8751       17.0756      16.97335      16.96765   0.063197191
+  10       21.3763       21.8111       21.6799      21.66438    0.13059888
Difference at 98.0% confidence
	27.6805% +/- 0.690056%


To summarise, with a 32-vcpu guest and num_threads=32 we get around a
27% improvement. In very low/undercommitted systems we may see a very
small improvement, or a small, acceptable degradation.

(IMO with more overcommit/contention we can get more than 15% on these
benchmarks, and we do.)

 Please let me know if you have any suggestions to try.
(My PLE machine lease has expired; it may take some time to get it
back :()

  Ingo, Avi ?


>
> Avi,
> Can patch series go ahead for inclusion into tree with following
> reasons:
>
> The patch series brings fairness with ticketlock ( hence the
> predictability, since during contention, vcpu trying
> to acqire lock is sure that it gets its turn in less than total number
> of vcpus conntending for lock), which is very much desired irrespective
> of its low benefit/degradation (if any) in low contention scenarios.
>
> Ofcourse ticketlocks had undesirable effect of exploding LHP problem,
> and the series addresses with improvement in scheduling and sleeping
> instead of burning cpu time.
>
> Finally a less famous one, it brings almost PLE equivalent capabilty to
> all the non PLE hardware (TBH I always preferred my experiment kernel to
> be compiled in my pv guest that saves more than 30 min of time for each
> run).


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-16  3:19                             ` Raghavendra K T
@ 2012-05-30 11:26                               ` Raghavendra K T
  2012-06-14 12:21                                 ` Raghavendra K T
  0 siblings, 1 reply; 53+ messages in thread
From: Raghavendra K T @ 2012-05-30 11:26 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Srivatsa Vaddagiri, Ingo Molnar, Linus Torvalds, Andrew Morton,
	Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner, Nikunj A. Dadhania

On 05/16/2012 08:49 AM, Raghavendra K T wrote:
> On 05/14/2012 12:15 AM, Raghavendra K T wrote:
>> On 05/07/2012 08:22 PM, Avi Kivity wrote:
>>
>> I could not come with pv-flush results (also Nikunj had clarified that
>> the result was on NOn PLE
>>
>>> I'd like to see those numbers, then.
>>>
>>> Ingo, please hold on the kvm-specific patches, meanwhile.
[...]
> To summarise,
> with 32 vcpu guest with nr thread=32 we get around 27% improvement. In
> very low/undercommitted systems we may see very small improvement or
> small acceptable degradation ( which it deserves).
>

For large guests, the current SPIN_THRESHOLD value, along with
ple_window, needed some research/experimentation.

[Thanks to Jeremy/Nikunj for inputs and help in result analysis ]

I started with the debugfs spinlock histograms, and ran experiments
with 32- and 64-vcpu guests for spin thresholds of 2k, 4k, 8k, 16k and
32k with 1vm/2vm/4vm for kernbench, sysbench, ebizzy and hackbench.
[ the spinlock histogram gives a logarithmic view of lock-wait times ]

machine: PLE machine  with 32 cores.

Here is the result summary.
The summary has 2 parts:
(1) %improvement w.r.t. the 2K spin threshold,
(2) improvement w.r.t. the sum of the histogram numbers in debugfs
(which gives a rough indication of contention/cpu time wasted).

For example, 98% for the 4k threshold in the kbench 1-vm case would
imply a 98% reduction in sigma(histogram values) compared to the 2k
case.

Result for 32 vcpu guest
==========================
+----------------+-----------+-----------+-----------+-----------+
|    Base-2k     |     4k    |    8k     |   16k     |    32k    |
+----------------+-----------+-----------+-----------+-----------+
|     kbench-1vm |       44  |       50  |       46  |       41  |
|  SPINHisto-1vm |       98  |       99  |       99  |       99  |
|     kbench-2vm |       25  |       45  |       49  |       45  |
|  SPINHisto-2vm |       31  |       91  |       99  |       99  |
|     kbench-4vm |      -13  |      -27  |       -2  |       -4  |
|  SPINHisto-4vm |       29  |       66  |       95  |       99  |
+----------------+-----------+-----------+-----------+-----------+
|     ebizzy-1vm |      954  |      942  |      913  |      915  |
|  SPINHisto-1vm |       96  |       99  |       99  |       99  |
|     ebizzy-2vm |      158  |      135  |      123  |      106  |
|  SPINHisto-2vm |       90  |       98  |       99  |       99  |
|     ebizzy-4vm |      -13  |      -28  |      -33  |      -37  |
|  SPINHisto-4vm |       83  |       98  |       99  |       99  |
+----------------+-----------+-----------+-----------+-----------+
|     hbench-1vm |       48  |       56  |       52  |       64  |
|  SPINHisto-1vm |       92  |       95  |       99  |       99  |
|     hbench-2vm |       32  |       40  |       39  |       21  |
|  SPINHisto-2vm |       74  |       96  |       99  |       99  |
|     hbench-4vm |       27  |       15  |        3  |      -57  |
|  SPINHisto-4vm |       68  |       88  |       94  |       97  |
+----------------+-----------+-----------+-----------+-----------+
|    sysbnch-1vm |        0  |        0  |        1  |        0  |
|  SPINHisto-1vm |       76  |       98  |       99  |       99  |
|    sysbnch-2vm |       -1  |        3  |       -1  |       -4  |
|  SPINHisto-2vm |       82  |       94  |       96  |       99  |
|    sysbnch-4vm |        0  |       -2  |       -8  |      -14  |
|  SPINHisto-4vm |       57  |       79  |       88  |       95  |
+----------------+-----------+-----------+-----------+-----------+

result for 64  vcpu guest
=========================
+----------------+-----------+-----------+-----------+-----------+
|    Base-2k     |     4k    |    8k     |   16k     |    32k    |
+----------------+-----------+-----------+-----------+-----------+
|     kbench-1vm |        1  |      -11  |      -25  |       31  |
|  SPINHisto-1vm |        3  |       10  |       47  |       99  |
|     kbench-2vm |       15  |       -9  |      -66  |      -15  |
|  SPINHisto-2vm |        2  |       11  |       19  |       90  |
+----------------+-----------+-----------+-----------+-----------+
|     ebizzy-1vm |      784  |     1097  |      978  |      930  |
|  SPINHisto-1vm |       74  |       97  |       98  |       99  |
|     ebizzy-2vm |       43  |       48  |       56  |       32  |
|  SPINHisto-2vm |       58  |       93  |       97  |       98  |
+----------------+-----------+-----------+-----------+-----------+
|     hbench-1vm |        8  |       55  |       56  |       62  |
|  SPINHisto-1vm |       18  |       69  |       96  |       99  |
|     hbench-2vm |       13  |      -14  |      -75  |      -29  |
|  SPINHisto-2vm |       57  |       74  |       80  |       97  |
+----------------+-----------+-----------+-----------+-----------+
|    sysbnch-1vm |        9  |       11  |       15  |       10  |
|  SPINHisto-1vm |       80  |       93  |       98  |       99  |
|    sysbnch-2vm |        3  |        3  |        4  |        2  |
|  SPINHisto-2vm |       72  |       89  |       94  |       97  |
+----------------+-----------+-----------+-----------+-----------+
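The SPINHisto rows above can be read with a small helper: as explained later in the thread, each entry is the percentage reduction in the sum of the debugfs spinlock-histogram values relative to the Base-2k run. A minimal sketch (the function name is mine, not from the thread):

```c
#include <assert.h>

/* Percentage reduction in sigma(histogram values) for a given spin
 * threshold, relative to the Base-2k run -- the metric reported in the
 * SPINHisto rows.  sum_2k and sum_thresh are the sums of the debugfs
 * spinlock-histogram buckets for the two runs. */
static double spinhisto_reduction(double sum_2k, double sum_thresh)
{
    return (1.0 - sum_thresh / sum_2k) * 100.0;
}
```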

From this, a value around the 4k-8k threshold seems to be the optimal one.
[ This is almost in line with the ple_window default. ]
(The lower the spin threshold, the smaller the % of spinlocks we cover,
which would result in more halt exits/wakeups.)

[ www.xen.org/files/xensummitboston08/LHP.pdf also has good graphical 
detail on covering spinlock waits ]

After the 8k threshold, we see no more contention, but that would mean we 
have wasted a lot of cpu time in busy-waits.

Will get a PLE machine again, and I'll continue experimenting with 
further tuning of SPIN_THRESHOLD.
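The tradeoff being tuned here — spin a bounded number of times on the ticket, then halt and wait to be kicked — can be sketched as a user-space simulation. This is illustrative C, not the kernel code: halt_until_kicked() stands in for the real HLT + KVM_HC_KICK_CPU path, and here it simply advances the head to simulate the lock holder releasing and kicking while we slept.

```c
#include <assert.h>

#define SPIN_THRESHOLD (1 << 12)    /* 4k, near the sweet spot found above */

static volatile unsigned demo_head; /* current owner's ticket               */
static int halt_count;              /* how often a waiter gave up spinning  */

/* Stub: pretend we HLT'd and were later kicked; simulate the holder
 * releasing the lock while we slept by advancing the head. */
static void halt_until_kicked(void)
{
    halt_count++;
    demo_head++;
}

/* Slowpath of a pv-ticketlock waiter: spin up to SPIN_THRESHOLD times
 * on our ticket, then block.  A lower threshold wastes fewer cycles
 * but causes more halt exits/wakeups -- the tradeoff measured above. */
static void wait_for_ticket(unsigned ticket)
{
    while (demo_head != ticket) {
        for (unsigned loop = SPIN_THRESHOLD; loop; loop--)
            if (demo_head == ticket)
                return;
        if (demo_head != ticket)
            halt_until_kicked();
    }
}
```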



* Re: [PATCH RFC V8 17/17] Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
  2012-05-02 10:09 ` [PATCH RFC V8 17/17] Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock Raghavendra K T
@ 2012-05-30 11:54   ` Jan Kiszka
  2012-05-30 13:44     ` Raghavendra K T
  0 siblings, 1 reply; 53+ messages in thread
From: Jan Kiszka @ 2012-05-30 11:54 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Avi Kivity, X86, Gleb Natapov, Ingo Molnar,
	Marcelo Tosatti, Xen Devel, KVM, linux-doc, LKML,
	Srivatsa Vaddagiri, Virtualization, Andi Kleen,
	Stephan Diestelhorst, Attilio Rao, Linus Torvalds,
	Stefano Stabellini

On 2012-05-02 12:09, Raghavendra K T wrote:
> From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com> 
> 
> KVM_HC_KICK_CPU  hypercall added to wakeup halted vcpu in paravirtual spinlock
> enabled guest.
> 
> KVM_FEATURE_PV_UNHALT enables guest to check whether pv spinlock can be enabled
> in guest.
> 
> Thanks Alex for KVM_HC_FEATURES inputs and Vatsa for rewriting KVM_HC_KICK_CPU

This contains valuable documentation for features that are already
supported. Can you break them out and post as separate patch already?
One comment on them below.

> 
> Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> ---
>  Documentation/virtual/kvm/cpuid.txt      |    4 ++
>  Documentation/virtual/kvm/hypercalls.txt |   60 ++++++++++++++++++++++++++++++
>  2 files changed, 64 insertions(+), 0 deletions(-)
> diff --git a/Documentation/virtual/kvm/cpuid.txt b/Documentation/virtual/kvm/cpuid.txt
> index 8820685..062dff9 100644
> --- a/Documentation/virtual/kvm/cpuid.txt
> +++ b/Documentation/virtual/kvm/cpuid.txt
> @@ -39,6 +39,10 @@ KVM_FEATURE_CLOCKSOURCE2           ||     3 || kvmclock available at msrs
>  KVM_FEATURE_ASYNC_PF               ||     4 || async pf can be enabled by
>                                     ||       || writing to msr 0x4b564d02
>  ------------------------------------------------------------------------------
> +KVM_FEATURE_PV_UNHALT              ||     6 || guest checks this feature bit
> +                                   ||       || before enabling paravirtualized
> +                                   ||       || spinlock support.
> +------------------------------------------------------------------------------
>  KVM_FEATURE_CLOCKSOURCE_STABLE_BIT ||    24 || host will warn if no guest-side
>                                     ||       || per-cpu warps are expected in
>                                     ||       || kvmclock.
> diff --git a/Documentation/virtual/kvm/hypercalls.txt b/Documentation/virtual/kvm/hypercalls.txt
> new file mode 100644
> index 0000000..bc3f14a
> --- /dev/null
> +++ b/Documentation/virtual/kvm/hypercalls.txt
> @@ -0,0 +1,60 @@
> +KVM Hypercalls Documentation
> +============================
> +The template for each hypercall is:
> +1. Hypercall name, value.
> +2. Architecture(s)
> +3. Status (deprecated, obsolete, active)
> +4. Purpose
> +
> +1. KVM_HC_VAPIC_POLL_IRQ
> +------------------------
> +Value: 1
> +Architecture: x86
> +Purpose: None

Purpose: Trigger guest exit so that the host can check for pending
interrupts on reentry.

> +
> +2. KVM_HC_MMU_OP
> +------------------------
> +Value: 2
> +Architecture: x86
> +Status: deprecated.
> +Purpose: Support MMU operations such as writing to PTEs,
> +flushing the TLB, and releasing PTs.
> +
> +3. KVM_HC_FEATURES
> +------------------------
> +Value: 3
> +Architecture: PPC
> +Status: active
> +Purpose: Expose hypercall availability to the guest. On x86 platforms, cpuid
> +is used to enumerate which hypercalls are available. On PPC, either a device
> +tree based lookup (which is also what ePAPR dictates) or a KVM-specific
> +enumeration mechanism (this hypercall) can be used.
> +
> +4. KVM_HC_PPC_MAP_MAGIC_PAGE
> +------------------------
> +Value: 4
> +Architecture: PPC
> +Status: active
> +Purpose: To enable communication between the hypervisor and guest there is a
> +shared page that contains parts of supervisor visible register state.
> +The guest can map this shared page to access its supervisor register through
> +memory using this hypercall.
> +
> +5. KVM_HC_KICK_CPU
> +------------------------
> +Value: 5
> +Architecture: x86
> +Status: active
> +Purpose: Hypercall used to wake up a vcpu from HLT state
> +
> +Usage example : A vcpu of a paravirtualized guest that is busy-waiting in guest
> +kernel mode for an event to occur (ex: a spinlock to become available) can
> +execute the HLT instruction once it has busy-waited for more than a threshold
> +time-interval. Execution of the HLT instruction causes the hypervisor to put
> +the vcpu to sleep until the occurrence of an appropriate event. Another vcpu of
> +the same guest can wake up the sleeping vcpu by issuing the KVM_HC_KICK_CPU
> +hypercall, specifying the APIC ID of the vcpu to be woken up.
> +
> +TODO:
> +1. more information on input and output needed?
> +2. Add more detail to purpose of hypercalls.

Thanks,
Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
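The kick protocol in the usage example quoted above can be simulated in a few lines of user-space C. This is a sketch under stated assumptions: kick_cpu() stands in for the real KVM_HC_KICK_CPU hypercall (in a guest, kvm_hypercall1(KVM_HC_KICK_CPU, apicid)), and the waiter bookkeeping (waiter_cpu[], halted[]) is invented here for illustration.

```c
#include <assert.h>

#define NCPUS 4

static int halted[NCPUS];   /* set when a vcpu HLT'd in the lock slowpath */

/* Stub for the KVM_HC_KICK_CPU hypercall: wake the vcpu with this APIC ID.
 * In a real guest this would be kvm_hypercall1(KVM_HC_KICK_CPU, apicid). */
static void kick_cpu(int apicid)
{
    halted[apicid] = 0;
}

/* Release side of a pv ticketlock: hand the lock to the next ticket and,
 * if that ticket's owner gave up spinning and halted, kick it awake.
 * waiter_cpu[] maps a ticket number to the vcpu waiting on it (invented
 * bookkeeping for this sketch). */
static void unlock_kick(unsigned *head, const int *waiter_cpu)
{
    unsigned next = ++*head;            /* release to the next ticket */
    int cpu = waiter_cpu[next % NCPUS];
    if (halted[cpu])
        kick_cpu(cpu);                  /* wake the sleeping waiter   */
}
```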


* Re: [PATCH RFC V8 17/17] Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
  2012-05-30 11:54   ` Jan Kiszka
@ 2012-05-30 13:44     ` Raghavendra K T
  0 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-05-30 13:44 UTC (permalink / raw)
  To: Jan Kiszka
  Cc: Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Avi Kivity, X86, Gleb Natapov, Ingo Molnar,
	Marcelo Tosatti, Xen Devel, KVM, linux-doc, LKML,
	Srivatsa Vaddagiri, Virtualization, Andi Kleen,
	Stephan Diestelhorst, Attilio Rao, Linus Torvalds,
	Stefano Stabellini

On 05/30/2012 05:24 PM, Jan Kiszka wrote:
> On 2012-05-02 12:09, Raghavendra K T wrote:
>> From: Raghavendra K T<raghavendra.kt@linux.vnet.ibm.com>
>>
>> KVM_HC_KICK_CPU  hypercall added to wakeup halted vcpu in paravirtual spinlock
>> enabled guest.
>>
>> KVM_FEATURE_PV_UNHALT enables guest to check whether pv spinlock can be enabled
>> in guest.
>>
>> Thanks Alex for KVM_HC_FEATURES inputs and Vatsa for rewriting KVM_HC_KICK_CPU
>
> This contains valuable documentation for features that are already
> supported. Can you break them out and post as separate patch already?
> One comment on them below.
>

That sounds like a good idea. Sure, will do that.

>>
>> [...]
>>
>> +1. KVM_HC_VAPIC_POLL_IRQ
>> +------------------------
>> +Value: 1
>> +Architecture: x86
>> +Purpose: None
>
> Purpose: Trigger guest exit so that the host can check for pending
> interrupts on reentry.

Will fold this in and resend.

[...]



* Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
  2012-05-30 11:26                               ` Raghavendra K T
@ 2012-06-14 12:21                                 ` Raghavendra K T
  0 siblings, 0 replies; 53+ messages in thread
From: Raghavendra K T @ 2012-06-14 12:21 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Srivatsa Vaddagiri, Ingo Molnar, Linus Torvalds, Andrew Morton,
	Jeremy Fitzhardinge, Greg Kroah-Hartman, Konrad Rzeszutek Wilk,
	H. Peter Anvin, Marcelo Tosatti, X86, Gleb Natapov, Ingo Molnar,
	Attilio Rao, Virtualization, Xen Devel, linux-doc, KVM,
	Andi Kleen, Stefano Stabellini, Stephan Diestelhorst, LKML,
	Peter Zijlstra, Thomas Gleixner, Nikunj A. Dadhania

On 05/30/2012 04:56 PM, Raghavendra K T wrote:
> On 05/16/2012 08:49 AM, Raghavendra K T wrote:
>> On 05/14/2012 12:15 AM, Raghavendra K T wrote:
>>> On 05/07/2012 08:22 PM, Avi Kivity wrote:
>>>
>>> I could not come up with the pv-flush results (also, Nikunj had clarified
>>> that the result was on a non-PLE machine).
>>>
>>>> I'd like to see those numbers, then.
>>>>
>>>> Ingo, please hold on the kvm-specific patches, meanwhile.
> [...]
>> To summarise, with a 32 vcpu guest with nr_thread=32 we get around 27%
>> improvement. In very low/undercommitted systems we may see a very small
>> improvement or a small acceptable degradation (which it deserves).
>>
>
> For large guests, the current SPIN_THRESHOLD value, along with ple_window,
> needed some research/experimentation.
>
> [Thanks to Jeremy/Nikunj for inputs and help in result analysis ]
>
> I started with the debugfs spinlock histograms, and ran experiments with 32
> and 64 vcpu guests for spin thresholds of 2k, 4k, 8k, 16k, and 32k with
> 1vm/2vm/4vm for kernbench, sysbench, ebizzy, and hackbench.
> [ spinlock/histogram gives logarithmic view of lockwait times ]
>
> machine: PLE machine with 32 cores.
>
> Here is the result summary.
> The summary includes 2 parts:
> (1) %improvement w.r.t. the 2K spin threshold,
> (2) improvement w.r.t. the sum of histogram numbers in debugfs (which gives
> a rough indication of contention/cpu time wasted)
>
> For e.g., 98% for the 4k threshold for kbench with 1 VM would imply there is
> a 98% reduction in sigma(histogram values) compared to the 2k case
>
> Result for 32 vcpu guest
> ==========================
> +----------------+-----------+-----------+-----------+-----------+
> |    Base-2k     |     4k    |    8k     |   16k     |    32k    |
> +----------------+-----------+-----------+-----------+-----------+
> |     kbench-1vm |       44  |       50  |       46  |       41  |
> |  SPINHisto-1vm |       98  |       99  |       99  |       99  |
> |     kbench-2vm |       25  |       45  |       49  |       45  |
> |  SPINHisto-2vm |       31  |       91  |       99  |       99  |
> |     kbench-4vm |      -13  |      -27  |       -2  |       -4  |
> |  SPINHisto-4vm |       29  |       66  |       95  |       99  |
> +----------------+-----------+-----------+-----------+-----------+
> |     ebizzy-1vm |      954  |      942  |      913  |      915  |
> |  SPINHisto-1vm |       96  |       99  |       99  |       99  |
> |     ebizzy-2vm |      158  |      135  |      123  |      106  |
> |  SPINHisto-2vm |       90  |       98  |       99  |       99  |
> |     ebizzy-4vm |      -13  |      -28  |      -33  |      -37  |
> |  SPINHisto-4vm |       83  |       98  |       99  |       99  |
> +----------------+-----------+-----------+-----------+-----------+
> |     hbench-1vm |       48  |       56  |       52  |       64  |
> |  SPINHisto-1vm |       92  |       95  |       99  |       99  |
> |     hbench-2vm |       32  |       40  |       39  |       21  |
> |  SPINHisto-2vm |       74  |       96  |       99  |       99  |
> |     hbench-4vm |       27  |       15  |        3  |      -57  |
> |  SPINHisto-4vm |       68  |       88  |       94  |       97  |
> +----------------+-----------+-----------+-----------+-----------+
> |    sysbnch-1vm |        0  |        0  |        1  |        0  |
> |  SPINHisto-1vm |       76  |       98  |       99  |       99  |
> |    sysbnch-2vm |       -1  |        3  |       -1  |       -4  |
> |  SPINHisto-2vm |       82  |       94  |       96  |       99  |
> |    sysbnch-4vm |        0  |       -2  |       -8  |      -14  |
> |  SPINHisto-4vm |       57  |       79  |       88  |       95  |
> +----------------+-----------+-----------+-----------+-----------+
>
> result for 64 vcpu guest
> =========================
> +----------------+-----------+-----------+-----------+-----------+
> |    Base-2k     |     4k    |    8k     |   16k     |    32k    |
> +----------------+-----------+-----------+-----------+-----------+
> |     kbench-1vm |        1  |      -11  |      -25  |       31  |
> |  SPINHisto-1vm |        3  |       10  |       47  |       99  |
> |     kbench-2vm |       15  |       -9  |      -66  |      -15  |
> |  SPINHisto-2vm |        2  |       11  |       19  |       90  |
> +----------------+-----------+-----------+-----------+-----------+
> |     ebizzy-1vm |      784  |     1097  |      978  |      930  |
> |  SPINHisto-1vm |       74  |       97  |       98  |       99  |
> |     ebizzy-2vm |       43  |       48  |       56  |       32  |
> |  SPINHisto-2vm |       58  |       93  |       97  |       98  |
> +----------------+-----------+-----------+-----------+-----------+
> |     hbench-1vm |        8  |       55  |       56  |       62  |
> |  SPINHisto-1vm |       18  |       69  |       96  |       99  |
> |     hbench-2vm |       13  |      -14  |      -75  |      -29  |
> |  SPINHisto-2vm |       57  |       74  |       80  |       97  |
> +----------------+-----------+-----------+-----------+-----------+
> |    sysbnch-1vm |        9  |       11  |       15  |       10  |
> |  SPINHisto-1vm |       80  |       93  |       98  |       99  |
> |    sysbnch-2vm |        3  |        3  |        4  |        2  |
> |  SPINHisto-2vm |       72  |       89  |       94  |       97  |
> +----------------+-----------+-----------+-----------+-----------+
>
> From this, a value around the 4k-8k threshold seems to be the optimal one.
> [ This is almost in line with the ple_window default. ]
> (The lower the spin threshold, the smaller the % of spinlocks we cover,
> which would result in more halt exits/wakeups.)
>
> [ www.xen.org/files/xensummitboston08/LHP.pdf also has good graphical
> detail on covering spinlock waits ]
>
> After the 8k threshold, we see no more contention, but that would mean we
> have wasted a lot of cpu time in busy-waits.
>
> Will get a PLE machine again, and I'll continue experimenting with
> further tuning of SPIN_THRESHOLD.

Sorry for the delayed response; I was doing a lot of analysis and
experiments.

I continued my experiments with the spin threshold. Unfortunately, I could
not settle on which of the 4k/8k thresholds is better, since it depends on
the load and the type of workload.

Here are the results for 32 vcpu guests for sysbench and kernbench, for up 
to 4 VMs with 8GB RAM each, on the same PLE machine, with:

1x: benchmark running on 1 guest
2x: same benchmark running on 2 guest and so on

1x run is taken over 8*3 run averages
2x run was taken with 4*3 runs
3x run was with 6*3
4x run was with 4*3


kernbench
=========
total_job = 2 * number of vcpus
kernbench -f -H -M -o $total_job


+------------+------------+-----------+---------------+---------+
| base       |  pv_4k     | %impr     |   pv_8k       | %impr   |
+------------+------------+-----------+---------------+---------+
| 49.98      |  49.147475 | 1.69393   |   50.575567   | -1.17758|
| 106.0051   |  96.668325 | 9.65857   |   91.62165    | 15.6987 |
| 189.82067  |  181.839   | 4.38942   |   188.8595    | 0.508934|
+------------+------------+-----------+---------------+---------+

sysbench
===========
Ran with num_thread = 2 * number of vcpus

sysbench --num-threads=$num_thread --max-requests=100000 --test=oltp 
--oltp-table-size=500000 --db-driver=pgsql --oltp-read-only run

32 vcpu
-------

+------------+------------+-----------+---------------+---------+
| base       |  pv_4k     | %impr     |   pv_8k       | %impr   |
+------------+------------+-----------+---------------+---------+
| 16.4109    |  12.109988 | 35.5154   |   12.658113   | 29.6473 |
| 14.232712  |  13.640387 | 4.34244   |   14.16485    | 0.479087|
| 23.49685   |  23.196375 | 1.29535   |   19.024871   | 23.506  |
+------------+------------+-----------+---------------+---------+
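For reference, the %impr columns in these tables appear to be computed relative to the pv run, i.e. (base - pv) / pv * 100: that reproduces e.g. ~1.69 for the first kernbench row and ~35.5 for the first sysbench row. A small helper (the formula is inferred from the numbers, not stated in the thread):

```c
#include <assert.h>

/* Percentage improvement of the pv run over the base run, relative to the
 * pv time (lower times are better for both benchmarks).  Inferred from
 * the table values, e.g. (49.98 - 49.147475) / 49.147475 * 100 ~= 1.69. */
static double pct_impr(double base, double pv)
{
    return (base - pv) / pv * 100.0;
}
```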

and the observations are:

1) The 8k threshold does better for medium overcommit, but there PLE has
more control than the pv spinlock does.

2) 4k does well for the no-overcommit and high-overcommit cases, and it also
helps more than 8k on non-PLE machines. In the medium-overcommit cases we
see less performance benefit due to the increase in halt exits.

I'll continue my analysis.
Also, I have come up with a directed-yield patch where we do a directed
yield in the vcpu block path instead of a blind schedule. I will do some
more experiments with that and post it as an RFC.

Let me know if you have any comments/suggestions.



end of thread, other threads:[~2012-06-14 12:22 UTC | newest]

Thread overview: 53+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
2012-05-02 10:06 ` [PATCH RFC V8 1/17] x86/spinlock: Replace pv spinlocks with pv ticketlocks Raghavendra K T
2012-05-02 10:06 ` [PATCH RFC V8 2/17] x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks Raghavendra K T
2012-05-02 10:06 ` [PATCH RFC V8 3/17] x86/ticketlock: Collapse a layer of functions Raghavendra K T
2012-05-02 10:07 ` [PATCH RFC V8 4/17] xen: Defer spinlock setup until boot CPU setup Raghavendra K T
2012-05-02 10:07 ` [PATCH RFC V8 5/17] xen/pvticketlock: Xen implementation for PV ticket locks Raghavendra K T
2012-05-02 10:07 ` [PATCH RFC V8 6/17] xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks Raghavendra K T
2012-05-02 10:07 ` [PATCH RFC V8 7/17] x86/pvticketlock: Use callee-save for lock_spinning Raghavendra K T
2012-05-02 10:07 ` [PATCH RFC V8 8/17] x86/pvticketlock: When paravirtualizing ticket locks, increment by 2 Raghavendra K T
2012-05-02 10:08 ` [PATCH RFC V8 9/17] Split out rate limiting from jump_label.h Raghavendra K T
2012-05-02 10:08 ` [PATCH RFC V8 10/17] x86/ticketlock: Add slowpath logic Raghavendra K T
2012-05-02 10:08 ` [PATCH RFC V8 11/17] xen/pvticketlock: Allow interrupts to be enabled while blocking Raghavendra K T
2012-05-02 10:08 ` [PATCH RFC V8 12/17] xen: Enable PV ticketlocks on HVM Xen Raghavendra K T
2012-05-02 10:08 ` [PATCH RFC V8 13/17] kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks Raghavendra K T
2012-05-02 10:09 ` [PATCH RFC V8 14/17] kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration Raghavendra K T
2012-05-02 10:09 ` [PATCH RFC V8 15/17] kvm guest : Add configuration support to enable debug information for KVM Guests Raghavendra K T
2012-05-02 10:09 ` [PATCH RFC V8 16/17] kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor Raghavendra K T
2012-05-02 10:09 ` [PATCH RFC V8 17/17] Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock Raghavendra K T
2012-05-30 11:54   ` Jan Kiszka
2012-05-30 13:44     ` Raghavendra K T
2012-05-07  8:29 ` [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Ingo Molnar
2012-05-07  8:32   ` Avi Kivity
2012-05-07 10:58     ` Raghavendra K T
2012-05-07 12:06       ` Avi Kivity
2012-05-07 13:20         ` Raghavendra K T
2012-05-07 13:22           ` Avi Kivity
2012-05-07 13:38             ` Raghavendra K T
2012-05-07 13:46               ` Srivatsa Vaddagiri
2012-05-07 13:49                 ` Avi Kivity
2012-05-07 13:53                   ` Raghavendra K T
2012-05-07 13:58                     ` Avi Kivity
2012-05-07 14:47                       ` Raghavendra K T
2012-05-07 14:52                         ` Avi Kivity
2012-05-07 14:54                           ` Avi Kivity
2012-05-07 17:25                           ` Ingo Molnar
2012-05-07 20:42                             ` Thomas Gleixner
2012-05-08  6:46                               ` Nikunj A Dadhania
2012-05-15 11:26                             ` [Xen-devel] " Jan Beulich
2012-05-08  5:25                           ` Raghavendra K T
2012-05-13 18:45                           ` Raghavendra K T
2012-05-14  4:57                             ` Nikunj A Dadhania
2012-05-14  9:01                               ` Raghavendra K T
2012-05-14  7:38                             ` Jeremy Fitzhardinge
2012-05-14  8:11                               ` Raghavendra K T
2012-05-16  3:19                             ` Raghavendra K T
2012-05-30 11:26                               ` Raghavendra K T
2012-06-14 12:21                                 ` Raghavendra K T
2012-05-07 13:55                   ` Srivatsa Vaddagiri
2012-05-07 23:15                   ` Jeremy Fitzhardinge
2012-05-08  1:13                     ` Raghavendra K T
2012-05-08  9:08                     ` Avi Kivity
2012-05-07 13:56                 ` Raghavendra K T
2012-05-13 17:59         ` Raghavendra K T
