* [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
From: Raghavendra K T @ 2013-06-01 19:21 UTC
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri


This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism. The series provides
implementation for both Xen and KVM.

Changes in V9:
- Changed SPIN_THRESHOLD to 32k to avoid excess halt exits that were
   causing undercommit degradation (after the PLE handler improvement).
- Added kvm_irq_delivery_to_apic (suggested by Gleb)
- Optimized halt exit path to use PLE handler

V8 of PVspinlock was posted last year. After Avi's suggestion to look
at PLE handler improvements, various optimizations in PLE handling
have been tried.

With this series we see that we can get a little more improvement on top
of that.

Ticket locks have an inherent problem in a virtualized case, because
the vCPUs are scheduled rather than running concurrently (ignoring
gang scheduled vCPUs).  This can result in catastrophic performance
collapses when the vCPU scheduler doesn't schedule the correct "next"
vCPU, and ends up scheduling a vCPU which burns its entire timeslice
spinning.  (Note that this is not the same problem as lock-holder
preemption, which this series also addresses; that's also a problem,
but not catastrophic).

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

Currently we deal with this by having PV spinlocks, which adds a layer
of indirection in front of all the spinlock functions, and defining a
completely new implementation for Xen (and for other pvops users, but
there are none at present).

PV ticketlocks keep the existing ticketlock implementation
(fastpath) as-is, but add a couple of pvops for the slow paths:

- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
  iterations, then call out to the __ticket_lock_spinning() pvop,
  which allows a backend to block the vCPU rather than spinning.  This
  pvop can set the lock into "slowpath state".

- When releasing a lock, if it is in "slowpath state", then call
  __ticket_unlock_kick() to kick the next vCPU in line awake.  If the
  lock is no longer contended, it also clears the slowpath flag.

The "slowpath state" is stored in the LSB of the within the lock tail
ticket.  This has the effect of reducing the max number of CPUs by
half (so, a "small ticket" can deal with 128 CPUs, and "large ticket"
32768).
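
As an illustration only (not code from this series), the following
minimal sketch shows the "small ticket" (8-bit) layout with the slowpath
flag in the tail LSB, and why each ticket now costs two values:

	/* Illustrative sketch only -- not the kernel code from this series. */
	#include <stdint.h>
	#include <stdio.h>

	typedef uint8_t __ticket_t;		/* "small ticket": 8-bit head/tail */

	#define TICKET_SLOWPATH_FLAG	((__ticket_t)1)	/* stored in the tail LSB */
	#define TICKET_LOCK_INC		((__ticket_t)2)	/* tickets advance by 2 */

	struct __raw_tickets {
		__ticket_t head;	/* ticket currently being served */
		__ticket_t tail;	/* next free ticket, plus the flag bit */
	};

	int main(void)
	{
		/* A contended lock whose tail has the slowpath flag set. */
		struct __raw_tickets t = { .head = 4, .tail = 8 | TICKET_SLOWPATH_FLAG };
		__ticket_t my = t.tail & ~TICKET_SLOWPATH_FLAG;	/* mask the flag off */

		/* With the flag masked off, the usual head == tail test still works. */
		printf("lock %s ours yet\n", t.head == my ? "is" : "is not");

		/* Each ticket consumes TICKET_LOCK_INC values, so an 8-bit field
		 * distinguishes 256 / 2 = 128 CPUs instead of 256. */
		printf("max CPUs with 8-bit tickets: %d\n", 256 / TICKET_LOCK_INC);
		return 0;
	}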

For KVM, one hypercall is introduced in the hypervisor that allows a
vCPU to kick another vCPU out of halt state.  The blocking of a vCPU is
done using halt() in the (lock_spinning) slowpath.
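
The sketch below gives a rough idea of the shape of the KVM guest-side
hooks.  It is illustrative kernel-style pseudocode, not the code this
series adds to arch/x86/kernel/kvm.c: the real implementation also
records per-CPU "which lock/ticket am I waiting for" state so the
unlocker can find the right vCPU, deals with interrupts, and closes the
races around halt(); the hypercall name is assumed to match the one the
series introduces.

	/* Illustrative sketch only -- heavily simplified, not the patch. */
	static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
	{
		/* ...publish (lock, want) in per-CPU state so the unlocker can find us... */

		/* Last chance: did the lock become ours while we set that up? */
		if (ACCESS_ONCE(lock->tickets.head) == want)
			return;

		/* Block until the lock holder kicks us (or an interrupt arrives). */
		halt();
	}

	static void kvm_kick_cpu(int cpu)
	{
		int apicid = per_cpu(x86_cpu_to_apicid, cpu);

		/* Ask the host to wake the halted vCPU with this APIC id. */
		kvm_hypercall2(KVM_HC_KICK_CPU, 0, apicid);
	}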

Overall, it results in a large reduction in code, it makes the native
and virtualized cases closer, and it removes a layer of indirection
around all the spinlock functions.

The fast path (taking an uncontended lock which isn't in "slowpath"
state) is optimal, identical to the non-paravirtualized case.

The inner part of ticket lock code becomes:
	inc = xadd(&lock->tickets, inc);
	inc.tail &= ~TICKET_SLOWPATH_FLAG;

	if (likely(inc.head == inc.tail))
		goto out;
	for (;;) {
		unsigned count = SPIN_THRESHOLD;
		do {
			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
				goto out;
			cpu_relax();
		} while (--count);
		__ticket_lock_spinning(lock, inc.tail);
	}
out:	barrier();
which results in:
	push   %rbp
	mov    %rsp,%rbp

	mov    $0x200,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f	# Slowpath if lock in contention

	pop    %rbp
	retq   

	### SLOWPATH START
1:	and    $-2,%edx
	movzbl %dl,%esi

2:	mov    $0x800,%eax
	jmp    4f

3:	pause  
	sub    $0x1,%eax
	je     5f

4:	movzbl (%rdi),%ecx
	cmp    %cl,%dl
	jne    3b

	pop    %rbp
	retq   

5:	callq  *__ticket_lock_spinning
	jmp    2b
	### SLOWPATH END

With CONFIG_PARAVIRT_SPINLOCKS=n, the code changes only slightly: the
fastpath case is straight through (taking the lock without contention),
and the spin loop is out of line:

	push   %rbp
	mov    %rsp,%rbp

	mov    $0x100,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f

	pop    %rbp
	retq   

	### SLOWPATH START
1:	pause  
	movzbl (%rdi),%eax
	cmp    %dl,%al
	jne    1b

	pop    %rbp
	retq   
	### SLOWPATH END

The unlock code is complicated by the need to both add to the lock's
"head" and fetch the slowpath flag from "tail".  This version of the
patch uses a locked add to do this, followed by a test to see if the
slowpath flag is set.  The lock prefix acts as a full memory barrier, so
we can be sure that other CPUs will have seen the unlock before we read
the flag (without the barrier, the read could be satisfied while the
unlock store is still sitting in the store buffer, which could result in
a deadlock).
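
To make the ordering requirement concrete, this is the lost-wakeup
interleaving (illustrative, not code from the patch) that the
lock-prefixed add rules out:

	unlocker (CPU A)                      waiter (CPU B)
	----------------                      --------------
	head += TICKET_LOCK_INC
	  (still sitting in A's store buffer)
	read tail: flag clear, so no
	kick is sent
	                                      set slowpath flag in tail
	                                      re-check head: still sees the
	                                      old value (A's store is not
	                                      visible yet), so halt()

Nobody ever kicks B awake.  Making the head update a locked (full
barrier) add forces it to become visible before A reads the flag, which
closes this window.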

This is all unnecessary complication if you're not using PV ticket
locks, so the code also uses the jump-label machinery to fall back to the
standard "add"-based unlock in the non-PV case.

	if (TICKET_SLOWPATH_FLAG &&
	    static_key_false(&paravirt_ticketlocks_enabled)) {
		arch_spinlock_t prev;
		prev = *lock;
		add_smp(&lock->tickets.head, TICKET_LOCK_INC);

		/* add_smp() is a full mb() */
		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
			__ticket_unlock_slowpath(lock, prev);
	} else
		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
which generates:
	push   %rbp
	mov    %rsp,%rbp

	nop5	# replaced by 5-byte jmp 2f when PV enabled

	# non-PV unlock
	addb   $0x2,(%rdi)

1:	pop    %rbp
	retq   

### PV unlock ###
2:	movzwl (%rdi),%esi	# Fetch prev

	lock addb $0x2,(%rdi)	# Do unlock

	testb  $0x1,0x1(%rdi)	# Test flag
	je     1b		# Finished if not set

### Slow path ###
	add    $2,%sil		# Add "head" in old lock state
	mov    %esi,%edx
	and    $0xfe,%dh	# clear slowflag for comparison
	movzbl %dh,%eax
	cmp    %dl,%al		# If head == tail (uncontended)
	je     4f		# clear slowpath flag

	# Kick next CPU waiting for lock
3:	movzbl %sil,%esi
	callq  *pv_lock_ops.kick

	pop    %rbp
	retq   

	# Lock no longer contended - clear slowflag
4:	mov    %esi,%eax
	lock cmpxchg %dx,(%rdi)	# cmpxchg to clear flag
	cmp    %si,%ax
	jne    3b		# If clear failed, then kick

	pop    %rbp
	retq   

So when not using PV ticketlocks, the unlock sequence just has a
5-byte nop added to it, and the PV case is reasonably straightforward
aside from requiring a "lock add".


Results:
=======
base = 3.10-rc2 kernel
patched = base + this series

The test was on a 32-core machine (model: Intel(R) Xeon(R) CPU X7560)
with HT disabled, running 32-vCPU KVM guests with 8GB RAM.

+-----------+-----------+-----------+------------+-----------+
               ebizzy (records/sec) higher is better
+-----------+-----------+-----------+------------+-----------+
    base        stdev        patched    stdev        %improvement
+-----------+-----------+-----------+------------+-----------+
1x  5574.9000   237.4997    5618.0000    94.0366     0.77311
2x  2741.5000   561.3090    3332.0000   102.4738    21.53930
3x  2146.2500   216.7718    2302.3333    76.3870     7.27237
4x  1663.0000   141.9235    1753.7500    83.5220     5.45701
+-----------+-----------+-----------+------------+-----------+
+-----------+-----------+-----------+------------+-----------+
              dbench  (Throughput) higher is better
+-----------+-----------+-----------+------------+-----------+
    base        stdev        patched    stdev        %improvement
+-----------+-----------+-----------+------------+-----------+
1x 14111.5600   754.4525   14645.9900   114.3087     3.78718
2x  2481.6270    71.2665    2667.1280    73.8193     7.47498
3x  1510.2483    31.8634    1503.8792    36.0777    -0.42173
4x  1029.4875    16.9166    1039.7069    43.8840     0.99267
+-----------+-----------+-----------+------------+-----------+

Your suggestions and comments are welcome.

github link: https://github.com/ktraghavendra/linux/tree/pvspinlock_v9


Please note that we set SPIN_THRESHOLD = 32k with this series, which
eats up a little bit of overcommit performance on PLE machines and
overall performance on non-PLE machines.

The older series was tested by Attilio for the Xen implementation [1].

Jeremy Fitzhardinge (9):
 x86/spinlock: Replace pv spinlocks with pv ticketlocks
 x86/ticketlock: Collapse a layer of functions
 xen: Defer spinlock setup until boot CPU setup
 xen/pvticketlock: Xen implementation for PV ticket locks
 xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks
 x86/pvticketlock: Use callee-save for lock_spinning
 x86/pvticketlock: When paravirtualizing ticket locks, increment by 2
 x86/ticketlock: Add slowpath logic
 xen/pvticketlock: Allow interrupts to be enabled while blocking

Andrew Jones (1):
 Split jumplabel ratelimit

Stefano Stabellini (1):
 xen: Enable PV ticketlocks on HVM Xen

Srivatsa Vaddagiri (3):
 kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks
 kvm guest : Add configuration support to enable debug information for KVM Guests
 kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor

Raghavendra K T (5):
 x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks
 kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration
 Simplify kvm_for_each_vcpu with kvm_irq_delivery_to_apic
 Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
 Add directed yield in vcpu block path

---
The V8 link below has links to the previous patch series and the whole history.

V8 PV Ticketspinlock for Xen/KVM link:
[1] https://lkml.org/lkml/2012/5/2/119

 Documentation/virtual/kvm/cpuid.txt      |   4 +
 Documentation/virtual/kvm/hypercalls.txt |  13 ++
 arch/ia64/include/asm/kvm_host.h         |   5 +
 arch/powerpc/include/asm/kvm_host.h      |   5 +
 arch/s390/include/asm/kvm_host.h         |   5 +
 arch/x86/Kconfig                         |  10 +
 arch/x86/include/asm/kvm_host.h          |   7 +-
 arch/x86/include/asm/kvm_para.h          |  14 +-
 arch/x86/include/asm/paravirt.h          |  32 +--
 arch/x86/include/asm/paravirt_types.h    |  10 +-
 arch/x86/include/asm/spinlock.h          | 128 +++++++----
 arch/x86/include/asm/spinlock_types.h    |  16 +-
 arch/x86/include/uapi/asm/kvm_para.h     |   1 +
 arch/x86/kernel/kvm.c                    | 256 +++++++++++++++++++++
 arch/x86/kernel/paravirt-spinlocks.c     |  18 +-
 arch/x86/kvm/cpuid.c                     |   3 +-
 arch/x86/kvm/lapic.c                     |   5 +-
 arch/x86/kvm/x86.c                       |  39 +++-
 arch/x86/xen/smp.c                       |   3 +-
 arch/x86/xen/spinlock.c                  | 384 ++++++++++---------------------
 include/linux/jump_label.h               |  26 +--
 include/linux/jump_label_ratelimit.h     |  34 +++
 include/linux/kvm_host.h                 |   2 +-
 include/linux/perf_event.h               |   1 +
 include/uapi/linux/kvm_para.h            |   1 +
 kernel/jump_label.c                      |   1 +
 virt/kvm/kvm_main.c                      |   6 +-
 27 files changed, 645 insertions(+), 384 deletions(-)


* [PATCH RFC V9 1/19]  x86/spinlock: Replace pv spinlocks with pv ticketlocks
From: Raghavendra K T @ 2013-06-01 19:21 UTC
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

x86/spinlock: Replace pv spinlocks with pv ticketlocks

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow path (long spin on lock, unlocking
a contended lock).

Ticket locks have a number of nice properties, but they also have some
surprising behaviours in virtual environments.  They enforce a strict
FIFO ordering on cpus trying to take a lock; however, if the hypervisor
scheduler does not schedule the cpus in the correct order, the system can
waste a huge amount of time spinning until the next cpu can take the lock.

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

To address this, we add two hooks:
 - lock_spinning, called from __ticket_spin_lock() after the cpu has been
   spinning on the lock for a significant number of iterations but has
   failed to take the lock (presumably because the cpu holding the lock
   has been descheduled).  The lock_spinning pvop is expected to block
   the cpu until it has been kicked by the current lock holder.
 - unlock_kick, called from __ticket_spin_unlock() when releasing a
   contended lock (there are more cpus with tail tickets); it looks to
   see if the next cpu is blocked and wakes it if so.

When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub
functions causes all the extra code to go away.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Attilio Rao <attilio.rao@citrix.com>
[ Raghavendra: Changed SPIN_THRESHOLD ]
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/include/asm/paravirt.h       |   32 ++++----------------
 arch/x86/include/asm/paravirt_types.h |   10 ++----
 arch/x86/include/asm/spinlock.h       |   53 +++++++++++++++++++++++++++------
 arch/x86/include/asm/spinlock_types.h |    4 --
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +--------
 arch/x86/xen/spinlock.c               |    8 ++++-
 6 files changed, 61 insertions(+), 61 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index cfdc9ee..040e72d 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -712,36 +712,16 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
 
-static inline int arch_spin_is_locked(struct arch_spinlock *lock)
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
+							__ticket_t ticket)
 {
-	return PVOP_CALL1(int, pv_lock_ops.spin_is_locked, lock);
+	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static inline int arch_spin_is_contended(struct arch_spinlock *lock)
+static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
+							__ticket_t ticket)
 {
-	return PVOP_CALL1(int, pv_lock_ops.spin_is_contended, lock);
-}
-#define arch_spin_is_contended	arch_spin_is_contended
-
-static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
-{
-	PVOP_VCALL1(pv_lock_ops.spin_lock, lock);
-}
-
-static __always_inline void arch_spin_lock_flags(struct arch_spinlock *lock,
-						  unsigned long flags)
-{
-	PVOP_VCALL2(pv_lock_ops.spin_lock_flags, lock, flags);
-}
-
-static __always_inline int arch_spin_trylock(struct arch_spinlock *lock)
-{
-	return PVOP_CALL1(int, pv_lock_ops.spin_trylock, lock);
-}
-
-static __always_inline void arch_spin_unlock(struct arch_spinlock *lock)
-{
-	PVOP_VCALL1(pv_lock_ops.spin_unlock, lock);
+	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
 
 #endif
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 0db1fca..d5deb6d 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -327,13 +327,11 @@ struct pv_mmu_ops {
 };
 
 struct arch_spinlock;
+#include <asm/spinlock_types.h>
+
 struct pv_lock_ops {
-	int (*spin_is_locked)(struct arch_spinlock *lock);
-	int (*spin_is_contended)(struct arch_spinlock *lock);
-	void (*spin_lock)(struct arch_spinlock *lock);
-	void (*spin_lock_flags)(struct arch_spinlock *lock, unsigned long flags);
-	int (*spin_trylock)(struct arch_spinlock *lock);
-	void (*spin_unlock)(struct arch_spinlock *lock);
+	void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
+	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 33692ea..4d54244 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -34,6 +34,35 @@
 # define UNLOCK_LOCK_PREFIX
 #endif
 
+/* How long a lock should spin before we consider blocking */
+#define SPIN_THRESHOLD	(1 << 15)
+
+#ifndef CONFIG_PARAVIRT_SPINLOCKS
+
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
+							__ticket_t ticket)
+{
+}
+
+static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
+							 __ticket_t ticket)
+{
+}
+
+#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
+
+
+/*
+ * If a spinlock has someone waiting on it, then kick the appropriate
+ * waiting cpu.
+ */
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
+							__ticket_t next)
+{
+	if (unlikely(lock->tickets.tail != next))
+		____ticket_unlock_kick(lock, next);
+}
+
 /*
  * Ticket locks are conceptually two parts, one indicating the current head of
  * the queue, and the other indicating the current tail. The lock is acquired
@@ -47,19 +76,24 @@
  * in the high part, because a wide xadd increment of the low part would carry
  * up and contaminate the high part.
  */
-static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
+static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 {
 	register struct __raw_tickets inc = { .tail = 1 };
 
 	inc = xadd(&lock->tickets, inc);
 
 	for (;;) {
-		if (inc.head == inc.tail)
-			break;
-		cpu_relax();
-		inc.head = ACCESS_ONCE(lock->tickets.head);
+		unsigned count = SPIN_THRESHOLD;
+
+		do {
+			if (inc.head == inc.tail)
+				goto out;
+			cpu_relax();
+			inc.head = ACCESS_ONCE(lock->tickets.head);
+		} while (--count);
+		__ticket_lock_spinning(lock, inc.tail);
 	}
-	barrier();		/* make sure nothing creeps before the lock is taken */
+out:	barrier();	/* make sure nothing creeps before the lock is taken */
 }
 
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
@@ -78,7 +112,10 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 
 static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 {
+	__ticket_t next = lock->tickets.head + 1;
+
 	__add(&lock->tickets.head, 1, UNLOCK_LOCK_PREFIX);
+	__ticket_unlock_kick(lock, next);
 }
 
 static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
@@ -95,8 +132,6 @@ static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
 	return (__ticket_t)(tmp.tail - tmp.head) > 1;
 }
 
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
-
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
 	return __ticket_spin_is_locked(lock);
@@ -129,8 +164,6 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 	arch_spin_lock(lock);
 }
 
-#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
-
 static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
 	while (arch_spin_is_locked(lock))
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index ad0ad07..83fd3c7 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -1,10 +1,6 @@
 #ifndef _ASM_X86_SPINLOCK_TYPES_H
 #define _ASM_X86_SPINLOCK_TYPES_H
 
-#ifndef __LINUX_SPINLOCK_TYPES_H
-# error "please don't include this file directly"
-#endif
-
 #include <linux/types.h>
 
 #if (CONFIG_NR_CPUS < 256)
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 676b8c7..c2e010e 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -7,21 +7,10 @@
 
 #include <asm/paravirt.h>
 
-static inline void
-default_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
-{
-	arch_spin_lock(lock);
-}
-
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-	.spin_is_locked = __ticket_spin_is_locked,
-	.spin_is_contended = __ticket_spin_is_contended,
-
-	.spin_lock = __ticket_spin_lock,
-	.spin_lock_flags = default_spin_lock_flags,
-	.spin_trylock = __ticket_spin_trylock,
-	.spin_unlock = __ticket_spin_unlock,
+	.lock_spinning = paravirt_nop,
+	.unlock_kick = paravirt_nop,
 #endif
 };
 EXPORT_SYMBOL(pv_lock_ops);
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 3002ec1..d6481a9 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -138,6 +138,9 @@ struct xen_spinlock {
 	xen_spinners_t spinners;	/* count of waiting cpus */
 };
 
+static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+
+#if 0
 static int xen_spin_is_locked(struct arch_spinlock *lock)
 {
 	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
@@ -165,7 +168,6 @@ static int xen_spin_trylock(struct arch_spinlock *lock)
 	return old == 0;
 }
 
-static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
 static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
 
 /*
@@ -352,6 +354,7 @@ static void xen_spin_unlock(struct arch_spinlock *lock)
 	if (unlikely(xl->spinners))
 		xen_spin_unlock_slow(xl);
 }
+#endif
 
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
@@ -413,13 +416,14 @@ void __init xen_init_spinlocks(void)
 		return;
 
 	BUILD_BUG_ON(sizeof(struct xen_spinlock) > sizeof(arch_spinlock_t));
-
+#if 0
 	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
 	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
 	pv_lock_ops.spin_lock = xen_spin_lock;
 	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
 	pv_lock_ops.spin_trylock = xen_spin_trylock;
 	pv_lock_ops.spin_unlock = xen_spin_unlock;
+#endif
 }
 
 #ifdef CONFIG_XEN_DEBUG_FS


* [PATCH RFC V9 2/19]  x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks
From: Raghavendra K T @ 2013-06-01 19:22 UTC
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks

From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>

The code size expands somewhat, and it's better to just call
a function rather than inline it.

Thanks to Jeremy for the original version of the ARCH_NOINLINE_SPIN_UNLOCK
config patch, which has been simplified here.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/Kconfig |    1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 685692c..80fcc4b 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -621,6 +621,7 @@ config PARAVIRT_DEBUG
 config PARAVIRT_SPINLOCKS
 	bool "Paravirtualization layer for spinlocks"
 	depends on PARAVIRT && SMP
+	select UNINLINE_SPIN_UNLOCK
 	---help---
 	  Paravirtualized spinlocks allow a pvops backend to replace the
 	  spinlock implementation with something virtualization-friendly


^ permalink raw reply related	[flat|nested] 192+ messages in thread
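
The size/speed tradeoff in the changelog is easy to see in miniature. Below is a
user-space sketch (illustrative only; the toy lock and names are mine, not the
kernel's) contrasting an unlock whose body lands in every call site with an
out-of-line variant that costs each site just a call instruction, which is
roughly what selecting UNINLINE_SPIN_UNLOCK arranges for spin_unlock:

/*
 * Illustrative sketch only (user-space toy, names are mine, not the
 * kernel's): the paravirt-aware unlock grows a "does anyone need a
 * kick?" check, so inlining it copies that check into every call site,
 * while the out-of-line variant costs each site a single call.
 */
#include <stdio.h>

struct toy_lock { unsigned char head, tail; };

/* Unlock body with the extra check; inlined, it is duplicated at
 * every caller. */
static inline void ticket_unlock_inline(struct toy_lock *l)
{
        l->head++;
        if (l->head != l->tail)                 /* a waiter is still queued */
                printf("would kick ticket %u\n", (unsigned)l->head);
}

/* Out-of-line variant: one copy in the image, callers just "call". */
void ticket_unlock_call(struct toy_lock *l)
{
        ticket_unlock_inline(l);
}

int main(void)
{
        struct toy_lock l = { .head = 0, .tail = 2 };   /* one waiter queued */

        ticket_unlock_inline(&l);       /* body expanded at this call site */
        ticket_unlock_call(&l);         /* single call emitted at this site */
        return 0;
}

With the many unlock sites in a kernel image, that per-site duplication is what
the select of UNINLINE_SPIN_UNLOCK avoids.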

* [PATCH RFC V9 3/19]  x86/ticketlock: Collapse a layer of functions
  2013-06-01 19:21 ` Raghavendra K T
@ 2013-06-01 19:22   ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:22 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

x86/ticketlock: Collapse a layer of functions

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Now that the paravirtualization layer doesn't exist at the spinlock
level any more, we can collapse the __ticket_ functions into the arch_
functions.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Tested-by: Attilio Rao <attilio.rao@citrix.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/include/asm/spinlock.h |   35 +++++------------------------------
 1 file changed, 5 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 4d54244..7442410 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -76,7 +76,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
  * in the high part, because a wide xadd increment of the low part would carry
  * up and contaminate the high part.
  */
-static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
+static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
 	register struct __raw_tickets inc = { .tail = 1 };
 
@@ -96,7 +96,7 @@ static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 out:	barrier();	/* make sure nothing creeps before the lock is taken */
 }
 
-static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
+static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 {
 	arch_spinlock_t old, new;
 
@@ -110,7 +110,7 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 	return cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) == old.head_tail;
 }
 
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
+static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 	__ticket_t next = lock->tickets.head + 1;
 
@@ -118,46 +118,21 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 	__ticket_unlock_kick(lock, next);
 }
 
-static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
+static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
 	return tmp.tail != tmp.head;
 }
 
-static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
+static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
 	return (__ticket_t)(tmp.tail - tmp.head) > 1;
 }
-
-static inline int arch_spin_is_locked(arch_spinlock_t *lock)
-{
-	return __ticket_spin_is_locked(lock);
-}
-
-static inline int arch_spin_is_contended(arch_spinlock_t *lock)
-{
-	return __ticket_spin_is_contended(lock);
-}
 #define arch_spin_is_contended	arch_spin_is_contended
 
-static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-	__ticket_spin_lock(lock);
-}
-
-static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
-{
-	return __ticket_spin_trylock(lock);
-}
-
-static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-	__ticket_spin_unlock(lock);
-}
-
 static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 						  unsigned long flags)
 {


^ permalink raw reply related	[flat|nested] 192+ messages in thread
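
To see the collapsed shape in one place, here is a small user-space model
(assumptions: GCC/Clang __atomic builtins; the toy_* names are mine and this is
a sketch of the idea, not the kernel code). The lock/unlock routines are the
ticket fast path themselves, and the two paravirt callouts shrink to hooks that
are no-ops on native:

#include <stdio.h>
#include <stdint.h>

#define SPIN_THRESHOLD 1000

struct toy_spinlock { uint8_t head, tail; };

/* Stand-ins for the pvops hooks; no-ops on "native". */
static void toy_lock_spinning(struct toy_spinlock *l, uint8_t want) { (void)l; (void)want; }
static void toy_unlock_kick(struct toy_spinlock *l, uint8_t next)   { (void)l; (void)next; }

static void toy_spin_lock(struct toy_spinlock *l)
{
        /* xadd on the tail hands out our ticket. */
        uint8_t ticket = __atomic_fetch_add(&l->tail, 1, __ATOMIC_ACQUIRE);

        for (;;) {
                int loops = SPIN_THRESHOLD;

                while (loops--) {
                        if (__atomic_load_n(&l->head, __ATOMIC_ACQUIRE) == ticket)
                                return;         /* fast path: our turn */
                }
                /* Slow path: let a backend block us instead of burning CPU. */
                toy_lock_spinning(l, ticket);
        }
}

static void toy_spin_unlock(struct toy_spinlock *l)
{
        uint8_t next = __atomic_add_fetch(&l->head, 1, __ATOMIC_RELEASE);

        toy_unlock_kick(l, next);       /* wake whoever holds the next ticket */
}

int main(void)
{
        struct toy_spinlock l = { 0, 0 };

        toy_spin_lock(&l);
        printf("locked:   head=%u tail=%u\n", (unsigned)l.head, (unsigned)l.tail);
        toy_spin_unlock(&l);
        printf("unlocked: head=%u tail=%u\n", (unsigned)l.head, (unsigned)l.tail);
        return 0;
}

A second thread calling toy_spin_lock() while the lock is held would spin on
head and eventually fall into toy_lock_spinning(); that hook is the slot the Xen
and KVM backends fill later in the series.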

* [PATCH RFC V9 4/19]  xen: Defer spinlock setup until boot CPU setup
  2013-06-01 19:21 ` Raghavendra K T
  (?)
@ 2013-06-01 19:22   ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:22 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

xen: Defer spinlock setup until boot CPU setup

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

There's no need to do it at very early init, and doing it there
makes it impossible to use the jump_label machinery.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/xen/smp.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 8ff3799..dcdc91c 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -246,6 +246,7 @@ static void __init xen_smp_prepare_boot_cpu(void)
 
 	xen_filter_cpu_maps();
 	xen_setup_vcpu_info_placement();
+	xen_init_spinlocks();
 }
 
 static void __init xen_smp_prepare_cpus(unsigned int max_cpus)
@@ -647,7 +648,6 @@ void __init xen_smp_init(void)
 {
 	smp_ops = xen_smp_ops;
 	xen_fill_possible_map();
-	xen_init_spinlocks();
 }
 
 static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)


^ permalink raw reply related	[flat|nested] 192+ messages in thread
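
As a toy illustration of the ordering constraint (purely my own model; the
boolean below merely stands in for the jump_label machinery becoming usable,
and the function names are invented): a backend that registers before the
patching facility exists cannot use it, so the registration has to move to a
hook that runs late enough, which is what moving the call into
xen_smp_prepare_boot_cpu() achieves.

#include <stdio.h>
#include <stdbool.h>

static bool patching_ready;        /* stand-in: jump_label machinery is usable */
static bool pv_slowpath_enabled;   /* stand-in for a static key */

static void enable_pv_slowpath(void)
{
        if (!patching_ready) {
                printf("too early: cannot patch yet\n");
                return;
        }
        pv_slowpath_enabled = true;
        printf("pv slowpath key enabled\n");
}

static void very_early_init(void)  { enable_pv_slowpath(); }    /* old call site */
static void boot_cpu_setup(void)   { enable_pv_slowpath(); }    /* new call site */

int main(void)
{
        very_early_init();         /* runs before the patching facility is up */
        patching_ready = true;     /* ...the facility comes up later... */
        boot_cpu_setup();          /* the deferred setup can now succeed */
        printf("key state: %d\n", pv_slowpath_enabled);
        return 0;
}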

* [PATCH RFC V9 5/19]  xen/pvticketlock: Xen implementation for PV ticket locks
  2013-06-01 19:21 ` Raghavendra K T
@ 2013-06-01 19:23   ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:23 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

xen/pvticketlock: Xen implementation for PV ticket locks

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Replace the old Xen implementation of PV spinlocks with an implementation
of xen_lock_spinning and xen_unlock_kick.

xen_lock_spinning simply registers the cpu in its entry in lock_waiting,
adds itself to the waiting_cpus set, and blocks on an event channel
until the channel becomes pending.

xen_unlock_kick searches the cpus in waiting_cpus looking for the one
which wants this lock with the next ticket, if any.  If found,
it kicks it by making its event channel pending, which wakes it up.

We need to make sure interrupts are disabled while we're relying on the
contents of the per-cpu lock_waiting values, otherwise an interrupt
handler could come in, try to take some other lock, block, and overwrite
our values.

Raghu: use function + enum instead of macro, cmpxchg for zero status reset

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/xen/spinlock.c |  347 +++++++++++------------------------------------
 1 file changed, 78 insertions(+), 269 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index d6481a9..860e190 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -16,45 +16,44 @@
 #include "xen-ops.h"
 #include "debugfs.h"
 
-#ifdef CONFIG_XEN_DEBUG_FS
-static struct xen_spinlock_stats
-{
-	u64 taken;
-	u32 taken_slow;
-	u32 taken_slow_nested;
-	u32 taken_slow_pickup;
-	u32 taken_slow_spurious;
-	u32 taken_slow_irqenable;
+enum xen_contention_stat {
+	TAKEN_SLOW,
+	TAKEN_SLOW_PICKUP,
+	TAKEN_SLOW_SPURIOUS,
+	RELEASED_SLOW,
+	RELEASED_SLOW_KICKED,
+	NR_CONTENTION_STATS
+};
 
-	u64 released;
-	u32 released_slow;
-	u32 released_slow_kicked;
 
+#ifdef CONFIG_XEN_DEBUG_FS
 #define HISTO_BUCKETS	30
-	u32 histo_spin_total[HISTO_BUCKETS+1];
-	u32 histo_spin_spinning[HISTO_BUCKETS+1];
+static struct xen_spinlock_stats
+{
+	u32 contention_stats[NR_CONTENTION_STATS];
 	u32 histo_spin_blocked[HISTO_BUCKETS+1];
-
-	u64 time_total;
-	u64 time_spinning;
 	u64 time_blocked;
 } spinlock_stats;
 
 static u8 zero_stats;
 
-static unsigned lock_timeout = 1 << 10;
-#define TIMEOUT lock_timeout
-
 static inline void check_zero(void)
 {
-	if (unlikely(zero_stats)) {
-		memset(&spinlock_stats, 0, sizeof(spinlock_stats));
-		zero_stats = 0;
+	u8 ret;
+	u8 old = ACCESS_ONCE(zero_stats);
+	if (unlikely(old)) {
+		ret = cmpxchg(&zero_stats, old, 0);
+		/* This ensures only one fellow resets the stat */
+		if (ret == old)
+			memset(&spinlock_stats, 0, sizeof(spinlock_stats));
 	}
 }
 
-#define ADD_STATS(elem, val)			\
-	do { check_zero(); spinlock_stats.elem += (val); } while(0)
+static inline void add_stats(enum xen_contention_stat var, u32 val)
+{
+	check_zero();
+	spinlock_stats.contention_stats[var] += val;
+}
 
 static inline u64 spin_time_start(void)
 {
@@ -73,22 +72,6 @@ static void __spin_time_accum(u64 delta, u32 *array)
 		array[HISTO_BUCKETS]++;
 }
 
-static inline void spin_time_accum_spinning(u64 start)
-{
-	u32 delta = xen_clocksource_read() - start;
-
-	__spin_time_accum(delta, spinlock_stats.histo_spin_spinning);
-	spinlock_stats.time_spinning += delta;
-}
-
-static inline void spin_time_accum_total(u64 start)
-{
-	u32 delta = xen_clocksource_read() - start;
-
-	__spin_time_accum(delta, spinlock_stats.histo_spin_total);
-	spinlock_stats.time_total += delta;
-}
-
 static inline void spin_time_accum_blocked(u64 start)
 {
 	u32 delta = xen_clocksource_read() - start;
@@ -98,19 +81,15 @@ static inline void spin_time_accum_blocked(u64 start)
 }
 #else  /* !CONFIG_XEN_DEBUG_FS */
 #define TIMEOUT			(1 << 10)
-#define ADD_STATS(elem, val)	do { (void)(val); } while(0)
+static inline void add_stats(enum xen_contention_stat var, u32 val)
+{
+}
 
 static inline u64 spin_time_start(void)
 {
 	return 0;
 }
 
-static inline void spin_time_accum_total(u64 start)
-{
-}
-static inline void spin_time_accum_spinning(u64 start)
-{
-}
 static inline void spin_time_accum_blocked(u64 start)
 {
 }
@@ -133,229 +112,82 @@ typedef u16 xen_spinners_t;
 	asm(LOCK_PREFIX " decw %0" : "+m" ((xl)->spinners) : : "memory");
 #endif
 
-struct xen_spinlock {
-	unsigned char lock;		/* 0 -> free; 1 -> locked */
-	xen_spinners_t spinners;	/* count of waiting cpus */
+struct xen_lock_waiting {
+	struct arch_spinlock *lock;
+	__ticket_t want;
 };
 
 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
+static cpumask_t waiting_cpus;
 
-#if 0
-static int xen_spin_is_locked(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	return xl->lock != 0;
-}
-
-static int xen_spin_is_contended(struct arch_spinlock *lock)
+static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 {
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	/* Not strictly true; this is only the count of contended
-	   lock-takers entering the slow path. */
-	return xl->spinners != 0;
-}
-
-static int xen_spin_trylock(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	u8 old = 1;
-
-	asm("xchgb %b0,%1"
-	    : "+q" (old), "+m" (xl->lock) : : "memory");
-
-	return old == 0;
-}
-
-static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
-
-/*
- * Mark a cpu as interested in a lock.  Returns the CPU's previous
- * lock of interest, in case we got preempted by an interrupt.
- */
-static inline struct xen_spinlock *spinning_lock(struct xen_spinlock *xl)
-{
-	struct xen_spinlock *prev;
-
-	prev = __this_cpu_read(lock_spinners);
-	__this_cpu_write(lock_spinners, xl);
-
-	wmb();			/* set lock of interest before count */
-
-	inc_spinners(xl);
-
-	return prev;
-}
-
-/*
- * Mark a cpu as no longer interested in a lock.  Restores previous
- * lock of interest (NULL for none).
- */
-static inline void unspinning_lock(struct xen_spinlock *xl, struct xen_spinlock *prev)
-{
-	dec_spinners(xl);
-	wmb();			/* decrement count before restoring lock */
-	__this_cpu_write(lock_spinners, prev);
-}
-
-static noinline int xen_spin_lock_slow(struct arch_spinlock *lock, bool irq_enable)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	struct xen_spinlock *prev;
 	int irq = __this_cpu_read(lock_kicker_irq);
-	int ret;
+	struct xen_lock_waiting *w = &__get_cpu_var(lock_waiting);
+	int cpu = smp_processor_id();
 	u64 start;
+	unsigned long flags;
 
 	/* If kicker interrupts not initialized yet, just spin */
 	if (irq == -1)
-		return 0;
+		return;
 
 	start = spin_time_start();
 
-	/* announce we're spinning */
-	prev = spinning_lock(xl);
-
-	ADD_STATS(taken_slow, 1);
-	ADD_STATS(taken_slow_nested, prev != NULL);
-
-	do {
-		unsigned long flags;
-
-		/* clear pending */
-		xen_clear_irq_pending(irq);
-
-		/* check again make sure it didn't become free while
-		   we weren't looking  */
-		ret = xen_spin_trylock(lock);
-		if (ret) {
-			ADD_STATS(taken_slow_pickup, 1);
-
-			/*
-			 * If we interrupted another spinlock while it
-			 * was blocking, make sure it doesn't block
-			 * without rechecking the lock.
-			 */
-			if (prev != NULL)
-				xen_set_irq_pending(irq);
-			goto out;
-		}
+	/*
+	 * Make sure an interrupt handler can't upset things in a
+	 * partially setup state.
+	 */
+	local_irq_save(flags);
 
-		flags = arch_local_save_flags();
-		if (irq_enable) {
-			ADD_STATS(taken_slow_irqenable, 1);
-			raw_local_irq_enable();
-		}
+	w->want = want;
+	smp_wmb();
+	w->lock = lock;
 
-		/*
-		 * Block until irq becomes pending.  If we're
-		 * interrupted at this point (after the trylock but
-		 * before entering the block), then the nested lock
-		 * handler guarantees that the irq will be left
-		 * pending if there's any chance the lock became free;
-		 * xen_poll_irq() returns immediately if the irq is
-		 * pending.
-		 */
-		xen_poll_irq(irq);
+	/* This uses set_bit, which is atomic and therefore a barrier */
+	cpumask_set_cpu(cpu, &waiting_cpus);
+	add_stats(TAKEN_SLOW, 1);
 
-		raw_local_irq_restore(flags);
+	/* clear pending */
+	xen_clear_irq_pending(irq);
 
-		ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
-	} while (!xen_test_irq_pending(irq)); /* check for spurious wakeups */
+	/* Only check lock once pending cleared */
+	barrier();
 
+	/* check again make sure it didn't become free while
+	   we weren't looking  */
+	if (ACCESS_ONCE(lock->tickets.head) == want) {
+		add_stats(TAKEN_SLOW_PICKUP, 1);
+		goto out;
+	}
+	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
+	xen_poll_irq(irq);
+	add_stats(TAKEN_SLOW_SPURIOUS, !xen_test_irq_pending(irq));
 	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
-
 out:
-	unspinning_lock(xl, prev);
+	cpumask_clear_cpu(cpu, &waiting_cpus);
+	w->lock = NULL;
+	local_irq_restore(flags);
 	spin_time_accum_blocked(start);
-
-	return ret;
 }
 
-static inline void __xen_spin_lock(struct arch_spinlock *lock, bool irq_enable)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	unsigned timeout;
-	u8 oldval;
-	u64 start_spin;
-
-	ADD_STATS(taken, 1);
-
-	start_spin = spin_time_start();
-
-	do {
-		u64 start_spin_fast = spin_time_start();
-
-		timeout = TIMEOUT;
-
-		asm("1: xchgb %1,%0\n"
-		    "   testb %1,%1\n"
-		    "   jz 3f\n"
-		    "2: rep;nop\n"
-		    "   cmpb $0,%0\n"
-		    "   je 1b\n"
-		    "   dec %2\n"
-		    "   jnz 2b\n"
-		    "3:\n"
-		    : "+m" (xl->lock), "=q" (oldval), "+r" (timeout)
-		    : "1" (1)
-		    : "memory");
-
-		spin_time_accum_spinning(start_spin_fast);
-
-	} while (unlikely(oldval != 0 &&
-			  (TIMEOUT == ~0 || !xen_spin_lock_slow(lock, irq_enable))));
-
-	spin_time_accum_total(start_spin);
-}
-
-static void xen_spin_lock(struct arch_spinlock *lock)
-{
-	__xen_spin_lock(lock, false);
-}
-
-static void xen_spin_lock_flags(struct arch_spinlock *lock, unsigned long flags)
-{
-	__xen_spin_lock(lock, !raw_irqs_disabled_flags(flags));
-}
-
-static noinline void xen_spin_unlock_slow(struct xen_spinlock *xl)
+static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
 	int cpu;
 
-	ADD_STATS(released_slow, 1);
+	add_stats(RELEASED_SLOW, 1);
+
+	for_each_cpu(cpu, &waiting_cpus) {
+		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
 
-	for_each_online_cpu(cpu) {
-		/* XXX should mix up next cpu selection */
-		if (per_cpu(lock_spinners, cpu) == xl) {
-			ADD_STATS(released_slow_kicked, 1);
+		if (w->lock == lock && w->want == next) {
+			add_stats(RELEASED_SLOW_KICKED, 1);
 			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
 		}
 	}
 }
 
-static void xen_spin_unlock(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	ADD_STATS(released, 1);
-
-	smp_wmb();		/* make sure no writes get moved after unlock */
-	xl->lock = 0;		/* release lock */
-
-	/*
-	 * Make sure unlock happens before checking for waiting
-	 * spinners.  We need a strong barrier to enforce the
-	 * write-read ordering to different memory locations, as the
-	 * CPU makes no implied guarantees about their ordering.
-	 */
-	mb();
-
-	if (unlikely(xl->spinners))
-		xen_spin_unlock_slow(xl);
-}
-#endif
-
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
 	BUG();
@@ -415,15 +247,8 @@ void __init xen_init_spinlocks(void)
 	if (xen_hvm_domain())
 		return;
 
-	BUILD_BUG_ON(sizeof(struct xen_spinlock) > sizeof(arch_spinlock_t));
-#if 0
-	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
-	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
-	pv_lock_ops.spin_lock = xen_spin_lock;
-	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
-	pv_lock_ops.spin_trylock = xen_spin_trylock;
-	pv_lock_ops.spin_unlock = xen_spin_unlock;
-#endif
+	pv_lock_ops.lock_spinning = xen_lock_spinning;
+	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
 #ifdef CONFIG_XEN_DEBUG_FS
@@ -441,37 +266,21 @@ static int __init xen_spinlock_debugfs(void)
 
 	debugfs_create_u8("zero_stats", 0644, d_spin_debug, &zero_stats);
 
-	debugfs_create_u32("timeout", 0644, d_spin_debug, &lock_timeout);
-
-	debugfs_create_u64("taken", 0444, d_spin_debug, &spinlock_stats.taken);
 	debugfs_create_u32("taken_slow", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow);
-	debugfs_create_u32("taken_slow_nested", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_nested);
+			   &spinlock_stats.contention_stats[TAKEN_SLOW]);
 	debugfs_create_u32("taken_slow_pickup", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_pickup);
+			   &spinlock_stats.contention_stats[TAKEN_SLOW_PICKUP]);
 	debugfs_create_u32("taken_slow_spurious", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_spurious);
-	debugfs_create_u32("taken_slow_irqenable", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_irqenable);
+			   &spinlock_stats.contention_stats[TAKEN_SLOW_SPURIOUS]);
 
-	debugfs_create_u64("released", 0444, d_spin_debug, &spinlock_stats.released);
 	debugfs_create_u32("released_slow", 0444, d_spin_debug,
-			   &spinlock_stats.released_slow);
+			   &spinlock_stats.contention_stats[RELEASED_SLOW]);
 	debugfs_create_u32("released_slow_kicked", 0444, d_spin_debug,
-			   &spinlock_stats.released_slow_kicked);
+			   &spinlock_stats.contention_stats[RELEASED_SLOW_KICKED]);
 
-	debugfs_create_u64("time_spinning", 0444, d_spin_debug,
-			   &spinlock_stats.time_spinning);
 	debugfs_create_u64("time_blocked", 0444, d_spin_debug,
 			   &spinlock_stats.time_blocked);
-	debugfs_create_u64("time_total", 0444, d_spin_debug,
-			   &spinlock_stats.time_total);
 
-	debugfs_create_u32_array("histo_total", 0444, d_spin_debug,
-				spinlock_stats.histo_spin_total, HISTO_BUCKETS + 1);
-	debugfs_create_u32_array("histo_spinning", 0444, d_spin_debug,
-				spinlock_stats.histo_spin_spinning, HISTO_BUCKETS + 1);
 	debugfs_create_u32_array("histo_blocked", 0444, d_spin_debug,
 				spinlock_stats.histo_spin_blocked, HISTO_BUCKETS + 1);
 


^ permalink raw reply related	[flat|nested] 192+ messages in thread
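
The handshake in the changelog is compact, so here is a single-threaded sketch
of it (my own simplified model, not the Xen code; the per-cpu slots are an
array, the cpumask is a bitmask, and the event-channel kick is a printf): each
"cpu" publishes (lock, want) and sets its bit in waiting_cpus before blocking,
and the unlocker scans the slots and kicks only the cpu whose ticket matches
the new head:

#include <stdio.h>
#include <stddef.h>

#define NR_TOY_CPUS 4

struct toy_lock { unsigned char head, tail; };

struct toy_waiting {
        struct toy_lock *lock;      /* which lock this cpu blocks on */
        unsigned char want;         /* the ticket it is waiting for */
};

static struct toy_waiting waiting[NR_TOY_CPUS];
static unsigned int waiting_cpus;   /* bitmask standing in for cpumask_t */

/* Waiter side: publish what we want, then (in real code) block. */
static void toy_lock_spinning(int cpu, struct toy_lock *lock, unsigned char want)
{
        waiting[cpu].want = want;
        waiting[cpu].lock = lock;           /* lock written last, as in the patch */
        waiting_cpus |= 1u << cpu;
        printf("cpu%d blocks, waiting for ticket %u\n", cpu, (unsigned)want);
}

/* Kicker side: wake only the cpu holding the next ticket. */
static void toy_unlock_kick(struct toy_lock *lock, unsigned char next)
{
        int cpu;

        for (cpu = 0; cpu < NR_TOY_CPUS; cpu++) {
                if (!(waiting_cpus & (1u << cpu)))
                        continue;
                if (waiting[cpu].lock == lock && waiting[cpu].want == next) {
                        printf("kick cpu%d (ticket %u)\n", cpu, (unsigned)next);
                        waiting_cpus &= ~(1u << cpu);
                        waiting[cpu].lock = NULL;
                }
        }
}

int main(void)
{
        struct toy_lock l = { .head = 0, .tail = 3 };   /* tickets 1 and 2 are queued */

        toy_lock_spinning(1, &l, 1);
        toy_lock_spinning(2, &l, 2);

        l.head = 1;                     /* the ticket-0 holder unlocks */
        toy_unlock_kick(&l, l.head);    /* only cpu1 (want == 1) gets woken */
        return 0;
}

Writing want before lock in toy_lock_spinning mirrors the w->want / smp_wmb() /
w->lock ordering in the patch, so a kicker that observes a non-NULL lock pointer
is also guaranteed to see the matching ticket.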

* [PATCH RFC V9 5/19]  xen/pvticketlock: Xen implementation for PV ticket locks
@ 2013-06-01 19:23   ` Raghavendra K T
  0 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:23 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

xen/pvticketlock: Xen implementation for PV ticket locks

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Replace the old Xen implementation of PV spinlocks with an implementation
of xen_lock_spinning and xen_unlock_kick.

xen_lock_spinning simply registers the cpu in its entry in lock_waiting,
adds itself to the waiting_cpus set, and blocks on an event channel
until the channel becomes pending.

xen_unlock_kick searches the cpus in waiting_cpus looking for the one
which wants this lock with the next ticket, if any.  If found,
it kicks it by making its event channel pending, which wakes it up.

We need to make sure interrupts are disabled while we're relying on the
contents of the per-cpu lock_waiting values, otherwise an interrupt
handler could come in, try to take some other lock, block, and overwrite
our values.

Raghu: use function + enum instead of macro, cmpxchg for zero status reset

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/xen/spinlock.c |  347 +++++++++++------------------------------------
 1 file changed, 78 insertions(+), 269 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index d6481a9..860e190 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -16,45 +16,44 @@
 #include "xen-ops.h"
 #include "debugfs.h"
 
-#ifdef CONFIG_XEN_DEBUG_FS
-static struct xen_spinlock_stats
-{
-	u64 taken;
-	u32 taken_slow;
-	u32 taken_slow_nested;
-	u32 taken_slow_pickup;
-	u32 taken_slow_spurious;
-	u32 taken_slow_irqenable;
+enum xen_contention_stat {
+	TAKEN_SLOW,
+	TAKEN_SLOW_PICKUP,
+	TAKEN_SLOW_SPURIOUS,
+	RELEASED_SLOW,
+	RELEASED_SLOW_KICKED,
+	NR_CONTENTION_STATS
+};
 
-	u64 released;
-	u32 released_slow;
-	u32 released_slow_kicked;
 
+#ifdef CONFIG_XEN_DEBUG_FS
 #define HISTO_BUCKETS	30
-	u32 histo_spin_total[HISTO_BUCKETS+1];
-	u32 histo_spin_spinning[HISTO_BUCKETS+1];
+static struct xen_spinlock_stats
+{
+	u32 contention_stats[NR_CONTENTION_STATS];
 	u32 histo_spin_blocked[HISTO_BUCKETS+1];
-
-	u64 time_total;
-	u64 time_spinning;
 	u64 time_blocked;
 } spinlock_stats;
 
 static u8 zero_stats;
 
-static unsigned lock_timeout = 1 << 10;
-#define TIMEOUT lock_timeout
-
 static inline void check_zero(void)
 {
-	if (unlikely(zero_stats)) {
-		memset(&spinlock_stats, 0, sizeof(spinlock_stats));
-		zero_stats = 0;
+	u8 ret;
+	u8 old = ACCESS_ONCE(zero_stats);
+	if (unlikely(old)) {
+		ret = cmpxchg(&zero_stats, old, 0);
+		/* This ensures only one fellow resets the stat */
+		if (ret == old)
+			memset(&spinlock_stats, 0, sizeof(spinlock_stats));
 	}
 }
 
-#define ADD_STATS(elem, val)			\
-	do { check_zero(); spinlock_stats.elem += (val); } while(0)
+static inline void add_stats(enum xen_contention_stat var, u32 val)
+{
+	check_zero();
+	spinlock_stats.contention_stats[var] += val;
+}
 
 static inline u64 spin_time_start(void)
 {
@@ -73,22 +72,6 @@ static void __spin_time_accum(u64 delta, u32 *array)
 		array[HISTO_BUCKETS]++;
 }
 
-static inline void spin_time_accum_spinning(u64 start)
-{
-	u32 delta = xen_clocksource_read() - start;
-
-	__spin_time_accum(delta, spinlock_stats.histo_spin_spinning);
-	spinlock_stats.time_spinning += delta;
-}
-
-static inline void spin_time_accum_total(u64 start)
-{
-	u32 delta = xen_clocksource_read() - start;
-
-	__spin_time_accum(delta, spinlock_stats.histo_spin_total);
-	spinlock_stats.time_total += delta;
-}
-
 static inline void spin_time_accum_blocked(u64 start)
 {
 	u32 delta = xen_clocksource_read() - start;
@@ -98,19 +81,15 @@ static inline void spin_time_accum_blocked(u64 start)
 }
 #else  /* !CONFIG_XEN_DEBUG_FS */
 #define TIMEOUT			(1 << 10)
-#define ADD_STATS(elem, val)	do { (void)(val); } while(0)
+static inline void add_stats(enum xen_contention_stat var, u32 val)
+{
+}
 
 static inline u64 spin_time_start(void)
 {
 	return 0;
 }
 
-static inline void spin_time_accum_total(u64 start)
-{
-}
-static inline void spin_time_accum_spinning(u64 start)
-{
-}
 static inline void spin_time_accum_blocked(u64 start)
 {
 }
@@ -133,229 +112,82 @@ typedef u16 xen_spinners_t;
 	asm(LOCK_PREFIX " decw %0" : "+m" ((xl)->spinners) : : "memory");
 #endif
 
-struct xen_spinlock {
-	unsigned char lock;		/* 0 -> free; 1 -> locked */
-	xen_spinners_t spinners;	/* count of waiting cpus */
+struct xen_lock_waiting {
+	struct arch_spinlock *lock;
+	__ticket_t want;
 };
 
 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
+static cpumask_t waiting_cpus;
 
-#if 0
-static int xen_spin_is_locked(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	return xl->lock != 0;
-}
-
-static int xen_spin_is_contended(struct arch_spinlock *lock)
+static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 {
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	/* Not strictly true; this is only the count of contended
-	   lock-takers entering the slow path. */
-	return xl->spinners != 0;
-}
-
-static int xen_spin_trylock(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	u8 old = 1;
-
-	asm("xchgb %b0,%1"
-	    : "+q" (old), "+m" (xl->lock) : : "memory");
-
-	return old == 0;
-}
-
-static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
-
-/*
- * Mark a cpu as interested in a lock.  Returns the CPU's previous
- * lock of interest, in case we got preempted by an interrupt.
- */
-static inline struct xen_spinlock *spinning_lock(struct xen_spinlock *xl)
-{
-	struct xen_spinlock *prev;
-
-	prev = __this_cpu_read(lock_spinners);
-	__this_cpu_write(lock_spinners, xl);
-
-	wmb();			/* set lock of interest before count */
-
-	inc_spinners(xl);
-
-	return prev;
-}
-
-/*
- * Mark a cpu as no longer interested in a lock.  Restores previous
- * lock of interest (NULL for none).
- */
-static inline void unspinning_lock(struct xen_spinlock *xl, struct xen_spinlock *prev)
-{
-	dec_spinners(xl);
-	wmb();			/* decrement count before restoring lock */
-	__this_cpu_write(lock_spinners, prev);
-}
-
-static noinline int xen_spin_lock_slow(struct arch_spinlock *lock, bool irq_enable)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	struct xen_spinlock *prev;
 	int irq = __this_cpu_read(lock_kicker_irq);
-	int ret;
+	struct xen_lock_waiting *w = &__get_cpu_var(lock_waiting);
+	int cpu = smp_processor_id();
 	u64 start;
+	unsigned long flags;
 
 	/* If kicker interrupts not initialized yet, just spin */
 	if (irq == -1)
-		return 0;
+		return;
 
 	start = spin_time_start();
 
-	/* announce we're spinning */
-	prev = spinning_lock(xl);
-
-	ADD_STATS(taken_slow, 1);
-	ADD_STATS(taken_slow_nested, prev != NULL);
-
-	do {
-		unsigned long flags;
-
-		/* clear pending */
-		xen_clear_irq_pending(irq);
-
-		/* check again make sure it didn't become free while
-		   we weren't looking  */
-		ret = xen_spin_trylock(lock);
-		if (ret) {
-			ADD_STATS(taken_slow_pickup, 1);
-
-			/*
-			 * If we interrupted another spinlock while it
-			 * was blocking, make sure it doesn't block
-			 * without rechecking the lock.
-			 */
-			if (prev != NULL)
-				xen_set_irq_pending(irq);
-			goto out;
-		}
+	/*
+	 * Make sure an interrupt handler can't upset things in a
+	 * partially setup state.
+	 */
+	local_irq_save(flags);
 
-		flags = arch_local_save_flags();
-		if (irq_enable) {
-			ADD_STATS(taken_slow_irqenable, 1);
-			raw_local_irq_enable();
-		}
+	w->want = want;
+	smp_wmb();
+	w->lock = lock;
 
-		/*
-		 * Block until irq becomes pending.  If we're
-		 * interrupted at this point (after the trylock but
-		 * before entering the block), then the nested lock
-		 * handler guarantees that the irq will be left
-		 * pending if there's any chance the lock became free;
-		 * xen_poll_irq() returns immediately if the irq is
-		 * pending.
-		 */
-		xen_poll_irq(irq);
+	/* This uses set_bit, which is atomic and therefore a barrier */
+	cpumask_set_cpu(cpu, &waiting_cpus);
+	add_stats(TAKEN_SLOW, 1);
 
-		raw_local_irq_restore(flags);
+	/* clear pending */
+	xen_clear_irq_pending(irq);
 
-		ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
-	} while (!xen_test_irq_pending(irq)); /* check for spurious wakeups */
+	/* Only check lock once pending cleared */
+	barrier();
 
+	/* check again to make sure it didn't become free while
+	   we weren't looking  */
+	if (ACCESS_ONCE(lock->tickets.head) == want) {
+		add_stats(TAKEN_SLOW_PICKUP, 1);
+		goto out;
+	}
+	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
+	xen_poll_irq(irq);
+	add_stats(TAKEN_SLOW_SPURIOUS, !xen_test_irq_pending(irq));
 	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
-
 out:
-	unspinning_lock(xl, prev);
+	cpumask_clear_cpu(cpu, &waiting_cpus);
+	w->lock = NULL;
+	local_irq_restore(flags);
 	spin_time_accum_blocked(start);
-
-	return ret;
 }
 
-static inline void __xen_spin_lock(struct arch_spinlock *lock, bool irq_enable)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	unsigned timeout;
-	u8 oldval;
-	u64 start_spin;
-
-	ADD_STATS(taken, 1);
-
-	start_spin = spin_time_start();
-
-	do {
-		u64 start_spin_fast = spin_time_start();
-
-		timeout = TIMEOUT;
-
-		asm("1: xchgb %1,%0\n"
-		    "   testb %1,%1\n"
-		    "   jz 3f\n"
-		    "2: rep;nop\n"
-		    "   cmpb $0,%0\n"
-		    "   je 1b\n"
-		    "   dec %2\n"
-		    "   jnz 2b\n"
-		    "3:\n"
-		    : "+m" (xl->lock), "=q" (oldval), "+r" (timeout)
-		    : "1" (1)
-		    : "memory");
-
-		spin_time_accum_spinning(start_spin_fast);
-
-	} while (unlikely(oldval != 0 &&
-			  (TIMEOUT == ~0 || !xen_spin_lock_slow(lock, irq_enable))));
-
-	spin_time_accum_total(start_spin);
-}
-
-static void xen_spin_lock(struct arch_spinlock *lock)
-{
-	__xen_spin_lock(lock, false);
-}
-
-static void xen_spin_lock_flags(struct arch_spinlock *lock, unsigned long flags)
-{
-	__xen_spin_lock(lock, !raw_irqs_disabled_flags(flags));
-}
-
-static noinline void xen_spin_unlock_slow(struct xen_spinlock *xl)
+static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
 	int cpu;
 
-	ADD_STATS(released_slow, 1);
+	add_stats(RELEASED_SLOW, 1);
+
+	for_each_cpu(cpu, &waiting_cpus) {
+		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
 
-	for_each_online_cpu(cpu) {
-		/* XXX should mix up next cpu selection */
-		if (per_cpu(lock_spinners, cpu) == xl) {
-			ADD_STATS(released_slow_kicked, 1);
+		if (w->lock == lock && w->want == next) {
+			add_stats(RELEASED_SLOW_KICKED, 1);
 			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
 		}
 	}
 }
 
-static void xen_spin_unlock(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	ADD_STATS(released, 1);
-
-	smp_wmb();		/* make sure no writes get moved after unlock */
-	xl->lock = 0;		/* release lock */
-
-	/*
-	 * Make sure unlock happens before checking for waiting
-	 * spinners.  We need a strong barrier to enforce the
-	 * write-read ordering to different memory locations, as the
-	 * CPU makes no implied guarantees about their ordering.
-	 */
-	mb();
-
-	if (unlikely(xl->spinners))
-		xen_spin_unlock_slow(xl);
-}
-#endif
-
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
 	BUG();
@@ -415,15 +247,8 @@ void __init xen_init_spinlocks(void)
 	if (xen_hvm_domain())
 		return;
 
-	BUILD_BUG_ON(sizeof(struct xen_spinlock) > sizeof(arch_spinlock_t));
-#if 0
-	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
-	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
-	pv_lock_ops.spin_lock = xen_spin_lock;
-	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
-	pv_lock_ops.spin_trylock = xen_spin_trylock;
-	pv_lock_ops.spin_unlock = xen_spin_unlock;
-#endif
+	pv_lock_ops.lock_spinning = xen_lock_spinning;
+	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
 #ifdef CONFIG_XEN_DEBUG_FS
@@ -441,37 +266,21 @@ static int __init xen_spinlock_debugfs(void)
 
 	debugfs_create_u8("zero_stats", 0644, d_spin_debug, &zero_stats);
 
-	debugfs_create_u32("timeout", 0644, d_spin_debug, &lock_timeout);
-
-	debugfs_create_u64("taken", 0444, d_spin_debug, &spinlock_stats.taken);
 	debugfs_create_u32("taken_slow", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow);
-	debugfs_create_u32("taken_slow_nested", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_nested);
+			   &spinlock_stats.contention_stats[TAKEN_SLOW]);
 	debugfs_create_u32("taken_slow_pickup", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_pickup);
+			   &spinlock_stats.contention_stats[TAKEN_SLOW_PICKUP]);
 	debugfs_create_u32("taken_slow_spurious", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_spurious);
-	debugfs_create_u32("taken_slow_irqenable", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_irqenable);
+			   &spinlock_stats.contention_stats[TAKEN_SLOW_SPURIOUS]);
 
-	debugfs_create_u64("released", 0444, d_spin_debug, &spinlock_stats.released);
 	debugfs_create_u32("released_slow", 0444, d_spin_debug,
-			   &spinlock_stats.released_slow);
+			   &spinlock_stats.contention_stats[RELEASED_SLOW]);
 	debugfs_create_u32("released_slow_kicked", 0444, d_spin_debug,
-			   &spinlock_stats.released_slow_kicked);
+			   &spinlock_stats.contention_stats[RELEASED_SLOW_KICKED]);
 
-	debugfs_create_u64("time_spinning", 0444, d_spin_debug,
-			   &spinlock_stats.time_spinning);
 	debugfs_create_u64("time_blocked", 0444, d_spin_debug,
 			   &spinlock_stats.time_blocked);
-	debugfs_create_u64("time_total", 0444, d_spin_debug,
-			   &spinlock_stats.time_total);
 
-	debugfs_create_u32_array("histo_total", 0444, d_spin_debug,
-				spinlock_stats.histo_spin_total, HISTO_BUCKETS + 1);
-	debugfs_create_u32_array("histo_spinning", 0444, d_spin_debug,
-				spinlock_stats.histo_spin_spinning, HISTO_BUCKETS + 1);
 	debugfs_create_u32_array("histo_blocked", 0444, d_spin_debug,
 				spinlock_stats.histo_spin_blocked, HISTO_BUCKETS + 1);
 


^ permalink raw reply related	[flat|nested] 192+ messages in thread

* [PATCH RFC V9 6/19]  xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks
  2013-06-01 19:21 ` Raghavendra K T
@ 2013-06-01 19:23   ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:23 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/xen/spinlock.c |   14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 860e190..3de6805 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -238,6 +238,8 @@ void xen_uninit_lock_cpu(int cpu)
 	per_cpu(lock_kicker_irq, cpu) = -1;
 }
 
+static bool xen_pvspin __initdata = true;
+
 void __init xen_init_spinlocks(void)
 {
 	/*
@@ -247,10 +249,22 @@ void __init xen_init_spinlocks(void)
 	if (xen_hvm_domain())
 		return;
 
+	if (!xen_pvspin) {
+		printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
+		return;
+	}
+
 	pv_lock_ops.lock_spinning = xen_lock_spinning;
 	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
+static __init int xen_parse_nopvspin(char *arg)
+{
+	xen_pvspin = false;
+	return 0;
+}
+early_param("xen_nopvspin", xen_parse_nopvspin);
+
 #ifdef CONFIG_XEN_DEBUG_FS
 
 static struct dentry *d_spin_debug;
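
A brief usage note, inferred from the hunk above rather than stated in the
patch: passing the bare xen_nopvspin token on the guest kernel command line
is enough, since xen_parse_nopvspin() ignores its argument.  In that case
xen_init_spinlocks() returns early after printing "xen: PV spinlocks
disabled" and leaves pv_lock_ops at its native defaults.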


^ permalink raw reply related	[flat|nested] 192+ messages in thread

* [PATCH RFC V9 7/19]  x86/pvticketlock: Use callee-save for lock_spinning
  2013-06-01 19:21 ` Raghavendra K T
  (?)
@ 2013-06-01 19:23   ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:23 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

x86/pvticketlock: Use callee-save for lock_spinning

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Although the lock_spinning calls in the spinlock code are on the
uncommon path, their presence can cause the compiler to generate many
more register save/restores in the function pre/postamble, which is in
the fast path.  To avoid this, convert it to using the pvops callee-save
calling convention, which defers all the save/restores until the actual
function is called, keeping the fastpath clean.
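
As a rough illustration of the register-pressure problem (plain C outside
the kernel; fast_path() and slow() are made-up names, and the real mechanism
is the PV_CALLEE_SAVE machinery in the diff below):

/*
 * slow() is almost never called, but an ordinary call clobbers all
 * caller-saved registers, so the compiler parks the loop state in
 * callee-saved registers and must save/restore those in the function
 * prologue/epilogue even when slow() never runs.  A callee-save thunk
 * promises to preserve every register, so the hot loop avoids that
 * save/restore traffic.
 */
extern void slow(unsigned long why);		/* hypothetical slowpath helper */

unsigned long fast_path(const unsigned long *v, unsigned long n)
{
	unsigned long i, sum = 0;

	for (i = 0; i < n; i++) {
		sum += v[i];
		if (__builtin_expect(sum == 0, 0))	/* rare slow path */
			slow(sum);
	}
	return sum;
}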

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Attilio Rao <attilio.rao@citrix.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/include/asm/paravirt.h       |    2 +-
 arch/x86/include/asm/paravirt_types.h |    2 +-
 arch/x86/kernel/paravirt-spinlocks.c  |    2 +-
 arch/x86/xen/spinlock.c               |    3 ++-
 4 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 040e72d..7131e12c 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -715,7 +715,7 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
 							__ticket_t ticket)
 {
-	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
+	PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
 static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index d5deb6d..350d017 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -330,7 +330,7 @@ struct arch_spinlock;
 #include <asm/spinlock_types.h>
 
 struct pv_lock_ops {
-	void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
+	struct paravirt_callee_save lock_spinning;
 	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
 };
 
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index c2e010e..4251c1d 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -9,7 +9,7 @@
 
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-	.lock_spinning = paravirt_nop,
+	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
 	.unlock_kick = paravirt_nop,
 #endif
 };
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 3de6805..5502fda 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -171,6 +171,7 @@ out:
 	local_irq_restore(flags);
 	spin_time_accum_blocked(start);
 }
+PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
 
 static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
@@ -254,7 +255,7 @@ void __init xen_init_spinlocks(void)
 		return;
 	}
 
-	pv_lock_ops.lock_spinning = xen_lock_spinning;
+	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
 	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 


^ permalink raw reply related	[flat|nested] 192+ messages in thread

* [PATCH RFC V9 8/19]  x86/pvticketlock: When paravirtualizing ticket locks, increment by 2
  2013-06-01 19:21 ` Raghavendra K T
  (?)
@ 2013-06-01 19:24   ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:24 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

x86/pvticketlock: When paravirtualizing ticket locks, increment by 2

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Increment ticket head/tails by 2 rather than 1 to leave the LSB free
to store an "is in slowpath state" bit.  This halves the number
of possible CPUs for a given ticket size, but this shouldn't matter
in practice - kernels built for 32k+ CPU systems are probably
specially built for the hardware rather than a generic distro
kernel.
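
A quick worked sketch of the encoding this enables (stand-alone user-space C,
not part of the patch; TICKET_SLOWPATH_FLAG anticipates the flag introduced
later in the series):

#include <stdio.h>
#include <stdint.h>

typedef uint8_t __ticket_t;			/* the "small ticket" case */

#define TICKET_LOCK_INC		((__ticket_t)2)
#define TICKET_SLOWPATH_FLAG	((__ticket_t)1)	/* lives in the now-free LSB */

int main(void)
{
	__ticket_t tail = 0;

	tail += TICKET_LOCK_INC;	/* first waiter takes ticket 2 */
	tail |= TICKET_SLOWPATH_FLAG;	/* a waiter entered the slowpath */
	tail += TICKET_LOCK_INC;	/* next waiter takes ticket 4 */

	/* ticket arithmetic simply masks the flag back off */
	printf("raw tail=%u ticket=%u slowpath=%u\n",
	       (unsigned)tail,
	       (unsigned)(tail & ~TICKET_SLOWPATH_FLAG),
	       (unsigned)(tail & TICKET_SLOWPATH_FLAG));
	/* prints: raw tail=5 ticket=4 slowpath=1 */
	return 0;
}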

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Tested-by: Attilio Rao <attilio.rao@citrix.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/include/asm/spinlock.h       |   10 +++++-----
 arch/x86/include/asm/spinlock_types.h |   10 +++++++++-
 2 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 7442410..04a5cd5 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -78,7 +78,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
  */
 static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
-	register struct __raw_tickets inc = { .tail = 1 };
+	register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
 
 	inc = xadd(&lock->tickets, inc);
 
@@ -104,7 +104,7 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 	if (old.tickets.head != old.tickets.tail)
 		return 0;
 
-	new.head_tail = old.head_tail + (1 << TICKET_SHIFT);
+	new.head_tail = old.head_tail + (TICKET_LOCK_INC << TICKET_SHIFT);
 
 	/* cmpxchg is a full barrier, so nothing can move before it */
 	return cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) == old.head_tail;
@@ -112,9 +112,9 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-	__ticket_t next = lock->tickets.head + 1;
+	__ticket_t next = lock->tickets.head + TICKET_LOCK_INC;
 
-	__add(&lock->tickets.head, 1, UNLOCK_LOCK_PREFIX);
+	__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
 	__ticket_unlock_kick(lock, next);
 }
 
@@ -129,7 +129,7 @@ static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
-	return (__ticket_t)(tmp.tail - tmp.head) > 1;
+	return (__ticket_t)(tmp.tail - tmp.head) > TICKET_LOCK_INC;
 }
 #define arch_spin_is_contended	arch_spin_is_contended
 
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 83fd3c7..e96fcbd 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -3,7 +3,13 @@
 
 #include <linux/types.h>
 
-#if (CONFIG_NR_CPUS < 256)
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#define __TICKET_LOCK_INC	2
+#else
+#define __TICKET_LOCK_INC	1
+#endif
+
+#if (CONFIG_NR_CPUS < (256 / __TICKET_LOCK_INC))
 typedef u8  __ticket_t;
 typedef u16 __ticketpair_t;
 #else
@@ -11,6 +17,8 @@ typedef u16 __ticket_t;
 typedef u32 __ticketpair_t;
 #endif
 
+#define TICKET_LOCK_INC	((__ticket_t)__TICKET_LOCK_INC)
+
 #define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
 
 typedef struct arch_spinlock {


^ permalink raw reply related	[flat|nested] 192+ messages in thread

* [PATCH RFC V9 9/19]  Split out rate limiting from jump_label.h
  2013-06-01 19:21 ` Raghavendra K T
@ 2013-06-01 19:24   ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:24 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

Split jumplabel ratelimit

From: Andrew Jones <drjones@redhat.com>

Commit b202952075f62603bea9bfb6ebc6b0420db11949 introduced rate limiting
for jump label disabling. The changes were made in the jump label code
in order to be more widely available and to keep things tidier. This is
all fine, except now jump_label.h includes linux/workqueue.h, which
makes it impossible to include jump_label.h from anything that
workqueue.h needs. For example, it's now impossible to include
jump_label.h from asm/spinlock.h, which is done in proposed
pv-ticketlock patches. This patch splits out the rate limiting related
changes from jump_label.h into a new file, jump_label_ratelimit.h, to
resolve the issue.
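
A hedged sketch of what a user of the rate-limited API looks like after the
split (the my_feature_* names are hypothetical; the calls are the ones moved
into the new header, and only such users now pull in workqueue.h):

#include <linux/jump_label_ratelimit.h>

static struct static_key_deferred my_feature_key;

static bool my_feature_active(void)
{
	/* fast path: plain static key test, nothing deferred here */
	return static_key_false(&my_feature_key.key);
}

static void my_feature_init(void)
{
	/* coalesce disables: defer the expensive patching by HZ jiffies */
	jump_label_rate_limit(&my_feature_key, HZ);
}

static void my_feature_get(void)
{
	static_key_slow_inc(&my_feature_key.key);
}

static void my_feature_put(void)
{
	/* the decrement is deferred by the timeout configured above */
	static_key_slow_dec_deferred(&my_feature_key);
}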

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 include/linux/jump_label.h           |   26 +-------------------------
 include/linux/jump_label_ratelimit.h |   34 ++++++++++++++++++++++++++++++++++
 include/linux/perf_event.h           |    1 +
 kernel/jump_label.c                  |    1 +
 4 files changed, 37 insertions(+), 25 deletions(-)
 create mode 100644 include/linux/jump_label_ratelimit.h

diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index 0976fc4..53cdf89 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -48,7 +48,6 @@
 
 #include <linux/types.h>
 #include <linux/compiler.h>
-#include <linux/workqueue.h>
 
 #if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL)
 
@@ -61,12 +60,6 @@ struct static_key {
 #endif
 };
 
-struct static_key_deferred {
-	struct static_key key;
-	unsigned long timeout;
-	struct delayed_work work;
-};
-
 # include <asm/jump_label.h>
 # define HAVE_JUMP_LABEL
 #endif	/* CC_HAVE_ASM_GOTO && CONFIG_JUMP_LABEL */
@@ -119,10 +112,7 @@ extern void arch_jump_label_transform_static(struct jump_entry *entry,
 extern int jump_label_text_reserved(void *start, void *end);
 extern void static_key_slow_inc(struct static_key *key);
 extern void static_key_slow_dec(struct static_key *key);
-extern void static_key_slow_dec_deferred(struct static_key_deferred *key);
 extern void jump_label_apply_nops(struct module *mod);
-extern void
-jump_label_rate_limit(struct static_key_deferred *key, unsigned long rl);
 
 #define STATIC_KEY_INIT_TRUE ((struct static_key) \
 	{ .enabled = ATOMIC_INIT(1), .entries = (void *)1 })
@@ -141,10 +131,6 @@ static __always_inline void jump_label_init(void)
 {
 }
 
-struct static_key_deferred {
-	struct static_key  key;
-};
-
 static __always_inline bool static_key_false(struct static_key *key)
 {
 	if (unlikely(atomic_read(&key->enabled)) > 0)
@@ -169,11 +155,6 @@ static inline void static_key_slow_dec(struct static_key *key)
 	atomic_dec(&key->enabled);
 }
 
-static inline void static_key_slow_dec_deferred(struct static_key_deferred *key)
-{
-	static_key_slow_dec(&key->key);
-}
-
 static inline int jump_label_text_reserved(void *start, void *end)
 {
 	return 0;
@@ -187,12 +168,6 @@ static inline int jump_label_apply_nops(struct module *mod)
 	return 0;
 }
 
-static inline void
-jump_label_rate_limit(struct static_key_deferred *key,
-		unsigned long rl)
-{
-}
-
 #define STATIC_KEY_INIT_TRUE ((struct static_key) \
 		{ .enabled = ATOMIC_INIT(1) })
 #define STATIC_KEY_INIT_FALSE ((struct static_key) \
@@ -203,6 +178,7 @@ jump_label_rate_limit(struct static_key_deferred *key,
 #define STATIC_KEY_INIT STATIC_KEY_INIT_FALSE
 #define jump_label_enabled static_key_enabled
 
+static inline int atomic_read(const atomic_t *v);
 static inline bool static_key_enabled(struct static_key *key)
 {
 	return (atomic_read(&key->enabled) > 0);
diff --git a/include/linux/jump_label_ratelimit.h b/include/linux/jump_label_ratelimit.h
new file mode 100644
index 0000000..1137883
--- /dev/null
+++ b/include/linux/jump_label_ratelimit.h
@@ -0,0 +1,34 @@
+#ifndef _LINUX_JUMP_LABEL_RATELIMIT_H
+#define _LINUX_JUMP_LABEL_RATELIMIT_H
+
+#include <linux/jump_label.h>
+#include <linux/workqueue.h>
+
+#if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL)
+struct static_key_deferred {
+	struct static_key key;
+	unsigned long timeout;
+	struct delayed_work work;
+};
+#endif
+
+#ifdef HAVE_JUMP_LABEL
+extern void static_key_slow_dec_deferred(struct static_key_deferred *key);
+extern void
+jump_label_rate_limit(struct static_key_deferred *key, unsigned long rl);
+
+#else	/* !HAVE_JUMP_LABEL */
+struct static_key_deferred {
+	struct static_key  key;
+};
+static inline void static_key_slow_dec_deferred(struct static_key_deferred *key)
+{
+	static_key_slow_dec(&key->key);
+}
+static inline void
+jump_label_rate_limit(struct static_key_deferred *key,
+		unsigned long rl)
+{
+}
+#endif	/* HAVE_JUMP_LABEL */
+#endif	/* _LINUX_JUMP_LABEL_RATELIMIT_H */
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index f463a46..a8eac60 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -48,6 +48,7 @@ struct perf_guest_info_callbacks {
 #include <linux/cpu.h>
 #include <linux/irq_work.h>
 #include <linux/static_key.h>
+#include <linux/jump_label_ratelimit.h>
 #include <linux/atomic.h>
 #include <linux/sysfs.h>
 #include <linux/perf_regs.h>
diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index 60f48fa..297a924 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -13,6 +13,7 @@
 #include <linux/sort.h>
 #include <linux/err.h>
 #include <linux/static_key.h>
+#include <linux/jump_label_ratelimit.h>
 
 #ifdef HAVE_JUMP_LABEL
 


^ permalink raw reply related	[flat|nested] 192+ messages in thread

* [PATCH RFC V9 10/19]  x86/ticketlock: Add slowpath logic
  2013-06-01 19:21 ` Raghavendra K T
@ 2013-06-01 19:24   ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:24 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

x86/ticketlock: Add slowpath logic

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Maintain a flag in the LSB of the ticket lock tail which indicates
whether anyone is in the lock slowpath and may need kicking when
the current holder unlocks.  The flag is set when the first locker
enters the slowpath, and cleared when unlocking to an empty queue (ie,
no contention).

In the specific implementation of lock_spinning(), make sure to set
the slowpath flags on the lock just before blocking.  We must do
this before the last-chance pickup test to prevent a deadlock
with the unlocker:

Unlocker			Locker
				test for lock pickup
					-> fail
unlock
test slowpath
	-> false
				set slowpath flags
				block

Whereas this works in any ordering:

Unlocker			Locker
				set slowpath flags
				test for lock pickup
					-> fail
				block
unlock
test slowpath
	-> true, kick

If the unlocker finds that the lock has the slowpath flag set but it is
actually uncontended (ie, head == tail, so nobody is waiting), then it
clears the slowpath flag.

The unlock code uses a locked add to update the head counter.  This also
acts as a full memory barrier so that it's safe to subsequently
read back the slowpath flag state, knowing that the updated lock is visible
to the other CPUs.  If it were an unlocked add, then the flag read may
just be forwarded from the store buffer before it was visible to the other
CPUs, which could result in a deadlock.

Unfortunately this means we need to do a locked instruction when
unlocking with PV ticketlocks.  However, if PV ticketlocks are not
enabled, then the old non-locked "add" is the only unlocking code.

Note: this code relies on gcc making sure that unlikely() code is out of
line of the fastpath, which only happens when OPTIMIZE_SIZE=n.  If it
doesn't, the generated code isn't too bad, but it's definitely suboptimal.

Thanks to Srivatsa Vaddagiri for providing a bugfix to the original
version of this change, which has been folded in.
Thanks to Stephan Diestelhorst for commenting on some code which relied
on an inaccurate reading of the x86 memory ordering rules.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stephan Diestelhorst <stephan.diestelhorst@amd.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/include/asm/paravirt.h       |    2 -
 arch/x86/include/asm/spinlock.h       |   86 ++++++++++++++++++++++++---------
 arch/x86/include/asm/spinlock_types.h |    2 +
 arch/x86/kernel/paravirt-spinlocks.c  |    3 +
 arch/x86/xen/spinlock.c               |    6 ++
 5 files changed, 74 insertions(+), 25 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 7131e12c..401f350 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -718,7 +718,7 @@ static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
 	PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
 							__ticket_t ticket)
 {
 	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 04a5cd5..d68883d 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -1,11 +1,14 @@
 #ifndef _ASM_X86_SPINLOCK_H
 #define _ASM_X86_SPINLOCK_H
 
+#include <linux/jump_label.h>
 #include <linux/atomic.h>
 #include <asm/page.h>
 #include <asm/processor.h>
 #include <linux/compiler.h>
 #include <asm/paravirt.h>
+#include <asm/bitops.h>
+
 /*
  * Your basic SMP spinlocks, allowing only a single CPU anywhere
  *
@@ -37,32 +40,28 @@
 /* How long a lock should spin before we consider blocking */
 #define SPIN_THRESHOLD	(1 << 15)
 
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
+extern struct static_key paravirt_ticketlocks_enabled;
+static __always_inline bool static_key_false(struct static_key *key);
 
-static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
-							__ticket_t ticket)
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+
+static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
 {
+	set_bit(0, (volatile unsigned long *)&lock->tickets.tail);
 }
 
-static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
-							 __ticket_t ticket)
+#else  /* !CONFIG_PARAVIRT_SPINLOCKS */
+static __always_inline void __ticket_lock_spinning(arch_spinlock_t *lock,
+							__ticket_t ticket)
 {
 }
-
-#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
-
-
-/*
- * If a spinlock has someone waiting on it, then kick the appropriate
- * waiting cpu.
- */
-static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
-							__ticket_t next)
+static inline void __ticket_unlock_kick(arch_spinlock_t *lock,
+							__ticket_t ticket)
 {
-	if (unlikely(lock->tickets.tail != next))
-		____ticket_unlock_kick(lock, next);
 }
 
+#endif /* CONFIG_PARAVIRT_SPINLOCKS */
+
 /*
  * Ticket locks are conceptually two parts, one indicating the current head of
  * the queue, and the other indicating the current tail. The lock is acquired
@@ -76,20 +75,22 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
  * in the high part, because a wide xadd increment of the low part would carry
  * up and contaminate the high part.
  */
-static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
+static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
 {
 	register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
 
 	inc = xadd(&lock->tickets, inc);
+	if (likely(inc.head == inc.tail))
+		goto out;
 
+	inc.tail &= ~TICKET_SLOWPATH_FLAG;
 	for (;;) {
 		unsigned count = SPIN_THRESHOLD;
 
 		do {
-			if (inc.head == inc.tail)
+			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
 				goto out;
 			cpu_relax();
-			inc.head = ACCESS_ONCE(lock->tickets.head);
 		} while (--count);
 		__ticket_lock_spinning(lock, inc.tail);
 	}
@@ -101,7 +102,7 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 	arch_spinlock_t old, new;
 
 	old.tickets = ACCESS_ONCE(lock->tickets);
-	if (old.tickets.head != old.tickets.tail)
+	if (old.tickets.head != (old.tickets.tail & ~TICKET_SLOWPATH_FLAG))
 		return 0;
 
 	new.head_tail = old.head_tail + (TICKET_LOCK_INC << TICKET_SHIFT);
@@ -110,12 +111,49 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 	return cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) == old.head_tail;
 }
 
+static inline void __ticket_unlock_slowpath(arch_spinlock_t *lock,
+					    arch_spinlock_t old)
+{
+	arch_spinlock_t new;
+
+	BUILD_BUG_ON(((__ticket_t)NR_CPUS) != NR_CPUS);
+
+	/* Perform the unlock on the "before" copy */
+	old.tickets.head += TICKET_LOCK_INC;
+
+	/* Clear the slowpath flag */
+	new.head_tail = old.head_tail & ~(TICKET_SLOWPATH_FLAG << TICKET_SHIFT);
+
+	/*
+	 * If the lock is uncontended, clear the flag - use cmpxchg in
+	 * case it changes behind our back though.
+	 */
+	if (new.tickets.head != new.tickets.tail ||
+	    cmpxchg(&lock->head_tail, old.head_tail,
+					new.head_tail) != old.head_tail) {
+		/*
+		 * Lock still has someone queued for it, so wake up an
+		 * appropriate waiter.
+		 */
+		__ticket_unlock_kick(lock, old.tickets.head);
+	}
+}
+
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-	__ticket_t next = lock->tickets.head + TICKET_LOCK_INC;
+	if (TICKET_SLOWPATH_FLAG &&
+	    static_key_false(&paravirt_ticketlocks_enabled)) {
+		arch_spinlock_t prev;
+
+		prev = *lock;
+		add_smp(&lock->tickets.head, TICKET_LOCK_INC);
+
+		/* add_smp() is a full mb() */
 
-	__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
-	__ticket_unlock_kick(lock, next);
+		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
+			__ticket_unlock_slowpath(lock, prev);
+	} else
+		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
 }
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index e96fcbd..4f1bea1 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -5,8 +5,10 @@
 
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 #define __TICKET_LOCK_INC	2
+#define TICKET_SLOWPATH_FLAG	((__ticket_t)1)
 #else
 #define __TICKET_LOCK_INC	1
+#define TICKET_SLOWPATH_FLAG	((__ticket_t)0)
 #endif
 
 #if (CONFIG_NR_CPUS < (256 / __TICKET_LOCK_INC))
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 4251c1d..bbb6c73 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -4,6 +4,7 @@
  */
 #include <linux/spinlock.h>
 #include <linux/module.h>
+#include <linux/jump_label.h>
 
 #include <asm/paravirt.h>
 
@@ -15,3 +16,5 @@ struct pv_lock_ops pv_lock_ops = {
 };
 EXPORT_SYMBOL(pv_lock_ops);
 
+struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;
+EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 5502fda..a3b22e6 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -155,6 +155,10 @@ static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	/* Only check lock once pending cleared */
 	barrier();
 
+	/* Mark entry to slowpath before doing the pickup test to make
+	   sure we don't deadlock with an unlocker. */
+	__ticket_enter_slowpath(lock);
+
 	/* check again make sure it didn't become free while
 	   we weren't looking  */
 	if (ACCESS_ONCE(lock->tickets.head) == want) {
@@ -255,6 +259,8 @@ void __init xen_init_spinlocks(void)
 		return;
 	}
 
+	static_key_slow_inc(&paravirt_ticketlocks_enabled);
+
 	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
 	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }


^ permalink raw reply related	[flat|nested] 192+ messages in thread

* [PATCH RFC V9 11/19]  xen/pvticketlock: Allow interrupts to be enabled while blocking
  2013-06-01 19:21 ` Raghavendra K T
@ 2013-06-01 19:24   ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:24 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

xen/pvticketlock: Allow interrupts to be enabled while blocking

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

If interrupts were enabled when taking the spinlock, we can leave them
enabled while blocking to get the lock.

If we can enable interrupts while waiting for the lock to become
available, and we take an interrupt before entering the poll,
and the handler takes a spinlock which ends up going into
the slow state (invalidating the per-cpu "lock" and "want" values),
then, when the interrupt handler returns, the event channel will
remain pending, so the poll will return immediately, causing us to
fall back out to the main spinlock loop.
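
The per-cpu (lock, want) publication protocol that keeps this safe is
summarised below (a sketch mirroring the ordering comments added by this
patch; w is this cpu's lock_waiting entry):

	/* Writer side, in xen_lock_spinning(): */
	w->lock = NULL;		/* invalidate the old pair first */
	smp_wmb();
	w->want = want;		/* then publish the wanted ticket */
	smp_wmb();
	w->lock = lock;		/* and only then the lock pointer */

	/* Reader side, in xen_unlock_kick(): lock is read before want. */
	if (ACCESS_ONCE(w->lock) == lock && ACCESS_ONCE(w->want) == next)
		xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);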

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/xen/spinlock.c |   46 ++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 40 insertions(+), 6 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index a3b22e6..5e78ee9 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -140,7 +140,20 @@ static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	 * partially setup state.
 	 */
 	local_irq_save(flags);
-
+	/*
+	 * We don't really care if we're overwriting some other
+	 * (lock,want) pair, as that would mean that we're currently
+	 * in an interrupt context, and the outer context had
+	 * interrupts enabled.  That has already kicked the VCPU out
+	 * of xen_poll_irq(), so it will just return spuriously and
+	 * retry with newly setup (lock,want).
+	 *
+	 * The ordering protocol on this is that the "lock" pointer
+	 * may only be set non-NULL if the "want" ticket is correct.
+	 * If we're updating "want", we must first clear "lock".
+	 */
+	w->lock = NULL;
+	smp_wmb();
 	w->want = want;
 	smp_wmb();
 	w->lock = lock;
@@ -155,24 +168,43 @@ static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	/* Only check lock once pending cleared */
 	barrier();
 
-	/* Mark entry to slowpath before doing the pickup test to make
-	   sure we don't deadlock with an unlocker. */
+	/*
+	 * Mark entry to slowpath before doing the pickup test to make
+	 * sure we don't deadlock with an unlocker.
+	 */
 	__ticket_enter_slowpath(lock);
 
-	/* check again make sure it didn't become free while
-	   we weren't looking  */
+	/*
+	 * check again make sure it didn't become free while
+	 * we weren't looking
+	 */
 	if (ACCESS_ONCE(lock->tickets.head) == want) {
 		add_stats(TAKEN_SLOW_PICKUP, 1);
 		goto out;
 	}
+
+	/* Allow interrupts while blocked */
+	local_irq_restore(flags);
+
+	/*
+	 * If an interrupt happens here, it will leave the wakeup irq
+	 * pending, which will cause xen_poll_irq() to return
+	 * immediately.
+	 */
+
 	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
 	xen_poll_irq(irq);
 	add_stats(TAKEN_SLOW_SPURIOUS, !xen_test_irq_pending(irq));
+
+	local_irq_save(flags);
+
 	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 out:
 	cpumask_clear_cpu(cpu, &waiting_cpus);
 	w->lock = NULL;
+
 	local_irq_restore(flags);
+
 	spin_time_accum_blocked(start);
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
@@ -186,7 +218,9 @@ static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 	for_each_cpu(cpu, &waiting_cpus) {
 		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
 
-		if (w->lock == lock && w->want == next) {
+		/* Make sure we read lock before want */
+		if (ACCESS_ONCE(w->lock) == lock &&
+		    ACCESS_ONCE(w->want) == next) {
 			add_stats(RELEASED_SLOW_KICKED, 1);
 			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
 		}


^ permalink raw reply related	[flat|nested] 192+ messages in thread

* [PATCH RFC V9 12/19]  xen: Enable PV ticketlocks on HVM Xen
  2013-06-01 19:21 ` Raghavendra K T
@ 2013-06-01 19:25   ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:25 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

xen: Enable PV ticketlocks on HVM Xen

From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/xen/smp.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index dcdc91c..8d2abf7 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -682,4 +682,5 @@ void __init xen_hvm_smp_init(void)
 	smp_ops.cpu_die = xen_hvm_cpu_die;
 	smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
 	smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
+	xen_init_spinlocks();
 }


^ permalink raw reply related	[flat|nested] 192+ messages in thread

* [PATCH RFC V9 13/19] kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks
  2013-06-01 19:21 ` Raghavendra K T
@ 2013-06-01 19:25   ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:25 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks

From: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>

KVM_HC_KICK_CPU allows the calling vcpu to kick another vcpu out of halt state.
The presence of this hypercall is indicated to the guest via
KVM_FEATURE_PV_UNHALT.
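
For illustration, a guest might use the hypercall roughly as follows
(a sketch only; the real guest-side caller is added later in the series,
and kick_vcpu_sketch() is a hypothetical name):

	/* Sketch only: wake the halted vcpu that owns the given APIC ID. */
	static void kick_vcpu_sketch(int apicid)
	{
		if (kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
			kvm_hypercall1(KVM_HC_KICK_CPU, apicid);
	}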

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Suzuki Poulose <suzuki@in.ibm.com>
[Raghu: Apic related changes, folding pvunhalted into vcpu_runnable]
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/include/asm/kvm_host.h      |    5 +++++
 arch/x86/include/uapi/asm/kvm_para.h |    1 +
 arch/x86/kvm/cpuid.c                 |    3 ++-
 arch/x86/kvm/x86.c                   |   37 ++++++++++++++++++++++++++++++++++
 include/uapi/linux/kvm_para.h        |    1 +
 5 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3741c65..95702de 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -503,6 +503,11 @@ struct kvm_vcpu_arch {
 	 * instruction.
 	 */
 	bool write_fault_to_shadow_pgtable;
+
+	/* pv related host specific info */
+	struct {
+		bool pv_unhalted;
+	} pv;
 };
 
 struct kvm_lpage_info {
diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 06fdbd9..94dc8ca 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -23,6 +23,7 @@
 #define KVM_FEATURE_ASYNC_PF		4
 #define KVM_FEATURE_STEAL_TIME		5
 #define KVM_FEATURE_PV_EOI		6
+#define KVM_FEATURE_PV_UNHALT		7
 
 /* The last 8 bits are used to indicate how to interpret the flags field
  * in pvclock structure. If no bits are set, all flags are ignored.
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index a20ecb5..b110fe6 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -413,7 +413,8 @@ static int do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 			     (1 << KVM_FEATURE_CLOCKSOURCE2) |
 			     (1 << KVM_FEATURE_ASYNC_PF) |
 			     (1 << KVM_FEATURE_PV_EOI) |
-			     (1 << KVM_FEATURE_CLOCKSOURCE_STABLE_BIT);
+			     (1 << KVM_FEATURE_CLOCKSOURCE_STABLE_BIT) |
+			     (1 << KVM_FEATURE_PV_UNHALT);
 
 		if (sched_info_on())
 			entry->eax |= (1 << KVM_FEATURE_STEAL_TIME);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 094b5d9..f8bea30 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5449,6 +5449,36 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+/*
+ * kvm_pv_kick_cpu_op:  Kick a vcpu.
+ *
+ * @apicid - apicid of vcpu to be kicked.
+ */
+static void kvm_pv_kick_cpu_op(struct kvm *kvm, int apicid)
+{
+	struct kvm_vcpu *vcpu = NULL;
+	int i;
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		if (!kvm_apic_present(vcpu))
+			continue;
+
+		if (kvm_apic_match_dest(vcpu, 0, 0, apicid, 0))
+			break;
+	}
+	if (vcpu) {
+		/*
+		 * Setting unhalt flag here can result in spurious runnable
+		 * state when unhalt reset does not happen in vcpu_block.
+		 * But that is harmless since that should soon result in halt.
+		 */
+		vcpu->arch.pv.pv_unhalted = true;
+		/* We need everybody see unhalt before vcpu unblocks */
+		smp_wmb();
+		kvm_vcpu_kick(vcpu);
+	}
+}
+
 int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 {
 	unsigned long nr, a0, a1, a2, a3, ret;
@@ -5482,6 +5512,10 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 	case KVM_HC_VAPIC_POLL_IRQ:
 		ret = 0;
 		break;
+	case KVM_HC_KICK_CPU:
+		kvm_pv_kick_cpu_op(vcpu->kvm, a0);
+		ret = 0;
+		break;
 	default:
 		ret = -KVM_ENOSYS;
 		break;
@@ -5909,6 +5943,7 @@ static int __vcpu_run(struct kvm_vcpu *vcpu)
 				kvm_apic_accept_events(vcpu);
 				switch(vcpu->arch.mp_state) {
 				case KVM_MP_STATE_HALTED:
+					vcpu->arch.pv.pv_unhalted = false;
 					vcpu->arch.mp_state =
 						KVM_MP_STATE_RUNNABLE;
 				case KVM_MP_STATE_RUNNABLE:
@@ -6729,6 +6764,7 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 	BUG_ON(vcpu->kvm == NULL);
 	kvm = vcpu->kvm;
 
+	vcpu->arch.pv.pv_unhalted = false;
 	vcpu->arch.emulate_ctxt.ops = &emulate_ops;
 	if (!irqchip_in_kernel(kvm) || kvm_vcpu_is_bsp(vcpu))
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
@@ -7065,6 +7101,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
 		!vcpu->arch.apf.halted)
 		|| !list_empty_careful(&vcpu->async_pf.done)
 		|| kvm_apic_has_events(vcpu)
+		|| vcpu->arch.pv.pv_unhalted
 		|| atomic_read(&vcpu->arch.nmi_queued) ||
 		(kvm_arch_interrupt_allowed(vcpu) &&
 		 kvm_cpu_has_interrupt(vcpu));
diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
index cea2c5c..2841f86 100644
--- a/include/uapi/linux/kvm_para.h
+++ b/include/uapi/linux/kvm_para.h
@@ -19,6 +19,7 @@
 #define KVM_HC_MMU_OP			2
 #define KVM_HC_FEATURES			3
 #define KVM_HC_PPC_MAP_MAGIC_PAGE	4
+#define KVM_HC_KICK_CPU			5
 
 /*
  * hypercalls use architecture specific


^ permalink raw reply related	[flat|nested] 192+ messages in thread

* [PATCH RFC V9 13/19] kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks
  2013-06-01 19:21 ` Raghavendra K T
                   ` (20 preceding siblings ...)
  (?)
@ 2013-06-01 19:25 ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:25 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: gregkh, kvm, linux-doc, peterz, drjones, virtualization, andi,
	xen-devel, Raghavendra K T, habanero, riel, stefano.stabellini,
	ouyang, avi.kivity, tglx, chegu_vinod, linux-kernel,
	srivatsa.vaddagiri, attilio.rao, torvalds, stephan.diestelhorst

kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks

From: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>

kvm_hc_kick_cpu allows the calling vcpu to kick another vcpu out of halt state.
the presence of these hypercalls is indicated to guest via
kvm_feature_pv_unhalt.

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Suzuki Poulose <suzuki@in.ibm.com>
[Raghu: Apic related changes, folding pvunhalted into vcpu_runnable]
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/include/asm/kvm_host.h      |    5 +++++
 arch/x86/include/uapi/asm/kvm_para.h |    1 +
 arch/x86/kvm/cpuid.c                 |    3 ++-
 arch/x86/kvm/x86.c                   |   37 ++++++++++++++++++++++++++++++++++
 include/uapi/linux/kvm_para.h        |    1 +
 5 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3741c65..95702de 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -503,6 +503,11 @@ struct kvm_vcpu_arch {
 	 * instruction.
 	 */
 	bool write_fault_to_shadow_pgtable;
+
+	/* pv related host specific info */
+	struct {
+		bool pv_unhalted;
+	} pv;
 };
 
 struct kvm_lpage_info {
diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 06fdbd9..94dc8ca 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -23,6 +23,7 @@
 #define KVM_FEATURE_ASYNC_PF		4
 #define KVM_FEATURE_STEAL_TIME		5
 #define KVM_FEATURE_PV_EOI		6
+#define KVM_FEATURE_PV_UNHALT		7
 
 /* The last 8 bits are used to indicate how to interpret the flags field
  * in pvclock structure. If no bits are set, all flags are ignored.
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index a20ecb5..b110fe6 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -413,7 +413,8 @@ static int do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 			     (1 << KVM_FEATURE_CLOCKSOURCE2) |
 			     (1 << KVM_FEATURE_ASYNC_PF) |
 			     (1 << KVM_FEATURE_PV_EOI) |
-			     (1 << KVM_FEATURE_CLOCKSOURCE_STABLE_BIT);
+			     (1 << KVM_FEATURE_CLOCKSOURCE_STABLE_BIT) |
+			     (1 << KVM_FEATURE_PV_UNHALT);
 
 		if (sched_info_on())
 			entry->eax |= (1 << KVM_FEATURE_STEAL_TIME);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 094b5d9..f8bea30 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5449,6 +5449,36 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+/*
+ * kvm_pv_kick_cpu_op:  Kick a vcpu.
+ *
+ * @apicid - apicid of vcpu to be kicked.
+ */
+static void kvm_pv_kick_cpu_op(struct kvm *kvm, int apicid)
+{
+	struct kvm_vcpu *vcpu = NULL;
+	int i;
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		if (!kvm_apic_present(vcpu))
+			continue;
+
+		if (kvm_apic_match_dest(vcpu, 0, 0, apicid, 0))
+			break;
+	}
+	if (vcpu) {
+		/*
+		 * Setting unhalt flag here can result in spurious runnable
+		 * state when unhalt reset does not happen in vcpu_block.
+		 * But that is harmless since that should soon result in halt.
+		 */
+		vcpu->arch.pv.pv_unhalted = true;
+		/* We need everybody see unhalt before vcpu unblocks */
+		smp_wmb();
+		kvm_vcpu_kick(vcpu);
+	}
+}
+
 int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 {
 	unsigned long nr, a0, a1, a2, a3, ret;
@@ -5482,6 +5512,10 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 	case KVM_HC_VAPIC_POLL_IRQ:
 		ret = 0;
 		break;
+	case KVM_HC_KICK_CPU:
+		kvm_pv_kick_cpu_op(vcpu->kvm, a0);
+		ret = 0;
+		break;
 	default:
 		ret = -KVM_ENOSYS;
 		break;
@@ -5909,6 +5943,7 @@ static int __vcpu_run(struct kvm_vcpu *vcpu)
 				kvm_apic_accept_events(vcpu);
 				switch(vcpu->arch.mp_state) {
 				case KVM_MP_STATE_HALTED:
+					vcpu->arch.pv.pv_unhalted = false;
 					vcpu->arch.mp_state =
 						KVM_MP_STATE_RUNNABLE;
 				case KVM_MP_STATE_RUNNABLE:
@@ -6729,6 +6764,7 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 	BUG_ON(vcpu->kvm == NULL);
 	kvm = vcpu->kvm;
 
+	vcpu->arch.pv.pv_unhalted = false;
 	vcpu->arch.emulate_ctxt.ops = &emulate_ops;
 	if (!irqchip_in_kernel(kvm) || kvm_vcpu_is_bsp(vcpu))
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
@@ -7065,6 +7101,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
 		!vcpu->arch.apf.halted)
 		|| !list_empty_careful(&vcpu->async_pf.done)
 		|| kvm_apic_has_events(vcpu)
+		|| vcpu->arch.pv.pv_unhalted
 		|| atomic_read(&vcpu->arch.nmi_queued) ||
 		(kvm_arch_interrupt_allowed(vcpu) &&
 		 kvm_cpu_has_interrupt(vcpu));
diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
index cea2c5c..2841f86 100644
--- a/include/uapi/linux/kvm_para.h
+++ b/include/uapi/linux/kvm_para.h
@@ -19,6 +19,7 @@
 #define KVM_HC_MMU_OP			2
 #define KVM_HC_FEATURES			3
 #define KVM_HC_PPC_MAP_MAGIC_PAGE	4
+#define KVM_HC_KICK_CPU			5
 
 /*
  * hypercalls use architecture specific
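
For reference, the guest-side caller of this hypercall (added by the guest patch
later in this series) boils down to a single kvm_hypercall1() carrying the
target vcpu's APIC ID. A minimal sketch, assuming a guest kernel built with
CONFIG_KVM_GUEST; example_kick_vcpu() is an illustrative name, not a symbol
from the series:

#include <linux/kvm_para.h>	/* kvm_hypercall1(), KVM_HC_KICK_CPU */
#include <linux/percpu.h>	/* per_cpu() */
#include <asm/smp.h>		/* x86_cpu_to_apicid */

/* Sketch: kick the (possibly halted) vcpu backing 'cpu' by its APIC ID. */
static void example_kick_vcpu(int cpu)
{
	int apicid = per_cpu(x86_cpu_to_apicid, cpu);

	kvm_hypercall1(KVM_HC_KICK_CPU, apicid);
}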


* [PATCH RFC V9 14/19] kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration
  2013-06-01 19:21 ` Raghavendra K T
@ 2013-06-01 19:25   ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:25 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration

From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>

During migration, any vcpu that got kicked but did not become runnable
(still in halted state) should be runnable after migration.

Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/kvm/x86.c |    7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f8bea30..92a9932 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6243,7 +6243,12 @@ int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
 				    struct kvm_mp_state *mp_state)
 {
 	kvm_apic_accept_events(vcpu);
-	mp_state->mp_state = vcpu->arch.mp_state;
+	if (vcpu->arch.mp_state == KVM_MP_STATE_HALTED &&
+					vcpu->arch.pv.pv_unhalted)
+		mp_state->mp_state = KVM_MP_STATE_RUNNABLE;
+	else
+		mp_state->mp_state = vcpu->arch.mp_state;
+
 	return 0;
 }
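
From the VMM side, the folded state is visible through the existing
KVM_GET_MP_STATE vcpu ioctl: a vcpu that was kicked while halted now reads back
as runnable, so the destination resumes it in the runnable state. A userspace
sketch (not code from the series; vcpu_fd is assumed to be an already-open KVM
vcpu file descriptor):

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Returns 1 if the vcpu would be saved as runnable, 0 if halted, -1 on error. */
static int vcpu_saved_runnable(int vcpu_fd)
{
	struct kvm_mp_state mp;

	if (ioctl(vcpu_fd, KVM_GET_MP_STATE, &mp) < 0)
		return -1;
	/* With this patch, a halted vcpu with a pending pv kick reports RUNNABLE. */
	return mp.mp_state == KVM_MP_STATE_RUNNABLE;
}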
 



* [PATCH RFC V9 15/19] kvm guest : Add configuration support to enable debug information for KVM Guests
  2013-06-01 19:21 ` Raghavendra K T
  (?)
@ 2013-06-01 19:25   ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:25 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

kvm guest : Add configuration support to enable debug information for KVM Guests

From: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Suzuki Poulose <suzuki@in.ibm.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/Kconfig |    9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 80fcc4b..f8ff42d 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -646,6 +646,15 @@ config KVM_GUEST
 	  underlying device model, the host provides the guest with
 	  timing infrastructure such as time of day, and system time
 
+config KVM_DEBUG_FS
+	bool "Enable debug information for KVM Guests in debugfs"
+	depends on KVM_GUEST && DEBUG_FS
+	default n
+	---help---
+	  This option enables collection of various statistics for KVM guest.
+	  Statistics are displayed in debugfs filesystem. Enabling this option
+	  may incur significant overhead.
+
 source "arch/x86/lguest/Kconfig"
 
 config PARAVIRT_TIME_ACCOUNTING



* [PATCH RFC V9 16/19] kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor
  2013-06-01 19:21 ` Raghavendra K T
@ 2013-06-01 19:25   ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:25 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor

From: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>

During smp_boot_cpus, a paravirtualized KVM guest detects whether the hypervisor
has the required feature (KVM_FEATURE_PV_UNHALT) to support pv-ticketlocks. If so,
support for pv-ticketlocks is registered via pv_lock_ops.

Use the KVM_HC_KICK_CPU hypercall to wake up a waiting/halted vcpu.

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Suzuki Poulose <suzuki@in.ibm.com>
[Raghu: check_zero race fix, enum for kvm_contention_stat
jumplabel related changes ]
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/include/asm/kvm_para.h |   14 ++
 arch/x86/kernel/kvm.c           |  256 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 268 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
index 695399f..427afcb 100644
--- a/arch/x86/include/asm/kvm_para.h
+++ b/arch/x86/include/asm/kvm_para.h
@@ -118,10 +118,20 @@ void kvm_async_pf_task_wait(u32 token);
 void kvm_async_pf_task_wake(u32 token);
 u32 kvm_read_and_reset_pf_reason(void);
 extern void kvm_disable_steal_time(void);
-#else
-#define kvm_guest_init() do { } while (0)
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+void __init kvm_spinlock_init(void);
+#else /* !CONFIG_PARAVIRT_SPINLOCKS */
+static inline void kvm_spinlock_init(void)
+{
+}
+#endif /* CONFIG_PARAVIRT_SPINLOCKS */
+
+#else /* CONFIG_KVM_GUEST */
+#define kvm_guest_init() do {} while (0)
 #define kvm_async_pf_task_wait(T) do {} while(0)
 #define kvm_async_pf_task_wake(T) do {} while(0)
+
 static inline u32 kvm_read_and_reset_pf_reason(void)
 {
 	return 0;
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index cd6d9a5..2715b92 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -34,6 +34,7 @@
 #include <linux/sched.h>
 #include <linux/slab.h>
 #include <linux/kprobes.h>
+#include <linux/debugfs.h>
 #include <asm/timer.h>
 #include <asm/cpu.h>
 #include <asm/traps.h>
@@ -419,6 +420,7 @@ static void __init kvm_smp_prepare_boot_cpu(void)
 	WARN_ON(kvm_register_clock("primary cpu clock"));
 	kvm_guest_cpu_init();
 	native_smp_prepare_boot_cpu();
+	kvm_spinlock_init();
 }
 
 static void __cpuinit kvm_guest_cpu_online(void *dummy)
@@ -523,3 +525,257 @@ static __init int activate_jump_labels(void)
 	return 0;
 }
 arch_initcall(activate_jump_labels);
+
+/* Kick a cpu by its apicid. Used to wake up a halted vcpu */
+void kvm_kick_cpu(int cpu)
+{
+	int apicid;
+
+	apicid = per_cpu(x86_cpu_to_apicid, cpu);
+	kvm_hypercall1(KVM_HC_KICK_CPU, apicid);
+}
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+
+enum kvm_contention_stat {
+	TAKEN_SLOW,
+	TAKEN_SLOW_PICKUP,
+	RELEASED_SLOW,
+	RELEASED_SLOW_KICKED,
+	NR_CONTENTION_STATS
+};
+
+#ifdef CONFIG_KVM_DEBUG_FS
+#define HISTO_BUCKETS	30
+
+static struct kvm_spinlock_stats
+{
+	u32 contention_stats[NR_CONTENTION_STATS];
+	u32 histo_spin_blocked[HISTO_BUCKETS+1];
+	u64 time_blocked;
+} spinlock_stats;
+
+static u8 zero_stats;
+
+static inline void check_zero(void)
+{
+	u8 ret;
+	u8 old;
+
+	old = ACCESS_ONCE(zero_stats);
+	if (unlikely(old)) {
+		ret = cmpxchg(&zero_stats, old, 0);
+		/* This ensures only one fellow resets the stat */
+		if (ret == old)
+			memset(&spinlock_stats, 0, sizeof(spinlock_stats));
+	}
+}
+
+static inline void add_stats(enum kvm_contention_stat var, u32 val)
+{
+	check_zero();
+	spinlock_stats.contention_stats[var] += val;
+}
+
+
+static inline u64 spin_time_start(void)
+{
+	return sched_clock();
+}
+
+static void __spin_time_accum(u64 delta, u32 *array)
+{
+	unsigned index;
+
+	index = ilog2(delta);
+	check_zero();
+
+	if (index < HISTO_BUCKETS)
+		array[index]++;
+	else
+		array[HISTO_BUCKETS]++;
+}
+
+static inline void spin_time_accum_blocked(u64 start)
+{
+	u32 delta;
+
+	delta = sched_clock() - start;
+	__spin_time_accum(delta, spinlock_stats.histo_spin_blocked);
+	spinlock_stats.time_blocked += delta;
+}
+
+static struct dentry *d_spin_debug;
+static struct dentry *d_kvm_debug;
+
+struct dentry *kvm_init_debugfs(void)
+{
+	d_kvm_debug = debugfs_create_dir("kvm", NULL);
+	if (!d_kvm_debug)
+		printk(KERN_WARNING "Could not create 'kvm' debugfs directory\n");
+
+	return d_kvm_debug;
+}
+
+static int __init kvm_spinlock_debugfs(void)
+{
+	struct dentry *d_kvm;
+
+	d_kvm = kvm_init_debugfs();
+	if (d_kvm == NULL)
+		return -ENOMEM;
+
+	d_spin_debug = debugfs_create_dir("spinlocks", d_kvm);
+
+	debugfs_create_u8("zero_stats", 0644, d_spin_debug, &zero_stats);
+
+	debugfs_create_u32("taken_slow", 0444, d_spin_debug,
+		   &spinlock_stats.contention_stats[TAKEN_SLOW]);
+	debugfs_create_u32("taken_slow_pickup", 0444, d_spin_debug,
+		   &spinlock_stats.contention_stats[TAKEN_SLOW_PICKUP]);
+
+	debugfs_create_u32("released_slow", 0444, d_spin_debug,
+		   &spinlock_stats.contention_stats[RELEASED_SLOW]);
+	debugfs_create_u32("released_slow_kicked", 0444, d_spin_debug,
+		   &spinlock_stats.contention_stats[RELEASED_SLOW_KICKED]);
+
+	debugfs_create_u64("time_blocked", 0444, d_spin_debug,
+			   &spinlock_stats.time_blocked);
+
+	debugfs_create_u32_array("histo_blocked", 0444, d_spin_debug,
+		     spinlock_stats.histo_spin_blocked, HISTO_BUCKETS + 1);
+
+	return 0;
+}
+fs_initcall(kvm_spinlock_debugfs);
+#else  /* !CONFIG_KVM_DEBUG_FS */
+#define TIMEOUT			(1 << 10)
+static inline void add_stats(enum kvm_contention_stat var, u32 val)
+{
+}
+
+static inline u64 spin_time_start(void)
+{
+	return 0;
+}
+
+static inline void spin_time_accum_blocked(u64 start)
+{
+}
+#endif  /* CONFIG_KVM_DEBUG_FS */
+
+struct kvm_lock_waiting {
+	struct arch_spinlock *lock;
+	__ticket_t want;
+};
+
+/* cpus 'waiting' on a spinlock to become available */
+static cpumask_t waiting_cpus;
+
+/* Track spinlock on which a cpu is waiting */
+static DEFINE_PER_CPU(struct kvm_lock_waiting, lock_waiting);
+
+static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
+{
+	struct kvm_lock_waiting *w;
+	int cpu;
+	u64 start;
+	unsigned long flags;
+
+	w = &__get_cpu_var(lock_waiting);
+	cpu = smp_processor_id();
+	start = spin_time_start();
+
+	/*
+	 * Make sure an interrupt handler can't upset things in a
+	 * partially setup state.
+	 */
+	local_irq_save(flags);
+
+	/*
+	 * The ordering protocol on this is that the "lock" pointer
+	 * may only be set non-NULL if the "want" ticket is correct.
+	 * If we're updating "want", we must first clear "lock".
+	 */
+	w->lock = NULL;
+	smp_wmb();
+	w->want = want;
+	smp_wmb();
+	w->lock = lock;
+
+	add_stats(TAKEN_SLOW, 1);
+
+	/*
+	 * This uses set_bit, which is atomic but we should not rely on its
+	 * reordering guarantees. So barrier is needed after this call.
+	 */
+	cpumask_set_cpu(cpu, &waiting_cpus);
+
+	barrier();
+
+	/*
+	 * Mark entry to slowpath before doing the pickup test to make
+	 * sure we don't deadlock with an unlocker.
+	 */
+	__ticket_enter_slowpath(lock);
+
+	/*
+	 * check again make sure it didn't become free while
+	 * we weren't looking.
+	 */
+	if (ACCESS_ONCE(lock->tickets.head) == want) {
+		add_stats(TAKEN_SLOW_PICKUP, 1);
+		goto out;
+	}
+
+	/* Allow interrupts while blocked */
+	local_irq_restore(flags);
+
+	/* halt until it's our turn and kicked. */
+	halt();
+
+	local_irq_save(flags);
+out:
+	cpumask_clear_cpu(cpu, &waiting_cpus);
+	w->lock = NULL;
+	local_irq_restore(flags);
+	spin_time_accum_blocked(start);
+}
+PV_CALLEE_SAVE_REGS_THUNK(kvm_lock_spinning);
+
+/* Kick vcpu waiting on @lock->head to reach value @ticket */
+static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
+{
+	int cpu;
+
+	add_stats(RELEASED_SLOW, 1);
+	for_each_cpu(cpu, &waiting_cpus) {
+		const struct kvm_lock_waiting *w = &per_cpu(lock_waiting, cpu);
+		if (ACCESS_ONCE(w->lock) == lock &&
+		    ACCESS_ONCE(w->want) == ticket) {
+			add_stats(RELEASED_SLOW_KICKED, 1);
+			kvm_kick_cpu(cpu);
+			break;
+		}
+	}
+}
+
+/*
+ * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
+ */
+void __init kvm_spinlock_init(void)
+{
+	if (!kvm_para_available())
+		return;
+	/* Does host kernel support KVM_FEATURE_PV_UNHALT? */
+	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
+		return;
+
+	printk(KERN_INFO"KVM setup paravirtual spinlock\n");
+
+	static_key_slow_inc(&paravirt_ticketlocks_enabled);
+
+	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
+	pv_lock_ops.unlock_kick = kvm_unlock_kick;
+}
+#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
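
When both CONFIG_PARAVIRT_SPINLOCKS and the CONFIG_KVM_DEBUG_FS option from the
previous patch are enabled, the counters registered by kvm_spinlock_debugfs()
above can be inspected from guest userspace. A small sketch (not part of the
series), assuming debugfs is mounted at the conventional /sys/kernel/debug:

#include <stdio.h>

int main(void)
{
	unsigned int taken_slow = 0, released_slow = 0;
	FILE *f;

	/* u32 counters created as debugfs files by the patch above. */
	f = fopen("/sys/kernel/debug/kvm/spinlocks/taken_slow", "r");
	if (f) {
		if (fscanf(f, "%u", &taken_slow) != 1)
			taken_slow = 0;
		fclose(f);
	}
	f = fopen("/sys/kernel/debug/kvm/spinlocks/released_slow", "r");
	if (f) {
		if (fscanf(f, "%u", &released_slow) != 1)
			released_slow = 0;
		fclose(f);
	}
	printf("slowpath acquisitions: %u, slowpath unlocks: %u\n",
	       taken_slow, released_slow);
	return 0;
}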



* [PATCH RFC V9 17/19] kvm hypervisor : Simplify kvm_for_each_vcpu with kvm_irq_delivery_to_apic
  2013-06-01 19:21 ` Raghavendra K T
@ 2013-06-01 19:26   ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:26 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

Simplify kvm_for_each_vcpu with kvm_irq_delivery_to_apic

From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>

Note that we are using APIC_DM_REMRD, which has reserved usage.
If APIC_DM_REMRD usage is standardized in the future, we should
find some other way or go back to the old method.

Suggested-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/x86/kvm/lapic.c |    5 ++++-
 arch/x86/kvm/x86.c   |   25 ++++++-------------------
 2 files changed, 10 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index e1adbb4..3f5f82e 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -706,7 +706,10 @@ out:
 		break;
 
 	case APIC_DM_REMRD:
-		apic_debug("Ignoring delivery mode 3\n");
+		result = 1;
+		vcpu->arch.pv.pv_unhalted = 1;
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
+		kvm_vcpu_kick(vcpu);
 		break;
 
 	case APIC_DM_SMI:
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 92a9932..b963c86 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5456,27 +5456,14 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
  */
 static void kvm_pv_kick_cpu_op(struct kvm *kvm, int apicid)
 {
-	struct kvm_vcpu *vcpu = NULL;
-	int i;
+	struct kvm_lapic_irq lapic_irq;
 
-	kvm_for_each_vcpu(i, vcpu, kvm) {
-		if (!kvm_apic_present(vcpu))
-			continue;
+	lapic_irq.shorthand = 0;
+	lapic_irq.dest_mode = 0;
+	lapic_irq.dest_id = apicid;
 
-		if (kvm_apic_match_dest(vcpu, 0, 0, apicid, 0))
-			break;
-	}
-	if (vcpu) {
-		/*
-		 * Setting unhalt flag here can result in spurious runnable
-		 * state when unhalt reset does not happen in vcpu_block.
-		 * But that is harmless since that should soon result in halt.
-		 */
-		vcpu->arch.pv.pv_unhalted = true;
-		/* We need everybody see unhalt before vcpu unblocks */
-		smp_wmb();
-		kvm_vcpu_kick(vcpu);
-	}
+	lapic_irq.delivery_mode = APIC_DM_REMRD;
+	kvm_irq_delivery_to_apic(kvm, 0, &lapic_irq, NULL);
 }
 
 int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
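
For clarity, the replacement above is equivalent to the following
designated-initializer sketch (not a separate helper in the patch, and it
assumes the declarations available in arch/x86/kvm/x86.c): the kick becomes a
physical-destination interrupt with the reserved APIC_DM_REMRD delivery mode,
so kvm_irq_delivery_to_apic() performs the APIC-ID match that the removed
kvm_for_each_vcpu() loop did by hand.

/* Sketch only: mirrors the body of kvm_pv_kick_cpu_op() after this patch. */
static void example_kick_by_apicid(struct kvm *kvm, int apicid)
{
	struct kvm_lapic_irq lapic_irq = {
		.delivery_mode	= APIC_DM_REMRD,
		.dest_mode	= 0,		/* physical destination */
		.shorthand	= 0,		/* no shorthand: use dest_id */
		.dest_id	= apicid,
	};

	kvm_irq_delivery_to_apic(kvm, NULL, &lapic_irq, NULL);
}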



* [PATCH RFC V9 18/19] Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
  2013-06-01 19:21 ` Raghavendra K T
  (?)
@ 2013-06-01 19:26   ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:26 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock

From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>

The KVM_HC_KICK_CPU hypercall is added to wake up a halted vcpu in a
paravirtual-spinlock-enabled guest.

KVM_FEATURE_PV_UNHALT lets the guest check whether pv spinlocks can be enabled
in the guest.

Thanks to Vatsa for rewriting KVM_HC_KICK_CPU.

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 Documentation/virtual/kvm/cpuid.txt      |    4 ++++
 Documentation/virtual/kvm/hypercalls.txt |   13 +++++++++++++
 2 files changed, 17 insertions(+)

diff --git a/Documentation/virtual/kvm/cpuid.txt b/Documentation/virtual/kvm/cpuid.txt
index 83afe65..654f43c 100644
--- a/Documentation/virtual/kvm/cpuid.txt
+++ b/Documentation/virtual/kvm/cpuid.txt
@@ -43,6 +43,10 @@ KVM_FEATURE_CLOCKSOURCE2           ||     3 || kvmclock available at msrs
 KVM_FEATURE_ASYNC_PF               ||     4 || async pf can be enabled by
                                    ||       || writing to msr 0x4b564d02
 ------------------------------------------------------------------------------
+KVM_FEATURE_PV_UNHALT              ||     7 || guest checks this feature bit
+                                   ||       || before enabling paravirtualized
+                                   ||       || spinlock support.
+------------------------------------------------------------------------------
 KVM_FEATURE_CLOCKSOURCE_STABLE_BIT ||    24 || host will warn if no guest-side
                                    ||       || per-cpu warps are expected in
                                    ||       || kvmclock.
diff --git a/Documentation/virtual/kvm/hypercalls.txt b/Documentation/virtual/kvm/hypercalls.txt
index ea113b5..2a4da11 100644
--- a/Documentation/virtual/kvm/hypercalls.txt
+++ b/Documentation/virtual/kvm/hypercalls.txt
@@ -64,3 +64,16 @@ Purpose: To enable communication between the hypervisor and guest there is a
 shared page that contains parts of supervisor visible register state.
 The guest can map this shared page to access its supervisor register through
 memory using this hypercall.
+
+5. KVM_HC_KICK_CPU
+------------------------
+Architecture: x86
+Status: active
+Purpose: Hypercall used to wakeup a vcpu from HLT state
+Usage example : A vcpu of a paravirtualized guest that is busywaiting in guest
+kernel mode for an event to occur (ex: a spinlock to become available) can
+execute HLT instruction once it has busy-waited for more than a threshold
+time-interval. Execution of HLT instruction would cause the hypervisor to put
+the vcpu to sleep until occurrence of an appropriate event. Another vcpu of the
+same guest can wake up the sleeping vcpu by issuing the KVM_HC_KICK_CPU hypercall,
+specifying the APIC ID of the vcpu to be woken up.
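
Per the cpuid.txt entry above, the feature is advertised via the
KVM_FEATURE_PV_UNHALT bit of the KVM features CPUID leaf, and the guest patch
earlier in the series gates pv-ticketlock registration on it. A minimal
guest-kernel sketch of that check (example_pv_unhalt_supported() is an
illustrative name, not a symbol from the series):

#include <linux/kvm_para.h>	/* kvm_para_available(), kvm_para_has_feature() */

/* Sketch: non-zero if the host advertises KVM_FEATURE_PV_UNHALT to this guest. */
static int example_pv_unhalt_supported(void)
{
	return kvm_para_available() &&
	       kvm_para_has_feature(KVM_FEATURE_PV_UNHALT);
}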



* [PATCH RFC V9 18/19] Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
@ 2013-06-01 19:26   ` Raghavendra K T
  0 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:26 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: gregkh, kvm, linux-doc, peterz, drjones, virtualization, andi,
	xen-devel, Raghavendra K T, habanero, riel, stefano.stabellini,
	ouyang, avi.kivity, tglx, chegu_vinod, linux-kernel,
	srivatsa.vaddagiri, attilio.rao, torvalds, stephan.diestelhorst

Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock

From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>

KVM_HC_KICK_CPU  hypercall added to wakeup halted vcpu in paravirtual spinlock
enabled guest.

KVM_FEATURE_PV_UNHALT enables guest to check whether pv spinlock can be enabled
in guest.

Thanks Vatsa for rewriting KVM_HC_KICK_CPU

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 Documentation/virtual/kvm/cpuid.txt      |    4 ++++
 Documentation/virtual/kvm/hypercalls.txt |   13 +++++++++++++
 2 files changed, 17 insertions(+)

diff --git a/Documentation/virtual/kvm/cpuid.txt b/Documentation/virtual/kvm/cpuid.txt
index 83afe65..654f43c 100644
--- a/Documentation/virtual/kvm/cpuid.txt
+++ b/Documentation/virtual/kvm/cpuid.txt
@@ -43,6 +43,10 @@ KVM_FEATURE_CLOCKSOURCE2           ||     3 || kvmclock available at msrs
 KVM_FEATURE_ASYNC_PF               ||     4 || async pf can be enabled by
                                    ||       || writing to msr 0x4b564d02
 ------------------------------------------------------------------------------
+KVM_FEATURE_PV_UNHALT              ||     6 || guest checks this feature bit
+                                   ||       || before enabling paravirtualized
+                                   ||       || spinlock support.
+------------------------------------------------------------------------------
 KVM_FEATURE_CLOCKSOURCE_STABLE_BIT ||    24 || host will warn if no guest-side
                                    ||       || per-cpu warps are expected in
                                    ||       || kvmclock.
diff --git a/Documentation/virtual/kvm/hypercalls.txt b/Documentation/virtual/kvm/hypercalls.txt
index ea113b5..2a4da11 100644
--- a/Documentation/virtual/kvm/hypercalls.txt
+++ b/Documentation/virtual/kvm/hypercalls.txt
@@ -64,3 +64,16 @@ Purpose: To enable communication between the hypervisor and guest there is a
 shared page that contains parts of supervisor visible register state.
 The guest can map this shared page to access its supervisor register through
 memory using this hypercall.
+
+5. KVM_HC_KICK_CPU
+------------------------
+Architecture: x86
+Status: active
+Purpose: Hypercall used to wakeup a vcpu from HLT state
+Usage example : A vcpu of a paravirtualized guest that is busywaiting in guest
+kernel mode for an event to occur (ex: a spinlock to become available) can
+execute HLT instruction once it has busy-waited for more than a threshold
+time-interval. Execution of HLT instruction would cause the hypervisor to put
+the vcpu to sleep until occurence of an appropriate event. Another vcpu of the
+same guest can wakeup the sleeping vcpu by issuing KVM_HC_KICK_CPU hypercall,
+specifying APIC ID of the vcpu to be wokenup.

^ permalink raw reply related	[flat|nested] 192+ messages in thread

* [PATCH RFC V9 19/19] kvm hypervisor: Add directed yield in vcpu block path
  2013-06-01 19:21 ` Raghavendra K T
  (?)
@ 2013-06-01 19:26   ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:26 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri

kvm hypervisor: Add directed yield in vcpu block path

From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>

We use the improved PLE handler logic in the vcpu block path for
scheduling rather than plain schedule(), so that we can make
intelligent decisions about which vcpu to yield to.

Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 arch/ia64/include/asm/kvm_host.h    |    5 +++++
 arch/powerpc/include/asm/kvm_host.h |    5 +++++
 arch/s390/include/asm/kvm_host.h    |    5 +++++
 arch/x86/include/asm/kvm_host.h     |    2 +-
 arch/x86/kvm/x86.c                  |    8 ++++++++
 include/linux/kvm_host.h            |    2 +-
 virt/kvm/kvm_main.c                 |    6 ++++--
 7 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/arch/ia64/include/asm/kvm_host.h b/arch/ia64/include/asm/kvm_host.h
index 989dd3f..999ab15 100644
--- a/arch/ia64/include/asm/kvm_host.h
+++ b/arch/ia64/include/asm/kvm_host.h
@@ -595,6 +595,11 @@ int kvm_emulate_halt(struct kvm_vcpu *vcpu);
 int kvm_pal_emul(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run);
 void kvm_sal_emul(struct kvm_vcpu *vcpu);
 
+static inline void kvm_do_schedule(struct kvm_vcpu *vcpu)
+{
+	schedule();
+}
+
 #define __KVM_HAVE_ARCH_VM_ALLOC 1
 struct kvm *kvm_arch_alloc_vm(void);
 void kvm_arch_free_vm(struct kvm *kvm);
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index af326cd..1aeecc0 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -628,4 +628,9 @@ struct kvm_vcpu_arch {
 #define __KVM_HAVE_ARCH_WQP
 #define __KVM_HAVE_CREATE_DEVICE
 
+static inline void kvm_do_schedule(struct kvm_vcpu *vcpu)
+{
+	schedule();
+}
+
 #endif /* __POWERPC_KVM_HOST_H__ */
diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index 16bd5d1..db09a56 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -266,4 +266,9 @@ struct kvm_arch{
 };
 
 extern int sie64a(struct kvm_s390_sie_block *, u64 *);
+static inline void kvm_do_schedule(struct kvm_vcpu *vcpu)
+{
+	schedule();
+}
+
 #endif
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 95702de..72ff791 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1042,5 +1042,5 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
 int kvm_pmu_read_pmc(struct kvm_vcpu *vcpu, unsigned pmc, u64 *data);
 void kvm_handle_pmu_event(struct kvm_vcpu *vcpu);
 void kvm_deliver_pmi(struct kvm_vcpu *vcpu);
-
+void kvm_do_schedule(struct kvm_vcpu *vcpu);
 #endif /* _ASM_X86_KVM_HOST_H */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b963c86..d26c4be 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7281,6 +7281,14 @@ bool kvm_arch_can_inject_async_page_present(struct kvm_vcpu *vcpu)
 			kvm_x86_ops->interrupt_allowed(vcpu);
 }
 
+void kvm_do_schedule(struct kvm_vcpu *vcpu)
+{
+	/* We try to yield to a kicked vcpu, else do a schedule */
+	if (kvm_vcpu_on_spin(vcpu) <= 0)
+		schedule();
+}
+EXPORT_SYMBOL_GPL(kvm_do_schedule);
+
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_inj_virq);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_page_fault);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f0eea07..39efc18 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -565,7 +565,7 @@ void mark_page_dirty_in_slot(struct kvm *kvm, struct kvm_memory_slot *memslot,
 void kvm_vcpu_block(struct kvm_vcpu *vcpu);
 void kvm_vcpu_kick(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_yield_to(struct kvm_vcpu *target);
-void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu);
+bool kvm_vcpu_on_spin(struct kvm_vcpu *vcpu);
 void kvm_resched(struct kvm_vcpu *vcpu);
 void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
 void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 302681c..8387247 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1685,7 +1685,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 		if (signal_pending(current))
 			break;
 
-		schedule();
+		kvm_do_schedule(vcpu);
 	}
 
 	finish_wait(&vcpu->wq, &wait);
@@ -1786,7 +1786,7 @@ bool kvm_vcpu_eligible_for_directed_yield(struct kvm_vcpu *vcpu)
 }
 #endif
 
-void kvm_vcpu_on_spin(struct kvm_vcpu *me)
+bool kvm_vcpu_on_spin(struct kvm_vcpu *me)
 {
 	struct kvm *kvm = me->kvm;
 	struct kvm_vcpu *vcpu;
@@ -1835,6 +1835,8 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
 
 	/* Ensure vcpu is not eligible during next spinloop */
 	kvm_vcpu_set_dy_eligible(me, false);
+
+	return yielded;
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_on_spin);
 


^ permalink raw reply related	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 1/19]  x86/spinlock: Replace pv spinlocks with pv ticketlocks
  2013-06-01 19:21   ` Raghavendra K T
@ 2013-06-01 20:32     ` Jeremy Fitzhardinge
  -1 siblings, 0 replies; 192+ messages in thread
From: Jeremy Fitzhardinge @ 2013-06-01 20:32 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: gleb, mingo, x86, konrad.wilk, hpa, pbonzini, linux-doc,
	habanero, xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On 06/01/2013 12:21 PM, Raghavendra K T wrote:
> x86/spinlock: Replace pv spinlocks with pv ticketlocks
>
> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
I'm not sure what the etiquette is here; I did the work while at Citrix,
but jeremy@goop.org is my canonical email address.  The Citrix address
is dead and bounces, so is useless for anything.  Probably best to
change it.

    J

>
> Rather than outright replacing the entire spinlock implementation in
> order to paravirtualize it, keep the ticket lock implementation but add
> a couple of pvops hooks on the slow path (long spin on lock, unlocking
> a contended lock).
>
> Ticket locks have a number of nice properties, but they also have some
> surprising behaviours in virtual environments.  They enforce a strict
> FIFO ordering on cpus trying to take a lock; however, if the hypervisor
> scheduler does not schedule the cpus in the correct order, the system can
> waste a huge amount of time spinning until the next cpu can take the lock.
>
> (See Thomas Friebel's talk "Prevent Guests from Spinning Around"
> http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)
>
> To address this, we add two hooks:
>  - __ticket_spin_lock which is called after the cpu has been
>    spinning on the lock for a significant number of iterations but has
>    failed to take the lock (presumably because the cpu holding the lock
>    has been descheduled).  The lock_spinning pvop is expected to block
>    the cpu until it has been kicked by the current lock holder.
>  - __ticket_spin_unlock, which, on releasing a contended lock
>    (there are more cpus with tail tickets), looks to see if the next
>    cpu is blocked and wakes it if so.
>
> When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub
> functions causes all the extra code to go away.
>
> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Tested-by: Attilio Rao <attilio.rao@citrix.com>
> [ Raghavendra: Changed SPIN_THRESHOLD ]
> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> ---
>  arch/x86/include/asm/paravirt.h       |   32 ++++----------------
>  arch/x86/include/asm/paravirt_types.h |   10 ++----
>  arch/x86/include/asm/spinlock.h       |   53 +++++++++++++++++++++++++++------
>  arch/x86/include/asm/spinlock_types.h |    4 --
>  arch/x86/kernel/paravirt-spinlocks.c  |   15 +--------
>  arch/x86/xen/spinlock.c               |    8 ++++-
>  6 files changed, 61 insertions(+), 61 deletions(-)
>
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index cfdc9ee..040e72d 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -712,36 +712,16 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
>  
>  #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
>  
> -static inline int arch_spin_is_locked(struct arch_spinlock *lock)
> +static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
> +							__ticket_t ticket)
>  {
> -	return PVOP_CALL1(int, pv_lock_ops.spin_is_locked, lock);
> +	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
>  }
>  
> -static inline int arch_spin_is_contended(struct arch_spinlock *lock)
> +static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
> +							__ticket_t ticket)
>  {
> -	return PVOP_CALL1(int, pv_lock_ops.spin_is_contended, lock);
> -}
> -#define arch_spin_is_contended	arch_spin_is_contended
> -
> -static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
> -{
> -	PVOP_VCALL1(pv_lock_ops.spin_lock, lock);
> -}
> -
> -static __always_inline void arch_spin_lock_flags(struct arch_spinlock *lock,
> -						  unsigned long flags)
> -{
> -	PVOP_VCALL2(pv_lock_ops.spin_lock_flags, lock, flags);
> -}
> -
> -static __always_inline int arch_spin_trylock(struct arch_spinlock *lock)
> -{
> -	return PVOP_CALL1(int, pv_lock_ops.spin_trylock, lock);
> -}
> -
> -static __always_inline void arch_spin_unlock(struct arch_spinlock *lock)
> -{
> -	PVOP_VCALL1(pv_lock_ops.spin_unlock, lock);
> +	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
>  }
>  
>  #endif
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index 0db1fca..d5deb6d 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -327,13 +327,11 @@ struct pv_mmu_ops {
>  };
>  
>  struct arch_spinlock;
> +#include <asm/spinlock_types.h>
> +
>  struct pv_lock_ops {
> -	int (*spin_is_locked)(struct arch_spinlock *lock);
> -	int (*spin_is_contended)(struct arch_spinlock *lock);
> -	void (*spin_lock)(struct arch_spinlock *lock);
> -	void (*spin_lock_flags)(struct arch_spinlock *lock, unsigned long flags);
> -	int (*spin_trylock)(struct arch_spinlock *lock);
> -	void (*spin_unlock)(struct arch_spinlock *lock);
> +	void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
> +	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
>  };
>  
>  /* This contains all the paravirt structures: we get a convenient
> diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
> index 33692ea..4d54244 100644
> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
> @@ -34,6 +34,35 @@
>  # define UNLOCK_LOCK_PREFIX
>  #endif
>  
> +/* How long a lock should spin before we consider blocking */
> +#define SPIN_THRESHOLD	(1 << 15)
> +
> +#ifndef CONFIG_PARAVIRT_SPINLOCKS
> +
> +static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
> +							__ticket_t ticket)
> +{
> +}
> +
> +static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
> +							 __ticket_t ticket)
> +{
> +}
> +
> +#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
> +
> +
> +/*
> + * If a spinlock has someone waiting on it, then kick the appropriate
> + * waiting cpu.
> + */
> +static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
> +							__ticket_t next)
> +{
> +	if (unlikely(lock->tickets.tail != next))
> +		____ticket_unlock_kick(lock, next);
> +}
> +
>  /*
>   * Ticket locks are conceptually two parts, one indicating the current head of
>   * the queue, and the other indicating the current tail. The lock is acquired
> @@ -47,19 +76,24 @@
>   * in the high part, because a wide xadd increment of the low part would carry
>   * up and contaminate the high part.
>   */
> -static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
> +static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
>  {
>  	register struct __raw_tickets inc = { .tail = 1 };
>  
>  	inc = xadd(&lock->tickets, inc);
>  
>  	for (;;) {
> -		if (inc.head == inc.tail)
> -			break;
> -		cpu_relax();
> -		inc.head = ACCESS_ONCE(lock->tickets.head);
> +		unsigned count = SPIN_THRESHOLD;
> +
> +		do {
> +			if (inc.head == inc.tail)
> +				goto out;
> +			cpu_relax();
> +			inc.head = ACCESS_ONCE(lock->tickets.head);
> +		} while (--count);
> +		__ticket_lock_spinning(lock, inc.tail);
>  	}
> -	barrier();		/* make sure nothing creeps before the lock is taken */
> +out:	barrier();	/* make sure nothing creeps before the lock is taken */
>  }
>  
>  static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
> @@ -78,7 +112,10 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
>  
>  static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
>  {
> +	__ticket_t next = lock->tickets.head + 1;
> +
>  	__add(&lock->tickets.head, 1, UNLOCK_LOCK_PREFIX);
> +	__ticket_unlock_kick(lock, next);
>  }
>  
>  static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
> @@ -95,8 +132,6 @@ static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
>  	return (__ticket_t)(tmp.tail - tmp.head) > 1;
>  }
>  
> -#ifndef CONFIG_PARAVIRT_SPINLOCKS
> -
>  static inline int arch_spin_is_locked(arch_spinlock_t *lock)
>  {
>  	return __ticket_spin_is_locked(lock);
> @@ -129,8 +164,6 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
>  	arch_spin_lock(lock);
>  }
>  
> -#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
> -
>  static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
>  {
>  	while (arch_spin_is_locked(lock))
> diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
> index ad0ad07..83fd3c7 100644
> --- a/arch/x86/include/asm/spinlock_types.h
> +++ b/arch/x86/include/asm/spinlock_types.h
> @@ -1,10 +1,6 @@
>  #ifndef _ASM_X86_SPINLOCK_TYPES_H
>  #define _ASM_X86_SPINLOCK_TYPES_H
>  
> -#ifndef __LINUX_SPINLOCK_TYPES_H
> -# error "please don't include this file directly"
> -#endif
> -
>  #include <linux/types.h>
>  
>  #if (CONFIG_NR_CPUS < 256)
> diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
> index 676b8c7..c2e010e 100644
> --- a/arch/x86/kernel/paravirt-spinlocks.c
> +++ b/arch/x86/kernel/paravirt-spinlocks.c
> @@ -7,21 +7,10 @@
>  
>  #include <asm/paravirt.h>
>  
> -static inline void
> -default_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
> -{
> -	arch_spin_lock(lock);
> -}
> -
>  struct pv_lock_ops pv_lock_ops = {
>  #ifdef CONFIG_SMP
> -	.spin_is_locked = __ticket_spin_is_locked,
> -	.spin_is_contended = __ticket_spin_is_contended,
> -
> -	.spin_lock = __ticket_spin_lock,
> -	.spin_lock_flags = default_spin_lock_flags,
> -	.spin_trylock = __ticket_spin_trylock,
> -	.spin_unlock = __ticket_spin_unlock,
> +	.lock_spinning = paravirt_nop,
> +	.unlock_kick = paravirt_nop,
>  #endif
>  };
>  EXPORT_SYMBOL(pv_lock_ops);
> diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
> index 3002ec1..d6481a9 100644
> --- a/arch/x86/xen/spinlock.c
> +++ b/arch/x86/xen/spinlock.c
> @@ -138,6 +138,9 @@ struct xen_spinlock {
>  	xen_spinners_t spinners;	/* count of waiting cpus */
>  };
>  
> +static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
> +
> +#if 0
>  static int xen_spin_is_locked(struct arch_spinlock *lock)
>  {
>  	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
> @@ -165,7 +168,6 @@ static int xen_spin_trylock(struct arch_spinlock *lock)
>  	return old == 0;
>  }
>  
> -static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
>  static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
>  
>  /*
> @@ -352,6 +354,7 @@ static void xen_spin_unlock(struct arch_spinlock *lock)
>  	if (unlikely(xl->spinners))
>  		xen_spin_unlock_slow(xl);
>  }
> +#endif
>  
>  static irqreturn_t dummy_handler(int irq, void *dev_id)
>  {
> @@ -413,13 +416,14 @@ void __init xen_init_spinlocks(void)
>  		return;
>  
>  	BUILD_BUG_ON(sizeof(struct xen_spinlock) > sizeof(arch_spinlock_t));
> -
> +#if 0
>  	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
>  	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
>  	pv_lock_ops.spin_lock = xen_spin_lock;
>  	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
>  	pv_lock_ops.spin_trylock = xen_spin_trylock;
>  	pv_lock_ops.spin_unlock = xen_spin_unlock;
> +#endif
>  }
>  
>  #ifdef CONFIG_XEN_DEBUG_FS
>


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 1/19] x86/spinlock: Replace pv spinlocks with pv ticketlocks
@ 2013-06-01 20:32     ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 192+ messages in thread
From: Jeremy Fitzhardinge @ 2013-06-01 20:32 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: gregkh, kvm, linux-doc, peterz, drjones, virtualization, andi,
	hpa, stefano.stabellini, xen-devel, gleb, x86, agraf, mingo,
	habanero, konrad.wilk, ouyang, avi.kivity, tglx, chegu_vinod,
	mtosatti, linux-kernel, srivatsa.vaddagiri, attilio.rao,
	pbonzini, torvalds, stephan.diestelhorst

On 06/01/2013 12:21 PM, Raghavendra K T wrote:
> x86/spinlock: Replace pv spinlocks with pv ticketlocks
>
> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
I'm not sure what the etiquette is here; I did the work while at Citrix,
but jeremy@goop.org is my canonical email address.  The Citrix address
is dead and bounces, so is useless for anything.  Probably best to
change it.

    J

>
> Rather than outright replacing the entire spinlock implementation in
> order to paravirtualize it, keep the ticket lock implementation but add
> a couple of pvops hooks on the slow patch (long spin on lock, unlocking
> a contended lock).
>
> Ticket locks have a number of nice properties, but they also have some
> surprising behaviours in virtual environments.  They enforce a strict
> FIFO ordering on cpus trying to take a lock; however, if the hypervisor
> scheduler does not schedule the cpus in the correct order, the system can
> waste a huge amount of time spinning until the next cpu can take the lock.
>
> (See Thomas Friebel's talk "Prevent Guests from Spinning Around"
> http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)
>
> To address this, we add two hooks:
>  - __ticket_spin_lock which is called after the cpu has been
>    spinning on the lock for a significant number of iterations but has
>    failed to take the lock (presumably because the cpu holding the lock
>    has been descheduled).  The lock_spinning pvop is expected to block
>    the cpu until it has been kicked by the current lock holder.
>  - __ticket_spin_unlock, which on releasing a contended lock
>    (there are more cpus with tail tickets), it looks to see if the next
>    cpu is blocked and wakes it if so.
>
> When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub
> functions causes all the extra code to go away.
>
> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Tested-by: Attilio Rao <attilio.rao@citrix.com>
> [ Raghavendra: Changed SPIN_THRESHOLD ]
> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> ---
>  arch/x86/include/asm/paravirt.h       |   32 ++++----------------
>  arch/x86/include/asm/paravirt_types.h |   10 ++----
>  arch/x86/include/asm/spinlock.h       |   53 +++++++++++++++++++++++++++------
>  arch/x86/include/asm/spinlock_types.h |    4 --
>  arch/x86/kernel/paravirt-spinlocks.c  |   15 +--------
>  arch/x86/xen/spinlock.c               |    8 ++++-
>  6 files changed, 61 insertions(+), 61 deletions(-)
>
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index cfdc9ee..040e72d 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -712,36 +712,16 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
>  
>  #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
>  
> -static inline int arch_spin_is_locked(struct arch_spinlock *lock)
> +static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
> +							__ticket_t ticket)
>  {
> -	return PVOP_CALL1(int, pv_lock_ops.spin_is_locked, lock);
> +	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
>  }
>  
> -static inline int arch_spin_is_contended(struct arch_spinlock *lock)
> +static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
> +							__ticket_t ticket)
>  {
> -	return PVOP_CALL1(int, pv_lock_ops.spin_is_contended, lock);
> -}
> -#define arch_spin_is_contended	arch_spin_is_contended
> -
> -static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
> -{
> -	PVOP_VCALL1(pv_lock_ops.spin_lock, lock);
> -}
> -
> -static __always_inline void arch_spin_lock_flags(struct arch_spinlock *lock,
> -						  unsigned long flags)
> -{
> -	PVOP_VCALL2(pv_lock_ops.spin_lock_flags, lock, flags);
> -}
> -
> -static __always_inline int arch_spin_trylock(struct arch_spinlock *lock)
> -{
> -	return PVOP_CALL1(int, pv_lock_ops.spin_trylock, lock);
> -}
> -
> -static __always_inline void arch_spin_unlock(struct arch_spinlock *lock)
> -{
> -	PVOP_VCALL1(pv_lock_ops.spin_unlock, lock);
> +	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
>  }
>  
>  #endif
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index 0db1fca..d5deb6d 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -327,13 +327,11 @@ struct pv_mmu_ops {
>  };
>  
>  struct arch_spinlock;
> +#include <asm/spinlock_types.h>
> +
>  struct pv_lock_ops {
> -	int (*spin_is_locked)(struct arch_spinlock *lock);
> -	int (*spin_is_contended)(struct arch_spinlock *lock);
> -	void (*spin_lock)(struct arch_spinlock *lock);
> -	void (*spin_lock_flags)(struct arch_spinlock *lock, unsigned long flags);
> -	int (*spin_trylock)(struct arch_spinlock *lock);
> -	void (*spin_unlock)(struct arch_spinlock *lock);
> +	void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
> +	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
>  };
>  
>  /* This contains all the paravirt structures: we get a convenient
> diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
> index 33692ea..4d54244 100644
> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
> @@ -34,6 +34,35 @@
>  # define UNLOCK_LOCK_PREFIX
>  #endif
>  
> +/* How long a lock should spin before we consider blocking */
> +#define SPIN_THRESHOLD	(1 << 15)
> +
> +#ifndef CONFIG_PARAVIRT_SPINLOCKS
> +
> +static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
> +							__ticket_t ticket)
> +{
> +}
> +
> +static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
> +							 __ticket_t ticket)
> +{
> +}
> +
> +#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
> +
> +
> +/*
> + * If a spinlock has someone waiting on it, then kick the appropriate
> + * waiting cpu.
> + */
> +static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
> +							__ticket_t next)
> +{
> +	if (unlikely(lock->tickets.tail != next))
> +		____ticket_unlock_kick(lock, next);
> +}
> +
>  /*
>   * Ticket locks are conceptually two parts, one indicating the current head of
>   * the queue, and the other indicating the current tail. The lock is acquired
> @@ -47,19 +76,24 @@
>   * in the high part, because a wide xadd increment of the low part would carry
>   * up and contaminate the high part.
>   */
> -static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
> +static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
>  {
>  	register struct __raw_tickets inc = { .tail = 1 };
>  
>  	inc = xadd(&lock->tickets, inc);
>  
>  	for (;;) {
> -		if (inc.head == inc.tail)
> -			break;
> -		cpu_relax();
> -		inc.head = ACCESS_ONCE(lock->tickets.head);
> +		unsigned count = SPIN_THRESHOLD;
> +
> +		do {
> +			if (inc.head == inc.tail)
> +				goto out;
> +			cpu_relax();
> +			inc.head = ACCESS_ONCE(lock->tickets.head);
> +		} while (--count);
> +		__ticket_lock_spinning(lock, inc.tail);
>  	}
> -	barrier();		/* make sure nothing creeps before the lock is taken */
> +out:	barrier();	/* make sure nothing creeps before the lock is taken */
>  }
>  
>  static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
> @@ -78,7 +112,10 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
>  
>  static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
>  {
> +	__ticket_t next = lock->tickets.head + 1;
> +
>  	__add(&lock->tickets.head, 1, UNLOCK_LOCK_PREFIX);
> +	__ticket_unlock_kick(lock, next);
>  }
>  
>  static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
> @@ -95,8 +132,6 @@ static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
>  	return (__ticket_t)(tmp.tail - tmp.head) > 1;
>  }
>  
> -#ifndef CONFIG_PARAVIRT_SPINLOCKS
> -
>  static inline int arch_spin_is_locked(arch_spinlock_t *lock)
>  {
>  	return __ticket_spin_is_locked(lock);
> @@ -129,8 +164,6 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
>  	arch_spin_lock(lock);
>  }
>  
> -#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
> -
>  static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
>  {
>  	while (arch_spin_is_locked(lock))
> diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
> index ad0ad07..83fd3c7 100644
> --- a/arch/x86/include/asm/spinlock_types.h
> +++ b/arch/x86/include/asm/spinlock_types.h
> @@ -1,10 +1,6 @@
>  #ifndef _ASM_X86_SPINLOCK_TYPES_H
>  #define _ASM_X86_SPINLOCK_TYPES_H
>  
> -#ifndef __LINUX_SPINLOCK_TYPES_H
> -# error "please don't include this file directly"
> -#endif
> -
>  #include <linux/types.h>
>  
>  #if (CONFIG_NR_CPUS < 256)
> diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
> index 676b8c7..c2e010e 100644
> --- a/arch/x86/kernel/paravirt-spinlocks.c
> +++ b/arch/x86/kernel/paravirt-spinlocks.c
> @@ -7,21 +7,10 @@
>  
>  #include <asm/paravirt.h>
>  
> -static inline void
> -default_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
> -{
> -	arch_spin_lock(lock);
> -}
> -
>  struct pv_lock_ops pv_lock_ops = {
>  #ifdef CONFIG_SMP
> -	.spin_is_locked = __ticket_spin_is_locked,
> -	.spin_is_contended = __ticket_spin_is_contended,
> -
> -	.spin_lock = __ticket_spin_lock,
> -	.spin_lock_flags = default_spin_lock_flags,
> -	.spin_trylock = __ticket_spin_trylock,
> -	.spin_unlock = __ticket_spin_unlock,
> +	.lock_spinning = paravirt_nop,
> +	.unlock_kick = paravirt_nop,
>  #endif
>  };
>  EXPORT_SYMBOL(pv_lock_ops);
> diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
> index 3002ec1..d6481a9 100644
> --- a/arch/x86/xen/spinlock.c
> +++ b/arch/x86/xen/spinlock.c
> @@ -138,6 +138,9 @@ struct xen_spinlock {
>  	xen_spinners_t spinners;	/* count of waiting cpus */
>  };
>  
> +static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
> +
> +#if 0
>  static int xen_spin_is_locked(struct arch_spinlock *lock)
>  {
>  	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
> @@ -165,7 +168,6 @@ static int xen_spin_trylock(struct arch_spinlock *lock)
>  	return old == 0;
>  }
>  
> -static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
>  static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
>  
>  /*
> @@ -352,6 +354,7 @@ static void xen_spin_unlock(struct arch_spinlock *lock)
>  	if (unlikely(xl->spinners))
>  		xen_spin_unlock_slow(xl);
>  }
> +#endif
>  
>  static irqreturn_t dummy_handler(int irq, void *dev_id)
>  {
> @@ -413,13 +416,14 @@ void __init xen_init_spinlocks(void)
>  		return;
>  
>  	BUILD_BUG_ON(sizeof(struct xen_spinlock) > sizeof(arch_spinlock_t));
> -
> +#if 0
>  	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
>  	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
>  	pv_lock_ops.spin_lock = xen_spin_lock;
>  	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
>  	pv_lock_ops.spin_trylock = xen_spin_trylock;
>  	pv_lock_ops.spin_unlock = xen_spin_unlock;
> +#endif
>  }
>  
>  #ifdef CONFIG_XEN_DEBUG_FS
>

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 1/19]  x86/spinlock: Replace pv spinlocks with pv ticketlocks
  2013-06-01 19:21   ` Raghavendra K T
                     ` (2 preceding siblings ...)
  (?)
@ 2013-06-01 20:32   ` Jeremy Fitzhardinge
  -1 siblings, 0 replies; 192+ messages in thread
From: Jeremy Fitzhardinge @ 2013-06-01 20:32 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: gregkh, kvm, linux-doc, peterz, drjones, virtualization, andi,
	hpa, stefano.stabellini, xen-devel, x86, mingo, habanero, riel,
	konrad.wilk, ouyang, avi.kivity, tglx, chegu_vinod, linux-kernel,
	srivatsa.vaddagiri, attilio.rao, pbonzini, torvalds,
	stephan.diestelhorst

On 06/01/2013 12:21 PM, Raghavendra K T wrote:
> x86/spinlock: Replace pv spinlocks with pv ticketlocks
>
> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
I'm not sure what the etiquette is here; I did the work while at Citrix,
but jeremy@goop.org is my canonical email address.  The Citrix address
is dead and bounces, so is useless for anything.  Probably best to
change it.

    J

>
> Rather than outright replacing the entire spinlock implementation in
> order to paravirtualize it, keep the ticket lock implementation but add
> a couple of pvops hooks on the slow patch (long spin on lock, unlocking
> a contended lock).
>
> Ticket locks have a number of nice properties, but they also have some
> surprising behaviours in virtual environments.  They enforce a strict
> FIFO ordering on cpus trying to take a lock; however, if the hypervisor
> scheduler does not schedule the cpus in the correct order, the system can
> waste a huge amount of time spinning until the next cpu can take the lock.
>
> (See Thomas Friebel's talk "Prevent Guests from Spinning Around"
> http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)
>
> To address this, we add two hooks:
>  - __ticket_spin_lock which is called after the cpu has been
>    spinning on the lock for a significant number of iterations but has
>    failed to take the lock (presumably because the cpu holding the lock
>    has been descheduled).  The lock_spinning pvop is expected to block
>    the cpu until it has been kicked by the current lock holder.
>  - __ticket_spin_unlock, which on releasing a contended lock
>    (there are more cpus with tail tickets), it looks to see if the next
>    cpu is blocked and wakes it if so.
>
> When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub
> functions causes all the extra code to go away.
>
> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Tested-by: Attilio Rao <attilio.rao@citrix.com>
> [ Raghavendra: Changed SPIN_THRESHOLD ]
> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> ---
>  arch/x86/include/asm/paravirt.h       |   32 ++++----------------
>  arch/x86/include/asm/paravirt_types.h |   10 ++----
>  arch/x86/include/asm/spinlock.h       |   53 +++++++++++++++++++++++++++------
>  arch/x86/include/asm/spinlock_types.h |    4 --
>  arch/x86/kernel/paravirt-spinlocks.c  |   15 +--------
>  arch/x86/xen/spinlock.c               |    8 ++++-
>  6 files changed, 61 insertions(+), 61 deletions(-)
>
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index cfdc9ee..040e72d 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -712,36 +712,16 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
>  
>  #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
>  
> -static inline int arch_spin_is_locked(struct arch_spinlock *lock)
> +static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
> +							__ticket_t ticket)
>  {
> -	return PVOP_CALL1(int, pv_lock_ops.spin_is_locked, lock);
> +	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
>  }
>  
> -static inline int arch_spin_is_contended(struct arch_spinlock *lock)
> +static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
> +							__ticket_t ticket)
>  {
> -	return PVOP_CALL1(int, pv_lock_ops.spin_is_contended, lock);
> -}
> -#define arch_spin_is_contended	arch_spin_is_contended
> -
> -static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
> -{
> -	PVOP_VCALL1(pv_lock_ops.spin_lock, lock);
> -}
> -
> -static __always_inline void arch_spin_lock_flags(struct arch_spinlock *lock,
> -						  unsigned long flags)
> -{
> -	PVOP_VCALL2(pv_lock_ops.spin_lock_flags, lock, flags);
> -}
> -
> -static __always_inline int arch_spin_trylock(struct arch_spinlock *lock)
> -{
> -	return PVOP_CALL1(int, pv_lock_ops.spin_trylock, lock);
> -}
> -
> -static __always_inline void arch_spin_unlock(struct arch_spinlock *lock)
> -{
> -	PVOP_VCALL1(pv_lock_ops.spin_unlock, lock);
> +	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
>  }
>  
>  #endif
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index 0db1fca..d5deb6d 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -327,13 +327,11 @@ struct pv_mmu_ops {
>  };
>  
>  struct arch_spinlock;
> +#include <asm/spinlock_types.h>
> +
>  struct pv_lock_ops {
> -	int (*spin_is_locked)(struct arch_spinlock *lock);
> -	int (*spin_is_contended)(struct arch_spinlock *lock);
> -	void (*spin_lock)(struct arch_spinlock *lock);
> -	void (*spin_lock_flags)(struct arch_spinlock *lock, unsigned long flags);
> -	int (*spin_trylock)(struct arch_spinlock *lock);
> -	void (*spin_unlock)(struct arch_spinlock *lock);
> +	void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
> +	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
>  };
>  
>  /* This contains all the paravirt structures: we get a convenient
> diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
> index 33692ea..4d54244 100644
> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
> @@ -34,6 +34,35 @@
>  # define UNLOCK_LOCK_PREFIX
>  #endif
>  
> +/* How long a lock should spin before we consider blocking */
> +#define SPIN_THRESHOLD	(1 << 15)
> +
> +#ifndef CONFIG_PARAVIRT_SPINLOCKS
> +
> +static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
> +							__ticket_t ticket)
> +{
> +}
> +
> +static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
> +							 __ticket_t ticket)
> +{
> +}
> +
> +#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
> +
> +
> +/*
> + * If a spinlock has someone waiting on it, then kick the appropriate
> + * waiting cpu.
> + */
> +static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
> +							__ticket_t next)
> +{
> +	if (unlikely(lock->tickets.tail != next))
> +		____ticket_unlock_kick(lock, next);
> +}
> +
>  /*
>   * Ticket locks are conceptually two parts, one indicating the current head of
>   * the queue, and the other indicating the current tail. The lock is acquired
> @@ -47,19 +76,24 @@
>   * in the high part, because a wide xadd increment of the low part would carry
>   * up and contaminate the high part.
>   */
> -static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
> +static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
>  {
>  	register struct __raw_tickets inc = { .tail = 1 };
>  
>  	inc = xadd(&lock->tickets, inc);
>  
>  	for (;;) {
> -		if (inc.head == inc.tail)
> -			break;
> -		cpu_relax();
> -		inc.head = ACCESS_ONCE(lock->tickets.head);
> +		unsigned count = SPIN_THRESHOLD;
> +
> +		do {
> +			if (inc.head == inc.tail)
> +				goto out;
> +			cpu_relax();
> +			inc.head = ACCESS_ONCE(lock->tickets.head);
> +		} while (--count);
> +		__ticket_lock_spinning(lock, inc.tail);
>  	}
> -	barrier();		/* make sure nothing creeps before the lock is taken */
> +out:	barrier();	/* make sure nothing creeps before the lock is taken */
>  }
>  
>  static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
> @@ -78,7 +112,10 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
>  
>  static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
>  {
> +	__ticket_t next = lock->tickets.head + 1;
> +
>  	__add(&lock->tickets.head, 1, UNLOCK_LOCK_PREFIX);
> +	__ticket_unlock_kick(lock, next);
>  }
>  
>  static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
> @@ -95,8 +132,6 @@ static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
>  	return (__ticket_t)(tmp.tail - tmp.head) > 1;
>  }
>  
> -#ifndef CONFIG_PARAVIRT_SPINLOCKS
> -
>  static inline int arch_spin_is_locked(arch_spinlock_t *lock)
>  {
>  	return __ticket_spin_is_locked(lock);
> @@ -129,8 +164,6 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
>  	arch_spin_lock(lock);
>  }
>  
> -#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
> -
>  static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
>  {
>  	while (arch_spin_is_locked(lock))
> diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
> index ad0ad07..83fd3c7 100644
> --- a/arch/x86/include/asm/spinlock_types.h
> +++ b/arch/x86/include/asm/spinlock_types.h
> @@ -1,10 +1,6 @@
>  #ifndef _ASM_X86_SPINLOCK_TYPES_H
>  #define _ASM_X86_SPINLOCK_TYPES_H
>  
> -#ifndef __LINUX_SPINLOCK_TYPES_H
> -# error "please don't include this file directly"
> -#endif
> -
>  #include <linux/types.h>
>  
>  #if (CONFIG_NR_CPUS < 256)
> diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
> index 676b8c7..c2e010e 100644
> --- a/arch/x86/kernel/paravirt-spinlocks.c
> +++ b/arch/x86/kernel/paravirt-spinlocks.c
> @@ -7,21 +7,10 @@
>  
>  #include <asm/paravirt.h>
>  
> -static inline void
> -default_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
> -{
> -	arch_spin_lock(lock);
> -}
> -
>  struct pv_lock_ops pv_lock_ops = {
>  #ifdef CONFIG_SMP
> -	.spin_is_locked = __ticket_spin_is_locked,
> -	.spin_is_contended = __ticket_spin_is_contended,
> -
> -	.spin_lock = __ticket_spin_lock,
> -	.spin_lock_flags = default_spin_lock_flags,
> -	.spin_trylock = __ticket_spin_trylock,
> -	.spin_unlock = __ticket_spin_unlock,
> +	.lock_spinning = paravirt_nop,
> +	.unlock_kick = paravirt_nop,
>  #endif
>  };
>  EXPORT_SYMBOL(pv_lock_ops);
> diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
> index 3002ec1..d6481a9 100644
> --- a/arch/x86/xen/spinlock.c
> +++ b/arch/x86/xen/spinlock.c
> @@ -138,6 +138,9 @@ struct xen_spinlock {
>  	xen_spinners_t spinners;	/* count of waiting cpus */
>  };
>  
> +static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
> +
> +#if 0
>  static int xen_spin_is_locked(struct arch_spinlock *lock)
>  {
>  	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
> @@ -165,7 +168,6 @@ static int xen_spin_trylock(struct arch_spinlock *lock)
>  	return old == 0;
>  }
>  
> -static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
>  static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
>  
>  /*
> @@ -352,6 +354,7 @@ static void xen_spin_unlock(struct arch_spinlock *lock)
>  	if (unlikely(xl->spinners))
>  		xen_spin_unlock_slow(xl);
>  }
> +#endif
>  
>  static irqreturn_t dummy_handler(int irq, void *dev_id)
>  {
> @@ -413,13 +416,14 @@ void __init xen_init_spinlocks(void)
>  		return;
>  
>  	BUILD_BUG_ON(sizeof(struct xen_spinlock) > sizeof(arch_spinlock_t));
> -
> +#if 0
>  	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
>  	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
>  	pv_lock_ops.spin_lock = xen_spin_lock;
>  	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
>  	pv_lock_ops.spin_trylock = xen_spin_trylock;
>  	pv_lock_ops.spin_unlock = xen_spin_unlock;
> +#endif
>  }
>  
>  #ifdef CONFIG_XEN_DEBUG_FS
>
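To make the reshuffle above easier to picture: after this patch a hypervisor
backend only has to supply the two remaining pv_lock_ops hooks, while the
ticketlock fast path stays native. The sketch below is purely illustrative
(it is not code from this series), and backend_wait_for_ticket() /
backend_kick_cpu() are hypothetical stand-ins for whatever blocking and kick
primitives a hypervisor actually provides.

	#include <linux/init.h>
	#include <asm/paravirt.h>
	#include <asm/spinlock_types.h>

	/* Hypothetical hypervisor primitives, assumed for illustration only. */
	void backend_wait_for_ticket(struct arch_spinlock *lock, __ticket_t want);
	void backend_kick_cpu(struct arch_spinlock *lock, __ticket_t ticket);

	/* Reached after SPIN_THRESHOLD iterations: block instead of burning CPU. */
	static void example_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
	{
		backend_wait_for_ticket(lock, want);
	}

	/* Called from the unlock slow path: wake the vCPU waiting on 'ticket'. */
	static void example_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
	{
		backend_kick_cpu(lock, ticket);
	}

	static void __init example_init_pv_spinlocks(void)
	{
		pv_lock_ops.lock_spinning = example_lock_spinning;
		pv_lock_ops.unlock_kick   = example_unlock_kick;
	}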

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 1/19]  x86/spinlock: Replace pv spinlocks with pv ticketlocks
  2013-06-01 20:32     ` Jeremy Fitzhardinge
@ 2013-06-02  6:54       ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-02  6:54 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: gleb, mingo, x86, konrad.wilk, hpa, pbonzini, linux-doc,
	habanero, xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On 06/02/2013 02:02 AM, Jeremy Fitzhardinge wrote:
> On 06/01/2013 12:21 PM, Raghavendra K T wrote:
>> x86/spinlock: Replace pv spinlocks with pv ticketlocks
>>
>> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> I'm not sure what the etiquette is here; I did the work while at Citrix,
> but jeremy@goop.org is my canonical email address.  The Citrix address
> is dead and bounces, so is useless for anything.  Probably best to
> change it.
>

Agreed.
I will change it to the goop address in the next posting.
I had the same doubt about Vatsa's email as well; I was not sure about
the convention here either, so I kept it as is.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-01 19:21 ` Raghavendra K T
@ 2013-06-02  8:07   ` Gleb Natapov
  -1 siblings, 0 replies; 192+ messages in thread
From: Gleb Natapov @ 2013-06-02  8:07 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: mingo, jeremy, x86, konrad.wilk, hpa, pbonzini, linux-doc,
	habanero, xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On Sun, Jun 02, 2013 at 12:51:25AM +0530, Raghavendra K T wrote:
> 
> This series replaces the existing paravirtualized spinlock mechanism
> with a paravirtualized ticketlock mechanism. The series provides
> implementation for both Xen and KVM.
> 
A high-level question here. We have high hopes that the "Preemptable Ticket
Spinlock" patch series by Jiannan Ouyang will solve most, if not all, of the
ticket-spinlock problems in overcommit scenarios without any need for PV.
So how does this patch series compare with his patches on PLE-enabled processors?

> Changes in V9:
> - Changed spin_threshold to 32k to avoid excess halt exits that are
>    causing undercommit degradation (after PLE handler improvement).
> - Added  kvm_irq_delivery_to_apic (suggested by Gleb)
> - Optimized halt exit path to use PLE handler
> 
> V8 of PVspinlock was posted last year. After Avi's suggestions to look
> at PLE handler's improvements, various optimizations in PLE handling
> have been tried.
> 
> With this series we see that we could get little more improvements on top
> of that. 
> 
> Ticket locks have an inherent problem in a virtualized case, because
> the vCPUs are scheduled rather than running concurrently (ignoring
> gang scheduled vCPUs).  This can result in catastrophic performance
> collapses when the vCPU scheduler doesn't schedule the correct "next"
> vCPU, and ends up scheduling a vCPU which burns its entire timeslice
> spinning.  (Note that this is not the same problem as lock-holder
> preemption, which this series also addresses; that's also a problem,
> but not catastrophic).
> 
> (See Thomas Friebel's talk "Prevent Guests from Spinning Around"
> http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)
> 
> Currently we deal with this by having PV spinlocks, which adds a layer
> of indirection in front of all the spinlock functions, and defining a
> completely new implementation for Xen (and for other pvops users, but
> there are none at present).
> 
> PV ticketlocks keeps the existing ticketlock implemenentation
> (fastpath) as-is, but adds a couple of pvops for the slow paths:
> 
> - If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
>   iterations, then call out to the __ticket_lock_spinning() pvop,
>   which allows a backend to block the vCPU rather than spinning.  This
>   pvop can set the lock into "slowpath state".
> 
> - When releasing a lock, if it is in "slowpath state", the call
>   __ticket_unlock_kick() to kick the next vCPU in line awake.  If the
>   lock is no longer in contention, it also clears the slowpath flag.
> 
> The "slowpath state" is stored in the LSB of the within the lock tail
> ticket.  This has the effect of reducing the max number of CPUs by
> half (so, a "small ticket" can deal with 128 CPUs, and "large ticket"
> 32768).
> 
> For KVM, one hypercall is introduced in hypervisor,that allows a vcpu to kick
> another vcpu out of halt state.
> The blocking of vcpu is done using halt() in (lock_spinning) slowpath.
> 
> Overall, it results in a large reduction in code, it makes the native
> and virtualized cases closer, and it removes a layer of indirection
> around all the spinlock functions.
> 
> The fast path (taking an uncontended lock which isn't in "slowpath"
> state) is optimal, identical to the non-paravirtualized case.
> 
> The inner part of ticket lock code becomes:
> 	inc = xadd(&lock->tickets, inc);
> 	inc.tail &= ~TICKET_SLOWPATH_FLAG;
> 
> 	if (likely(inc.head == inc.tail))
> 		goto out;
> 	for (;;) {
> 		unsigned count = SPIN_THRESHOLD;
> 		do {
> 			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
> 				goto out;
> 			cpu_relax();
> 		} while (--count);
> 		__ticket_lock_spinning(lock, inc.tail);
> 	}
> out:	barrier();
> which results in:
> 	push   %rbp
> 	mov    %rsp,%rbp
> 
> 	mov    $0x200,%eax
> 	lock xadd %ax,(%rdi)
> 	movzbl %ah,%edx
> 	cmp    %al,%dl
> 	jne    1f	# Slowpath if lock in contention
> 
> 	pop    %rbp
> 	retq   
> 
> 	### SLOWPATH START
> 1:	and    $-2,%edx
> 	movzbl %dl,%esi
> 
> 2:	mov    $0x800,%eax
> 	jmp    4f
> 
> 3:	pause  
> 	sub    $0x1,%eax
> 	je     5f
> 
> 4:	movzbl (%rdi),%ecx
> 	cmp    %cl,%dl
> 	jne    3b
> 
> 	pop    %rbp
> 	retq   
> 
> 5:	callq  *__ticket_lock_spinning
> 	jmp    2b
> 	### SLOWPATH END
> 
> with CONFIG_PARAVIRT_SPINLOCKS=n, the code has changed slightly, where
> the fastpath case is straight through (taking the lock without
> contention), and the spin loop is out of line:
> 
> 	push   %rbp
> 	mov    %rsp,%rbp
> 
> 	mov    $0x100,%eax
> 	lock xadd %ax,(%rdi)
> 	movzbl %ah,%edx
> 	cmp    %al,%dl
> 	jne    1f
> 
> 	pop    %rbp
> 	retq   
> 
> 	### SLOWPATH START
> 1:	pause  
> 	movzbl (%rdi),%eax
> 	cmp    %dl,%al
> 	jne    1b
> 
> 	pop    %rbp
> 	retq   
> 	### SLOWPATH END
> 
> The unlock code is complicated by the need to both add to the lock's
> "head" and fetch the slowpath flag from "tail".  This version of the
> patch uses a locked add to do this, followed by a test to see if the
> slowflag is set.  The lock prefix acts as a full memory barrier, so we
> can be sure that other CPUs will have seen the unlock before we read
> the flag (without the barrier the read could be fetched from the
> store queue before it hits memory, which could result in a deadlock).
> 
> Since this is all unnecessary complication if you're not using PV ticket
> locks, the patch also uses the jump-label machinery to fall back to the
> standard "add"-based unlock in the non-PV case.
> 
> 	if (TICKET_SLOWPATH_FLAG &&
> 	     static_key_false(&paravirt_ticketlocks_enabled))) {
> 		arch_spinlock_t prev;
> 		prev = *lock;
> 		add_smp(&lock->tickets.head, TICKET_LOCK_INC);
> 
> 		/* add_smp() is a full mb() */
> 		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
> 			__ticket_unlock_slowpath(lock, prev);
> 	} else
> 		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
> which generates:
> 	push   %rbp
> 	mov    %rsp,%rbp
> 
> 	nop5	# replaced by 5-byte jmp 2f when PV enabled
> 
> 	# non-PV unlock
> 	addb   $0x2,(%rdi)
> 
> 1:	pop    %rbp
> 	retq   
> 
> ### PV unlock ###
> 2:	movzwl (%rdi),%esi	# Fetch prev
> 
> 	lock addb $0x2,(%rdi)	# Do unlock
> 
> 	testb  $0x1,0x1(%rdi)	# Test flag
> 	je     1b		# Finished if not set
> 
> ### Slow path ###
> 	add    $2,%sil		# Add "head" in old lock state
> 	mov    %esi,%edx
> 	and    $0xfe,%dh	# clear slowflag for comparison
> 	movzbl %dh,%eax
> 	cmp    %dl,%al		# If head == tail (uncontended)
> 	je     4f		# clear slowpath flag
> 
> 	# Kick next CPU waiting for lock
> 3:	movzbl %sil,%esi
> 	callq  *pv_lock_ops.kick
> 
> 	pop    %rbp
> 	retq   
> 
> 	# Lock no longer contended - clear slowflag
> 4:	mov    %esi,%eax
> 	lock cmpxchg %dx,(%rdi)	# cmpxchg to clear flag
> 	cmp    %si,%ax
> 	jne    3b		# If clear failed, then kick
> 
> 	pop    %rbp
> 	retq   
> 
> So when not using PV ticketlocks, the unlock sequence just has a
> 5-byte nop added to it, and the PV case is reasonably straightforward
> aside from requiring a "lock add".
> 
> 
> Results:
> =======
> base = 3.10-rc2 kernel
> patched = base + this series
> 
> The test was on 32 core (model: Intel(R) Xeon(R) CPU X7560) HT disabled
> with 32 KVM guest vcpu 8GB RAM.
> 
> +-----------+-----------+-----------+------------+-----------+
>                ebizzy (records/sec) higher is better
> +-----------+-----------+-----------+------------+-----------+
>     base        stdev        patched    stdev        %improvement
> +-----------+-----------+-----------+------------+-----------+
> 1x  5574.9000   237.4997    5618.0000    94.0366     0.77311
> 2x  2741.5000   561.3090    3332.0000   102.4738    21.53930
> 3x  2146.2500   216.7718    2302.3333    76.3870     7.27237
> 4x  1663.0000   141.9235    1753.7500    83.5220     5.45701
> +-----------+-----------+-----------+------------+-----------+
> +-----------+-----------+-----------+------------+-----------+
>               dbench  (Throughput) higher is better
> +-----------+-----------+-----------+------------+-----------+
>     base        stdev        patched    stdev        %improvement
> +-----------+-----------+-----------+------------+-----------+
> 1x 14111.5600   754.4525   14645.9900   114.3087     3.78718
> 2x  2481.6270    71.2665    2667.1280    73.8193     7.47498
> 3x  1510.2483    31.8634    1503.8792    36.0777    -0.42173
> 4x  1029.4875    16.9166    1039.7069    43.8840     0.99267
> +-----------+-----------+-----------+------------+-----------+
> 
> Your suggestions and comments are welcome.
> 
> github link: https://github.com/ktraghavendra/linux/tree/pvspinlock_v9
> 
> 
> Please note that we set SPIN_THRESHOLD = 32k with this series,
> which eats up a little of the overcommit performance on PLE machines
> and of the overall performance on non-PLE machines.
> 
> The older series was tested by Attilio for Xen implementation [1].
> 
> Jeremy Fitzhardinge (9):
>  x86/spinlock: Replace pv spinlocks with pv ticketlocks
>  x86/ticketlock: Collapse a layer of functions
>  xen: Defer spinlock setup until boot CPU setup
>  xen/pvticketlock: Xen implementation for PV ticket locks
>  xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks
>  x86/pvticketlock: Use callee-save for lock_spinning
>  x86/pvticketlock: When paravirtualizing ticket locks, increment by 2
>  x86/ticketlock: Add slowpath logic
>  xen/pvticketlock: Allow interrupts to be enabled while blocking
> 
> Andrew Jones (1):
>  Split jumplabel ratelimit
> 
> Stefano Stabellini (1):
>  xen: Enable PV ticketlocks on HVM Xen
> 
> Srivatsa Vaddagiri (3):
>  kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks
>  kvm guest : Add configuration support to enable debug information for KVM Guests
>  kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor
> 
> Raghavendra K T (5):
>  x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks
>  kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration
>  Simplify kvm_for_each_vcpu with kvm_irq_delivery_to_apic
>  Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
>  Add directed yield in vcpu block path
> 
> ---
> Link in V8 has links to previous patch series and also whole history.
> 
> V8 PV Ticketspinlock for Xen/KVM link:
> [1] https://lkml.org/lkml/2012/5/2/119
> 
>  Documentation/virtual/kvm/cpuid.txt      |   4 +
>  Documentation/virtual/kvm/hypercalls.txt |  13 ++
>  arch/ia64/include/asm/kvm_host.h         |   5 +
>  arch/powerpc/include/asm/kvm_host.h      |   5 +
>  arch/s390/include/asm/kvm_host.h         |   5 +
>  arch/x86/Kconfig                         |  10 +
>  arch/x86/include/asm/kvm_host.h          |   7 +-
>  arch/x86/include/asm/kvm_para.h          |  14 +-
>  arch/x86/include/asm/paravirt.h          |  32 +--
>  arch/x86/include/asm/paravirt_types.h    |  10 +-
>  arch/x86/include/asm/spinlock.h          | 128 +++++++----
>  arch/x86/include/asm/spinlock_types.h    |  16 +-
>  arch/x86/include/uapi/asm/kvm_para.h     |   1 +
>  arch/x86/kernel/kvm.c                    | 256 +++++++++++++++++++++
>  arch/x86/kernel/paravirt-spinlocks.c     |  18 +-
>  arch/x86/kvm/cpuid.c                     |   3 +-
>  arch/x86/kvm/lapic.c                     |   5 +-
>  arch/x86/kvm/x86.c                       |  39 +++-
>  arch/x86/xen/smp.c                       |   3 +-
>  arch/x86/xen/spinlock.c                  | 384 ++++++++++---------------------
>  include/linux/jump_label.h               |  26 +--
>  include/linux/jump_label_ratelimit.h     |  34 +++
>  include/linux/kvm_host.h                 |   2 +-
>  include/linux/perf_event.h               |   1 +
>  include/uapi/linux/kvm_para.h            |   1 +
>  kernel/jump_label.c                      |   1 +
>  virt/kvm/kvm_main.c                      |   6 +-
>  27 files changed, 645 insertions(+), 384 deletions(-)

--
			Gleb.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-02  8:07   ` Gleb Natapov
@ 2013-06-02 16:20     ` Jiannan Ouyang
  -1 siblings, 0 replies; 192+ messages in thread
From: Jiannan Ouyang @ 2013-06-02 16:20 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Raghavendra K T, Ingo Molnar, Jeremy Fitzhardinge, x86,
	konrad.wilk, H. Peter Anvin, pbonzini, linux-doc,
	Andrew M. Theurer, xen-devel, Peter Zijlstra, Marcelo Tosatti,
	stefano.stabellini, andi, attilio.rao, Jiannan Ouyang, gregkh,
	agraf, chegu vinod, torvalds, Avi Kivity, Thomas Gleixner, KVM,
	LKML, stephan.diestelhorst, Rik van Riel, Andrew Jones,
	virtualization, Srivatsa Vaddagiri

On Sun, Jun 2, 2013 at 1:07 AM, Gleb Natapov <gleb@redhat.com> wrote:

> High level question here. We have a big hope for "Preemptable Ticket
> Spinlock" patch series by Jiannan Ouyang to solve most, if not all,
> ticketing spinlocks in overcommit scenarios problem without need for PV.
> So how this patch series compares with his patches on PLE enabled processors?
>

No experimental results yet.

An error is reported on a 20-core VM. I'm in the middle of an internship
relocation, and will start work on it next week.

--
Jiannan

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-02 16:20     ` Jiannan Ouyang
@ 2013-06-03  1:40       ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-03  1:40 UTC (permalink / raw)
  To: Jiannan Ouyang, Gleb Natapov
  Cc: Ingo Molnar, Jeremy Fitzhardinge, x86, konrad.wilk,
	H. Peter Anvin, pbonzini, linux-doc, Andrew M. Theurer,
	xen-devel, Peter Zijlstra, Marcelo Tosatti, stefano.stabellini,
	andi, attilio.rao, gregkh, agraf, chegu vinod, torvalds,
	Avi Kivity, Thomas Gleixner, KVM, LKML, stephan.diestelhorst,
	Rik van Riel, Andrew Jones, virtualization, Srivatsa Vaddagiri

On 06/02/2013 09:50 PM, Jiannan Ouyang wrote:
> On Sun, Jun 2, 2013 at 1:07 AM, Gleb Natapov <gleb@redhat.com> wrote:
>
>> High level question here. We have a big hope for "Preemptable Ticket
>> Spinlock" patch series by Jiannan Ouyang to solve most, if not all,
>> ticketing spinlocks in overcommit scenarios problem without need for PV.
>> So how this patch series compares with his patches on PLE enabled processors?
>>
>
> No experiment results yet.
>
> An error is reported on a 20 core VM. I'm during an internship
> relocation, and will start work on it next week.

Preemptable spinlocks' testing update:
I hit the same softlockup problem that Andrew had reported, while testing
on a 32-core machine with 32 guest vcpus.

After that I started tuning TIMEOUT_UNIT, and when I set it to (1<<8),
things seemed to be manageable for the undercommit cases.
But even after tuning I still see degradation for undercommit w.r.t. the
baseline itself on the 32-core machine

(37.5% degradation w.r.t. baseline).
I can give the full report after all the tests complete.

For the overcommit cases, I again started hitting softlockups (and the
degradation is worse). But as I said in the preemptable thread, the
concept of preemptable locks looks promising (though I am still not a
fan of the embedded TIMEOUT mechanism).

Here is my list of TODOs to make preemptable locks better (I think I
need to paste this in the preemptable thread as well):

1. The current TIMEOUT_UNIT seems to be on the higher side, and it also
does not scale well with large guests or with overcommit. We need some
sort of adaptive mechanism, and ideally different TIMEOUT_UNITs for
different types of lock too. The hashing mechanism that was used in
Rik's spinlock backoff series probably fits better here.

2. I do not think TIMEOUT_UNIT by itself would work well when we have a
big queue of waiters for a lock (large guests / overcommit).
One way out is to add a PV hook that issues a yield hypercall immediately
for waiters beyond some THRESHOLD so that they don't burn CPU; a rough
sketch of that idea follows below.
(I can do a POC at some later point to check whether that idea actually
improves the situation.)
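
A rough sketch of idea (2) above, just to make it concrete. This is my own
illustration, not code from any posted patch; PV_YIELD_THRESHOLD and
hypervisor_yield() are made-up names standing in for the real knob and
hypercall wrapper.

	#include <linux/compiler.h>
	#include <asm/processor.h>
	#include <asm/spinlock_types.h>

	#define PV_YIELD_THRESHOLD	4	/* made-up value */

	/* Hypothetical wrapper around a "yield to hypervisor" hypercall. */
	void hypervisor_yield(void);

	static void pv_wait_for_ticket(struct arch_spinlock *lock, __ticket_t want)
	{
		for (;;) {
			__ticket_t head = ACCESS_ONCE(lock->tickets.head);

			if (head == want)
				break;			/* our turn, take the lock */
			if ((__ticket_t)(want - head) > PV_YIELD_THRESHOLD)
				hypervisor_yield();	/* deep in the queue: yield right away */
			else
				cpu_relax();		/* near the front: keep spinning */
		}
	}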


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-03  1:40       ` Raghavendra K T
  (?)
@ 2013-06-03  6:21       ` Raghavendra K T
  2013-06-07  6:15           ` Raghavendra K T
  -1 siblings, 1 reply; 192+ messages in thread
From: Raghavendra K T @ 2013-06-03  6:21 UTC (permalink / raw)
  To: Jiannan Ouyang, Gleb Natapov
  Cc: Ingo Molnar, Jeremy Fitzhardinge, x86, konrad.wilk,
	H. Peter Anvin, pbonzini, linux-doc, Andrew M. Theurer,
	xen-devel, Peter Zijlstra, Marcelo Tosatti, stefano.stabellini,
	andi, attilio.rao, gregkh, agraf, chegu vinod, torvalds,
	Avi Kivity, Thomas Gleixner, KVM, LKML, stephan.diestelhorst,
	Rik van Riel, Andrew Jones, virtualization, Srivatsa Vaddagiri

On 06/03/2013 07:10 AM, Raghavendra K T wrote:
> On 06/02/2013 09:50 PM, Jiannan Ouyang wrote:
>> On Sun, Jun 2, 2013 at 1:07 AM, Gleb Natapov <gleb@redhat.com> wrote:
>>
>>> High level question here. We have a big hope for "Preemptable Ticket
>>> Spinlock" patch series by Jiannan Ouyang to solve most, if not all,
>>> ticketing spinlocks in overcommit scenarios problem without need for PV.
>>> So how this patch series compares with his patches on PLE enabled
>>> processors?
>>>
>>
>> No experiment results yet.
>>
>> An error is reported on a 20 core VM. I'm during an internship
>> relocation, and will start work on it next week.
>
> Preemptable spinlocks' testing update:
> I hit the same softlockup problem while testing on 32 core machine with
> 32 guest vcpus that Andrew had reported.
>
> After that i started tuning TIMEOUT_UNIT, and when I went till (1<<8),
> things seemed to be manageable for undercommit cases.
> But I still see degradation for undercommit w.r.t baseline itself on 32
> core machine (after tuning).
>
> (37.5% degradation w.r.t base line).
> I can give the full report after the all tests complete.
>
> For over-commit cases, I again started hitting softlockups (and
> degradation is worse). But as I said in the preemptable thread, the
> concept of preemptable locks looks promising (though I am still not a
> fan of  embedded TIMEOUT mechanism)
>
> Here is my opinion of TODOs for preemptable locks to make it better ( I
> think I need to paste in the preemptable thread also)
>
> 1. Current TIMEOUT UNIT seem to be on higher side and also it does not
> scale well with large guests and also overcommit. we need to have a
> sort of adaptive mechanism and better is sort of different TIMEOUT_UNITS
> for different types of lock too. The hashing mechanism that was used in
> Rik's spinlock backoff series fits better probably.
>
> 2. I do not think TIMEOUT_UNIT itself would work great when we have a
> big queue (for large guests / overcommits) for lock.
> one way is to add a PV hook that does yield hypercall immediately for
> the waiters above some THRESHOLD so that they don't burn the CPU.
> ( I can do POC to check if  that idea works in improving situation
> at some later point of time)
>

Preemptable-lock results from my run with TIMEOUT_UNIT = 2^8:

+-----------+-----------+-----------+------------+-----------+
                  ebizzy (records/sec) higher is better
+-----------+-----------+-----------+------------+-----------+
     base        stdev        patched    stdev        %improvement
+-----------+-----------+-----------+------------+-----------+
1x  5574.9000   237.4997    3484.2000   113.4449   -37.50202
2x  2741.5000   561.3090     351.5000   140.5420   -87.17855
3x  2146.2500   216.7718     194.8333    85.0303   -90.92215
4x  1663.0000   141.9235     101.0000    57.7853   -93.92664
+-----------+-----------+-----------+------------+-----------+
+-----------+-----------+-----------+------------+-----------+
                dbench  (Throughput) higher is better
+-----------+-----------+-----------+------------+-----------+
      base        stdev        patched    stdev        %improvement
+-----------+-----------+-----------+------------+-----------+
1x  14111.5600   754.4525   3930.1602   2547.2369    -72.14936
2x  2481.6270    71.2665      181.1816    89.5368    -92.69908
3x  1510.2483    31.8634      104.7243    53.2470    -93.06576
4x  1029.4875    16.9166       72.3738    38.2432    -92.96992
+-----------+-----------+-----------+------------+-----------+

Note: we cannot trust the overcommit results because of the softlockups.


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 2/19]  x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks
  2013-06-01 19:22   ` Raghavendra K T
@ 2013-06-03 15:28     ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-06-03 15:28 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: gleb, mingo, jeremy, x86, hpa, pbonzini, linux-doc, habanero,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On Sun, Jun 02, 2013 at 12:52:09AM +0530, Raghavendra K T wrote:
> x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks
> 
> From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> 
> The code size expands somewhat, and it's better to just call
> a function rather than inline it.
> 
> Thanks to Jeremy for the original version of the ARCH_NOINLINE_SPIN_UNLOCK
> config patch, which has been simplified here.
> 
> Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/Kconfig |    1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 685692c..80fcc4b 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -621,6 +621,7 @@ config PARAVIRT_DEBUG
>  config PARAVIRT_SPINLOCKS
>  	bool "Paravirtualization layer for spinlocks"
>  	depends on PARAVIRT && SMP
> +	select UNINLINE_SPIN_UNLOCK
>  	---help---
>  	  Paravirtualized spinlocks allow a pvops backend to replace the
>  	  spinlock implementation with something virtualization-friendly
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 3/19]  x86/ticketlock: Collapse a layer of functions
  2013-06-01 19:22   ` Raghavendra K T
@ 2013-06-03 15:28     ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-06-03 15:28 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: gleb, mingo, jeremy, x86, hpa, pbonzini, linux-doc, habanero,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On Sun, Jun 02, 2013 at 12:52:29AM +0530, Raghavendra K T wrote:
> x86/ticketlock: Collapse a layer of functions
> 
> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> 
> Now that the paravirtualization layer doesn't exist at the spinlock
> level any more, we can collapse the __ticket_ functions into the arch_
> functions.
> 
> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> Tested-by: Attilio Rao <attilio.rao@citrix.com>
> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/include/asm/spinlock.h |   35 +++++------------------------------
>  1 file changed, 5 insertions(+), 30 deletions(-)
> 
> diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
> index 4d54244..7442410 100644
> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
> @@ -76,7 +76,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
>   * in the high part, because a wide xadd increment of the low part would carry
>   * up and contaminate the high part.
>   */
> -static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
> +static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
>  {
>  	register struct __raw_tickets inc = { .tail = 1 };
>  
> @@ -96,7 +96,7 @@ static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
>  out:	barrier();	/* make sure nothing creeps before the lock is taken */
>  }
>  
> -static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
> +static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
>  {
>  	arch_spinlock_t old, new;
>  
> @@ -110,7 +110,7 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
>  	return cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) == old.head_tail;
>  }
>  
> -static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
> +static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
>  {
>  	__ticket_t next = lock->tickets.head + 1;
>  
> @@ -118,46 +118,21 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
>  	__ticket_unlock_kick(lock, next);
>  }
>  
> -static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
> +static inline int arch_spin_is_locked(arch_spinlock_t *lock)
>  {
>  	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
>  
>  	return tmp.tail != tmp.head;
>  }
>  
> -static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
> +static inline int arch_spin_is_contended(arch_spinlock_t *lock)
>  {
>  	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
>  
>  	return (__ticket_t)(tmp.tail - tmp.head) > 1;
>  }
> -
> -static inline int arch_spin_is_locked(arch_spinlock_t *lock)
> -{
> -	return __ticket_spin_is_locked(lock);
> -}
> -
> -static inline int arch_spin_is_contended(arch_spinlock_t *lock)
> -{
> -	return __ticket_spin_is_contended(lock);
> -}
>  #define arch_spin_is_contended	arch_spin_is_contended
>  
> -static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
> -{
> -	__ticket_spin_lock(lock);
> -}
> -
> -static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
> -{
> -	return __ticket_spin_trylock(lock);
> -}
> -
> -static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
> -{
> -	__ticket_spin_unlock(lock);
> -}
> -
>  static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
>  						  unsigned long flags)
>  {
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 8/19]  x86/pvticketlock: When paravirtualizing ticket locks, increment by 2
  2013-06-01 19:24   ` Raghavendra K T
@ 2013-06-03 15:53     ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-06-03 15:53 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: gleb, mingo, jeremy, x86, hpa, pbonzini, linux-doc, habanero,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On Sun, Jun 02, 2013 at 12:54:02AM +0530, Raghavendra K T wrote:
> x86/pvticketlock: When paravirtualizing ticket locks, increment by 2
> 
> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> 
> Increment ticket head/tails by 2 rather than 1 to leave the LSB free
> to store a "is in slowpath state" bit.  This halves the number
> of possible CPUs for a given ticket size, but this shouldn't matter
> in practice - kernels built for 32k+ CPU systems are probably
> specially built for the hardware rather than a generic distro
> kernel.
> 
> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> Tested-by: Attilio Rao <attilio.rao@citrix.com>
> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/include/asm/spinlock.h       |   10 +++++-----
>  arch/x86/include/asm/spinlock_types.h |   10 +++++++++-
>  2 files changed, 14 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
> index 7442410..04a5cd5 100644
> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
> @@ -78,7 +78,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
>   */
>  static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
>  {
> -	register struct __raw_tickets inc = { .tail = 1 };
> +	register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
>  
>  	inc = xadd(&lock->tickets, inc);
>  
> @@ -104,7 +104,7 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
>  	if (old.tickets.head != old.tickets.tail)
>  		return 0;
>  
> -	new.head_tail = old.head_tail + (1 << TICKET_SHIFT);
> +	new.head_tail = old.head_tail + (TICKET_LOCK_INC << TICKET_SHIFT);
>  
>  	/* cmpxchg is a full barrier, so nothing can move before it */
>  	return cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) == old.head_tail;
> @@ -112,9 +112,9 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
>  
>  static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
>  {
> -	__ticket_t next = lock->tickets.head + 1;
> +	__ticket_t next = lock->tickets.head + TICKET_LOCK_INC;
>  
> -	__add(&lock->tickets.head, 1, UNLOCK_LOCK_PREFIX);
> +	__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
>  	__ticket_unlock_kick(lock, next);
>  }
>  
> @@ -129,7 +129,7 @@ static inline int arch_spin_is_contended(arch_spinlock_t *lock)
>  {
>  	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
>  
> -	return (__ticket_t)(tmp.tail - tmp.head) > 1;
> +	return (__ticket_t)(tmp.tail - tmp.head) > TICKET_LOCK_INC;
>  }
>  #define arch_spin_is_contended	arch_spin_is_contended
>  
> diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
> index 83fd3c7..e96fcbd 100644
> --- a/arch/x86/include/asm/spinlock_types.h
> +++ b/arch/x86/include/asm/spinlock_types.h
> @@ -3,7 +3,13 @@
>  
>  #include <linux/types.h>
>  
> -#if (CONFIG_NR_CPUS < 256)
> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
> +#define __TICKET_LOCK_INC	2
> +#else
> +#define __TICKET_LOCK_INC	1
> +#endif
> +
> +#if (CONFIG_NR_CPUS < (256 / __TICKET_LOCK_INC))
>  typedef u8  __ticket_t;
>  typedef u16 __ticketpair_t;
>  #else
> @@ -11,6 +17,8 @@ typedef u16 __ticket_t;
>  typedef u32 __ticketpair_t;
>  #endif
>  
> +#define TICKET_LOCK_INC	((__ticket_t)__TICKET_LOCK_INC)
> +
>  #define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
>  
>  typedef struct arch_spinlock {
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread
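
A quick illustration of the encoding this change enables (a sketch, not part of
the patch): with __TICKET_LOCK_INC set to 2 the ticket values are always even,
so bit 0 of the tail never carries ticket information and is free to act as the
"in slowpath" flag that later patches set via __ticket_enter_slowpath():

	/* hypothetical helper, for illustration only */
	static inline bool example_tail_in_slowpath(arch_spinlock_t *lock)
	{
		return ACCESS_ONCE(lock->tickets.tail) & 1;	/* flag bit, not a ticket bit */
	}

This is also why the type selection above becomes
CONFIG_NR_CPUS < (256 / __TICKET_LOCK_INC): a u8 ticket now distinguishes 128
CPUs rather than 256.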

* Re: [PATCH RFC V9 9/19]  Split out rate limiting from jump_label.h
  2013-06-01 19:24   ` Raghavendra K T
@ 2013-06-03 15:56     ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-06-03 15:56 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: gleb, mingo, jeremy, x86, hpa, pbonzini, linux-doc, habanero,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On Sun, Jun 02, 2013 at 12:54:22AM +0530, Raghavendra K T wrote:
> Split jumplabel ratelimit

I would change the title a bit, perhaps prefix it with: "jump_label: "
> 
> From: Andrew Jones <drjones@redhat.com>
> 
> Commit b202952075f62603bea9bfb6ebc6b0420db11949 introduced rate limiting

Also please add right after the git id this:

("perf, core: Rate limit perf_sched_events jump_label patching")

> for jump label disabling. The changes were made in the jump label code
> in order to be more widely available and to keep things tidier. This is
> all fine, except now jump_label.h includes linux/workqueue.h, which
> makes it impossible to include jump_label.h from anything that
> workqueue.h needs. For example, it's now impossible to include
> jump_label.h from asm/spinlock.h, which is done in proposed
> pv-ticketlock patches. This patch splits out the rate limiting related
> changes from jump_label.h into a new file, jump_label_ratelimit.h, to
> resolve the issue.
> 
> Signed-off-by: Andrew Jones <drjones@redhat.com>
> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>

Otherwise looks fine to me:

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  include/linux/jump_label.h           |   26 +-------------------------
>  include/linux/jump_label_ratelimit.h |   34 ++++++++++++++++++++++++++++++++++
>  include/linux/perf_event.h           |    1 +
>  kernel/jump_label.c                  |    1 +
>  4 files changed, 37 insertions(+), 25 deletions(-)
>  create mode 100644 include/linux/jump_label_ratelimit.h
> 
> diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
> index 0976fc4..53cdf89 100644
> --- a/include/linux/jump_label.h
> +++ b/include/linux/jump_label.h
> @@ -48,7 +48,6 @@
>  
>  #include <linux/types.h>
>  #include <linux/compiler.h>
> -#include <linux/workqueue.h>
>  
>  #if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL)
>  
> @@ -61,12 +60,6 @@ struct static_key {
>  #endif
>  };
>  
> -struct static_key_deferred {
> -	struct static_key key;
> -	unsigned long timeout;
> -	struct delayed_work work;
> -};
> -
>  # include <asm/jump_label.h>
>  # define HAVE_JUMP_LABEL
>  #endif	/* CC_HAVE_ASM_GOTO && CONFIG_JUMP_LABEL */
> @@ -119,10 +112,7 @@ extern void arch_jump_label_transform_static(struct jump_entry *entry,
>  extern int jump_label_text_reserved(void *start, void *end);
>  extern void static_key_slow_inc(struct static_key *key);
>  extern void static_key_slow_dec(struct static_key *key);
> -extern void static_key_slow_dec_deferred(struct static_key_deferred *key);
>  extern void jump_label_apply_nops(struct module *mod);
> -extern void
> -jump_label_rate_limit(struct static_key_deferred *key, unsigned long rl);
>  
>  #define STATIC_KEY_INIT_TRUE ((struct static_key) \
>  	{ .enabled = ATOMIC_INIT(1), .entries = (void *)1 })
> @@ -141,10 +131,6 @@ static __always_inline void jump_label_init(void)
>  {
>  }
>  
> -struct static_key_deferred {
> -	struct static_key  key;
> -};
> -
>  static __always_inline bool static_key_false(struct static_key *key)
>  {
>  	if (unlikely(atomic_read(&key->enabled)) > 0)
> @@ -169,11 +155,6 @@ static inline void static_key_slow_dec(struct static_key *key)
>  	atomic_dec(&key->enabled);
>  }
>  
> -static inline void static_key_slow_dec_deferred(struct static_key_deferred *key)
> -{
> -	static_key_slow_dec(&key->key);
> -}
> -
>  static inline int jump_label_text_reserved(void *start, void *end)
>  {
>  	return 0;
> @@ -187,12 +168,6 @@ static inline int jump_label_apply_nops(struct module *mod)
>  	return 0;
>  }
>  
> -static inline void
> -jump_label_rate_limit(struct static_key_deferred *key,
> -		unsigned long rl)
> -{
> -}
> -
>  #define STATIC_KEY_INIT_TRUE ((struct static_key) \
>  		{ .enabled = ATOMIC_INIT(1) })
>  #define STATIC_KEY_INIT_FALSE ((struct static_key) \
> @@ -203,6 +178,7 @@ jump_label_rate_limit(struct static_key_deferred *key,
>  #define STATIC_KEY_INIT STATIC_KEY_INIT_FALSE
>  #define jump_label_enabled static_key_enabled
>  
> +static inline int atomic_read(const atomic_t *v);
>  static inline bool static_key_enabled(struct static_key *key)
>  {
>  	return (atomic_read(&key->enabled) > 0);
> diff --git a/include/linux/jump_label_ratelimit.h b/include/linux/jump_label_ratelimit.h
> new file mode 100644
> index 0000000..1137883
> --- /dev/null
> +++ b/include/linux/jump_label_ratelimit.h
> @@ -0,0 +1,34 @@
> +#ifndef _LINUX_JUMP_LABEL_RATELIMIT_H
> +#define _LINUX_JUMP_LABEL_RATELIMIT_H
> +
> +#include <linux/jump_label.h>
> +#include <linux/workqueue.h>
> +
> +#if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL)
> +struct static_key_deferred {
> +	struct static_key key;
> +	unsigned long timeout;
> +	struct delayed_work work;
> +};
> +#endif
> +
> +#ifdef HAVE_JUMP_LABEL
> +extern void static_key_slow_dec_deferred(struct static_key_deferred *key);
> +extern void
> +jump_label_rate_limit(struct static_key_deferred *key, unsigned long rl);
> +
> +#else	/* !HAVE_JUMP_LABEL */
> +struct static_key_deferred {
> +	struct static_key  key;
> +};
> +static inline void static_key_slow_dec_deferred(struct static_key_deferred *key)
> +{
> +	static_key_slow_dec(&key->key);
> +}
> +static inline void
> +jump_label_rate_limit(struct static_key_deferred *key,
> +		unsigned long rl)
> +{
> +}
> +#endif	/* HAVE_JUMP_LABEL */
> +#endif	/* _LINUX_JUMP_LABEL_RATELIMIT_H */
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index f463a46..a8eac60 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -48,6 +48,7 @@ struct perf_guest_info_callbacks {
>  #include <linux/cpu.h>
>  #include <linux/irq_work.h>
>  #include <linux/static_key.h>
> +#include <linux/jump_label_ratelimit.h>
>  #include <linux/atomic.h>
>  #include <linux/sysfs.h>
>  #include <linux/perf_regs.h>
> diff --git a/kernel/jump_label.c b/kernel/jump_label.c
> index 60f48fa..297a924 100644
> --- a/kernel/jump_label.c
> +++ b/kernel/jump_label.c
> @@ -13,6 +13,7 @@
>  #include <linux/sort.h>
>  #include <linux/err.h>
>  #include <linux/static_key.h>
> +#include <linux/jump_label_ratelimit.h>
>  
>  #ifdef HAVE_JUMP_LABEL
>  
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread
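
For readers following along, a minimal usage sketch of the split-out API (the
key name below is hypothetical, not taken from the patch): deferred users now
include the new header, while plain static_key users keep including
<linux/jump_label.h> without dragging in workqueue.h.

	#include <linux/jump_label_ratelimit.h>

	static struct static_key_deferred example_key;

	static void example_disable_feature(void)
	{
		/* defer the actual jump label patching to rate-limit it */
		jump_label_rate_limit(&example_key, HZ);
		static_key_slow_dec_deferred(&example_key);
	}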

* Re: [PATCH RFC V9 12/19]  xen: Enable PV ticketlocks on HVM Xen
  2013-06-01 19:25   ` Raghavendra K T
@ 2013-06-03 15:57     ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-06-03 15:57 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: gleb, mingo, jeremy, x86, hpa, pbonzini, linux-doc, habanero,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On Sun, Jun 02, 2013 at 12:55:03AM +0530, Raghavendra K T wrote:
> xen: Enable PV ticketlocks on HVM Xen

There is more to it. You should also revert 70dd4998cb85f0ecd6ac892cc7232abefa432efb
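
Presumably (an assumption, not verified against that commit) this refers to the
early bail-out still visible in xen_init_spinlocks() later in this thread:

	if (xen_hvm_domain())
		return;

which would otherwise keep the PV ticketlocks disabled for HVM guests even
though xen_hvm_smp_init() now calls xen_init_spinlocks().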

> 
> From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> ---
>  arch/x86/xen/smp.c |    1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> index dcdc91c..8d2abf7 100644
> --- a/arch/x86/xen/smp.c
> +++ b/arch/x86/xen/smp.c
> @@ -682,4 +682,5 @@ void __init xen_hvm_smp_init(void)
>  	smp_ops.cpu_die = xen_hvm_cpu_die;
>  	smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
>  	smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
> +	xen_init_spinlocks();
>  }
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 16/19] kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor
  2013-06-01 19:25   ` Raghavendra K T
@ 2013-06-03 16:00     ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-06-03 16:00 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: gleb, mingo, jeremy, x86, hpa, pbonzini, linux-doc, habanero,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On Sun, Jun 02, 2013 at 12:55:57AM +0530, Raghavendra K T wrote:
> kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor
> 
> From: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
> 
> During smp_boot_cpus a paravirtualized KVM guest detects whether the hypervisor
> has the required feature (KVM_FEATURE_PV_UNHALT) to support pv-ticketlocks. If
> so, support for pv-ticketlocks is registered via pv_lock_ops.
> 
> Use the KVM_HC_KICK_CPU hypercall to wake up a waiting/halted vcpu.
> 
> Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
> Signed-off-by: Suzuki Poulose <suzuki@in.ibm.com>
> [Raghu: check_zero race fix, enum for kvm_contention_stat,
> jump_label related changes]
> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> ---
>  arch/x86/include/asm/kvm_para.h |   14 ++
>  arch/x86/kernel/kvm.c           |  256 +++++++++++++++++++++++++++++++++++++++
>  2 files changed, 268 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
> index 695399f..427afcb 100644
> --- a/arch/x86/include/asm/kvm_para.h
> +++ b/arch/x86/include/asm/kvm_para.h
> @@ -118,10 +118,20 @@ void kvm_async_pf_task_wait(u32 token);
>  void kvm_async_pf_task_wake(u32 token);
>  u32 kvm_read_and_reset_pf_reason(void);
>  extern void kvm_disable_steal_time(void);
> -#else
> -#define kvm_guest_init() do { } while (0)
> +
> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
> +void __init kvm_spinlock_init(void);
> +#else /* !CONFIG_PARAVIRT_SPINLOCKS */
> +static inline void kvm_spinlock_init(void)
> +{
> +}
> +#endif /* CONFIG_PARAVIRT_SPINLOCKS */
> +
> +#else /* CONFIG_KVM_GUEST */
> +#define kvm_guest_init() do {} while (0)
>  #define kvm_async_pf_task_wait(T) do {} while(0)
>  #define kvm_async_pf_task_wake(T) do {} while(0)
> +
>  static inline u32 kvm_read_and_reset_pf_reason(void)
>  {
>  	return 0;
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index cd6d9a5..2715b92 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -34,6 +34,7 @@
>  #include <linux/sched.h>
>  #include <linux/slab.h>
>  #include <linux/kprobes.h>
> +#include <linux/debugfs.h>
>  #include <asm/timer.h>
>  #include <asm/cpu.h>
>  #include <asm/traps.h>
> @@ -419,6 +420,7 @@ static void __init kvm_smp_prepare_boot_cpu(void)
>  	WARN_ON(kvm_register_clock("primary cpu clock"));
>  	kvm_guest_cpu_init();
>  	native_smp_prepare_boot_cpu();
> +	kvm_spinlock_init();
>  }
>  
>  static void __cpuinit kvm_guest_cpu_online(void *dummy)
> @@ -523,3 +525,257 @@ static __init int activate_jump_labels(void)
>  	return 0;
>  }
>  arch_initcall(activate_jump_labels);
> +
> +/* Kick a cpu by its apicid. Used to wake up a halted vcpu */
> +void kvm_kick_cpu(int cpu)
> +{
> +	int apicid;
> +
> +	apicid = per_cpu(x86_cpu_to_apicid, cpu);
> +	kvm_hypercall1(KVM_HC_KICK_CPU, apicid);
> +}
> +
> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
> +
> +enum kvm_contention_stat {
> +	TAKEN_SLOW,
> +	TAKEN_SLOW_PICKUP,
> +	RELEASED_SLOW,
> +	RELEASED_SLOW_KICKED,
> +	NR_CONTENTION_STATS
> +};
> +
> +#ifdef CONFIG_KVM_DEBUG_FS
> +#define HISTO_BUCKETS	30
> +
> +static struct kvm_spinlock_stats
> +{
> +	u32 contention_stats[NR_CONTENTION_STATS];
> +	u32 histo_spin_blocked[HISTO_BUCKETS+1];
> +	u64 time_blocked;
> +} spinlock_stats;
> +
> +static u8 zero_stats;
> +
> +static inline void check_zero(void)
> +{
> +	u8 ret;
> +	u8 old;
> +
> +	old = ACCESS_ONCE(zero_stats);
> +	if (unlikely(old)) {
> +		ret = cmpxchg(&zero_stats, old, 0);
> +		/* This ensures only one fellow resets the stat */
> +		if (ret == old)
> +			memset(&spinlock_stats, 0, sizeof(spinlock_stats));
> +	}
> +}
> +
> +static inline void add_stats(enum kvm_contention_stat var, u32 val)
> +{
> +	check_zero();
> +	spinlock_stats.contention_stats[var] += val;
> +}
> +
> +
> +static inline u64 spin_time_start(void)
> +{
> +	return sched_clock();
> +}
> +
> +static void __spin_time_accum(u64 delta, u32 *array)
> +{
> +	unsigned index;
> +
> +	index = ilog2(delta);
> +	check_zero();
> +
> +	if (index < HISTO_BUCKETS)
> +		array[index]++;
> +	else
> +		array[HISTO_BUCKETS]++;
> +}
> +
> +static inline void spin_time_accum_blocked(u64 start)
> +{
> +	u32 delta;
> +
> +	delta = sched_clock() - start;
> +	__spin_time_accum(delta, spinlock_stats.histo_spin_blocked);
> +	spinlock_stats.time_blocked += delta;
> +}
> +
> +static struct dentry *d_spin_debug;
> +static struct dentry *d_kvm_debug;
> +
> +struct dentry *kvm_init_debugfs(void)
> +{
> +	d_kvm_debug = debugfs_create_dir("kvm", NULL);
> +	if (!d_kvm_debug)
> +		printk(KERN_WARNING "Could not create 'kvm' debugfs directory\n");
> +
> +	return d_kvm_debug;
> +}
> +
> +static int __init kvm_spinlock_debugfs(void)
> +{
> +	struct dentry *d_kvm;
> +
> +	d_kvm = kvm_init_debugfs();
> +	if (d_kvm == NULL)
> +		return -ENOMEM;
> +
> +	d_spin_debug = debugfs_create_dir("spinlocks", d_kvm);
> +
> +	debugfs_create_u8("zero_stats", 0644, d_spin_debug, &zero_stats);
> +
> +	debugfs_create_u32("taken_slow", 0444, d_spin_debug,
> +		   &spinlock_stats.contention_stats[TAKEN_SLOW]);
> +	debugfs_create_u32("taken_slow_pickup", 0444, d_spin_debug,
> +		   &spinlock_stats.contention_stats[TAKEN_SLOW_PICKUP]);
> +
> +	debugfs_create_u32("released_slow", 0444, d_spin_debug,
> +		   &spinlock_stats.contention_stats[RELEASED_SLOW]);
> +	debugfs_create_u32("released_slow_kicked", 0444, d_spin_debug,
> +		   &spinlock_stats.contention_stats[RELEASED_SLOW_KICKED]);
> +
> +	debugfs_create_u64("time_blocked", 0444, d_spin_debug,
> +			   &spinlock_stats.time_blocked);
> +
> +	debugfs_create_u32_array("histo_blocked", 0444, d_spin_debug,
> +		     spinlock_stats.histo_spin_blocked, HISTO_BUCKETS + 1);
> +
> +	return 0;
> +}
> +fs_initcall(kvm_spinlock_debugfs);
> +#else  /* !CONFIG_KVM_DEBUG_FS */
> +#define TIMEOUT			(1 << 10)

What do you use that for?


> +static inline void add_stats(enum kvm_contention_stat var, u32 val)
> +{
> +}
> +
> +static inline u64 spin_time_start(void)
> +{
> +	return 0;
> +}
> +
> +static inline void spin_time_accum_blocked(u64 start)
> +{
> +}
> +#endif  /* CONFIG_KVM_DEBUG_FS */
> +
> +struct kvm_lock_waiting {
> +	struct arch_spinlock *lock;
> +	__ticket_t want;
> +};
> +
> +/* cpus 'waiting' on a spinlock to become available */
> +static cpumask_t waiting_cpus;
> +
> +/* Track spinlock on which a cpu is waiting */
> +static DEFINE_PER_CPU(struct kvm_lock_waiting, lock_waiting);
> +
> +static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
> +{
> +	struct kvm_lock_waiting *w;
> +	int cpu;
> +	u64 start;
> +	unsigned long flags;
> +
> +	w = &__get_cpu_var(lock_waiting);
> +	cpu = smp_processor_id();
> +	start = spin_time_start();
> +
> +	/*
> +	 * Make sure an interrupt handler can't upset things in a
> +	 * partially setup state.
> +	 */
> +	local_irq_save(flags);
> +
> +	/*
> +	 * The ordering protocol on this is that the "lock" pointer
> +	 * may only be set non-NULL if the "want" ticket is correct.
> +	 * If we're updating "want", we must first clear "lock".
> +	 */
> +	w->lock = NULL;
> +	smp_wmb();
> +	w->want = want;
> +	smp_wmb();
> +	w->lock = lock;
> +
> +	add_stats(TAKEN_SLOW, 1);
> +
> +	/*
> +	 * This uses set_bit, which is atomic but we should not rely on its
> +	 * reordering guarantees. So a barrier is needed after this call.
> +	 */
> +	cpumask_set_cpu(cpu, &waiting_cpus);
> +
> +	barrier();
> +
> +	/*
> +	 * Mark entry to slowpath before doing the pickup test to make
> +	 * sure we don't deadlock with an unlocker.
> +	 */
> +	__ticket_enter_slowpath(lock);
> +
> +	/*
> +	 * check again make sure it didn't become free while
> +	 * we weren't looking.
> +	 */
> +	if (ACCESS_ONCE(lock->tickets.head) == want) {
> +		add_stats(TAKEN_SLOW_PICKUP, 1);
> +		goto out;
> +	}
> +
> +	/* Allow interrupts while blocked */
> +	local_irq_restore(flags);
> +
> +	/* halt until it's our turn and kicked. */
> +	halt();
> +
> +	local_irq_save(flags);
> +out:
> +	cpumask_clear_cpu(cpu, &waiting_cpus);
> +	w->lock = NULL;
> +	local_irq_restore(flags);
> +	spin_time_accum_blocked(start);
> +}
> +PV_CALLEE_SAVE_REGS_THUNK(kvm_lock_spinning);
> +
> +/* Kick vcpu waiting on @lock->head to reach value @ticket */
> +static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
> +{
> +	int cpu;
> +
> +	add_stats(RELEASED_SLOW, 1);
> +	for_each_cpu(cpu, &waiting_cpus) {
> +		const struct kvm_lock_waiting *w = &per_cpu(lock_waiting, cpu);
> +		if (ACCESS_ONCE(w->lock) == lock &&
> +		    ACCESS_ONCE(w->want) == ticket) {
> +			add_stats(RELEASED_SLOW_KICKED, 1);
> +			kvm_kick_cpu(cpu);
> +			break;
> +		}
> +	}
> +}
> +
> +/*
> + * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
> + */
> +void __init kvm_spinlock_init(void)
> +{
> +	if (!kvm_para_available())
> +		return;
> +	/* Does host kernel support KVM_FEATURE_PV_UNHALT? */
> +	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
> +		return;
> +
> +	printk(KERN_INFO"KVM setup paravirtual spinlock\n");

That spacing is odd.
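
Presumably just a missing space after KERN_INFO, i.e.:

	printk(KERN_INFO "KVM setup paravirtual spinlock\n");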

> +
> +	static_key_slow_inc(&paravirt_ticketlocks_enabled);
> +
> +	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
> +	pv_lock_ops.unlock_kick = kvm_unlock_kick;
> +}
> +#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread
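
To make the ordering protocol in kvm_lock_spinning()/kvm_unlock_kick() easier to
follow, here is a summary distilled from the quoted code (an illustrative
sketch, not code from the patch):

	/*
	 * Waiter side, kvm_lock_spinning():
	 *
	 *	w->lock = NULL;
	 *	smp_wmb();
	 *	w->want = want;
	 *	smp_wmb();
	 *	w->lock = lock;
	 *
	 * Kicker side, kvm_unlock_kick():
	 *
	 *	if (ACCESS_ONCE(w->lock) == lock &&
	 *	    ACCESS_ONCE(w->want) == ticket)
	 *		kvm_kick_cpu(cpu);
	 *
	 * Publishing "lock" last means the kicker can never pair the new lock
	 * pointer with a stale "want".  A spurious kick is cheap (one more pass
	 * through the slowpath), while a missed kick would leave the vcpu
	 * halted, which is why the waiter re-checks tickets.head after setting
	 * the slowpath flag and before calling halt().
	 */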

* Re: [PATCH RFC V9 5/19]  xen/pvticketlock: Xen implementation for PV ticket locks
  2013-06-01 19:23   ` Raghavendra K T
@ 2013-06-03 16:03     ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-06-03 16:03 UTC (permalink / raw)
  To: Raghavendra K T, stefan.bader
  Cc: gleb, mingo, jeremy, x86, hpa, pbonzini, linux-doc, habanero,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On Sat, Jun 01, 2013 at 12:23:14PM -0700, Raghavendra K T wrote:
> xen/pvticketlock: Xen implementation for PV ticket locks
> 
> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> 
> Replace the old Xen implementation of PV spinlocks with an implementation
> of xen_lock_spinning and xen_unlock_kick.
> 
> xen_lock_spinning simply registers the cpu in its entry in lock_waiting,
> adds itself to the waiting_cpus set, and blocks on an event channel
> until the channel becomes pending.
> 
> xen_unlock_kick searches the cpus in waiting_cpus looking for the one
> which next wants this lock with the next ticket, if any.  If found,
> it kicks it by making its event channel pending, which wakes it up.
> 
> We need to make sure interrupts are disabled while we're relying on the
> contents of the per-cpu lock_waiting values, otherwise an interrupt
> handler could come in, try to take some other lock, block, and overwrite
> our values.
> 
> Raghu: use function + enum instead of macro, cmpxchg for zero status reset
> 
> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> ---
>  arch/x86/xen/spinlock.c |  347 +++++++++++------------------------------------
>  1 file changed, 78 insertions(+), 269 deletions(-)
> 
> diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
> index d6481a9..860e190 100644
> --- a/arch/x86/xen/spinlock.c
> +++ b/arch/x86/xen/spinlock.c
> @@ -16,45 +16,44 @@
>  #include "xen-ops.h"
>  #include "debugfs.h"
>  
> -#ifdef CONFIG_XEN_DEBUG_FS
> -static struct xen_spinlock_stats
> -{
> -	u64 taken;
> -	u32 taken_slow;
> -	u32 taken_slow_nested;
> -	u32 taken_slow_pickup;
> -	u32 taken_slow_spurious;
> -	u32 taken_slow_irqenable;
> +enum xen_contention_stat {
> +	TAKEN_SLOW,
> +	TAKEN_SLOW_PICKUP,
> +	TAKEN_SLOW_SPURIOUS,
> +	RELEASED_SLOW,
> +	RELEASED_SLOW_KICKED,
> +	NR_CONTENTION_STATS
> +};
>  
> -	u64 released;
> -	u32 released_slow;
> -	u32 released_slow_kicked;
>  
> +#ifdef CONFIG_XEN_DEBUG_FS
>  #define HISTO_BUCKETS	30
> -	u32 histo_spin_total[HISTO_BUCKETS+1];
> -	u32 histo_spin_spinning[HISTO_BUCKETS+1];
> +static struct xen_spinlock_stats
> +{
> +	u32 contention_stats[NR_CONTENTION_STATS];
>  	u32 histo_spin_blocked[HISTO_BUCKETS+1];
> -
> -	u64 time_total;
> -	u64 time_spinning;
>  	u64 time_blocked;
>  } spinlock_stats;
>  
>  static u8 zero_stats;
>  
> -static unsigned lock_timeout = 1 << 10;
> -#define TIMEOUT lock_timeout
> -
>  static inline void check_zero(void)
>  {
> -	if (unlikely(zero_stats)) {
> -		memset(&spinlock_stats, 0, sizeof(spinlock_stats));
> -		zero_stats = 0;
> +	u8 ret;
> +	u8 old = ACCESS_ONCE(zero_stats);
> +	if (unlikely(old)) {
> +		ret = cmpxchg(&zero_stats, old, 0);
> +		/* This ensures only one fellow resets the stat */
> +		if (ret == old)
> +			memset(&spinlock_stats, 0, sizeof(spinlock_stats));
>  	}
>  }
>  
> -#define ADD_STATS(elem, val)			\
> -	do { check_zero(); spinlock_stats.elem += (val); } while(0)
> +static inline void add_stats(enum xen_contention_stat var, u32 val)
> +{
> +	check_zero();
> +	spinlock_stats.contention_stats[var] += val;
> +}
>  
>  static inline u64 spin_time_start(void)
>  {
> @@ -73,22 +72,6 @@ static void __spin_time_accum(u64 delta, u32 *array)
>  		array[HISTO_BUCKETS]++;
>  }
>  
> -static inline void spin_time_accum_spinning(u64 start)
> -{
> -	u32 delta = xen_clocksource_read() - start;
> -
> -	__spin_time_accum(delta, spinlock_stats.histo_spin_spinning);
> -	spinlock_stats.time_spinning += delta;
> -}
> -
> -static inline void spin_time_accum_total(u64 start)
> -{
> -	u32 delta = xen_clocksource_read() - start;
> -
> -	__spin_time_accum(delta, spinlock_stats.histo_spin_total);
> -	spinlock_stats.time_total += delta;
> -}
> -
>  static inline void spin_time_accum_blocked(u64 start)
>  {
>  	u32 delta = xen_clocksource_read() - start;
> @@ -98,19 +81,15 @@ static inline void spin_time_accum_blocked(u64 start)
>  }
>  #else  /* !CONFIG_XEN_DEBUG_FS */
>  #define TIMEOUT			(1 << 10)
> -#define ADD_STATS(elem, val)	do { (void)(val); } while(0)
> +static inline void add_stats(enum xen_contention_stat var, u32 val)
> +{
> +}
>  
>  static inline u64 spin_time_start(void)
>  {
>  	return 0;
>  }
>  
> -static inline void spin_time_accum_total(u64 start)
> -{
> -}
> -static inline void spin_time_accum_spinning(u64 start)
> -{
> -}
>  static inline void spin_time_accum_blocked(u64 start)
>  {
>  }
> @@ -133,229 +112,82 @@ typedef u16 xen_spinners_t;
>  	asm(LOCK_PREFIX " decw %0" : "+m" ((xl)->spinners) : : "memory");
>  #endif
>  
> -struct xen_spinlock {
> -	unsigned char lock;		/* 0 -> free; 1 -> locked */
> -	xen_spinners_t spinners;	/* count of waiting cpus */
> +struct xen_lock_waiting {
> +	struct arch_spinlock *lock;
> +	__ticket_t want;
>  };
>  
>  static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
> +static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
> +static cpumask_t waiting_cpus;
>  
> -#if 0
> -static int xen_spin_is_locked(struct arch_spinlock *lock)
> -{
> -	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
> -
> -	return xl->lock != 0;
> -}
> -
> -static int xen_spin_is_contended(struct arch_spinlock *lock)
> +static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>  {
> -	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
> -
> -	/* Not strictly true; this is only the count of contended
> -	   lock-takers entering the slow path. */
> -	return xl->spinners != 0;
> -}
> -
> -static int xen_spin_trylock(struct arch_spinlock *lock)
> -{
> -	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
> -	u8 old = 1;
> -
> -	asm("xchgb %b0,%1"
> -	    : "+q" (old), "+m" (xl->lock) : : "memory");
> -
> -	return old == 0;
> -}
> -
> -static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
> -
> -/*
> - * Mark a cpu as interested in a lock.  Returns the CPU's previous
> - * lock of interest, in case we got preempted by an interrupt.
> - */
> -static inline struct xen_spinlock *spinning_lock(struct xen_spinlock *xl)
> -{
> -	struct xen_spinlock *prev;
> -
> -	prev = __this_cpu_read(lock_spinners);
> -	__this_cpu_write(lock_spinners, xl);
> -
> -	wmb();			/* set lock of interest before count */
> -
> -	inc_spinners(xl);
> -
> -	return prev;
> -}
> -
> -/*
> - * Mark a cpu as no longer interested in a lock.  Restores previous
> - * lock of interest (NULL for none).
> - */
> -static inline void unspinning_lock(struct xen_spinlock *xl, struct xen_spinlock *prev)
> -{
> -	dec_spinners(xl);
> -	wmb();			/* decrement count before restoring lock */
> -	__this_cpu_write(lock_spinners, prev);
> -}
> -
> -static noinline int xen_spin_lock_slow(struct arch_spinlock *lock, bool irq_enable)
> -{
> -	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
> -	struct xen_spinlock *prev;
>  	int irq = __this_cpu_read(lock_kicker_irq);
> -	int ret;
> +	struct xen_lock_waiting *w = &__get_cpu_var(lock_waiting);
> +	int cpu = smp_processor_id();
>  	u64 start;
> +	unsigned long flags;
>  
>  	/* If kicker interrupts not initialized yet, just spin */
>  	if (irq == -1)
> -		return 0;
> +		return;
>  
>  	start = spin_time_start();
>  
> -	/* announce we're spinning */
> -	prev = spinning_lock(xl);
> -
> -	ADD_STATS(taken_slow, 1);
> -	ADD_STATS(taken_slow_nested, prev != NULL);
> -
> -	do {
> -		unsigned long flags;
> -
> -		/* clear pending */
> -		xen_clear_irq_pending(irq);
> -
> -		/* check again make sure it didn't become free while
> -		   we weren't looking  */
> -		ret = xen_spin_trylock(lock);
> -		if (ret) {
> -			ADD_STATS(taken_slow_pickup, 1);
> -
> -			/*
> -			 * If we interrupted another spinlock while it
> -			 * was blocking, make sure it doesn't block
> -			 * without rechecking the lock.
> -			 */
> -			if (prev != NULL)
> -				xen_set_irq_pending(irq);
> -			goto out;
> -		}
> +	/*
> +	 * Make sure an interrupt handler can't upset things in a
> +	 * partially setup state.
> +	 */
> +	local_irq_save(flags);
>  
> -		flags = arch_local_save_flags();
> -		if (irq_enable) {
> -			ADD_STATS(taken_slow_irqenable, 1);
> -			raw_local_irq_enable();
> -		}
> +	w->want = want;
> +	smp_wmb();
> +	w->lock = lock;
>  
> -		/*
> -		 * Block until irq becomes pending.  If we're
> -		 * interrupted at this point (after the trylock but
> -		 * before entering the block), then the nested lock
> -		 * handler guarantees that the irq will be left
> -		 * pending if there's any chance the lock became free;
> -		 * xen_poll_irq() returns immediately if the irq is
> -		 * pending.
> -		 */
> -		xen_poll_irq(irq);
> +	/* This uses set_bit, which is atomic and therefore a barrier */
> +	cpumask_set_cpu(cpu, &waiting_cpus);
> +	add_stats(TAKEN_SLOW, 1);
>  
> -		raw_local_irq_restore(flags);
> +	/* clear pending */
> +	xen_clear_irq_pending(irq);
>  
> -		ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
> -	} while (!xen_test_irq_pending(irq)); /* check for spurious wakeups */
> +	/* Only check lock once pending cleared */
> +	barrier();
>  
> +	/* check again make sure it didn't become free while
> +	   we weren't looking  */
> +	if (ACCESS_ONCE(lock->tickets.head) == want) {
> +		add_stats(TAKEN_SLOW_PICKUP, 1);
> +		goto out;
> +	}
> +	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
> +	xen_poll_irq(irq);
> +	add_stats(TAKEN_SLOW_SPURIOUS, !xen_test_irq_pending(irq));
>  	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
> -
>  out:
> -	unspinning_lock(xl, prev);
> +	cpumask_clear_cpu(cpu, &waiting_cpus);
> +	w->lock = NULL;
> +	local_irq_restore(flags);
>  	spin_time_accum_blocked(start);
> -
> -	return ret;
>  }
>  
> -static inline void __xen_spin_lock(struct arch_spinlock *lock, bool irq_enable)
> -{
> -	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
> -	unsigned timeout;
> -	u8 oldval;
> -	u64 start_spin;
> -
> -	ADD_STATS(taken, 1);
> -
> -	start_spin = spin_time_start();
> -
> -	do {
> -		u64 start_spin_fast = spin_time_start();
> -
> -		timeout = TIMEOUT;
> -
> -		asm("1: xchgb %1,%0\n"
> -		    "   testb %1,%1\n"
> -		    "   jz 3f\n"
> -		    "2: rep;nop\n"
> -		    "   cmpb $0,%0\n"
> -		    "   je 1b\n"
> -		    "   dec %2\n"
> -		    "   jnz 2b\n"
> -		    "3:\n"
> -		    : "+m" (xl->lock), "=q" (oldval), "+r" (timeout)
> -		    : "1" (1)
> -		    : "memory");
> -
> -		spin_time_accum_spinning(start_spin_fast);
> -
> -	} while (unlikely(oldval != 0 &&
> -			  (TIMEOUT == ~0 || !xen_spin_lock_slow(lock, irq_enable))));
> -
> -	spin_time_accum_total(start_spin);
> -}
> -
> -static void xen_spin_lock(struct arch_spinlock *lock)
> -{
> -	__xen_spin_lock(lock, false);
> -}
> -
> -static void xen_spin_lock_flags(struct arch_spinlock *lock, unsigned long flags)
> -{
> -	__xen_spin_lock(lock, !raw_irqs_disabled_flags(flags));
> -}
> -
> -static noinline void xen_spin_unlock_slow(struct xen_spinlock *xl)
> +static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
>  {
>  	int cpu;
>  
> -	ADD_STATS(released_slow, 1);
> +	add_stats(RELEASED_SLOW, 1);
> +
> +	for_each_cpu(cpu, &waiting_cpus) {
> +		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
>  
> -	for_each_online_cpu(cpu) {
> -		/* XXX should mix up next cpu selection */
> -		if (per_cpu(lock_spinners, cpu) == xl) {
> -			ADD_STATS(released_slow_kicked, 1);
> +		if (w->lock == lock && w->want == next) {
> +			add_stats(RELEASED_SLOW_KICKED, 1);
>  			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);

When this was initially implemented there was a "break" here. But commit
76eaca031f0af2bb303e405986f637811956a422 ("xen: Send spinlock IPI to all waiters")
fixed an issue and changed this to send an IPI to all of the CPUs, which means
the 'break' was removed.

With this implementation of the spinlock you know exactly which vCPU is waiting
for this ticket, so this code should reintroduce the 'break'.


>  		}
>  	}
>  }
>  
> -static void xen_spin_unlock(struct arch_spinlock *lock)
> -{
> -	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
> -
> -	ADD_STATS(released, 1);
> -
> -	smp_wmb();		/* make sure no writes get moved after unlock */
> -	xl->lock = 0;		/* release lock */
> -
> -	/*
> -	 * Make sure unlock happens before checking for waiting
> -	 * spinners.  We need a strong barrier to enforce the
> -	 * write-read ordering to different memory locations, as the
> -	 * CPU makes no implied guarantees about their ordering.
> -	 */
> -	mb();
> -
> -	if (unlikely(xl->spinners))
> -		xen_spin_unlock_slow(xl);
> -}
> -#endif
> -
>  static irqreturn_t dummy_handler(int irq, void *dev_id)
>  {
>  	BUG();
> @@ -415,15 +247,8 @@ void __init xen_init_spinlocks(void)
>  	if (xen_hvm_domain())
>  		return;
>  
> -	BUILD_BUG_ON(sizeof(struct xen_spinlock) > sizeof(arch_spinlock_t));
> -#if 0
> -	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
> -	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
> -	pv_lock_ops.spin_lock = xen_spin_lock;
> -	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
> -	pv_lock_ops.spin_trylock = xen_spin_trylock;
> -	pv_lock_ops.spin_unlock = xen_spin_unlock;
> -#endif
> +	pv_lock_ops.lock_spinning = xen_lock_spinning;
> +	pv_lock_ops.unlock_kick = xen_unlock_kick;
>  }
>  
>  #ifdef CONFIG_XEN_DEBUG_FS
> @@ -441,37 +266,21 @@ static int __init xen_spinlock_debugfs(void)
>  
>  	debugfs_create_u8("zero_stats", 0644, d_spin_debug, &zero_stats);
>  
> -	debugfs_create_u32("timeout", 0644, d_spin_debug, &lock_timeout);
> -
> -	debugfs_create_u64("taken", 0444, d_spin_debug, &spinlock_stats.taken);
>  	debugfs_create_u32("taken_slow", 0444, d_spin_debug,
> -			   &spinlock_stats.taken_slow);
> -	debugfs_create_u32("taken_slow_nested", 0444, d_spin_debug,
> -			   &spinlock_stats.taken_slow_nested);
> +			   &spinlock_stats.contention_stats[TAKEN_SLOW]);
>  	debugfs_create_u32("taken_slow_pickup", 0444, d_spin_debug,
> -			   &spinlock_stats.taken_slow_pickup);
> +			   &spinlock_stats.contention_stats[TAKEN_SLOW_PICKUP]);
>  	debugfs_create_u32("taken_slow_spurious", 0444, d_spin_debug,
> -			   &spinlock_stats.taken_slow_spurious);
> -	debugfs_create_u32("taken_slow_irqenable", 0444, d_spin_debug,
> -			   &spinlock_stats.taken_slow_irqenable);
> +			   &spinlock_stats.contention_stats[TAKEN_SLOW_SPURIOUS]);
>  
> -	debugfs_create_u64("released", 0444, d_spin_debug, &spinlock_stats.released);
>  	debugfs_create_u32("released_slow", 0444, d_spin_debug,
> -			   &spinlock_stats.released_slow);
> +			   &spinlock_stats.contention_stats[RELEASED_SLOW]);
>  	debugfs_create_u32("released_slow_kicked", 0444, d_spin_debug,
> -			   &spinlock_stats.released_slow_kicked);
> +			   &spinlock_stats.contention_stats[RELEASED_SLOW_KICKED]);
>  
> -	debugfs_create_u64("time_spinning", 0444, d_spin_debug,
> -			   &spinlock_stats.time_spinning);
>  	debugfs_create_u64("time_blocked", 0444, d_spin_debug,
>  			   &spinlock_stats.time_blocked);
> -	debugfs_create_u64("time_total", 0444, d_spin_debug,
> -			   &spinlock_stats.time_total);
>  
> -	debugfs_create_u32_array("histo_total", 0444, d_spin_debug,
> -				spinlock_stats.histo_spin_total, HISTO_BUCKETS + 1);
> -	debugfs_create_u32_array("histo_spinning", 0444, d_spin_debug,
> -				spinlock_stats.histo_spin_spinning, HISTO_BUCKETS + 1);
>  	debugfs_create_u32_array("histo_blocked", 0444, d_spin_debug,
>  				spinlock_stats.histo_spin_blocked, HISTO_BUCKETS + 1);
>  
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 18/19] Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
  2013-06-01 19:26   ` Raghavendra K T
@ 2013-06-03 16:04     ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-06-03 16:04 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: gleb, mingo, jeremy, x86, hpa, pbonzini, linux-doc, habanero,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On Sun, Jun 02, 2013 at 12:56:24AM +0530, Raghavendra K T wrote:
> Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
> 
> From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> 
> KVM_HC_KICK_CPU  hypercall added to wakeup halted vcpu in paravirtual spinlock
> enabled guest.
> 
> KVM_FEATURE_PV_UNHALT enables guest to check whether pv spinlock can be enabled
> in guest.
> 
> Thanks Vatsa for rewriting KVM_HC_KICK_CPU
> 
> Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> ---
>  Documentation/virtual/kvm/cpuid.txt      |    4 ++++
>  Documentation/virtual/kvm/hypercalls.txt |   13 +++++++++++++
>  2 files changed, 17 insertions(+)
> 
> diff --git a/Documentation/virtual/kvm/cpuid.txt b/Documentation/virtual/kvm/cpuid.txt
> index 83afe65..654f43c 100644
> --- a/Documentation/virtual/kvm/cpuid.txt
> +++ b/Documentation/virtual/kvm/cpuid.txt
> @@ -43,6 +43,10 @@ KVM_FEATURE_CLOCKSOURCE2           ||     3 || kvmclock available at msrs
>  KVM_FEATURE_ASYNC_PF               ||     4 || async pf can be enabled by
>                                     ||       || writing to msr 0x4b564d02
>  ------------------------------------------------------------------------------
> +KVM_FEATURE_PV_UNHALT              ||     6 || guest checks this feature bit
> +                                   ||       || before enabling paravirtualized
> +                                   ||       || spinlock support.
> +------------------------------------------------------------------------------
>  KVM_FEATURE_CLOCKSOURCE_STABLE_BIT ||    24 || host will warn if no guest-side
>                                     ||       || per-cpu warps are expected in
>                                     ||       || kvmclock.
> diff --git a/Documentation/virtual/kvm/hypercalls.txt b/Documentation/virtual/kvm/hypercalls.txt
> index ea113b5..2a4da11 100644
> --- a/Documentation/virtual/kvm/hypercalls.txt
> +++ b/Documentation/virtual/kvm/hypercalls.txt
> @@ -64,3 +64,16 @@ Purpose: To enable communication between the hypervisor and guest there is a
>  shared page that contains parts of supervisor visible register state.
>  The guest can map this shared page to access its supervisor register through
>  memory using this hypercall.
> +
> +5. KVM_HC_KICK_CPU
> +------------------------
> +Architecture: x86
> +Status: active
> +Purpose: Hypercall used to wakeup a vcpu from HLT state
> +Usage example : A vcpu of a paravirtualized guest that is busywaiting in guest
> +kernel mode for an event to occur (ex: a spinlock to become available) can
> +execute HLT instruction once it has busy-waited for more than a threshold
> +time-interval. Execution of HLT instruction would cause the hypervisor to put
> +the vcpu to sleep until occurrence of an appropriate event. Another vcpu of the
> +same guest can wakeup the sleeping vcpu by issuing KVM_HC_KICK_CPU hypercall,
> +specifying APIC ID of the vcpu to be wokenup.

woken up.
> 
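
For reference, a minimal guest-side sketch of how these two pieces fit together
(it mirrors the kvm_spinlock_init()/kvm_kick_cpu() helpers added in patch 16/19
of this series; 'cpu', 'apicid' and 'use_pv_spinlocks' are placeholders, not the
exact call sites):

	/* Feature detection (init time): */
	if (kvm_para_available() && kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
		use_pv_spinlocks = true;	/* hypothetical flag */

	/* Waking a halted vcpu (unlock time), where 'cpu' is the waiter to kick: */
	apicid = per_cpu(x86_cpu_to_apicid, cpu);
	kvm_hypercall1(KVM_HC_KICK_CPU, apicid);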

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 19/19] kvm hypervisor: Add directed yield in vcpu block path
  2013-06-01 19:26   ` Raghavendra K T
@ 2013-06-03 16:05     ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-06-03 16:05 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: gleb, mingo, jeremy, x86, hpa, pbonzini, linux-doc, habanero,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On Sun, Jun 02, 2013 at 12:56:45AM +0530, Raghavendra K T wrote:
> kvm hypervisor: Add directed yield in vcpu block path
> 
> From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> 
> We use the improved PLE handler logic in vcpu block patch for
> scheduling rather than plain schedule, so that we can make
> intelligent decisions

You are missing '.' there, and

> 
> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> ---
>  arch/ia64/include/asm/kvm_host.h    |    5 +++++
>  arch/powerpc/include/asm/kvm_host.h |    5 +++++
>  arch/s390/include/asm/kvm_host.h    |    5 +++++
>  arch/x86/include/asm/kvm_host.h     |    2 +-
>  arch/x86/kvm/x86.c                  |    8 ++++++++
>  include/linux/kvm_host.h            |    2 +-
>  virt/kvm/kvm_main.c                 |    6 ++++--
>  7 files changed, 29 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/ia64/include/asm/kvm_host.h b/arch/ia64/include/asm/kvm_host.h
> index 989dd3f..999ab15 100644
> --- a/arch/ia64/include/asm/kvm_host.h
> +++ b/arch/ia64/include/asm/kvm_host.h
> @@ -595,6 +595,11 @@ int kvm_emulate_halt(struct kvm_vcpu *vcpu);
>  int kvm_pal_emul(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run);
>  void kvm_sal_emul(struct kvm_vcpu *vcpu);
>  
> +static inline void kvm_do_schedule(struct kvm_vcpu *vcpu)
> +{
> +	schedule();
> +}
> +
>  #define __KVM_HAVE_ARCH_VM_ALLOC 1
>  struct kvm *kvm_arch_alloc_vm(void);
>  void kvm_arch_free_vm(struct kvm *kvm);
> diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
> index af326cd..1aeecc0 100644
> --- a/arch/powerpc/include/asm/kvm_host.h
> +++ b/arch/powerpc/include/asm/kvm_host.h
> @@ -628,4 +628,9 @@ struct kvm_vcpu_arch {
>  #define __KVM_HAVE_ARCH_WQP
>  #define __KVM_HAVE_CREATE_DEVICE
>  
> +static inline void kvm_do_schedule(struct kvm_vcpu *vcpu)
> +{
> +	schedule();
> +}
> +
>  #endif /* __POWERPC_KVM_HOST_H__ */
> diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
> index 16bd5d1..db09a56 100644
> --- a/arch/s390/include/asm/kvm_host.h
> +++ b/arch/s390/include/asm/kvm_host.h
> @@ -266,4 +266,9 @@ struct kvm_arch{
>  };
>  
>  extern int sie64a(struct kvm_s390_sie_block *, u64 *);
> +static inline void kvm_do_schedule(struct kvm_vcpu *vcpu)
> +{
> +	schedule();
> +}
> +
>  #endif
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 95702de..72ff791 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1042,5 +1042,5 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
>  int kvm_pmu_read_pmc(struct kvm_vcpu *vcpu, unsigned pmc, u64 *data);
>  void kvm_handle_pmu_event(struct kvm_vcpu *vcpu);
>  void kvm_deliver_pmi(struct kvm_vcpu *vcpu);
> -
> +void kvm_do_schedule(struct kvm_vcpu *vcpu);
>  #endif /* _ASM_X86_KVM_HOST_H */
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index b963c86..d26c4be 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7281,6 +7281,14 @@ bool kvm_arch_can_inject_async_page_present(struct kvm_vcpu *vcpu)
>  			kvm_x86_ops->interrupt_allowed(vcpu);
>  }
>  
> +void kvm_do_schedule(struct kvm_vcpu *vcpu)
> +{
> +	/* We try to yield to a kikced vcpu else do a schedule */

s/kikced/kicked/

> +	if (kvm_vcpu_on_spin(vcpu) <= 0)
> +		schedule();
> +}
> +EXPORT_SYMBOL_GPL(kvm_do_schedule);
> +
>  EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
>  EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_inj_virq);
>  EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_page_fault);
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index f0eea07..39efc18 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -565,7 +565,7 @@ void mark_page_dirty_in_slot(struct kvm *kvm, struct kvm_memory_slot *memslot,
>  void kvm_vcpu_block(struct kvm_vcpu *vcpu);
>  void kvm_vcpu_kick(struct kvm_vcpu *vcpu);
>  bool kvm_vcpu_yield_to(struct kvm_vcpu *target);
> -void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu);
> +bool kvm_vcpu_on_spin(struct kvm_vcpu *vcpu);
>  void kvm_resched(struct kvm_vcpu *vcpu);
>  void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
>  void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 302681c..8387247 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1685,7 +1685,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
>  		if (signal_pending(current))
>  			break;
>  
> -		schedule();
> +		kvm_do_schedule(vcpu);
>  	}
>  
>  	finish_wait(&vcpu->wq, &wait);
> @@ -1786,7 +1786,7 @@ bool kvm_vcpu_eligible_for_directed_yield(struct kvm_vcpu *vcpu)
>  }
>  #endif
>  
> -void kvm_vcpu_on_spin(struct kvm_vcpu *me)
> +bool kvm_vcpu_on_spin(struct kvm_vcpu *me)
>  {
>  	struct kvm *kvm = me->kvm;
>  	struct kvm_vcpu *vcpu;
> @@ -1835,6 +1835,8 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
>  
>  	/* Ensure vcpu is not eligible during next spinloop */
>  	kvm_vcpu_set_dy_eligible(me, false);
> +
> +	return yielded;
>  }
>  EXPORT_SYMBOL_GPL(kvm_vcpu_on_spin);
>  
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 9/19]  Split out rate limiting from jump_label.h
  2013-06-03 15:56     ` Konrad Rzeszutek Wilk
@ 2013-06-04  7:15       ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-04  7:15 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: gleb, mingo, jeremy, x86, hpa, pbonzini, linux-doc, habanero,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On 06/03/2013 09:26 PM, Konrad Rzeszutek Wilk wrote:
> On Sun, Jun 02, 2013 at 12:54:22AM +0530, Raghavendra K T wrote:
>> Split jumplabel ratelimit
>
> I would change the title a bit, perhaps prefix it with: "jump_label: "
>>
>> From: Andrew Jones <drjones@redhat.com>
>>
>> Commit b202952075f62603bea9bfb6ebc6b0420db11949 introduced rate limiting
>
> Also please add right after the git id this:
>
> ("perf, core: Rate limit perf_sched_events jump_label patching")

Agreed.
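
(With that applied the reference would read: commit
b202952075f62603bea9bfb6ebc6b0420db11949 ("perf, core: Rate limit
perf_sched_events jump_label patching") introduced rate limiting ...)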


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 12/19]  xen: Enable PV ticketlocks on HVM Xen
  2013-06-03 15:57     ` Konrad Rzeszutek Wilk
@ 2013-06-04  7:16       ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-04  7:16 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: gleb, mingo, jeremy, x86, hpa, pbonzini, linux-doc, habanero,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On 06/03/2013 09:27 PM, Konrad Rzeszutek Wilk wrote:
> On Sun, Jun 02, 2013 at 12:55:03AM +0530, Raghavendra K T wrote:
>> xen: Enable PV ticketlocks on HVM Xen
>
> There is more to it. You should also revert 70dd4998cb85f0ecd6ac892cc7232abefa432efb
>

Yes, true. Do you expect the revert to be folded into this patch itself?


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 16/19] kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor
  2013-06-03 16:00     ` Konrad Rzeszutek Wilk
  (?)
@ 2013-06-04  7:19     ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-04  7:19 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: gleb, mingo, jeremy, x86, hpa, pbonzini, linux-doc, habanero,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, riel, drjones,
	virtualization, srivatsa.vaddagiri

On 06/03/2013 09:30 PM, Konrad Rzeszutek Wilk wrote:
> On Sun, Jun 02, 2013 at 12:55:57AM +0530, Raghavendra K T wrote:
>> kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor
>>
>> From: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
>>
>> During smp_boot_cpus a paravirtualized KVM guest detects if the hypervisor has
>> the required feature (KVM_FEATURE_PV_UNHALT) to support pv-ticketlocks. If so,
>>   support for pv-ticketlocks is registered via pv_lock_ops.
>>
>> Use KVM_HC_KICK_CPU hypercall to wakeup waiting/halted vcpu.
>>
>> Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
>> Signed-off-by: Suzuki Poulose <suzuki@in.ibm.com>
>> [Raghu: check_zero race fix, enum for kvm_contention_stat
>> jumplabel related changes ]
>> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
>> ---
>>   arch/x86/include/asm/kvm_para.h |   14 ++
>>   arch/x86/kernel/kvm.c           |  256 +++++++++++++++++++++++++++++++++++++++
>>   2 files changed, 268 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
>> index 695399f..427afcb 100644
>> --- a/arch/x86/include/asm/kvm_para.h
>> +++ b/arch/x86/include/asm/kvm_para.h
>> @@ -118,10 +118,20 @@ void kvm_async_pf_task_wait(u32 token);
>>   void kvm_async_pf_task_wake(u32 token);
>>   u32 kvm_read_and_reset_pf_reason(void);
>>   extern void kvm_disable_steal_time(void);
>> -#else
>> -#define kvm_guest_init() do { } while (0)
>> +
>> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
>> +void __init kvm_spinlock_init(void);
>> +#else /* !CONFIG_PARAVIRT_SPINLOCKS */
>> +static inline void kvm_spinlock_init(void)
>> +{
>> +}
>> +#endif /* CONFIG_PARAVIRT_SPINLOCKS */
>> +
>> +#else /* CONFIG_KVM_GUEST */
>> +#define kvm_guest_init() do {} while (0)
>>   #define kvm_async_pf_task_wait(T) do {} while(0)
>>   #define kvm_async_pf_task_wake(T) do {} while(0)
>> +
>>   static inline u32 kvm_read_and_reset_pf_reason(void)
>>   {
>>   	return 0;
>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>> index cd6d9a5..2715b92 100644
>> --- a/arch/x86/kernel/kvm.c
>> +++ b/arch/x86/kernel/kvm.c
>> @@ -34,6 +34,7 @@
>>   #include <linux/sched.h>
>>   #include <linux/slab.h>
>>   #include <linux/kprobes.h>
>> +#include <linux/debugfs.h>
>>   #include <asm/timer.h>
>>   #include <asm/cpu.h>
>>   #include <asm/traps.h>
>> @@ -419,6 +420,7 @@ static void __init kvm_smp_prepare_boot_cpu(void)
>>   	WARN_ON(kvm_register_clock("primary cpu clock"));
>>   	kvm_guest_cpu_init();
>>   	native_smp_prepare_boot_cpu();
>> +	kvm_spinlock_init();
>>   }
>>
>>   static void __cpuinit kvm_guest_cpu_online(void *dummy)
>> @@ -523,3 +525,257 @@ static __init int activate_jump_labels(void)
>>   	return 0;
>>   }
>>   arch_initcall(activate_jump_labels);
>> +
>> +/* Kick a cpu by its apicid. Used to wake up a halted vcpu */
>> +void kvm_kick_cpu(int cpu)
>> +{
>> +	int apicid;
>> +
>> +	apicid = per_cpu(x86_cpu_to_apicid, cpu);
>> +	kvm_hypercall1(KVM_HC_KICK_CPU, apicid);
>> +}
>> +
>> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
>> +
>> +enum kvm_contention_stat {
>> +	TAKEN_SLOW,
>> +	TAKEN_SLOW_PICKUP,
>> +	RELEASED_SLOW,
>> +	RELEASED_SLOW_KICKED,
>> +	NR_CONTENTION_STATS
>> +};
>> +
>> +#ifdef CONFIG_KVM_DEBUG_FS
>> +#define HISTO_BUCKETS	30
>> +
>> +static struct kvm_spinlock_stats
>> +{
>> +	u32 contention_stats[NR_CONTENTION_STATS];
>> +	u32 histo_spin_blocked[HISTO_BUCKETS+1];
>> +	u64 time_blocked;
>> +} spinlock_stats;
>> +
>> +static u8 zero_stats;
>> +
>> +static inline void check_zero(void)
>> +{
>> +	u8 ret;
>> +	u8 old;
>> +
>> +	old = ACCESS_ONCE(zero_stats);
>> +	if (unlikely(old)) {
>> +		ret = cmpxchg(&zero_stats, old, 0);
>> +		/* This ensures only one fellow resets the stat */
>> +		if (ret == old)
>> +			memset(&spinlock_stats, 0, sizeof(spinlock_stats));
>> +	}
>> +}
>> +
>> +static inline void add_stats(enum kvm_contention_stat var, u32 val)
>> +{
>> +	check_zero();
>> +	spinlock_stats.contention_stats[var] += val;
>> +}
>> +
>> +
>> +static inline u64 spin_time_start(void)
>> +{
>> +	return sched_clock();
>> +}
>> +
>> +static void __spin_time_accum(u64 delta, u32 *array)
>> +{
>> +	unsigned index;
>> +
>> +	index = ilog2(delta);
>> +	check_zero();
>> +
>> +	if (index < HISTO_BUCKETS)
>> +		array[index]++;
>> +	else
>> +		array[HISTO_BUCKETS]++;
>> +}
>> +
>> +static inline void spin_time_accum_blocked(u64 start)
>> +{
>> +	u32 delta;
>> +
>> +	delta = sched_clock() - start;
>> +	__spin_time_accum(delta, spinlock_stats.histo_spin_blocked);
>> +	spinlock_stats.time_blocked += delta;
>> +}
>> +
>> +static struct dentry *d_spin_debug;
>> +static struct dentry *d_kvm_debug;
>> +
>> +struct dentry *kvm_init_debugfs(void)
>> +{
>> +	d_kvm_debug = debugfs_create_dir("kvm", NULL);
>> +	if (!d_kvm_debug)
>> +		printk(KERN_WARNING "Could not create 'kvm' debugfs directory\n");
>> +
>> +	return d_kvm_debug;
>> +}
>> +
>> +static int __init kvm_spinlock_debugfs(void)
>> +{
>> +	struct dentry *d_kvm;
>> +
>> +	d_kvm = kvm_init_debugfs();
>> +	if (d_kvm == NULL)
>> +		return -ENOMEM;
>> +
>> +	d_spin_debug = debugfs_create_dir("spinlocks", d_kvm);
>> +
>> +	debugfs_create_u8("zero_stats", 0644, d_spin_debug, &zero_stats);
>> +
>> +	debugfs_create_u32("taken_slow", 0444, d_spin_debug,
>> +		   &spinlock_stats.contention_stats[TAKEN_SLOW]);
>> +	debugfs_create_u32("taken_slow_pickup", 0444, d_spin_debug,
>> +		   &spinlock_stats.contention_stats[TAKEN_SLOW_PICKUP]);
>> +
>> +	debugfs_create_u32("released_slow", 0444, d_spin_debug,
>> +		   &spinlock_stats.contention_stats[RELEASED_SLOW]);
>> +	debugfs_create_u32("released_slow_kicked", 0444, d_spin_debug,
>> +		   &spinlock_stats.contention_stats[RELEASED_SLOW_KICKED]);
>> +
>> +	debugfs_create_u64("time_blocked", 0444, d_spin_debug,
>> +			   &spinlock_stats.time_blocked);
>> +
>> +	debugfs_create_u32_array("histo_blocked", 0444, d_spin_debug,
>> +		     spinlock_stats.histo_spin_blocked, HISTO_BUCKETS + 1);
>> +
>> +	return 0;
>> +}
>> +fs_initcall(kvm_spinlock_debugfs);
>> +#else  /* !CONFIG_KVM_DEBUG_FS */
>> +#define TIMEOUT			(1 << 10)
>
> What do you use that for?
>
>

Thanks Konrad for the review. Great eyes! Will remove this in the next patch.


>> +static inline void add_stats(enum kvm_contention_stat var, u32 val)
>> +{
>> +}
>> +
>> +static inline u64 spin_time_start(void)
>> +{
>> +	return 0;
>> +}
>> +
>> +static inline void spin_time_accum_blocked(u64 start)
>> +{
>> +}
>> +#endif  /* CONFIG_KVM_DEBUG_FS */
>> +
>> +struct kvm_lock_waiting {
>> +	struct arch_spinlock *lock;
>> +	__ticket_t want;
>> +};
>> +
>> +/* cpus 'waiting' on a spinlock to become available */
>> +static cpumask_t waiting_cpus;
>> +
>> +/* Track spinlock on which a cpu is waiting */
>> +static DEFINE_PER_CPU(struct kvm_lock_waiting, lock_waiting);
>> +
>> +static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>> +{
>> +	struct kvm_lock_waiting *w;
>> +	int cpu;
>> +	u64 start;
>> +	unsigned long flags;
>> +
>> +	w = &__get_cpu_var(lock_waiting);
>> +	cpu = smp_processor_id();
>> +	start = spin_time_start();
>> +
>> +	/*
>> +	 * Make sure an interrupt handler can't upset things in a
>> +	 * partially setup state.
>> +	 */
>> +	local_irq_save(flags);
>> +
>> +	/*
>> +	 * The ordering protocol on this is that the "lock" pointer
>> +	 * may only be set non-NULL if the "want" ticket is correct.
>> +	 * If we're updating "want", we must first clear "lock".
>> +	 */
>> +	w->lock = NULL;
>> +	smp_wmb();
>> +	w->want = want;
>> +	smp_wmb();
>> +	w->lock = lock;
>> +
>> +	add_stats(TAKEN_SLOW, 1);
>> +
>> +	/*
>> +	 * This uses set_bit, which is atomic but we should not rely on its
>> +	 * reordering guarantees. So a barrier is needed after this call.
>> +	 */
>> +	cpumask_set_cpu(cpu, &waiting_cpus);
>> +
>> +	barrier();
>> +
>> +	/*
>> +	 * Mark entry to slowpath before doing the pickup test to make
>> +	 * sure we don't deadlock with an unlocker.
>> +	 */
>> +	__ticket_enter_slowpath(lock);
>> +
>> +	/*
>> +	 * check again to make sure it didn't become free while
>> +	 * we weren't looking.
>> +	 */
>> +	if (ACCESS_ONCE(lock->tickets.head) == want) {
>> +		add_stats(TAKEN_SLOW_PICKUP, 1);
>> +		goto out;
>> +	}
>> +
>> +	/* Allow interrupts while blocked */
>> +	local_irq_restore(flags);
>> +
>> +	/* halt until it's our turn and kicked. */
>> +	halt();
>> +
>> +	local_irq_save(flags);
>> +out:
>> +	cpumask_clear_cpu(cpu, &waiting_cpus);
>> +	w->lock = NULL;
>> +	local_irq_restore(flags);
>> +	spin_time_accum_blocked(start);
>> +}
>> +PV_CALLEE_SAVE_REGS_THUNK(kvm_lock_spinning);
>> +
>> +/* Kick vcpu waiting on @lock->head to reach value @ticket */
>> +static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
>> +{
>> +	int cpu;
>> +
>> +	add_stats(RELEASED_SLOW, 1);
>> +	for_each_cpu(cpu, &waiting_cpus) {
>> +		const struct kvm_lock_waiting *w = &per_cpu(lock_waiting, cpu);
>> +		if (ACCESS_ONCE(w->lock) == lock &&
>> +		    ACCESS_ONCE(w->want) == ticket) {
>> +			add_stats(RELEASED_SLOW_KICKED, 1);
>> +			kvm_kick_cpu(cpu);
>> +			break;
>> +		}
>> +	}
>> +}
>> +
>> +/*
>> + * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
>> + */
>> +void __init kvm_spinlock_init(void)
>> +{
>> +	if (!kvm_para_available())
>> +		return;
>> +	/* Does host kernel support KVM_FEATURE_PV_UNHALT? */
>> +	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
>> +		return;
>> +
>> +	printk(KERN_INFO"KVM setup paravirtual spinlock\n");
>
> That spacing is odd.

Yes. Will modify in the next version.
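
(Presumably the fix is just a space after KERN_INFO, e.g.:

	printk(KERN_INFO "KVM setup paravirtual spinlock\n");

though the exact message wording is of course up to the author.)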

>
>> +
>> +	static_key_slow_inc(&paravirt_ticketlocks_enabled);
>> +
>> +	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
>> +	pv_lock_ops.unlock_kick = kvm_unlock_kick;
>> +}
>> +#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
>>
>
>


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 16/19] kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor
  2013-06-03 16:00     ` Konrad Rzeszutek Wilk
  (?)
  (?)
@ 2013-06-04  7:19     ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-04  7:19 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: jeremy, gregkh, kvm, linux-doc, peterz, drjones, virtualization,
	andi, hpa, xen-devel, x86, mingo, habanero, riel,
	stefano.stabellini, ouyang, avi.kivity, tglx, chegu_vinod,
	linux-kernel, srivatsa.vaddagiri, attilio.rao, pbonzini,
	torvalds

On 06/03/2013 09:30 PM, Konrad Rzeszutek Wilk wrote:
> On Sun, Jun 02, 2013 at 12:55:57AM +0530, Raghavendra K T wrote:
>> kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor
>>
>> From: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
>>
>> During smp_boot_cpus a paravirtualized KVM guest detects if the hypervisor has
>> the required feature (KVM_FEATURE_PV_UNHALT) to support pv-ticketlocks. If so,
>>   support for pv-ticketlocks is registered via pv_lock_ops.
>>
>> Use KVM_HC_KICK_CPU hypercall to wakeup waiting/halted vcpu.
>>
>> Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
>> Signed-off-by: Suzuki Poulose <suzuki@in.ibm.com>
>> [Raghu: check_zero race fix, enum for kvm_contention_stat
>> jumplabel related changes ]
>> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
>> ---
>>   arch/x86/include/asm/kvm_para.h |   14 ++
>>   arch/x86/kernel/kvm.c           |  256 +++++++++++++++++++++++++++++++++++++++
>>   2 files changed, 268 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
>> index 695399f..427afcb 100644
>> --- a/arch/x86/include/asm/kvm_para.h
>> +++ b/arch/x86/include/asm/kvm_para.h
>> @@ -118,10 +118,20 @@ void kvm_async_pf_task_wait(u32 token);
>>   void kvm_async_pf_task_wake(u32 token);
>>   u32 kvm_read_and_reset_pf_reason(void);
>>   extern void kvm_disable_steal_time(void);
>> -#else
>> -#define kvm_guest_init() do { } while (0)
>> +
>> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
>> +void __init kvm_spinlock_init(void);
>> +#else /* !CONFIG_PARAVIRT_SPINLOCKS */
>> +static inline void kvm_spinlock_init(void)
>> +{
>> +}
>> +#endif /* CONFIG_PARAVIRT_SPINLOCKS */
>> +
>> +#else /* CONFIG_KVM_GUEST */
>> +#define kvm_guest_init() do {} while (0)
>>   #define kvm_async_pf_task_wait(T) do {} while(0)
>>   #define kvm_async_pf_task_wake(T) do {} while(0)
>> +
>>   static inline u32 kvm_read_and_reset_pf_reason(void)
>>   {
>>   	return 0;
>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>> index cd6d9a5..2715b92 100644
>> --- a/arch/x86/kernel/kvm.c
>> +++ b/arch/x86/kernel/kvm.c
>> @@ -34,6 +34,7 @@
>>   #include <linux/sched.h>
>>   #include <linux/slab.h>
>>   #include <linux/kprobes.h>
>> +#include <linux/debugfs.h>
>>   #include <asm/timer.h>
>>   #include <asm/cpu.h>
>>   #include <asm/traps.h>
>> @@ -419,6 +420,7 @@ static void __init kvm_smp_prepare_boot_cpu(void)
>>   	WARN_ON(kvm_register_clock("primary cpu clock"));
>>   	kvm_guest_cpu_init();
>>   	native_smp_prepare_boot_cpu();
>> +	kvm_spinlock_init();
>>   }
>>
>>   static void __cpuinit kvm_guest_cpu_online(void *dummy)
>> @@ -523,3 +525,257 @@ static __init int activate_jump_labels(void)
>>   	return 0;
>>   }
>>   arch_initcall(activate_jump_labels);
>> +
>> +/* Kick a cpu by its apicid. Used to wake up a halted vcpu */
>> +void kvm_kick_cpu(int cpu)
>> +{
>> +	int apicid;
>> +
>> +	apicid = per_cpu(x86_cpu_to_apicid, cpu);
>> +	kvm_hypercall1(KVM_HC_KICK_CPU, apicid);
>> +}
>> +
>> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
>> +
>> +enum kvm_contention_stat {
>> +	TAKEN_SLOW,
>> +	TAKEN_SLOW_PICKUP,
>> +	RELEASED_SLOW,
>> +	RELEASED_SLOW_KICKED,
>> +	NR_CONTENTION_STATS
>> +};
>> +
>> +#ifdef CONFIG_KVM_DEBUG_FS
>> +#define HISTO_BUCKETS	30
>> +
>> +static struct kvm_spinlock_stats
>> +{
>> +	u32 contention_stats[NR_CONTENTION_STATS];
>> +	u32 histo_spin_blocked[HISTO_BUCKETS+1];
>> +	u64 time_blocked;
>> +} spinlock_stats;
>> +
>> +static u8 zero_stats;
>> +
>> +static inline void check_zero(void)
>> +{
>> +	u8 ret;
>> +	u8 old;
>> +
>> +	old = ACCESS_ONCE(zero_stats);
>> +	if (unlikely(old)) {
>> +		ret = cmpxchg(&zero_stats, old, 0);
>> +		/* This ensures only one fellow resets the stat */
>> +		if (ret == old)
>> +			memset(&spinlock_stats, 0, sizeof(spinlock_stats));
>> +	}
>> +}
>> +
>> +static inline void add_stats(enum kvm_contention_stat var, u32 val)
>> +{
>> +	check_zero();
>> +	spinlock_stats.contention_stats[var] += val;
>> +}
>> +
>> +
>> +static inline u64 spin_time_start(void)
>> +{
>> +	return sched_clock();
>> +}
>> +
>> +static void __spin_time_accum(u64 delta, u32 *array)
>> +{
>> +	unsigned index;
>> +
>> +	index = ilog2(delta);
>> +	check_zero();
>> +
>> +	if (index < HISTO_BUCKETS)
>> +		array[index]++;
>> +	else
>> +		array[HISTO_BUCKETS]++;
>> +}
>> +
>> +static inline void spin_time_accum_blocked(u64 start)
>> +{
>> +	u32 delta;
>> +
>> +	delta = sched_clock() - start;
>> +	__spin_time_accum(delta, spinlock_stats.histo_spin_blocked);
>> +	spinlock_stats.time_blocked += delta;
>> +}
>> +
>> +static struct dentry *d_spin_debug;
>> +static struct dentry *d_kvm_debug;
>> +
>> +struct dentry *kvm_init_debugfs(void)
>> +{
>> +	d_kvm_debug = debugfs_create_dir("kvm", NULL);
>> +	if (!d_kvm_debug)
>> +		printk(KERN_WARNING "Could not create 'kvm' debugfs directory\n");
>> +
>> +	return d_kvm_debug;
>> +}
>> +
>> +static int __init kvm_spinlock_debugfs(void)
>> +{
>> +	struct dentry *d_kvm;
>> +
>> +	d_kvm = kvm_init_debugfs();
>> +	if (d_kvm == NULL)
>> +		return -ENOMEM;
>> +
>> +	d_spin_debug = debugfs_create_dir("spinlocks", d_kvm);
>> +
>> +	debugfs_create_u8("zero_stats", 0644, d_spin_debug, &zero_stats);
>> +
>> +	debugfs_create_u32("taken_slow", 0444, d_spin_debug,
>> +		   &spinlock_stats.contention_stats[TAKEN_SLOW]);
>> +	debugfs_create_u32("taken_slow_pickup", 0444, d_spin_debug,
>> +		   &spinlock_stats.contention_stats[TAKEN_SLOW_PICKUP]);
>> +
>> +	debugfs_create_u32("released_slow", 0444, d_spin_debug,
>> +		   &spinlock_stats.contention_stats[RELEASED_SLOW]);
>> +	debugfs_create_u32("released_slow_kicked", 0444, d_spin_debug,
>> +		   &spinlock_stats.contention_stats[RELEASED_SLOW_KICKED]);
>> +
>> +	debugfs_create_u64("time_blocked", 0444, d_spin_debug,
>> +			   &spinlock_stats.time_blocked);
>> +
>> +	debugfs_create_u32_array("histo_blocked", 0444, d_spin_debug,
>> +		     spinlock_stats.histo_spin_blocked, HISTO_BUCKETS + 1);
>> +
>> +	return 0;
>> +}
>> +fs_initcall(kvm_spinlock_debugfs);
>> +#else  /* !CONFIG_KVM_DEBUG_FS */
>> +#define TIMEOUT			(1 << 10)
>
> What do you use that for?
>
>

Thanks Konrad for the review. Great eyes! .. will remove this in next patch.


>> +static inline void add_stats(enum kvm_contention_stat var, u32 val)
>> +{
>> +}
>> +
>> +static inline u64 spin_time_start(void)
>> +{
>> +	return 0;
>> +}
>> +
>> +static inline void spin_time_accum_blocked(u64 start)
>> +{
>> +}
>> +#endif  /* CONFIG_KVM_DEBUG_FS */
>> +
>> +struct kvm_lock_waiting {
>> +	struct arch_spinlock *lock;
>> +	__ticket_t want;
>> +};
>> +
>> +/* cpus 'waiting' on a spinlock to become available */
>> +static cpumask_t waiting_cpus;
>> +
>> +/* Track spinlock on which a cpu is waiting */
>> +static DEFINE_PER_CPU(struct kvm_lock_waiting, lock_waiting);
>> +
>> +static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>> +{
>> +	struct kvm_lock_waiting *w;
>> +	int cpu;
>> +	u64 start;
>> +	unsigned long flags;
>> +
>> +	w = &__get_cpu_var(lock_waiting);
>> +	cpu = smp_processor_id();
>> +	start = spin_time_start();
>> +
>> +	/*
>> +	 * Make sure an interrupt handler can't upset things in a
>> +	 * partially setup state.
>> +	 */
>> +	local_irq_save(flags);
>> +
>> +	/*
>> +	 * The ordering protocol on this is that the "lock" pointer
>> +	 * may only be set non-NULL if the "want" ticket is correct.
>> +	 * If we're updating "want", we must first clear "lock".
>> +	 */
>> +	w->lock = NULL;
>> +	smp_wmb();
>> +	w->want = want;
>> +	smp_wmb();
>> +	w->lock = lock;
>> +
>> +	add_stats(TAKEN_SLOW, 1);
>> +
>> +	/*
>> +	 * This uses set_bit, which is atomic but we should not rely on its
>> +	 * reordering gurantees. So barrier is needed after this call.
>> +	 */
>> +	cpumask_set_cpu(cpu, &waiting_cpus);
>> +
>> +	barrier();
>> +
>> +	/*
>> +	 * Mark entry to slowpath before doing the pickup test to make
>> +	 * sure we don't deadlock with an unlocker.
>> +	 */
>> +	__ticket_enter_slowpath(lock);
>> +
>> +	/*
>> +	 * check again make sure it didn't become free while
>> +	 * we weren't looking.
>> +	 */
>> +	if (ACCESS_ONCE(lock->tickets.head) == want) {
>> +		add_stats(TAKEN_SLOW_PICKUP, 1);
>> +		goto out;
>> +	}
>> +
>> +	/* Allow interrupts while blocked */
>> +	local_irq_restore(flags);
>> +
>> +	/* halt until it's our turn and kicked. */
>> +	halt();
>> +
>> +	local_irq_save(flags);
>> +out:
>> +	cpumask_clear_cpu(cpu, &waiting_cpus);
>> +	w->lock = NULL;
>> +	local_irq_restore(flags);
>> +	spin_time_accum_blocked(start);
>> +}
>> +PV_CALLEE_SAVE_REGS_THUNK(kvm_lock_spinning);
>> +
>> +/* Kick vcpu waiting on @lock->head to reach value @ticket */
>> +static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
>> +{
>> +	int cpu;
>> +
>> +	add_stats(RELEASED_SLOW, 1);
>> +	for_each_cpu(cpu, &waiting_cpus) {
>> +		const struct kvm_lock_waiting *w = &per_cpu(lock_waiting, cpu);
>> +		if (ACCESS_ONCE(w->lock) == lock &&
>> +		    ACCESS_ONCE(w->want) == ticket) {
>> +			add_stats(RELEASED_SLOW_KICKED, 1);
>> +			kvm_kick_cpu(cpu);
>> +			break;
>> +		}
>> +	}
>> +}
>> +
>> +/*
>> + * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
>> + */
>> +void __init kvm_spinlock_init(void)
>> +{
>> +	if (!kvm_para_available())
>> +		return;
>> +	/* Does host kernel support KVM_FEATURE_PV_UNHALT? */
>> +	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
>> +		return;
>> +
>> +	printk(KERN_INFO"KVM setup paravirtual spinlock\n");
>
> That spacing is odd.

Yes. Will modify in the next version.

>
>> +
>> +	static_key_slow_inc(&paravirt_ticketlocks_enabled);
>> +
>> +	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
>> +	pv_lock_ops.unlock_kick = kvm_unlock_kick;
>> +}
>> +#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
>>
>
>

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 5/19]  xen/pvticketlock: Xen implementation for PV ticket locks
  2013-06-03 16:03     ` Konrad Rzeszutek Wilk
@ 2013-06-04  7:21       ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-04  7:21 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: stefan.bader, gleb, mingo, jeremy, x86, hpa, pbonzini, linux-doc,
	habanero, xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On 06/03/2013 09:33 PM, Konrad Rzeszutek Wilk wrote:
> On Sat, Jun 01, 2013 at 12:23:14PM -0700, Raghavendra K T wrote:
>> xen/pvticketlock: Xen implementation for PV ticket locks
>>
>> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
>>
>> Replace the old Xen implementation of PV spinlocks with an implementation
>> of xen_lock_spinning and xen_unlock_kick.
>>
>> xen_lock_spinning simply registers the cpu in its entry in lock_waiting,
>> adds itself to the waiting_cpus set, and blocks on an event channel
>> until the channel becomes pending.
>>
>> xen_unlock_kick searches the cpus in waiting_cpus looking for the one
>> which next wants this lock with the next ticket, if any.  If found,
>> it kicks it by making its event channel pending, which wakes it up.
>>
>> We need to make sure interrupts are disabled while we're relying on the
>> contents of the per-cpu lock_waiting values, otherwise an interrupt
>> handler could come in, try to take some other lock, block, and overwrite
>> our values.
>>
>> Raghu: use function + enum instead of macro, cmpxchg for zero status reset
>>
>> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
>> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
>> ---
>>   arch/x86/xen/spinlock.c |  347 +++++++++++------------------------------------
>>   1 file changed, 78 insertions(+), 269 deletions(-)
>>
>> diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
>> index d6481a9..860e190 100644
>> --- a/arch/x86/xen/spinlock.c
>> +++ b/arch/x86/xen/spinlock.c
>> @@ -16,45 +16,44 @@
>>   #include "xen-ops.h"
>>   #include "debugfs.h"
>>
>> -#ifdef CONFIG_XEN_DEBUG_FS
>> -static struct xen_spinlock_stats
>> -{
>> -	u64 taken;
>> -	u32 taken_slow;
>> -	u32 taken_slow_nested;
>> -	u32 taken_slow_pickup;
>> -	u32 taken_slow_spurious;
>> -	u32 taken_slow_irqenable;
>> +enum xen_contention_stat {
>> +	TAKEN_SLOW,
>> +	TAKEN_SLOW_PICKUP,
>> +	TAKEN_SLOW_SPURIOUS,
>> +	RELEASED_SLOW,
>> +	RELEASED_SLOW_KICKED,
>> +	NR_CONTENTION_STATS
>> +};
>>
>> -	u64 released;
>> -	u32 released_slow;
>> -	u32 released_slow_kicked;
>>
>> +#ifdef CONFIG_XEN_DEBUG_FS
>>   #define HISTO_BUCKETS	30
>> -	u32 histo_spin_total[HISTO_BUCKETS+1];
>> -	u32 histo_spin_spinning[HISTO_BUCKETS+1];
>> +static struct xen_spinlock_stats
>> +{
>> +	u32 contention_stats[NR_CONTENTION_STATS];
>>   	u32 histo_spin_blocked[HISTO_BUCKETS+1];
>> -
>> -	u64 time_total;
>> -	u64 time_spinning;
>>   	u64 time_blocked;
>>   } spinlock_stats;
>>
>>   static u8 zero_stats;
>>
>> -static unsigned lock_timeout = 1 << 10;
>> -#define TIMEOUT lock_timeout
>> -
>>   static inline void check_zero(void)
>>   {
>> -	if (unlikely(zero_stats)) {
>> -		memset(&spinlock_stats, 0, sizeof(spinlock_stats));
>> -		zero_stats = 0;
>> +	u8 ret;
>> +	u8 old = ACCESS_ONCE(zero_stats);
>> +	if (unlikely(old)) {
>> +		ret = cmpxchg(&zero_stats, old, 0);
>> +		/* This ensures only one fellow resets the stat */
>> +		if (ret == old)
>> +			memset(&spinlock_stats, 0, sizeof(spinlock_stats));
>>   	}
>>   }
>>
>> -#define ADD_STATS(elem, val)			\
>> -	do { check_zero(); spinlock_stats.elem += (val); } while(0)
>> +static inline void add_stats(enum xen_contention_stat var, u32 val)
>> +{
>> +	check_zero();
>> +	spinlock_stats.contention_stats[var] += val;
>> +}
>>
>>   static inline u64 spin_time_start(void)
>>   {
>> @@ -73,22 +72,6 @@ static void __spin_time_accum(u64 delta, u32 *array)
>>   		array[HISTO_BUCKETS]++;
>>   }
>>
>> -static inline void spin_time_accum_spinning(u64 start)
>> -{
>> -	u32 delta = xen_clocksource_read() - start;
>> -
>> -	__spin_time_accum(delta, spinlock_stats.histo_spin_spinning);
>> -	spinlock_stats.time_spinning += delta;
>> -}
>> -
>> -static inline void spin_time_accum_total(u64 start)
>> -{
>> -	u32 delta = xen_clocksource_read() - start;
>> -
>> -	__spin_time_accum(delta, spinlock_stats.histo_spin_total);
>> -	spinlock_stats.time_total += delta;
>> -}
>> -
>>   static inline void spin_time_accum_blocked(u64 start)
>>   {
>>   	u32 delta = xen_clocksource_read() - start;
>> @@ -98,19 +81,15 @@ static inline void spin_time_accum_blocked(u64 start)
>>   }
>>   #else  /* !CONFIG_XEN_DEBUG_FS */
>>   #define TIMEOUT			(1 << 10)
>> -#define ADD_STATS(elem, val)	do { (void)(val); } while(0)
>> +static inline void add_stats(enum xen_contention_stat var, u32 val)
>> +{
>> +}
>>
>>   static inline u64 spin_time_start(void)
>>   {
>>   	return 0;
>>   }
>>
>> -static inline void spin_time_accum_total(u64 start)
>> -{
>> -}
>> -static inline void spin_time_accum_spinning(u64 start)
>> -{
>> -}
>>   static inline void spin_time_accum_blocked(u64 start)
>>   {
>>   }
>> @@ -133,229 +112,82 @@ typedef u16 xen_spinners_t;
>>   	asm(LOCK_PREFIX " decw %0" : "+m" ((xl)->spinners) : : "memory");
>>   #endif
>>
>> -struct xen_spinlock {
>> -	unsigned char lock;		/* 0 -> free; 1 -> locked */
>> -	xen_spinners_t spinners;	/* count of waiting cpus */
>> +struct xen_lock_waiting {
>> +	struct arch_spinlock *lock;
>> +	__ticket_t want;
>>   };
>>
>>   static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
>> +static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
>> +static cpumask_t waiting_cpus;
>>
>> -#if 0
>> -static int xen_spin_is_locked(struct arch_spinlock *lock)
>> -{
>> -	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
>> -
>> -	return xl->lock != 0;
>> -}
>> -
>> -static int xen_spin_is_contended(struct arch_spinlock *lock)
>> +static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>>   {
>> -	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
>> -
>> -	/* Not strictly true; this is only the count of contended
>> -	   lock-takers entering the slow path. */
>> -	return xl->spinners != 0;
>> -}
>> -
>> -static int xen_spin_trylock(struct arch_spinlock *lock)
>> -{
>> -	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
>> -	u8 old = 1;
>> -
>> -	asm("xchgb %b0,%1"
>> -	    : "+q" (old), "+m" (xl->lock) : : "memory");
>> -
>> -	return old == 0;
>> -}
>> -
>> -static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
>> -
>> -/*
>> - * Mark a cpu as interested in a lock.  Returns the CPU's previous
>> - * lock of interest, in case we got preempted by an interrupt.
>> - */
>> -static inline struct xen_spinlock *spinning_lock(struct xen_spinlock *xl)
>> -{
>> -	struct xen_spinlock *prev;
>> -
>> -	prev = __this_cpu_read(lock_spinners);
>> -	__this_cpu_write(lock_spinners, xl);
>> -
>> -	wmb();			/* set lock of interest before count */
>> -
>> -	inc_spinners(xl);
>> -
>> -	return prev;
>> -}
>> -
>> -/*
>> - * Mark a cpu as no longer interested in a lock.  Restores previous
>> - * lock of interest (NULL for none).
>> - */
>> -static inline void unspinning_lock(struct xen_spinlock *xl, struct xen_spinlock *prev)
>> -{
>> -	dec_spinners(xl);
>> -	wmb();			/* decrement count before restoring lock */
>> -	__this_cpu_write(lock_spinners, prev);
>> -}
>> -
>> -static noinline int xen_spin_lock_slow(struct arch_spinlock *lock, bool irq_enable)
>> -{
>> -	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
>> -	struct xen_spinlock *prev;
>>   	int irq = __this_cpu_read(lock_kicker_irq);
>> -	int ret;
>> +	struct xen_lock_waiting *w = &__get_cpu_var(lock_waiting);
>> +	int cpu = smp_processor_id();
>>   	u64 start;
>> +	unsigned long flags;
>>
>>   	/* If kicker interrupts not initialized yet, just spin */
>>   	if (irq == -1)
>> -		return 0;
>> +		return;
>>
>>   	start = spin_time_start();
>>
>> -	/* announce we're spinning */
>> -	prev = spinning_lock(xl);
>> -
>> -	ADD_STATS(taken_slow, 1);
>> -	ADD_STATS(taken_slow_nested, prev != NULL);
>> -
>> -	do {
>> -		unsigned long flags;
>> -
>> -		/* clear pending */
>> -		xen_clear_irq_pending(irq);
>> -
>> -		/* check again make sure it didn't become free while
>> -		   we weren't looking  */
>> -		ret = xen_spin_trylock(lock);
>> -		if (ret) {
>> -			ADD_STATS(taken_slow_pickup, 1);
>> -
>> -			/*
>> -			 * If we interrupted another spinlock while it
>> -			 * was blocking, make sure it doesn't block
>> -			 * without rechecking the lock.
>> -			 */
>> -			if (prev != NULL)
>> -				xen_set_irq_pending(irq);
>> -			goto out;
>> -		}
>> +	/*
>> +	 * Make sure an interrupt handler can't upset things in a
>> +	 * partially setup state.
>> +	 */
>> +	local_irq_save(flags);
>>
>> -		flags = arch_local_save_flags();
>> -		if (irq_enable) {
>> -			ADD_STATS(taken_slow_irqenable, 1);
>> -			raw_local_irq_enable();
>> -		}
>> +	w->want = want;
>> +	smp_wmb();
>> +	w->lock = lock;
>>
>> -		/*
>> -		 * Block until irq becomes pending.  If we're
>> -		 * interrupted at this point (after the trylock but
>> -		 * before entering the block), then the nested lock
>> -		 * handler guarantees that the irq will be left
>> -		 * pending if there's any chance the lock became free;
>> -		 * xen_poll_irq() returns immediately if the irq is
>> -		 * pending.
>> -		 */
>> -		xen_poll_irq(irq);
>> +	/* This uses set_bit, which is atomic and therefore a barrier */
>> +	cpumask_set_cpu(cpu, &waiting_cpus);
>> +	add_stats(TAKEN_SLOW, 1);
>>
>> -		raw_local_irq_restore(flags);
>> +	/* clear pending */
>> +	xen_clear_irq_pending(irq);
>>
>> -		ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
>> -	} while (!xen_test_irq_pending(irq)); /* check for spurious wakeups */
>> +	/* Only check lock once pending cleared */
>> +	barrier();
>>
>> +	/* check again make sure it didn't become free while
>> +	   we weren't looking  */
>> +	if (ACCESS_ONCE(lock->tickets.head) == want) {
>> +		add_stats(TAKEN_SLOW_PICKUP, 1);
>> +		goto out;
>> +	}
>> +	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
>> +	xen_poll_irq(irq);
>> +	add_stats(TAKEN_SLOW_SPURIOUS, !xen_test_irq_pending(irq));
>>   	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
>> -
>>   out:
>> -	unspinning_lock(xl, prev);
>> +	cpumask_clear_cpu(cpu, &waiting_cpus);
>> +	w->lock = NULL;
>> +	local_irq_restore(flags);
>>   	spin_time_accum_blocked(start);
>> -
>> -	return ret;
>>   }
>>
>> -static inline void __xen_spin_lock(struct arch_spinlock *lock, bool irq_enable)
>> -{
>> -	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
>> -	unsigned timeout;
>> -	u8 oldval;
>> -	u64 start_spin;
>> -
>> -	ADD_STATS(taken, 1);
>> -
>> -	start_spin = spin_time_start();
>> -
>> -	do {
>> -		u64 start_spin_fast = spin_time_start();
>> -
>> -		timeout = TIMEOUT;
>> -
>> -		asm("1: xchgb %1,%0\n"
>> -		    "   testb %1,%1\n"
>> -		    "   jz 3f\n"
>> -		    "2: rep;nop\n"
>> -		    "   cmpb $0,%0\n"
>> -		    "   je 1b\n"
>> -		    "   dec %2\n"
>> -		    "   jnz 2b\n"
>> -		    "3:\n"
>> -		    : "+m" (xl->lock), "=q" (oldval), "+r" (timeout)
>> -		    : "1" (1)
>> -		    : "memory");
>> -
>> -		spin_time_accum_spinning(start_spin_fast);
>> -
>> -	} while (unlikely(oldval != 0 &&
>> -			  (TIMEOUT == ~0 || !xen_spin_lock_slow(lock, irq_enable))));
>> -
>> -	spin_time_accum_total(start_spin);
>> -}
>> -
>> -static void xen_spin_lock(struct arch_spinlock *lock)
>> -{
>> -	__xen_spin_lock(lock, false);
>> -}
>> -
>> -static void xen_spin_lock_flags(struct arch_spinlock *lock, unsigned long flags)
>> -{
>> -	__xen_spin_lock(lock, !raw_irqs_disabled_flags(flags));
>> -}
>> -
>> -static noinline void xen_spin_unlock_slow(struct xen_spinlock *xl)
>> +static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
>>   {
>>   	int cpu;
>>
>> -	ADD_STATS(released_slow, 1);
>> +	add_stats(RELEASED_SLOW, 1);
>> +
>> +	for_each_cpu(cpu, &waiting_cpus) {
>> +		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
>>
>> -	for_each_online_cpu(cpu) {
>> -		/* XXX should mix up next cpu selection */
>> -		if (per_cpu(lock_spinners, cpu) == xl) {
>> -			ADD_STATS(released_slow_kicked, 1);
>> +		if (w->lock == lock && w->want == next) {
>> +			add_stats(RELEASED_SLOW_KICKED, 1);
>>   			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
>
> When this was initially implemented there was a "break" here. But
> 76eaca031f0af2bb303e405986f637811956a422 (xen: Send spinlock IPI to all waiters)
> fixed an issue and changed this to send an IPI to all of the CPUs. That
> means the 'break' was removed..
>
> With this implementation of spinlock, you know exactly which vCPU is holding it,
> so this code should introduce the 'break' back.
>

Thank you for spotting that. Agreed.
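
For reference, a minimal sketch of the kick loop with the 'break' restored
(based on the hunk quoted above; the actual respin may differ):

	for_each_cpu(cpu, &waiting_cpus) {
		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);

		if (w->lock == lock && w->want == next) {
			add_stats(RELEASED_SLOW_KICKED, 1);
			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
			/* only one waiter can hold the 'next' ticket */
			break;
		}
	}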




^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 18/19] Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
  2013-06-03 16:04     ` Konrad Rzeszutek Wilk
@ 2013-06-04  7:22       ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-04  7:22 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: gleb, mingo, jeremy, x86, hpa, pbonzini, linux-doc, habanero,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On 06/03/2013 09:34 PM, Konrad Rzeszutek Wilk wrote:
> On Sun, Jun 02, 2013 at 12:56:24AM +0530, Raghavendra K T wrote:
>> Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
>>
>> From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
>>
>> KVM_HC_KICK_CPU  hypercall added to wakeup halted vcpu in paravirtual spinlock
>> enabled guest.
>>
>> KVM_FEATURE_PV_UNHALT enables guest to check whether pv spinlock can be enabled
>> in guest.
>>
>> Thanks Vatsa for rewriting KVM_HC_KICK_CPU
>>
>> Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
>> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
>> ---
>>   Documentation/virtual/kvm/cpuid.txt      |    4 ++++
>>   Documentation/virtual/kvm/hypercalls.txt |   13 +++++++++++++
>>   2 files changed, 17 insertions(+)
>>
>> diff --git a/Documentation/virtual/kvm/cpuid.txt b/Documentation/virtual/kvm/cpuid.txt
>> index 83afe65..654f43c 100644
>> --- a/Documentation/virtual/kvm/cpuid.txt
>> +++ b/Documentation/virtual/kvm/cpuid.txt
>> @@ -43,6 +43,10 @@ KVM_FEATURE_CLOCKSOURCE2           ||     3 || kvmclock available at msrs
>>   KVM_FEATURE_ASYNC_PF               ||     4 || async pf can be enabled by
>>                                      ||       || writing to msr 0x4b564d02
>>   ------------------------------------------------------------------------------
>> +KVM_FEATURE_PV_UNHALT              ||     6 || guest checks this feature bit
>> +                                   ||       || before enabling paravirtualized
>> +                                   ||       || spinlock support.
>> +------------------------------------------------------------------------------
>>   KVM_FEATURE_CLOCKSOURCE_STABLE_BIT ||    24 || host will warn if no guest-side
>>                                      ||       || per-cpu warps are expected in
>>                                      ||       || kvmclock.
>> diff --git a/Documentation/virtual/kvm/hypercalls.txt b/Documentation/virtual/kvm/hypercalls.txt
>> index ea113b5..2a4da11 100644
>> --- a/Documentation/virtual/kvm/hypercalls.txt
>> +++ b/Documentation/virtual/kvm/hypercalls.txt
>> @@ -64,3 +64,16 @@ Purpose: To enable communication between the hypervisor and guest there is a
>>   shared page that contains parts of supervisor visible register state.
>>   The guest can map this shared page to access its supervisor register through
>>   memory using this hypercall.
>> +
>> +5. KVM_HC_KICK_CPU
>> +------------------------
>> +Architecture: x86
>> +Status: active
>> +Purpose: Hypercall used to wakeup a vcpu from HLT state
>> +Usage example : A vcpu of a paravirtualized guest that is busywaiting in guest
>> +kernel mode for an event to occur (ex: a spinlock to become available) can
>> +execute HLT instruction once it has busy-waited for more than a threshold
>> +time-interval. Execution of HLT instruction would cause the hypervisor to put
>> +the vcpu to sleep until occurrence of an appropriate event. Another vcpu of the
>> +same guest can wakeup the sleeping vcpu by issuing KVM_HC_KICK_CPU hypercall,
>> +specifying APIC ID of the vcpu to be wokenup.
>
> woken up.

Yep. :)
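
As a side note, the guest-side usage described in the hypercall text above
boils down to the helper already shown in patch 16/19, quoted here for
convenience:

	/* Kick a cpu by its apicid. Used to wake up a halted vcpu */
	void kvm_kick_cpu(int cpu)
	{
		int apicid;

		apicid = per_cpu(x86_cpu_to_apicid, cpu);
		kvm_hypercall1(KVM_HC_KICK_CPU, apicid);
	}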


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 19/19] kvm hypervisor: Add directed yield in vcpu block path
  2013-06-03 16:05     ` Konrad Rzeszutek Wilk
@ 2013-06-04  7:28       ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-04  7:28 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: gleb, mingo, jeremy, x86, hpa, pbonzini, linux-doc, habanero,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, riel, drjones,
	virtualization, srivatsa.vaddagiri

On 06/03/2013 09:35 PM, Konrad Rzeszutek Wilk wrote:
> On Sun, Jun 02, 2013 at 12:56:45AM +0530, Raghavendra K T wrote:
>> kvm hypervisor: Add directed yield in vcpu block path
>>
>> From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
>>
>> We use the improved PLE handler logic in vcpu block patch for
>> scheduling rather than plain schedule, so that we can make
>> intelligent decisions
>
> You are missing '.' there, and
>

Yep.

>>
>> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
>> ---
>>   arch/ia64/include/asm/kvm_host.h    |    5 +++++
>>   arch/powerpc/include/asm/kvm_host.h |    5 +++++
>>   arch/s390/include/asm/kvm_host.h    |    5 +++++
>>   arch/x86/include/asm/kvm_host.h     |    2 +-
>>   arch/x86/kvm/x86.c                  |    8 ++++++++
>>   include/linux/kvm_host.h            |    2 +-
>>   virt/kvm/kvm_main.c                 |    6 ++++--
>>   7 files changed, 29 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/ia64/include/asm/kvm_host.h b/arch/ia64/include/asm/kvm_host.h
>> index 989dd3f..999ab15 100644
>> --- a/arch/ia64/include/asm/kvm_host.h
>> +++ b/arch/ia64/include/asm/kvm_host.h
>> @@ -595,6 +595,11 @@ int kvm_emulate_halt(struct kvm_vcpu *vcpu);
>>   int kvm_pal_emul(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run);
>>   void kvm_sal_emul(struct kvm_vcpu *vcpu);
>>
>> +static inline void kvm_do_schedule(struct kvm_vcpu *vcpu)
>> +{
>> +	schedule();
>> +}
>> +
>>   #define __KVM_HAVE_ARCH_VM_ALLOC 1
>>   struct kvm *kvm_arch_alloc_vm(void);
>>   void kvm_arch_free_vm(struct kvm *kvm);
>> diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
>> index af326cd..1aeecc0 100644
>> --- a/arch/powerpc/include/asm/kvm_host.h
>> +++ b/arch/powerpc/include/asm/kvm_host.h
>> @@ -628,4 +628,9 @@ struct kvm_vcpu_arch {
>>   #define __KVM_HAVE_ARCH_WQP
>>   #define __KVM_HAVE_CREATE_DEVICE
>>
>> +static inline void kvm_do_schedule(struct kvm_vcpu *vcpu)
>> +{
>> +	schedule();
>> +}
>> +
>>   #endif /* __POWERPC_KVM_HOST_H__ */
>> diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
>> index 16bd5d1..db09a56 100644
>> --- a/arch/s390/include/asm/kvm_host.h
>> +++ b/arch/s390/include/asm/kvm_host.h
>> @@ -266,4 +266,9 @@ struct kvm_arch{
>>   };
>>
>>   extern int sie64a(struct kvm_s390_sie_block *, u64 *);
>> +static inline void kvm_do_schedule(struct kvm_vcpu *vcpu)
>> +{
>> +	schedule();
>> +}
>> +
>>   #endif
>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>> index 95702de..72ff791 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -1042,5 +1042,5 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
>>   int kvm_pmu_read_pmc(struct kvm_vcpu *vcpu, unsigned pmc, u64 *data);
>>   void kvm_handle_pmu_event(struct kvm_vcpu *vcpu);
>>   void kvm_deliver_pmi(struct kvm_vcpu *vcpu);
>> -
>> +void kvm_do_schedule(struct kvm_vcpu *vcpu);
>>   #endif /* _ASM_X86_KVM_HOST_H */
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index b963c86..d26c4be 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -7281,6 +7281,14 @@ bool kvm_arch_can_inject_async_page_present(struct kvm_vcpu *vcpu)
>>   			kvm_x86_ops->interrupt_allowed(vcpu);
>>   }
>>
>> +void kvm_do_schedule(struct kvm_vcpu *vcpu)
>> +{
>> +	/* We try to yield to a kikced vcpu else do a schedule */
>
> s/kikced/kicked/

:(.  Thanks .. will change that.

>
[...]


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 12/19]  xen: Enable PV ticketlocks on HVM Xen
  2013-06-04  7:16       ` Raghavendra K T
@ 2013-06-04 14:44         ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-06-04 14:44 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: gleb, mingo, jeremy, x86, hpa, pbonzini, linux-doc, habanero,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On Tue, Jun 04, 2013 at 12:46:53PM +0530, Raghavendra K T wrote:
> On 06/03/2013 09:27 PM, Konrad Rzeszutek Wilk wrote:
> >On Sun, Jun 02, 2013 at 12:55:03AM +0530, Raghavendra K T wrote:
> >>xen: Enable PV ticketlocks on HVM Xen
> >
> >There is more to it. You should also revert 70dd4998cb85f0ecd6ac892cc7232abefa432efb
> >
> 
> Yes, true. Do you expect the revert to be folded into this patch itself?
> 

I can do them. I would drop this patch and just mention in
the cover letter that Konrad would have to revert two git commits
to re-enable it on PVHVM.
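
(For reference, re-enabling it then amounts to something like:

	git revert 70dd4998cb85f0ecd6ac892cc7232abefa432efb

plus the second commit referred to above, whose id is not spelled out in this
sub-thread.)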


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 12/19]  xen: Enable PV ticketlocks on HVM Xen
  2013-06-04 14:44         ` Konrad Rzeszutek Wilk
  (?)
@ 2013-06-04 15:00         ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-04 15:00 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: gleb, mingo, jeremy, x86, hpa, pbonzini, linux-doc, habanero,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On 06/04/2013 08:14 PM, Konrad Rzeszutek Wilk wrote:
> On Tue, Jun 04, 2013 at 12:46:53PM +0530, Raghavendra K T wrote:
>> On 06/03/2013 09:27 PM, Konrad Rzeszutek Wilk wrote:
>>> On Sun, Jun 02, 2013 at 12:55:03AM +0530, Raghavendra K T wrote:
>>>> xen: Enable PV ticketlocks on HVM Xen
>>>
>>> There is more to it. You should also revert 70dd4998cb85f0ecd6ac892cc7232abefa432efb
>>>
>>
>> Yes, true. Do you expect the revert to be folded into this patch itself?
>>
>
> I can do them. I would drop this patch and just mention in
> the cover letter that Konrad would have to revert two git commits
> to re-enable it on PVHVM.
>

Thanks. Will do that.


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-03  6:21       ` Raghavendra K T
@ 2013-06-07  6:15           ` Raghavendra K T
  0 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-07  6:15 UTC (permalink / raw)
  To: Jiannan Ouyang, Gleb Natapov
  Cc: Ingo Molnar, Jeremy Fitzhardinge, x86, konrad.wilk,
	H. Peter Anvin, pbonzini, linux-doc, Andrew M. Theurer,
	xen-devel, Peter Zijlstra, Marcelo Tosatti, stefano.stabellini,
	andi, attilio.rao, gregkh, agraf, chegu vinod, torvalds,
	Avi Kivity, Thomas Gleixner, KVM, LKML, stephan.diestelhorst,
	Rik van Riel, Andrew Jones, virtualization, Srivatsa Vaddagiri

On 06/03/2013 11:51 AM, Raghavendra K T wrote:
> On 06/03/2013 07:10 AM, Raghavendra K T wrote:
>> On 06/02/2013 09:50 PM, Jiannan Ouyang wrote:
>>> On Sun, Jun 2, 2013 at 1:07 AM, Gleb Natapov <gleb@redhat.com> wrote:
>>>
>>>> High level question here. We have a big hope for "Preemptable Ticket
>>>> Spinlock" patch series by Jiannan Ouyang to solve most, if not all,
>>>> ticketing spinlocks in overcommit scenarios problem without need for
>>>> PV.
>>>> So how this patch series compares with his patches on PLE enabled
>>>> processors?
>>>>
>>>
>>> No experiment results yet.
>>>
>>> An error is reported on a 20 core VM. I'm in the middle of an internship
>>> relocation, and will start work on it next week.
>>
>> Preemptable spinlocks' testing update:
>> I hit the same softlockup problem while testing on 32 core machine with
>> 32 guest vcpus that Andrew had reported.
>>
>> After that I started tuning TIMEOUT_UNIT, and when I went up to (1<<8),
>> things seemed to be manageable for undercommit cases.
>> But I still see degradation for undercommit w.r.t baseline itself on 32
>> core machine (after tuning).
>>
>> (37.5% degradation w.r.t base line).
>> I can give the full report after the all tests complete.
>>
>> For over-commit cases, I again started hitting softlockups (and
>> degradation is worse). But as I said in the preemptable thread, the
>> concept of preemptable locks looks promising (though I am still not a
>> fan of  embedded TIMEOUT mechanism)
>>
>> Here is my opinion of TODOs for preemptable locks to make it better ( I
>> think I need to paste in the preemptable thread also)
>>
>> 1. Current TIMEOUT UNIT seem to be on higher side and also it does not
>> scale well with large guests and also overcommit. we need to have a
>> sort of adaptive mechanism and better is sort of different TIMEOUT_UNITS
>> for different types of lock too. The hashing mechanism that was used in
>> Rik's spinlock backoff series fits better probably.
>>
>> 2. I do not think TIMEOUT_UNIT itself would work great when we have a
>> big queue (for large guests / overcommits) for lock.
>> one way is to add a PV hook that does yield hypercall immediately for
>> the waiters above some THRESHOLD so that they don't burn the CPU.
>> ( I can do POC to check if  that idea works in improving situation
>> at some later point of time)
>>
>
> Preemptable-lock results from my run with 2^8 TIMEOUT:
>
> +-----------+-----------+-----------+------------+-----------+
>                   ebizzy (records/sec) higher is better
> +-----------+-----------+-----------+------------+-----------+
>      base        stdev        patched    stdev        %improvement
> +-----------+-----------+-----------+------------+-----------+
> 1x  5574.9000   237.4997    3484.2000   113.4449   -37.50202
> 2x  2741.5000   561.3090     351.5000   140.5420   -87.17855
> 3x  2146.2500   216.7718     194.8333    85.0303   -90.92215
> 4x  1663.0000   141.9235     101.0000    57.7853   -93.92664
> +-----------+-----------+-----------+------------+-----------+
> +-----------+-----------+-----------+------------+-----------+
>                 dbench  (Throughput) higher is better
> +-----------+-----------+-----------+------------+-----------+
>       base        stdev        patched    stdev        %improvement
> +-----------+-----------+-----------+------------+-----------+
> 1x  14111.5600   754.4525   3930.1602   2547.2369    -72.14936
> 2x  2481.6270    71.2665      181.1816    89.5368    -92.69908
> 3x  1510.2483    31.8634      104.7243    53.2470    -93.06576
> 4x  1029.4875    16.9166       72.3738    38.2432    -92.96992
> +-----------+-----------+-----------+------------+-----------+
>
> Note we can not trust on overcommit results because of softlock-ups
>

Hi, I tried:
(1) TIMEOUT=(2^7), and

(2) adding a yield hypercall that uses kvm_vcpu_on_spin() to do a directed
yield to other vCPUs (a quick sketch is below).
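
The host side of such a yield hypercall can be tiny; a minimal sketch
against kvm_emulate_hypercall() (KVM_HC_YIELD and the guest helper name
are made-up placeholders here, not something this series defines):

	/* arch/x86/kvm/x86.c, kvm_emulate_hypercall(): hypothetical extra case */
	case KVM_HC_YIELD:			/* hypothetical hypercall nr */
		/*
		 * Reuse the existing directed-yield logic to hand the rest of
		 * this vCPU's timeslice to another runnable vCPU of the guest.
		 */
		kvm_vcpu_on_spin(vcpu);
		ret = 0;
		break;

	/* guest side, called from the contended ticket-lock wait loop */
	static inline void pv_yield_to_lock_holder(void)
	{
		kvm_hypercall0(KVM_HC_YIELD);	/* hypothetical */
	}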

Now I do not see any soft-lockups in the overcommit cases, and the
results are better (except for ebizzy 1x). For dbench the results are now
closer to base, with even an improvement at 4x:

+-----------+-----------+-----------+------------+-----------+
                ebizzy (records/sec) higher is better
+-----------+-----------+-----------+------------+-----------+
      base        stdev      patched       stdev   %improvement
+-----------+-----------+-----------+------------+-----------+
1x  5574.9000   237.4997     523.7000      1.4181   -90.60611
2x  2741.5000   561.3090     597.8000     34.9755   -78.19442
3x  2146.2500   216.7718     902.6667     82.4228   -57.94215
4x  1663.0000   141.9235    1245.0000     67.2989   -25.13530
+-----------+-----------+-----------+------------+-----------+
+-----------+-----------+-----------+------------+-----------+
                 dbench  (Throughput) higher is better
+-----------+-----------+-----------+------------+-----------+
      base        stdev      patched       stdev   %improvement
+-----------+-----------+-----------+------------+-----------+
1x 14111.5600   754.4525     884.9051     24.4723   -93.72922
2x  2481.6270    71.2665    2383.5700    333.2435    -3.95132
3x  1510.2483    31.8634    1477.7358     50.5126    -2.15279
4x  1029.4875    16.9166    1075.9225     13.9911     4.51050
+-----------+-----------+-----------+------------+-----------+


IMO a hash-based timeout (along the lines of Rik's spinlock backoff
series; sketched below) is worth trying further.
I think a little more tuning will get better results.
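
The hashing idea boils down to a small per-CPU table of per-lock delay
values indexed by the lock address; a rough sketch of that shape (names,
constants and the update policy are illustrative only, not Rik's actual
patch):

	#include <linux/hash.h>
	#include <linux/percpu.h>

	#define DELAY_HASH_BITS	6
	#define MIN_SPIN_DELAY	(1u << 4)
	#define MAX_SPIN_DELAY	(1u << 14)

	/* one small delay table per CPU, one slot per hashed lock address */
	static DEFINE_PER_CPU(u32 [1 << DELAY_HASH_BITS], lock_delay);

	static u32 *delay_slot(arch_spinlock_t *lock)
	{
		return &__get_cpu_var(lock_delay)[hash_ptr(lock, DELAY_HASH_BITS)];
	}

	/*
	 * The wait loop would spin *delay_slot(lock) iterations between reads
	 * of the ticket head, growing the slot (capped at MAX_SPIN_DELAY)
	 * while the lock stays contended and shrinking it towards
	 * MIN_SPIN_DELAY when it is taken quickly, so each lock converges on
	 * its own timeout instead of one global TIMEOUT_UNIT.
	 */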

Jiannan, when you start working on this, I can also help to get the best
out of the preemptable-lock idea if you wish, and share the patches I
tried.




^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-07  6:15           ` Raghavendra K T
@ 2013-06-07 13:29           ` Andrew Theurer
  -1 siblings, 0 replies; 192+ messages in thread
From: Andrew Theurer @ 2013-06-07 13:29 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: Jiannan Ouyang, Gleb Natapov, Ingo Molnar, Jeremy Fitzhardinge,
	x86, konrad.wilk, H. Peter Anvin, pbonzini, linux-doc, xen-devel,
	Peter Zijlstra, Marcelo Tosatti, stefano.stabellini, andi,
	attilio.rao, gregkh, agraf, chegu vinod, torvalds, Avi Kivity,
	Thomas Gleixner, KVM, LKML, stephan.diestelhorst, Rik van Riel,
	Andrew Jones, virtualization, Srivatsa Vaddagiri

On Fri, 2013-06-07 at 11:45 +0530, Raghavendra K T wrote:
> On 06/03/2013 11:51 AM, Raghavendra K T wrote:
> > On 06/03/2013 07:10 AM, Raghavendra K T wrote:
> >> On 06/02/2013 09:50 PM, Jiannan Ouyang wrote:
> >>> On Sun, Jun 2, 2013 at 1:07 AM, Gleb Natapov <gleb@redhat.com> wrote:
> >>>
> >>>> High level question here. We have a big hope for "Preemptable Ticket
> >>>> Spinlock" patch series by Jiannan Ouyang to solve most, if not all,
> >>>> ticketing spinlocks in overcommit scenarios problem without need for
> >>>> PV.
> >>>> So how this patch series compares with his patches on PLE enabled
> >>>> processors?
> >>>>
> >>>
> >>> No experiment results yet.
> >>>
> >>> An error is reported on a 20 core VM. I'm during an internship
> >>> relocation, and will start work on it next week.
> >>
> >> Preemptable spinlocks' testing update:
> >> I hit the same softlockup problem while testing on 32 core machine with
> >> 32 guest vcpus that Andrew had reported.
> >>
> >> After that i started tuning TIMEOUT_UNIT, and when I went till (1<<8),
> >> things seemed to be manageable for undercommit cases.
> >> But I still see degradation for undercommit w.r.t baseline itself on 32
> >> core machine (after tuning).
> >>
> >> (37.5% degradation w.r.t base line).
> >> I can give the full report after the all tests complete.
> >>
> >> For over-commit cases, I again started hitting softlockups (and
> >> degradation is worse). But as I said in the preemptable thread, the
> >> concept of preemptable locks looks promising (though I am still not a
> >> fan of  embedded TIMEOUT mechanism)
> >>
> >> Here is my opinion of TODOs for preemptable locks to make it better ( I
> >> think I need to paste in the preemptable thread also)
> >>
> >> 1. Current TIMEOUT UNIT seem to be on higher side and also it does not
> >> scale well with large guests and also overcommit. we need to have a
> >> sort of adaptive mechanism and better is sort of different TIMEOUT_UNITS
> >> for different types of lock too. The hashing mechanism that was used in
> >> Rik's spinlock backoff series fits better probably.
> >>
> >> 2. I do not think TIMEOUT_UNIT itself would work great when we have a
> >> big queue (for large guests / overcommits) for lock.
> >> one way is to add a PV hook that does yield hypercall immediately for
> >> the waiters above some THRESHOLD so that they don't burn the CPU.
> >> ( I can do POC to check if  that idea works in improving situation
> >> at some later point of time)
> >>
> >
> > Preemptable-lock results from my run with 2^8 TIMEOUT:
> >
> > +-----------+-----------+-----------+------------+-----------+
> >                   ebizzy (records/sec) higher is better
> > +-----------+-----------+-----------+------------+-----------+
> >      base        stdev        patched    stdev        %improvement
> > +-----------+-----------+-----------+------------+-----------+
> > 1x  5574.9000   237.4997    3484.2000   113.4449   -37.50202
> > 2x  2741.5000   561.3090     351.5000   140.5420   -87.17855
> > 3x  2146.2500   216.7718     194.8333    85.0303   -90.92215
> > 4x  1663.0000   141.9235     101.0000    57.7853   -93.92664
> > +-----------+-----------+-----------+------------+-----------+
> > +-----------+-----------+-----------+------------+-----------+
> >                 dbench  (Throughput) higher is better
> > +-----------+-----------+-----------+------------+-----------+
> >       base        stdev        patched    stdev        %improvement
> > +-----------+-----------+-----------+------------+-----------+
> > 1x  14111.5600   754.4525   3930.1602   2547.2369    -72.14936
> > 2x  2481.6270    71.2665      181.1816    89.5368    -92.69908
> > 3x  1510.2483    31.8634      104.7243    53.2470    -93.06576
> > 4x  1029.4875    16.9166       72.3738    38.2432    -92.96992
> > +-----------+-----------+-----------+------------+-----------+
> >
> > Note we can not trust on overcommit results because of softlock-ups
> >
> 
> Hi, I tried
> (1) TIMEOUT=(2^7)
> 
> (2) having yield hypercall that uses kvm_vcpu_on_spin() to do directed 
> yield to other vCPUs.
> 
> Now I do not see any soft-lockup in overcommit cases and results are 
> better now (except ebizzy 1x). and for dbench I see now it is closer to 
> base and even improvement in 4x
> 
> +-----------+-----------+-----------+------------+-----------+
>                 ebizzy (records/sec) higher is better
> +-----------+-----------+-----------+------------+-----------+
>    base        stdev        patched    stdev        %improvement
> +-----------+-----------+-----------+------------+-----------+
>    5574.9000   237.4997     523.7000     1.4181   -90.60611
>    2741.5000   561.3090     597.8000    34.9755   -78.19442
>    2146.2500   216.7718     902.6667    82.4228   -57.94215
>    1663.0000   141.9235    1245.0000    67.2989   -25.13530
> +-----------+-----------+-----------+------------+-----------+
> +-----------+-----------+-----------+------------+-----------+
>                  dbench  (Throughput) higher is better
> +-----------+-----------+-----------+------------+-----------+
>     base        stdev        patched    stdev        %improvement
> +-----------+-----------+-----------+------------+-----------+
>   14111.5600   754.4525     884.9051    24.4723   -93.72922
>    2481.6270    71.2665    2383.5700   333.2435    -3.95132
>    1510.2483    31.8634    1477.7358    50.5126    -2.15279
>    1029.4875    16.9166    1075.9225    13.9911     4.51050
> +-----------+-----------+-----------+------------+-----------+
> 
> 
> IMO hash based timeout is worth a try further.
> I think little more tuning will get more better results.

The problem I see (especially for dbench) is that we are still way off
what I would consider the goal.  IMO, the 2x over-commit result should be
only a bit lower than 50% of the 1x throughput (to account for switching
overhead and less cache warmth).  We are at about 17.5% for 2x.  I am
thinking we need a completely different approach to get there, but of
course I do not know what that is yet :)
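
[For reference, using the dbench numbers quoted earlier in the thread:
 the 2x base of 2481.6 is 2481.6 / 14111.6 ~= 17.6% of the 1x base,
 which appears to be where the ~17.5% figure comes from.]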

I am testing your patches now and hopefully with some analysis data we
can better understand what's going on.
> 
> Jiannan, When you start working on this, I can also help
> to get best of preemptable lock idea if you wish and share
> the patches I tried.

-Andrew Theurer


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-07  6:15           ` Raghavendra K T
@ 2013-06-07 23:41             ` Jiannan Ouyang
  -1 siblings, 0 replies; 192+ messages in thread
From: Jiannan Ouyang @ 2013-06-07 23:41 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: Jiannan Ouyang, Gleb Natapov, Ingo Molnar, Jeremy Fitzhardinge,
	x86, konrad.wilk, H. Peter Anvin, pbonzini, linux-doc,
	Andrew M. Theurer, xen-devel, Peter Zijlstra, Marcelo Tosatti,
	Stefano Stabellini, andi, attilio.rao, gregkh, Alexander Graf,
	chegu vinod, torvalds, Avi Kivity, Thomas Gleixner, KVM, LKML,
	stephan.diestelhorst, Rik van Riel, Andrew Jones, virtualization,
	Srivatsa Vaddagiri

Raghu, thanks for your input. I'm more than glad to work together with
you to make this idea work better.

-Jiannan

On Thu, Jun 6, 2013 at 11:15 PM, Raghavendra K T
<raghavendra.kt@linux.vnet.ibm.com> wrote:
> On 06/03/2013 11:51 AM, Raghavendra K T wrote:
>>
>> On 06/03/2013 07:10 AM, Raghavendra K T wrote:
>>>
>>> On 06/02/2013 09:50 PM, Jiannan Ouyang wrote:
>>>>
>>>> On Sun, Jun 2, 2013 at 1:07 AM, Gleb Natapov <gleb@redhat.com> wrote:
>>>>
>>>>> High level question here. We have a big hope for "Preemptable Ticket
>>>>> Spinlock" patch series by Jiannan Ouyang to solve most, if not all,
>>>>> ticketing spinlocks in overcommit scenarios problem without need for
>>>>> PV.
>>>>> So how this patch series compares with his patches on PLE enabled
>>>>> processors?
>>>>>
>>>>
>>>> No experiment results yet.
>>>>
>>>> An error is reported on a 20 core VM. I'm during an internship
>>>> relocation, and will start work on it next week.
>>>
>>>
>>> Preemptable spinlocks' testing update:
>>> I hit the same softlockup problem while testing on 32 core machine with
>>> 32 guest vcpus that Andrew had reported.
>>>
>>> After that i started tuning TIMEOUT_UNIT, and when I went till (1<<8),
>>> things seemed to be manageable for undercommit cases.
>>> But I still see degradation for undercommit w.r.t baseline itself on 32
>>> core machine (after tuning).
>>>
>>> (37.5% degradation w.r.t base line).
>>> I can give the full report after the all tests complete.
>>>
>>> For over-commit cases, I again started hitting softlockups (and
>>> degradation is worse). But as I said in the preemptable thread, the
>>> concept of preemptable locks looks promising (though I am still not a
>>> fan of  embedded TIMEOUT mechanism)
>>>
>>> Here is my opinion of TODOs for preemptable locks to make it better ( I
>>> think I need to paste in the preemptable thread also)
>>>
>>> 1. Current TIMEOUT UNIT seem to be on higher side and also it does not
>>> scale well with large guests and also overcommit. we need to have a
>>> sort of adaptive mechanism and better is sort of different TIMEOUT_UNITS
>>> for different types of lock too. The hashing mechanism that was used in
>>> Rik's spinlock backoff series fits better probably.
>>>
>>> 2. I do not think TIMEOUT_UNIT itself would work great when we have a
>>> big queue (for large guests / overcommits) for lock.
>>> one way is to add a PV hook that does yield hypercall immediately for
>>> the waiters above some THRESHOLD so that they don't burn the CPU.
>>> ( I can do POC to check if  that idea works in improving situation
>>> at some later point of time)
>>>
>>
>> Preemptable-lock results from my run with 2^8 TIMEOUT:
>>
>> +-----------+-----------+-----------+------------+-----------+
>>                   ebizzy (records/sec) higher is better
>> +-----------+-----------+-----------+------------+-----------+
>>      base        stdev        patched    stdev        %improvement
>> +-----------+-----------+-----------+------------+-----------+
>> 1x  5574.9000   237.4997    3484.2000   113.4449   -37.50202
>> 2x  2741.5000   561.3090     351.5000   140.5420   -87.17855
>> 3x  2146.2500   216.7718     194.8333    85.0303   -90.92215
>> 4x  1663.0000   141.9235     101.0000    57.7853   -93.92664
>> +-----------+-----------+-----------+------------+-----------+
>> +-----------+-----------+-----------+------------+-----------+
>>                 dbench  (Throughput) higher is better
>> +-----------+-----------+-----------+------------+-----------+
>>       base        stdev        patched    stdev        %improvement
>> +-----------+-----------+-----------+------------+-----------+
>> 1x  14111.5600   754.4525   3930.1602   2547.2369    -72.14936
>> 2x  2481.6270    71.2665      181.1816    89.5368    -92.69908
>> 3x  1510.2483    31.8634      104.7243    53.2470    -93.06576
>> 4x  1029.4875    16.9166       72.3738    38.2432    -92.96992
>> +-----------+-----------+-----------+------------+-----------+
>>
>> Note we can not trust on overcommit results because of softlock-ups
>>
>
> Hi, I tried
> (1) TIMEOUT=(2^7)
>
> (2) having yield hypercall that uses kvm_vcpu_on_spin() to do directed yield
> to other vCPUs.
>
> Now I do not see any soft-lockup in overcommit cases and results are better
> now (except ebizzy 1x). and for dbench I see now it is closer to base and
> even improvement in 4x
>
>
> +-----------+-----------+-----------+------------+-----------+
>                ebizzy (records/sec) higher is better
> +-----------+-----------+-----------+------------+-----------+
>   base        stdev        patched    stdev        %improvement
> +-----------+-----------+-----------+------------+-----------+
>   5574.9000   237.4997     523.7000     1.4181   -90.60611
>   2741.5000   561.3090     597.8000    34.9755   -78.19442
>   2146.2500   216.7718     902.6667    82.4228   -57.94215
>   1663.0000   141.9235    1245.0000    67.2989   -25.13530
>
> +-----------+-----------+-----------+------------+-----------+
> +-----------+-----------+-----------+------------+-----------+
>                 dbench  (Throughput) higher is better
> +-----------+-----------+-----------+------------+-----------+
>    base        stdev        patched    stdev        %improvement
> +-----------+-----------+-----------+------------+-----------+
>  14111.5600   754.4525     884.9051    24.4723   -93.72922
>   2481.6270    71.2665    2383.5700   333.2435    -3.95132
>   1510.2483    31.8634    1477.7358    50.5126    -2.15279
>   1029.4875    16.9166    1075.9225    13.9911     4.51050
> +-----------+-----------+-----------+------------+-----------+
>
>
> IMO hash based timeout is worth a try further.
> I think little more tuning will get more better results.
>
> Jiannan, When you start working on this, I can also help
> to get best of preemptable lock idea if you wish and share
> the patches I tried.
>
>
>

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-01 19:21 ` Raghavendra K T
                   ` (32 preceding siblings ...)
@ 2013-06-25 14:50 ` Andrew Theurer
  2013-06-26  8:45     ` Raghavendra K T
  -1 siblings, 1 reply; 192+ messages in thread
From: Andrew Theurer @ 2013-06-25 14:50 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini, linux-doc,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
> This series replaces the existing paravirtualized spinlock mechanism
> with a paravirtualized ticketlock mechanism. The series provides
> implementation for both Xen and KVM.
> 
> Changes in V9:
> - Changed spin_threshold to 32k to avoid excess halt exits that are
>    causing undercommit degradation (after PLE handler improvement).
> - Added  kvm_irq_delivery_to_apic (suggested by Gleb)
> - Optimized halt exit path to use PLE handler
> 
> V8 of PVspinlock was posted last year. After Avi's suggestions to look
> at PLE handler's improvements, various optimizations in PLE handling
> have been tried.

Sorry for not posting this sooner.  I have tested the v9 pv-ticketlock
patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs.  I have
tested these patches with and without PLE, as PLE is still not scalable
with large VMs.
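
[How PLE was toggled is not spelled out here; presumably "ple_off" means
 loading kvm_intel with ple_gap=0, which keeps pause-loop exiting from
 being enabled for these VMs.]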

System: x3850X5, 40 cores, 80 threads


1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
----------------------------------------------------------
						Total
Configuration				Throughput(MB/s)	Notes

3.10-default-ple_on			22945			5% CPU in host kernel, 2% spin_lock in guests
3.10-default-ple_off			23184			5% CPU in host kernel, 2% spin_lock in guests
3.10-pvticket-ple_on			22895			5% CPU in host kernel, 2% spin_lock in guests
3.10-pvticket-ple_off			23051			5% CPU in host kernel, 2% spin_lock in guests
[all 1x results look good here]


2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
-----------------------------------------------------------
						Total
Configuration				Throughput		Notes

3.10-default-ple_on			 6287			55% CPU  host kernel, 17% spin_lock in guests
3.10-default-ple_off			 1849			2% CPU in host kernel, 95% spin_lock in guests
3.10-pvticket-ple_on			 6691			50% CPU in host kernel, 15% spin_lock in guests
3.10-pvticket-ple_off			16464			8% CPU in host kernel, 33% spin_lock in guests
[PLE hinders pv-ticket improvements, but even with PLE off,
 we are still off from the ideal throughput (somewhere >20000)]


1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
----------------------------------------------------------
						Total
Configuration				Throughput		Notes

3.10-default-ple_on			22736			6% CPU in host kernel, 3% spin_lock in guests
3.10-default-ple_off			23377			5% CPU in host kernel, 3% spin_lock in guests
3.10-pvticket-ple_on			22471			6% CPU in host kernel, 3% spin_lock in guests
3.10-pvticket-ple_off			23445			5% CPU in host kernel, 3% spin_lock in guests
[1x looking fine here]


2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
----------------------------------------------------------
						Total
Configuration				Throughput		Notes

3.10-default-ple_on			 1965			70% CPU in host kernel, 34% spin_lock in guests		
3.10-default-ple_off			  226			2% CPU in host kernel, 94% spin_lock in guests
3.10-pvticket-ple_on			 1942			70% CPU in host kernel, 35% spin_lock in guests
3.10-pvticket-ple_off			 8003			11% CPU in host kernel, 70% spin_lock in guests
[quite bad all around, but pv-tickets with PLE off is the best so far.
 Still quite a bit off from the ideal throughput]

In summary, I would state that the pv-ticket is an overall win, but the
current PLE handler tends to "get in the way" on these larger guests.

-Andrew


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-25 14:50 ` Andrew Theurer
@ 2013-06-26  8:45     ` Raghavendra K T
  0 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-26  8:45 UTC (permalink / raw)
  To: habanero
  Cc: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini, linux-doc,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri

On 06/25/2013 08:20 PM, Andrew Theurer wrote:
> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>> This series replaces the existing paravirtualized spinlock mechanism
>> with a paravirtualized ticketlock mechanism. The series provides
>> implementation for both Xen and KVM.
>>
>> Changes in V9:
>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>     causing undercommit degradation (after PLE handler improvement).
>> - Added  kvm_irq_delivery_to_apic (suggested by Gleb)
>> - Optimized halt exit path to use PLE handler
>>
>> V8 of PVspinlock was posted last year. After Avi's suggestions to look
>> at PLE handler's improvements, various optimizations in PLE handling
>> have been tried.
>
> Sorry for not posting this sooner.  I have tested the v9 pv-ticketlock
> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs.  I have
> tested these patches with and without PLE, as PLE is still not scalable
> with large VMs.
>

Hi Andrew,

Thanks for testing.

> System: x3850X5, 40 cores, 80 threads
>
>
> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
> ----------------------------------------------------------
> 						Total
> Configuration				Throughput(MB/s)	Notes
>
> 3.10-default-ple_on			22945			5% CPU in host kernel, 2% spin_lock in guests
> 3.10-default-ple_off			23184			5% CPU in host kernel, 2% spin_lock in guests
> 3.10-pvticket-ple_on			22895			5% CPU in host kernel, 2% spin_lock in guests
> 3.10-pvticket-ple_off			23051			5% CPU in host kernel, 2% spin_lock in guests
> [all 1x results look good here]

Yes, the 1x results all look very close to each other.

>
>
> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
> -----------------------------------------------------------
> 						Total
> Configuration				Throughput		Notes
>
> 3.10-default-ple_on			 6287			55% CPU  host kernel, 17% spin_lock in guests
> 3.10-default-ple_off			 1849			2% CPU in host kernel, 95% spin_lock in guests
> 3.10-pvticket-ple_on			 6691			50% CPU in host kernel, 15% spin_lock in guests
> 3.10-pvticket-ple_off			16464			8% CPU in host kernel, 33% spin_lock in guests

I see a 6.426% improvement with ple_on
and a 161.87% improvement with ple_off. I think this is a very good sign
for the patches.
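
[Working back from the table: 6691 vs. 6287 is +6.43% and 16464 vs. 6287
 is +161.87%, so both percentages appear to be measured against the
 3.10-default-ple_on row.]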

> [PLE hinders pv-ticket improvements, but even with PLE off,
>   we still off from ideal throughput (somewhere >20000)]
>

Okay, so the ideal throughput you are referring to is getting at least
around 80% of the 1x throughput in the over-commit case. Yes, we are
still far away from that.

>
> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
> ----------------------------------------------------------
> 						Total
> Configuration				Throughput		Notes
>
> 3.10-default-ple_on			22736			6% CPU in host kernel, 3% spin_lock in guests
> 3.10-default-ple_off			23377			5% CPU in host kernel, 3% spin_lock in guests
> 3.10-pvticket-ple_on			22471			6% CPU in host kernel, 3% spin_lock in guests
> 3.10-pvticket-ple_off			23445			5% CPU in host kernel, 3% spin_lock in guests
> [1x looking fine here]
>

I see ple_off is a little better here.

>
> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
> ----------------------------------------------------------
> 						Total
> Configuration				Throughput		Notes
>
> 3.10-default-ple_on			 1965			70% CPU in host kernel, 34% spin_lock in guests		
> 3.10-default-ple_off			  226			2% CPU in host kernel, 94% spin_lock in guests
> 3.10-pvticket-ple_on			 1942			70% CPU in host kernel, 35% spin_lock in guests
> 3.10-pvticket-ple_off			 8003			11% CPU in host kernel, 70% spin_lock in guests
> [quite bad all around, but pv-tickets with PLE off the best so far.
>   Still quite a bit off from ideal throughput]

This is again a remarkable improvement (307%, i.e. 8003 vs. 1965 from the
default-ple_on row). This motivates me to add a patch that disables PLE
when pvspinlock is in use; probably we can add a hypercall that disables
PLE in the KVM guest init path (a rough sketch follows below). The only
problem I see is what happens if the guests are mixed (i.e. one guest has
pvspinlock support but another does not, while the host supports PV).

/me thinks
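
For concreteness, the guest-to-host notification could be as small as the
sketch below (the hypercall name/number and the per-VM flag are made-up
placeholders; the real work is making the PLE setup consult the flag, and
deciding what to do when only some guests advertise PV spinlocks):

	/* arch/x86/kvm/x86.c, kvm_emulate_hypercall(): hypothetical extra case */
	case KVM_HC_PV_LOCKS_ENABLED:			/* hypothetical hypercall nr */
		/* guest says it uses pv-ticketlocks; remember it per-VM */
		vcpu->kvm->arch.pv_locks_enabled = true;	/* hypothetical field */
		ret = 0;
		break;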

>
> In summary, I would state that the pv-ticket is an overall win, but the
> current PLE handler tends to "get in the way" on these larger guests.
>
> -Andrew
>


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-26  8:45     ` Raghavendra K T
@ 2013-06-26 11:37       ` Andrew Jones
  -1 siblings, 0 replies; 192+ messages in thread
From: Andrew Jones @ 2013-06-26 11:37 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: habanero, gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini,
	linux-doc, xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	virtualization, srivatsa.vaddagiri

On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
> >On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
> >>This series replaces the existing paravirtualized spinlock mechanism
> >>with a paravirtualized ticketlock mechanism. The series provides
> >>implementation for both Xen and KVM.
> >>
> >>Changes in V9:
> >>- Changed spin_threshold to 32k to avoid excess halt exits that are
> >>    causing undercommit degradation (after PLE handler improvement).
> >>- Added  kvm_irq_delivery_to_apic (suggested by Gleb)
> >>- Optimized halt exit path to use PLE handler
> >>
> >>V8 of PVspinlock was posted last year. After Avi's suggestions to look
> >>at PLE handler's improvements, various optimizations in PLE handling
> >>have been tried.
> >
> >Sorry for not posting this sooner.  I have tested the v9 pv-ticketlock
> >patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs.  I have
> >tested these patches with and without PLE, as PLE is still not scalable
> >with large VMs.
> >
> 
> Hi Andrew,
> 
> Thanks for testing.
> 
> >System: x3850X5, 40 cores, 80 threads
> >
> >
> >1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
> >----------------------------------------------------------
> >						Total
> >Configuration				Throughput(MB/s)	Notes
> >
> >3.10-default-ple_on			22945			5% CPU in host kernel, 2% spin_lock in guests
> >3.10-default-ple_off			23184			5% CPU in host kernel, 2% spin_lock in guests
> >3.10-pvticket-ple_on			22895			5% CPU in host kernel, 2% spin_lock in guests
> >3.10-pvticket-ple_off			23051			5% CPU in host kernel, 2% spin_lock in guests
> >[all 1x results look good here]
> 
> Yes. The 1x results look too close
> 
> >
> >
> >2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
> >-----------------------------------------------------------
> >						Total
> >Configuration				Throughput		Notes
> >
> >3.10-default-ple_on			 6287			55% CPU  host kernel, 17% spin_lock in guests
> >3.10-default-ple_off			 1849			2% CPU in host kernel, 95% spin_lock in guests
> >3.10-pvticket-ple_on			 6691			50% CPU in host kernel, 15% spin_lock in guests
> >3.10-pvticket-ple_off			16464			8% CPU in host kernel, 33% spin_lock in guests
> 
> I see 6.426% improvement with ple_on
> and 161.87% improvement with ple_off. I think this is a very good sign
>  for the patches
> 
> >[PLE hinders pv-ticket improvements, but even with PLE off,
> >  we still off from ideal throughput (somewhere >20000)]
> >
> 
> Okay, The ideal throughput you are referring is getting around atleast
> 80% of 1x throughput for over-commit. Yes we are still far away from
> there.
> 
> >
> >1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
> >----------------------------------------------------------
> >						Total
> >Configuration				Throughput		Notes
> >
> >3.10-default-ple_on			22736			6% CPU in host kernel, 3% spin_lock in guests
> >3.10-default-ple_off			23377			5% CPU in host kernel, 3% spin_lock in guests
> >3.10-pvticket-ple_on			22471			6% CPU in host kernel, 3% spin_lock in guests
> >3.10-pvticket-ple_off			23445			5% CPU in host kernel, 3% spin_lock in guests
> >[1x looking fine here]
> >
> 
> I see ple_off is little better here.
> 
> >
> >2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
> >----------------------------------------------------------
> >						Total
> >Configuration				Throughput		Notes
> >
> >3.10-default-ple_on			 1965			70% CPU in host kernel, 34% spin_lock in guests		
> >3.10-default-ple_off			  226			2% CPU in host kernel, 94% spin_lock in guests
> >3.10-pvticket-ple_on			 1942			70% CPU in host kernel, 35% spin_lock in guests
> >3.10-pvticket-ple_off			 8003			11% CPU in host kernel, 70% spin_lock in guests
> >[quite bad all around, but pv-tickets with PLE off the best so far.
> >  Still quite a bit off from ideal throughput]
> 
> This is again a remarkable improvement (307%).
> This motivates me to add a patch to disable ple when pvspinlock is on.
> probably we can add a hypercall that disables ple in kvm init patch.
> but only problem I see is what if the guests are mixed.
> 
>  (i.e one guest has pvspinlock support but other does not. Host
> supports pv)

How about reintroducing the idea of creating per-kvm ple_gap/ple_window
state? We were headed down that road when considering a dynamic window at
one point. Then you could just set a single guest's ple_gap to zero, which
would lead to PLE being disabled for that guest. We could also revisit
the dynamic window then.
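
Roughly something like this (untested; the per-VM fields below are
invented, initialized from today's module parameters):

/* per-VM copies of the knobs, e.g. in struct kvm_arch */
int ple_gap;		/* 0 => PLE disabled for this VM */
int ple_window;

/* set from the module parameters in kvm_arch_init_vm() */
kvm->arch.ple_gap = ple_gap;
kvm->arch.ple_window = ple_window;

/* and consumed where the globals are used today during VMCS setup */
if (vcpu->kvm->arch.ple_gap) {
	vmcs_write32(PLE_GAP, vcpu->kvm->arch.ple_gap);
	vmcs_write32(PLE_WINDOW, vcpu->kvm->arch.ple_window);
}

An ioctl or a hypercall could then zero ple_gap for just that guest.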

drew

> 
> /me thinks
> 
> >
> >In summary, I would state that the pv-ticket is an overall win, but the
> >current PLE handler tends to "get in the way" on these larger guests.
> >
> >-Andrew
> >
> 

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-26 11:37       ` Andrew Jones
@ 2013-06-26 12:52         ` Gleb Natapov
  -1 siblings, 0 replies; 192+ messages in thread
From: Gleb Natapov @ 2013-06-26 12:52 UTC (permalink / raw)
  To: Andrew Jones
  Cc: Raghavendra K T, habanero, mingo, jeremy, x86, konrad.wilk, hpa,
	pbonzini, linux-doc, xen-devel, peterz, mtosatti,
	stefano.stabellini, andi, attilio.rao, ouyang, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri

On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
> > On 06/25/2013 08:20 PM, Andrew Theurer wrote:
> > >On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
> > >>This series replaces the existing paravirtualized spinlock mechanism
> > >>with a paravirtualized ticketlock mechanism. The series provides
> > >>implementation for both Xen and KVM.
> > >>
> > >>Changes in V9:
> > >>- Changed spin_threshold to 32k to avoid excess halt exits that are
> > >>    causing undercommit degradation (after PLE handler improvement).
> > >>- Added  kvm_irq_delivery_to_apic (suggested by Gleb)
> > >>- Optimized halt exit path to use PLE handler
> > >>
> > >>V8 of PVspinlock was posted last year. After Avi's suggestions to look
> > >>at PLE handler's improvements, various optimizations in PLE handling
> > >>have been tried.
> > >
> > >Sorry for not posting this sooner.  I have tested the v9 pv-ticketlock
> > >patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs.  I have
> > >tested these patches with and without PLE, as PLE is still not scalable
> > >with large VMs.
> > >
> > 
> > Hi Andrew,
> > 
> > Thanks for testing.
> > 
> > >System: x3850X5, 40 cores, 80 threads
> > >
> > >
> > >1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
> > >----------------------------------------------------------
> > >						Total
> > >Configuration				Throughput(MB/s)	Notes
> > >
> > >3.10-default-ple_on			22945			5% CPU in host kernel, 2% spin_lock in guests
> > >3.10-default-ple_off			23184			5% CPU in host kernel, 2% spin_lock in guests
> > >3.10-pvticket-ple_on			22895			5% CPU in host kernel, 2% spin_lock in guests
> > >3.10-pvticket-ple_off			23051			5% CPU in host kernel, 2% spin_lock in guests
> > >[all 1x results look good here]
> > 
> > Yes. The 1x results look too close
> > 
> > >
> > >
> > >2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
> > >-----------------------------------------------------------
> > >						Total
> > >Configuration				Throughput		Notes
> > >
> > >3.10-default-ple_on			 6287			55% CPU  host kernel, 17% spin_lock in guests
> > >3.10-default-ple_off			 1849			2% CPU in host kernel, 95% spin_lock in guests
> > >3.10-pvticket-ple_on			 6691			50% CPU in host kernel, 15% spin_lock in guests
> > >3.10-pvticket-ple_off			16464			8% CPU in host kernel, 33% spin_lock in guests
> > 
> > I see 6.426% improvement with ple_on
> > and 161.87% improvement with ple_off. I think this is a very good sign
> >  for the patches
> > 
> > >[PLE hinders pv-ticket improvements, but even with PLE off,
> > >  we still off from ideal throughput (somewhere >20000)]
> > >
> > 
> > Okay, The ideal throughput you are referring is getting around atleast
> > 80% of 1x throughput for over-commit. Yes we are still far away from
> > there.
> > 
> > >
> > >1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
> > >----------------------------------------------------------
> > >						Total
> > >Configuration				Throughput		Notes
> > >
> > >3.10-default-ple_on			22736			6% CPU in host kernel, 3% spin_lock in guests
> > >3.10-default-ple_off			23377			5% CPU in host kernel, 3% spin_lock in guests
> > >3.10-pvticket-ple_on			22471			6% CPU in host kernel, 3% spin_lock in guests
> > >3.10-pvticket-ple_off			23445			5% CPU in host kernel, 3% spin_lock in guests
> > >[1x looking fine here]
> > >
> > 
> > I see ple_off is little better here.
> > 
> > >
> > >2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
> > >----------------------------------------------------------
> > >						Total
> > >Configuration				Throughput		Notes
> > >
> > >3.10-default-ple_on			 1965			70% CPU in host kernel, 34% spin_lock in guests		
> > >3.10-default-ple_off			  226			2% CPU in host kernel, 94% spin_lock in guests
> > >3.10-pvticket-ple_on			 1942			70% CPU in host kernel, 35% spin_lock in guests
> > >3.10-pvticket-ple_off			 8003			11% CPU in host kernel, 70% spin_lock in guests
> > >[quite bad all around, but pv-tickets with PLE off the best so far.
> > >  Still quite a bit off from ideal throughput]
> > 
> > This is again a remarkable improvement (307%).
> > This motivates me to add a patch to disable ple when pvspinlock is on.
> > probably we can add a hypercall that disables ple in kvm init patch.
> > but only problem I see is what if the guests are mixed.
> > 
> >  (i.e one guest has pvspinlock support but other does not. Host
> > supports pv)
> 
> How about reintroducing the idea to create per-kvm ple_gap,ple_window
> state. We were headed down that road when considering a dynamic window at
> one point. Then you can just set a single guest's ple_gap to zero, which
> would lead to PLE being disabled for that guest. We could also revisit
> the dynamic window then.
> 
Can be done, but let's understand why PLE on is such a big problem. Is it
possible that ple_gap and SPIN_THRESHOLD are not tuned properly?
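
(For reference, IIRC the VMX defaults are KVM_VMX_DEFAULT_PLE_GAP = 128 and
KVM_VMX_DEFAULT_PLE_WINDOW = 4096, while this series bumps SPIN_THRESHOLD
to 1 << 15.)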

--
			Gleb.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-26 12:52         ` Gleb Natapov
@ 2013-06-26 13:40           ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-26 13:40 UTC (permalink / raw)
  To: Gleb Natapov, habanero
  Cc: Andrew Jones, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini,
	linux-doc, xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	virtualization, srivatsa.vaddagiri

On 06/26/2013 06:22 PM, Gleb Natapov wrote:
> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>> This series replaces the existing paravirtualized spinlock mechanism
>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>> implementation for both Xen and KVM.
>>>>>
>>>>> Changes in V9:
>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>>     causing undercommit degradation (after PLE handler improvement).
>>>>> - Added  kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>> - Optimized halt exit path to use PLE handler
>>>>>
>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions to look
>>>>> at PLE handler's improvements, various optimizations in PLE handling
>>>>> have been tried.
>>>>
>>>> Sorry for not posting this sooner.  I have tested the v9 pv-ticketlock
>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs.  I have
>>>> tested these patches with and without PLE, as PLE is still not scalable
>>>> with large VMs.
>>>>
>>>
>>> Hi Andrew,
>>>
>>> Thanks for testing.
>>>
>>>> System: x3850X5, 40 cores, 80 threads
>>>>
>>>>
>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>> ----------------------------------------------------------
>>>> 						Total
>>>> Configuration				Throughput(MB/s)	Notes
>>>>
>>>> 3.10-default-ple_on			22945			5% CPU in host kernel, 2% spin_lock in guests
>>>> 3.10-default-ple_off			23184			5% CPU in host kernel, 2% spin_lock in guests
>>>> 3.10-pvticket-ple_on			22895			5% CPU in host kernel, 2% spin_lock in guests
>>>> 3.10-pvticket-ple_off			23051			5% CPU in host kernel, 2% spin_lock in guests
>>>> [all 1x results look good here]
>>>
>>> Yes. The 1x results look too close
>>>
>>>>
>>>>
>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>> -----------------------------------------------------------
>>>> 						Total
>>>> Configuration				Throughput		Notes
>>>>
>>>> 3.10-default-ple_on			 6287			55% CPU  host kernel, 17% spin_lock in guests
>>>> 3.10-default-ple_off			 1849			2% CPU in host kernel, 95% spin_lock in guests
>>>> 3.10-pvticket-ple_on			 6691			50% CPU in host kernel, 15% spin_lock in guests
>>>> 3.10-pvticket-ple_off			16464			8% CPU in host kernel, 33% spin_lock in guests
>>>
>>> I see 6.426% improvement with ple_on
>>> and 161.87% improvement with ple_off. I think this is a very good sign
>>>   for the patches
>>>
>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>>   we still off from ideal throughput (somewhere >20000)]
>>>>
>>>
>>> Okay, The ideal throughput you are referring is getting around atleast
>>> 80% of 1x throughput for over-commit. Yes we are still far away from
>>> there.
>>>
>>>>
>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>> ----------------------------------------------------------
>>>> 						Total
>>>> Configuration				Throughput		Notes
>>>>
>>>> 3.10-default-ple_on			22736			6% CPU in host kernel, 3% spin_lock in guests
>>>> 3.10-default-ple_off			23377			5% CPU in host kernel, 3% spin_lock in guests
>>>> 3.10-pvticket-ple_on			22471			6% CPU in host kernel, 3% spin_lock in guests
>>>> 3.10-pvticket-ple_off			23445			5% CPU in host kernel, 3% spin_lock in guests
>>>> [1x looking fine here]
>>>>
>>>
>>> I see ple_off is little better here.
>>>
>>>>
>>>> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
>>>> ----------------------------------------------------------
>>>> 						Total
>>>> Configuration				Throughput		Notes
>>>>
>>>> 3.10-default-ple_on			 1965			70% CPU in host kernel, 34% spin_lock in guests		
>>>> 3.10-default-ple_off			  226			2% CPU in host kernel, 94% spin_lock in guests
>>>> 3.10-pvticket-ple_on			 1942			70% CPU in host kernel, 35% spin_lock in guests
>>>> 3.10-pvticket-ple_off			 8003			11% CPU in host kernel, 70% spin_lock in guests
>>>> [quite bad all around, but pv-tickets with PLE off the best so far.
>>>>   Still quite a bit off from ideal throughput]
>>>
>>> This is again a remarkable improvement (307%).
>>> This motivates me to add a patch to disable ple when pvspinlock is on.
>>> probably we can add a hypercall that disables ple in kvm init patch.
>>> but only problem I see is what if the guests are mixed.
>>>
>>>   (i.e one guest has pvspinlock support but other does not. Host
>>> supports pv)
>>
>> How about reintroducing the idea to create per-kvm ple_gap,ple_window
>> state. We were headed down that road when considering a dynamic window at
>> one point. Then you can just set a single guest's ple_gap to zero, which
>> would lead to PLE being disabled for that guest. We could also revisit
>> the dynamic window then.
>>
> Can be done, but lets understand why ple on is such a big problem. Is it
> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
>

The one obvious reason I see is commit awareness inside the guest: for
under-commit there is no necessity to do PLE, but unfortunately we do.

At least we return back immediately in the case of a potential undercommit,
but we still incur the vmexit delay.
The same applies to SPIN_THRESHOLD: it should ideally be higher for
undercommit and lower for overcommit.

With this patch series SPIN_THRESHOLD is increased to 32k solely to avoid
under-commit regressions, but that will have eaten some amount of
overcommit performance.
In summary: excess halt exits / PLE exits were one main reason for the
undercommit regression (compared to the PLE-disabled case).
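
Just to illustrate the direction (nothing like this is in the series;
guest_is_overcommitted() is a made-up helper and the numbers are
arbitrary), if the guest had some notion of the commit level, the slowpath
could pick its threshold instead of using one fixed SPIN_THRESHOLD:

#define SPIN_THRESHOLD_UNDERCOMMIT	(1 << 15)	/* spin longer; halting is wasted work */
#define SPIN_THRESHOLD_OVERCOMMIT	(1 << 11)	/* give up the pcpu sooner */

static unsigned int spin_threshold(void)
{
	return guest_is_overcommitted() ? SPIN_THRESHOLD_OVERCOMMIT
					: SPIN_THRESHOLD_UNDERCOMMIT;
}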

1. A dynamic ple window was one solution on the PLE side, which we can
experiment with further (at the VM level or globally).
The other experiment I was thinking of is to extend the spinlock to
accommodate the vcpu id (Linus has opposed that, but it may be worth a
try).

2. Andrew Theurer had a patch to reduce double runqueue locking that I
will be testing.

I have some older experiments to retry, though they did not give
significant improvements before the PLE handler was modified.

Andrew, do you have any other details to add (from the perf reports that
you usually take with these experiments)?


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-26 12:52         ` Gleb Natapov
@ 2013-06-26 14:13           ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-06-26 14:13 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Andrew Jones, Raghavendra K T, habanero, mingo, jeremy, x86, hpa,
	pbonzini, linux-doc, xen-devel, peterz, mtosatti,
	stefano.stabellini, andi, attilio.rao, ouyang, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri

On Wed, Jun 26, 2013 at 03:52:40PM +0300, Gleb Natapov wrote:
> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
> > On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
> > > On 06/25/2013 08:20 PM, Andrew Theurer wrote:
> > > >On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
> > > >>This series replaces the existing paravirtualized spinlock mechanism
> > > >>with a paravirtualized ticketlock mechanism. The series provides
> > > >>implementation for both Xen and KVM.
> > > >>
> > > >>Changes in V9:
> > > >>- Changed spin_threshold to 32k to avoid excess halt exits that are
> > > >>    causing undercommit degradation (after PLE handler improvement).
> > > >>- Added  kvm_irq_delivery_to_apic (suggested by Gleb)
> > > >>- Optimized halt exit path to use PLE handler
> > > >>
> > > >>V8 of PVspinlock was posted last year. After Avi's suggestions to look
> > > >>at PLE handler's improvements, various optimizations in PLE handling
> > > >>have been tried.
> > > >
> > > >Sorry for not posting this sooner.  I have tested the v9 pv-ticketlock
> > > >patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs.  I have
> > > >tested these patches with and without PLE, as PLE is still not scalable
> > > >with large VMs.
> > > >
> > > 
> > > Hi Andrew,
> > > 
> > > Thanks for testing.
> > > 
> > > >System: x3850X5, 40 cores, 80 threads
> > > >
> > > >
> > > >1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
> > > >----------------------------------------------------------
> > > >						Total
> > > >Configuration				Throughput(MB/s)	Notes
> > > >
> > > >3.10-default-ple_on			22945			5% CPU in host kernel, 2% spin_lock in guests
> > > >3.10-default-ple_off			23184			5% CPU in host kernel, 2% spin_lock in guests
> > > >3.10-pvticket-ple_on			22895			5% CPU in host kernel, 2% spin_lock in guests
> > > >3.10-pvticket-ple_off			23051			5% CPU in host kernel, 2% spin_lock in guests
> > > >[all 1x results look good here]
> > > 
> > > Yes. The 1x results look too close
> > > 
> > > >
> > > >
> > > >2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
> > > >-----------------------------------------------------------
> > > >						Total
> > > >Configuration				Throughput		Notes
> > > >
> > > >3.10-default-ple_on			 6287			55% CPU  host kernel, 17% spin_lock in guests
> > > >3.10-default-ple_off			 1849			2% CPU in host kernel, 95% spin_lock in guests
> > > >3.10-pvticket-ple_on			 6691			50% CPU in host kernel, 15% spin_lock in guests
> > > >3.10-pvticket-ple_off			16464			8% CPU in host kernel, 33% spin_lock in guests
> > > 
> > > I see 6.426% improvement with ple_on
> > > and 161.87% improvement with ple_off. I think this is a very good sign
> > >  for the patches
> > > 
> > > >[PLE hinders pv-ticket improvements, but even with PLE off,
> > > >  we still off from ideal throughput (somewhere >20000)]
> > > >
> > > 
> > > Okay, The ideal throughput you are referring is getting around atleast
> > > 80% of 1x throughput for over-commit. Yes we are still far away from
> > > there.
> > > 
> > > >
> > > >1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
> > > >----------------------------------------------------------
> > > >						Total
> > > >Configuration				Throughput		Notes
> > > >
> > > >3.10-default-ple_on			22736			6% CPU in host kernel, 3% spin_lock in guests
> > > >3.10-default-ple_off			23377			5% CPU in host kernel, 3% spin_lock in guests
> > > >3.10-pvticket-ple_on			22471			6% CPU in host kernel, 3% spin_lock in guests
> > > >3.10-pvticket-ple_off			23445			5% CPU in host kernel, 3% spin_lock in guests
> > > >[1x looking fine here]
> > > >
> > > 
> > > I see ple_off is little better here.
> > > 
> > > >
> > > >2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
> > > >----------------------------------------------------------
> > > >						Total
> > > >Configuration				Throughput		Notes
> > > >
> > > >3.10-default-ple_on			 1965			70% CPU in host kernel, 34% spin_lock in guests		
> > > >3.10-default-ple_off			  226			2% CPU in host kernel, 94% spin_lock in guests
> > > >3.10-pvticket-ple_on			 1942			70% CPU in host kernel, 35% spin_lock in guests
> > > >3.10-pvticket-ple_off			 8003			11% CPU in host kernel, 70% spin_lock in guests
> > > >[quite bad all around, but pv-tickets with PLE off the best so far.
> > > >  Still quite a bit off from ideal throughput]
> > > 
> > > This is again a remarkable improvement (307%).
> > > This motivates me to add a patch to disable ple when pvspinlock is on.
> > > probably we can add a hypercall that disables ple in kvm init patch.
> > > but only problem I see is what if the guests are mixed.
> > > 
> > >  (i.e one guest has pvspinlock support but other does not. Host
> > > supports pv)
> > 
> > How about reintroducing the idea to create per-kvm ple_gap,ple_window
> > state. We were headed down that road when considering a dynamic window at
> > one point. Then you can just set a single guest's ple_gap to zero, which
> > would lead to PLE being disabled for that guest. We could also revisit
> > the dynamic window then.
> > 
> Can be done, but lets understand why ple on is such a big problem. Is it
> possible that ple gap and SPIN_THRESHOLD are not tuned properly?

It could be, but it also could be a microcode issue. Earlier versions
of Intel (and AMD) CPUs did not have the best detection mechanism and had
a "jitter" to them. The ple_gap and ple_window values seem to have been
chosen based on microbenchmarks - and while they might work great with
Windows-type guests, the same cannot be said about Linux.

In which case, if you fiddle with the ple gap/window you might incur
worse performance with Windows guests :-( Or with older Linux guests
that use the byte-locking mechanism.

Perhaps the best option is to introduce - as a separate patchset -
said dynamic window, which would be off when the pvticket lock is off, and
then, based on further CPU improvements, turn it on/off?
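
Something along these lines, perhaps (completely untested; the grow/shrink
policy, the constants and the per-vcpu ple_window field are all invented
here):

#define PLE_WINDOW_MIN	4096
#define PLE_WINDOW_MAX	(16 * 4096)

/*
 * Called from the PLE handler: grow the window when we keep exiting
 * without finding anybody useful to yield to (smells like undercommit),
 * shrink it again once directed yield starts succeeding.
 */
static void update_ple_window(struct vcpu_vmx *vmx, bool yield_succeeded)
{
	if (yield_succeeded)
		vmx->ple_window = max(vmx->ple_window / 2, PLE_WINDOW_MIN);
	else
		vmx->ple_window = min(vmx->ple_window * 2, PLE_WINDOW_MAX);

	vmcs_write32(PLE_WINDOW, vmx->ple_window);
}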

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-26 13:40           ` Raghavendra K T
  (?)
@ 2013-06-26 14:39           ` Chegu Vinod
  2013-06-26 15:37               ` Raghavendra K T
  -1 siblings, 1 reply; 192+ messages in thread
From: Chegu Vinod @ 2013-06-26 14:39 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: jeremy, gregkh, linux-doc, peterz, riel, virtualization, andi,
	hpa, stefano.stabellini, xen-devel, kvm, x86, mingo, habanero,
	Andrew Jones, konrad.wilk, ouyang, avi.kivity, tglx,
	linux-kernel, srivatsa.vaddagiri, attilio.rao, pbonzini,
	torvalds, stephan.diestelhorst


On 6/26/2013 6:40 AM, Raghavendra K T wrote:
> On 06/26/2013 06:22 PM, Gleb Natapov wrote:
>> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>>> This series replaces the existing paravirtualized spinlock mechanism
>>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>>> implementation for both Xen and KVM.
>>>>>>
>>>>>> Changes in V9:
>>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>>>     causing undercommit degradation (after PLE handler improvement).
>>>>>> - Added  kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>>> - Optimized halt exit path to use PLE handler
>>>>>>
>>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions to 
>>>>>> look
>>>>>> at PLE handler's improvements, various optimizations in PLE handling
>>>>>> have been tried.
>>>>>
>>>>> Sorry for not posting this sooner.  I have tested the v9 
>>>>> pv-ticketlock
>>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs.  I 
>>>>> have
>>>>> tested these patches with and without PLE, as PLE is still not 
>>>>> scalable
>>>>> with large VMs.
>>>>>
>>>>
>>>> Hi Andrew,
>>>>
>>>> Thanks for testing.
>>>>
>>>>> System: x3850X5, 40 cores, 80 threads
>>>>>
>>>>>
>>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>>> ----------------------------------------------------------
>>>>>                         Total
>>>>> Configuration                Throughput(MB/s)    Notes
>>>>>
>>>>> 3.10-default-ple_on            22945            5% CPU in host 
>>>>> kernel, 2% spin_lock in guests
>>>>> 3.10-default-ple_off            23184            5% CPU in host 
>>>>> kernel, 2% spin_lock in guests
>>>>> 3.10-pvticket-ple_on            22895            5% CPU in host 
>>>>> kernel, 2% spin_lock in guests
>>>>> 3.10-pvticket-ple_off            23051            5% CPU in host 
>>>>> kernel, 2% spin_lock in guests
>>>>> [all 1x results look good here]
>>>>
>>>> Yes. The 1x results look too close
>>>>
>>>>>
>>>>>
>>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>>> -----------------------------------------------------------
>>>>>                         Total
>>>>> Configuration                Throughput        Notes
>>>>>
>>>>> 3.10-default-ple_on             6287            55% CPU host 
>>>>> kernel, 17% spin_lock in guests
>>>>> 3.10-default-ple_off             1849            2% CPU in host 
>>>>> kernel, 95% spin_lock in guests
>>>>> 3.10-pvticket-ple_on             6691            50% CPU in host 
>>>>> kernel, 15% spin_lock in guests
>>>>> 3.10-pvticket-ple_off            16464            8% CPU in host 
>>>>> kernel, 33% spin_lock in guests
>>>>
>>>> I see 6.426% improvement with ple_on
>>>> and 161.87% improvement with ple_off. I think this is a very good sign
>>>>   for the patches
>>>>
>>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>>>   we still off from ideal throughput (somewhere >20000)]
>>>>>
>>>>
>>>> Okay, The ideal throughput you are referring is getting around atleast
>>>> 80% of 1x throughput for over-commit. Yes we are still far away from
>>>> there.
>>>>
>>>>>
>>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>>> ----------------------------------------------------------
>>>>>                         Total
>>>>> Configuration                Throughput        Notes
>>>>>
>>>>> 3.10-default-ple_on            22736            6% CPU in host 
>>>>> kernel, 3% spin_lock in guests
>>>>> 3.10-default-ple_off            23377            5% CPU in host 
>>>>> kernel, 3% spin_lock in guests
>>>>> 3.10-pvticket-ple_on            22471            6% CPU in host 
>>>>> kernel, 3% spin_lock in guests
>>>>> 3.10-pvticket-ple_off            23445            5% CPU in host 
>>>>> kernel, 3% spin_lock in guests
>>>>> [1x looking fine here]
>>>>>
>>>>
>>>> I see ple_off is little better here.
>>>>
>>>>>
>>>>> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
>>>>> ----------------------------------------------------------
>>>>>                         Total
>>>>> Configuration                Throughput        Notes
>>>>>
>>>>> 3.10-default-ple_on             1965            70% CPU in host 
>>>>> kernel, 34% spin_lock in guests
>>>>> 3.10-default-ple_off              226            2% CPU in host 
>>>>> kernel, 94% spin_lock in guests
>>>>> 3.10-pvticket-ple_on             1942            70% CPU in host 
>>>>> kernel, 35% spin_lock in guests
>>>>> 3.10-pvticket-ple_off             8003            11% CPU in host 
>>>>> kernel, 70% spin_lock in guests
>>>>> [quite bad all around, but pv-tickets with PLE off the best so far.
>>>>>   Still quite a bit off from ideal throughput]
>>>>
>>>> This is again a remarkable improvement (307%).
>>>> This motivates me to add a patch to disable ple when pvspinlock is on.
>>>> probably we can add a hypercall that disables ple in kvm init patch.
>>>> but only problem I see is what if the guests are mixed.
>>>>
>>>>   (i.e one guest has pvspinlock support but other does not. Host
>>>> supports pv)
>>>
>>> How about reintroducing the idea to create per-kvm ple_gap,ple_window
>>> state. We were headed down that road when considering a dynamic 
>>> window at
>>> one point. Then you can just set a single guest's ple_gap to zero, 
>>> which
>>> would lead to PLE being disabled for that guest. We could also revisit
>>> the dynamic window then.
>>>
>> Can be done, but lets understand why ple on is such a big problem. Is it
>> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
>>
>
> The one obvious reason I see is commit awareness inside the guest. for
> under-commit there is no necessity to do PLE, but unfortunately we do.
>
> atleast we return back immediately in case of potential undercommits,
> but we still incur vmexit delay.
> same applies to SPIN_THRESHOLD. SPIN_THRESHOLD should be ideally more
> for undercommit and less for overcommit.
>
> with this patch series SPIN_THRESHOLD is increased to 32k to solely
> avoid under-commit regressions but it would have eaten some amount of
> overcommit performance.
> In summary: excess halt-exit/pl-exit was one  main reason for
> undercommit regression. (compared to pl disabled case)

I haven't yet tried these patches...hope to do so sometime soon.

Fwiw...after Raghu's last set of PLE changes that is now in 3.10-rc 
kernels...I didn't notice much difference in workload performance 
between PLE enabled vs. disabled. This is for the under-commit (+pinned)
large guest case.

Here is a small sampling of the guest exits collected via kvm ftrace for 
an OLTP-like workload which was keeping the guest ~85-90% busy on an
8-socket Westmere-EX box (HT-off).

TIME_IN_GUEST        71.616293
TIME_ON_HOST          7.764597

MSR_READ              0.000362    0.0%
NMI_WINDOW            0.000002    0.0%
PAUSE_INSTRUCTION     0.158595    2.0%
PENDING_INTERRUPT     0.033779    0.4%
MSR_WRITE             0.001695    0.0%
EXTERNAL_INTERRUPT    3.210867   41.4%
IO_INSTRUCTION        0.000018    0.0%
RDPMC                 0.000067    0.0%
HLT                   2.822523   36.4%
EXCEPTION_NMI         0.008362    0.1%
CR_ACCESS             0.010027    0.1%
APIC_ACCESS           1.518300   19.6%



[  Don't mean to digress from the topic, but in most of my under-commit +
pinned large guest experiments with 3.10 kernels (using 2 or 3 different
workloads) the time spent in halt exits is typically much more than the
time spent in ple exits. Can anything be done to reduce the duration of
those exits, or avoid them?  ]

>
> 1. dynamic ple window was one solution for PLE, which we can experiment
> further. (at VM level or global).

Is this the case where the dynamic PLE window starts off at a value
more suitable to reducing exits for under-commit (and pinned) cases, and
only when the host OS detects that the degree of under-commit is
shrinking (i.e. moving towards having more vcpus to schedule and hence
becoming over-committed) does it adjust the PLE window to a value more
suitable for the over-commit case? Or is this some different idea?

Thanks
Vinod

> The other experiment I was thinking is to extend spinlock to
> accommodate vcpuid (Linus has opposed that but may be worth a
> try).
>


> 2. Andrew Theurer had patch to reduce double runq lock that I will be 
> testing.
>
> I have some older experiments to retry though they did not give 
> significant improvements before the PLE handler modified.
>
> Andrew, do you have any other details to add (from perf report that 
> you usually take with these experiments)?
>
> .
>


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-26 14:39           ` Chegu Vinod
@ 2013-06-26 15:37               ` Raghavendra K T
  0 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-26 15:37 UTC (permalink / raw)
  To: Chegu Vinod
  Cc: Gleb Natapov, habanero, Andrew Jones, mingo, jeremy, x86,
	konrad.wilk, hpa, pbonzini, linux-doc, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri

On 06/26/2013 08:09 PM, Chegu Vinod wrote:
> On 6/26/2013 6:40 AM, Raghavendra K T wrote:
>> On 06/26/2013 06:22 PM, Gleb Natapov wrote:
>>> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>>>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>>>> This series replaces the existing paravirtualized spinlock mechanism
>>>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>>>> implementation for both Xen and KVM.
>>>>>>>
>>>>>>> Changes in V9:
>>>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>>>>     causing undercommit degradation (after PLE handler improvement).
>>>>>>> - Added  kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>>>> - Optimized halt exit path to use PLE handler
>>>>>>>
>>>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions to
>>>>>>> look
>>>>>>> at PLE handler's improvements, various optimizations in PLE handling
>>>>>>> have been tried.
>>>>>>
>>>>>> Sorry for not posting this sooner.  I have tested the v9
>>>>>> pv-ticketlock
>>>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs.  I
>>>>>> have
>>>>>> tested these patches with and without PLE, as PLE is still not
>>>>>> scalable
>>>>>> with large VMs.
>>>>>>
>>>>>
>>>>> Hi Andrew,
>>>>>
>>>>> Thanks for testing.
>>>>>
>>>>>> System: x3850X5, 40 cores, 80 threads
>>>>>>
>>>>>>
>>>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>>>> ----------------------------------------------------------
>>>>>>                         Total
>>>>>> Configuration                Throughput(MB/s)    Notes
>>>>>>
>>>>>> 3.10-default-ple_on            22945            5% CPU in host
>>>>>> kernel, 2% spin_lock in guests
>>>>>> 3.10-default-ple_off            23184            5% CPU in host
>>>>>> kernel, 2% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on            22895            5% CPU in host
>>>>>> kernel, 2% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off            23051            5% CPU in host
>>>>>> kernel, 2% spin_lock in guests
>>>>>> [all 1x results look good here]
>>>>>
>>>>> Yes. The 1x results look too close
>>>>>
>>>>>>
>>>>>>
>>>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>>>> -----------------------------------------------------------
>>>>>>                         Total
>>>>>> Configuration                Throughput        Notes
>>>>>>
>>>>>> 3.10-default-ple_on             6287            55% CPU host
>>>>>> kernel, 17% spin_lock in guests
>>>>>> 3.10-default-ple_off             1849            2% CPU in host
>>>>>> kernel, 95% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on             6691            50% CPU in host
>>>>>> kernel, 15% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off            16464            8% CPU in host
>>>>>> kernel, 33% spin_lock in guests
>>>>>
>>>>> I see 6.426% improvement with ple_on
>>>>> and 161.87% improvement with ple_off. I think this is a very good sign
>>>>>   for the patches
>>>>>
>>>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>>>>   we still off from ideal throughput (somewhere >20000)]
>>>>>>
>>>>>
>>>>> Okay, The ideal throughput you are referring is getting around atleast
>>>>> 80% of 1x throughput for over-commit. Yes we are still far away from
>>>>> there.
>>>>>
>>>>>>
>>>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>>>> ----------------------------------------------------------
>>>>>>                         Total
>>>>>> Configuration                Throughput        Notes
>>>>>>
>>>>>> 3.10-default-ple_on            22736            6% CPU in host
>>>>>> kernel, 3% spin_lock in guests
>>>>>> 3.10-default-ple_off            23377            5% CPU in host
>>>>>> kernel, 3% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on            22471            6% CPU in host
>>>>>> kernel, 3% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off            23445            5% CPU in host
>>>>>> kernel, 3% spin_lock in guests
>>>>>> [1x looking fine here]
>>>>>>
>>>>>
>>>>> I see ple_off is little better here.
>>>>>
>>>>>>
>>>>>> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
>>>>>> ----------------------------------------------------------
>>>>>>                         Total
>>>>>> Configuration                Throughput        Notes
>>>>>>
>>>>>> 3.10-default-ple_on             1965            70% CPU in host
>>>>>> kernel, 34% spin_lock in guests
>>>>>> 3.10-default-ple_off              226            2% CPU in host
>>>>>> kernel, 94% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on             1942            70% CPU in host
>>>>>> kernel, 35% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off             8003            11% CPU in host
>>>>>> kernel, 70% spin_lock in guests
>>>>>> [quite bad all around, but pv-tickets with PLE off the best so far.
>>>>>>   Still quite a bit off from ideal throughput]
>>>>>
>>>>> This is again a remarkable improvement (307%).
>>>>> This motivates me to add a patch to disable ple when pvspinlock is on.
>>>>> probably we can add a hypercall that disables ple in kvm init patch.
>>>>> but only problem I see is what if the guests are mixed.
>>>>>
>>>>>   (i.e one guest has pvspinlock support but other does not. Host
>>>>> supports pv)
>>>>
>>>> How about reintroducing the idea to create per-kvm ple_gap,ple_window
>>>> state. We were headed down that road when considering a dynamic
>>>> window at
>>>> one point. Then you can just set a single guest's ple_gap to zero,
>>>> which
>>>> would lead to PLE being disabled for that guest. We could also revisit
>>>> the dynamic window then.
>>>>
>>> Can be done, but lets understand why ple on is such a big problem. Is it
>>> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
>>>
>>
>> The one obvious reason I see is commit awareness inside the guest. for
>> under-commit there is no necessity to do PLE, but unfortunately we do.
>>
>> atleast we return back immediately in case of potential undercommits,
>> but we still incur vmexit delay.
>> same applies to SPIN_THRESHOLD. SPIN_THRESHOLD should be ideally more
>> for undercommit and less for overcommit.
>>
>> with this patch series SPIN_THRESHOLD is increased to 32k to solely
>> avoid under-commit regressions but it would have eaten some amount of
>> overcommit performance.
>> In summary: excess halt-exit/pl-exit was one  main reason for
>> undercommit regression. (compared to pl disabled case)
>
> I haven't yet tried these patches...hope to do so sometime soon.
>
> Fwiw...after Raghu's last set of PLE changes that is now in 3.10-rc
> kernels...I didn't notice much difference in workload performance
> between PLE enabled vs. disabled. This is for under-commit (+pinned)
> large guest case.
>

Hi Vinod,
Thanks for confirming that the ple enabled case is now very close to ple
disabled.

> Here is a small sampling of the guest exits collected via kvm ftrace for
> an OLTP-like workload which was keeping the guest ~85-90% busy on a 8
> socket Westmere-EX box (HT-off).
>
> TIME_IN_GUEST        71.616293
> TIME_ON_HOST          7.764597
>
> MSR_READ              0.000362    0.0%
> NMI_WINDOW            0.000002    0.0%
> PAUSE_INSTRUCTION     0.158595    2.0%
> PENDING_INTERRUPT     0.033779    0.4%
> MSR_WRITE             0.001695    0.0%
> EXTERNAL_INTERRUPT    3.210867   41.4%
> IO_INSTRUCTION        0.000018    0.0%
> RDPMC                 0.000067    0.0%
> HLT                   2.822523   36.4%
> EXCEPTION_NMI         0.008362    0.1%
> CR_ACCESS             0.010027    0.1%
> APIC_ACCESS           1.518300   19.6%
>
>
>
> [  Don't mean to digress from the topic but in most of my under-commit +
> pinned large guest experiments with 3.10 kernels (using 2 or 3 different
> workloads) the time spent in halt exits are typically much more than the
> time spent in ple exits. Can anything be done to reduce the duration or
> avoid those exits ?  ]
>

I would say the patch in this series that uses the ple handler in the halt
exit path, [patch 18 kvm hypervisor: Add directed yield in vcpu block path],
helps with this. That is an independent patch to try out.
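
Roughly, the idea is something like the sketch below (all names are invented
for illustration, this is not the actual code from patch 18): instead of
going straight to sleep on a halt, the vcpu first donates its slice through
the same directed-yield logic the PLE handler uses.

/*
 * Sketch only: directed yield in the vcpu block path.
 * Invented names; the helpers are stubs standing in for real KVM code.
 */
struct vcpu_sketch { int has_pending_event; };

static void directed_yield_sketch(struct vcpu_sketch *v) { (void)v; /* boost a runnable sibling vcpu */ }
static void halt_sleep_sketch(struct vcpu_sketch *v)     { (void)v; /* sleep until an event/kick arrives */ }

static void vcpu_block_sketch(struct vcpu_sketch *v)
{
	if (!v->has_pending_event)
		directed_yield_sketch(v);	/* use the halt time productively */

	while (!v->has_pending_event)
		halt_sleep_sketch(v);
}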

>>
>> 1. dynamic ple window was one solution for PLE, which we can experiment
>> further. (at VM level or global).
>
> Is this the case where the dynamic PLE  window starts off at a value
> more suitable to reduce exits for under-commit (and pinned) cases and
> only when the host OS detects that the degree of under-commit is
> shrinking (i.e. moving towards having more vcpus to schedule and hence
> getting to be over committed) it adjusts the ple window more suitable to
> the over commit case ? or is this some different idea ?

Yes, we are discussing the same idea.
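
To make it concrete, a minimal sketch of such a per-VM adjustment policy
could look like the code below. Everything here is invented for
illustration (the struct, the function and the window bounds are not
existing KVM interfaces); it only captures the grow-while-undercommitted /
shrink-when-overcommitted behaviour described above.

/*
 * Sketch only: a possible per-VM dynamic PLE window policy.
 * All names and numbers are made up for illustration.
 */
#define PLE_WINDOW_MIN	4096u		/* over-committed: exit early */
#define PLE_WINDOW_MAX	(64u * 1024u)	/* under-committed: almost never exit */

struct vm_ple_state {
	unsigned int ple_window;	/* current per-VM PLE window */
};

static void adjust_ple_window(struct vm_ple_state *vm,
			      unsigned int runnable_vcpus,
			      unsigned int online_pcpus)
{
	if (runnable_vcpus <= online_pcpus) {
		/* Under-committed: PLE exits are mostly wasted, back off. */
		vm->ple_window *= 2;
		if (vm->ple_window > PLE_WINDOW_MAX)
			vm->ple_window = PLE_WINDOW_MAX;
	} else {
		/* Over-committed: exit sooner so the lock holder can be yielded to. */
		vm->ple_window /= 2;
		if (vm->ple_window < PLE_WINDOW_MIN)
			vm->ple_window = PLE_WINDOW_MIN;
	}
}

Starting each VM at PLE_WINDOW_MAX and only shrinking once over-commit is
detected would match the behaviour you describe; starting low and growing
is the more conservative variant.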

>
> Thanks
> Vinod
>
>> The other experiment I was thinking is to extend spinlock to
>> accommodate vcpuid (Linus has opposed that but may be worth a
>> try).
>>
>
>
>> 2. Andrew Theurer had patch to reduce double runq lock that I will be
>> testing.
>>
>> I have some older experiments to retry though they did not give
>> significant improvements before the PLE handler modified.
>>
>> Andrew, do you have any other details to add (from perf report that
>> you usually take with these experiments)?


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-26 12:52         ` Gleb Natapov
@ 2013-06-26 15:56           ` Andrew Theurer
  -1 siblings, 0 replies; 192+ messages in thread
From: Andrew Theurer @ 2013-06-26 15:56 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Andrew Jones, Raghavendra K T, mingo, jeremy, x86, konrad.wilk,
	hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
	stefano.stabellini, andi, attilio.rao, ouyang, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri

On Wed, 2013-06-26 at 15:52 +0300, Gleb Natapov wrote:
> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
> > On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
> > > On 06/25/2013 08:20 PM, Andrew Theurer wrote:
> > > >On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
> > > >>This series replaces the existing paravirtualized spinlock mechanism
> > > >>with a paravirtualized ticketlock mechanism. The series provides
> > > >>implementation for both Xen and KVM.
> > > >>
> > > >>Changes in V9:
> > > >>- Changed spin_threshold to 32k to avoid excess halt exits that are
> > > >>    causing undercommit degradation (after PLE handler improvement).
> > > >>- Added  kvm_irq_delivery_to_apic (suggested by Gleb)
> > > >>- Optimized halt exit path to use PLE handler
> > > >>
> > > >>V8 of PVspinlock was posted last year. After Avi's suggestions to look
> > > >>at PLE handler's improvements, various optimizations in PLE handling
> > > >>have been tried.
> > > >
> > > >Sorry for not posting this sooner.  I have tested the v9 pv-ticketlock
> > > >patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs.  I have
> > > >tested these patches with and without PLE, as PLE is still not scalable
> > > >with large VMs.
> > > >
> > > 
> > > Hi Andrew,
> > > 
> > > Thanks for testing.
> > > 
> > > >System: x3850X5, 40 cores, 80 threads
> > > >
> > > >
> > > >1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
> > > >----------------------------------------------------------
> > > >						Total
> > > >Configuration				Throughput(MB/s)	Notes
> > > >
> > > >3.10-default-ple_on			22945			5% CPU in host kernel, 2% spin_lock in guests
> > > >3.10-default-ple_off			23184			5% CPU in host kernel, 2% spin_lock in guests
> > > >3.10-pvticket-ple_on			22895			5% CPU in host kernel, 2% spin_lock in guests
> > > >3.10-pvticket-ple_off			23051			5% CPU in host kernel, 2% spin_lock in guests
> > > >[all 1x results look good here]
> > > 
> > > Yes. The 1x results look too close
> > > 
> > > >
> > > >
> > > >2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
> > > >-----------------------------------------------------------
> > > >						Total
> > > >Configuration				Throughput		Notes
> > > >
> > > >3.10-default-ple_on			 6287			55% CPU  host kernel, 17% spin_lock in guests
> > > >3.10-default-ple_off			 1849			2% CPU in host kernel, 95% spin_lock in guests
> > > >3.10-pvticket-ple_on			 6691			50% CPU in host kernel, 15% spin_lock in guests
> > > >3.10-pvticket-ple_off			16464			8% CPU in host kernel, 33% spin_lock in guests
> > > 
> > > I see 6.426% improvement with ple_on
> > > and 161.87% improvement with ple_off. I think this is a very good sign
> > >  for the patches
> > > 
> > > >[PLE hinders pv-ticket improvements, but even with PLE off,
> > > >  we still off from ideal throughput (somewhere >20000)]
> > > >
> > > 
> > > Okay, The ideal throughput you are referring is getting around atleast
> > > 80% of 1x throughput for over-commit. Yes we are still far away from
> > > there.
> > > 
> > > >
> > > >1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
> > > >----------------------------------------------------------
> > > >						Total
> > > >Configuration				Throughput		Notes
> > > >
> > > >3.10-default-ple_on			22736			6% CPU in host kernel, 3% spin_lock in guests
> > > >3.10-default-ple_off			23377			5% CPU in host kernel, 3% spin_lock in guests
> > > >3.10-pvticket-ple_on			22471			6% CPU in host kernel, 3% spin_lock in guests
> > > >3.10-pvticket-ple_off			23445			5% CPU in host kernel, 3% spin_lock in guests
> > > >[1x looking fine here]
> > > >
> > > 
> > > I see ple_off is little better here.
> > > 
> > > >
> > > >2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
> > > >----------------------------------------------------------
> > > >						Total
> > > >Configuration				Throughput		Notes
> > > >
> > > >3.10-default-ple_on			 1965			70% CPU in host kernel, 34% spin_lock in guests		
> > > >3.10-default-ple_off			  226			2% CPU in host kernel, 94% spin_lock in guests
> > > >3.10-pvticket-ple_on			 1942			70% CPU in host kernel, 35% spin_lock in guests
> > > >3.10-pvticket-ple_off			 8003			11% CPU in host kernel, 70% spin_lock in guests
> > > >[quite bad all around, but pv-tickets with PLE off the best so far.
> > > >  Still quite a bit off from ideal throughput]
> > > 
> > > This is again a remarkable improvement (307%).
> > > This motivates me to add a patch to disable ple when pvspinlock is on.
> > > probably we can add a hypercall that disables ple in kvm init patch.
> > > but only problem I see is what if the guests are mixed.
> > > 
> > >  (i.e one guest has pvspinlock support but other does not. Host
> > > supports pv)
> > 
> > How about reintroducing the idea to create per-kvm ple_gap,ple_window
> > state. We were headed down that road when considering a dynamic window at
> > one point. Then you can just set a single guest's ple_gap to zero, which
> > would lead to PLE being disabled for that guest. We could also revisit
> > the dynamic window then.
> > 
> Can be done, but lets understand why ple on is such a big problem. Is it
> possible that ple gap and SPIN_THRESHOLD are not tuned properly?

The biggest problem currently is the double_runqueue_lock from
yield_to():
[2x overcommit with 20-vCPU VMs (8 VMs) all running dbench]

perf from host:
> 28.27%        396402  qemu-system-x86  [kernel.kallsyms]        [k] _raw_spin_lock                            
>  4.65%         65667  qemu-system-x86  [kernel.kallsyms]        [k] __schedule                                
>  3.87%         54802  qemu-system-x86  [kernel.kallsyms]        [k] finish_task_switch                        
>  3.32%         47022  qemu-system-x86  [kernel.kallsyms]        [k] perf_event_task_sched_out                 
>  2.84%         40093  qemu-system-x86  [kvm_intel]              [k] vmx_vcpu_run                              
>  2.70%         37672  qemu-system-x86  [kernel.kallsyms]        [k] yield_to                                  
>  2.63%         36859  qemu-system-x86  [kvm]                    [k] kvm_vcpu_on_spin                          
>  2.18%         30810  qemu-system-x86  [kvm_intel]              [k] __vmx_load_host_state     

A tiny patch [included below] checks if the target task is running
before double_runqueue_lock (then bails if it is running).  This does
reduce the lock contention somewhat:

[2x overcommit with 20-vCPU VMs (8 VMs) all running dbench]  
   
perf from host:
> 20.51%        284829  qemu-system-x86  [kernel.kallsyms]        [k] _raw_spin_lock                           
>  5.21%         72949  qemu-system-x86  [kernel.kallsyms]        [k] __schedule                               
>  3.70%         51962  qemu-system-x86  [kernel.kallsyms]        [k] finish_task_switch                       
>  3.50%         48607  qemu-system-x86  [kvm]                    [k] kvm_vcpu_on_spin                         
>  3.22%         45214  qemu-system-x86  [kernel.kallsyms]        [k] perf_event_task_sched_out                
>  3.18%         44546  qemu-system-x86  [kvm_intel]              [k] vmx_vcpu_run                             
>  3.13%         43176  qemu-system-x86  [kernel.kallsyms]        [k] yield_to                                 
>  2.37%         33349  qemu-system-x86  [kvm_intel]              [k] __vmx_load_host_state                    
>  2.06%         28503  qemu-system-x86  [kernel.kallsyms]        [k] get_pid_task    

So, the lock contention is reduced, and the results improve slightly
over default PLE/yield_to (in this case 1942 -> 2161, 11%), but this is
still far off from no PLE at all (8003) and way off from an ideal
throughput (>20000).

One of the problems, IMO, is that we are chasing our tail and burning
too much CPU trying to fix the problem, but much of what is done is not
actually fixing the problem (getting the one vcpu holding the lock to
run again).  We end up spending a lot of cycles getting a lot of vcpus
running again, and most of them are not holding that lock.  One
indication of this is the context switches in the host:

[2x overcommit with 20-vCPU VMs (8 VMs) all running dbench]

pvticket with PLE on:  2579227.76/sec
pvticket with PLE off:  233711.30/sec

That's over 10x context switches with PLE on.  All of this is for
yield_to, but IMO most of the vcpus are probably yielding to vcpus which are
not actually holding the lock.

I would like to see how this changes by tracking the lock holder in the
pvticket lock structure, and when a vcpu spins beyond a threshold, the
vcpu makes a hypercall to yield_to a -vCPU-it-specifies-, the one it
knows to be holding the lock.  Note that PLE is no longer needed for
this and the PLE detection should probably be disabled when the guest
has this ability.
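
As a purely illustrative sketch of that guest-side behaviour (the holder
field, the threshold and the hypercall wrapper below are all invented
names, not the pvticket code in this series):

/*
 * Sketch only: a ticket slowpath that remembers the lock holder and
 * asks the host to run exactly that vcpu.  Invented names; barriers,
 * atomics and the real slowpath-flag handling are omitted.
 */
#define SPIN_HOLDER_THRESHOLD	2048

struct pv_ticketlock_sketch {
	volatile unsigned short head;
	volatile unsigned short tail;
	int holder_vcpu;		/* stamped by the current owner */
};

static int this_vcpu_id(void)
{
	return 0;			/* stub: would read a per-cpu id */
}

static void hc_yield_to_vcpu(int vcpu_id)
{
	(void)vcpu_id;			/* stub: would be a hypercall exit */
}

static void ticket_spin_sketch(struct pv_ticketlock_sketch *lock,
			       unsigned short my_ticket)
{
	unsigned int spins = 0;

	while (lock->head != my_ticket) {
		if (++spins >= SPIN_HOLDER_THRESHOLD) {
			/* Yield directly to the vcpu believed to hold the lock. */
			hc_yield_to_vcpu(lock->holder_vcpu);
			spins = 0;
		}
	}
	lock->holder_vcpu = this_vcpu_id();	/* we own it now */
}

With something like this in place the PAUSE-loop exit really is redundant
for these locks, which is why disabling PLE detection for such guests makes
sense.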

Additionally, when other vcpus reach their spin threshold and also
identify the same target vcpu (the same lock), they may opt to not make
the yield_to hypercall, if another vcpu made the yield_to hypercall to
the same target vcpu -very-recently-, thus avoiding a redundant exit and
yield_to.

Another optimization may be to allow vcpu preemption to be visible
-inside- the guest.  If a vcpu reaches the spin threshold, then
identifies the lock holding vcpu, it then checks to see if a preemption
bit is set for that vcpu.  If it is not set, then it does nothing, and
if it is, it makes the yield_to hypercall.  This should help for locks
which really do have a big critical section, and the vcpus really do
need to spin for a while.
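
Again as an invented sketch (the shared structure and fields below are
hypothetical, not an existing guest/host ABI), the last two ideas could
combine into a single check before issuing the hypercall:

/*
 * Sketch only: host-published per-vcpu state the spinner consults to
 * decide whether a yield_to hypercall is worth the exit.
 */
struct vcpu_shared_sketch {
	unsigned char preempted;	/* set by the host when it deschedules the vcpu */
	unsigned long last_kick;	/* last time some vcpu yielded to this one */
};

/* would really live in memory shared with the host */
static struct vcpu_shared_sketch vcpu_shared[64];

static int worth_yielding_to(int holder, unsigned long now,
			     unsigned long recent_window)
{
	struct vcpu_shared_sketch *s = &vcpu_shared[holder];

	if (!s->preempted)
		return 0;	/* holder is running: keep spinning, no exit */
	if (now - s->last_kick < recent_window)
		return 0;	/* someone kicked it very recently: skip the exit */
	s->last_kick = now;
	return 1;		/* preempted and not recently kicked: yield to it */
}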

OK, one last thing.  This is a completely different approach at the
problem:  automatically adjust active vcpus from within a guest, with
some sort of daemon (vcpud?) to approximate the actual host cpu resource
available.  The daemon would monitor steal time and hot unplug vcpus to
reduce steal time to a small percentage, ending up with a slight cpu
overcommit.  It would also have to online vcpus if more cpu resource is
made available, again looking at steal time and adding vcpus until steal
time increases to a small percentage.  I am not sure if the overhead of
plugging/unplugging is worth it, but I would bet the guest would be far
more efficient, because (a) PLE and pvticket would be handling much
lower effective cpu overcommit (let's say ~1.1x) and (b) the guest and
its applications would have much better scalability because the active
vcpu count is much lower.
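
A very rough sketch of what such a daemon's control loop might do is below
(the thresholds and the policy are invented; error handling, the cpu0
special case and hysteresis are all ignored):

#include <stdio.h>
#include <unistd.h>

#define STEAL_TARGET_PCT	5.0
#define MAX_VCPU		64

/* Read aggregate cpu time and steal time (jiffies) from /proc/stat. */
static int read_cpu_times(unsigned long long *total, unsigned long long *steal)
{
	unsigned long long v[8];
	FILE *f = fopen("/proc/stat", "r");
	int i;

	if (!f)
		return -1;
	if (fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu %llu",
		   &v[0], &v[1], &v[2], &v[3], &v[4], &v[5], &v[6], &v[7]) != 8) {
		fclose(f);
		return -1;
	}
	fclose(f);
	*total = 0;
	for (i = 0; i < 8; i++)
		*total += v[i];
	*steal = v[7];		/* 8th field of the "cpu" line is steal */
	return 0;
}

static void set_cpu_online(int cpu, int online)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/devices/system/cpu/cpu%d/online", cpu);
	f = fopen(path, "w");
	if (f) {
		fprintf(f, "%d\n", online);
		fclose(f);
	}
}

int main(void)
{
	unsigned long long prev_total = 0, prev_steal = 0;
	int online_vcpus = MAX_VCPU;	/* assume all vcpus start online */

	for (;;) {
		unsigned long long total, steal;
		double steal_pct;

		sleep(5);
		if (read_cpu_times(&total, &steal) || total == prev_total)
			continue;
		steal_pct = 100.0 * (steal - prev_steal) / (total - prev_total);
		prev_total = total;
		prev_steal = steal;

		if (steal_pct > STEAL_TARGET_PCT && online_vcpus > 1)
			set_cpu_online(--online_vcpus, 0);	/* shed a vcpu */
		else if (steal_pct < STEAL_TARGET_PCT / 2 && online_vcpus < MAX_VCPU)
			set_cpu_online(online_vcpus++, 1);	/* bring one back */
	}
	return 0;
}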

So, let's see what one of those situations would look like, without
actually writing something to do the unplugging/plugging for us.  Let's
take one of the examples above, where we have 8 VMs, each defined
with 20 vcpus, for 2x overcommit, but let's unplug 9 vcpus in each of
the VMs, so we end up with a 1.1x effective overcommit (the last test
below).

[2x overcommit with 20-vCPU VMs (8 VMs) all running dbench] 

							Total
Configuration						Throughput	Notes

3.10-default-ple_on					1965		70% CPU in host kernel, 34% spin_lock in guests		
3.10-default-ple_off			 		 226		2% CPU in host kernel, 94% spin_lock in guests
3.10-pvticket-ple_on			 		1942		70% CPU in host kernel, 35% spin_lock in guests
3.10-pvticket-ple_off			 		8003		11% CPU in host kernel, 70% spin_lock in guests
3.10-pvticket-ple-on_doublerq-opt	 		2161		68% CPU in host kernel, 33% spin_lock in guests		
3.10-pvticket-ple_on_doublerq-opt_9vcpus-unplugged	22534		6% CPU in host kernel,  9% steal in guests, 2% spin_lock in guests

Finally, we get a nice result!  Note this is the lowest spin % in the guest.  The spin_lock in the host is also quite a bit better:


> 6.77%         55421  qemu-system-x86  [kernel.kallsyms]        [k] _raw_spin_lock                             
> 4.29%         57345  qemu-system-x86  [kvm_intel]              [k] vmx_vcpu_run                               
> 3.87%         62049  qemu-system-x86  [kernel.kallsyms]        [k] native_apic_msr_write                      
> 2.88%         45272  qemu-system-x86  [kernel.kallsyms]        [k] atomic_dec_and_mutex_lock                  
> 2.71%         39276  qemu-system-x86  [kvm]                    [k] vcpu_enter_guest                           
> 2.48%         38886  qemu-system-x86  [kernel.kallsyms]        [k] memset                                     
> 2.22%         18331  qemu-system-x86  [kvm]                    [k] kvm_vcpu_on_spin                           
> 2.09%         32628  qemu-system-x86  [kernel.kallsyms]        [k] perf_event_alloc 

Also the host context switches dropped significantly (66%), to 38768/sec.

-Andrew





Patch to reduce double runqueue lock in yield_to():

Signed-off-by: Andrew Theurer <habanero@linux.vnet.ibm.com>

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 58453b8..795d324 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4454,6 +4454,9 @@ again:
 		goto out_irq;
 	}
 
+	if (task_running(p_rq, p) || p->state)
+		goto out_irq;
+
 	double_rq_lock(rq, p_rq);
 	while (task_rq(p) != p_rq) {
 		double_rq_unlock(rq, p_rq);



^ permalink raw reply related	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-26 13:40           ` Raghavendra K T
@ 2013-06-26 16:11             ` Gleb Natapov
  -1 siblings, 0 replies; 192+ messages in thread
From: Gleb Natapov @ 2013-06-26 16:11 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: habanero, Andrew Jones, mingo, jeremy, x86, konrad.wilk, hpa,
	pbonzini, linux-doc, xen-devel, peterz, mtosatti,
	stefano.stabellini, andi, attilio.rao, ouyang, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri

On Wed, Jun 26, 2013 at 07:10:21PM +0530, Raghavendra K T wrote:
> On 06/26/2013 06:22 PM, Gleb Natapov wrote:
> >On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
> >>On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
> >>>On 06/25/2013 08:20 PM, Andrew Theurer wrote:
> >>>>On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
> >>>>>This series replaces the existing paravirtualized spinlock mechanism
> >>>>>with a paravirtualized ticketlock mechanism. The series provides
> >>>>>implementation for both Xen and KVM.
> >>>>>
> >>>>>Changes in V9:
> >>>>>- Changed spin_threshold to 32k to avoid excess halt exits that are
> >>>>>    causing undercommit degradation (after PLE handler improvement).
> >>>>>- Added  kvm_irq_delivery_to_apic (suggested by Gleb)
> >>>>>- Optimized halt exit path to use PLE handler
> >>>>>
> >>>>>V8 of PVspinlock was posted last year. After Avi's suggestions to look
> >>>>>at PLE handler's improvements, various optimizations in PLE handling
> >>>>>have been tried.
> >>>>
> >>>>Sorry for not posting this sooner.  I have tested the v9 pv-ticketlock
> >>>>patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs.  I have
> >>>>tested these patches with and without PLE, as PLE is still not scalable
> >>>>with large VMs.
> >>>>
> >>>
> >>>Hi Andrew,
> >>>
> >>>Thanks for testing.
> >>>
> >>>>System: x3850X5, 40 cores, 80 threads
> >>>>
> >>>>
> >>>>1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
> >>>>----------------------------------------------------------
> >>>>						Total
> >>>>Configuration				Throughput(MB/s)	Notes
> >>>>
> >>>>3.10-default-ple_on			22945			5% CPU in host kernel, 2% spin_lock in guests
> >>>>3.10-default-ple_off			23184			5% CPU in host kernel, 2% spin_lock in guests
> >>>>3.10-pvticket-ple_on			22895			5% CPU in host kernel, 2% spin_lock in guests
> >>>>3.10-pvticket-ple_off			23051			5% CPU in host kernel, 2% spin_lock in guests
> >>>>[all 1x results look good here]
> >>>
> >>>Yes. The 1x results look too close
> >>>
> >>>>
> >>>>
> >>>>2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
> >>>>-----------------------------------------------------------
> >>>>						Total
> >>>>Configuration				Throughput		Notes
> >>>>
> >>>>3.10-default-ple_on			 6287			55% CPU  host kernel, 17% spin_lock in guests
> >>>>3.10-default-ple_off			 1849			2% CPU in host kernel, 95% spin_lock in guests
> >>>>3.10-pvticket-ple_on			 6691			50% CPU in host kernel, 15% spin_lock in guests
> >>>>3.10-pvticket-ple_off			16464			8% CPU in host kernel, 33% spin_lock in guests
> >>>
> >>>I see 6.426% improvement with ple_on
> >>>and 161.87% improvement with ple_off. I think this is a very good sign
> >>>  for the patches
> >>>
> >>>>[PLE hinders pv-ticket improvements, but even with PLE off,
> >>>>  we still off from ideal throughput (somewhere >20000)]
> >>>>
> >>>
> >>>Okay, The ideal throughput you are referring is getting around atleast
> >>>80% of 1x throughput for over-commit. Yes we are still far away from
> >>>there.
> >>>
> >>>>
> >>>>1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
> >>>>----------------------------------------------------------
> >>>>						Total
> >>>>Configuration				Throughput		Notes
> >>>>
> >>>>3.10-default-ple_on			22736			6% CPU in host kernel, 3% spin_lock in guests
> >>>>3.10-default-ple_off			23377			5% CPU in host kernel, 3% spin_lock in guests
> >>>>3.10-pvticket-ple_on			22471			6% CPU in host kernel, 3% spin_lock in guests
> >>>>3.10-pvticket-ple_off			23445			5% CPU in host kernel, 3% spin_lock in guests
> >>>>[1x looking fine here]
> >>>>
> >>>
> >>>I see ple_off is little better here.
> >>>
> >>>>
> >>>>2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
> >>>>----------------------------------------------------------
> >>>>						Total
> >>>>Configuration				Throughput		Notes
> >>>>
> >>>>3.10-default-ple_on			 1965			70% CPU in host kernel, 34% spin_lock in guests		
> >>>>3.10-default-ple_off			  226			2% CPU in host kernel, 94% spin_lock in guests
> >>>>3.10-pvticket-ple_on			 1942			70% CPU in host kernel, 35% spin_lock in guests
> >>>>3.10-pvticket-ple_off			 8003			11% CPU in host kernel, 70% spin_lock in guests
> >>>>[quite bad all around, but pv-tickets with PLE off the best so far.
> >>>>  Still quite a bit off from ideal throughput]
> >>>
> >>>This is again a remarkable improvement (307%).
> >>>This motivates me to add a patch to disable ple when pvspinlock is on.
> >>>probably we can add a hypercall that disables ple in kvm init patch.
> >>>but only problem I see is what if the guests are mixed.
> >>>
> >>>  (i.e one guest has pvspinlock support but other does not. Host
> >>>supports pv)
> >>
> >>How about reintroducing the idea to create per-kvm ple_gap,ple_window
> >>state. We were headed down that road when considering a dynamic window at
> >>one point. Then you can just set a single guest's ple_gap to zero, which
> >>would lead to PLE being disabled for that guest. We could also revisit
> >>the dynamic window then.
> >>
> >Can be done, but lets understand why ple on is such a big problem. Is it
> >possible that ple gap and SPIN_THRESHOLD are not tuned properly?
> >
> 
> The one obvious reason I see is commit awareness inside the guest. for
> under-commit there is no necessity to do PLE, but unfortunately we do.
> 
> atleast we return back immediately in case of potential undercommits,
> but we still incur vmexit delay.
But why do we? If SPIN_THRESHOLD will be short enough (or ple windows
long enough) to not generate PLE exit we will not go into PLE handler
at all, no?
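
Roughly, the picture is (an illustrative sketch only, not the exact
code from the series; "want" stands for the waiter's ticket):

	unsigned count = SPIN_THRESHOLD;

	do {
		if (ACCESS_ONCE(lock->tickets.head) == want)
			return;			/* got the lock */
		cpu_relax();			/* PAUSE - this is what PLE counts */
	} while (--count);

	/*
	 * Only here, after SPIN_THRESHOLD iterations, does the guest take
	 * the pv slowpath and halt.  A PLE exit needs the guest to keep
	 * PAUSE-looping for more than ple_window cycles first, so with a
	 * short SPIN_THRESHOLD (or a long ple_window) the vCPU halts
	 * before the PLE logic ever fires.
	 */
	__ticket_lock_spinning(lock, want);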

--
			Gleb.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-26 16:11             ` Gleb Natapov
  (?)
@ 2013-06-26 17:54             ` Raghavendra K T
  2013-07-09  9:11                 ` Raghavendra K T
  -1 siblings, 1 reply; 192+ messages in thread
From: Raghavendra K T @ 2013-06-26 17:54 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: habanero, Andrew Jones, mingo, jeremy, x86, konrad.wilk, hpa,
	pbonzini, linux-doc, xen-devel, peterz, mtosatti,
	stefano.stabellini, andi, attilio.rao, ouyang, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri

On 06/26/2013 09:41 PM, Gleb Natapov wrote:
> On Wed, Jun 26, 2013 at 07:10:21PM +0530, Raghavendra K T wrote:
>> On 06/26/2013 06:22 PM, Gleb Natapov wrote:
>>> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>>>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>>>> This series replaces the existing paravirtualized spinlock mechanism
>>>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>>>> implementation for both Xen and KVM.
>>>>>>>
>>>>>>> Changes in V9:
>>>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>>>>     causing undercommit degradation (after PLE handler improvement).
>>>>>>> - Added  kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>>>> - Optimized halt exit path to use PLE handler
>>>>>>>
>>>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions to look
>>>>>>> at PLE handler's improvements, various optimizations in PLE handling
>>>>>>> have been tried.
>>>>>>
>>>>>> Sorry for not posting this sooner.  I have tested the v9 pv-ticketlock
>>>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs.  I have
>>>>>> tested these patches with and without PLE, as PLE is still not scalable
>>>>>> with large VMs.
>>>>>>
>>>>>
>>>>> Hi Andrew,
>>>>>
>>>>> Thanks for testing.
>>>>>
>>>>>> System: x3850X5, 40 cores, 80 threads
>>>>>>
>>>>>>
>>>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>>>> ----------------------------------------------------------
>>>>>> 						Total
>>>>>> Configuration				Throughput(MB/s)	Notes
>>>>>>
>>>>>> 3.10-default-ple_on			22945			5% CPU in host kernel, 2% spin_lock in guests
>>>>>> 3.10-default-ple_off			23184			5% CPU in host kernel, 2% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on			22895			5% CPU in host kernel, 2% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off			23051			5% CPU in host kernel, 2% spin_lock in guests
>>>>>> [all 1x results look good here]
>>>>>
>>>>> Yes. The 1x results look too close
>>>>>
>>>>>>
>>>>>>
>>>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>>>> -----------------------------------------------------------
>>>>>> 						Total
>>>>>> Configuration				Throughput		Notes
>>>>>>
>>>>>> 3.10-default-ple_on			 6287			55% CPU  host kernel, 17% spin_lock in guests
>>>>>> 3.10-default-ple_off			 1849			2% CPU in host kernel, 95% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on			 6691			50% CPU in host kernel, 15% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off			16464			8% CPU in host kernel, 33% spin_lock in guests
>>>>>
>>>>> I see 6.426% improvement with ple_on
>>>>> and 161.87% improvement with ple_off. I think this is a very good sign
>>>>>   for the patches
>>>>>
>>>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>>>>   we still off from ideal throughput (somewhere >20000)]
>>>>>>
>>>>>
>>>>> Okay, The ideal throughput you are referring is getting around atleast
>>>>> 80% of 1x throughput for over-commit. Yes we are still far away from
>>>>> there.
>>>>>
>>>>>>
>>>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>>>> ----------------------------------------------------------
>>>>>> 						Total
>>>>>> Configuration				Throughput		Notes
>>>>>>
>>>>>> 3.10-default-ple_on			22736			6% CPU in host kernel, 3% spin_lock in guests
>>>>>> 3.10-default-ple_off			23377			5% CPU in host kernel, 3% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on			22471			6% CPU in host kernel, 3% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off			23445			5% CPU in host kernel, 3% spin_lock in guests
>>>>>> [1x looking fine here]
>>>>>>
>>>>>
>>>>> I see ple_off is little better here.
>>>>>
>>>>>>
>>>>>> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
>>>>>> ----------------------------------------------------------
>>>>>> 						Total
>>>>>> Configuration				Throughput		Notes
>>>>>>
>>>>>> 3.10-default-ple_on			 1965			70% CPU in host kernel, 34% spin_lock in guests		
>>>>>> 3.10-default-ple_off			  226			2% CPU in host kernel, 94% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on			 1942			70% CPU in host kernel, 35% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off			 8003			11% CPU in host kernel, 70% spin_lock in guests
>>>>>> [quite bad all around, but pv-tickets with PLE off the best so far.
>>>>>>   Still quite a bit off from ideal throughput]
>>>>>
>>>>> This is again a remarkable improvement (307%).
>>>>> This motivates me to add a patch to disable ple when pvspinlock is on.
>>>>> probably we can add a hypercall that disables ple in kvm init patch.
>>>>> but only problem I see is what if the guests are mixed.
>>>>>
>>>>>   (i.e one guest has pvspinlock support but other does not. Host
>>>>> supports pv)
>>>>
>>>> How about reintroducing the idea to create per-kvm ple_gap,ple_window
>>>> state. We were headed down that road when considering a dynamic window at
>>>> one point. Then you can just set a single guest's ple_gap to zero, which
>>>> would lead to PLE being disabled for that guest. We could also revisit
>>>> the dynamic window then.
>>>>
>>> Can be done, but lets understand why ple on is such a big problem. Is it
>>> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
>>>
>>
>> The one obvious reason I see is commit awareness inside the guest. for
>> under-commit there is no necessity to do PLE, but unfortunately we do.
>>
>> atleast we return back immediately in case of potential undercommits,
>> but we still incur vmexit delay.
> But why do we? If SPIN_THRESHOLD will be short enough (or ple windows
> long enough) to not generate PLE exit we will not go into PLE handler
> at all, no?
>

Yes, you are right. The dynamic ple window was an attempt to solve it.

The problem is that reducing the SPIN_THRESHOLD results in excess halt
exits in under-commit, while increasing ple_window can sometimes be
counter-productive because it affects other busy-wait constructs such as
flush_tlb AFAIK.
So if we could also have a dynamically changing SPIN_THRESHOLD, that
would be nice.
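
Something along these lines, purely as an illustration (none of these
symbols exist in the posted series, and the heuristic is made up):

	static DEFINE_PER_CPU(unsigned int, spin_threshold) = 1U << 15;

	/* Hypothetical hook, called after a halted vCPU gets kicked. */
	static void spin_threshold_tune(bool kicked_quickly)
	{
		unsigned int *t = this_cpu_ptr(&spin_threshold);

		if (kicked_quickly)
			/* The halt was mostly wasted (under-commit): spin longer. */
			*t = min(*t << 1, 1U << 15);
		else
			/* We really had to wait (over-commit): halt sooner. */
			*t = max(*t >> 1, 1U << 10);
	}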


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-26 15:56           ` Andrew Theurer
@ 2013-07-01  9:30             ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-07-01  9:30 UTC (permalink / raw)
  To: habanero
  Cc: Gleb Natapov, Andrew Jones, mingo, jeremy, x86, konrad.wilk, hpa,
	pbonzini, linux-doc, xen-devel, peterz, mtosatti,
	stefano.stabellini, andi, attilio.rao, ouyang, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri

On 06/26/2013 09:26 PM, Andrew Theurer wrote:
> On Wed, 2013-06-26 at 15:52 +0300, Gleb Natapov wrote:
>> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>>> This series replaces the existing paravirtualized spinlock mechanism
>>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>>> implementation for both Xen and KVM.
>>>>>>
>>>>>> Changes in V9:
>>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>>>     causing undercommit degradation (after PLE handler improvement).
>>>>>> - Added  kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>>> - Optimized halt exit path to use PLE handler
>>>>>>
>>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions to look
>>>>>> at PLE handler's improvements, various optimizations in PLE handling
>>>>>> have been tried.
>>>>>
>>>>> Sorry for not posting this sooner.  I have tested the v9 pv-ticketlock
>>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs.  I have
>>>>> tested these patches with and without PLE, as PLE is still not scalable
>>>>> with large VMs.
>>>>>
>>>>
>>>> Hi Andrew,
>>>>
>>>> Thanks for testing.
>>>>
>>>>> System: x3850X5, 40 cores, 80 threads
>>>>>
>>>>>
>>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>>> ----------------------------------------------------------
>>>>> 						Total
>>>>> Configuration				Throughput(MB/s)	Notes
>>>>>
>>>>> 3.10-default-ple_on			22945			5% CPU in host kernel, 2% spin_lock in guests
>>>>> 3.10-default-ple_off			23184			5% CPU in host kernel, 2% spin_lock in guests
>>>>> 3.10-pvticket-ple_on			22895			5% CPU in host kernel, 2% spin_lock in guests
>>>>> 3.10-pvticket-ple_off			23051			5% CPU in host kernel, 2% spin_lock in guests
>>>>> [all 1x results look good here]
>>>>
>>>> Yes. The 1x results look too close
>>>>
>>>>>
>>>>>
>>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>>> -----------------------------------------------------------
>>>>> 						Total
>>>>> Configuration				Throughput		Notes
>>>>>
>>>>> 3.10-default-ple_on			 6287			55% CPU  host kernel, 17% spin_lock in guests
>>>>> 3.10-default-ple_off			 1849			2% CPU in host kernel, 95% spin_lock in guests
>>>>> 3.10-pvticket-ple_on			 6691			50% CPU in host kernel, 15% spin_lock in guests
>>>>> 3.10-pvticket-ple_off			16464			8% CPU in host kernel, 33% spin_lock in guests
>>>>
>>>> I see 6.426% improvement with ple_on
>>>> and 161.87% improvement with ple_off. I think this is a very good sign
>>>>   for the patches
>>>>
>>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>>>   we still off from ideal throughput (somewhere >20000)]
>>>>>
>>>>
>>>> Okay, The ideal throughput you are referring is getting around atleast
>>>> 80% of 1x throughput for over-commit. Yes we are still far away from
>>>> there.
>>>>
>>>>>
>>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>>> ----------------------------------------------------------
>>>>> 						Total
>>>>> Configuration				Throughput		Notes
>>>>>
>>>>> 3.10-default-ple_on			22736			6% CPU in host kernel, 3% spin_lock in guests
>>>>> 3.10-default-ple_off			23377			5% CPU in host kernel, 3% spin_lock in guests
>>>>> 3.10-pvticket-ple_on			22471			6% CPU in host kernel, 3% spin_lock in guests
>>>>> 3.10-pvticket-ple_off			23445			5% CPU in host kernel, 3% spin_lock in guests
>>>>> [1x looking fine here]
>>>>>
>>>>
>>>> I see ple_off is little better here.
>>>>
>>>>>
>>>>> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
>>>>> ----------------------------------------------------------
>>>>> 						Total
>>>>> Configuration				Throughput		Notes
>>>>>
>>>>> 3.10-default-ple_on			 1965			70% CPU in host kernel, 34% spin_lock in guests		
>>>>> 3.10-default-ple_off			  226			2% CPU in host kernel, 94% spin_lock in guests
>>>>> 3.10-pvticket-ple_on			 1942			70% CPU in host kernel, 35% spin_lock in guests
>>>>> 3.10-pvticket-ple_off			 8003			11% CPU in host kernel, 70% spin_lock in guests
>>>>> [quite bad all around, but pv-tickets with PLE off the best so far.
>>>>>   Still quite a bit off from ideal throughput]
>>>>
>>>> This is again a remarkable improvement (307%).
>>>> This motivates me to add a patch to disable ple when pvspinlock is on.
>>>> probably we can add a hypercall that disables ple in kvm init patch.
>>>> but only problem I see is what if the guests are mixed.
>>>>
>>>>   (i.e one guest has pvspinlock support but other does not. Host
>>>> supports pv)
>>>
>>> How about reintroducing the idea to create per-kvm ple_gap,ple_window
>>> state. We were headed down that road when considering a dynamic window at
>>> one point. Then you can just set a single guest's ple_gap to zero, which
>>> would lead to PLE being disabled for that guest. We could also revisit
>>> the dynamic window then.
>>>
>> Can be done, but lets understand why ple on is such a big problem. Is it
>> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
>
> The biggest problem currently is the double_runqueue_lock from
> yield_to():
> [2x overcommit with 20-vCPU VMs (8 VMs) all running dbench]
>
> perf from host:
>> 28.27%        396402  qemu-system-x86  [kernel.kallsyms]        [k] _raw_spin_lock
>>   4.65%         65667  qemu-system-x86  [kernel.kallsyms]        [k] __schedule
>>   3.87%         54802  qemu-system-x86  [kernel.kallsyms]        [k] finish_task_switch
>>   3.32%         47022  qemu-system-x86  [kernel.kallsyms]        [k] perf_event_task_sched_out
>>   2.84%         40093  qemu-system-x86  [kvm_intel]              [k] vmx_vcpu_run
>>   2.70%         37672  qemu-system-x86  [kernel.kallsyms]        [k] yield_to
>>   2.63%         36859  qemu-system-x86  [kvm]                    [k] kvm_vcpu_on_spin
>>   2.18%         30810  qemu-system-x86  [kvm_intel]              [k] __vmx_load_host_state
>
> A tiny patch [included below] checks if the target task is running
> before double_runqueue_lock (then bails if it is running).  This does
> reduce the lock contention somewhat:
>
> [2x overcommit with 20-vCPU VMs (8 VMs) all running dbench]
>
> perf from host:
>> 20.51%        284829  qemu-system-x86  [kernel.kallsyms]        [k] _raw_spin_lock
>>   5.21%         72949  qemu-system-x86  [kernel.kallsyms]        [k] __schedule
>>   3.70%         51962  qemu-system-x86  [kernel.kallsyms]        [k] finish_task_switch
>>   3.50%         48607  qemu-system-x86  [kvm]                    [k] kvm_vcpu_on_spin
>>   3.22%         45214  qemu-system-x86  [kernel.kallsyms]        [k] perf_event_task_sched_out
>>   3.18%         44546  qemu-system-x86  [kvm_intel]              [k] vmx_vcpu_run
>>   3.13%         43176  qemu-system-x86  [kernel.kallsyms]        [k] yield_to
>>   2.37%         33349  qemu-system-x86  [kvm_intel]              [k] __vmx_load_host_state
>>   2.06%         28503  qemu-system-x86  [kernel.kallsyms]        [k] get_pid_task
>
> So, the lock contention is reduced, and the results improve slightly
> over default PLE/yield_to (in this case 1942 -> 2161, 11%), but this is
> still far off from no PLE at all (8003) and way off from a ideal
> throughput (>20000).
>
> One of the problems, IMO, is that we are chasing our tail and burning
> too much CPU trying to fix the problem, but much of what is done is not
> actually fixing the problem (getting the one vcpu holding the lock to
> run again).  We end up spending a lot of cycles getting a lot of vcpus
> running again, and most of them are not holding that lock.  One
> indication of this is the context switches in the host:
>
> [2x overcommit with 20-vCPU VMs (8 VMs) all running dbench]
>
> pvticket with PLE on:  2579227.76/sec
> pvticket with PLE pff:  233711.30/sec
>
> That's over 10x context switches with PLE on.  All of this is for
> yield_to, but IMO most of vcpus are probably yielding to vcpus which are
> not actually holding the lock.
>
> I would like to see how this changes by tracking the lock holder in the
> pvticket lock structure, and when a vcpu spins beyond a threshold, the
> vcpu makes a hypercall to yield_to a -vCPU-it-specifies-, the one it
> knows to be holding the lock.  Note that PLE is no longer needed for
> this and the PLE detection should probably be disabled when the guest
> has this ability.
>
> Additionally, when other vcpus reach their spin threshold and also
> identify the same target vcpu (the same lock), they may opt to not make
> the yield_to hypercall, if another vcpu made the yield_to hypercall to
> the same target vcpu -very-recently-, thus avoiding a redundant exit and
> yield_to.
>
> Another optimization may be to allow vcpu preemption to be visible
> -inside- the guest.  If a vcpu reaches the spin threshold, then
> identifies the lock holding vcpu, it then checks to see if a preemption
> bit is set for that vcpu.  If it is not set, then it does nothing, and
> if it is, it makes the yield_to hypercall.  This should help for locks
> which really do have a big critical section, and the vcpus really do
> need to spin for a while.
>
> OK, one last thing.  This is a completely different approach at the
> problem:  automatically adjust active vcpus from within a guest, with
> some sort of daemon (vcpud?) to approximate the actual host cpu resource
> available.  The daemon would monitor steal time and hot unplug vcpus to
> reduce steal time to a small percentage. ending up with a slight cpu
> overcommit.  It would also have to online vcpus if more cpu resource is
> made available, again looking at steal time and adding vcpus until steal
> time increases to a small percentage.  I am not sure if the overhead of
> plugging/unplugging is worth it, but I would bet the guest would be far
> more efficient, because (a) PLE and pvticket would be handling much
> lower effective cpu overcommit (let's say ~1.1x) and (b) the guest and
> its applications would have much better scalability because the active
> vcpu count is much lower.
>
> So, let's see what one of those situations would look like, without
> actually writing something to do the unplugging/plugging for us.  Let's
> take the one of the examples above, where we have 8 VMs, each defined
> with 20 vcpus, for 2x overcommit, but let's unplug 9 vcpus in each of
> the VMs, so we end up with a 1.1x effective overcommit (the last test
> below).
>
> [2x overcommit with 20-vCPU VMs (8 VMs) all running dbench]
>
> 							Total
> Configuration						Throughput	Notes
>
> 3.10-default-ple_on					1965		70% CPU in host kernel, 34% spin_lock in guests		
> 3.10-default-ple_off			 		 226		2% CPU in host kernel, 94% spin_lock in guests
> 3.10-pvticket-ple_on			 		1942		70% CPU in host kernel, 35% spin_lock in guests
> 3.10-pvticket-ple_off			 		8003		11% CPU in host kernel, 70% spin_lock in guests
> 3.10-pvticket-ple-on_doublerq-opt	 		2161		68% CPU in host kernel, 33% spin_lock in guests		
> 3.10-pvticket-ple_on_doublerq-opt_9vcpus-unplugged	22534		6% CPU in host kernel,  9% steal in guests, 2% spin_lock in guests
>
> Finally, we get a nice result!  Note this is the lowest spin % in the guest.  The spin_lock in the host is also quite a bit better:
>
>
>> 6.77%         55421  qemu-system-x86  [kernel.kallsyms]        [k] _raw_spin_lock
>> 4.29%         57345  qemu-system-x86  [kvm_intel]              [k] vmx_vcpu_run
>> 3.87%         62049  qemu-system-x86  [kernel.kallsyms]        [k] native_apic_msr_write
>> 2.88%         45272  qemu-system-x86  [kernel.kallsyms]        [k] atomic_dec_and_mutex_lock
>> 2.71%         39276  qemu-system-x86  [kvm]                    [k] vcpu_enter_guest
>> 2.48%         38886  qemu-system-x86  [kernel.kallsyms]        [k] memset
>> 2.22%         18331  qemu-system-x86  [kvm]                    [k] kvm_vcpu_on_spin
>> 2.09%         32628  qemu-system-x86  [kernel.kallsyms]        [k] perf_event_alloc
>
> Also the host context switches dropped significantly (66%), to 38768/sec.
>
> -Andrew
>
>
>
>
>
> Patch to reduce double runqueue lock in yield_to():
>
> Signed-off-by: Andrew Theurer <habanero@linux.vnet.ibm.com>
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 58453b8..795d324 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4454,6 +4454,9 @@ again:
>   		goto out_irq;
>   	}
>
> +	if (task_running(p_rq, p) || p->state)
> +		goto out_irq;
> +
>   	double_rq_lock(rq, p_rq);
>   	while (task_rq(p) != p_rq) {
>   		double_rq_unlock(rq, p_rq);
>
>

Hi Andrew,
I found that this patch indeed helped to gain a little more on top of
the V10 pvspinlock patches in my test.

Here is the result of testing again with a 32-vcpu guest on a 32-core
machine (HT disabled).

patched kernel = 3.10-rc2 + v10 pvspinlock + reducing double rq patch


+---+-----------+-----------+-----------+------------+-----------+
                 ebizzy (rec/sec higher is better)
+---+-----------+-----------+-----------+------------+-----------+
       base      stdev         patched       stdev     %improvement
+---+-----------+-----------+-----------+------------+-----------+
1x   5574.9000   237.4997	  5494.6000   164.7451	  -1.44038
2x   2741.5000   561.3090	  3472.6000    98.6376	  26.66788
3x   2146.2500   216.7718	  2293.6667    56.7872	   6.86857
4x   1663.0000   141.9235	  1856.0000   120.7524	  11.60553
+---+-----------+-----------+-----------+------------+-----------+
+---+-----------+-----------+-----------+------------+-----------+
                 dbench (throughput higher is better)
+---+-----------+-----------+-----------+------------+-----------+
        base      stdev         patched       stdev     %improvement
+---+-----------+-----------+-----------+------------+-----------+
1x   14111.5600   754.4525	 14695.3600   104.6816	   4.13703
2x    2481.6270    71.2665	  2774.8420    58.4845	  11.81543
3x    1510.2483    31.8634	  1539.7300    36.1814	   1.95211
4x    1029.4875    16.9166	  1059.9800    27.4114	   2.96191
+---+-----------+-----------+-----------+------------+-----------+
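
On the directed-yield idea you describe above (track the lock holder
and yield to that specific vCPU), a very rough sketch of how it could
look -- holder_cpu, KVM_HC_YIELD_TO_CPU and both helpers below are
hypothetical, only to make the idea concrete:

	/* Guest: instead of halting after SPIN_THRESHOLD, tell the host
	 * which vCPU the current lock owner recorded in the lock
	 * (holder_cpu is a hypothetical extra field in the lock). */
	static void __ticket_lock_spinning(arch_spinlock_t *lock, __ticket_t want)
	{
		kvm_hypercall1(KVM_HC_YIELD_TO_CPU, ACCESS_ONCE(lock->holder_cpu));
	}

	/* Host: would be called from KVM's hypercall dispatch. */
	static int kvm_pv_yield_to_cpu(struct kvm_vcpu *vcpu, unsigned long cpu)
	{
		struct kvm_vcpu *target = kvm_get_vcpu(vcpu->kvm, cpu);

		if (target && target != vcpu)
			kvm_vcpu_yield_to(target);
		return 0;
	}

That would also let PLE be disabled for such guests, as you note, since
the spinning vCPU already knows exactly whom to yield to.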



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-26 17:54             ` Raghavendra K T
@ 2013-07-09  9:11                 ` Raghavendra K T
  0 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-07-09  9:11 UTC (permalink / raw)
  To: Gleb Natapov, Andrew Jones, mingo, ouyang
  Cc: habanero, jeremy, x86, konrad.wilk, hpa, pbonzini, linux-doc,
	xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, gregkh, agraf, chegu_vinod, torvalds, avi.kivity,
	tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	virtualization, srivatsa.vaddagiri

On 06/26/2013 11:24 PM, Raghavendra K T wrote:
> On 06/26/2013 09:41 PM, Gleb Natapov wrote:
>> On Wed, Jun 26, 2013 at 07:10:21PM +0530, Raghavendra K T wrote:
>>> On 06/26/2013 06:22 PM, Gleb Natapov wrote:
>>>> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>>>>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>>>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>>>>> This series replaces the existing paravirtualized spinlock
>>>>>>>> mechanism
>>>>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>>>>> implementation for both Xen and KVM.
>>>>>>>>
>>>>>>>> Changes in V9:
>>>>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>>>>>     causing undercommit degradation (after PLE handler
>>>>>>>> improvement).
>>>>>>>> - Added  kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>>>>> - Optimized halt exit path to use PLE handler
>>>>>>>>
>>>>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions
>>>>>>>> to look
>>>>>>>> at PLE handler's improvements, various optimizations in PLE
>>>>>>>> handling
>>>>>>>> have been tried.
>>>>>>>
>>>>>>> Sorry for not posting this sooner.  I have tested the v9
>>>>>>> pv-ticketlock
>>>>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs.  I
>>>>>>> have
>>>>>>> tested these patches with and without PLE, as PLE is still not
>>>>>>> scalable
>>>>>>> with large VMs.
>>>>>>>
>>>>>>
>>>>>> Hi Andrew,
>>>>>>
>>>>>> Thanks for testing.
>>>>>>
>>>>>>> System: x3850X5, 40 cores, 80 threads
>>>>>>>
>>>>>>>
>>>>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>>>>> ----------------------------------------------------------
>>>>>>>                         Total
>>>>>>> Configuration                Throughput(MB/s)    Notes
>>>>>>>
>>>>>>> 3.10-default-ple_on            22945            5% CPU in host
>>>>>>> kernel, 2% spin_lock in guests
>>>>>>> 3.10-default-ple_off            23184            5% CPU in host
>>>>>>> kernel, 2% spin_lock in guests
>>>>>>> 3.10-pvticket-ple_on            22895            5% CPU in host
>>>>>>> kernel, 2% spin_lock in guests
>>>>>>> 3.10-pvticket-ple_off            23051            5% CPU in host
>>>>>>> kernel, 2% spin_lock in guests
>>>>>>> [all 1x results look good here]
>>>>>>
>>>>>> Yes. The 1x results look too close
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>>>>> -----------------------------------------------------------
>>>>>>>                         Total
>>>>>>> Configuration                Throughput        Notes
>>>>>>>
>>>>>>> 3.10-default-ple_on             6287            55% CPU  host
>>>>>>> kernel, 17% spin_lock in guests
>>>>>>> 3.10-default-ple_off             1849            2% CPU in host
>>>>>>> kernel, 95% spin_lock in guests
>>>>>>> 3.10-pvticket-ple_on             6691            50% CPU in host
>>>>>>> kernel, 15% spin_lock in guests
>>>>>>> 3.10-pvticket-ple_off            16464            8% CPU in host
>>>>>>> kernel, 33% spin_lock in guests
>>>>>>
>>>>>> I see 6.426% improvement with ple_on
>>>>>> and 161.87% improvement with ple_off. I think this is a very good
>>>>>> sign
>>>>>>   for the patches
>>>>>>
>>>>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>>>>>   we still off from ideal throughput (somewhere >20000)]
>>>>>>>
>>>>>>
>>>>>> Okay, The ideal throughput you are referring is getting around
>>>>>> atleast
>>>>>> 80% of 1x throughput for over-commit. Yes we are still far away from
>>>>>> there.
>>>>>>
>>>>>>>
>>>>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>>>>> ----------------------------------------------------------
>>>>>>>                         Total
>>>>>>> Configuration                Throughput        Notes
>>>>>>>
>>>>>>> 3.10-default-ple_on            22736            6% CPU in host
>>>>>>> kernel, 3% spin_lock in guests
>>>>>>> 3.10-default-ple_off            23377            5% CPU in host
>>>>>>> kernel, 3% spin_lock in guests
>>>>>>> 3.10-pvticket-ple_on            22471            6% CPU in host
>>>>>>> kernel, 3% spin_lock in guests
>>>>>>> 3.10-pvticket-ple_off            23445            5% CPU in host
>>>>>>> kernel, 3% spin_lock in guests
>>>>>>> [1x looking fine here]
>>>>>>>
>>>>>>
>>>>>> I see ple_off is little better here.
>>>>>>
>>>>>>>
>>>>>>> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
>>>>>>> ----------------------------------------------------------
>>>>>>>                         Total
>>>>>>> Configuration                Throughput        Notes
>>>>>>>
>>>>>>> 3.10-default-ple_on             1965            70% CPU in host
>>>>>>> kernel, 34% spin_lock in guests
>>>>>>> 3.10-default-ple_off              226            2% CPU in host
>>>>>>> kernel, 94% spin_lock in guests
>>>>>>> 3.10-pvticket-ple_on             1942            70% CPU in host
>>>>>>> kernel, 35% spin_lock in guests
>>>>>>> 3.10-pvticket-ple_off             8003            11% CPU in host
>>>>>>> kernel, 70% spin_lock in guests
>>>>>>> [quite bad all around, but pv-tickets with PLE off the best so far.
>>>>>>>   Still quite a bit off from ideal throughput]
>>>>>>
>>>>>> This is again a remarkable improvement (307%).
>>>>>> This motivates me to add a patch to disable ple when pvspinlock is
>>>>>> on.
>>>>>> probably we can add a hypercall that disables ple in kvm init patch.
>>>>>> but only problem I see is what if the guests are mixed.
>>>>>>
>>>>>>   (i.e one guest has pvspinlock support but other does not. Host
>>>>>> supports pv)
>>>>>
>>>>> How about reintroducing the idea to create per-kvm ple_gap,ple_window
>>>>> state. We were headed down that road when considering a dynamic
>>>>> window at
>>>>> one point. Then you can just set a single guest's ple_gap to zero,
>>>>> which
>>>>> would lead to PLE being disabled for that guest. We could also revisit
>>>>> the dynamic window then.
>>>>>
>>>> Can be done, but lets understand why ple on is such a big problem.
>>>> Is it
>>>> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
>>>>
>>>
>>> The one obvious reason I see is commit awareness inside the guest. for
>>> under-commit there is no necessity to do PLE, but unfortunately we do.
>>>
>>> atleast we return back immediately in case of potential undercommits,
>>> but we still incur vmexit delay.
>> But why do we? If SPIN_THRESHOLD will be short enough (or ple windows
>> long enough) to not generate PLE exit we will not go into PLE handler
>> at all, no?
>>
>
> Yes. you are right. dynamic ple window was an attempt to solve it.
>
> Probelm is, reducing the SPIN_THRESHOLD is resulting in excess halt
> exits in under-commits and increasing ple_window may be sometimes
> counter productive as it affects other busy-wait constructs such as
> flush_tlb AFAIK.
> So if we could have had a dynamically changing SPIN_THRESHOLD too, that
> would be nice.
>
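(For illustration, a dynamically changing SPIN_THRESHOLD on the guest
side could look roughly like the sketch below. The constants, the
per-CPU variable and update_spin_threshold() are all hypothetical and
not part of this series; the lock slowpath would simply read the
per-CPU value instead of the fixed SPIN_THRESHOLD.)

#include <linux/kernel.h>	/* min/max */
#include <linux/percpu.h>

#define SPIN_THRESHOLD_MIN	(1 << 11)	/* 2k  */
#define SPIN_THRESHOLD_MAX	(1 << 15)	/* 32k */

static DEFINE_PER_CPU(unsigned int, spin_threshold) = SPIN_THRESHOLD_MAX;

/*
 * Hypothetical helper: grow the threshold when a halt turned out to be
 * wasted (we were kicked almost immediately, i.e. undercommit), shrink
 * it when we really needed to sleep for a while (overcommit).
 */
static void update_spin_threshold(bool halt_was_wasted)
{
	unsigned int t = __this_cpu_read(spin_threshold);

	if (halt_was_wasted)
		t = min(t << 1, (unsigned int)SPIN_THRESHOLD_MAX);
	else
		t = max(t >> 1, (unsigned int)SPIN_THRESHOLD_MIN);

	__this_cpu_write(spin_threshold, t);
}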

Gleb, Andrew,
I tested with the global ple window change (similar to what I posted
here https://lkml.org/lkml/2012/11/11/14 ),
but did not see a good result. Maybe it is better to go with a per-VM
ple_window.

Gleb,
Can you elaborate a little more on what you have in mind regarding
per-VM ple_window? (Maintaining part of it as a per-VM variable is
clear to me), but do we have to load it on every guest entry?

I'll try that idea next.
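For what it is worth, the way I picture it is roughly the following
(a sketch only, conceptually living in something like
arch/x86/kvm/vmx.c; kvm->arch.ple_window and the ple_window_shadow copy
in struct vcpu_vmx are assumed new fields, not existing code):

static void vmx_sync_ple_window(struct kvm_vcpu *vcpu)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);
	u32 w = vcpu->kvm->arch.ple_window;	/* assumed per-VM field */

	/* only rewrite the VMCS field when the value really changed */
	if (vmx->ple_window_shadow == w)
		return;

	vmx->ple_window_shadow = w;		/* assumed per-vcpu shadow */
	vmcs_write32(PLE_WINDOW, w);
}

Called from the vcpu_run entry path, this would keep the cost to a
single compare on the common case where the window has not changed.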

Ingo, Gleb,

From the results perspective, Andrew Theurer's and Vinod's test results
are pro-pvspinlock.
Could you please help me understand what would make this a mergeable
candidate?

I agree that Jiannan's Preemptable Lock idea is promising; we could
evaluate that approach, make the best one get into the kernel, and also
carry on the discussion with Jiannan to improve that patch.
Experiments so far have been good on smaller machines, but it is not
scaling to bigger machines.


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-09  9:11                 ` Raghavendra K T
@ 2013-07-10 10:33                   ` Gleb Natapov
  -1 siblings, 0 replies; 192+ messages in thread
From: Gleb Natapov @ 2013-07-10 10:33 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
	hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
	stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri

On Tue, Jul 09, 2013 at 02:41:30PM +0530, Raghavendra K T wrote:
> On 06/26/2013 11:24 PM, Raghavendra K T wrote:
> >On 06/26/2013 09:41 PM, Gleb Natapov wrote:
> >>On Wed, Jun 26, 2013 at 07:10:21PM +0530, Raghavendra K T wrote:
> >>>On 06/26/2013 06:22 PM, Gleb Natapov wrote:
> >>>>On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
> >>>>>On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
> >>>>>>On 06/25/2013 08:20 PM, Andrew Theurer wrote:
> >>>>>>>On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
> >>>>>>>>This series replaces the existing paravirtualized spinlock
> >>>>>>>>mechanism
> >>>>>>>>with a paravirtualized ticketlock mechanism. The series provides
> >>>>>>>>implementation for both Xen and KVM.
> >>>>>>>>
> >>>>>>>>Changes in V9:
> >>>>>>>>- Changed spin_threshold to 32k to avoid excess halt exits that are
> >>>>>>>>    causing undercommit degradation (after PLE handler
> >>>>>>>>improvement).
> >>>>>>>>- Added  kvm_irq_delivery_to_apic (suggested by Gleb)
> >>>>>>>>- Optimized halt exit path to use PLE handler
> >>>>>>>>
> >>>>>>>>V8 of PVspinlock was posted last year. After Avi's suggestions
> >>>>>>>>to look
> >>>>>>>>at PLE handler's improvements, various optimizations in PLE
> >>>>>>>>handling
> >>>>>>>>have been tried.
> >>>>>>>
> >>>>>>>Sorry for not posting this sooner.  I have tested the v9
> >>>>>>>pv-ticketlock
> >>>>>>>patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs.  I
> >>>>>>>have
> >>>>>>>tested these patches with and without PLE, as PLE is still not
> >>>>>>>scalable
> >>>>>>>with large VMs.
> >>>>>>>
> >>>>>>
> >>>>>>Hi Andrew,
> >>>>>>
> >>>>>>Thanks for testing.
> >>>>>>
> >>>>>>>System: x3850X5, 40 cores, 80 threads
> >>>>>>>
> >>>>>>>
> >>>>>>>1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
> >>>>>>>----------------------------------------------------------
> >>>>>>>                        Total
> >>>>>>>Configuration                Throughput(MB/s)    Notes
> >>>>>>>
> >>>>>>>3.10-default-ple_on            22945            5% CPU in host
> >>>>>>>kernel, 2% spin_lock in guests
> >>>>>>>3.10-default-ple_off            23184            5% CPU in host
> >>>>>>>kernel, 2% spin_lock in guests
> >>>>>>>3.10-pvticket-ple_on            22895            5% CPU in host
> >>>>>>>kernel, 2% spin_lock in guests
> >>>>>>>3.10-pvticket-ple_off            23051            5% CPU in host
> >>>>>>>kernel, 2% spin_lock in guests
> >>>>>>>[all 1x results look good here]
> >>>>>>
> >>>>>>Yes. The 1x results look too close
> >>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
> >>>>>>>-----------------------------------------------------------
> >>>>>>>                        Total
> >>>>>>>Configuration                Throughput        Notes
> >>>>>>>
> >>>>>>>3.10-default-ple_on             6287            55% CPU  host
> >>>>>>>kernel, 17% spin_lock in guests
> >>>>>>>3.10-default-ple_off             1849            2% CPU in host
> >>>>>>>kernel, 95% spin_lock in guests
> >>>>>>>3.10-pvticket-ple_on             6691            50% CPU in host
> >>>>>>>kernel, 15% spin_lock in guests
> >>>>>>>3.10-pvticket-ple_off            16464            8% CPU in host
> >>>>>>>kernel, 33% spin_lock in guests
> >>>>>>
> >>>>>>I see 6.426% improvement with ple_on
> >>>>>>and 161.87% improvement with ple_off. I think this is a very good
> >>>>>>sign
> >>>>>>  for the patches
> >>>>>>
> >>>>>>>[PLE hinders pv-ticket improvements, but even with PLE off,
> >>>>>>>  we still off from ideal throughput (somewhere >20000)]
> >>>>>>>
> >>>>>>
> >>>>>>Okay, The ideal throughput you are referring is getting around
> >>>>>>atleast
> >>>>>>80% of 1x throughput for over-commit. Yes we are still far away from
> >>>>>>there.
> >>>>>>
> >>>>>>>
> >>>>>>>1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
> >>>>>>>----------------------------------------------------------
> >>>>>>>                        Total
> >>>>>>>Configuration                Throughput        Notes
> >>>>>>>
> >>>>>>>3.10-default-ple_on            22736            6% CPU in host
> >>>>>>>kernel, 3% spin_lock in guests
> >>>>>>>3.10-default-ple_off            23377            5% CPU in host
> >>>>>>>kernel, 3% spin_lock in guests
> >>>>>>>3.10-pvticket-ple_on            22471            6% CPU in host
> >>>>>>>kernel, 3% spin_lock in guests
> >>>>>>>3.10-pvticket-ple_off            23445            5% CPU in host
> >>>>>>>kernel, 3% spin_lock in guests
> >>>>>>>[1x looking fine here]
> >>>>>>>
> >>>>>>
> >>>>>>I see ple_off is little better here.
> >>>>>>
> >>>>>>>
> >>>>>>>2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
> >>>>>>>----------------------------------------------------------
> >>>>>>>                        Total
> >>>>>>>Configuration                Throughput        Notes
> >>>>>>>
> >>>>>>>3.10-default-ple_on             1965            70% CPU in host
> >>>>>>>kernel, 34% spin_lock in guests
> >>>>>>>3.10-default-ple_off              226            2% CPU in host
> >>>>>>>kernel, 94% spin_lock in guests
> >>>>>>>3.10-pvticket-ple_on             1942            70% CPU in host
> >>>>>>>kernel, 35% spin_lock in guests
> >>>>>>>3.10-pvticket-ple_off             8003            11% CPU in host
> >>>>>>>kernel, 70% spin_lock in guests
> >>>>>>>[quite bad all around, but pv-tickets with PLE off the best so far.
> >>>>>>>  Still quite a bit off from ideal throughput]
> >>>>>>
> >>>>>>This is again a remarkable improvement (307%).
> >>>>>>This motivates me to add a patch to disable ple when pvspinlock is
> >>>>>>on.
> >>>>>>probably we can add a hypercall that disables ple in kvm init patch.
> >>>>>>but only problem I see is what if the guests are mixed.
> >>>>>>
> >>>>>>  (i.e one guest has pvspinlock support but other does not. Host
> >>>>>>supports pv)
> >>>>>
> >>>>>How about reintroducing the idea to create per-kvm ple_gap,ple_window
> >>>>>state. We were headed down that road when considering a dynamic
> >>>>>window at
> >>>>>one point. Then you can just set a single guest's ple_gap to zero,
> >>>>>which
> >>>>>would lead to PLE being disabled for that guest. We could also revisit
> >>>>>the dynamic window then.
> >>>>>
> >>>>Can be done, but lets understand why ple on is such a big problem.
> >>>>Is it
> >>>>possible that ple gap and SPIN_THRESHOLD are not tuned properly?
> >>>>
> >>>
> >>>The one obvious reason I see is commit awareness inside the guest. for
> >>>under-commit there is no necessity to do PLE, but unfortunately we do.
> >>>
> >>>atleast we return back immediately in case of potential undercommits,
> >>>but we still incur vmexit delay.
> >>But why do we? If SPIN_THRESHOLD will be short enough (or ple windows
> >>long enough) to not generate PLE exit we will not go into PLE handler
> >>at all, no?
> >>
> >
> >Yes. you are right. dynamic ple window was an attempt to solve it.
> >
> >Probelm is, reducing the SPIN_THRESHOLD is resulting in excess halt
> >exits in under-commits and increasing ple_window may be sometimes
> >counter productive as it affects other busy-wait constructs such as
> >flush_tlb AFAIK.
> >So if we could have had a dynamically changing SPIN_THRESHOLD too, that
> >would be nice.
> >
> 
> Gleb, Andrew,
> I tested with the global ple window change (similar to what I posted
> here https://lkml.org/lkml/2012/11/11/14 ),
This does not look global. It changes the PLE window per vCPU.

> But did not see good result. May be it is good to go with per VM
> ple_window.
> 
> Gleb,
> Can you elaborate little more on what you have in mind regarding per
> VM ple_window. (maintaining part of it as a per vm variable is clear
> to
>  me), but is it that we have to load that every time of guest entry?
> 
Only when it changes, and that shouldn't be too often, no?

> I 'll try that idea next.
> 
> Ingo, Gleb,
> 
> From the results perspective, Andrew Theurer, Vinod's test results are
> pro-pvspinlock.
> Could you please help me to know what will make it a mergeable
> candidate?.
> 
I need to spend more time reviewing it :) The problem with PV interfaces
is that they are easy to add but hard to get rid of if a better solution
(HW or otherwise) appears.

> I agree that Jiannan's Preemptable Lock idea is promising and we could
> evaluate that  approach, and make the best one get into kernel and also
> will carry on discussion with Jiannan to improve that patch.
That would be great. The work is stalled from what I can tell.

> Experiments so far have been good for smaller machine but it is not
> scaling for bigger machines.

--
			Gleb.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-10 10:33                   ` Gleb Natapov
@ 2013-07-10 10:40                     ` Peter Zijlstra
  -1 siblings, 0 replies; 192+ messages in thread
From: Peter Zijlstra @ 2013-07-10 10:40 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Raghavendra K T, Andrew Jones, mingo, ouyang, habanero, jeremy,
	x86, konrad.wilk, hpa, pbonzini, linux-doc, xen-devel, mtosatti,
	stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri

On Wed, Jul 10, 2013 at 01:33:25PM +0300, Gleb Natapov wrote:

Here's an idea, trim the damn email ;-) -- not only directed at gleb.

> > Ingo, Gleb,
> > 
> > From the results perspective, Andrew Theurer, Vinod's test results are
> > pro-pvspinlock.
> > Could you please help me to know what will make it a mergeable
> > candidate?.
> > 
> I need to spend more time reviewing it :) The problem with PV interfaces
> is that they are easy to add but hard to get rid of if better solution
> (HW or otherwise) appears.

How so? Just make sure the registration for the PV interface is optional; that
is, allow it to fail. A guest that fails the PV setup will either have to try
another PV interface or fall back to 'native'.
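Something like the following guest-side setup (conceptually in
arch/x86/kernel/kvm.c) would give exactly that behaviour. This is a
sketch: the feature-bit name and the pv_lock_ops members are written
from memory and may not match the series verbatim.

void __init kvm_spinlock_init(void)
{
	if (!kvm_para_available())
		return;		/* not running on KVM: keep native ticketlocks */

	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
		return;		/* host lacks the PV interface: stay native */

	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
	pv_lock_ops.unlock_kick = kvm_unlock_kick;
}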

> > I agree that Jiannan's Preemptable Lock idea is promising and we could
> > evaluate that  approach, and make the best one get into kernel and also
> > will carry on discussion with Jiannan to improve that patch.
> That would be great. The work is stalled from what I can tell.

I absolutely hated that stuff because it wrecked the native code.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-10 10:40                     ` Peter Zijlstra
@ 2013-07-10 10:47                       ` Gleb Natapov
  -1 siblings, 0 replies; 192+ messages in thread
From: Gleb Natapov @ 2013-07-10 10:47 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Raghavendra K T, Andrew Jones, mingo, ouyang, habanero, jeremy,
	x86, konrad.wilk, hpa, pbonzini, linux-doc, xen-devel, mtosatti,
	stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri

On Wed, Jul 10, 2013 at 12:40:47PM +0200, Peter Zijlstra wrote:
> On Wed, Jul 10, 2013 at 01:33:25PM +0300, Gleb Natapov wrote:
> 
> Here's an idea, trim the damn email ;-) -- not only directed at gleb.
> 
Good idea.

> > > Ingo, Gleb,
> > > 
> > > From the results perspective, Andrew Theurer, Vinod's test results are
> > > pro-pvspinlock.
> > > Could you please help me to know what will make it a mergeable
> > > candidate?.
> > > 
> > I need to spend more time reviewing it :) The problem with PV interfaces
> > is that they are easy to add but hard to get rid of if better solution
> > (HW or otherwise) appears.
> 
> How so? Just make sure the registration for the PV interface is optional; that
> is, allow it to fail. A guest that fails the PV setup will either have to try
> another PV interface or fall back to 'native'.
> 
We have to carry the PV interface around for live migration purposes;
it cannot disappear under a running guest.

> > > I agree that Jiannan's Preemptable Lock idea is promising and we could
> > > evaluate that  approach, and make the best one get into kernel and also
> > > will carry on discussion with Jiannan to improve that patch.
> > That would be great. The work is stalled from what I can tell.
> 
> I absolutely hated that stuff because it wrecked the native code.
Yes, the idea was to hide it from native code behind PV hooks.

--
			Gleb.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-10 10:33                   ` Gleb Natapov
@ 2013-07-10 11:24                   ` Raghavendra K T
  2013-07-10 11:41                       ` Gleb Natapov
  -1 siblings, 1 reply; 192+ messages in thread
From: Raghavendra K T @ 2013-07-10 11:24 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
	hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
	stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri

On 07/10/2013 04:03 PM, Gleb Natapov wrote:
> On Tue, Jul 09, 2013 at 02:41:30PM +0530, Raghavendra K T wrote:
>> On 06/26/2013 11:24 PM, Raghavendra K T wrote:
>>> On 06/26/2013 09:41 PM, Gleb Natapov wrote:
>>>> On Wed, Jun 26, 2013 at 07:10:21PM +0530, Raghavendra K T wrote:
>>>>> On 06/26/2013 06:22 PM, Gleb Natapov wrote:
>>>>>> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>>>>>>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>>>>>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>>>>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>>>>>>> This series replaces the existing paravirtualized spinlock
>>>>>>>>>> mechanism
>>>>>>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>>>>>>> implementation for both Xen and KVM.
>>>>>>>>>>
>>>>>>>>>> Changes in V9:
>>>>>>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>>>>>>>     causing undercommit degradation (after PLE handler
>>>>>>>>>> improvement).
>>>>>>>>>> - Added  kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>>>>>>> - Optimized halt exit path to use PLE handler
>>>>>>>>>>
>>>>>>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions
>>>>>>>>>> to look
>>>>>>>>>> at PLE handler's improvements, various optimizations in PLE
>>>>>>>>>> handling
>>>>>>>>>> have been tried.
>>>>>>>>>
>>>>>>>>> Sorry for not posting this sooner.  I have tested the v9
>>>>>>>>> pv-ticketlock
>>>>>>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs.  I
>>>>>>>>> have
>>>>>>>>> tested these patches with and without PLE, as PLE is still not
>>>>>>>>> scalable
>>>>>>>>> with large VMs.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Hi Andrew,
>>>>>>>>
>>>>>>>> Thanks for testing.
>>>>>>>>
>>>>>>>>> System: x3850X5, 40 cores, 80 threads
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>>>>>>> ----------------------------------------------------------
>>>>>>>>>                         Total
>>>>>>>>> Configuration                Throughput(MB/s)    Notes
>>>>>>>>>
>>>>>>>>> 3.10-default-ple_on            22945            5% CPU in host
>>>>>>>>> kernel, 2% spin_lock in guests
>>>>>>>>> 3.10-default-ple_off            23184            5% CPU in host
>>>>>>>>> kernel, 2% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_on            22895            5% CPU in host
>>>>>>>>> kernel, 2% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_off            23051            5% CPU in host
>>>>>>>>> kernel, 2% spin_lock in guests
>>>>>>>>> [all 1x results look good here]
>>>>>>>>
>>>>>>>> Yes. The 1x results look too close
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>>>>>>> -----------------------------------------------------------
>>>>>>>>>                         Total
>>>>>>>>> Configuration                Throughput        Notes
>>>>>>>>>
>>>>>>>>> 3.10-default-ple_on             6287            55% CPU  host
>>>>>>>>> kernel, 17% spin_lock in guests
>>>>>>>>> 3.10-default-ple_off             1849            2% CPU in host
>>>>>>>>> kernel, 95% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_on             6691            50% CPU in host
>>>>>>>>> kernel, 15% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_off            16464            8% CPU in host
>>>>>>>>> kernel, 33% spin_lock in guests
>>>>>>>>
>>>>>>>> I see 6.426% improvement with ple_on
>>>>>>>> and 161.87% improvement with ple_off. I think this is a very good
>>>>>>>> sign
>>>>>>>>   for the patches
>>>>>>>>
>>>>>>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>>>>>>>   we still off from ideal throughput (somewhere >20000)]
>>>>>>>>>
>>>>>>>>
>>>>>>>> Okay, The ideal throughput you are referring is getting around
>>>>>>>> atleast
>>>>>>>> 80% of 1x throughput for over-commit. Yes we are still far away from
>>>>>>>> there.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>>>>>>> ----------------------------------------------------------
>>>>>>>>>                         Total
>>>>>>>>> Configuration                Throughput        Notes
>>>>>>>>>
>>>>>>>>> 3.10-default-ple_on            22736            6% CPU in host
>>>>>>>>> kernel, 3% spin_lock in guests
>>>>>>>>> 3.10-default-ple_off            23377            5% CPU in host
>>>>>>>>> kernel, 3% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_on            22471            6% CPU in host
>>>>>>>>> kernel, 3% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_off            23445            5% CPU in host
>>>>>>>>> kernel, 3% spin_lock in guests
>>>>>>>>> [1x looking fine here]
>>>>>>>>>
>>>>>>>>
>>>>>>>> I see ple_off is little better here.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
>>>>>>>>> ----------------------------------------------------------
>>>>>>>>>                         Total
>>>>>>>>> Configuration                Throughput        Notes
>>>>>>>>>
>>>>>>>>> 3.10-default-ple_on             1965            70% CPU in host
>>>>>>>>> kernel, 34% spin_lock in guests
>>>>>>>>> 3.10-default-ple_off              226            2% CPU in host
>>>>>>>>> kernel, 94% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_on             1942            70% CPU in host
>>>>>>>>> kernel, 35% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_off             8003            11% CPU in host
>>>>>>>>> kernel, 70% spin_lock in guests
>>>>>>>>> [quite bad all around, but pv-tickets with PLE off the best so far.
>>>>>>>>>   Still quite a bit off from ideal throughput]
>>>>>>>>
>>>>>>>> This is again a remarkable improvement (307%).
>>>>>>>> This motivates me to add a patch to disable ple when pvspinlock is
>>>>>>>> on.
>>>>>>>> probably we can add a hypercall that disables ple in kvm init patch.
>>>>>>>> but only problem I see is what if the guests are mixed.
>>>>>>>>
>>>>>>>>   (i.e one guest has pvspinlock support but other does not. Host
>>>>>>>> supports pv)
>>>>>>>
>>>>>>> How about reintroducing the idea to create per-kvm ple_gap,ple_window
>>>>>>> state. We were headed down that road when considering a dynamic
>>>>>>> window at
>>>>>>> one point. Then you can just set a single guest's ple_gap to zero,
>>>>>>> which
>>>>>>> would lead to PLE being disabled for that guest. We could also revisit
>>>>>>> the dynamic window then.
>>>>>>>
>>>>>> Can be done, but lets understand why ple on is such a big problem.
>>>>>> Is it
>>>>>> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
>>>>>>
>>>>>
>>>>> The one obvious reason I see is commit awareness inside the guest. for
>>>>> under-commit there is no necessity to do PLE, but unfortunately we do.
>>>>>
>>>>> atleast we return back immediately in case of potential undercommits,
>>>>> but we still incur vmexit delay.
>>>> But why do we? If SPIN_THRESHOLD will be short enough (or ple windows
>>>> long enough) to not generate PLE exit we will not go into PLE handler
>>>> at all, no?
>>>>
>>>
>>> Yes. you are right. dynamic ple window was an attempt to solve it.
>>>
>>> Probelm is, reducing the SPIN_THRESHOLD is resulting in excess halt
>>> exits in under-commits and increasing ple_window may be sometimes
>>> counter productive as it affects other busy-wait constructs such as
>>> flush_tlb AFAIK.
>>> So if we could have had a dynamically changing SPIN_THRESHOLD too, that
>>> would be nice.
>>>
>>
>> Gleb, Andrew,
>> I tested with the global ple window change (similar to what I posted
>> here https://lkml.org/lkml/2012/11/11/14 ),
> This does not look global. It changes PLE per vcpu.
>
>> But did not see good result. May be it is good to go with per VM
>> ple_window.
>>
>> Gleb,
>> Can you elaborate little more on what you have in mind regarding per
>> VM ple_window. (maintaining part of it as a per vm variable is clear
>> to
>>   me), but is it that we have to load that every time of guest entry?
>>
> Only when it changes, shouldn't be to often no?
>
>> I 'll try that idea next.
>>
>> Ingo, Gleb,
>>
>>  From the results perspective, Andrew Theurer, Vinod's test results are
>> pro-pvspinlock.
>> Could you please help me to know what will make it a mergeable
>> candidate?.
>>
> I need to spend more time reviewing it :) The problem with PV interfaces
> is that they are easy to add but hard to get rid of if better solution
> (HW or otherwise) appears.

In fact, Avi had acked the whole V8 series, but it was held back to see
how the PLE improvements would affect it.

The only additions since that series have been:
1. tuning SPIN_THRESHOLD to 32k (from 2k), and
2. having the halt handler call vcpu_on_spin to take advantage of the
PLE improvements (this could also go into kvm as an independent patch);
see the sketch just below.
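To illustrate point 2, the shape of the change is roughly the following
(hypothetical placement and helper name; the actual patch wires this
into the existing halt-exit path):

/*
 * Sketch only: when a vCPU halts (e.g. because the PV slowpath put it
 * to sleep), first reuse the PLE handler's directed yield so a
 * preempted lock holder or the next ticket waiter gets to run, then
 * block as usual.
 */
static void kvm_halt_via_ple(struct kvm_vcpu *vcpu)
{
	kvm_vcpu_on_spin(vcpu);		/* same directed yield as a PLE exit */
	kvm_vcpu_block(vcpu);		/* then actually go to sleep */
}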

The rationale for making SPIN_THRESHOLD 32k needs a longer explanation.
Before the PLE improvements, as you know, the kvm undercommit scenario
was much worse with PLE enabled than with PLE disabled.
The pvspinlock patches behaved equally badly in undercommit. Both had a
similar cause, so in the end there was no degradation w.r.t. base.

The reason for the bad performance in the PLE case was unneeded vcpu
iteration in the PLE handler, resulting in a high rate of yield_to
calls and double run-queue locking.
With pvspinlock applied, the same villain role was played by excessive
halt exits.

But after the PLE handler was improved, we needed to throttle the
unnecessary halts in undercommit for pvspinlock to be on par with the
1x result.

>
>> I agree that Jiannan's Preemptable Lock idea is promising and we could
>> evaluate that  approach, and make the best one get into kernel and also
>> will carry on discussion with Jiannan to improve that patch.
> That would be great. The work is stalled from what I can tell.

Jiannan is trying to improve that, and I am also helping with testing
etc. internally.
Despite being a great idea, the hardcoded TIMEOUT used to delay checking
the lock availability is somehow not working well, and we are still
seeing some softlockups. AFAIK, Linus also hated the TIMEOUT idea in
Rik's spinlock backoff patches because it is difficult to tune on bare
metal and can have some adverse effect on virtualization too.
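To make the TIMEOUT concern concrete, the pattern under discussion is
essentially the following (an illustration only, not Jiannan's or Rik's
actual code):

#include <linux/compiler.h>		/* ACCESS_ONCE */
#include <asm/processor.h>		/* cpu_relax */
#include <asm/spinlock_types.h>		/* arch_spinlock_t, __ticket_t */

#define TIMEOUT	(1U << 14)		/* the hard-to-tune constant */

static void wait_for_ticket(arch_spinlock_t *lock, __ticket_t mine)
{
	unsigned int loops = 0;

	while (ACCESS_ONCE(lock->tickets.head) != mine) {
		cpu_relax();
		if (++loops == TIMEOUT) {
			/*
			 * Only after TIMEOUT spins do we check for a
			 * preempted holder / ask the host for help; too
			 * small a value hurts bare metal, too large a
			 * value hurts the virtualized case.
			 */
			loops = 0;
		}
	}
}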


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-10 10:33                   ` Gleb Natapov
  (?)
  (?)
@ 2013-07-10 11:24                   ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-07-10 11:24 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: jeremy, gregkh, kvm, linux-doc, peterz, riel, virtualization,
	andi, hpa, stefano.stabellini, xen-devel, x86, mingo, habanero,
	Andrew Jones, konrad.wilk, ouyang, avi.kivity, tglx, chegu_vinod,
	linux-kernel, srivatsa.vaddagiri, attilio.rao, pbonzini,
	torvalds, stephan.diestelhorst

On 07/10/2013 04:03 PM, Gleb Natapov wrote:
> On Tue, Jul 09, 2013 at 02:41:30PM +0530, Raghavendra K T wrote:
>> On 06/26/2013 11:24 PM, Raghavendra K T wrote:
>>> On 06/26/2013 09:41 PM, Gleb Natapov wrote:
>>>> On Wed, Jun 26, 2013 at 07:10:21PM +0530, Raghavendra K T wrote:
>>>>> On 06/26/2013 06:22 PM, Gleb Natapov wrote:
>>>>>> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>>>>>>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>>>>>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>>>>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>>>>>>> This series replaces the existing paravirtualized spinlock
>>>>>>>>>> mechanism
>>>>>>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>>>>>>> implementation for both Xen and KVM.
>>>>>>>>>>
>>>>>>>>>> Changes in V9:
>>>>>>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>>>>>>>     causing undercommit degradation (after PLE handler
>>>>>>>>>> improvement).
>>>>>>>>>> - Added  kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>>>>>>> - Optimized halt exit path to use PLE handler
>>>>>>>>>>
>>>>>>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions
>>>>>>>>>> to look
>>>>>>>>>> at PLE handler's improvements, various optimizations in PLE
>>>>>>>>>> handling
>>>>>>>>>> have been tried.
>>>>>>>>>
>>>>>>>>> Sorry for not posting this sooner.  I have tested the v9
>>>>>>>>> pv-ticketlock
>>>>>>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs.  I
>>>>>>>>> have
>>>>>>>>> tested these patches with and without PLE, as PLE is still not
>>>>>>>>> scalable
>>>>>>>>> with large VMs.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Hi Andrew,
>>>>>>>>
>>>>>>>> Thanks for testing.
>>>>>>>>
>>>>>>>>> System: x3850X5, 40 cores, 80 threads
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>>>>>>> ----------------------------------------------------------
>>>>>>>>>                         Total
>>>>>>>>> Configuration                Throughput(MB/s)    Notes
>>>>>>>>>
>>>>>>>>> 3.10-default-ple_on            22945            5% CPU in host
>>>>>>>>> kernel, 2% spin_lock in guests
>>>>>>>>> 3.10-default-ple_off            23184            5% CPU in host
>>>>>>>>> kernel, 2% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_on            22895            5% CPU in host
>>>>>>>>> kernel, 2% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_off            23051            5% CPU in host
>>>>>>>>> kernel, 2% spin_lock in guests
>>>>>>>>> [all 1x results look good here]
>>>>>>>>
>>>>>>>> Yes. The 1x results look too close
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>>>>>>> -----------------------------------------------------------
>>>>>>>>>                         Total
>>>>>>>>> Configuration                Throughput        Notes
>>>>>>>>>
>>>>>>>>> 3.10-default-ple_on             6287            55% CPU  host
>>>>>>>>> kernel, 17% spin_lock in guests
>>>>>>>>> 3.10-default-ple_off             1849            2% CPU in host
>>>>>>>>> kernel, 95% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_on             6691            50% CPU in host
>>>>>>>>> kernel, 15% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_off            16464            8% CPU in host
>>>>>>>>> kernel, 33% spin_lock in guests
>>>>>>>>
>>>>>>>> I see 6.426% improvement with ple_on
>>>>>>>> and 161.87% improvement with ple_off. I think this is a very good
>>>>>>>> sign
>>>>>>>>   for the patches
>>>>>>>>
>>>>>>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>>>>>>>   we still off from ideal throughput (somewhere >20000)]
>>>>>>>>>
>>>>>>>>
>>>>>>>> Okay, The ideal throughput you are referring is getting around
>>>>>>>> atleast
>>>>>>>> 80% of 1x throughput for over-commit. Yes we are still far away from
>>>>>>>> there.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>>>>>>> ----------------------------------------------------------
>>>>>>>>>                         Total
>>>>>>>>> Configuration                Throughput        Notes
>>>>>>>>>
>>>>>>>>> 3.10-default-ple_on            22736            6% CPU in host
>>>>>>>>> kernel, 3% spin_lock in guests
>>>>>>>>> 3.10-default-ple_off            23377            5% CPU in host
>>>>>>>>> kernel, 3% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_on            22471            6% CPU in host
>>>>>>>>> kernel, 3% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_off            23445            5% CPU in host
>>>>>>>>> kernel, 3% spin_lock in guests
>>>>>>>>> [1x looking fine here]
>>>>>>>>>
>>>>>>>>
>>>>>>>> I see ple_off is little better here.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
>>>>>>>>> ----------------------------------------------------------
>>>>>>>>>                         Total
>>>>>>>>> Configuration                Throughput        Notes
>>>>>>>>>
>>>>>>>>> 3.10-default-ple_on             1965            70% CPU in host
>>>>>>>>> kernel, 34% spin_lock in guests
>>>>>>>>> 3.10-default-ple_off              226            2% CPU in host
>>>>>>>>> kernel, 94% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_on             1942            70% CPU in host
>>>>>>>>> kernel, 35% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_off             8003            11% CPU in host
>>>>>>>>> kernel, 70% spin_lock in guests
>>>>>>>>> [quite bad all around, but pv-tickets with PLE off the best so far.
>>>>>>>>>   Still quite a bit off from ideal throughput]
>>>>>>>>
>>>>>>>> This is again a remarkable improvement (307%).
>>>>>>>> This motivates me to add a patch to disable ple when pvspinlock is
>>>>>>>> on.
>>>>>>>> probably we can add a hypercall that disables ple in kvm init patch.
>>>>>>>> but only problem I see is what if the guests are mixed.
>>>>>>>>
>>>>>>>>   (i.e one guest has pvspinlock support but other does not. Host
>>>>>>>> supports pv)
>>>>>>>
>>>>>>> How about reintroducing the idea to create per-kvm ple_gap,ple_window
>>>>>>> state. We were headed down that road when considering a dynamic
>>>>>>> window at
>>>>>>> one point. Then you can just set a single guest's ple_gap to zero,
>>>>>>> which
>>>>>>> would lead to PLE being disabled for that guest. We could also revisit
>>>>>>> the dynamic window then.
>>>>>>>
>>>>>> Can be done, but lets understand why ple on is such a big problem.
>>>>>> Is it
>>>>>> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
>>>>>>
>>>>>
>>>>> The one obvious reason I see is commit awareness inside the guest. for
>>>>> under-commit there is no necessity to do PLE, but unfortunately we do.
>>>>>
>>>>> atleast we return back immediately in case of potential undercommits,
>>>>> but we still incur vmexit delay.
>>>> But why do we? If SPIN_THRESHOLD will be short enough (or ple windows
>>>> long enough) to not generate PLE exit we will not go into PLE handler
>>>> at all, no?
>>>>
>>>
>>> Yes. you are right. dynamic ple window was an attempt to solve it.
>>>
>>> Probelm is, reducing the SPIN_THRESHOLD is resulting in excess halt
>>> exits in under-commits and increasing ple_window may be sometimes
>>> counter productive as it affects other busy-wait constructs such as
>>> flush_tlb AFAIK.
>>> So if we could have had a dynamically changing SPIN_THRESHOLD too, that
>>> would be nice.
>>>
>>
>> Gleb, Andrew,
>> I tested with the global ple window change (similar to what I posted
>> here: https://lkml.org/lkml/2012/11/11/14 ),
> This does not look global. It changes PLE per vcpu.
>
>> but did not see a good result. Maybe it is better to go with a per-VM
>> ple_window.
>>
>> Gleb,
>> Can you elaborate a little more on what you have in mind regarding a
>> per-VM ple_window? (Maintaining part of it as a per-VM variable is clear
>> to me.) Is it that we have to load it on every guest entry?
>>
> Only when it changes, and that shouldn't be too often, no?
>
>> I'll try that idea next.
>>
>> Ingo, Gleb,
>>
>>  From the results perspective, Andrew Theurer's and Vinod's test results
>> are pro-pvspinlock.
>> Could you please help me understand what will make it a mergeable
>> candidate?
>>
> I need to spend more time reviewing it :) The problem with PV interfaces
> is that they are easy to add but hard to get rid of if a better solution
> (HW or otherwise) appears.

In fact, Avi had acked the whole V8 series, but it was held back to see how
the PLE improvements would affect it.

The only additions since that series have been:
1. tuning the SPIN_THRESHOLD to 32k (from 2k), and
2. the halt handler now calls vcpu_on_spin to take advantage of the PLE
improvements (this can also go into kvm as an independent patch).

The rationale for making SPIN_THRESHOLD 32k needs a longer explanation.
Before the PLE improvements, as you know, the kvm undercommit scenario was
much worse with PLE enabled (compared to PLE disabled).
The pvspinlock patches behaved equally badly in undercommit. Both had a
similar cause, so in the end there was no degradation w.r.t. base.

The reason for the bad performance in the PLE case was unneeded vcpu
iteration in the PLE handler, resulting in a high number of yield_to calls
and double run-queue locks.
With pvspinlock applied, the same villain role was played by excessive halt
exits.

But after the PLE handler improved, we needed to throttle unnecessary halts
in undercommit for pvspinlock to stay on par with the 1x result.
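
To make those two additions concrete, here is a rough sketch (not the actual
patches; kernel-internal symbols such as arch_spinlock_t, kvm_vcpu_on_spin()
and kvm_vcpu_block() are assumed, and kvm_pv_halt_sketch() is just an
illustrative name for the halt exit path):

/*
 * Guest side: spin for SPIN_THRESHOLD iterations (now 1 << 15 instead of
 * 1 << 11) before giving up and halting, so undercommit rarely pays the
 * halt-exit cost.
 */
#define SPIN_THRESHOLD	(1 << 15)

static void kvm_lock_spinning_sketch(arch_spinlock_t *lock, __ticket_t want)
{
	unsigned int loop = SPIN_THRESHOLD;

	while (loop--) {
		if (ACCESS_ONCE(lock->tickets.head) == want)
			return;		/* got the lock while still spinning */
		cpu_relax();
	}
	halt();				/* expensive: causes a halt exit */
}

/*
 * Host side: on a halt exit, first run the (now improved) directed-yield
 * logic of the PLE handler so a preempted lock holder gets to run, and only
 * block the vcpu if it is really idle.
 */
static int kvm_pv_halt_sketch(struct kvm_vcpu *vcpu)
{
	kvm_vcpu_on_spin(vcpu);		/* reuse the PLE handler's yield_to logic */
	if (!kvm_arch_vcpu_runnable(vcpu))
		kvm_vcpu_block(vcpu);
	return 1;
}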

>
>> I agree that Jiannan's Preemptable Lock idea is promising and we could
>> evaluate that  approach, and make the best one get into kernel and also
>> will carry on discussion with Jiannan to improve that patch.
> That would be great. The work is stalled from what I can tell.

Jiannan is trying to improve it, and I am also helping with testing etc.
internally.
Despite being a great idea, the hardcoded TIMEOUT used to delay checking
the lock availability is somehow not working well, and we are still seeing
some softlockups. AFAIK, Linus also disliked the TIMEOUT idea in Rik's
spinlock backoff patches because it is difficult to tune on bare metal and
can have adverse effects on virtualization too.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-10 10:47                       ` Gleb Natapov
@ 2013-07-10 11:28                         ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-07-10 11:28 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Peter Zijlstra, Andrew Jones, mingo, ouyang, habanero, jeremy,
	x86, konrad.wilk, hpa, pbonzini, linux-doc, xen-devel, mtosatti,
	stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri

On 07/10/2013 04:17 PM, Gleb Natapov wrote:
> On Wed, Jul 10, 2013 at 12:40:47PM +0200, Peter Zijlstra wrote:
>> On Wed, Jul 10, 2013 at 01:33:25PM +0300, Gleb Natapov wrote:
>>
>> Here's an idea, trim the damn email ;-) -- not only directed at gleb.
>>
> Good idea.
>
>>>> Ingo, Gleb,
>>>>
>>>>  From the results perspective, Andrew Theurer, Vinod's test results are
>>>> pro-pvspinlock.
>>>> Could you please help me to know what will make it a mergeable
>>>> candidate?.
>>>>
>>> I need to spend more time reviewing it :) The problem with PV interfaces
>>> is that they are easy to add but hard to get rid of if better solution
>>> (HW or otherwise) appears.
>>
>> How so? Just make sure the registration for the PV interface is optional; that
>> is, allow it to fail. A guest that fails the PV setup will either have to try
>> another PV interface or fall back to 'native'.
>>
> We have to carry PV around for live migration purposes. PV interface
> cannot disappear under a running guest.
>

IIRC, the only requirement was that the running state of the vcpu be
retained. This was addressed by
[PATCH RFC V10 13/18] kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl
to aid migration.

I would like to know more in case I have missed something here.
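
For reference, the idea in that patch is roughly the following (only a
sketch, assuming the field name vcpu->arch.pv.pv_unhalted used by the
series):

/*
 * If a halted vcpu has already been kicked (pv_unhalted set), report it as
 * RUNNABLE so the pending kick is not lost across save/restore or live
 * migration.
 */
int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
				    struct kvm_mp_state *mp_state)
{
	if (vcpu->arch.mp_state == KVM_MP_STATE_HALTED &&
	    vcpu->arch.pv.pv_unhalted)
		mp_state->mp_state = KVM_MP_STATE_RUNNABLE;
	else
		mp_state->mp_state = vcpu->arch.mp_state;

	return 0;
}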

>>>> I agree that Jiannan's Preemptable Lock idea is promising and we could
>>>> evaluate that  approach, and make the best one get into kernel and also
>>>> will carry on discussion with Jiannan to improve that patch.
>>> That would be great. The work is stalled from what I can tell.
>>
>> I absolutely hated that stuff because it wrecked the native code.
> Yes, the idea was to hide it from native code behind PV hooks.
>


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-10 11:28                         ` Raghavendra K T
@ 2013-07-10 11:29                           ` Gleb Natapov
  -1 siblings, 0 replies; 192+ messages in thread
From: Gleb Natapov @ 2013-07-10 11:29 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: Peter Zijlstra, Andrew Jones, mingo, ouyang, habanero, jeremy,
	x86, konrad.wilk, hpa, pbonzini, linux-doc, xen-devel, mtosatti,
	stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri

On Wed, Jul 10, 2013 at 04:58:29PM +0530, Raghavendra K T wrote:
> On 07/10/2013 04:17 PM, Gleb Natapov wrote:
> >On Wed, Jul 10, 2013 at 12:40:47PM +0200, Peter Zijlstra wrote:
> >>On Wed, Jul 10, 2013 at 01:33:25PM +0300, Gleb Natapov wrote:
> >>
> >>Here's an idea, trim the damn email ;-) -- not only directed at gleb.
> >>
> >Good idea.
> >
> >>>>Ingo, Gleb,
> >>>>
> >>>> From the results perspective, Andrew Theurer, Vinod's test results are
> >>>>pro-pvspinlock.
> >>>>Could you please help me to know what will make it a mergeable
> >>>>candidate?.
> >>>>
> >>>I need to spend more time reviewing it :) The problem with PV interfaces
> >>>is that they are easy to add but hard to get rid of if better solution
> >>>(HW or otherwise) appears.
> >>
> >>How so? Just make sure the registration for the PV interface is optional; that
> >>is, allow it to fail. A guest that fails the PV setup will either have to try
> >>another PV interface or fall back to 'native'.
> >>
> >We have to carry PV around for live migration purposes. PV interface
> >cannot disappear under a running guest.
> >
> 
> IIRC, The only requirement was running state of the vcpu to be retained.
> This was addressed by
> [PATCH RFC V10 13/18] kvm : Fold pv_unhalt flag into GET_MP_STATE
> ioctl to aid migration.
> 
> I would have to know more if I missed something here.
> 
I was not talking about the state that has to be migrated, but about the
HV<->guest interface that has to be preserved after migration.

--
			Gleb.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-10 11:28                         ` Raghavendra K T
@ 2013-07-10 11:40                           ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-07-10 11:40 UTC (permalink / raw)
  To: Gleb Natapov, Peter Zijlstra
  Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
	hpa, pbonzini, linux-doc, xen-devel, mtosatti,
	stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel, riel,
	virtualization, srivatsa.vaddagiri

Dropping stephen because of a bounce.

On 07/10/2013 04:58 PM, Raghavendra K T wrote:
> On 07/10/2013 04:17 PM, Gleb Natapov wrote:
>> On Wed, Jul 10, 2013 at 12:40:47PM +0200, Peter Zijlstra wrote:
>>> On Wed, Jul 10, 2013 at 01:33:25PM +0300, Gleb Natapov wrote:
>>>
>>> Here's an idea, trim the damn email ;-) -- not only directed at gleb.
>>>
>> Good idea.
>>
>>>>> Ingo, Gleb,
>>>>>
>>>>>  From the results perspective, Andrew Theurer, Vinod's test results
>>>>> are
>>>>> pro-pvspinlock.
>>>>> Could you please help me to know what will make it a mergeable
>>>>> candidate?.
>>>>>
>>>> I need to spend more time reviewing it :) The problem with PV
>>>> interfaces
>>>> is that they are easy to add but hard to get rid of if better solution
>>>> (HW or otherwise) appears.
>>>
>>> How so? Just make sure the registration for the PV interface is
>>> optional; that
>>> is, allow it to fail. A guest that fails the PV setup will either
>>> have to try
>>> another PV interface or fall back to 'native'.
>>>

Forgot to add: yes, currently pvspinlocks are not enabled by default, and
we also have the jump_label mechanism to enable them.
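
Roughly, the guest-side enablement looks like this (a sketch using the names
from the series -- paravirt_ticketlocks_enabled, KVM_FEATURE_PV_UNHALT,
kvm_lock_spinning/kvm_unlock_kick -- not the exact patch):

/* Off by default: the ticketlock slow paths are only taken once this
 * static key has been switched on. */
struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;

void __init kvm_spinlock_init(void)
{
	if (!kvm_para_available())
		return;
	/* Host does not offer the PV unhalt interface: stay fully native. */
	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
		return;

	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
	pv_lock_ops.unlock_kick = kvm_unlock_kick;
	static_key_slow_inc(&paravirt_ticketlocks_enabled);
}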
[...]


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-10 11:24                   ` Raghavendra K T
@ 2013-07-10 11:41                       ` Gleb Natapov
  0 siblings, 0 replies; 192+ messages in thread
From: Gleb Natapov @ 2013-07-10 11:41 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
	hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
	stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri

On Wed, Jul 10, 2013 at 04:54:12PM +0530, Raghavendra K T wrote:
> >>Ingo, Gleb,
> >>
> >> From the results perspective, Andrew Theurer, Vinod's test results are
> >>pro-pvspinlock.
> >>Could you please help me to know what will make it a mergeable
> >>candidate?.
> >>
> >I need to spend more time reviewing it :) The problem with PV interfaces
> >is that they are easy to add but hard to get rid of if better solution
> >(HW or otherwise) appears.
> 
> Infact Avi had acked the whole V8 series, but delayed for seeing how
> PLE improvement would affect it.
> 
I see that Ingo was happy with it too.

> The only addition from that series has been
> 1. tuning the SPIN_THRESHOLD to 32k (from 2k)
> and
> 2. the halt handler now calls vcpu_on_spin to take the advantage of
> PLE improvements. (this can also go as an independent patch into
> kvm)
> 
> The rationale for making SPIN_THRESHOLD 32k needs big explanation.
> Before PLE improvements, as you know,
> kvm undercommit scenario was very worse in ple enabled cases.
> (compared to ple disabled cases).
> pvspinlock patches behaved equally bad in undercommit. Both had
> similar reason so at the end there was no degradation w.r.t base.
> 
> The reason for bad performance in PLE case was unneeded vcpu
> iteration in ple handler resulting in high yield_to calls and double
> run queue locks.
> With pvspinlock applied, same villain role was played by excessive
> halt exits.
> 
> But after ple handler improved, we needed to throttle unnecessary halts
> in undercommit for pvspinlock to be on par with 1x result.
> 
Makes sense. I will review it ASAP. BTW, the latest version is V10, right?

--
			Gleb.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-10 11:41                       ` Gleb Natapov
@ 2013-07-10 11:50                         ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-07-10 11:50 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
	hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
	stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel, riel,
	virtualization, srivatsa.vaddagiri

On 07/10/2013 05:11 PM, Gleb Natapov wrote:
> On Wed, Jul 10, 2013 at 04:54:12PM +0530, Raghavendra K T wrote:
>>>> Ingo, Gleb,
>>>>
>>>>  From the results perspective, Andrew Theurer, Vinod's test results are
>>>> pro-pvspinlock.
>>>> Could you please help me to know what will make it a mergeable
>>>> candidate?.
>>>>
>>> I need to spend more time reviewing it :) The problem with PV interfaces
>>> is that they are easy to add but hard to get rid of if better solution
>>> (HW or otherwise) appears.
>>
>> Infact Avi had acked the whole V8 series, but delayed for seeing how
>> PLE improvement would affect it.
>>
> I see that Ingo was happy with it too.
>
>> The only addition from that series has been
>> 1. tuning the SPIN_THRESHOLD to 32k (from 2k)
>> and
>> 2. the halt handler now calls vcpu_on_spin to take the advantage of
>> PLE improvements. (this can also go as an independent patch into
>> kvm)
>>
>> The rationale for making SPIN_THRESHOLD 32k needs big explanation.
>> Before PLE improvements, as you know,
>> kvm undercommit scenario was very worse in ple enabled cases.
>> (compared to ple disabled cases).
>> pvspinlock patches behaved equally bad in undercommit. Both had
>> similar reason so at the end there was no degradation w.r.t base.
>>
>> The reason for bad performance in PLE case was unneeded vcpu
>> iteration in ple handler resulting in high yield_to calls and double
>> run queue locks.
>> With pvspinlock applied, same villain role was played by excessive
>> halt exits.
>>
>> But after ple handler improved, we needed to throttle unnecessary halts
>> in undercommit for pvspinlock to be on par with 1x result.
>>
> Make sense. I will review it ASAP. BTW the latest version is V10 right?
>

Yes. Thank you.


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-10 10:47                       ` Gleb Natapov
@ 2013-07-10 15:03                         ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-07-10 15:03 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Peter Zijlstra, Raghavendra K T, Andrew Jones, mingo, ouyang,
	habanero, jeremy, x86, hpa, pbonzini, linux-doc, xen-devel,
	mtosatti, stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri

On Wed, Jul 10, 2013 at 01:47:17PM +0300, Gleb Natapov wrote:
> On Wed, Jul 10, 2013 at 12:40:47PM +0200, Peter Zijlstra wrote:
> > On Wed, Jul 10, 2013 at 01:33:25PM +0300, Gleb Natapov wrote:
> > 
> > Here's an idea, trim the damn email ;-) -- not only directed at gleb.
> > 
> Good idea.
> 
> > > > Ingo, Gleb,
> > > > 
> > > > From the results perspective, Andrew Theurer, Vinod's test results are
> > > > pro-pvspinlock.
> > > > Could you please help me to know what will make it a mergeable
> > > > candidate?.
> > > > 
> > > I need to spend more time reviewing it :) The problem with PV interfaces
> > > is that they are easy to add but hard to get rid of if better solution
> > > (HW or otherwise) appears.
> > 
> > How so? Just make sure the registration for the PV interface is optional; that
> > is, allow it to fail. A guest that fails the PV setup will either have to try
> > another PV interface or fall back to 'native'.
> > 
> We have to carry PV around for live migration purposes. PV interface
> cannot disappear under a running guest.

Why can't it? This is the same as handling, say, XSAVE operations. Some hosts
might have it - some might not. It is the job of the toolstack to make sure
not to migrate to the hosts which don't have it, or to bind the guest to the
lowest common interface (so don't enable the PV interface if the other hosts
in the cluster can't support this flag).

> 
> > > > I agree that Jiannan's Preemptable Lock idea is promising and we could
> > > > evaluate that  approach, and make the best one get into kernel and also
> > > > will carry on discussion with Jiannan to improve that patch.
> > > That would be great. The work is stalled from what I can tell.
> > 
> > I absolutely hated that stuff because it wrecked the native code.
> Yes, the idea was to hide it from native code behind PV hooks.
> 
> --
> 			Gleb.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-10 15:03                         ` Konrad Rzeszutek Wilk
@ 2013-07-10 15:16                           ` Gleb Natapov
  -1 siblings, 0 replies; 192+ messages in thread
From: Gleb Natapov @ 2013-07-10 15:16 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Peter Zijlstra, Raghavendra K T, Andrew Jones, mingo, ouyang,
	habanero, jeremy, x86, hpa, pbonzini, linux-doc, xen-devel,
	mtosatti, stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri

On Wed, Jul 10, 2013 at 11:03:15AM -0400, Konrad Rzeszutek Wilk wrote:
> On Wed, Jul 10, 2013 at 01:47:17PM +0300, Gleb Natapov wrote:
> > On Wed, Jul 10, 2013 at 12:40:47PM +0200, Peter Zijlstra wrote:
> > > On Wed, Jul 10, 2013 at 01:33:25PM +0300, Gleb Natapov wrote:
> > > 
> > > Here's an idea, trim the damn email ;-) -- not only directed at gleb.
> > > 
> > Good idea.
> > 
> > > > > Ingo, Gleb,
> > > > > 
> > > > > From the results perspective, Andrew Theurer, Vinod's test results are
> > > > > pro-pvspinlock.
> > > > > Could you please help me to know what will make it a mergeable
> > > > > candidate?.
> > > > > 
> > > > I need to spend more time reviewing it :) The problem with PV interfaces
> > > > is that they are easy to add but hard to get rid of if better solution
> > > > (HW or otherwise) appears.
> > > 
> > > How so? Just make sure the registration for the PV interface is optional; that
> > > is, allow it to fail. A guest that fails the PV setup will either have to try
> > > another PV interface or fall back to 'native'.
> > > 
> > We have to carry PV around for live migration purposes. PV interface
> > cannot disappear under a running guest.
> 
> Why can't it? This is the same as handling say XSAVE operations. Some hosts
> might have it - some might not. It is the job of the toolstack to make sure
> to not migrate to the hosts which don't have it. Or bound the guest to the
> lowest interface (so don't enable the PV interface if the other hosts in the
> cluster can't support this flag)?
XSAVE is a HW feature and it is not going to disappear under you after a
software upgrade. Upgrading the kernel on part of your hosts and no longer
being able to migrate to them is not something people who use live migration
expect. In practice it means that updating all hosts in a datacenter to a
newer kernel is no longer possible without rebooting VMs.

--
			Gleb.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-10 15:16                           ` Gleb Natapov
@ 2013-07-11  0:12                             ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 192+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-07-11  0:12 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Peter Zijlstra, Raghavendra K T, Andrew Jones, mingo, ouyang,
	habanero, jeremy, x86, hpa, pbonzini, linux-doc, xen-devel,
	mtosatti, stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri

Gleb Natapov <gleb@redhat.com> wrote:
>On Wed, Jul 10, 2013 at 11:03:15AM -0400, Konrad Rzeszutek Wilk wrote:
>> On Wed, Jul 10, 2013 at 01:47:17PM +0300, Gleb Natapov wrote:
>> > On Wed, Jul 10, 2013 at 12:40:47PM +0200, Peter Zijlstra wrote:
>> > > On Wed, Jul 10, 2013 at 01:33:25PM +0300, Gleb Natapov wrote:
>> > >
>> > > Here's an idea, trim the damn email ;-) -- not only directed at gleb.
>> > >
>> > Good idea.
>> >
>> > > > > Ingo, Gleb,
>> > > > >
>> > > > > From the results perspective, Andrew Theurer, Vinod's test results are
>> > > > > pro-pvspinlock.
>> > > > > Could you please help me to know what will make it a mergeable candidate?.
>> > > > >
>> > > > I need to spend more time reviewing it :) The problem with PV interfaces
>> > > > is that they are easy to add but hard to get rid of if better solution
>> > > > (HW or otherwise) appears.
>> > >
>> > > How so? Just make sure the registration for the PV interface is optional; that
>> > > is, allow it to fail. A guest that fails the PV setup will either have to try
>> > > another PV interface or fall back to 'native'.
>> > >
>> > We have to carry PV around for live migration purposes. PV interface
>> > cannot disappear under a running guest.
>>
>> Why can't it? This is the same as handling say XSAVE operations. Some hosts
>> might have it - some might not. It is the job of the toolstack to make sure
>> to not migrate to the hosts which don't have it. Or bound the guest to the
>> lowest interface (so don't enable the PV interface if the other hosts in the
>> cluster can't support this flag)?
>XSAVE is HW feature and it is not going disappear under you after software
>upgrade. Upgrading kernel on part of your hosts and no longer been
>able to migrate to them is not something people who use live migration
>expect. In practise it means that updating all hosts in a datacenter to
>newer kernel is no longer possible without rebooting VMs.
>
>--
>			Gleb.

I see. Perhaps then, if the hardware becomes much better at this, another PV interface can be provided which will use the static_key to turn off the PV spinlock and use the bare-metal version (or perhaps some form of hardware lock elision). That does mean the host has to do something when this PV interface is invoked by older guests.

Anyhow, that said, I think the benefits are pretty substantial right now, and worrying about whether the hardware vendors will provide something new is not benefiting users. What perhaps then needs to be addressed is how to retire an obsolete mechanism like this if the hardware becomes superb?
-- 
Sent from my Android phone. Please excuse my brevity.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-10 10:33                   ` Gleb Natapov
                                     ` (4 preceding siblings ...)
  (?)
@ 2013-07-11  9:13                   ` Raghavendra K T
  2013-07-11  9:48                       ` Gleb Natapov
  -1 siblings, 1 reply; 192+ messages in thread
From: Raghavendra K T @ 2013-07-11  9:13 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
	hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
	stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel, riel,
	virtualization, srivatsa.vaddagiri

On 07/10/2013 04:03 PM, Gleb Natapov wrote:
[...] trimmed

>>> Yes. you are right. dynamic ple window was an attempt to solve it.
>>>
>>> Problem is, reducing the SPIN_THRESHOLD is resulting in excess halt
>>> exits in under-commits and increasing ple_window may be sometimes
>>> counter productive as it affects other busy-wait constructs such as
>>> flush_tlb AFAIK.
>>> So if we could have had a dynamically changing SPIN_THRESHOLD too, that
>>> would be nice.
>>>
>>
>> Gleb, Andrew,
>> I tested with the global ple window change (similar to what I posted
>> here https://lkml.org/lkml/2012/11/11/14 ),
> This does not look global. It changes PLE per vcpu.

Okay, got it. I was thinking it would change the global value, but IIRC
it is changing the global sysfs value and the per-vcpu ple_window.
Sorry, I missed this part yesterday.

>
>> But did not see good result. May be it is good to go with per VM
>> ple_window.
>>
>> Gleb,
>> Can you elaborate little more on what you have in mind regarding per
>> VM ple_window. (maintaining part of it as a per vm variable is clear
>> to
>>   me), but is it that we have to load that every time of guest entry?
>>
> Only when it changes, shouldn't be to often no?

OK, thinking about how to do that: read the register and write it back if
there needs to be a change during guest entry?



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-11  9:13                   ` Raghavendra K T
@ 2013-07-11  9:48                       ` Gleb Natapov
  0 siblings, 0 replies; 192+ messages in thread
From: Gleb Natapov @ 2013-07-11  9:48 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
	hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
	stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel, riel,
	virtualization, srivatsa.vaddagiri

On Thu, Jul 11, 2013 at 02:43:03PM +0530, Raghavendra K T wrote:
> On 07/10/2013 04:03 PM, Gleb Natapov wrote:
> [...] trimmed
> 
> >>>Yes. you are right. dynamic ple window was an attempt to solve it.
> >>>
> >>>Problem is, reducing the SPIN_THRESHOLD is resulting in excess halt
> >>>exits in under-commits and increasing ple_window may be sometimes
> >>>counter productive as it affects other busy-wait constructs such as
> >>>flush_tlb AFAIK.
> >>>So if we could have had a dynamically changing SPIN_THRESHOLD too, that
> >>>would be nice.
> >>>
> >>
> >>Gleb, Andrew,
> >>I tested with the global ple window change (similar to what I posted
> >>here https://lkml.org/lkml/2012/11/11/14 ),
> >This does not look global. It changes PLE per vcpu.
> 
> Okay. Got it. I was thinking it would change the global value. But IIRC
>  It is changing global sysfs value and per vcpu ple_window.
> Sorry. I missed this part yesterday.
> 
Yes, it changes sysfs value but this does not affect already created
vcpus.

> >
> >>But did not see good result. May be it is good to go with per VM
> >>ple_window.
> >>
> >>Gleb,
> >>Can you elaborate little more on what you have in mind regarding per
> >>VM ple_window. (maintaining part of it as a per vm variable is clear
> >>to
> >>  me), but is it that we have to load that every time of guest entry?
> >>
> >Only when it changes, shouldn't be to often no?
> 
> Ok. Thinking how to do. read the register and writeback if there need
> to be a change during guest entry?
> 
Why not do it like in the patch you've linked? When the value changes, write
it to the VMCS of the current vcpu.

--
			Gleb.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-11  9:48                       ` Gleb Natapov
  (?)
@ 2013-07-11 10:10                       ` Raghavendra K T
  2013-07-11 10:11                           ` Gleb Natapov
  -1 siblings, 1 reply; 192+ messages in thread
From: Raghavendra K T @ 2013-07-11 10:10 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
	hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
	stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel, riel,
	virtualization, srivatsa.vaddagiri

On 07/11/2013 03:18 PM, Gleb Natapov wrote:
> On Thu, Jul 11, 2013 at 02:43:03PM +0530, Raghavendra K T wrote:
>> On 07/10/2013 04:03 PM, Gleb Natapov wrote:
>> [...] trimmed
>>
>>>>> Yes. you are right. dynamic ple window was an attempt to solve it.
>>>>>
>>>>> Problem is, reducing the SPIN_THRESHOLD is resulting in excess halt
>>>>> exits in under-commits and increasing ple_window may be sometimes
>>>>> counter productive as it affects other busy-wait constructs such as
>>>>> flush_tlb AFAIK.
>>>>> So if we could have had a dynamically changing SPIN_THRESHOLD too, that
>>>>> would be nice.
>>>>>
>>>>
>>>> Gleb, Andrew,
>>>> I tested with the global ple window change (similar to what I posted
>>>> here https://lkml.org/lkml/2012/11/11/14 ),
>>> This does not look global. It changes PLE per vcpu.
>>
>> Okay. Got it. I was thinking it would change the global value. But IIRC
>>   It is changing global sysfs value and per vcpu ple_window.
>> Sorry. I missed this part yesterday.
>>
> Yes, it changes sysfs value but this does not affect already created
> vcpus.
>
>>>
>>>> But did not see good result. May be it is good to go with per VM
>>>> ple_window.
>>>>
>>>> Gleb,
>>>> Can you elaborate little more on what you have in mind regarding per
>>>> VM ple_window. (maintaining part of it as a per vm variable is clear
>>>> to
>>>>   me), but is it that we have to load that every time of guest entry?
>>>>
>>> Only when it changes, shouldn't be to often no?
>>
>> Ok. Thinking how to do. read the register and writeback if there need
>> to be a change during guest entry?
>>
> Why not do it like in the patch you've linked? When value changes write it
> to VMCS of the current vcpu.
>

Yes, it can be done. So the running vcpu's ple_window gets updated only
after the next PLE exit, right?


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-11 10:10                       ` Raghavendra K T
@ 2013-07-11 10:11                           ` Gleb Natapov
  0 siblings, 0 replies; 192+ messages in thread
From: Gleb Natapov @ 2013-07-11 10:11 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
	hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
	stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel, riel,
	virtualization, srivatsa.vaddagiri

On Thu, Jul 11, 2013 at 03:40:38PM +0530, Raghavendra K T wrote:
> >>>>Gleb,
> >>>>Can you elaborate little more on what you have in mind regarding per
> >>>>VM ple_window. (maintaining part of it as a per vm variable is clear
> >>>>to
> >>>>  me), but is it that we have to load that every time of guest entry?
> >>>>
> >>>Only when it changes, shouldn't be to often no?
> >>
> >>Ok. Thinking how to do. read the register and writeback if there need
> >>to be a change during guest entry?
> >>
> >Why not do it like in the patch you've linked? When value changes write it
> >to VMCS of the current vcpu.
> >
> 
> Yes. can be done. So the running vcpu's ple_window gets updated only
> after next pl-exit. right?
I am not sure what you mean. You cannot change a vcpu's ple_window while the
vcpu is in guest mode.

--
			Gleb.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-11 10:11                           ` Gleb Natapov
@ 2013-07-11 10:53                             ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-07-11 10:53 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
	hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
	stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel, riel,
	virtualization, srivatsa.vaddagiri

On 07/11/2013 03:41 PM, Gleb Natapov wrote:
> On Thu, Jul 11, 2013 at 03:40:38PM +0530, Raghavendra K T wrote:
>>>>>> Gleb,
>>>>>> Can you elaborate little more on what you have in mind regarding per
>>>>>> VM ple_window. (maintaining part of it as a per vm variable is clear
>>>>>> to
>>>>>>   me), but is it that we have to load that every time of guest entry?
>>>>>>
>>>>> Only when it changes, shouldn't be to often no?
>>>>
>>>> Ok. Thinking how to do. read the register and writeback if there need
>>>> to be a change during guest entry?
>>>>
>>> Why not do it like in the patch you've linked? When value changes write it
>>> to VMCS of the current vcpu.
>>>
>>
>> Yes. can be done. So the running vcpu's ple_window gets updated only
>> after next pl-exit. right?
> I am not sure what you mean. You cannot change vcpu's ple_window while
> vcpu is in a guest mode.
>

I agree with that. Both of us are on the same page.
  What I meant is,
suppose the per VM ple_window changes when a vcpu x of that VM  was running,
it will get its ple_window updated during next run.


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-11 10:53                             ` Raghavendra K T
@ 2013-07-11 10:56                               ` Gleb Natapov
  -1 siblings, 0 replies; 192+ messages in thread
From: Gleb Natapov @ 2013-07-11 10:56 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
	hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
	stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel, riel,
	virtualization, srivatsa.vaddagiri

On Thu, Jul 11, 2013 at 04:23:58PM +0530, Raghavendra K T wrote:
> On 07/11/2013 03:41 PM, Gleb Natapov wrote:
> >On Thu, Jul 11, 2013 at 03:40:38PM +0530, Raghavendra K T wrote:
> >>>>>>Gleb,
> >>>>>>Can you elaborate little more on what you have in mind regarding per
> >>>>>>VM ple_window. (maintaining part of it as a per vm variable is clear
> >>>>>>to
> >>>>>>  me), but is it that we have to load that every time of guest entry?
> >>>>>>
> >>>>>Only when it changes, shouldn't be to often no?
> >>>>
> >>>>Ok. Thinking how to do. read the register and writeback if there need
> >>>>to be a change during guest entry?
> >>>>
> >>>Why not do it like in the patch you've linked? When value changes write it
> >>>to VMCS of the current vcpu.
> >>>
> >>
> >>Yes. can be done. So the running vcpu's ple_window gets updated only
> >>after next pl-exit. right?
> >I am not sure what you mean. You cannot change vcpu's ple_window while
> >vcpu is in a guest mode.
> >
> 
> I agree with that. Both of us are on the same page.
>  What I meant is,
> suppose the per VM ple_window changes when a vcpu x of that VM  was running,
> it will get its ple_window updated during next run.
Ah, I think "per VM" is what confuses me. Why do you want to have "per
VM" ple_windows and not "per vcpu" one? With per vcpu one ple_windows
cannot change while vcpu is running.

--
			Gleb.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-07-11 10:56                               ` Gleb Natapov
  (?)
@ 2013-07-11 11:14                               ` Raghavendra K T
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-07-11 11:14 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
	hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
	stefano.stabellini, andi, attilio.rao, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel, riel,
	virtualization, srivatsa.vaddagiri

On 07/11/2013 04:26 PM, Gleb Natapov wrote:
> On Thu, Jul 11, 2013 at 04:23:58PM +0530, Raghavendra K T wrote:
>> On 07/11/2013 03:41 PM, Gleb Natapov wrote:
>>> On Thu, Jul 11, 2013 at 03:40:38PM +0530, Raghavendra K T wrote:
>>>>>>>> Gleb,
>>>>>>>> Can you elaborate little more on what you have in mind regarding per
>>>>>>>> VM ple_window. (maintaining part of it as a per vm variable is clear
>>>>>>>> to
>>>>>>>>   me), but is it that we have to load that every time of guest entry?
>>>>>>>>
>>>>>>> Only when it changes, shouldn't be to often no?
>>>>>>
>>>>>> Ok. Thinking how to do. read the register and writeback if there need
>>>>>> to be a change during guest entry?
>>>>>>
>>>>> Why not do it like in the patch you've linked? When value changes write it
>>>>> to VMCS of the current vcpu.
>>>>>
>>>>
>>>> Yes. can be done. So the running vcpu's ple_window gets updated only
>>>> after next pl-exit. right?
>>> I am not sure what you mean. You cannot change vcpu's ple_window while
>>> vcpu is in a guest mode.
>>>
>>
>> I agree with that. Both of us are on the same page.
>>   What I meant is,
>> suppose the per VM ple_window changes when a vcpu x of that VM  was running,
>> it will get its ple_window updated during next run.
> Ah, I think "per VM" is what confuses me. Why do you want to have "per
> VM" ple_windows and not "per vcpu" one? With per vcpu one ple_windows
> cannot change while vcpu is running.
>

Okay. Got that. My initial feeling was that a vcpu would not "feel" the
global load. But I think that should not be a problem. Instead we will not
need atomic operations to update ple_window, which is better.
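
Spelling the atomics point out (illustrative sketch only, not actual code):
a per-VM window is shared by every vcpu thread, so updating it safely needs
atomic operations, whereas a per-vcpu window is only touched by its own
thread outside guest mode, and a plain store is enough:

	struct kvm_sketch {			/* per-VM: shared state */
		atomic_t ple_window;		/* concurrent updaters -> atomics */
	};

	struct kvm_vcpu_sketch {		/* per-vcpu: private state */
		u32 ple_window;			/* single updater -> plain store */
	};

	static void vcpu_grow_ple_window(struct kvm_vcpu_sketch *v, u32 delta)
	{
		v->ple_window += delta;		/* no lock or atomic needed */
	}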


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-01 20:14 ` Andi Kleen
                     ` (2 preceding siblings ...)
  2013-06-04 10:58   ` Raghavendra K T
@ 2013-06-04 10:58   ` Raghavendra K T
  3 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-04 10:58 UTC (permalink / raw)
  To: Andi Kleen
  Cc: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini, linux-doc,
	habanero, xen-devel, peterz, mtosatti, stefano.stabellini,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, riel, drjones,
	virtualization, srivatsa.vaddagiri

On 06/02/2013 01:44 AM, Andi Kleen wrote:
>
> FWIW I use the paravirt spinlock ops for adding lock elision
> to the spinlocks.
>
> This needs to be done at the top level (so the level you're removing)
>
> However I don't like the pv mechanism very much and would
> be fine with using an static key hook in the main path
> like I do for all the other lock types.
>
> It also uses interrupt ops patching, for that it would
> be still needed though.
>

Hi Andi, IIUC, you are okay with the current approach overall right?


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-01 20:28     ` Jeremy Fitzhardinge
@ 2013-06-01 20:46       ` Andi Kleen
  -1 siblings, 0 replies; 192+ messages in thread
From: Andi Kleen @ 2013-06-01 20:46 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Andi Kleen, Raghavendra K T, gleb, mingo, x86, konrad.wilk, hpa,
	pbonzini, linux-doc, habanero, xen-devel, peterz, mtosatti,
	stefano.stabellini, attilio.rao, ouyang, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, drjones, virtualization,
	srivatsa.vaddagiri

On Sat, Jun 01, 2013 at 01:28:00PM -0700, Jeremy Fitzhardinge wrote:
> On 06/01/2013 01:14 PM, Andi Kleen wrote:
> > FWIW I use the paravirt spinlock ops for adding lock elision
> > to the spinlocks.
> 
> Does lock elision still use the ticketlock algorithm/structure, or are
> they different?  If they're still basically ticketlocks, then it seems
> to me that they're complimentary - hle handles the fastpath, and pv the
> slowpath.

It uses the ticketlock algorithm/structure, but:

- it needs to know that the lock is free, using an operation of its own
- it has an additional field for strong adaptation state
(but that field is independent of the low level lock implementation,
so can be used with any kind of lock)

So currently it inlines the ticket lock code into its own.

Doing pv on the slow path would be possible, but would need
some additional (minor) hooks I think.

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only.
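
For reference, "knowing that the lock is free" is just a head/tail comparison
on a ticket lock, much as the existing arch_spin_is_locked() does (sketch
only, not the actual lock-elision patches):

	static inline bool __ticket_lock_is_free(arch_spinlock_t *lock)
	{
		struct __raw_tickets t = ACCESS_ONCE(lock->tickets);

		return t.head == t.tail;	/* no ticket outstanding */
	}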

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-01 20:14 ` Andi Kleen
@ 2013-06-01 20:28     ` Jeremy Fitzhardinge
  2013-06-01 20:28   ` Jeremy Fitzhardinge
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 192+ messages in thread
From: Jeremy Fitzhardinge @ 2013-06-01 20:28 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Raghavendra K T, gleb, mingo, x86, konrad.wilk, hpa, pbonzini,
	linux-doc, habanero, xen-devel, peterz, mtosatti,
	stefano.stabellini, attilio.rao, ouyang, gregkh, agraf,
	chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
	stephan.diestelhorst, riel, drjones, virtualization,
	srivatsa.vaddagiri

On 06/01/2013 01:14 PM, Andi Kleen wrote:
> FWIW I use the paravirt spinlock ops for adding lock elision
> to the spinlocks.

Does lock elision still use the ticketlock algorithm/structure, or are
they different?  If they're still basically ticketlocks, then it seems
to me that they're complimentary - hle handles the fastpath, and pv the
slowpath.

> This needs to be done at the top level (so the level you're removing)
>
> However I don't like the pv mechanism very much and would 
> be fine with using an static key hook in the main path
> like I do for all the other lock types.

Right.

    J

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-01  8:21 ` Raghavendra K T
                   ` (2 preceding siblings ...)
  (?)
@ 2013-06-01 20:14 ` Andi Kleen
  2013-06-01 20:28     ` Jeremy Fitzhardinge
                     ` (3 more replies)
  -1 siblings, 4 replies; 192+ messages in thread
From: Andi Kleen @ 2013-06-01 20:14 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini, linux-doc,
	habanero, xen-devel, peterz, mtosatti, stefano.stabellini, andi,
	attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
	avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
	drjones, virtualization, srivatsa.vaddagiri


FWIW I use the paravirt spinlock ops for adding lock elision
to the spinlocks.

This needs to be done at the top level (so the level you're removing)

However I don't like the pv mechanism very much and would 
be fine with using an static key hook in the main path
like I do for all the other lock types.

It also uses interrupt ops patching, for that it would 
be still needed though.

-Andi
-- 
ak@linux.intel.com -- Speaking for myself only.
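
A static key hook of the kind described above might look roughly like this;
elide_spin_lock() stands in for the elision code and is purely a placeholder:

	struct static_key spinlock_elision = STATIC_KEY_INIT_FALSE;

	static __always_inline void spin_lock_maybe_elided(arch_spinlock_t *lock)
	{
		if (static_key_false(&spinlock_elision) &&
		    elide_spin_lock(lock))
			return;			/* lock elided, no xadd done */

		arch_spin_lock(lock);		/* normal ticket-lock path */
	}

With the key disabled this reduces to a nop in front of the ordinary
ticket-lock fastpath, which is the property being asked for here.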

^ permalink raw reply	[flat|nested] 192+ messages in thread

* [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
@ 2013-06-01 19:21 Raghavendra K T
  0 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:21 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: gregkh, kvm, linux-doc, peterz, drjones, virtualization, andi,
	xen-devel, Raghavendra K T, habanero, riel, stefano.stabellini,
	ouyang, avi.kivity, tglx, chegu_vinod, linux-kernel,
	srivatsa.vaddagiri, attilio.rao, torvalds, stephan.diestelhorst


This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism. The series provides
implementation for both Xen and KVM.

Changes in V9:
- Changed spin_threshold to 32k to avoid excess halt exits that are
   causing undercommit degradation (after PLE handler improvement).
- Added  kvm_irq_delivery_to_apic (suggested by Gleb)
- Optimized halt exit path to use PLE handler

V8 of PVspinlock was posted last year. After Avi's suggestions to look
at PLE handler's improvements, various optimizations in PLE handling
have been tried.

With this series we see that we could get little more improvements on top
of that. 

Ticket locks have an inherent problem in a virtualized case, because
the vCPUs are scheduled rather than running concurrently (ignoring
gang scheduled vCPUs).  This can result in catastrophic performance
collapses when the vCPU scheduler doesn't schedule the correct "next"
vCPU, and ends up scheduling a vCPU which burns its entire timeslice
spinning.  (Note that this is not the same problem as lock-holder
preemption, which this series also addresses; that's also a problem,
but not catastrophic).

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

Currently we deal with this by having PV spinlocks, which adds a layer
of indirection in front of all the spinlock functions, and defining a
completely new implementation for Xen (and for other pvops users, but
there are none at present).

PV ticketlocks keep the existing ticketlock implementation
(fastpath) as-is, but add a couple of pvops for the slow paths:

- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
  iterations, then call out to the __ticket_lock_spinning() pvop,
  which allows a backend to block the vCPU rather than spinning.  This
  pvop can set the lock into "slowpath state".

- When releasing a lock, if it is in "slowpath state", then call
  __ticket_unlock_kick() to kick the next vCPU in line awake.  If the
  lock is no longer in contention, it also clears the slowpath flag.
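
Structurally, the two pvops above amount to something like the following
(a sketch inferred from the patch titles in this series; the exact types in
the patches may differ):

	struct pv_lock_ops {
		/* called after SPIN_THRESHOLD iterations; may block the vcpu */
		struct paravirt_callee_save lock_spinning;
		/* called at unlock time when the slowpath flag is set */
		void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
	};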

The "slowpath state" is stored in the LSB of the within the lock tail
ticket.  This has the effect of reducing the max number of CPUs by
half (so, a "small ticket" can deal with 128 CPUs, and "large ticket"
32768).
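
Put differently, with CONFIG_PARAVIRT_SPINLOCKS enabled the per-ticket
increment becomes 2 and bit 0 of the tail carries the flag; roughly (sketch,
see the "increment by 2" and "Add slowpath logic" patches for the real
definitions):

	#ifdef CONFIG_PARAVIRT_SPINLOCKS
	#define __TICKET_LOCK_INC	2
	#define TICKET_SLOWPATH_FLAG	((__ticket_t)1)
	#else
	#define __TICKET_LOCK_INC	1
	#define TICKET_SLOWPATH_FLAG	((__ticket_t)0)
	#endif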

For KVM, one hypercall is introduced in the hypervisor that allows a vcpu
to kick another vcpu out of halt state.
The blocking of the vcpu is done using halt() in the (lock_spinning) slowpath.
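
Schematically, the guest-side kick is a single hypercall carrying the target
vcpu's APIC id (a sketch; the exact ABI is spelled out in the series'
hypercall documentation patch):

	static void kvm_kick_cpu(int cpu)
	{
		int apicid = per_cpu(x86_cpu_to_apicid, cpu);

		/* wake 'apicid' if it is halted in the lock_spinning() slowpath */
		kvm_hypercall2(KVM_HC_KICK_CPU, 0 /* flags */, apicid);
	}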

Overall, it results in a large reduction in code, it makes the native
and virtualized cases closer, and it removes a layer of indirection
around all the spinlock functions.

The fast path (taking an uncontended lock which isn't in "slowpath"
state) is optimal, identical to the non-paravirtualized case.

The inner part of ticket lock code becomes:
	inc = xadd(&lock->tickets, inc);
	inc.tail &= ~TICKET_SLOWPATH_FLAG;

	if (likely(inc.head == inc.tail))
		goto out;
	for (;;) {
		unsigned count = SPIN_THRESHOLD;
		do {
			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
				goto out;
			cpu_relax();
		} while (--count);
		__ticket_lock_spinning(lock, inc.tail);
	}
out:	barrier();
which results in:
	push   %rbp
	mov    %rsp,%rbp

	mov    $0x200,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f	# Slowpath if lock in contention

	pop    %rbp
	retq   

	### SLOWPATH START
1:	and    $-2,%edx
	movzbl %dl,%esi

2:	mov    $0x800,%eax
	jmp    4f

3:	pause  
	sub    $0x1,%eax
	je     5f

4:	movzbl (%rdi),%ecx
	cmp    %cl,%dl
	jne    3b

	pop    %rbp
	retq   

5:	callq  *__ticket_lock_spinning
	jmp    2b
	### SLOWPATH END

with CONFIG_PARAVIRT_SPINLOCKS=n, the code has changed slightly, where
the fastpath case is straight through (taking the lock without
contention), and the spin loop is out of line:

	push   %rbp
	mov    %rsp,%rbp

	mov    $0x100,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f

	pop    %rbp
	retq   

	### SLOWPATH START
1:	pause  
	movzbl (%rdi),%eax
	cmp    %dl,%al
	jne    1b

	pop    %rbp
	retq   
	### SLOWPATH END

The unlock code is complicated by the need to both add to the lock's
"head" and fetch the slowpath flag from "tail".  This version of the
patch uses a locked add to do this, followed by a test to see if the
slowflag is set.  The lock prefix acts as a full memory barrier, so we
can be sure that other CPUs will have seen the unlock before we read
the flag (without the barrier the read could be fetched from the
store queue before it hits memory, which could result in a deadlock).

This is all unnecessary complication if you're not using PV ticket
locks, so the code also uses the jump-label machinery to select the
standard "add"-based unlock in the non-PV case.

	if (TICKET_SLOWPATH_FLAG &&
	    static_key_false(&paravirt_ticketlocks_enabled)) {
		arch_spinlock_t prev;
		prev = *lock;
		add_smp(&lock->tickets.head, TICKET_LOCK_INC);

		/* add_smp() is a full mb() */
		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
			__ticket_unlock_slowpath(lock, prev);
	} else
		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
which generates:
	push   %rbp
	mov    %rsp,%rbp

	nop5	# replaced by 5-byte jmp 2f when PV enabled

	# non-PV unlock
	addb   $0x2,(%rdi)

1:	pop    %rbp
	retq   

### PV unlock ###
2:	movzwl (%rdi),%esi	# Fetch prev

	lock addb $0x2,(%rdi)	# Do unlock

	testb  $0x1,0x1(%rdi)	# Test flag
	je     1b		# Finished if not set

### Slow path ###
	add    $2,%sil		# Add "head" in old lock state
	mov    %esi,%edx
	and    $0xfe,%dh	# clear slowflag for comparison
	movzbl %dh,%eax
	cmp    %dl,%al		# If head == tail (uncontended)
	je     4f		# clear slowpath flag

	# Kick next CPU waiting for lock
3:	movzbl %sil,%esi
	callq  *pv_lock_ops.kick

	pop    %rbp
	retq   

	# Lock no longer contended - clear slowflag
4:	mov    %esi,%eax
	lock cmpxchg %dx,(%rdi)	# cmpxchg to clear flag
	cmp    %si,%ax
	jne    3b		# If clear failed, then kick

	pop    %rbp
	retq   

So when not using PV ticketlocks, the unlock sequence just has a
5-byte nop added to it, and the PV case is reasonably straightforward
aside from requiring a "lock add".


Results:
=======
base = 3.10-rc2 kernel
patched = base + this series

The test was on 32 core (model: Intel(R) Xeon(R) CPU X7560) HT disabled
with 32 KVM guest vcpu 8GB RAM.

+-----------+-----------+-----------+------------+-----------+
               ebizzy (records/sec) higher is better
+-----------+-----------+-----------+------------+-----------+
    base        stdev        patched    stdev        %improvement
+-----------+-----------+-----------+------------+-----------+
1x  5574.9000   237.4997    5618.0000    94.0366     0.77311
2x  2741.5000   561.3090    3332.0000   102.4738    21.53930
3x  2146.2500   216.7718    2302.3333    76.3870     7.27237
4x  1663.0000   141.9235    1753.7500    83.5220     5.45701
+-----------+-----------+-----------+------------+-----------+
+-----------+-----------+-----------+------------+-----------+
              dbench  (Throughput) higher is better
+-----------+-----------+-----------+------------+-----------+
    base        stdev        patched    stdev        %improvement
+-----------+-----------+-----------+------------+-----------+
1x 14111.5600   754.4525   14645.9900   114.3087     3.78718
2x  2481.6270    71.2665    2667.1280    73.8193     7.47498
3x  1510.2483    31.8634    1503.8792    36.0777    -0.42173
4x  1029.4875    16.9166    1039.7069    43.8840     0.99267
+-----------+-----------+-----------+------------+-----------+

Your suggestions and comments are welcome.

github link: https://github.com/ktraghavendra/linux/tree/pvspinlock_v9


Please note that we set SPIN_THRESHOLD = 32k with this series,
which costs a little overcommit performance on PLE machines
and a little overall performance on non-PLE machines.
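
Concretely, that threshold corresponds to a definition along these lines
(sketch; the actual value is set in the x86 spinlock header by this series):

	#define SPIN_THRESHOLD	(1 << 15)	/* 32k spins before halting */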

The older series was tested by Attilio for Xen implementation [1].

Jeremy Fitzhardinge (9):
 x86/spinlock: Replace pv spinlocks with pv ticketlocks
 x86/ticketlock: Collapse a layer of functions
 xen: Defer spinlock setup until boot CPU setup
 xen/pvticketlock: Xen implementation for PV ticket locks
 xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks
 x86/pvticketlock: Use callee-save for lock_spinning
 x86/pvticketlock: When paravirtualizing ticket locks, increment by 2
 x86/ticketlock: Add slowpath logic
 xen/pvticketlock: Allow interrupts to be enabled while blocking

Andrew Jones (1):
 Split jumplabel ratelimit

Stefano Stabellini (1):
 xen: Enable PV ticketlocks on HVM Xen

Srivatsa Vaddagiri (3):
 kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks
 kvm guest : Add configuration support to enable debug information for KVM Guests
 kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor

Raghavendra K T (5):
 x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks
 kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration
 Simplify kvm_for_each_vcpu with kvm_irq_delivery_to_apic
 Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
 Add directed yield in vcpu block path

---
Link in V8 has links to previous patch series and also whole history.

V8 PV Ticketspinlock for Xen/KVM link:
[1] https://lkml.org/lkml/2012/5/2/119

 Documentation/virtual/kvm/cpuid.txt      |   4 +
 Documentation/virtual/kvm/hypercalls.txt |  13 ++
 arch/ia64/include/asm/kvm_host.h         |   5 +
 arch/powerpc/include/asm/kvm_host.h      |   5 +
 arch/s390/include/asm/kvm_host.h         |   5 +
 arch/x86/Kconfig                         |  10 +
 arch/x86/include/asm/kvm_host.h          |   7 +-
 arch/x86/include/asm/kvm_para.h          |  14 +-
 arch/x86/include/asm/paravirt.h          |  32 +--
 arch/x86/include/asm/paravirt_types.h    |  10 +-
 arch/x86/include/asm/spinlock.h          | 128 +++++++----
 arch/x86/include/asm/spinlock_types.h    |  16 +-
 arch/x86/include/uapi/asm/kvm_para.h     |   1 +
 arch/x86/kernel/kvm.c                    | 256 +++++++++++++++++++++
 arch/x86/kernel/paravirt-spinlocks.c     |  18 +-
 arch/x86/kvm/cpuid.c                     |   3 +-
 arch/x86/kvm/lapic.c                     |   5 +-
 arch/x86/kvm/x86.c                       |  39 +++-
 arch/x86/xen/smp.c                       |   3 +-
 arch/x86/xen/spinlock.c                  | 384 ++++++++++---------------------
 include/linux/jump_label.h               |  26 +--
 include/linux/jump_label_ratelimit.h     |  34 +++
 include/linux/kvm_host.h                 |   2 +-
 include/linux/perf_event.h               |   1 +
 include/uapi/linux/kvm_para.h            |   1 +
 kernel/jump_label.c                      |   1 +
 virt/kvm/kvm_main.c                      |   6 +-
 27 files changed, 645 insertions(+), 384 deletions(-)

^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
  2013-06-01  8:21 ` Raghavendra K T
  (?)
  (?)
@ 2013-06-01 19:21 ` Raghavendra KT
  -1 siblings, 0 replies; 192+ messages in thread
From: Raghavendra KT @ 2013-06-01 19:21 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: Jeremy Fitzhardinge, gregkh, gleb, linux-doc, peterz, drjones,
	virtualization, andi, hpa, stefano.stabellini, xen-devel, kvm,
	x86, agraf, mingo, habanero, konrad.wilk, ouyang, avi.kivity,
	Thomas Gleixner, chegu_vinod, Marcelo Tosatti,
	Linux Kernel Mailing List, Srivatsa Vaddagiri, attilio.rao,
	pbonzini, torvalds, stephan.diestelhorst


Sorry! Please ignore this thread. My sendmail script aborted in between, and
I am resending the whole series.

^ permalink raw reply	[flat|nested] 192+ messages in thread

* [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
@ 2013-06-01  8:21 ` Raghavendra K T
  0 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01  8:21 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: linux-doc, habanero, Raghavendra K T, xen-devel, peterz,
	mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
	agraf, chegu_vinod, torvalds, avi.kivity, tglx, kvm,
	linux-kernel, stephan.diestelhorst, riel, drjones,
	virtualization, srivatsa.vaddagiri


This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism. The series provides
implementation for both Xen and KVM.

Changes in V9:
- Changed spin_threshold to 32k to avoid excess halt exits that are
   causing undercommit degradation (after PLE handler improvement).
- Added  kvm_irq_delivery_to_apic (suggested by Gleb)
- Optimized halt exit path to use PLE handler

V8 of PVspinlock was posted last year. After Avi's suggestions to look
at PLE handler's improvements, various optimizations in PLE handling
have been tried.

With this series we see that we could get little more improvements on top
of that. 

Ticket locks have an inherent problem in a virtualized case, because
the vCPUs are scheduled rather than running concurrently (ignoring
gang scheduled vCPUs).  This can result in catastrophic performance
collapses when the vCPU scheduler doesn't schedule the correct "next"
vCPU, and ends up scheduling a vCPU which burns its entire timeslice
spinning.  (Note that this is not the same problem as lock-holder
preemption, which this series also addresses; that's also a problem,
but not catastrophic).

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

Currently we deal with this by having PV spinlocks, which adds a layer
of indirection in front of all the spinlock functions, and defining a
completely new implementation for Xen (and for other pvops users, but
there are none at present).

PV ticketlocks keeps the existing ticketlock implemenentation
(fastpath) as-is, but adds a couple of pvops for the slow paths:

- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
  iterations, then call out to the __ticket_lock_spinning() pvop,
  which allows a backend to block the vCPU rather than spinning.  This
  pvop can set the lock into "slowpath state".

- When releasing a lock, if it is in "slowpath state", the call
  __ticket_unlock_kick() to kick the next vCPU in line awake.  If the
  lock is no longer in contention, it also clears the slowpath flag.

The "slowpath state" is stored in the LSB of the within the lock tail
ticket.  This has the effect of reducing the max number of CPUs by
half (so, a "small ticket" can deal with 128 CPUs, and "large ticket"
32768).

For KVM, one hypercall is introduced in hypervisor,that allows a vcpu to kick
another vcpu out of halt state.
The blocking of vcpu is done using halt() in (lock_spinning) slowpath.

Overall, it results in a large reduction in code, it makes the native
and virtualized cases closer, and it removes a layer of indirection
around all the spinlock functions.

The fast path (taking an uncontended lock which isn't in "slowpath"
state) is optimal, identical to the non-paravirtualized case.

The inner part of ticket lock code becomes:
	inc = xadd(&lock->tickets, inc);
	inc.tail &= ~TICKET_SLOWPATH_FLAG;

	if (likely(inc.head == inc.tail))
		goto out;
	for (;;) {
		unsigned count = SPIN_THRESHOLD;
		do {
			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
				goto out;
			cpu_relax();
		} while (--count);
		__ticket_lock_spinning(lock, inc.tail);
	}
out:	barrier();
which results in:
	push   %rbp
	mov    %rsp,%rbp

	mov    $0x200,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f	# Slowpath if lock in contention

	pop    %rbp
	retq   

	### SLOWPATH START
1:	and    $-2,%edx
	movzbl %dl,%esi

2:	mov    $0x800,%eax
	jmp    4f

3:	pause  
	sub    $0x1,%eax
	je     5f

4:	movzbl (%rdi),%ecx
	cmp    %cl,%dl
	jne    3b

	pop    %rbp
	retq   

5:	callq  *__ticket_lock_spinning
	jmp    2b
	### SLOWPATH END

with CONFIG_PARAVIRT_SPINLOCKS=n, the code has changed slightly, where
the fastpath case is straight through (taking the lock without
contention), and the spin loop is out of line:

	push   %rbp
	mov    %rsp,%rbp

	mov    $0x100,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f

	pop    %rbp
	retq   

	### SLOWPATH START
1:	pause  
	movzbl (%rdi),%eax
	cmp    %dl,%al
	jne    1b

	pop    %rbp
	retq   
	### SLOWPATH END

The unlock code is complicated by the need to both add to the lock's
"head" and fetch the slowpath flag from "tail".  This version of the
patch uses a locked add to do this, followed by a test to see if the
slowflag is set.  The lock prefix acts as a full memory barrier, so we
can be sure that other CPUs will have seen the unlock before we read
the flag (without the barrier the read could be fetched from the
store queue before it hits memory, which could result in a deadlock).

This is is all unnecessary complication if you're not using PV ticket
locks, it also uses the jump-label machinery to use the standard
"add"-based unlock in the non-PV case.

	if (TICKET_SLOWPATH_FLAG &&
	     static_key_false(&paravirt_ticketlocks_enabled))) {
		arch_spinlock_t prev;
		prev = *lock;
		add_smp(&lock->tickets.head, TICKET_LOCK_INC);

		/* add_smp() is a full mb() */
		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
			__ticket_unlock_slowpath(lock, prev);
	} else
		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
which generates:
	push   %rbp
	mov    %rsp,%rbp

	nop5	# replaced by 5-byte jmp 2f when PV enabled

	# non-PV unlock
	addb   $0x2,(%rdi)

1:	pop    %rbp
	retq   

### PV unlock ###
2:	movzwl (%rdi),%esi	# Fetch prev

	lock addb $0x2,(%rdi)	# Do unlock

	testb  $0x1,0x1(%rdi)	# Test flag
	je     1b		# Finished if not set

### Slow path ###
	add    $2,%sil		# Add "head" in old lock state
	mov    %esi,%edx
	and    $0xfe,%dh	# clear slowflag for comparison
	movzbl %dh,%eax
	cmp    %dl,%al		# If head == tail (uncontended)
	je     4f		# clear slowpath flag

	# Kick next CPU waiting for lock
3:	movzbl %sil,%esi
	callq  *pv_lock_ops.kick

	pop    %rbp
	retq   

	# Lock no longer contended - clear slowflag
4:	mov    %esi,%eax
	lock cmpxchg %dx,(%rdi)	# cmpxchg to clear flag
	cmp    %si,%ax
	jne    3b		# If clear failed, then kick

	pop    %rbp
	retq   

So when not using PV ticketlocks, the unlock sequence just has a
5-byte nop added to it, and the PV case is reasonable straightforward
aside from requiring a "lock add".


Results:
=======
base = 3.10-rc2 kernel
patched = base + this series

The test was on 32 core (model: Intel(R) Xeon(R) CPU X7560) HT disabled
with 32 KVM guest vcpu 8GB RAM.

+-----------+-----------+-----------+------------+-----------+
               ebizzy (records/sec) higher is better
+-----------+-----------+-----------+------------+-----------+
    base        stdev        patched    stdev        %improvement
+-----------+-----------+-----------+------------+-----------+
1x  5574.9000   237.4997    5618.0000    94.0366     0.77311
2x  2741.5000   561.3090    3332.0000   102.4738    21.53930
3x  2146.2500   216.7718    2302.3333    76.3870     7.27237
4x  1663.0000   141.9235    1753.7500    83.5220     5.45701
+-----------+-----------+-----------+------------+-----------+
+-----------+-----------+-----------+------------+-----------+
              dbench  (Throughput) higher is better
+-----------+-----------+-----------+------------+-----------+
    base        stdev        patched    stdev        %improvement
+-----------+-----------+-----------+------------+-----------+
1x 14111.5600   754.4525   14645.9900   114.3087     3.78718
2x  2481.6270    71.2665    2667.1280    73.8193     7.47498
3x  1510.2483    31.8634    1503.8792    36.0777    -0.42173
4x  1029.4875    16.9166    1039.7069    43.8840     0.99267
+-----------+-----------+-----------+------------+-----------+

Your suggestions and comments are welcome.

github link: https://github.com/ktraghavendra/linux/tree/pvspinlock_v9


Please note that we set SPIN_THRESHOLD = 32k with this series,
that would eatup little bit of overcommit performance of PLE machines
and overall performance of non-PLE machines. 

The older series was tested by Attilio for Xen implementation [1].

Jeremy Fitzhardinge (9):
 x86/spinlock: Replace pv spinlocks with pv ticketlocks
 x86/ticketlock: Collapse a layer of functions
 xen: Defer spinlock setup until boot CPU setup
 xen/pvticketlock: Xen implementation for PV ticket locks
 xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks
 x86/pvticketlock: Use callee-save for lock_spinning
 x86/pvticketlock: When paravirtualizing ticket locks, increment by 2
 x86/ticketlock: Add slowpath logic
 xen/pvticketlock: Allow interrupts to be enabled while blocking

Andrew Jones (1):
 Split jumplabel ratelimit

Stefano Stabellini (1):
 xen: Enable PV ticketlocks on HVM Xen

Srivatsa Vaddagiri (3):
 kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks
 kvm guest : Add configuration support to enable debug information for KVM Guests
 kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor

Raghavendra K T (5):
 x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks
 kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration
 Simplify kvm_for_each_vcpu with kvm_irq_delivery_to_apic
 Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
 Add directed yield in vcpu block path

---
Link in V8 has links to previous patch series and also whole history.

V8 PV Ticketspinlock for Xen/KVM link:
[1] https://lkml.org/lkml/2012/5/2/119

 Documentation/virtual/kvm/cpuid.txt      |   4 +
 Documentation/virtual/kvm/hypercalls.txt |  13 ++
 arch/ia64/include/asm/kvm_host.h         |   5 +
 arch/powerpc/include/asm/kvm_host.h      |   5 +
 arch/s390/include/asm/kvm_host.h         |   5 +
 arch/x86/Kconfig                         |  10 +
 arch/x86/include/asm/kvm_host.h          |   7 +-
 arch/x86/include/asm/kvm_para.h          |  14 +-
 arch/x86/include/asm/paravirt.h          |  32 +--
 arch/x86/include/asm/paravirt_types.h    |  10 +-
 arch/x86/include/asm/spinlock.h          | 128 +++++++----
 arch/x86/include/asm/spinlock_types.h    |  16 +-
 arch/x86/include/uapi/asm/kvm_para.h     |   1 +
 arch/x86/kernel/kvm.c                    | 256 +++++++++++++++++++++
 arch/x86/kernel/paravirt-spinlocks.c     |  18 +-
 arch/x86/kvm/cpuid.c                     |   3 +-
 arch/x86/kvm/lapic.c                     |   5 +-
 arch/x86/kvm/x86.c                       |  39 +++-
 arch/x86/xen/smp.c                       |   3 +-
 arch/x86/xen/spinlock.c                  | 384 ++++++++++---------------------
 include/linux/jump_label.h               |  26 +--
 include/linux/jump_label_ratelimit.h     |  34 +++
 include/linux/kvm_host.h                 |   2 +-
 include/linux/perf_event.h               |   1 +
 include/uapi/linux/kvm_para.h            |   1 +
 kernel/jump_label.c                      |   1 +
 virt/kvm/kvm_main.c                      |   6 +-
 27 files changed, 645 insertions(+), 384 deletions(-)


^ permalink raw reply	[flat|nested] 192+ messages in thread

* [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
@ 2013-06-01  8:21 Raghavendra K T
  0 siblings, 0 replies; 192+ messages in thread
From: Raghavendra K T @ 2013-06-01  8:21 UTC (permalink / raw)
  To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
  Cc: gregkh, kvm, linux-doc, peterz, drjones, virtualization, andi,
	xen-devel, Raghavendra K T, habanero, riel, stefano.stabellini,
	ouyang, avi.kivity, tglx, chegu_vinod, linux-kernel,
	srivatsa.vaddagiri, attilio.rao, torvalds, stephan.diestelhorst


This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism. The series provides
implementation for both Xen and KVM.

Changes in V9:
- Changed spin_threshold to 32k to avoid excess halt exits that are
   causing undercommit degradation (after PLE handler improvement).
- Added  kvm_irq_delivery_to_apic (suggested by Gleb)
- Optimized halt exit path to use PLE handler

V8 of PVspinlock was posted last year. After Avi's suggestions to look
at PLE handler's improvements, various optimizations in PLE handling
have been tried.

With this series we see that we could get little more improvements on top
of that. 

Ticket locks have an inherent problem in a virtualized case, because
the vCPUs are scheduled rather than running concurrently (ignoring
gang scheduled vCPUs).  This can result in catastrophic performance
collapses when the vCPU scheduler doesn't schedule the correct "next"
vCPU, and ends up scheduling a vCPU which burns its entire timeslice
spinning.  (Note that this is not the same problem as lock-holder
preemption, which this series also addresses; that's also a problem,
but not catastrophic).

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

Currently we deal with this by having PV spinlocks, which adds a layer
of indirection in front of all the spinlock functions, and defining a
completely new implementation for Xen (and for other pvops users, but
there are none at present).

PV ticketlocks keeps the existing ticketlock implemenentation
(fastpath) as-is, but adds a couple of pvops for the slow paths:

- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
  iterations, then call out to the __ticket_lock_spinning() pvop,
  which allows a backend to block the vCPU rather than spinning.  This
  pvop can set the lock into "slowpath state".

- When releasing a lock, if it is in "slowpath state", the call
  __ticket_unlock_kick() to kick the next vCPU in line awake.  If the
  lock is no longer in contention, it also clears the slowpath flag.

The "slowpath state" is stored in the LSB of the within the lock tail
ticket.  This has the effect of reducing the max number of CPUs by
half (so, a "small ticket" can deal with 128 CPUs, and "large ticket"
32768).

For KVM, one hypercall is introduced in hypervisor,that allows a vcpu to kick
another vcpu out of halt state.
The blocking of vcpu is done using halt() in (lock_spinning) slowpath.

Overall, it results in a large reduction in code, it makes the native
and virtualized cases closer, and it removes a layer of indirection
around all the spinlock functions.

The fast path (taking an uncontended lock which isn't in "slowpath"
state) is optimal, identical to the non-paravirtualized case.

The inner part of ticket lock code becomes:
	inc = xadd(&lock->tickets, inc);
	inc.tail &= ~TICKET_SLOWPATH_FLAG;

	if (likely(inc.head == inc.tail))
		goto out;
	for (;;) {
		unsigned count = SPIN_THRESHOLD;
		do {
			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
				goto out;
			cpu_relax();
		} while (--count);
		__ticket_lock_spinning(lock, inc.tail);
	}
out:	barrier();
which results in:
	push   %rbp
	mov    %rsp,%rbp

	mov    $0x200,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f	# Slowpath if lock in contention

	pop    %rbp
	retq   

	### SLOWPATH START
1:	and    $-2,%edx
	movzbl %dl,%esi

2:	mov    $0x800,%eax
	jmp    4f

3:	pause  
	sub    $0x1,%eax
	je     5f

4:	movzbl (%rdi),%ecx
	cmp    %cl,%dl
	jne    3b

	pop    %rbp
	retq   

5:	callq  *__ticket_lock_spinning
	jmp    2b
	### SLOWPATH END

with CONFIG_PARAVIRT_SPINLOCKS=n, the code has changed slightly, where
the fastpath case is straight through (taking the lock without
contention), and the spin loop is out of line:

	push   %rbp
	mov    %rsp,%rbp

	mov    $0x100,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f

	pop    %rbp
	retq   

	### SLOWPATH START
1:	pause  
	movzbl (%rdi),%eax
	cmp    %dl,%al
	jne    1b

	pop    %rbp
	retq   
	### SLOWPATH END

The unlock code is complicated by the need to both add to the lock's
"head" and fetch the slowpath flag from "tail".  This version of the
patch uses a locked add to do this, followed by a test to see if the
slowflag is set.  The lock prefix acts as a full memory barrier, so we
can be sure that other CPUs will have seen the unlock before we read
the flag (without the barrier the read could be fetched from the
store queue before it hits memory, which could result in a deadlock).

This is is all unnecessary complication if you're not using PV ticket
locks, it also uses the jump-label machinery to use the standard
"add"-based unlock in the non-PV case.

	if (TICKET_SLOWPATH_FLAG &&
	     static_key_false(&paravirt_ticketlocks_enabled))) {
		arch_spinlock_t prev;
		prev = *lock;
		add_smp(&lock->tickets.head, TICKET_LOCK_INC);

		/* add_smp() is a full mb() */
		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
			__ticket_unlock_slowpath(lock, prev);
	} else
		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
which generates:
	push   %rbp
	mov    %rsp,%rbp

	nop5	# replaced by 5-byte jmp 2f when PV enabled

	# non-PV unlock
	addb   $0x2,(%rdi)

1:	pop    %rbp
	retq   

### PV unlock ###
2:	movzwl (%rdi),%esi	# Fetch prev

	lock addb $0x2,(%rdi)	# Do unlock

	testb  $0x1,0x1(%rdi)	# Test flag
	je     1b		# Finished if not set

### Slow path ###
	add    $2,%sil		# Add "head" in old lock state
	mov    %esi,%edx
	and    $0xfe,%dh	# clear slowflag for comparison
	movzbl %dh,%eax
	cmp    %dl,%al		# If head == tail (uncontended)
	je     4f		# clear slowpath flag

	# Kick next CPU waiting for lock
3:	movzbl %sil,%esi
	callq  *pv_lock_ops.kick

	pop    %rbp
	retq   

	# Lock no longer contended - clear slowflag
4:	mov    %esi,%eax
	lock cmpxchg %dx,(%rdi)	# cmpxchg to clear flag
	cmp    %si,%ax
	jne    3b		# If clear failed, then kick

	pop    %rbp
	retq   

So when not using PV ticketlocks, the unlock sequence just has a
5-byte nop added to it, and the PV case is reasonable straightforward
aside from requiring a "lock add".


Results:
=======
base = 3.10-rc2 kernel
patched = base + this series

The test was on 32 core (model: Intel(R) Xeon(R) CPU X7560) HT disabled
with 32 KVM guest vcpu 8GB RAM.

+----------+------------+-----------+------------+-----------+--------------+
                    ebizzy (records/sec), higher is better
+----------+------------+-----------+------------+-----------+--------------+
 overcommit     base        stdev       patched      stdev     %improvement
+----------+------------+-----------+------------+-----------+--------------+
     1x      5574.9000    237.4997    5618.0000     94.0366       0.77311
     2x      2741.5000    561.3090    3332.0000    102.4738      21.53930
     3x      2146.2500    216.7718    2302.3333     76.3870       7.27237
     4x      1663.0000    141.9235    1753.7500     83.5220       5.45701
+----------+------------+-----------+------------+-----------+--------------+

+----------+------------+-----------+------------+-----------+--------------+
                     dbench (throughput), higher is better
+----------+------------+-----------+------------+-----------+--------------+
 overcommit     base        stdev       patched      stdev     %improvement
+----------+------------+-----------+------------+-----------+--------------+
     1x     14111.5600    754.4525   14645.9900    114.3087       3.78718
     2x      2481.6270     71.2665    2667.1280     73.8193       7.47498
     3x      1510.2483     31.8634    1503.8792     36.0777      -0.42173
     4x      1029.4875     16.9166    1039.7069     43.8840       0.99267
+----------+------------+-----------+------------+-----------+--------------+
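
(The %improvement column is (patched - base) / base * 100; e.g. for the
2x ebizzy case, (3332.0000 - 2741.5000) / 2741.5000 * 100 = 21.53930.)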

Your suggestions and comments are welcome.

github link: https://github.com/ktraghavendra/linux/tree/pvspinlock_v9


Please note that we set SPIN_THRESHOLD = 32k with this series, which
gives up a little overcommit performance on PLE machines and a little
overall performance on non-PLE machines.

The older series was tested by Attilio for the Xen implementation [1].

Jeremy Fitzhardinge (9):
 x86/spinlock: Replace pv spinlocks with pv ticketlocks
 x86/ticketlock: Collapse a layer of functions
 xen: Defer spinlock setup until boot CPU setup
 xen/pvticketlock: Xen implementation for PV ticket locks
 xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks
 x86/pvticketlock: Use callee-save for lock_spinning
 x86/pvticketlock: When paravirtualizing ticket locks, increment by 2
 x86/ticketlock: Add slowpath logic
 xen/pvticketlock: Allow interrupts to be enabled while blocking

Andrew Jones (1):
 Split jumplabel ratelimit

Stefano Stabellini (1):
 xen: Enable PV ticketlocks on HVM Xen

Srivatsa Vaddagiri (3):
 kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks
 kvm guest : Add configuration support to enable debug information for KVM Guests
 kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor

Raghavendra K T (5):
 x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks
 kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration
 Simplify kvm_for_each_vcpu with kvm_irq_delivery_to_apic
 Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
 Add directed yield in vcpu block path

---
The V8 link below has pointers to the previous patch series and the whole history.

V8 PV Ticketspinlock for Xen/KVM link:
[1] https://lkml.org/lkml/2012/5/2/119

 Documentation/virtual/kvm/cpuid.txt      |   4 +
 Documentation/virtual/kvm/hypercalls.txt |  13 ++
 arch/ia64/include/asm/kvm_host.h         |   5 +
 arch/powerpc/include/asm/kvm_host.h      |   5 +
 arch/s390/include/asm/kvm_host.h         |   5 +
 arch/x86/Kconfig                         |  10 +
 arch/x86/include/asm/kvm_host.h          |   7 +-
 arch/x86/include/asm/kvm_para.h          |  14 +-
 arch/x86/include/asm/paravirt.h          |  32 +--
 arch/x86/include/asm/paravirt_types.h    |  10 +-
 arch/x86/include/asm/spinlock.h          | 128 +++++++----
 arch/x86/include/asm/spinlock_types.h    |  16 +-
 arch/x86/include/uapi/asm/kvm_para.h     |   1 +
 arch/x86/kernel/kvm.c                    | 256 +++++++++++++++++++++
 arch/x86/kernel/paravirt-spinlocks.c     |  18 +-
 arch/x86/kvm/cpuid.c                     |   3 +-
 arch/x86/kvm/lapic.c                     |   5 +-
 arch/x86/kvm/x86.c                       |  39 +++-
 arch/x86/xen/smp.c                       |   3 +-
 arch/x86/xen/spinlock.c                  | 384 ++++++++++---------------------
 include/linux/jump_label.h               |  26 +--
 include/linux/jump_label_ratelimit.h     |  34 +++
 include/linux/kvm_host.h                 |   2 +-
 include/linux/perf_event.h               |   1 +
 include/uapi/linux/kvm_para.h            |   1 +
 kernel/jump_label.c                      |   1 +
 virt/kvm/kvm_main.c                      |   6 +-
 27 files changed, 645 insertions(+), 384 deletions(-)

end of thread, other threads:[~2013-07-11 11:14 UTC | newest]

Thread overview: 192+ messages
2013-06-01 19:21 [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks Raghavendra K T
2013-06-01 19:21 ` Raghavendra K T
2013-06-01 19:21 ` [PATCH RFC V9 1/19] x86/spinlock: Replace pv spinlocks with pv ticketlocks Raghavendra K T
2013-06-01 19:21   ` Raghavendra K T
2013-06-01 19:21   ` Raghavendra K T
2013-06-01 20:32   ` Jeremy Fitzhardinge
2013-06-01 20:32     ` Jeremy Fitzhardinge
2013-06-02  6:54     ` Raghavendra K T
2013-06-02  6:54       ` Raghavendra K T
2013-06-01 20:32   ` Jeremy Fitzhardinge
2013-06-01 19:22 ` [PATCH RFC V9 2/19] x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks Raghavendra K T
2013-06-01 19:22   ` Raghavendra K T
2013-06-01 19:22   ` Raghavendra K T
2013-06-03 15:28   ` Konrad Rzeszutek Wilk
2013-06-03 15:28     ` Konrad Rzeszutek Wilk
2013-06-01 19:22 ` [PATCH RFC V9 3/19] x86/ticketlock: Collapse a layer of functions Raghavendra K T
2013-06-01 19:22 ` Raghavendra K T
2013-06-01 19:22   ` Raghavendra K T
2013-06-03 15:28   ` Konrad Rzeszutek Wilk
2013-06-03 15:28     ` Konrad Rzeszutek Wilk
2013-06-01 19:22 ` [PATCH RFC V9 4/19] xen: Defer spinlock setup until boot CPU setup Raghavendra K T
2013-06-01 19:22   ` Raghavendra K T
2013-06-01 19:22   ` Raghavendra K T
2013-06-01 19:23 ` [PATCH RFC V9 5/19] xen/pvticketlock: Xen implementation for PV ticket locks Raghavendra K T
2013-06-01 19:23   ` Raghavendra K T
2013-06-03 16:03   ` Konrad Rzeszutek Wilk
2013-06-03 16:03     ` Konrad Rzeszutek Wilk
2013-06-04  7:21     ` Raghavendra K T
2013-06-04  7:21       ` Raghavendra K T
2013-06-01 19:23 ` Raghavendra K T
2013-06-01 19:23 ` [PATCH RFC V9 6/19] xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks Raghavendra K T
2013-06-01 19:23   ` Raghavendra K T
2013-06-01 19:23 ` Raghavendra K T
2013-06-01 19:23 ` [PATCH RFC V9 7/19] x86/pvticketlock: Use callee-save for lock_spinning Raghavendra K T
2013-06-01 19:23   ` Raghavendra K T
2013-06-01 19:23   ` Raghavendra K T
2013-06-01 19:24 ` [PATCH RFC V9 8/19] x86/pvticketlock: When paravirtualizing ticket locks, increment by 2 Raghavendra K T
2013-06-01 19:24   ` Raghavendra K T
2013-06-01 19:24   ` Raghavendra K T
2013-06-03 15:53   ` Konrad Rzeszutek Wilk
2013-06-03 15:53     ` Konrad Rzeszutek Wilk
2013-06-01 19:24 ` [PATCH RFC V9 9/19] Split out rate limiting from jump_label.h Raghavendra K T
2013-06-01 19:24 ` Raghavendra K T
2013-06-01 19:24   ` Raghavendra K T
2013-06-03 15:56   ` Konrad Rzeszutek Wilk
2013-06-03 15:56     ` Konrad Rzeszutek Wilk
2013-06-04  7:15     ` Raghavendra K T
2013-06-04  7:15       ` Raghavendra K T
2013-06-01 19:24 ` [PATCH RFC V9 10/19] x86/ticketlock: Add slowpath logic Raghavendra K T
2013-06-01 19:24 ` Raghavendra K T
2013-06-01 19:24   ` Raghavendra K T
2013-06-01 19:24 ` [PATCH RFC V9 11/19] xen/pvticketlock: Allow interrupts to be enabled while blocking Raghavendra K T
2013-06-01 19:24 ` Raghavendra K T
2013-06-01 19:24   ` Raghavendra K T
2013-06-01 19:25 ` [PATCH RFC V9 12/19] xen: Enable PV ticketlocks on HVM Xen Raghavendra K T
2013-06-01 19:25   ` Raghavendra K T
2013-06-03 15:57   ` Konrad Rzeszutek Wilk
2013-06-03 15:57     ` Konrad Rzeszutek Wilk
2013-06-04  7:16     ` Raghavendra K T
2013-06-04  7:16       ` Raghavendra K T
2013-06-04 14:44       ` Konrad Rzeszutek Wilk
2013-06-04 14:44         ` Konrad Rzeszutek Wilk
2013-06-04 15:00         ` Raghavendra K T
2013-06-04 15:00         ` Raghavendra K T
2013-06-01 19:25 ` Raghavendra K T
2013-06-01 19:25 ` [PATCH RFC V9 13/19] kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks Raghavendra K T
2013-06-01 19:25   ` Raghavendra K T
2013-06-01 19:25 ` Raghavendra K T
2013-06-01 19:25 ` [PATCH RFC V9 14/19] kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration Raghavendra K T
2013-06-01 19:25   ` Raghavendra K T
2013-06-01 19:25 ` Raghavendra K T
2013-06-01 19:25 ` [PATCH RFC V9 15/19] kvm guest : Add configuration support to enable debug information for KVM Guests Raghavendra K T
2013-06-01 19:25   ` Raghavendra K T
2013-06-01 19:25   ` Raghavendra K T
2013-06-01 19:25 ` [PATCH RFC V9 16/19] kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor Raghavendra K T
2013-06-01 19:25   ` Raghavendra K T
2013-06-03 16:00   ` Konrad Rzeszutek Wilk
2013-06-03 16:00     ` Konrad Rzeszutek Wilk
2013-06-04  7:19     ` Raghavendra K T
2013-06-04  7:19     ` Raghavendra K T
2013-06-01 19:25 ` Raghavendra K T
2013-06-01 19:26 ` [PATCH RFC V9 17/19] kvm hypervisor : Simplify kvm_for_each_vcpu with kvm_irq_delivery_to_apic Raghavendra K T
2013-06-01 19:26   ` Raghavendra K T
2013-06-01 19:26 ` Raghavendra K T
2013-06-01 19:26 ` [PATCH RFC V9 18/19] Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock Raghavendra K T
2013-06-01 19:26   ` Raghavendra K T
2013-06-01 19:26   ` Raghavendra K T
2013-06-03 16:04   ` Konrad Rzeszutek Wilk
2013-06-03 16:04     ` Konrad Rzeszutek Wilk
2013-06-04  7:22     ` Raghavendra K T
2013-06-04  7:22       ` Raghavendra K T
2013-06-01 19:26 ` [PATCH RFC V9 19/19] kvm hypervisor: Add directed yield in vcpu block path Raghavendra K T
2013-06-01 19:26   ` Raghavendra K T
2013-06-01 19:26   ` Raghavendra K T
2013-06-03 16:05   ` Konrad Rzeszutek Wilk
2013-06-03 16:05     ` Konrad Rzeszutek Wilk
2013-06-04  7:28     ` Raghavendra K T
2013-06-04  7:28       ` Raghavendra K T
2013-06-02  8:07 ` [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks Gleb Natapov
2013-06-02  8:07   ` Gleb Natapov
2013-06-02 16:20   ` Jiannan Ouyang
2013-06-02 16:20     ` Jiannan Ouyang
2013-06-03  1:40     ` Raghavendra K T
2013-06-03  1:40       ` Raghavendra K T
2013-06-03  6:21       ` Raghavendra K T
2013-06-07  6:15         ` Raghavendra K T
2013-06-07  6:15           ` Raghavendra K T
2013-06-07 13:29           ` Andrew Theurer
2013-06-07 13:29           ` Andrew Theurer
2013-06-07 23:41           ` Jiannan Ouyang
2013-06-07 23:41             ` Jiannan Ouyang
2013-06-07 23:41           ` Jiannan Ouyang
2013-06-03  6:21       ` Raghavendra K T
2013-06-02 16:20   ` Jiannan Ouyang
2013-06-25 14:50 ` Andrew Theurer
2013-06-25 14:50 ` Andrew Theurer
2013-06-26  8:45   ` Raghavendra K T
2013-06-26  8:45     ` Raghavendra K T
2013-06-26 11:37     ` Andrew Jones
2013-06-26 11:37       ` Andrew Jones
2013-06-26 12:52       ` Gleb Natapov
2013-06-26 12:52         ` Gleb Natapov
2013-06-26 13:40         ` Raghavendra K T
2013-06-26 13:40           ` Raghavendra K T
2013-06-26 14:39           ` Chegu Vinod
2013-06-26 15:37             ` Raghavendra K T
2013-06-26 15:37               ` Raghavendra K T
2013-06-26 16:11           ` Gleb Natapov
2013-06-26 16:11             ` Gleb Natapov
2013-06-26 17:54             ` Raghavendra K T
2013-07-09  9:11               ` Raghavendra K T
2013-07-09  9:11                 ` Raghavendra K T
2013-07-10 10:33                 ` Gleb Natapov
2013-07-10 10:33                   ` Gleb Natapov
2013-07-10 10:40                   ` Peter Zijlstra
2013-07-10 10:40                     ` Peter Zijlstra
2013-07-10 10:47                     ` Gleb Natapov
2013-07-10 10:47                       ` Gleb Natapov
2013-07-10 11:28                       ` Raghavendra K T
2013-07-10 11:28                         ` Raghavendra K T
2013-07-10 11:29                         ` Gleb Natapov
2013-07-10 11:29                           ` Gleb Natapov
2013-07-10 11:40                         ` Raghavendra K T
2013-07-10 11:40                           ` Raghavendra K T
2013-07-10 15:03                       ` Konrad Rzeszutek Wilk
2013-07-10 15:03                         ` Konrad Rzeszutek Wilk
2013-07-10 15:16                         ` Gleb Natapov
2013-07-10 15:16                           ` Gleb Natapov
2013-07-11  0:12                           ` Konrad Rzeszutek Wilk
2013-07-11  0:12                             ` Konrad Rzeszutek Wilk
2013-07-10 11:24                   ` Raghavendra K T
2013-07-10 11:24                   ` Raghavendra K T
2013-07-10 11:41                     ` Gleb Natapov
2013-07-10 11:41                       ` Gleb Natapov
2013-07-10 11:50                       ` Raghavendra K T
2013-07-10 11:50                         ` Raghavendra K T
2013-07-11  9:13                   ` Raghavendra K T
2013-07-11  9:13                   ` Raghavendra K T
2013-07-11  9:48                     ` Gleb Natapov
2013-07-11  9:48                       ` Gleb Natapov
2013-07-11 10:10                       ` Raghavendra K T
2013-07-11 10:11                         ` Gleb Natapov
2013-07-11 10:11                           ` Gleb Natapov
2013-07-11 10:53                           ` Raghavendra K T
2013-07-11 10:53                             ` Raghavendra K T
2013-07-11 10:56                             ` Gleb Natapov
2013-07-11 10:56                               ` Gleb Natapov
2013-07-11 11:14                               ` Raghavendra K T
2013-07-11 11:14                               ` Raghavendra K T
2013-07-11 10:10                       ` Raghavendra K T
2013-06-26 17:54             ` Raghavendra K T
2013-06-26 14:13         ` Konrad Rzeszutek Wilk
2013-06-26 14:13           ` Konrad Rzeszutek Wilk
2013-06-26 15:56         ` Andrew Theurer
2013-06-26 15:56           ` Andrew Theurer
2013-07-01  9:30           ` Raghavendra K T
2013-07-01  9:30             ` Raghavendra K T
  -- strict thread matches above, loose matches on Subject: below --
2013-06-01 19:21 Raghavendra K T
2013-06-01  8:21 Raghavendra K T
2013-06-01  8:21 Raghavendra K T
2013-06-01  8:21 ` Raghavendra K T
2013-06-01 19:21 ` Raghavendra KT
2013-06-01 19:21 ` Raghavendra KT
2013-06-01 20:14 ` Andi Kleen
2013-06-01 20:28   ` Jeremy Fitzhardinge
2013-06-01 20:28     ` Jeremy Fitzhardinge
2013-06-01 20:46     ` Andi Kleen
2013-06-01 20:46       ` Andi Kleen
2013-06-01 20:28   ` Jeremy Fitzhardinge
2013-06-04 10:58   ` Raghavendra K T
2013-06-04 10:58   ` Raghavendra K T
2013-06-01 20:14 ` Andi Kleen
