* [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
@ 2013-06-01 19:21 Raghavendra K T
0 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-06-01 19:21 UTC (permalink / raw)
To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
Cc: gregkh, kvm, linux-doc, peterz, drjones, virtualization, andi,
xen-devel, Raghavendra K T, habanero, riel, stefano.stabellini,
ouyang, avi.kivity, tglx, chegu_vinod, linux-kernel,
srivatsa.vaddagiri, attilio.rao, torvalds, stephan.diestelhorst
This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism. The series provides
implementation for both Xen and KVM.
Changes in V9:
- Changed spin_threshold to 32k to avoid excess halt exits that are
causing undercommit degradation (after PLE handler improvement).
- Added kvm_irq_delivery_to_apic (suggested by Gleb)
- Optimized halt exit path to use PLE handler
V8 of PV spinlock was posted last year. After Avi's suggestion to look
at improvements to the PLE handler, various optimizations in PLE handling
have been tried.
With this series we see that we can get a little more improvement on top
of that.
Ticket locks have an inherent problem in a virtualized case, because
the vCPUs are scheduled rather than running concurrently (ignoring
gang scheduled vCPUs). This can result in catastrophic performance
collapses when the vCPU scheduler doesn't schedule the correct "next"
vCPU, and ends up scheduling a vCPU which burns its entire timeslice
spinning. (Note that this is not the same problem as lock-holder
preemption, which this series also addresses; that's also a problem,
but not catastrophic).
(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)
Currently we deal with this by having PV spinlocks, which add a layer
of indirection in front of all the spinlock functions, and define a
completely new implementation for Xen (and for other pvops users, but
there are none at present).
PV ticketlocks keep the existing ticketlock implementation
(fastpath) as-is, but add a couple of pvops for the slow paths:
- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
iterations, then call out to the __ticket_lock_spinning() pvop,
which allows a backend to block the vCPU rather than spinning. This
pvop can set the lock into "slowpath state".
- When releasing a lock, if it is in "slowpath state", call
__ticket_unlock_kick() to kick the next vCPU in line awake. If the
lock is no longer in contention, it also clears the slowpath flag.
The "slowpath state" is stored in the LSB of the lock's tail
ticket. This has the effect of halving the maximum number of CPUs
(so, a "small ticket" can deal with 128 CPUs, and a "large ticket"
with 32768).
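As a minimal sketch of the layout described above (hypothetical simplified
types and constants; the names mirror the kernel's but this is not the
actual kernel code), with tickets incremented by 2 the LSB of the tail is
free to hold the slowpath flag, which is what halves the CPU count:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical "small ticket" layout: 8-bit head and tail.
 * Tickets advance by 2 (TICKET_LOCK_INC), freeing the tail's LSB
 * to act as the slowpath flag. */
#define TICKET_LOCK_INC       2
#define TICKET_SLOWPATH_FLAG  1

typedef struct {
        uint8_t head;
        uint8_t tail;
} tickets_t;

/* With increments of 2, an 8-bit tail distinguishes only
 * 256 / 2 = 128 CPUs instead of 256. */
static int max_cpus_small_ticket(void)
{
        return 256 / TICKET_LOCK_INC;
}

static int is_slowpath(tickets_t t)
{
        return t.tail & TICKET_SLOWPATH_FLAG;
}
```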
For KVM, one hypercall is introduced in the hypervisor that allows a vCPU
to kick another vCPU out of halt state.
The blocking of a vCPU is done using halt() in the (lock_spinning) slowpath.
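The halt/kick interplay can be sketched as follows (hypothetical stub
functions standing in for the real halt instruction and kick hypercall;
this is an illustration, not the series' actual code):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the KVM slowpath: a waiter that exceeds SPIN_THRESHOLD
 * halts instead of spinning; the unlocker issues a kick hypercall
 * to wake the vCPU waiting for the next ticket. */
static bool vcpu_halted;

static void halt(void)            { vcpu_halted = true;  }  /* vCPU blocks   */
static void kvm_kick_cpu(int cpu) { vcpu_halted = false; }  /* hypercall wakes */

static void kvm_lock_spinning(int want_ticket)
{
        /* SPIN_THRESHOLD exceeded: give the CPU back to the host
         * rather than burning the timeslice spinning. */
        halt();
}

static void kvm_unlock_kick(int next_cpu)
{
        /* Lock was in slowpath state: wake the next waiter in line. */
        kvm_kick_cpu(next_cpu);
}
```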
Overall, it results in a large reduction in code, it makes the native
and virtualized cases closer, and it removes a layer of indirection
around all the spinlock functions.
The fast path (taking an uncontended lock which isn't in "slowpath"
state) is optimal, identical to the non-paravirtualized case.
The inner part of the ticket lock code becomes:

	inc = xadd(&lock->tickets, inc);
	inc.tail &= ~TICKET_SLOWPATH_FLAG;

	if (likely(inc.head == inc.tail))
		goto out;

	for (;;) {
		unsigned count = SPIN_THRESHOLD;

		do {
			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
				goto out;
			cpu_relax();
		} while (--count);
		__ticket_lock_spinning(lock, inc.tail);
	}
out:	barrier();
which results in:
	push   %rbp
	mov    %rsp,%rbp
	mov    $0x200,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f		# Slowpath if lock in contention
	pop    %rbp
	retq

	### SLOWPATH START
1:	and    $-2,%edx
	movzbl %dl,%esi
2:	mov    $0x800,%eax
	jmp    4f
3:	pause
	sub    $0x1,%eax
	je     5f
4:	movzbl (%rdi),%ecx
	cmp    %cl,%dl
	jne    3b
	pop    %rbp
	retq
5:	callq  *__ticket_lock_spinning
	jmp    2b
	### SLOWPATH END
With CONFIG_PARAVIRT_SPINLOCKS=n, the code changes slightly: the
fastpath case is straight through (taking the lock without
contention), and the spin loop is out of line:
	push   %rbp
	mov    %rsp,%rbp
	mov    $0x100,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f
	pop    %rbp
	retq

	### SLOWPATH START
1:	pause
	movzbl (%rdi),%eax
	cmp    %dl,%al
	jne    1b
	pop    %rbp
	retq
	### SLOWPATH END
The unlock code is complicated by the need to both add to the lock's
"head" and fetch the slowpath flag from "tail". This version of the
patch uses a locked add to do this, followed by a test to see if the
slowflag is set. The lock prefix acts as a full memory barrier, so we
can be sure that other CPUs will have seen the unlock before we read
the flag (without the barrier the read could be fetched from the
store queue before it hits memory, which could result in a deadlock).
Since this is all unnecessary complication if you're not using PV ticket
locks, the patch also uses the jump-label machinery to use the standard
"add"-based unlock in the non-PV case:
	if (TICKET_SLOWPATH_FLAG &&
	    static_key_false(&paravirt_ticketlocks_enabled)) {
		arch_spinlock_t prev;

		prev = *lock;
		add_smp(&lock->tickets.head, TICKET_LOCK_INC);

		/* add_smp() is a full mb() */

		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
			__ticket_unlock_slowpath(lock, prev);
	} else
		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
which generates:
	push   %rbp
	mov    %rsp,%rbp
	nop5			# replaced by 5-byte jmp 2f when PV enabled

	# non-PV unlock
	addb   $0x2,(%rdi)
1:	pop    %rbp
	retq

	### PV unlock ###
2:	movzwl (%rdi),%esi	# Fetch prev
	lock addb $0x2,(%rdi)	# Do unlock
	testb  $0x1,0x1(%rdi)	# Test flag
	je     1b		# Finished if not set

	### Slow path ###
	add    $2,%sil		# Add "head" in old lock state
	mov    %esi,%edx
	and    $0xfe,%dh	# clear slowflag for comparison
	movzbl %dh,%eax
	cmp    %dl,%al		# If head == tail (uncontended)
	je     4f		# clear slowpath flag

	# Kick next CPU waiting for lock
3:	movzbl %sil,%esi
	callq  *pv_lock_ops.kick
	pop    %rbp
	retq

	# Lock no longer contended - clear slowflag
4:	mov    %esi,%eax
	lock cmpxchg %dx,(%rdi)	# cmpxchg to clear flag
	cmp    %si,%ax
	jne    3b		# If clear failed, then kick
	pop    %rbp
	retq
So when not using PV ticketlocks, the unlock sequence just has a
5-byte nop added to it, and the PV case is reasonably straightforward
aside from requiring a "lock add".
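The unlock slow path above (clear the flag if uncontended, otherwise kick)
can be sketched in C roughly as follows (a hypothetical simplified model,
with a plain compare standing in for the lock cmpxchg; not the actual
kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of __ticket_unlock_slowpath: using the lock state captured
 * *before* the unlocking add, decide whether to clear the slowpath
 * flag (lock now uncontended) or to kick the next waiter. */
#define TICKET_LOCK_INC       2
#define TICKET_SLOWPATH_FLAG  1

typedef struct { uint8_t head, tail; } arch_spinlock_t;

static int kicked;
static void kick(uint8_t ticket) { kicked++; }  /* stub for pv kick */

static void ticket_unlock_slowpath(arch_spinlock_t *lock, arch_spinlock_t old)
{
        arch_spinlock_t cur = old;

        cur.head += TICKET_LOCK_INC;    /* what the unlock just did */
        if (cur.head == (cur.tail & ~TICKET_SLOWPATH_FLAG)) {
                /* Uncontended: try to clear the flag; if the lock has
                 * changed meanwhile, fall back to kicking instead. */
                if (lock->head == cur.head && lock->tail == cur.tail)
                        lock->tail &= ~TICKET_SLOWPATH_FLAG;
                else
                        kick(cur.head);
        } else {
                kick(cur.head);         /* wake the vCPU holding this ticket */
        }
}
```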
Results:
=======
base = 3.10-rc2 kernel
patched = base + this series
The test was on a 32-core machine (model: Intel(R) Xeon(R) CPU X7560) with
HT disabled, running 32-vCPU KVM guests with 8GB RAM.
+-----+------------+-----------+------------+-----------+--------------+
              ebizzy (records/sec), higher is better
+-----+------------+-----------+------------+-----------+--------------+
            base       stdev      patched      stdev     %improvement
+-----+------------+-----------+------------+-----------+--------------+
  1x    5574.9000    237.4997    5618.0000     94.0366       0.77311
  2x    2741.5000    561.3090    3332.0000    102.4738      21.53930
  3x    2146.2500    216.7718    2302.3333     76.3870       7.27237
  4x    1663.0000    141.9235    1753.7500     83.5220       5.45701
+-----+------------+-----------+------------+-----------+--------------+

+-----+------------+-----------+------------+-----------+--------------+
              dbench (Throughput), higher is better
+-----+------------+-----------+------------+-----------+--------------+
            base       stdev      patched      stdev     %improvement
+-----+------------+-----------+------------+-----------+--------------+
  1x   14111.5600    754.4525   14645.9900    114.3087       3.78718
  2x    2481.6270     71.2665    2667.1280     73.8193       7.47498
  3x    1510.2483     31.8634    1503.8792     36.0777      -0.42173
  4x    1029.4875     16.9166    1039.7069     43.8840       0.99267
+-----+------------+-----------+------------+-----------+--------------+
Your suggestions and comments are welcome.
github link: https://github.com/ktraghavendra/linux/tree/pvspinlock_v9
Please note that we set SPIN_THRESHOLD = 32k with this series,
which eats up a little bit of the overcommit performance of PLE machines
and the overall performance of non-PLE machines.
The older series was tested by Attilio for the Xen implementation [1].
Jeremy Fitzhardinge (9):
x86/spinlock: Replace pv spinlocks with pv ticketlocks
x86/ticketlock: Collapse a layer of functions
xen: Defer spinlock setup until boot CPU setup
xen/pvticketlock: Xen implementation for PV ticket locks
xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks
x86/pvticketlock: Use callee-save for lock_spinning
x86/pvticketlock: When paravirtualizing ticket locks, increment by 2
x86/ticketlock: Add slowpath logic
xen/pvticketlock: Allow interrupts to be enabled while blocking
Andrew Jones (1):
Split jumplabel ratelimit
Stefano Stabellini (1):
xen: Enable PV ticketlocks on HVM Xen
Srivatsa Vaddagiri (3):
kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks
kvm guest : Add configuration support to enable debug information for KVM Guests
kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor
Raghavendra K T (5):
x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks
kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration
Simplify kvm_for_each_vcpu with kvm_irq_delivery_to_apic
Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
Add directed yield in vcpu block path
---
The link in V8 has links to the previous patch series and the whole history.
V8 PV ticket spinlock for Xen/KVM link:
[1] https://lkml.org/lkml/2012/5/2/119
 Documentation/virtual/kvm/cpuid.txt      |   4 +
 Documentation/virtual/kvm/hypercalls.txt |  13 ++
 arch/ia64/include/asm/kvm_host.h         |   5 +
 arch/powerpc/include/asm/kvm_host.h      |   5 +
 arch/s390/include/asm/kvm_host.h         |   5 +
 arch/x86/Kconfig                         |  10 +
 arch/x86/include/asm/kvm_host.h          |   7 +-
 arch/x86/include/asm/kvm_para.h          |  14 +-
 arch/x86/include/asm/paravirt.h          |  32 +--
 arch/x86/include/asm/paravirt_types.h    |  10 +-
 arch/x86/include/asm/spinlock.h          | 128 +++++++----
 arch/x86/include/asm/spinlock_types.h    |  16 +-
 arch/x86/include/uapi/asm/kvm_para.h     |   1 +
 arch/x86/kernel/kvm.c                    | 256 +++++++++++++++++++++
 arch/x86/kernel/paravirt-spinlocks.c     |  18 +-
 arch/x86/kvm/cpuid.c                     |   3 +-
 arch/x86/kvm/lapic.c                     |   5 +-
 arch/x86/kvm/x86.c                       |  39 +++-
 arch/x86/xen/smp.c                       |   3 +-
 arch/x86/xen/spinlock.c                  | 384 ++++++++++---------------------
 include/linux/jump_label.h               |  26 +--
 include/linux/jump_label_ratelimit.h     |  34 +++
 include/linux/kvm_host.h                 |   2 +-
 include/linux/perf_event.h               |   1 +
 include/uapi/linux/kvm_para.h            |   1 +
 kernel/jump_label.c                      |   1 +
 virt/kvm/kvm_main.c                      |   6 +-
 27 files changed, 645 insertions(+), 384 deletions(-)
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-01 19:21 ` Raghavendra K T
@ 2013-06-02 8:07 ` Gleb Natapov
-1 siblings, 0 replies; 96+ messages in thread
From: Gleb Natapov @ 2013-06-02 8:07 UTC (permalink / raw)
To: Raghavendra K T
Cc: mingo, jeremy, x86, konrad.wilk, hpa, pbonzini, linux-doc,
habanero, xen-devel, peterz, mtosatti, stefano.stabellini, andi,
attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
drjones, virtualization, srivatsa.vaddagiri
On Sun, Jun 02, 2013 at 12:51:25AM +0530, Raghavendra K T wrote:
>
> This series replaces the existing paravirtualized spinlock mechanism
> with a paravirtualized ticketlock mechanism. The series provides
> implementation for both Xen and KVM.
>
High level question here. We have a big hope for the "Preemptable Ticket
Spinlock" patch series by Jiannan Ouyang to solve most, if not all, of the
ticket-spinlock problems in overcommit scenarios without the need for PV.
So how does this patch series compare with his patches on PLE-enabled processors?
> Changes in V9:
> - Changed spin_threshold to 32k to avoid excess halt exits that are
> causing undercommit degradation (after PLE handler improvement).
> - Added kvm_irq_delivery_to_apic (suggested by Gleb)
> - Optimized halt exit path to use PLE handler
>
> V8 of PVspinlock was posted last year. After Avi's suggestions to look
> at PLE handler's improvements, various optimizations in PLE handling
> have been tried.
>
> With this series we see that we can get a little more improvement on top
> of that.
>
> Ticket locks have an inherent problem in a virtualized case, because
> the vCPUs are scheduled rather than running concurrently (ignoring
> gang scheduled vCPUs). This can result in catastrophic performance
> collapses when the vCPU scheduler doesn't schedule the correct "next"
> vCPU, and ends up scheduling a vCPU which burns its entire timeslice
> spinning. (Note that this is not the same problem as lock-holder
> preemption, which this series also addresses; that's also a problem,
> but not catastrophic).
>
> (See Thomas Friebel's talk "Prevent Guests from Spinning Around"
> http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)
>
> Currently we deal with this by having PV spinlocks, which adds a layer
> of indirection in front of all the spinlock functions, and defining a
> completely new implementation for Xen (and for other pvops users, but
> there are none at present).
>
> PV ticketlocks keep the existing ticketlock implementation
> (fastpath) as-is, but add a couple of pvops for the slow paths:
>
> - If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
> iterations, then call out to the __ticket_lock_spinning() pvop,
> which allows a backend to block the vCPU rather than spinning. This
> pvop can set the lock into "slowpath state".
>
> - When releasing a lock, if it is in "slowpath state", then call
> __ticket_unlock_kick() to kick the next vCPU in line awake. If the
> lock is no longer in contention, it also clears the slowpath flag.
>
> The "slowpath state" is stored in the LSB of the lock's tail
> ticket. This has the effect of reducing the max number of CPUs by
> half (so, a "small ticket" can deal with 128 CPUs, and "large ticket"
> 32768).
>
> For KVM, one hypercall is introduced in the hypervisor that allows a vCPU
> to kick another vCPU out of halt state.
> The blocking of a vCPU is done using halt() in the (lock_spinning) slowpath.
>
> Overall, it results in a large reduction in code, it makes the native
> and virtualized cases closer, and it removes a layer of indirection
> around all the spinlock functions.
>
> The fast path (taking an uncontended lock which isn't in "slowpath"
> state) is optimal, identical to the non-paravirtualized case.
>
> The inner part of ticket lock code becomes:
> inc = xadd(&lock->tickets, inc);
> inc.tail &= ~TICKET_SLOWPATH_FLAG;
>
> if (likely(inc.head == inc.tail))
>         goto out;
>
> for (;;) {
>         unsigned count = SPIN_THRESHOLD;
>
>         do {
>                 if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
>                         goto out;
>                 cpu_relax();
>         } while (--count);
>
>         __ticket_lock_spinning(lock, inc.tail);
> }
> out:    barrier();
> which results in:
> push %rbp
> mov %rsp,%rbp
>
> mov $0x200,%eax
> lock xadd %ax,(%rdi)
> movzbl %ah,%edx
> cmp %al,%dl
> jne 1f # Slowpath if lock in contention
>
> pop %rbp
> retq
>
> ### SLOWPATH START
> 1: and $-2,%edx
> movzbl %dl,%esi
>
> 2: mov $0x800,%eax
> jmp 4f
>
> 3: pause
> sub $0x1,%eax
> je 5f
>
> 4: movzbl (%rdi),%ecx
> cmp %cl,%dl
> jne 3b
>
> pop %rbp
> retq
>
> 5: callq *__ticket_lock_spinning
> jmp 2b
> ### SLOWPATH END
>
> with CONFIG_PARAVIRT_SPINLOCKS=n, the code has changed slightly, where
> the fastpath case is straight through (taking the lock without
> contention), and the spin loop is out of line:
>
> push %rbp
> mov %rsp,%rbp
>
> mov $0x100,%eax
> lock xadd %ax,(%rdi)
> movzbl %ah,%edx
> cmp %al,%dl
> jne 1f
>
> pop %rbp
> retq
>
> ### SLOWPATH START
> 1: pause
> movzbl (%rdi),%eax
> cmp %dl,%al
> jne 1b
>
> pop %rbp
> retq
> ### SLOWPATH END
>
> The unlock code is complicated by the need to both add to the lock's
> "head" and fetch the slowpath flag from "tail". This version of the
> patch uses a locked add to do this, followed by a test to see if the
> slowflag is set. The lock prefix acts as a full memory barrier, so we
> can be sure that other CPUs will have seen the unlock before we read
> the flag (without the barrier the read could be fetched from the
> store queue before it hits memory, which could result in a deadlock).
>
> This is all unnecessary complication if you're not using PV ticket
> locks, so the patch also uses the jump-label machinery to select the
> standard "add"-based unlock in the non-PV case.
>
> if (TICKET_SLOWPATH_FLAG &&
>     static_key_false(&paravirt_ticketlocks_enabled)) {
>         arch_spinlock_t prev;
>
>         prev = *lock;
>         add_smp(&lock->tickets.head, TICKET_LOCK_INC);
>
>         /* add_smp() is a full mb() */
>
>         if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
>                 __ticket_unlock_slowpath(lock, prev);
> } else
>         __add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
> which generates:
> push %rbp
> mov %rsp,%rbp
>
> nop5 # replaced by 5-byte jmp 2f when PV enabled
>
> # non-PV unlock
> addb $0x2,(%rdi)
>
> 1: pop %rbp
> retq
>
> ### PV unlock ###
> 2: movzwl (%rdi),%esi # Fetch prev
>
> lock addb $0x2,(%rdi) # Do unlock
>
> testb $0x1,0x1(%rdi) # Test flag
> je 1b # Finished if not set
>
> ### Slow path ###
> add $2,%sil # Add "head" in old lock state
> mov %esi,%edx
> and $0xfe,%dh # clear slowflag for comparison
> movzbl %dh,%eax
> cmp %dl,%al # If head == tail (uncontended)
> je 4f # clear slowpath flag
>
> # Kick next CPU waiting for lock
> 3: movzbl %sil,%esi
> callq *pv_lock_ops.kick
>
> pop %rbp
> retq
>
> # Lock no longer contended - clear slowflag
> 4: mov %esi,%eax
> lock cmpxchg %dx,(%rdi) # cmpxchg to clear flag
> cmp %si,%ax
> jne 3b # If clear failed, then kick
>
> pop %rbp
> retq
>
> So when not using PV ticketlocks, the unlock sequence just has a
> 5-byte nop added to it, and the PV case is reasonably straightforward
> aside from requiring a "lock add".
>
>
> Results:
> =======
> base = 3.10-rc2 kernel
> patched = base + this series
>
> The test was on a 32-core machine (model: Intel(R) Xeon(R) CPU X7560), HT
> disabled, with a 32-vCPU KVM guest with 8GB RAM.
>
> +-----------+-----------+-----------+------------+-----------+
> ebizzy (records/sec) higher is better
> +-----------+-----------+-----------+------------+-----------+
> base stdev patched stdev %improvement
> +-----------+-----------+-----------+------------+-----------+
> 1x 5574.9000 237.4997 5618.0000 94.0366 0.77311
> 2x 2741.5000 561.3090 3332.0000 102.4738 21.53930
> 3x 2146.2500 216.7718 2302.3333 76.3870 7.27237
> 4x 1663.0000 141.9235 1753.7500 83.5220 5.45701
> +-----------+-----------+-----------+------------+-----------+
> +-----------+-----------+-----------+------------+-----------+
> dbench (Throughput) higher is better
> +-----------+-----------+-----------+------------+-----------+
> base stdev patched stdev %improvement
> +-----------+-----------+-----------+------------+-----------+
> 1x 14111.5600 754.4525 14645.9900 114.3087 3.78718
> 2x 2481.6270 71.2665 2667.1280 73.8193 7.47498
> 3x 1510.2483 31.8634 1503.8792 36.0777 -0.42173
> 4x 1029.4875 16.9166 1039.7069 43.8840 0.99267
> +-----------+-----------+-----------+------------+-----------+
>
> Your suggestions and comments are welcome.
>
> github link: https://github.com/ktraghavendra/linux/tree/pvspinlock_v9
>
>
> Please note that we set SPIN_THRESHOLD = 32k with this series,
> which eats up a little of the overcommit performance of PLE machines
> and of the overall performance of non-PLE machines.
>
> The older series was tested by Attilio for the Xen implementation [1].
>
> Jeremy Fitzhardinge (9):
> x86/spinlock: Replace pv spinlocks with pv ticketlocks
> x86/ticketlock: Collapse a layer of functions
> xen: Defer spinlock setup until boot CPU setup
> xen/pvticketlock: Xen implementation for PV ticket locks
> xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks
> x86/pvticketlock: Use callee-save for lock_spinning
> x86/pvticketlock: When paravirtualizing ticket locks, increment by 2
> x86/ticketlock: Add slowpath logic
> xen/pvticketlock: Allow interrupts to be enabled while blocking
>
> Andrew Jones (1):
> Split jumplabel ratelimit
>
> Stefano Stabellini (1):
> xen: Enable PV ticketlocks on HVM Xen
>
> Srivatsa Vaddagiri (3):
> kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks
> kvm guest : Add configuration support to enable debug information for KVM Guests
> kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor
>
> Raghavendra K T (5):
> x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks
> kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration
> Simplify kvm_for_each_vcpu with kvm_irq_delivery_to_apic
> Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
> Add directed yield in vcpu block path
>
> ---
> The link in V8 has links to the previous patch series and also the whole history.
>
> V8 PV Ticketspinlock for Xen/KVM link:
> [1] https://lkml.org/lkml/2012/5/2/119
>
> Documentation/virtual/kvm/cpuid.txt | 4 +
> Documentation/virtual/kvm/hypercalls.txt | 13 ++
> arch/ia64/include/asm/kvm_host.h | 5 +
> arch/powerpc/include/asm/kvm_host.h | 5 +
> arch/s390/include/asm/kvm_host.h | 5 +
> arch/x86/Kconfig | 10 +
> arch/x86/include/asm/kvm_host.h | 7 +-
> arch/x86/include/asm/kvm_para.h | 14 +-
> arch/x86/include/asm/paravirt.h | 32 +--
> arch/x86/include/asm/paravirt_types.h | 10 +-
> arch/x86/include/asm/spinlock.h | 128 +++++++----
> arch/x86/include/asm/spinlock_types.h | 16 +-
> arch/x86/include/uapi/asm/kvm_para.h | 1 +
> arch/x86/kernel/kvm.c | 256 +++++++++++++++++++++
> arch/x86/kernel/paravirt-spinlocks.c | 18 +-
> arch/x86/kvm/cpuid.c | 3 +-
> arch/x86/kvm/lapic.c | 5 +-
> arch/x86/kvm/x86.c | 39 +++-
> arch/x86/xen/smp.c | 3 +-
> arch/x86/xen/spinlock.c | 384 ++++++++++---------------------
> include/linux/jump_label.h | 26 +--
> include/linux/jump_label_ratelimit.h | 34 +++
> include/linux/kvm_host.h | 2 +-
> include/linux/perf_event.h | 1 +
> include/uapi/linux/kvm_para.h | 1 +
> kernel/jump_label.c | 1 +
> virt/kvm/kvm_main.c | 6 +-
> 27 files changed, 645 insertions(+), 384 deletions(-)
--
Gleb.
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-02 8:07 ` Gleb Natapov
@ 2013-06-02 16:20 ` Jiannan Ouyang
0 siblings, 0 replies; 96+ messages in thread
From: Jiannan Ouyang @ 2013-06-02 16:20 UTC (permalink / raw)
To: Gleb Natapov
Cc: Raghavendra K T, Ingo Molnar, Jeremy Fitzhardinge, x86,
konrad.wilk, H. Peter Anvin, pbonzini, linux-doc,
Andrew M. Theurer, xen-devel, Peter Zijlstra, Marcelo Tosatti,
stefano.stabellini, andi, attilio.rao, Jiannan Ouyang, gregkh,
agraf, chegu vinod, torvalds, Avi Kivity, Thomas Gleixner, KVM,
LKML, stephan.diestelhorst, Rik van Riel, Andrew Jones,
virtualization, Srivatsa Vaddagiri
On Sun, Jun 2, 2013 at 1:07 AM, Gleb Natapov <gleb@redhat.com> wrote:
> High level question here. We have a big hope for the "Preemptable Ticket
> Spinlock" patch series by Jiannan Ouyang to solve most, if not all, of the
> ticket-spinlock problems in overcommit scenarios without the need for PV.
> So how does this patch series compare with his patches on PLE-enabled processors?
>
No experiment results yet.
An error is reported on a 20-core VM. I'm in the middle of an internship
relocation, and will start work on it next week.
--
Jiannan
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-02 16:20 ` Jiannan Ouyang
@ 2013-06-03 1:40 ` Raghavendra K T
-1 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-06-03 1:40 UTC (permalink / raw)
To: Jiannan Ouyang, Gleb Natapov
Cc: Ingo Molnar, Jeremy Fitzhardinge, x86, konrad.wilk,
H. Peter Anvin, pbonzini, linux-doc, Andrew M. Theurer,
xen-devel, Peter Zijlstra, Marcelo Tosatti, stefano.stabellini,
andi, attilio.rao, gregkh, agraf, chegu vinod, torvalds,
Avi Kivity, Thomas Gleixner, KVM, LKML, stephan.diestelhorst,
Rik van Riel, Andrew Jones, virtualization, Srivatsa Vaddagiri
On 06/02/2013 09:50 PM, Jiannan Ouyang wrote:
> On Sun, Jun 2, 2013 at 1:07 AM, Gleb Natapov <gleb@redhat.com> wrote:
>
>> High level question here. We have a big hope for the "Preemptable Ticket
>> Spinlock" patch series by Jiannan Ouyang to solve most, if not all, of the
>> ticket-spinlock problems in overcommit scenarios without the need for PV.
>> So how does this patch series compare with his patches on PLE-enabled processors?
>>
>
> No experiment results yet.
>
> An error is reported on a 20-core VM. I'm in the middle of an internship
> relocation, and will start work on it next week.
Preemptable spinlocks' testing update:
I hit the same softlockup problem that Andrew had reported while testing
on a 32-core machine with 32 guest vCPUs.
After that I started tuning TIMEOUT_UNIT, and when I went down to (1<<8),
things seemed to be manageable for the undercommit cases.
But I still see degradation for undercommit w.r.t. the baseline itself on
the 32-core machine, even after tuning (37.5% degradation w.r.t. the
baseline).
I can give the full report after all the tests complete.
For the overcommit cases, I again started hitting softlockups (and the
degradation is worse). But as I said in the preemptable thread, the
concept of preemptable locks looks promising (though I am still not a
fan of the embedded TIMEOUT mechanism).
Here is my opinion of the TODOs for preemptable locks to make them better
(I think I need to paste this in the preemptable thread also):
1. The current TIMEOUT_UNIT seems to be on the higher side, and it does
not scale well with large guests or with overcommit. We need some sort of
adaptive mechanism; better still would be different TIMEOUT_UNITs for
different types of lock. The hashing mechanism that was used in Rik's
spinlock backoff series probably fits better.
2. I do not think TIMEOUT_UNIT by itself would work great when we have a
big queue for the lock (large guests / overcommit).
One way is to add a PV hook that does a yield hypercall immediately for
the waiters above some THRESHOLD so that they don't burn the CPU.
(I can do a POC to check if that idea works in improving the situation
at some later point in time.)
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-03 1:40 ` Raghavendra K T
@ 2013-06-03 6:21 ` Raghavendra K T
2013-06-07 6:15 ` Raghavendra K T
0 siblings, 1 reply; 96+ messages in thread
From: Raghavendra K T @ 2013-06-03 6:21 UTC (permalink / raw)
To: Jiannan Ouyang, Gleb Natapov
Cc: Ingo Molnar, Jeremy Fitzhardinge, x86, konrad.wilk,
H. Peter Anvin, pbonzini, linux-doc, Andrew M. Theurer,
xen-devel, Peter Zijlstra, Marcelo Tosatti, stefano.stabellini,
andi, attilio.rao, gregkh, agraf, chegu vinod, torvalds,
Avi Kivity, Thomas Gleixner, KVM, LKML, stephan.diestelhorst,
Rik van Riel, Andrew Jones, virtualization, Srivatsa Vaddagiri
On 06/03/2013 07:10 AM, Raghavendra K T wrote:
> On 06/02/2013 09:50 PM, Jiannan Ouyang wrote:
>> On Sun, Jun 2, 2013 at 1:07 AM, Gleb Natapov <gleb@redhat.com> wrote:
>>
>>> High level question here. We have a big hope for the "Preemptable Ticket
>>> Spinlock" patch series by Jiannan Ouyang to solve most, if not all, of the
>>> ticket-spinlock problems in overcommit scenarios without the need for PV.
>>> So how does this patch series compare with his patches on PLE-enabled
>>> processors?
>>>
>>
>> No experiment results yet.
>>
>> An error is reported on a 20-core VM. I'm in the middle of an internship
>> relocation, and will start work on it next week.
>
> Preemptable spinlocks' testing update:
> I hit the same softlockup problem that Andrew had reported while testing
> on a 32-core machine with 32 guest vCPUs.
>
> After that I started tuning TIMEOUT_UNIT, and when I went down to (1<<8),
> things seemed to be manageable for the undercommit cases.
> But I still see degradation for undercommit w.r.t. the baseline itself on
> the 32-core machine, even after tuning (37.5% degradation w.r.t. the
> baseline).
> I can give the full report after all the tests complete.
>
> For the overcommit cases, I again started hitting softlockups (and the
> degradation is worse). But as I said in the preemptable thread, the
> concept of preemptable locks looks promising (though I am still not a
> fan of the embedded TIMEOUT mechanism).
>
> Here is my opinion of the TODOs for preemptable locks to make them better
> (I think I need to paste this in the preemptable thread also):
>
> 1. The current TIMEOUT_UNIT seems to be on the higher side, and it does
> not scale well with large guests or with overcommit. We need some sort of
> adaptive mechanism; better still would be different TIMEOUT_UNITs for
> different types of lock. The hashing mechanism that was used in Rik's
> spinlock backoff series probably fits better.
>
> 2. I do not think TIMEOUT_UNIT by itself would work great when we have a
> big queue for the lock (large guests / overcommit).
> One way is to add a PV hook that does a yield hypercall immediately for
> the waiters above some THRESHOLD so that they don't burn the CPU.
> (I can do a POC to check if that idea works in improving the situation
> at some later point in time.)
>
Preemptable-lock results from my run with 2^8 TIMEOUT:
+-----------+-----------+-----------+------------+-----------+
ebizzy (records/sec) higher is better
+-----------+-----------+-----------+------------+-----------+
base stdev patched stdev %improvement
+-----------+-----------+-----------+------------+-----------+
1x 5574.9000 237.4997 3484.2000 113.4449 -37.50202
2x 2741.5000 561.3090 351.5000 140.5420 -87.17855
3x 2146.2500 216.7718 194.8333 85.0303 -90.92215
4x 1663.0000 141.9235 101.0000 57.7853 -93.92664
+-----------+-----------+-----------+------------+-----------+
+-----------+-----------+-----------+------------+-----------+
dbench (Throughput) higher is better
+-----------+-----------+-----------+------------+-----------+
base stdev patched stdev %improvement
+-----------+-----------+-----------+------------+-----------+
1x 14111.5600 754.4525 3930.1602 2547.2369 -72.14936
2x 2481.6270 71.2665 181.1816 89.5368 -92.69908
3x 1510.2483 31.8634 104.7243 53.2470 -93.06576
4x 1029.4875 16.9166 72.3738 38.2432 -92.96992
+-----------+-----------+-----------+------------+-----------+
Note: we cannot trust the overcommit results because of soft lockups.
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-03 6:21 ` Raghavendra K T
@ 2013-06-07 6:15 ` Raghavendra K T
0 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-06-07 6:15 UTC (permalink / raw)
To: Jiannan Ouyang, Gleb Natapov
Cc: Ingo Molnar, Jeremy Fitzhardinge, x86, konrad.wilk,
H. Peter Anvin, pbonzini, linux-doc, Andrew M. Theurer,
xen-devel, Peter Zijlstra, Marcelo Tosatti, stefano.stabellini,
andi, attilio.rao, gregkh, agraf, chegu vinod, torvalds,
Avi Kivity, Thomas Gleixner, KVM, LKML, stephan.diestelhorst,
Rik van Riel, Andrew Jones, virtualization, Srivatsa Vaddagiri
On 06/03/2013 11:51 AM, Raghavendra K T wrote:
> On 06/03/2013 07:10 AM, Raghavendra K T wrote:
>> On 06/02/2013 09:50 PM, Jiannan Ouyang wrote:
>>> On Sun, Jun 2, 2013 at 1:07 AM, Gleb Natapov <gleb@redhat.com> wrote:
>>>
>>>> High level question here. We have a big hope for "Preemptable Ticket
>>>> Spinlock" patch series by Jiannan Ouyang to solve most, if not all,
>>>> ticketing spinlocks in overcommit scenarios problem without need for
>>>> PV.
>>>> So how this patch series compares with his patches on PLE enabled
>>>> processors?
>>>>
>>>
>>> No experiment results yet.
>>>
>>> An error is reported on a 20 core VM. I'm during an internship
>>> relocation, and will start work on it next week.
>>
>> Preemptable spinlocks' testing update:
>> I hit the same softlockup problem while testing on 32 core machine with
>> 32 guest vcpus that Andrew had reported.
>>
>> After that i started tuning TIMEOUT_UNIT, and when I went till (1<<8),
>> things seemed to be manageable for undercommit cases.
>> But I still see degradation for undercommit w.r.t baseline itself on 32
>> core machine (after tuning).
>>
>> (37.5% degradation w.r.t base line).
>> I can give the full report after the all tests complete.
>>
>> For over-commit cases, I again started hitting softlockups (and
>> degradation is worse). But as I said in the preemptable thread, the
>> concept of preemptable locks looks promising (though I am still not a
>> fan of embedded TIMEOUT mechanism)
>>
>> Here is my opinion of TODOs for preemptable locks to make it better ( I
>> think I need to paste in the preemptable thread also)
>>
>> 1. Current TIMEOUT UNIT seem to be on higher side and also it does not
>> scale well with large guests and also overcommit. we need to have a
>> sort of adaptive mechanism and better is sort of different TIMEOUT_UNITS
>> for different types of lock too. The hashing mechanism that was used in
>> Rik's spinlock backoff series fits better probably.
>>
>> 2. I do not think TIMEOUT_UNIT itself would work great when we have a
>> big queue (for large guests / overcommits) for lock.
>> one way is to add a PV hook that does yield hypercall immediately for
>> the waiters above some THRESHOLD so that they don't burn the CPU.
>> ( I can do POC to check if that idea works in improving situation
>> at some later point of time)
>>
>
> Preemptable-lock results from my run with 2^8 TIMEOUT:
>
> +-----------+-----------+-----------+------------+-----------+
> ebizzy (records/sec) higher is better
> +-----------+-----------+-----------+------------+-----------+
> base stdev patched stdev %improvement
> +-----------+-----------+-----------+------------+-----------+
> 1x 5574.9000 237.4997 3484.2000 113.4449 -37.50202
> 2x 2741.5000 561.3090 351.5000 140.5420 -87.17855
> 3x 2146.2500 216.7718 194.8333 85.0303 -90.92215
> 4x 1663.0000 141.9235 101.0000 57.7853 -93.92664
> +-----------+-----------+-----------+------------+-----------+
> +-----------+-----------+-----------+------------+-----------+
> dbench (Throughput) higher is better
> +-----------+-----------+-----------+------------+-----------+
> base stdev patched stdev %improvement
> +-----------+-----------+-----------+------------+-----------+
> 1x 14111.5600 754.4525 3930.1602 2547.2369 -72.14936
> 2x 2481.6270 71.2665 181.1816 89.5368 -92.69908
> 3x 1510.2483 31.8634 104.7243 53.2470 -93.06576
> 4x 1029.4875 16.9166 72.3738 38.2432 -92.96992
> +-----------+-----------+-----------+------------+-----------+
>
> Note we can not trust on overcommit results because of softlock-ups
>
Hi, I tried:
(1) TIMEOUT = (2^7)
(2) adding a yield hypercall that uses kvm_vcpu_on_spin() to do a
directed yield to other vCPUs.
Now I do not see any soft lockups in the overcommit cases, and the
results are better (except ebizzy 1x). For dbench, the results are now
closer to base, with even an improvement at 4x:
+-----------+-----------+-----------+------------+-----------+
ebizzy (records/sec) higher is better
+-----------+-----------+-----------+------------+-----------+
base stdev patched stdev %improvement
+-----------+-----------+-----------+------------+-----------+
1x     5574.9000   237.4997    523.7000     1.4181   -90.60611
2x     2741.5000   561.3090    597.8000    34.9755   -78.19442
3x     2146.2500   216.7718    902.6667    82.4228   -57.94215
4x     1663.0000   141.9235   1245.0000    67.2989   -25.13530
+-----------+-----------+-----------+------------+-----------+
+-----------+-----------+-----------+------------+-----------+
dbench (Throughput) higher is better
+-----------+-----------+-----------+------------+-----------+
base stdev patched stdev %improvement
+-----------+-----------+-----------+------------+-----------+
1x    14111.5600   754.4525    884.9051    24.4723   -93.72922
2x     2481.6270    71.2665   2383.5700   333.2435    -3.95132
3x     1510.2483    31.8634   1477.7358    50.5126    -2.15279
4x     1029.4875    16.9166   1075.9225    13.9911     4.51050
+-----------+-----------+-----------+------------+-----------+
IMO a hash-based timeout is worth trying next.
I think a little more tuning will get even better results.
Jiannan, when you start working on this, I can share the patches I
tried, and help to get the best out of the preemptable lock idea if
you wish.
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-07 6:15 ` Raghavendra K T
(?)
@ 2013-06-07 13:29 ` Andrew Theurer
-1 siblings, 0 replies; 96+ messages in thread
From: Andrew Theurer @ 2013-06-07 13:29 UTC (permalink / raw)
To: Raghavendra K T
Cc: Jiannan Ouyang, Gleb Natapov, Ingo Molnar, Jeremy Fitzhardinge,
x86, konrad.wilk, H. Peter Anvin, pbonzini, linux-doc, xen-devel,
Peter Zijlstra, Marcelo Tosatti, stefano.stabellini, andi,
attilio.rao, gregkh, agraf, chegu vinod, torvalds, Avi Kivity,
Thomas Gleixner, KVM, LKML, stephan.diestelhorst, Rik van Riel,
Andrew Jones, virtualization, Srivatsa Vaddagiri
On Fri, 2013-06-07 at 11:45 +0530, Raghavendra K T wrote:
> On 06/03/2013 11:51 AM, Raghavendra K T wrote:
> > On 06/03/2013 07:10 AM, Raghavendra K T wrote:
> >> On 06/02/2013 09:50 PM, Jiannan Ouyang wrote:
> >>> On Sun, Jun 2, 2013 at 1:07 AM, Gleb Natapov <gleb@redhat.com> wrote:
> >>>
> >>>> High level question here. We have a big hope for "Preemptable Ticket
> >>>> Spinlock" patch series by Jiannan Ouyang to solve most, if not all,
> >>>> ticketing spinlocks in overcommit scenarios problem without need for
> >>>> PV.
> >>>> So how this patch series compares with his patches on PLE enabled
> >>>> processors?
> >>>>
> >>>
> >>> No experiment results yet.
> >>>
> >>> An error is reported on a 20 core VM. I'm during an internship
> >>> relocation, and will start work on it next week.
> >>
> >> Preemptable spinlocks' testing update:
> >> I hit the same softlockup problem while testing on 32 core machine with
> >> 32 guest vcpus that Andrew had reported.
> >>
> >> After that i started tuning TIMEOUT_UNIT, and when I went till (1<<8),
> >> things seemed to be manageable for undercommit cases.
> >> But I still see degradation for undercommit w.r.t baseline itself on 32
> >> core machine (after tuning).
> >>
> >> (37.5% degradation w.r.t base line).
> >> I can give the full report after the all tests complete.
> >>
> >> For over-commit cases, I again started hitting softlockups (and
> >> degradation is worse). But as I said in the preemptable thread, the
> >> concept of preemptable locks looks promising (though I am still not a
> >> fan of embedded TIMEOUT mechanism)
> >>
> >> Here is my opinion of TODOs for preemptable locks to make it better ( I
> >> think I need to paste in the preemptable thread also)
> >>
> >> 1. Current TIMEOUT UNIT seem to be on higher side and also it does not
> >> scale well with large guests and also overcommit. we need to have a
> >> sort of adaptive mechanism and better is sort of different TIMEOUT_UNITS
> >> for different types of lock too. The hashing mechanism that was used in
> >> Rik's spinlock backoff series fits better probably.
> >>
> >> 2. I do not think TIMEOUT_UNIT itself would work great when we have a
> >> big queue (for large guests / overcommits) for lock.
> >> one way is to add a PV hook that does yield hypercall immediately for
> >> the waiters above some THRESHOLD so that they don't burn the CPU.
> >> ( I can do POC to check if that idea works in improving situation
> >> at some later point of time)
> >>
> >
> > Preemptable-lock results from my run with 2^8 TIMEOUT:
> >
> > +-----------+-----------+-----------+------------+-----------+
> > ebizzy (records/sec) higher is better
> > +-----------+-----------+-----------+------------+-----------+
> > base stdev patched stdev %improvement
> > +-----------+-----------+-----------+------------+-----------+
> > 1x 5574.9000 237.4997 3484.2000 113.4449 -37.50202
> > 2x 2741.5000 561.3090 351.5000 140.5420 -87.17855
> > 3x 2146.2500 216.7718 194.8333 85.0303 -90.92215
> > 4x 1663.0000 141.9235 101.0000 57.7853 -93.92664
> > +-----------+-----------+-----------+------------+-----------+
> > +-----------+-----------+-----------+------------+-----------+
> > dbench (Throughput) higher is better
> > +-----------+-----------+-----------+------------+-----------+
> > base stdev patched stdev %improvement
> > +-----------+-----------+-----------+------------+-----------+
> > 1x 14111.5600 754.4525 3930.1602 2547.2369 -72.14936
> > 2x 2481.6270 71.2665 181.1816 89.5368 -92.69908
> > 3x 1510.2483 31.8634 104.7243 53.2470 -93.06576
> > 4x 1029.4875 16.9166 72.3738 38.2432 -92.96992
> > +-----------+-----------+-----------+------------+-----------+
> >
> > Note we can not trust on overcommit results because of softlock-ups
> >
>
> Hi, I tried
> (1) TIMEOUT=(2^7)
>
> (2) having yield hypercall that uses kvm_vcpu_on_spin() to do directed
> yield to other vCPUs.
>
> Now I do not see any soft-lockup in overcommit cases and results are
> better now (except ebizzy 1x). and for dbench I see now it is closer to
> base and even improvement in 4x
>
> +-----------+-----------+-----------+------------+-----------+
> ebizzy (records/sec) higher is better
> +-----------+-----------+-----------+------------+-----------+
> base stdev patched stdev %improvement
> +-----------+-----------+-----------+------------+-----------+
> 5574.9000 237.4997 523.7000 1.4181 -90.60611
> 2741.5000 561.3090 597.8000 34.9755 -78.19442
> 2146.2500 216.7718 902.6667 82.4228 -57.94215
> 1663.0000 141.9235 1245.0000 67.2989 -25.13530
> +-----------+-----------+-----------+------------+-----------+
> +-----------+-----------+-----------+------------+-----------+
> dbench (Throughput) higher is better
> +-----------+-----------+-----------+------------+-----------+
> base stdev patched stdev %improvement
> +-----------+-----------+-----------+------------+-----------+
> 14111.5600 754.4525 884.9051 24.4723 -93.72922
> 2481.6270 71.2665 2383.5700 333.2435 -3.95132
> 1510.2483 31.8634 1477.7358 50.5126 -2.15279
> 1029.4875 16.9166 1075.9225 13.9911 4.51050
> +-----------+-----------+-----------+------------+-----------+
>
>
> IMO hash based timeout is worth a try further.
> I think little more tuning will get more better results.
The problem I see (especially for dbench) is that we are still way off
what I would consider the goal. IMO, the 2x over-commit result should be
a bit lower than 50% of the 1x result (to account for switching overhead
and reduced cache warmth). We are at about 17.5% for 2x. I am thinking
we need a completely different approach to get there, but of course I do
not know what that is yet :)
I am testing your patches now, and hopefully with some analysis data we
can better understand what's going on.
>
> Jiannan, When you start working on this, I can also help
> to get best of preemptable lock idea if you wish and share
> the patches I tried.
-Andrew Theurer
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-07 6:15 ` Raghavendra K T
@ 2013-06-07 23:41 ` Jiannan Ouyang
-1 siblings, 0 replies; 96+ messages in thread
From: Jiannan Ouyang @ 2013-06-07 23:41 UTC (permalink / raw)
To: Raghavendra K T
Cc: Jiannan Ouyang, Gleb Natapov, Ingo Molnar, Jeremy Fitzhardinge,
x86, konrad.wilk, H. Peter Anvin, pbonzini, linux-doc,
Andrew M. Theurer, xen-devel, Peter Zijlstra, Marcelo Tosatti,
Stefano Stabellini, andi, attilio.rao, gregkh, Alexander Graf,
chegu vinod, torvalds, Avi Kivity, Thomas Gleixner, KVM, LKML,
stephan.diestelhorst, Rik van Riel, Andrew Jones, virtualization,
Srivatsa Vaddagiri
Raghu, thanks for your input. I'm more than glad to work together with
you to make this idea work better.
-Jiannan
On Thu, Jun 6, 2013 at 11:15 PM, Raghavendra K T
<raghavendra.kt@linux.vnet.ibm.com> wrote:
> On 06/03/2013 11:51 AM, Raghavendra K T wrote:
>>
>> On 06/03/2013 07:10 AM, Raghavendra K T wrote:
>>>
>>> On 06/02/2013 09:50 PM, Jiannan Ouyang wrote:
>>>>
>>>> On Sun, Jun 2, 2013 at 1:07 AM, Gleb Natapov <gleb@redhat.com> wrote:
>>>>
>>>>> High level question here. We have a big hope for "Preemptable Ticket
>>>>> Spinlock" patch series by Jiannan Ouyang to solve most, if not all,
>>>>> ticketing spinlocks in overcommit scenarios problem without need for
>>>>> PV.
>>>>> So how this patch series compares with his patches on PLE enabled
>>>>> processors?
>>>>>
>>>>
>>>> No experiment results yet.
>>>>
>>>> An error is reported on a 20 core VM. I'm during an internship
>>>> relocation, and will start work on it next week.
>>>
>>>
>>> Preemptable spinlocks' testing update:
>>> I hit the same softlockup problem while testing on 32 core machine with
>>> 32 guest vcpus that Andrew had reported.
>>>
>>> After that i started tuning TIMEOUT_UNIT, and when I went till (1<<8),
>>> things seemed to be manageable for undercommit cases.
>>> But I still see degradation for undercommit w.r.t baseline itself on 32
>>> core machine (after tuning).
>>>
>>> (37.5% degradation w.r.t base line).
>>> I can give the full report after the all tests complete.
>>>
>>> For over-commit cases, I again started hitting softlockups (and
>>> degradation is worse). But as I said in the preemptable thread, the
>>> concept of preemptable locks looks promising (though I am still not a
>>> fan of embedded TIMEOUT mechanism)
>>>
>>> Here is my opinion of TODOs for preemptable locks to make it better ( I
>>> think I need to paste in the preemptable thread also)
>>>
>>> 1. Current TIMEOUT UNIT seem to be on higher side and also it does not
>>> scale well with large guests and also overcommit. we need to have a
>>> sort of adaptive mechanism and better is sort of different TIMEOUT_UNITS
>>> for different types of lock too. The hashing mechanism that was used in
>>> Rik's spinlock backoff series fits better probably.
>>>
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
@ 2013-06-07 23:41 ` Jiannan Ouyang
0 siblings, 0 replies; 96+ messages in thread
From: Jiannan Ouyang @ 2013-06-07 23:41 UTC (permalink / raw)
To: Raghavendra K T
Cc: Jiannan Ouyang, Gleb Natapov, Ingo Molnar, Jeremy Fitzhardinge,
x86, konrad.wilk, H. Peter Anvin, pbonzini, linux-doc,
Andrew M. Theurer, xen-devel, Peter Zijlstra, Marcelo Tosatti,
Stefano Stabellini, andi, attilio.rao, gregkh, Alexander Graf,
chegu vinod, torvalds, Avi Kivity, Thomas Gleixner, KVM, LKML,
stephan.diestelhorst, Rik van Riel, Andrew Jones, virtualiz
Raghu, thanks for your input. I'm more than glad to work together with
you to make this idea work better.
-Jiannan
On Thu, Jun 6, 2013 at 11:15 PM, Raghavendra K T
<raghavendra.kt@linux.vnet.ibm.com> wrote:
> On 06/03/2013 11:51 AM, Raghavendra K T wrote:
>>
>> On 06/03/2013 07:10 AM, Raghavendra K T wrote:
>>>
>>> On 06/02/2013 09:50 PM, Jiannan Ouyang wrote:
>>>>
>>>> On Sun, Jun 2, 2013 at 1:07 AM, Gleb Natapov <gleb@redhat.com> wrote:
>>>>
>>>>> High level question here. We have big hopes for the "Preemptable
>>>>> Ticket Spinlock" patch series by Jiannan Ouyang to solve most, if not
>>>>> all, of the ticket spinlock problems in overcommit scenarios without
>>>>> the need for PV.
>>>>> So how does this patch series compare with his patches on PLE-enabled
>>>>> processors?
>>>>>
>>>>
>>>> No experiment results yet.
>>>>
>>>> An error was reported on a 20-core VM. I'm in the middle of an
>>>> internship relocation, and will start work on it next week.
>>>
>>>
>>> Preemptable spinlocks' testing update:
>>> I hit the same softlockup problem that Andrew had reported while
>>> testing on a 32-core machine with 32 guest vCPUs.
>>>
>>> After that I started tuning TIMEOUT_UNIT, and once I went up to (1<<8),
>>> things seemed manageable for the undercommit cases.
>>> But I still see degradation for undercommit w.r.t. the baseline itself
>>> on the 32-core machine (after tuning)
>>> (a 37.5% degradation w.r.t. the baseline).
>>> I can give the full report after all the tests complete.
>>>
>>> For the over-commit cases, I again started hitting softlockups (and
>>> the degradation is worse). But as I said in the preemptable thread, the
>>> concept of preemptable locks looks promising (though I am still not a
>>> fan of the embedded TIMEOUT mechanism).
>>>
>>> Here is my list of TODOs to make preemptable locks better (I think I
>>> need to paste this in the preemptable thread too):
>>>
>>> 1. The current TIMEOUT_UNIT seems to be on the higher side, and it does
>>> not scale well with large guests or overcommit. We need some sort of
>>> adaptive mechanism, ideally with different TIMEOUT_UNITs for different
>>> types of lock as well. The hashing mechanism that was used in Rik's
>>> spinlock backoff series probably fits better.
>>>
>>> 2. I do not think TIMEOUT_UNIT by itself will work well when we have a
>>> big queue for a lock (for large guests / overcommit).
>>> One way is to add a PV hook that does a yield hypercall immediately for
>>> waiters above some THRESHOLD so that they don't burn CPU.
>>> (I can do a PoC at some later point to check whether that idea improves
>>> the situation.)
>>>
>>
>> Preemptable-lock results from my run with 2^8 TIMEOUT:
>>
>> +-----------+-----------+-----------+------------+-----------+
>> ebizzy (records/sec) higher is better
>> +-----------+-----------+-----------+------------+-----------+
>> base stdev patched stdev %improvement
>> +-----------+-----------+-----------+------------+-----------+
>> 1x 5574.9000 237.4997 3484.2000 113.4449 -37.50202
>> 2x 2741.5000 561.3090 351.5000 140.5420 -87.17855
>> 3x 2146.2500 216.7718 194.8333 85.0303 -90.92215
>> 4x 1663.0000 141.9235 101.0000 57.7853 -93.92664
>> +-----------+-----------+-----------+------------+-----------+
>> +-----------+-----------+-----------+------------+-----------+
>> dbench (Throughput) higher is better
>> +-----------+-----------+-----------+------------+-----------+
>> base stdev patched stdev %improvement
>> +-----------+-----------+-----------+------------+-----------+
>> 1x 14111.5600 754.4525 3930.1602 2547.2369 -72.14936
>> 2x 2481.6270 71.2665 181.1816 89.5368 -92.69908
>> 3x 1510.2483 31.8634 104.7243 53.2470 -93.06576
>> 4x 1029.4875 16.9166 72.3738 38.2432 -92.96992
>> +-----------+-----------+-----------+------------+-----------+
>>
>> Note: we cannot trust the overcommit results because of soft lockups.
>>
>
> Hi, I tried
> (1) TIMEOUT=(2^7), and
>
> (2) adding a yield hypercall that uses kvm_vcpu_on_spin() to do a
> directed yield to other vCPUs.
>
> Now I do not see any soft lockups in the overcommit cases, and the
> results are better (except ebizzy 1x). For dbench, the result is now
> closer to the base, with even an improvement at 4x.
>
>
> +-----------+-----------+-----------+------------+-----------+
> ebizzy (records/sec) higher is better
> +-----------+-----------+-----------+------------+-----------+
> base stdev patched stdev %improvement
> +-----------+-----------+-----------+------------+-----------+
> 1x 5574.9000 237.4997 523.7000 1.4181 -90.60611
> 2x 2741.5000 561.3090 597.8000 34.9755 -78.19442
> 3x 2146.2500 216.7718 902.6667 82.4228 -57.94215
> 4x 1663.0000 141.9235 1245.0000 67.2989 -25.13530
>
> +-----------+-----------+-----------+------------+-----------+
> +-----------+-----------+-----------+------------+-----------+
> dbench (Throughput) higher is better
> +-----------+-----------+-----------+------------+-----------+
> base stdev patched stdev %improvement
> +-----------+-----------+-----------+------------+-----------+
> 1x 14111.5600 754.4525 884.9051 24.4723 -93.72922
> 2x 2481.6270 71.2665 2383.5700 333.2435 -3.95132
> 3x 1510.2483 31.8634 1477.7358 50.5126 -2.15279
> 4x 1029.4875 16.9166 1075.9225 13.9911 4.51050
> +-----------+-----------+-----------+------------+-----------+
>
>
> IMO the hash-based timeout is worth trying further.
> I think a little more tuning will get even better results.
>
> Jiannan, when you start working on this, I can also help
> get the best out of the preemptable lock idea if you wish, and share
> the patches I tried.
>
>
>
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-03 1:40 ` Raghavendra K T
@ 2013-06-03 6:21 ` Raghavendra K T
-1 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-06-03 6:21 UTC (permalink / raw)
To: Jiannan Ouyang, Gleb Natapov
Cc: Jeremy Fitzhardinge, gregkh, KVM, linux-doc, Peter Zijlstra,
Andrew Jones, virtualization, andi, H. Peter Anvin,
stefano.stabellini, xen-devel, x86, Ingo Molnar,
Andrew M. Theurer, Rik van Riel, konrad.wilk, Avi Kivity,
Thomas Gleixner, chegu vinod, LKML, Srivatsa Vaddagiri,
attilio.rao, pbonzini, torvalds, stephan.diestelhorst
On 06/03/2013 07:10 AM, Raghavendra K T wrote:
> On 06/02/2013 09:50 PM, Jiannan Ouyang wrote:
>> On Sun, Jun 2, 2013 at 1:07 AM, Gleb Natapov <gleb@redhat.com> wrote:
>>
>>> High level question here. We have big hopes for the "Preemptable Ticket
>>> Spinlock" patch series by Jiannan Ouyang to solve most, if not all, of
>>> the ticket spinlock problems in overcommit scenarios without the need
>>> for PV. So how does this patch series compare with his patches on
>>> PLE-enabled processors?
>>>
>>
>> No experiment results yet.
>>
>> An error was reported on a 20-core VM. I'm in the middle of an
>> internship relocation, and will start work on it next week.
>
> Preemptable spinlocks' testing update:
> I hit the same softlockup problem that Andrew had reported while testing
> on a 32-core machine with 32 guest vCPUs.
>
> After that I started tuning TIMEOUT_UNIT, and once I went up to (1<<8),
> things seemed manageable for the undercommit cases.
> But I still see degradation for undercommit w.r.t. the baseline itself
> on the 32-core machine (after tuning)
> (a 37.5% degradation w.r.t. the baseline).
> I can give the full report after all the tests complete.
>
> For the over-commit cases, I again started hitting softlockups (and the
> degradation is worse). But as I said in the preemptable thread, the
> concept of preemptable locks looks promising (though I am still not a
> fan of the embedded TIMEOUT mechanism).
>
> Here is my list of TODOs to make preemptable locks better (I think I
> need to paste this in the preemptable thread too):
>
> 1. The current TIMEOUT_UNIT seems to be on the higher side, and it does
> not scale well with large guests or overcommit. We need some sort of
> adaptive mechanism, ideally with different TIMEOUT_UNITs for different
> types of lock as well. The hashing mechanism that was used in Rik's
> spinlock backoff series probably fits better.
>
> 2. I do not think TIMEOUT_UNIT by itself will work well when we have a
> big queue for a lock (for large guests / overcommit).
> One way is to add a PV hook that does a yield hypercall immediately for
> waiters above some THRESHOLD so that they don't burn CPU.
> (I can do a PoC at some later point to check whether that idea improves
> the situation.)
>
Preemptable-lock results from my run with 2^8 TIMEOUT:
+-----------+-----------+-----------+------------+-----------+
ebizzy (records/sec) higher is better
+-----------+-----------+-----------+------------+-----------+
base stdev patched stdev %improvement
+-----------+-----------+-----------+------------+-----------+
1x 5574.9000 237.4997 3484.2000 113.4449 -37.50202
2x 2741.5000 561.3090 351.5000 140.5420 -87.17855
3x 2146.2500 216.7718 194.8333 85.0303 -90.92215
4x 1663.0000 141.9235 101.0000 57.7853 -93.92664
+-----------+-----------+-----------+------------+-----------+
+-----------+-----------+-----------+------------+-----------+
dbench (Throughput) higher is better
+-----------+-----------+-----------+------------+-----------+
base stdev patched stdev %improvement
+-----------+-----------+-----------+------------+-----------+
1x 14111.5600 754.4525 3930.1602 2547.2369 -72.14936
2x 2481.6270 71.2665 181.1816 89.5368 -92.69908
3x 1510.2483 31.8634 104.7243 53.2470 -93.06576
4x 1029.4875 16.9166 72.3738 38.2432 -92.96992
+-----------+-----------+-----------+------------+-----------+
Note: we cannot trust the overcommit results because of soft lockups.
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-02 8:07 ` Gleb Natapov
@ 2013-06-02 16:20 ` Jiannan Ouyang
-1 siblings, 0 replies; 96+ messages in thread
From: Jiannan Ouyang @ 2013-06-02 16:20 UTC (permalink / raw)
To: Gleb Natapov
Cc: Jeremy Fitzhardinge, x86, KVM, linux-doc, Peter Zijlstra,
Andrew Jones, virtualization, andi, H. Peter Anvin,
stefano.stabellini, xen-devel, Raghavendra K T, Ingo Molnar,
Andrew M. Theurer, Rik van Riel, konrad.wilk, Jiannan Ouyang,
Avi Kivity, Thomas Gleixner, chegu vinod, gregkh, LKML,
Srivatsa Vaddagiri, attilio.rao, pbonzini, torvalds,
stephan.diestelhor
On Sun, Jun 2, 2013 at 1:07 AM, Gleb Natapov <gleb@redhat.com> wrote:
> High level question here. We have big hopes for the "Preemptable Ticket
> Spinlock" patch series by Jiannan Ouyang to solve most, if not all, of the
> ticket spinlock problems in overcommit scenarios without the need for PV.
> So how does this patch series compare with his patches on PLE-enabled processors?
>
No experiment results yet.
An error was reported on a 20-core VM. I'm in the middle of an internship
relocation, and will start work on it next week.
--
Jiannan
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-01 19:21 ` Raghavendra K T
@ 2013-06-25 14:50 ` Andrew Theurer
2013-06-26 8:45 ` Raghavendra K T
-1 siblings, 1 reply; 96+ messages in thread
From: Andrew Theurer @ 2013-06-25 14:50 UTC (permalink / raw)
To: Raghavendra K T
Cc: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini, linux-doc,
xen-devel, peterz, mtosatti, stefano.stabellini, andi,
attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
drjones, virtualization, srivatsa.vaddagiri
On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
> This series replaces the existing paravirtualized spinlock mechanism
> with a paravirtualized ticketlock mechanism. The series provides
> implementation for both Xen and KVM.
>
> Changes in V9:
> - Changed spin_threshold to 32k to avoid excess halt exits that are
> causing undercommit degradation (after PLE handler improvement).
> - Added kvm_irq_delivery_to_apic (suggested by Gleb)
> - Optimized halt exit path to use PLE handler
>
> V8 of PVspinlock was posted last year. After Avi's suggestions to look
> at PLE handler's improvements, various optimizations in PLE handling
> have been tried.
Sorry for not posting this sooner. I have tested the v9 pv-ticketlock
patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I have
tested these patches with and without PLE, as PLE is still not scalable
with large VMs.
System: x3850X5, 40 cores, 80 threads
1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
----------------------------------------------------------
Total
Configuration Throughput(MB/s) Notes
3.10-default-ple_on 22945 5% CPU in host kernel, 2% spin_lock in guests
3.10-default-ple_off 23184 5% CPU in host kernel, 2% spin_lock in guests
3.10-pvticket-ple_on 22895 5% CPU in host kernel, 2% spin_lock in guests
3.10-pvticket-ple_off 23051 5% CPU in host kernel, 2% spin_lock in guests
[all 1x results look good here]
2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
-----------------------------------------------------------
Total
Configuration Throughput Notes
3.10-default-ple_on 6287 55% CPU host kernel, 17% spin_lock in guests
3.10-default-ple_off 1849 2% CPU in host kernel, 95% spin_lock in guests
3.10-pvticket-ple_on 6691 50% CPU in host kernel, 15% spin_lock in guests
3.10-pvticket-ple_off 16464 8% CPU in host kernel, 33% spin_lock in guests
[PLE hinders pv-ticket improvements, but even with PLE off,
we are still off from the ideal throughput (somewhere >20000)]
1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
----------------------------------------------------------
Total
Configuration Throughput Notes
3.10-default-ple_on 22736 6% CPU in host kernel, 3% spin_lock in guests
3.10-default-ple_off 23377 5% CPU in host kernel, 3% spin_lock in guests
3.10-pvticket-ple_on 22471 6% CPU in host kernel, 3% spin_lock in guests
3.10-pvticket-ple_off 23445 5% CPU in host kernel, 3% spin_lock in guests
[1x looking fine here]
2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
----------------------------------------------------------
Total
Configuration Throughput Notes
3.10-default-ple_on 1965 70% CPU in host kernel, 34% spin_lock in guests
3.10-default-ple_off 226 2% CPU in host kernel, 94% spin_lock in guests
3.10-pvticket-ple_on 1942 70% CPU in host kernel, 35% spin_lock in guests
3.10-pvticket-ple_off 8003 11% CPU in host kernel, 70% spin_lock in guests
[quite bad all around, but pv-tickets with PLE off are the best so far.
Still quite a bit off from the ideal throughput]
In summary, I would state that the pv-ticket is an overall win, but the
current PLE handler tends to "get in the way" on these larger guests.
-Andrew
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-25 14:50 ` Andrew Theurer
@ 2013-06-26 8:45 ` Raghavendra K T
0 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-06-26 8:45 UTC (permalink / raw)
To: habanero
Cc: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini, linux-doc,
xen-devel, peterz, mtosatti, stefano.stabellini, andi,
attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
drjones, virtualization, srivatsa.vaddagiri
On 06/25/2013 08:20 PM, Andrew Theurer wrote:
> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>> This series replaces the existing paravirtualized spinlock mechanism
>> with a paravirtualized ticketlock mechanism. The series provides
>> implementation for both Xen and KVM.
>>
>> Changes in V9:
>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>> causing undercommit degradation (after PLE handler improvement).
>> - Added kvm_irq_delivery_to_apic (suggested by Gleb)
>> - Optimized halt exit path to use PLE handler
>>
>> V8 of PVspinlock was posted last year. After Avi's suggestions to look
>> at PLE handler's improvements, various optimizations in PLE handling
>> have been tried.
>
> Sorry for not posting this sooner. I have tested the v9 pv-ticketlock
> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I have
> tested these patches with and without PLE, as PLE is still not scalable
> with large VMs.
>
Hi Andrew,
Thanks for testing.
> System: x3850X5, 40 cores, 80 threads
>
>
> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
> ----------------------------------------------------------
> Total
> Configuration Throughput(MB/s) Notes
>
> 3.10-default-ple_on 22945 5% CPU in host kernel, 2% spin_lock in guests
> 3.10-default-ple_off 23184 5% CPU in host kernel, 2% spin_lock in guests
> 3.10-pvticket-ple_on 22895 5% CPU in host kernel, 2% spin_lock in guests
> 3.10-pvticket-ple_off 23051 5% CPU in host kernel, 2% spin_lock in guests
> [all 1x results look good here]
Yes, the 1x results all look very close.
>
>
> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
> -----------------------------------------------------------
> Total
> Configuration Throughput Notes
>
> 3.10-default-ple_on 6287 55% CPU host kernel, 17% spin_lock in guests
> 3.10-default-ple_off 1849 2% CPU in host kernel, 95% spin_lock in guests
> 3.10-pvticket-ple_on 6691 50% CPU in host kernel, 15% spin_lock in guests
> 3.10-pvticket-ple_off 16464 8% CPU in host kernel, 33% spin_lock in guests
I see a 6.426% improvement with ple_on
and a 161.87% improvement with ple_off. I think this is a very good sign
for the patches.
> [PLE hinders pv-ticket improvements, but even with PLE off,
> we are still off from the ideal throughput (somewhere >20000)]
>
Okay, the ideal throughput you are referring to is getting at least around
80% of the 1x throughput for over-commit. Yes, we are still far away from
there.
>
> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
> ----------------------------------------------------------
> Total
> Configuration Throughput Notes
>
> 3.10-default-ple_on 22736 6% CPU in host kernel, 3% spin_lock in guests
> 3.10-default-ple_off 23377 5% CPU in host kernel, 3% spin_lock in guests
> 3.10-pvticket-ple_on 22471 6% CPU in host kernel, 3% spin_lock in guests
> 3.10-pvticket-ple_off 23445 5% CPU in host kernel, 3% spin_lock in guests
> [1x looking fine here]
>
I see ple_off is a little better here.
>
> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
> ----------------------------------------------------------
> Total
> Configuration Throughput Notes
>
> 3.10-default-ple_on 1965 70% CPU in host kernel, 34% spin_lock in guests
> 3.10-default-ple_off 226 2% CPU in host kernel, 94% spin_lock in guests
> 3.10-pvticket-ple_on 1942 70% CPU in host kernel, 35% spin_lock in guests
> 3.10-pvticket-ple_off 8003 11% CPU in host kernel, 70% spin_lock in guests
> [quite bad all around, but pv-tickets with PLE off are the best so far.
> Still quite a bit off from the ideal throughput]
This is again a remarkable improvement (307%).
This motivates me to add a patch to disable PLE when pvspinlock is on.
Probably we can add a hypercall that disables PLE in the KVM init path.
The only problem I see is what happens if the guests are mixed
(i.e., one guest has pvspinlock support but another does not, while the
host supports PV).
/me thinks
>
> In summary, I would state that the pv-ticket is an overall win, but the
> current PLE handler tends to "get in the way" on these larger guests.
>
> -Andrew
>
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-26 8:45 ` Raghavendra K T
@ 2013-06-26 11:37 ` Andrew Jones
-1 siblings, 0 replies; 96+ messages in thread
From: Andrew Jones @ 2013-06-26 11:37 UTC (permalink / raw)
To: Raghavendra K T
Cc: habanero, gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini,
linux-doc, xen-devel, peterz, mtosatti, stefano.stabellini, andi,
attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
virtualization, srivatsa.vaddagiri
On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
> >On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
> >>This series replaces the existing paravirtualized spinlock mechanism
> >>with a paravirtualized ticketlock mechanism. The series provides
> >>implementation for both Xen and KVM.
> >>
> >>Changes in V9:
> >>- Changed spin_threshold to 32k to avoid excess halt exits that are
> >> causing undercommit degradation (after PLE handler improvement).
> >>- Added kvm_irq_delivery_to_apic (suggested by Gleb)
> >>- Optimized halt exit path to use PLE handler
> >>
> >>V8 of PVspinlock was posted last year. After Avi's suggestions to look
> >>at PLE handler's improvements, various optimizations in PLE handling
> >>have been tried.
> >
> >Sorry for not posting this sooner. I have tested the v9 pv-ticketlock
> >patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I have
> >tested these patches with and without PLE, as PLE is still not scalable
> >with large VMs.
> >
>
> Hi Andrew,
>
> Thanks for testing.
>
> >System: x3850X5, 40 cores, 80 threads
> >
> >
> >1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
> >----------------------------------------------------------
> > Total
> >Configuration Throughput(MB/s) Notes
> >
> >3.10-default-ple_on 22945 5% CPU in host kernel, 2% spin_lock in guests
> >3.10-default-ple_off 23184 5% CPU in host kernel, 2% spin_lock in guests
> >3.10-pvticket-ple_on 22895 5% CPU in host kernel, 2% spin_lock in guests
> >3.10-pvticket-ple_off 23051 5% CPU in host kernel, 2% spin_lock in guests
> >[all 1x results look good here]
>
> Yes. The 1x results look too close
>
> >
> >
> >2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
> >-----------------------------------------------------------
> > Total
> >Configuration Throughput Notes
> >
> >3.10-default-ple_on 6287 55% CPU host kernel, 17% spin_lock in guests
> >3.10-default-ple_off 1849 2% CPU in host kernel, 95% spin_lock in guests
> >3.10-pvticket-ple_on 6691 50% CPU in host kernel, 15% spin_lock in guests
> >3.10-pvticket-ple_off 16464 8% CPU in host kernel, 33% spin_lock in guests
>
> I see 6.426% improvement with ple_on
> and 161.87% improvement with ple_off. I think this is a very good sign
> for the patches
>
> >[PLE hinders pv-ticket improvements, but even with PLE off,
> > we still off from ideal throughput (somewhere >20000)]
> >
>
> Okay, The ideal throughput you are referring is getting around atleast
> 80% of 1x throughput for over-commit. Yes we are still far away from
> there.
>
> >
> >1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
> >----------------------------------------------------------
> > Total
> >Configuration Throughput Notes
> >
> >3.10-default-ple_on 22736 6% CPU in host kernel, 3% spin_lock in guests
> >3.10-default-ple_off 23377 5% CPU in host kernel, 3% spin_lock in guests
> >3.10-pvticket-ple_on 22471 6% CPU in host kernel, 3% spin_lock in guests
> >3.10-pvticket-ple_off 23445 5% CPU in host kernel, 3% spin_lock in guests
> >[1x looking fine here]
> >
>
> I see ple_off is little better here.
>
> >
> >2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
> >----------------------------------------------------------
> > Total
> >Configuration Throughput Notes
> >
> >3.10-default-ple_on 1965 70% CPU in host kernel, 34% spin_lock in guests
> >3.10-default-ple_off 226 2% CPU in host kernel, 94% spin_lock in guests
> >3.10-pvticket-ple_on 1942 70% CPU in host kernel, 35% spin_lock in guests
> >3.10-pvticket-ple_off 8003 11% CPU in host kernel, 70% spin_lock in guests
> >[quite bad all around, but pv-tickets with PLE off the best so far.
> > Still quite a bit off from ideal throughput]
>
> This is again a remarkable improvement (307%).
> This motivates me to add a patch to disable ple when pvspinlock is on.
> probably we can add a hypercall that disables ple in kvm init patch.
> but only problem I see is what if the guests are mixed.
>
> (i.e one guest has pvspinlock support but other does not. Host
> supports pv)
How about reintroducing the idea of per-VM ple_gap/ple_window state? We
were headed down that road when considering a dynamic window at one
point. Then you could just set a single guest's ple_gap to zero, which
would disable PLE for that guest. We could also revisit the dynamic
window then.
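The per-VM idea could look roughly like this; struct vm_ple_state and the helper names are hypothetical, with only the ple_gap/ple_window names taken from the existing global module parameters:

```c
/* Hypothetical per-VM PLE state (today ple_gap/ple_window are global
 * kvm-intel module parameters shared by all guests). */
struct vm_ple_state {
    unsigned int ple_gap;     /* 0 means: PLE disabled for this VM */
    unsigned int ple_window;  /* spins tolerated before a PLE exit */
};

static int ple_enabled(const struct vm_ple_state *s)
{
    return s->ple_gap != 0;
}

/* E.g. a hypercall handler for a guest that advertises pv ticketlocks
 * could simply do: */
static void pv_disable_ple(struct vm_ple_state *s)
{
    s->ple_gap = 0;           /* this VM keeps running without PLE */
}
```

With per-VM state, a mixed host (some guests with pvspinlock support, some without) keeps PLE for the legacy guests while the pv-aware ones opt out.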
drew
>
> /me thinks
>
> >
> >In summary, I would state that the pv-ticket is an overall win, but the
> >current PLE handler tends to "get in the way" on these larger guests.
> >
> >-Andrew
> >
>
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-26 11:37 ` Andrew Jones
@ 2013-06-26 12:52 ` Gleb Natapov
-1 siblings, 0 replies; 96+ messages in thread
From: Gleb Natapov @ 2013-06-26 12:52 UTC (permalink / raw)
To: Andrew Jones
Cc: Raghavendra K T, habanero, mingo, jeremy, x86, konrad.wilk, hpa,
pbonzini, linux-doc, xen-devel, peterz, mtosatti,
stefano.stabellini, andi, attilio.rao, ouyang, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri
On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
> > On 06/25/2013 08:20 PM, Andrew Theurer wrote:
> > >On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
> > >>This series replaces the existing paravirtualized spinlock mechanism
> > >>with a paravirtualized ticketlock mechanism. The series provides
> > >>implementation for both Xen and KVM.
> > >>
> > >>Changes in V9:
> > >>- Changed spin_threshold to 32k to avoid excess halt exits that are
> > >> causing undercommit degradation (after PLE handler improvement).
> > >>- Added kvm_irq_delivery_to_apic (suggested by Gleb)
> > >>- Optimized halt exit path to use PLE handler
> > >>
> > >>V8 of PVspinlock was posted last year. After Avi's suggestions to look
> > >>at PLE handler's improvements, various optimizations in PLE handling
> > >>have been tried.
> > >
> > >Sorry for not posting this sooner. I have tested the v9 pv-ticketlock
> > >patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I have
> > >tested these patches with and without PLE, as PLE is still not scalable
> > >with large VMs.
> > >
> >
> > Hi Andrew,
> >
> > Thanks for testing.
> >
> > >System: x3850X5, 40 cores, 80 threads
> > >
> > >
> > >1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
> > >----------------------------------------------------------
> > > Total
> > >Configuration Throughput(MB/s) Notes
> > >
> > >3.10-default-ple_on 22945 5% CPU in host kernel, 2% spin_lock in guests
> > >3.10-default-ple_off 23184 5% CPU in host kernel, 2% spin_lock in guests
> > >3.10-pvticket-ple_on 22895 5% CPU in host kernel, 2% spin_lock in guests
> > >3.10-pvticket-ple_off 23051 5% CPU in host kernel, 2% spin_lock in guests
> > >[all 1x results look good here]
> >
> > Yes. The 1x results look too close
> >
> > >
> > >
> > >2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
> > >-----------------------------------------------------------
> > > Total
> > >Configuration Throughput Notes
> > >
> > >3.10-default-ple_on 6287 55% CPU host kernel, 17% spin_lock in guests
> > >3.10-default-ple_off 1849 2% CPU in host kernel, 95% spin_lock in guests
> > >3.10-pvticket-ple_on 6691 50% CPU in host kernel, 15% spin_lock in guests
> > >3.10-pvticket-ple_off 16464 8% CPU in host kernel, 33% spin_lock in guests
> >
> > I see 6.426% improvement with ple_on
> > and 161.87% improvement with ple_off. I think this is a very good sign
> > for the patches
> >
> > >[PLE hinders pv-ticket improvements, but even with PLE off,
> > > we still off from ideal throughput (somewhere >20000)]
> > >
> >
> > Okay, The ideal throughput you are referring is getting around atleast
> > 80% of 1x throughput for over-commit. Yes we are still far away from
> > there.
> >
> > >
> > >1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
> > >----------------------------------------------------------
> > > Total
> > >Configuration Throughput Notes
> > >
> > >3.10-default-ple_on 22736 6% CPU in host kernel, 3% spin_lock in guests
> > >3.10-default-ple_off 23377 5% CPU in host kernel, 3% spin_lock in guests
> > >3.10-pvticket-ple_on 22471 6% CPU in host kernel, 3% spin_lock in guests
> > >3.10-pvticket-ple_off 23445 5% CPU in host kernel, 3% spin_lock in guests
> > >[1x looking fine here]
> > >
> >
> > I see ple_off is little better here.
> >
> > >
> > >2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
> > >----------------------------------------------------------
> > > Total
> > >Configuration Throughput Notes
> > >
> > >3.10-default-ple_on 1965 70% CPU in host kernel, 34% spin_lock in guests
> > >3.10-default-ple_off 226 2% CPU in host kernel, 94% spin_lock in guests
> > >3.10-pvticket-ple_on 1942 70% CPU in host kernel, 35% spin_lock in guests
> > >3.10-pvticket-ple_off 8003 11% CPU in host kernel, 70% spin_lock in guests
> > >[quite bad all around, but pv-tickets with PLE off the best so far.
> > > Still quite a bit off from ideal throughput]
> >
> > This is again a remarkable improvement (307%).
> > This motivates me to add a patch to disable ple when pvspinlock is on.
> > probably we can add a hypercall that disables ple in kvm init patch.
> > but only problem I see is what if the guests are mixed.
> >
> > (i.e one guest has pvspinlock support but other does not. Host
> > supports pv)
>
> How about reintroducing the idea to create per-kvm ple_gap,ple_window
> state. We were headed down that road when considering a dynamic window at
> one point. Then you can just set a single guest's ple_gap to zero, which
> would lead to PLE being disabled for that guest. We could also revisit
> the dynamic window then.
>
That can be done, but let's first understand why PLE on is such a big
problem. Is it possible that the PLE gap and SPIN_THRESHOLD are not
tuned properly?
--
Gleb.
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-26 12:52 ` Gleb Natapov
@ 2013-06-26 13:40 ` Raghavendra K T
-1 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-06-26 13:40 UTC (permalink / raw)
To: Gleb Natapov, habanero
Cc: Andrew Jones, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini,
linux-doc, xen-devel, peterz, mtosatti, stefano.stabellini, andi,
attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
virtualization, srivatsa.vaddagiri
On 06/26/2013 06:22 PM, Gleb Natapov wrote:
> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>> This series replaces the existing paravirtualized spinlock mechanism
>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>> implementation for both Xen and KVM.
>>>>>
>>>>> Changes in V9:
>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>> causing undercommit degradation (after PLE handler improvement).
>>>>> - Added kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>> - Optimized halt exit path to use PLE handler
>>>>>
>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions to look
>>>>> at PLE handler's improvements, various optimizations in PLE handling
>>>>> have been tried.
>>>>
>>>> Sorry for not posting this sooner. I have tested the v9 pv-ticketlock
>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I have
>>>> tested these patches with and without PLE, as PLE is still not scalable
>>>> with large VMs.
>>>>
>>>
>>> Hi Andrew,
>>>
>>> Thanks for testing.
>>>
>>>> System: x3850X5, 40 cores, 80 threads
>>>>
>>>>
>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>> ----------------------------------------------------------
>>>> Total
>>>> Configuration Throughput(MB/s) Notes
>>>>
>>>> 3.10-default-ple_on 22945 5% CPU in host kernel, 2% spin_lock in guests
>>>> 3.10-default-ple_off 23184 5% CPU in host kernel, 2% spin_lock in guests
>>>> 3.10-pvticket-ple_on 22895 5% CPU in host kernel, 2% spin_lock in guests
>>>> 3.10-pvticket-ple_off 23051 5% CPU in host kernel, 2% spin_lock in guests
>>>> [all 1x results look good here]
>>>
>>> Yes. The 1x results look too close
>>>
>>>>
>>>>
>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>> -----------------------------------------------------------
>>>> Total
>>>> Configuration Throughput Notes
>>>>
>>>> 3.10-default-ple_on 6287 55% CPU host kernel, 17% spin_lock in guests
>>>> 3.10-default-ple_off 1849 2% CPU in host kernel, 95% spin_lock in guests
>>>> 3.10-pvticket-ple_on 6691 50% CPU in host kernel, 15% spin_lock in guests
>>>> 3.10-pvticket-ple_off 16464 8% CPU in host kernel, 33% spin_lock in guests
>>>
>>> I see 6.426% improvement with ple_on
>>> and 161.87% improvement with ple_off. I think this is a very good sign
>>> for the patches
>>>
>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>> we still off from ideal throughput (somewhere >20000)]
>>>>
>>>
>>> Okay, The ideal throughput you are referring is getting around atleast
>>> 80% of 1x throughput for over-commit. Yes we are still far away from
>>> there.
>>>
>>>>
>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>> ----------------------------------------------------------
>>>> Total
>>>> Configuration Throughput Notes
>>>>
>>>> 3.10-default-ple_on 22736 6% CPU in host kernel, 3% spin_lock in guests
>>>> 3.10-default-ple_off 23377 5% CPU in host kernel, 3% spin_lock in guests
>>>> 3.10-pvticket-ple_on 22471 6% CPU in host kernel, 3% spin_lock in guests
>>>> 3.10-pvticket-ple_off 23445 5% CPU in host kernel, 3% spin_lock in guests
>>>> [1x looking fine here]
>>>>
>>>
>>> I see ple_off is little better here.
>>>
>>>>
>>>> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
>>>> ----------------------------------------------------------
>>>> Total
>>>> Configuration Throughput Notes
>>>>
>>>> 3.10-default-ple_on 1965 70% CPU in host kernel, 34% spin_lock in guests
>>>> 3.10-default-ple_off 226 2% CPU in host kernel, 94% spin_lock in guests
>>>> 3.10-pvticket-ple_on 1942 70% CPU in host kernel, 35% spin_lock in guests
>>>> 3.10-pvticket-ple_off 8003 11% CPU in host kernel, 70% spin_lock in guests
>>>> [quite bad all around, but pv-tickets with PLE off the best so far.
>>>> Still quite a bit off from ideal throughput]
>>>
>>> This is again a remarkable improvement (307%).
>>> This motivates me to add a patch to disable ple when pvspinlock is on.
>>> probably we can add a hypercall that disables ple in kvm init patch.
>>> but only problem I see is what if the guests are mixed.
>>>
>>> (i.e one guest has pvspinlock support but other does not. Host
>>> supports pv)
>>
>> How about reintroducing the idea to create per-kvm ple_gap,ple_window
>> state. We were headed down that road when considering a dynamic window at
>> one point. Then you can just set a single guest's ple_gap to zero, which
>> would lead to PLE being disabled for that guest. We could also revisit
>> the dynamic window then.
>>
> Can be done, but lets understand why ple on is such a big problem. Is it
> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
>
The one obvious reason I see is lack of commit awareness inside the
guest. For under-commit there is no necessity to do PLE, but
unfortunately we do. At least we return back immediately in case of a
potential under-commit, but we still incur the vmexit delay.
The same applies to SPIN_THRESHOLD: it should ideally be higher for
under-commit and lower for over-commit.
With this patch series SPIN_THRESHOLD is increased to 32k solely to
avoid under-commit regressions, but that has likely eaten some amount of
over-commit performance.
In summary: excess halt exits/PLE exits were one main reason for the
under-commit regression (compared to the PLE-disabled case).
1. A dynamic PLE window was one solution for PLE, which we can
experiment with further (at the VM level or globally).
The other experiment I was thinking of is to extend the spinlock to
accommodate the vcpuid (Linus has opposed that, but it may be worth a
try).
2. Andrew Theurer had a patch to reduce double runqueue locking that I
will be testing.
I have some older experiments to retry, though they did not give
significant improvements before the PLE handler was modified.
Andrew, do you have any other details to add (from the perf reports you
usually take with these experiments)?
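To illustrate the commit-awareness point above (the threshold should be higher for under-commit and lower for over-commit), a rough sketch; the scaling rule and the 2k floor are made-up assumptions, only the 32k ceiling comes from the series:

```c
/* Illustrative commit-aware spin threshold: spin long when the host is
 * under-committed (the lock holder is almost certainly running), and
 * scale the threshold down as the commit ratio grows so waiters halt
 * and free the pCPU sooner. */

#define SPIN_THRESHOLD_MAX (1u << 15)   /* 32k, the V9 value */
#define SPIN_THRESHOLD_MIN (1u << 11)   /* hypothetical 2k floor */

static unsigned int spin_threshold(unsigned int runnable_vcpus,
                                   unsigned int host_cpus)
{
    /* Under-commit: no reason to stop spinning early. */
    if (runnable_vcpus <= host_cpus)
        return SPIN_THRESHOLD_MAX;

    /* Over-commit: shrink the threshold with the commit ratio. */
    unsigned int t = SPIN_THRESHOLD_MAX * host_cpus / runnable_vcpus;
    return t < SPIN_THRESHOLD_MIN ? SPIN_THRESHOLD_MIN : t;
}
```

A fixed 32k threshold corresponds to always taking the first branch, which matches the observation that the V9 value protects under-commit at some cost to over-commit.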
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
@ 2013-06-26 13:40 ` Raghavendra K T
0 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-06-26 13:40 UTC (permalink / raw)
To: Gleb Natapov, habanero
Cc: jeremy, gregkh, kvm, linux-doc, peterz, riel, virtualization,
andi, hpa, stefano.stabellini, xen-devel, x86, mingo,
Andrew Jones, konrad.wilk, ouyang, avi.kivity, tglx, chegu_vinod,
linux-kernel, srivatsa.vaddagiri, attilio.rao, pbonzini,
torvalds, stephan.diestelhorst
On 06/26/2013 06:22 PM, Gleb Natapov wrote:
> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>> This series replaces the existing paravirtualized spinlock mechanism
>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>> implementation for both Xen and KVM.
>>>>>
>>>>> Changes in V9:
>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>> causing undercommit degradation (after PLE handler improvement).
>>>>> - Added kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>> - Optimized halt exit path to use PLE handler
>>>>>
>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions to look
>>>>> at PLE handler's improvements, various optimizations in PLE handling
>>>>> have been tried.
>>>>
>>>> Sorry for not posting this sooner. I have tested the v9 pv-ticketlock
>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I have
>>>> tested these patches with and without PLE, as PLE is still not scalable
>>>> with large VMs.
>>>>
>>>
>>> Hi Andrew,
>>>
>>> Thanks for testing.
>>>
>>>> System: x3850X5, 40 cores, 80 threads
>>>>
>>>>
>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>> ----------------------------------------------------------
>>>> Total
>>>> Configuration Throughput(MB/s) Notes
>>>>
>>>> 3.10-default-ple_on 22945 5% CPU in host kernel, 2% spin_lock in guests
>>>> 3.10-default-ple_off 23184 5% CPU in host kernel, 2% spin_lock in guests
>>>> 3.10-pvticket-ple_on 22895 5% CPU in host kernel, 2% spin_lock in guests
>>>> 3.10-pvticket-ple_off 23051 5% CPU in host kernel, 2% spin_lock in guests
>>>> [all 1x results look good here]
>>>
>>> Yes. The 1x results look too close
>>>
>>>>
>>>>
>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>> -----------------------------------------------------------
>>>> Total
>>>> Configuration Throughput Notes
>>>>
>>>> 3.10-default-ple_on 6287 55% CPU host kernel, 17% spin_lock in guests
>>>> 3.10-default-ple_off 1849 2% CPU in host kernel, 95% spin_lock in guests
>>>> 3.10-pvticket-ple_on 6691 50% CPU in host kernel, 15% spin_lock in guests
>>>> 3.10-pvticket-ple_off 16464 8% CPU in host kernel, 33% spin_lock in guests
>>>
>>> I see 6.426% improvement with ple_on
>>> and 161.87% improvement with ple_off. I think this is a very good sign
>>> for the patches
>>>
>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>> we still off from ideal throughput (somewhere >20000)]
>>>>
>>>
>>> Okay, The ideal throughput you are referring is getting around atleast
>>> 80% of 1x throughput for over-commit. Yes we are still far away from
>>> there.
>>>
>>>>
>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>> ----------------------------------------------------------
>>>> Total
>>>> Configuration Throughput Notes
>>>>
>>>> 3.10-default-ple_on 22736 6% CPU in host kernel, 3% spin_lock in guests
>>>> 3.10-default-ple_off 23377 5% CPU in host kernel, 3% spin_lock in guests
>>>> 3.10-pvticket-ple_on 22471 6% CPU in host kernel, 3% spin_lock in guests
>>>> 3.10-pvticket-ple_off 23445 5% CPU in host kernel, 3% spin_lock in guests
>>>> [1x looking fine here]
>>>>
>>>
>>> I see ple_off is little better here.
>>>
>>>>
>>>> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
>>>> ----------------------------------------------------------
>>>> Total
>>>> Configuration Throughput Notes
>>>>
>>>> 3.10-default-ple_on 1965 70% CPU in host kernel, 34% spin_lock in guests
>>>> 3.10-default-ple_off 226 2% CPU in host kernel, 94% spin_lock in guests
>>>> 3.10-pvticket-ple_on 1942 70% CPU in host kernel, 35% spin_lock in guests
>>>> 3.10-pvticket-ple_off 8003 11% CPU in host kernel, 70% spin_lock in guests
>>>> [quite bad all around, but pv-tickets with PLE off the best so far.
>>>> Still quite a bit off from ideal throughput]
>>>
>>> This is again a remarkable improvement (307%).
>>> This motivates me to add a patch to disable ple when pvspinlock is on.
>>> probably we can add a hypercall that disables ple in kvm init patch.
>>> but only problem I see is what if the guests are mixed.
>>>
>>> (i.e one guest has pvspinlock support but other does not. Host
>>> supports pv)
>>
>> How about reintroducing the idea to create per-kvm ple_gap,ple_window
>> state. We were headed down that road when considering a dynamic window at
>> one point. Then you can just set a single guest's ple_gap to zero, which
>> would lead to PLE being disabled for that guest. We could also revisit
>> the dynamic window then.
>>
> Can be done, but lets understand why ple on is such a big problem. Is it
> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
>
The one obvious reason I see is commit awareness inside the guest: for
under-commit there is no need to do PLE, but unfortunately we do.
At least we return immediately in the case of a potential undercommit,
but we still incur the vmexit delay.
The same applies to SPIN_THRESHOLD: it should ideally be larger for
undercommit and smaller for overcommit.
With this patch series, SPIN_THRESHOLD is increased to 32k solely to
avoid under-commit regressions, but that will have cost some amount of
overcommit performance.
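For reference, the spin-then-halt behaviour that SPIN_THRESHOLD controls can be sketched in plain C. This is a userspace simulation with C11 atomics, not the series' code: the threshold value is taken from V9, and sched_yield() stands in for the halt hypercall (and the unlock-side kick) that a pv guest would use.

```c
#include <stdatomic.h>
#include <pthread.h>
#include <sched.h>

#define SPIN_THRESHOLD (1 << 15)   /* 32k spins before "halting", as in V9 */

struct ticketlock {
    atomic_uint next;   /* next ticket to hand out */
    atomic_uint head;   /* ticket currently being served */
};

static void ticket_lock(struct ticketlock *l)
{
    unsigned int me = atomic_fetch_add(&l->next, 1);   /* take a ticket */

    for (;;) {
        /* Spin for a while, hoping the lock holder finishes quickly. */
        for (int spins = 0; spins < SPIN_THRESHOLD; spins++)
            if (atomic_load(&l->head) == me)
                return;
        /*
         * A pv guest would now issue a halt hypercall and sleep until
         * the unlocker kicks this vcpu; sched_yield() is a userspace
         * stand-in for giving up the (v)cpu.
         */
        sched_yield();
    }
}

static void ticket_unlock(struct ticketlock *l)
{
    /* Serve the next ticket; pv code would also kick a halted waiter. */
    atomic_fetch_add(&l->head, 1);
}
```

A larger SPIN_THRESHOLD delays the halt path (good when the holder is actually running, i.e. under-commit); a smaller one halts sooner (good when the holder is likely preempted, i.e. over-commit).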
In summary: excess halt exits/PLE exits were one main reason for the
undercommit regression (compared to the PLE-disabled case).
1. A dynamic PLE window was one solution for PLE, which we can
experiment with further (at the VM level or globally).
The other experiment I was thinking of is to extend the spinlock to
accommodate the vcpu id (Linus has opposed that, but it may be worth
a try).
2. Andrew Theurer had a patch to reduce double runqueue locking that I
will be testing.
I have some older experiments to retry, though they did not give
significant improvements before the PLE handler was modified.
Andrew, do you have any other details to add (from the perf reports that
you usually take with these experiments)?
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-26 13:40 ` Raghavendra K T
(?)
@ 2013-06-26 14:39 ` Chegu Vinod
2013-06-26 15:37 ` Raghavendra K T
-1 siblings, 1 reply; 96+ messages in thread
From: Chegu Vinod @ 2013-06-26 14:39 UTC (permalink / raw)
To: Raghavendra K T
Cc: jeremy, gregkh, linux-doc, peterz, riel, virtualization, andi,
hpa, stefano.stabellini, xen-devel, kvm, x86, mingo, habanero,
Andrew Jones, konrad.wilk, ouyang, avi.kivity, tglx,
linux-kernel, srivatsa.vaddagiri, attilio.rao, pbonzini,
torvalds, stephan.diestelhorst
On 6/26/2013 6:40 AM, Raghavendra K T wrote:
> On 06/26/2013 06:22 PM, Gleb Natapov wrote:
>> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>>> This series replaces the existing paravirtualized spinlock mechanism
>>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>>> implementation for both Xen and KVM.
>>>>>>
>>>>>> Changes in V9:
>>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>>> causing undercommit degradation (after PLE handler improvement).
>>>>>> - Added kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>>> - Optimized halt exit path to use PLE handler
>>>>>>
>>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions to
>>>>>> look
>>>>>> at PLE handler's improvements, various optimizations in PLE handling
>>>>>> have been tried.
>>>>>
>>>>> Sorry for not posting this sooner. I have tested the v9
>>>>> pv-ticketlock
>>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I
>>>>> have
>>>>> tested these patches with and without PLE, as PLE is still not
>>>>> scalable
>>>>> with large VMs.
>>>>>
>>>>
>>>> Hi Andrew,
>>>>
>>>> Thanks for testing.
>>>>
>>>>> System: x3850X5, 40 cores, 80 threads
>>>>>
>>>>>
>>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>>> ----------------------------------------------------------
>>>>> Total
>>>>> Configuration Throughput(MB/s) Notes
>>>>>
>>>>> 3.10-default-ple_on 22945 5% CPU in host
>>>>> kernel, 2% spin_lock in guests
>>>>> 3.10-default-ple_off 23184 5% CPU in host
>>>>> kernel, 2% spin_lock in guests
>>>>> 3.10-pvticket-ple_on 22895 5% CPU in host
>>>>> kernel, 2% spin_lock in guests
>>>>> 3.10-pvticket-ple_off 23051 5% CPU in host
>>>>> kernel, 2% spin_lock in guests
>>>>> [all 1x results look good here]
>>>>
>>>> Yes. The 1x results look too close
>>>>
>>>>>
>>>>>
>>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>>> -----------------------------------------------------------
>>>>> Total
>>>>> Configuration Throughput Notes
>>>>>
>>>>> 3.10-default-ple_on 6287 55% CPU host
>>>>> kernel, 17% spin_lock in guests
>>>>> 3.10-default-ple_off 1849 2% CPU in host
>>>>> kernel, 95% spin_lock in guests
>>>>> 3.10-pvticket-ple_on 6691 50% CPU in host
>>>>> kernel, 15% spin_lock in guests
>>>>> 3.10-pvticket-ple_off 16464 8% CPU in host
>>>>> kernel, 33% spin_lock in guests
>>>>
>>>> I see 6.426% improvement with ple_on
>>>> and 161.87% improvement with ple_off. I think this is a very good sign
>>>> for the patches
>>>>
>>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>>> we still off from ideal throughput (somewhere >20000)]
>>>>>
>>>>
>>>> Okay, The ideal throughput you are referring is getting around atleast
>>>> 80% of 1x throughput for over-commit. Yes we are still far away from
>>>> there.
>>>>
>>>>>
>>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>>> ----------------------------------------------------------
>>>>> Total
>>>>> Configuration Throughput Notes
>>>>>
>>>>> 3.10-default-ple_on 22736 6% CPU in host
>>>>> kernel, 3% spin_lock in guests
>>>>> 3.10-default-ple_off 23377 5% CPU in host
>>>>> kernel, 3% spin_lock in guests
>>>>> 3.10-pvticket-ple_on 22471 6% CPU in host
>>>>> kernel, 3% spin_lock in guests
>>>>> 3.10-pvticket-ple_off 23445 5% CPU in host
>>>>> kernel, 3% spin_lock in guests
>>>>> [1x looking fine here]
>>>>>
>>>>
>>>> I see ple_off is little better here.
>>>>
>>>>>
>>>>> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
>>>>> ----------------------------------------------------------
>>>>> Total
>>>>> Configuration Throughput Notes
>>>>>
>>>>> 3.10-default-ple_on 1965 70% CPU in host
>>>>> kernel, 34% spin_lock in guests
>>>>> 3.10-default-ple_off 226 2% CPU in host
>>>>> kernel, 94% spin_lock in guests
>>>>> 3.10-pvticket-ple_on 1942 70% CPU in host
>>>>> kernel, 35% spin_lock in guests
>>>>> 3.10-pvticket-ple_off 8003 11% CPU in host
>>>>> kernel, 70% spin_lock in guests
>>>>> [quite bad all around, but pv-tickets with PLE off the best so far.
>>>>> Still quite a bit off from ideal throughput]
>>>>
>>>> This is again a remarkable improvement (307%).
>>>> This motivates me to add a patch to disable ple when pvspinlock is on.
>>>> probably we can add a hypercall that disables ple in kvm init patch.
>>>> but only problem I see is what if the guests are mixed.
>>>>
>>>> (i.e one guest has pvspinlock support but other does not. Host
>>>> supports pv)
>>>
>>> How about reintroducing the idea to create per-kvm ple_gap,ple_window
>>> state. We were headed down that road when considering a dynamic
>>> window at
>>> one point. Then you can just set a single guest's ple_gap to zero,
>>> which
>>> would lead to PLE being disabled for that guest. We could also revisit
>>> the dynamic window then.
>>>
>> Can be done, but lets understand why ple on is such a big problem. Is it
>> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
>>
>
> The one obvious reason I see is commit awareness inside the guest. for
> under-commit there is no necessity to do PLE, but unfortunately we do.
>
> atleast we return back immediately in case of potential undercommits,
> but we still incur vmexit delay.
> same applies to SPIN_THRESHOLD. SPIN_THRESHOLD should be ideally more
> for undercommit and less for overcommit.
>
> with this patch series SPIN_THRESHOLD is increased to 32k to solely
> avoid under-commit regressions but it would have eaten some amount of
> overcommit performance.
> In summary: excess halt-exit/pl-exit was one main reason for
> undercommit regression. (compared to pl disabled case)
I haven't yet tried these patches...hope to do so sometime soon.
Fwiw... after Raghu's last set of PLE changes, now in 3.10-rc
kernels, I didn't notice much difference in workload performance
between PLE enabled vs. disabled. This is for the under-commit
(+pinned) large-guest case.
Here is a small sampling of the guest exits collected via kvm ftrace for
an OLTP-like workload which was keeping the guest ~85-90% busy on an
8-socket Westmere-EX box (HT off).
TIME_IN_GUEST        71.616293
TIME_ON_HOST          7.764597

MSR_READ              0.000362   0.0%
NMI_WINDOW            0.000002   0.0%
PAUSE_INSTRUCTION     0.158595   2.0%
PENDING_INTERRUPT     0.033779   0.4%
MSR_WRITE             0.001695   0.0%
EXTERNAL_INTERRUPT    3.210867  41.4%
IO_INSTRUCTION        0.000018   0.0%
RDPMC                 0.000067   0.0%
HLT                   2.822523  36.4%
EXCEPTION_NMI         0.008362   0.1%
CR_ACCESS             0.010027   0.1%
APIC_ACCESS           1.518300  19.6%
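As a sanity check, the percentages in the trace are each exit bucket's share of TIME_ON_HOST, rounded to one decimal. A tiny helper (values copied from the trace above; this is just arithmetic, not part of any patch) reproduces the larger buckets:

```c
/* Share of total host time for one exit bucket, in percent. */
static double pct_of_host(double bucket_secs, double host_secs)
{
    return 100.0 * bucket_secs / host_secs;
}
```

For example, pct_of_host(3.210867, 7.764597) is ~41.35, which rounds to the reported 41.4% for EXTERNAL_INTERRUPT; HLT (2.822523) and APIC_ACCESS (1.518300) similarly round to 36.4% and 19.6%.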
[ Don't mean to digress from the topic, but in most of my under-commit +
pinned large-guest experiments with 3.10 kernels (using 2 or 3 different
workloads) the time spent in halt exits is typically much more than the
time spent in PLE exits. Can anything be done to reduce the duration of
those exits or avoid them? ]
>
> 1. dynamic ple window was one solution for PLE, which we can experiment
> further. (at VM level or global).
Is this the case where the dynamic PLE window starts off at a value
more suitable for reducing exits in under-commit (and pinned) cases, and
only when the host OS detects that the degree of under-commit is
shrinking (i.e. moving towards having more vcpus to schedule and hence
becoming over-committed) does it adjust the PLE window to a value more
suitable for the over-commit case? Or is this some different idea?
Thanks
Vinod
> The other experiment I was thinking is to extend spinlock to
> accommodate vcpuid (Linus has opposed that but may be worth a
> try).
>
> 2. Andrew Theurer had patch to reduce double runq lock that I will be
> testing.
>
> I have some older experiments to retry though they did not give
> significant improvements before the PLE handler modified.
>
> Andrew, do you have any other details to add (from perf report that
> you usually take with these experiments)?
>
> .
>
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-26 14:39 ` Chegu Vinod
@ 2013-06-26 15:37 ` Raghavendra K T
0 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-06-26 15:37 UTC (permalink / raw)
To: Chegu Vinod
Cc: Gleb Natapov, habanero, Andrew Jones, mingo, jeremy, x86,
konrad.wilk, hpa, pbonzini, linux-doc, xen-devel, peterz,
mtosatti, stefano.stabellini, andi, attilio.rao, ouyang, gregkh,
agraf, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri
On 06/26/2013 08:09 PM, Chegu Vinod wrote:
> On 6/26/2013 6:40 AM, Raghavendra K T wrote:
>> On 06/26/2013 06:22 PM, Gleb Natapov wrote:
>>> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>>>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>>>> This series replaces the existing paravirtualized spinlock mechanism
>>>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>>>> implementation for both Xen and KVM.
>>>>>>>
>>>>>>> Changes in V9:
>>>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>>>> causing undercommit degradation (after PLE handler improvement).
>>>>>>> - Added kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>>>> - Optimized halt exit path to use PLE handler
>>>>>>>
>>>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions to
>>>>>>> look
>>>>>>> at PLE handler's improvements, various optimizations in PLE handling
>>>>>>> have been tried.
>>>>>>
>>>>>> Sorry for not posting this sooner. I have tested the v9
>>>>>> pv-ticketlock
>>>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I
>>>>>> have
>>>>>> tested these patches with and without PLE, as PLE is still not
>>>>>> scalable
>>>>>> with large VMs.
>>>>>>
>>>>>
>>>>> Hi Andrew,
>>>>>
>>>>> Thanks for testing.
>>>>>
>>>>>> System: x3850X5, 40 cores, 80 threads
>>>>>>
>>>>>>
>>>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>>>> ----------------------------------------------------------
>>>>>> Total
>>>>>> Configuration Throughput(MB/s) Notes
>>>>>>
>>>>>> 3.10-default-ple_on 22945 5% CPU in host
>>>>>> kernel, 2% spin_lock in guests
>>>>>> 3.10-default-ple_off 23184 5% CPU in host
>>>>>> kernel, 2% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on 22895 5% CPU in host
>>>>>> kernel, 2% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off 23051 5% CPU in host
>>>>>> kernel, 2% spin_lock in guests
>>>>>> [all 1x results look good here]
>>>>>
>>>>> Yes. The 1x results look too close
>>>>>
>>>>>>
>>>>>>
>>>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>>>> -----------------------------------------------------------
>>>>>> Total
>>>>>> Configuration Throughput Notes
>>>>>>
>>>>>> 3.10-default-ple_on 6287 55% CPU host
>>>>>> kernel, 17% spin_lock in guests
>>>>>> 3.10-default-ple_off 1849 2% CPU in host
>>>>>> kernel, 95% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on 6691 50% CPU in host
>>>>>> kernel, 15% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off 16464 8% CPU in host
>>>>>> kernel, 33% spin_lock in guests
>>>>>
>>>>> I see 6.426% improvement with ple_on
>>>>> and 161.87% improvement with ple_off. I think this is a very good sign
>>>>> for the patches
>>>>>
>>>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>>>> we still off from ideal throughput (somewhere >20000)]
>>>>>>
>>>>>
>>>>> Okay, The ideal throughput you are referring is getting around atleast
>>>>> 80% of 1x throughput for over-commit. Yes we are still far away from
>>>>> there.
>>>>>
>>>>>>
>>>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>>>> ----------------------------------------------------------
>>>>>> Total
>>>>>> Configuration Throughput Notes
>>>>>>
>>>>>> 3.10-default-ple_on 22736 6% CPU in host
>>>>>> kernel, 3% spin_lock in guests
>>>>>> 3.10-default-ple_off 23377 5% CPU in host
>>>>>> kernel, 3% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on 22471 6% CPU in host
>>>>>> kernel, 3% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off 23445 5% CPU in host
>>>>>> kernel, 3% spin_lock in guests
>>>>>> [1x looking fine here]
>>>>>>
>>>>>
>>>>> I see ple_off is little better here.
>>>>>
>>>>>>
>>>>>> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
>>>>>> ----------------------------------------------------------
>>>>>> Total
>>>>>> Configuration Throughput Notes
>>>>>>
>>>>>> 3.10-default-ple_on 1965 70% CPU in host
>>>>>> kernel, 34% spin_lock in guests
>>>>>> 3.10-default-ple_off 226 2% CPU in host
>>>>>> kernel, 94% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on 1942 70% CPU in host
>>>>>> kernel, 35% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off 8003 11% CPU in host
>>>>>> kernel, 70% spin_lock in guests
>>>>>> [quite bad all around, but pv-tickets with PLE off the best so far.
>>>>>> Still quite a bit off from ideal throughput]
>>>>>
>>>>> This is again a remarkable improvement (307%).
>>>>> This motivates me to add a patch to disable ple when pvspinlock is on.
>>>>> probably we can add a hypercall that disables ple in kvm init patch.
>>>>> but only problem I see is what if the guests are mixed.
>>>>>
>>>>> (i.e one guest has pvspinlock support but other does not. Host
>>>>> supports pv)
>>>>
>>>> How about reintroducing the idea to create per-kvm ple_gap,ple_window
>>>> state. We were headed down that road when considering a dynamic
>>>> window at
>>>> one point. Then you can just set a single guest's ple_gap to zero,
>>>> which
>>>> would lead to PLE being disabled for that guest. We could also revisit
>>>> the dynamic window then.
>>>>
>>> Can be done, but lets understand why ple on is such a big problem. Is it
>>> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
>>>
>>
>> The one obvious reason I see is commit awareness inside the guest. for
>> under-commit there is no necessity to do PLE, but unfortunately we do.
>>
>> atleast we return back immediately in case of potential undercommits,
>> but we still incur vmexit delay.
>> same applies to SPIN_THRESHOLD. SPIN_THRESHOLD should be ideally more
>> for undercommit and less for overcommit.
>>
>> with this patch series SPIN_THRESHOLD is increased to 32k to solely
>> avoid under-commit regressions but it would have eaten some amount of
>> overcommit performance.
>> In summary: excess halt-exit/pl-exit was one main reason for
>> undercommit regression. (compared to pl disabled case)
>
> I haven't yet tried these patches...hope to do so sometime soon.
>
> Fwiw...after Raghu's last set of PLE changes that is now in 3.10-rc
> kernels...I didn't notice much difference in workload performance
> between PLE enabled vs. disabled. This is for under-commit (+pinned)
> large guest case.
>
Hi Vinod,
Thanks for confirming that the PLE-enabled case is now very close to
PLE-disabled.
> Here is a small sampling of the guest exits collected via kvm ftrace for
> an OLTP-like workload which was keeping the guest ~85-90% busy on a 8
> socket Westmere-EX box (HT-off).
>
> TIME_IN_GUEST        71.616293
> TIME_ON_HOST          7.764597
>
> MSR_READ              0.000362   0.0%
> NMI_WINDOW            0.000002   0.0%
> PAUSE_INSTRUCTION     0.158595   2.0%
> PENDING_INTERRUPT     0.033779   0.4%
> MSR_WRITE             0.001695   0.0%
> EXTERNAL_INTERRUPT    3.210867  41.4%
> IO_INSTRUCTION        0.000018   0.0%
> RDPMC                 0.000067   0.0%
> HLT                   2.822523  36.4%
> EXCEPTION_NMI         0.008362   0.1%
> CR_ACCESS             0.010027   0.1%
> APIC_ACCESS           1.518300  19.6%
>
>
>
> [ Don't mean to digress from the topic but in most of my under-commit +
> pinned large guest experiments with 3.10 kernels (using 2 or 3 different
> workloads) the time spent in halt exits are typically much more than the
> time spent in ple exits. Can anything be done to reduce the duration or
> avoid those exits ? ]
>
I would say the patch in this series that uses the PLE handler in the
halt exit path [patch 18, kvm hypervisor: Add directed yield in vcpu
block path] helps with this. That is an independent patch to try out.
>>
>> 1. dynamic ple window was one solution for PLE, which we can experiment
>> further. (at VM level or global).
>
> Is this the case where the dynamic PLE window starts off at a value
> more suitable to reduce exits for under-commit (and pinned) cases and
> only when the host OS detects that the degree of under-commit is
> shrinking (i.e. moving towards having more vcpus to schedule and hence
> getting to be over committed) it adjusts the ple window more suitable to
> the over commit case ? or is this some different idea ?
Yes, we are discussing the same idea.
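For what it's worth, one hypothetical shape of that policy (the names and constants below are made up for illustration, not from any posted patch): grow the per-VM window while PLE exits look wasted (under-commit), and shrink it when a directed yield actually finds a preempted vcpu (over-commit).

```c
/* Hypothetical per-VM dynamic PLE window policy -- a sketch, not kernel code. */
#define PLE_WINDOW_MIN     4096U   /* aggressive: good for over-commit      */
#define PLE_WINDOW_MAX   262144U   /* lazy: good for under-commit / pinned  */
#define PLE_WINDOW_GROW        2U  /* multiplicative grow on a wasted exit  */
#define PLE_WINDOW_SHRINK      2U  /* shrink when directed yield succeeds   */

struct vm_ple_state {
    unsigned int ple_window;       /* PAUSE-loop cycles before a PLE exit */
};

/* Called after each PLE exit, with whether directed yield found a target. */
static void ple_window_update(struct vm_ple_state *vm, int yield_succeeded)
{
    if (yield_succeeded) {
        /* Over-committed: another vcpu was preempted, so exit sooner. */
        vm->ple_window /= PLE_WINDOW_SHRINK;
        if (vm->ple_window < PLE_WINDOW_MIN)
            vm->ple_window = PLE_WINDOW_MIN;
    } else {
        /* Likely under-committed: the exit was wasted, so exit later. */
        vm->ple_window *= PLE_WINDOW_GROW;
        if (vm->ple_window > PLE_WINDOW_MAX)
            vm->ple_window = PLE_WINDOW_MAX;
    }
}
```

Making the state per-VM rather than global would also cover the mixed-guest case discussed above: a pvspinlock guest's window can simply saturate at the maximum (effectively disabling PLE for it) without affecting other guests.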
>
> Thanks
> Vinod
>
>> The other experiment I was thinking is to extend spinlock to
>> accommodate vcpuid (Linus has opposed that but may be worth a
>> try).
>>
>
>
>> 2. Andrew Theurer had patch to reduce double runq lock that I will be
>> testing.
>>
>> I have some older experiments to retry though they did not give
>> significant improvements before the PLE handler modified.
>>
>> Andrew, do you have any other details to add (from perf report that
>> you usually take with these experiments)?
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-26 13:40 ` Raghavendra K T
@ 2013-06-26 16:11 ` Gleb Natapov
-1 siblings, 0 replies; 96+ messages in thread
From: Gleb Natapov @ 2013-06-26 16:11 UTC (permalink / raw)
To: Raghavendra K T
Cc: habanero, Andrew Jones, mingo, jeremy, x86, konrad.wilk, hpa,
pbonzini, linux-doc, xen-devel, peterz, mtosatti,
stefano.stabellini, andi, attilio.rao, ouyang, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri
On Wed, Jun 26, 2013 at 07:10:21PM +0530, Raghavendra K T wrote:
> On 06/26/2013 06:22 PM, Gleb Natapov wrote:
> >On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
> >>On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
> >>>On 06/25/2013 08:20 PM, Andrew Theurer wrote:
> >>>>On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
> >>>>>This series replaces the existing paravirtualized spinlock mechanism
> >>>>>with a paravirtualized ticketlock mechanism. The series provides
> >>>>>implementation for both Xen and KVM.
> >>>>>
> >>>>>Changes in V9:
> >>>>>- Changed spin_threshold to 32k to avoid excess halt exits that are
> >>>>> causing undercommit degradation (after PLE handler improvement).
> >>>>>- Added kvm_irq_delivery_to_apic (suggested by Gleb)
> >>>>>- Optimized halt exit path to use PLE handler
> >>>>>
> >>>>>V8 of PVspinlock was posted last year. After Avi's suggestions to look
> >>>>>at PLE handler's improvements, various optimizations in PLE handling
> >>>>>have been tried.
> >>>>
> >>>>Sorry for not posting this sooner. I have tested the v9 pv-ticketlock
> >>>>patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I have
> >>>>tested these patches with and without PLE, as PLE is still not scalable
> >>>>with large VMs.
> >>>>
> >>>
> >>>Hi Andrew,
> >>>
> >>>Thanks for testing.
> >>>
> >>>>System: x3850X5, 40 cores, 80 threads
> >>>>
> >>>>
> >>>>1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
> >>>>----------------------------------------------------------
> >>>> Total
> >>>>Configuration Throughput(MB/s) Notes
> >>>>
> >>>>3.10-default-ple_on 22945 5% CPU in host kernel, 2% spin_lock in guests
> >>>>3.10-default-ple_off 23184 5% CPU in host kernel, 2% spin_lock in guests
> >>>>3.10-pvticket-ple_on 22895 5% CPU in host kernel, 2% spin_lock in guests
> >>>>3.10-pvticket-ple_off 23051 5% CPU in host kernel, 2% spin_lock in guests
> >>>>[all 1x results look good here]
> >>>
> >>>Yes. The 1x results look too close
> >>>
> >>>>
> >>>>
> >>>>2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
> >>>>-----------------------------------------------------------
> >>>> Total
> >>>>Configuration Throughput Notes
> >>>>
> >>>>3.10-default-ple_on 6287 55% CPU host kernel, 17% spin_lock in guests
> >>>>3.10-default-ple_off 1849 2% CPU in host kernel, 95% spin_lock in guests
> >>>>3.10-pvticket-ple_on 6691 50% CPU in host kernel, 15% spin_lock in guests
> >>>>3.10-pvticket-ple_off 16464 8% CPU in host kernel, 33% spin_lock in guests
> >>>
> >>>I see 6.426% improvement with ple_on
> >>>and 161.87% improvement with ple_off. I think this is a very good sign
> >>> for the patches
> >>>
> >>>>[PLE hinders pv-ticket improvements, but even with PLE off,
> >>>> we still off from ideal throughput (somewhere >20000)]
> >>>>
> >>>
> >>>Okay, The ideal throughput you are referring is getting around atleast
> >>>80% of 1x throughput for over-commit. Yes we are still far away from
> >>>there.
> >>>
> >>>>
> >>>>1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
> >>>>----------------------------------------------------------
> >>>> Total
> >>>>Configuration Throughput Notes
> >>>>
> >>>>3.10-default-ple_on 22736 6% CPU in host kernel, 3% spin_lock in guests
> >>>>3.10-default-ple_off 23377 5% CPU in host kernel, 3% spin_lock in guests
> >>>>3.10-pvticket-ple_on 22471 6% CPU in host kernel, 3% spin_lock in guests
> >>>>3.10-pvticket-ple_off 23445 5% CPU in host kernel, 3% spin_lock in guests
> >>>>[1x looking fine here]
> >>>>
> >>>
> >>>I see ple_off is little better here.
> >>>
> >>>>
> >>>>2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
> >>>>----------------------------------------------------------
> >>>> Total
> >>>>Configuration Throughput Notes
> >>>>
> >>>>3.10-default-ple_on 1965 70% CPU in host kernel, 34% spin_lock in guests
> >>>>3.10-default-ple_off 226 2% CPU in host kernel, 94% spin_lock in guests
> >>>>3.10-pvticket-ple_on 1942 70% CPU in host kernel, 35% spin_lock in guests
> >>>>3.10-pvticket-ple_off 8003 11% CPU in host kernel, 70% spin_lock in guests
> >>>>[quite bad all around, but pv-tickets with PLE off the best so far.
> >>>> Still quite a bit off from ideal throughput]
> >>>
> >>>This is again a remarkable improvement (307%).
> >>>This motivates me to add a patch to disable ple when pvspinlock is on.
> >>>probably we can add a hypercall that disables ple in kvm init patch.
> >>>but only problem I see is what if the guests are mixed.
> >>>
> >>> (i.e one guest has pvspinlock support but other does not. Host
> >>>supports pv)
> >>
> >>How about reintroducing the idea to create per-kvm ple_gap,ple_window
> >>state. We were headed down that road when considering a dynamic window at
> >>one point. Then you can just set a single guest's ple_gap to zero, which
> >>would lead to PLE being disabled for that guest. We could also revisit
> >>the dynamic window then.
> >>
> >Can be done, but lets understand why ple on is such a big problem. Is it
> >possible that ple gap and SPIN_THRESHOLD are not tuned properly?
> >
>
> The one obvious reason I see is commit awareness inside the guest. for
> under-commit there is no necessity to do PLE, but unfortunately we do.
>
> atleast we return back immediately in case of potential undercommits,
> but we still incur vmexit delay.
But why do we? If SPIN_THRESHOLD is short enough (or the PLE window is
long enough) that no PLE exit is generated, we will not go into the PLE
handler at all, no?
--
Gleb.
^ permalink raw reply [flat|nested] 96+ messages in thread
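Gleb's point, that a short enough SPIN_THRESHOLD keeps the vCPU out of the PLE handler entirely, hinges on the shape of the pv-ticketlock slow path: spin up to the threshold, then halt and wait for a kick. A simplified user-space sketch of that interplay (the threshold value mirrors V9 of the series; `kvm_pv_halt_stub` and `advance` are stand-ins invented here, not actual kernel interfaces):

```c
#include <assert.h>

#define SPIN_THRESHOLD (1 << 15)   /* 32k, as chosen in V9 of the series */

static int halt_exits;             /* counts simulated halt-based sleeps */

/* Stand-in for the pv halt/kick path: a real guest would HLT here and
 * later be woken by a kick from the vCPU releasing the lock. */
static void kvm_pv_halt_stub(void)
{
    halt_exits++;
}

/* Spin until *head reaches my_ticket, halting whenever SPIN_THRESHOLD
 * iterations pass without success. advance() models the rest of the
 * system making progress while we busy-wait (cpu_relax() territory). */
static void ticket_wait(int my_ticket, int *head, void (*advance)(int *))
{
    for (;;) {
        for (int spins = 0; spins < SPIN_THRESHOLD; spins++) {
            if (*head == my_ticket)
                return;            /* got the lock without sleeping */
            advance(head);
        }
        kvm_pv_halt_stub();        /* threshold exceeded: sleep until kicked */
    }
}
```

The under-commit concern in the thread shows up directly in this shape: if the lock is released just after the threshold expires, the halt (and the exit it costs) was wasted.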
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-26 16:11 ` Gleb Natapov
@ 2013-06-26 17:54 ` Raghavendra K T
2013-07-09 9:11 ` Raghavendra K T
-1 siblings, 1 reply; 96+ messages in thread
From: Raghavendra K T @ 2013-06-26 17:54 UTC (permalink / raw)
To: Gleb Natapov
Cc: habanero, Andrew Jones, mingo, jeremy, x86, konrad.wilk, hpa,
pbonzini, linux-doc, xen-devel, peterz, mtosatti,
stefano.stabellini, andi, attilio.rao, ouyang, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri
On 06/26/2013 09:41 PM, Gleb Natapov wrote:
> On Wed, Jun 26, 2013 at 07:10:21PM +0530, Raghavendra K T wrote:
>> On 06/26/2013 06:22 PM, Gleb Natapov wrote:
>>> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>>>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>>>> This series replaces the existing paravirtualized spinlock mechanism
>>>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>>>> implementation for both Xen and KVM.
>>>>>>>
>>>>>>> Changes in V9:
>>>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>>>> causing undercommit degradation (after PLE handler improvement).
>>>>>>> - Added kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>>>> - Optimized halt exit path to use PLE handler
>>>>>>>
>>>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions to look
>>>>>>> at PLE handler's improvements, various optimizations in PLE handling
>>>>>>> have been tried.
>>>>>>
>>>>>> Sorry for not posting this sooner. I have tested the v9 pv-ticketlock
>>>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I have
>>>>>> tested these patches with and without PLE, as PLE is still not scalable
>>>>>> with large VMs.
>>>>>>
>>>>>
>>>>> Hi Andrew,
>>>>>
>>>>> Thanks for testing.
>>>>>
>>>>>> System: x3850X5, 40 cores, 80 threads
>>>>>>
>>>>>>
>>>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>>>> ----------------------------------------------------------
>>>>>> Total
>>>>>> Configuration Throughput(MB/s) Notes
>>>>>>
>>>>>> 3.10-default-ple_on 22945 5% CPU in host kernel, 2% spin_lock in guests
>>>>>> 3.10-default-ple_off 23184 5% CPU in host kernel, 2% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on 22895 5% CPU in host kernel, 2% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off 23051 5% CPU in host kernel, 2% spin_lock in guests
>>>>>> [all 1x results look good here]
>>>>>
>>>>> Yes. The 1x results look too close
>>>>>
>>>>>>
>>>>>>
>>>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>>>> -----------------------------------------------------------
>>>>>> Total
>>>>>> Configuration Throughput Notes
>>>>>>
>>>>>> 3.10-default-ple_on 6287 55% CPU host kernel, 17% spin_lock in guests
>>>>>> 3.10-default-ple_off 1849 2% CPU in host kernel, 95% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on 6691 50% CPU in host kernel, 15% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off 16464 8% CPU in host kernel, 33% spin_lock in guests
>>>>>
>>>>> I see 6.426% improvement with ple_on
>>>>> and 161.87% improvement with ple_off. I think this is a very good sign
>>>>> for the patches
>>>>>
>>>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>>>> we still off from ideal throughput (somewhere >20000)]
>>>>>>
>>>>>
>>>>> Okay, The ideal throughput you are referring is getting around atleast
>>>>> 80% of 1x throughput for over-commit. Yes we are still far away from
>>>>> there.
>>>>>
>>>>>>
>>>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>>>> ----------------------------------------------------------
>>>>>> Total
>>>>>> Configuration Throughput Notes
>>>>>>
>>>>>> 3.10-default-ple_on 22736 6% CPU in host kernel, 3% spin_lock in guests
>>>>>> 3.10-default-ple_off 23377 5% CPU in host kernel, 3% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on 22471 6% CPU in host kernel, 3% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off 23445 5% CPU in host kernel, 3% spin_lock in guests
>>>>>> [1x looking fine here]
>>>>>>
>>>>>
>>>>> I see ple_off is little better here.
>>>>>
>>>>>>
>>>>>> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
>>>>>> ----------------------------------------------------------
>>>>>> Total
>>>>>> Configuration Throughput Notes
>>>>>>
>>>>>> 3.10-default-ple_on 1965 70% CPU in host kernel, 34% spin_lock in guests
>>>>>> 3.10-default-ple_off 226 2% CPU in host kernel, 94% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on 1942 70% CPU in host kernel, 35% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off 8003 11% CPU in host kernel, 70% spin_lock in guests
>>>>>> [quite bad all around, but pv-tickets with PLE off the best so far.
>>>>>> Still quite a bit off from ideal throughput]
>>>>>
>>>>> This is again a remarkable improvement (307%).
>>>>> This motivates me to add a patch to disable ple when pvspinlock is on.
>>>>> probably we can add a hypercall that disables ple in kvm init patch.
>>>>> but only problem I see is what if the guests are mixed.
>>>>>
>>>>> (i.e one guest has pvspinlock support but other does not. Host
>>>>> supports pv)
>>>>
>>>> How about reintroducing the idea to create per-kvm ple_gap,ple_window
>>>> state. We were headed down that road when considering a dynamic window at
>>>> one point. Then you can just set a single guest's ple_gap to zero, which
>>>> would lead to PLE being disabled for that guest. We could also revisit
>>>> the dynamic window then.
>>>>
>>> Can be done, but lets understand why ple on is such a big problem. Is it
>>> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
>>>
>>
>> The one obvious reason I see is commit awareness inside the guest. for
>> under-commit there is no necessity to do PLE, but unfortunately we do.
>>
>> atleast we return back immediately in case of potential undercommits,
>> but we still incur vmexit delay.
> But why do we? If SPIN_THRESHOLD will be short enough (or ple windows
> long enough) to not generate PLE exit we will not go into PLE handler
> at all, no?
>
Yes, you are right. The dynamic ple_window was an attempt to solve it.
The problem is that reducing the SPIN_THRESHOLD results in excess halt
exits under low over-commit, while increasing ple_window can be
counter-productive because it affects other busy-wait constructs such
as flush_tlb, AFAIK.
So a dynamically changing SPIN_THRESHOLD as well would be nice.
^ permalink raw reply [flat|nested] 96+ messages in thread
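A dynamic SPIN_THRESHOLD of the kind wished for here could, for instance, follow a multiplicative increase/decrease rule keyed off whether a halt turned out to be wasted. This is purely an illustrative sketch; none of these function names or bounds exist in the series:

```c
#include <assert.h>

/* Illustrative bounds only; the series uses a fixed 32k threshold. */
#define THRESH_MIN (1u << 11)
#define THRESH_MAX (1u << 15)

static unsigned int spin_threshold = THRESH_MAX;

/* A halt was "wasted": the lock came free right after we slept. That
 * smells like under-commit, so spin longer before halting next time. */
static void threshold_on_wasted_halt(void)
{
    spin_threshold = (spin_threshold * 2 > THRESH_MAX) ? THRESH_MAX
                                                       : spin_threshold * 2;
}

/* We spun to the limit and genuinely had to sleep: an over-commit
 * signal, so give up (and yield the pCPU) earlier next time. */
static void threshold_on_real_halt(void)
{
    spin_threshold = (spin_threshold / 2 < THRESH_MIN) ? THRESH_MIN
                                                       : spin_threshold / 2;
}
```

Such a rule would let the same guest kernel converge toward long spins in under-commit (avoiding the excess halt exits mentioned above) and short spins in over-commit, without touching the host's ple_window.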
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-26 17:54 ` Raghavendra K T
@ 2013-07-09 9:11 ` Raghavendra K T
0 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-07-09 9:11 UTC (permalink / raw)
To: Gleb Natapov, Andrew Jones, mingo, ouyang
Cc: habanero, jeremy, x86, konrad.wilk, hpa, pbonzini, linux-doc,
xen-devel, peterz, mtosatti, stefano.stabellini, andi,
attilio.rao, gregkh, agraf, chegu_vinod, torvalds, avi.kivity,
tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
virtualization, srivatsa.vaddagiri
On 06/26/2013 11:24 PM, Raghavendra K T wrote:
> On 06/26/2013 09:41 PM, Gleb Natapov wrote:
>> On Wed, Jun 26, 2013 at 07:10:21PM +0530, Raghavendra K T wrote:
>>> On 06/26/2013 06:22 PM, Gleb Natapov wrote:
>>>> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>>>>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>>>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>>>>> This series replaces the existing paravirtualized spinlock
>>>>>>>> mechanism
>>>>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>>>>> implementation for both Xen and KVM.
>>>>>>>>
>>>>>>>> Changes in V9:
>>>>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>>>>> causing undercommit degradation (after PLE handler
>>>>>>>> improvement).
>>>>>>>> - Added kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>>>>> - Optimized halt exit path to use PLE handler
>>>>>>>>
>>>>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions
>>>>>>>> to look
>>>>>>>> at PLE handler's improvements, various optimizations in PLE
>>>>>>>> handling
>>>>>>>> have been tried.
>>>>>>>
>>>>>>> Sorry for not posting this sooner. I have tested the v9
>>>>>>> pv-ticketlock
>>>>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I
>>>>>>> have
>>>>>>> tested these patches with and without PLE, as PLE is still not
>>>>>>> scalable
>>>>>>> with large VMs.
>>>>>>>
>>>>>>
>>>>>> Hi Andrew,
>>>>>>
>>>>>> Thanks for testing.
>>>>>>
>>>>>>> System: x3850X5, 40 cores, 80 threads
>>>>>>>
>>>>>>>
>>>>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>>>>> ----------------------------------------------------------
>>>>>>> Total
>>>>>>> Configuration Throughput(MB/s) Notes
>>>>>>>
>>>>>>> 3.10-default-ple_on 22945 5% CPU in host
>>>>>>> kernel, 2% spin_lock in guests
>>>>>>> 3.10-default-ple_off 23184 5% CPU in host
>>>>>>> kernel, 2% spin_lock in guests
>>>>>>> 3.10-pvticket-ple_on 22895 5% CPU in host
>>>>>>> kernel, 2% spin_lock in guests
>>>>>>> 3.10-pvticket-ple_off 23051 5% CPU in host
>>>>>>> kernel, 2% spin_lock in guests
>>>>>>> [all 1x results look good here]
>>>>>>
>>>>>> Yes. The 1x results look too close
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>>>>> -----------------------------------------------------------
>>>>>>> Total
>>>>>>> Configuration Throughput Notes
>>>>>>>
>>>>>>> 3.10-default-ple_on 6287 55% CPU host
>>>>>>> kernel, 17% spin_lock in guests
>>>>>>> 3.10-default-ple_off 1849 2% CPU in host
>>>>>>> kernel, 95% spin_lock in guests
>>>>>>> 3.10-pvticket-ple_on 6691 50% CPU in host
>>>>>>> kernel, 15% spin_lock in guests
>>>>>>> 3.10-pvticket-ple_off 16464 8% CPU in host
>>>>>>> kernel, 33% spin_lock in guests
>>>>>>
>>>>>> I see 6.426% improvement with ple_on
>>>>>> and 161.87% improvement with ple_off. I think this is a very good
>>>>>> sign
>>>>>> for the patches
>>>>>>
>>>>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>>>>> we still off from ideal throughput (somewhere >20000)]
>>>>>>>
>>>>>>
>>>>>> Okay, The ideal throughput you are referring is getting around
>>>>>> atleast
>>>>>> 80% of 1x throughput for over-commit. Yes we are still far away from
>>>>>> there.
>>>>>>
>>>>>>>
>>>>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>>>>> ----------------------------------------------------------
>>>>>>> Total
>>>>>>> Configuration Throughput Notes
>>>>>>>
>>>>>>> 3.10-default-ple_on 22736 6% CPU in host
>>>>>>> kernel, 3% spin_lock in guests
>>>>>>> 3.10-default-ple_off 23377 5% CPU in host
>>>>>>> kernel, 3% spin_lock in guests
>>>>>>> 3.10-pvticket-ple_on 22471 6% CPU in host
>>>>>>> kernel, 3% spin_lock in guests
>>>>>>> 3.10-pvticket-ple_off 23445 5% CPU in host
>>>>>>> kernel, 3% spin_lock in guests
>>>>>>> [1x looking fine here]
>>>>>>>
>>>>>>
>>>>>> I see ple_off is little better here.
>>>>>>
>>>>>>>
>>>>>>> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
>>>>>>> ----------------------------------------------------------
>>>>>>> Total
>>>>>>> Configuration Throughput Notes
>>>>>>>
>>>>>>> 3.10-default-ple_on 1965 70% CPU in host
>>>>>>> kernel, 34% spin_lock in guests
>>>>>>> 3.10-default-ple_off 226 2% CPU in host
>>>>>>> kernel, 94% spin_lock in guests
>>>>>>> 3.10-pvticket-ple_on 1942 70% CPU in host
>>>>>>> kernel, 35% spin_lock in guests
>>>>>>> 3.10-pvticket-ple_off 8003 11% CPU in host
>>>>>>> kernel, 70% spin_lock in guests
>>>>>>> [quite bad all around, but pv-tickets with PLE off the best so far.
>>>>>>> Still quite a bit off from ideal throughput]
>>>>>>
>>>>>> This is again a remarkable improvement (307%).
>>>>>> This motivates me to add a patch to disable ple when pvspinlock is
>>>>>> on.
>>>>>> probably we can add a hypercall that disables ple in kvm init patch.
>>>>>> but only problem I see is what if the guests are mixed.
>>>>>>
>>>>>> (i.e one guest has pvspinlock support but other does not. Host
>>>>>> supports pv)
>>>>>
>>>>> How about reintroducing the idea to create per-kvm ple_gap,ple_window
>>>>> state. We were headed down that road when considering a dynamic
>>>>> window at
>>>>> one point. Then you can just set a single guest's ple_gap to zero,
>>>>> which
>>>>> would lead to PLE being disabled for that guest. We could also revisit
>>>>> the dynamic window then.
>>>>>
>>>> Can be done, but lets understand why ple on is such a big problem.
>>>> Is it
>>>> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
>>>>
>>>
>>> The one obvious reason I see is commit awareness inside the guest. for
>>> under-commit there is no necessity to do PLE, but unfortunately we do.
>>>
>>> atleast we return back immediately in case of potential undercommits,
>>> but we still incur vmexit delay.
>> But why do we? If SPIN_THRESHOLD will be short enough (or ple windows
>> long enough) to not generate PLE exit we will not go into PLE handler
>> at all, no?
>>
>
> Yes. you are right. dynamic ple window was an attempt to solve it.
>
> Probelm is, reducing the SPIN_THRESHOLD is resulting in excess halt
> exits in under-commits and increasing ple_window may be sometimes
> counter productive as it affects other busy-wait constructs such as
> flush_tlb AFAIK.
> So if we could have had a dynamically changing SPIN_THRESHOLD too, that
> would be nice.
>
Gleb, Andrew,
I tested the global ple_window change (similar to what I posted
here https://lkml.org/lkml/2012/11/11/14 ),
but did not see a good result. Maybe it is better to go with a per-VM
ple_window.
Gleb,
Can you elaborate a little more on what you have in mind regarding a
per-VM ple_window? (Maintaining part of it as a per-VM variable is
clear to me, but do we have to load it on every guest entry?)
I'll try that idea next.
Ingo, Gleb,
From the results perspective, Andrew Theurer's and Vinod's test results
are pro-pvspinlock.
Could you please help me understand what would make this a mergeable
candidate?
I agree that Jiannan's preemptable-lock idea is promising; we could
evaluate that approach and get the best one into the kernel, and I will
carry on the discussion with Jiannan to improve that patch.
Experiments with it so far have been good on smaller machines, but it
is not scaling to bigger machines.
^ permalink raw reply [flat|nested] 96+ messages in thread
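On the "load it on every guest entry" question, a per-VM ple_window need not cost a vmwrite on every entry if the current VMCS value is cached per vCPU and only rewritten when it changes. A toy sketch under that assumption; every structure and name here is illustrative, not actual KVM code, and setting the window to 0 models disabling PLE for one guest, in the spirit of Andrew's per-kvm ple_gap suggestion:

```c
#include <assert.h>

/* Toy stand-ins for the per-VM state and the VMCS field write. */
struct kvm_toy  { unsigned int ple_window; };              /* per-VM knob */
struct vcpu_toy { struct kvm_toy *kvm; unsigned int vmcs_ple_window; };

static int vmwrites;                       /* count actual VMCS updates */

static void vmcs_write_ple_window(struct vcpu_toy *v, unsigned int w)
{
    v->vmcs_ple_window = w;
    vmwrites++;
}

/* On each guest entry, sync the VMCS copy with the per-VM value, but
 * only pay for the vmwrite when the value actually changed. */
static void vcpu_enter_guest(struct vcpu_toy *v)
{
    if (v->vmcs_ple_window != v->kvm->ple_window)
        vmcs_write_ple_window(v, v->kvm->ple_window);
}
```

In the steady state the comparison is all an entry costs, so the per-VM knob stays cheap even when, say, a pvspinlock-aware guest has its window changed once via hypercall and other guests keep the default.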
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-09 9:11 ` Raghavendra K T
@ 2013-07-10 10:33 ` Gleb Natapov
-1 siblings, 0 replies; 96+ messages in thread
From: Gleb Natapov @ 2013-07-10 10:33 UTC (permalink / raw)
To: Raghavendra K T
Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
stefano.stabellini, andi, attilio.rao, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri
On Tue, Jul 09, 2013 at 02:41:30PM +0530, Raghavendra K T wrote:
> On 06/26/2013 11:24 PM, Raghavendra K T wrote:
> >On 06/26/2013 09:41 PM, Gleb Natapov wrote:
> >>On Wed, Jun 26, 2013 at 07:10:21PM +0530, Raghavendra K T wrote:
> >>>On 06/26/2013 06:22 PM, Gleb Natapov wrote:
> >>>>On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
> >>>>>On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
> >>>>>>On 06/25/2013 08:20 PM, Andrew Theurer wrote:
> >>>>>>>On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
> >>>>>>>>This series replaces the existing paravirtualized spinlock
> >>>>>>>>mechanism
> >>>>>>>>with a paravirtualized ticketlock mechanism. The series provides
> >>>>>>>>implementation for both Xen and KVM.
> >>>>>>>>
> >>>>>>>>Changes in V9:
> >>>>>>>>- Changed spin_threshold to 32k to avoid excess halt exits that are
> >>>>>>>> causing undercommit degradation (after PLE handler
> >>>>>>>>improvement).
> >>>>>>>>- Added kvm_irq_delivery_to_apic (suggested by Gleb)
> >>>>>>>>- Optimized halt exit path to use PLE handler
> >>>>>>>>
> >>>>>>>>V8 of PVspinlock was posted last year. After Avi's suggestions
> >>>>>>>>to look
> >>>>>>>>at PLE handler's improvements, various optimizations in PLE
> >>>>>>>>handling
> >>>>>>>>have been tried.
> >>>>>>>
> >>>>>>>Sorry for not posting this sooner. I have tested the v9
> >>>>>>>pv-ticketlock
> >>>>>>>patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I
> >>>>>>>have
> >>>>>>>tested these patches with and without PLE, as PLE is still not
> >>>>>>>scalable
> >>>>>>>with large VMs.
> >>>>>>>
> >>>>>>
> >>>>>>Hi Andrew,
> >>>>>>
> >>>>>>Thanks for testing.
> >>>>>>
> >>>>>>>System: x3850X5, 40 cores, 80 threads
> >>>>>>>
> >>>>>>>
> >>>>>>>1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
> >>>>>>>----------------------------------------------------------
> >>>>>>> Total
> >>>>>>>Configuration Throughput(MB/s) Notes
> >>>>>>>
> >>>>>>>3.10-default-ple_on 22945 5% CPU in host
> >>>>>>>kernel, 2% spin_lock in guests
> >>>>>>>3.10-default-ple_off 23184 5% CPU in host
> >>>>>>>kernel, 2% spin_lock in guests
> >>>>>>>3.10-pvticket-ple_on 22895 5% CPU in host
> >>>>>>>kernel, 2% spin_lock in guests
> >>>>>>>3.10-pvticket-ple_off 23051 5% CPU in host
> >>>>>>>kernel, 2% spin_lock in guests
> >>>>>>>[all 1x results look good here]
> >>>>>>
> >>>>>>Yes. The 1x results look too close
> >>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
> >>>>>>>-----------------------------------------------------------
> >>>>>>> Total
> >>>>>>>Configuration Throughput Notes
> >>>>>>>
> >>>>>>>3.10-default-ple_on 6287 55% CPU host
> >>>>>>>kernel, 17% spin_lock in guests
> >>>>>>>3.10-default-ple_off 1849 2% CPU in host
> >>>>>>>kernel, 95% spin_lock in guests
> >>>>>>>3.10-pvticket-ple_on 6691 50% CPU in host
> >>>>>>>kernel, 15% spin_lock in guests
> >>>>>>>3.10-pvticket-ple_off 16464 8% CPU in host
> >>>>>>>kernel, 33% spin_lock in guests
> >>>>>>
> >>>>>>I see 6.426% improvement with ple_on
> >>>>>>and 161.87% improvement with ple_off. I think this is a very good
> >>>>>>sign
> >>>>>> for the patches
> >>>>>>
> >>>>>>>[PLE hinders pv-ticket improvements, but even with PLE off,
> >>>>>>> we still off from ideal throughput (somewhere >20000)]
> >>>>>>>
> >>>>>>
> >>>>>>Okay, The ideal throughput you are referring is getting around
> >>>>>>atleast
> >>>>>>80% of 1x throughput for over-commit. Yes we are still far away from
> >>>>>>there.
> >>>>>>
> >>>>>>>
> >>>>>>>1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
> >>>>>>>----------------------------------------------------------
> >>>>>>> Total
> >>>>>>>Configuration Throughput Notes
> >>>>>>>
> >>>>>>>3.10-default-ple_on 22736 6% CPU in host
> >>>>>>>kernel, 3% spin_lock in guests
> >>>>>>>3.10-default-ple_off 23377 5% CPU in host
> >>>>>>>kernel, 3% spin_lock in guests
> >>>>>>>3.10-pvticket-ple_on 22471 6% CPU in host
> >>>>>>>kernel, 3% spin_lock in guests
> >>>>>>>3.10-pvticket-ple_off 23445 5% CPU in host
> >>>>>>>kernel, 3% spin_lock in guests
> >>>>>>>[1x looking fine here]
> >>>>>>>
> >>>>>>
> >>>>>>I see ple_off is little better here.
> >>>>>>
> >>>>>>>
> >>>>>>>2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
> >>>>>>>----------------------------------------------------------
> >>>>>>> Total
> >>>>>>>Configuration Throughput Notes
> >>>>>>>
> >>>>>>>3.10-default-ple_on 1965 70% CPU in host
> >>>>>>>kernel, 34% spin_lock in guests
> >>>>>>>3.10-default-ple_off 226 2% CPU in host
> >>>>>>>kernel, 94% spin_lock in guests
> >>>>>>>3.10-pvticket-ple_on 1942 70% CPU in host
> >>>>>>>kernel, 35% spin_lock in guests
> >>>>>>>3.10-pvticket-ple_off 8003 11% CPU in host
> >>>>>>>kernel, 70% spin_lock in guests
> >>>>>>>[quite bad all around, but pv-tickets with PLE off the best so far.
> >>>>>>> Still quite a bit off from ideal throughput]
> >>>>>>
> >>>>>>This is again a remarkable improvement (307%).
> >>>>>>This motivates me to add a patch to disable ple when pvspinlock is
> >>>>>>on.
> >>>>>>probably we can add a hypercall that disables ple in kvm init patch.
> >>>>>>but only problem I see is what if the guests are mixed.
> >>>>>>
> >>>>>> (i.e one guest has pvspinlock support but other does not. Host
> >>>>>>supports pv)
> >>>>>
> >>>>>How about reintroducing the idea to create per-kvm ple_gap,ple_window
> >>>>>state. We were headed down that road when considering a dynamic
> >>>>>window at
> >>>>>one point. Then you can just set a single guest's ple_gap to zero,
> >>>>>which
> >>>>>would lead to PLE being disabled for that guest. We could also revisit
> >>>>>the dynamic window then.
> >>>>>
> >>>>Can be done, but lets understand why ple on is such a big problem.
> >>>>Is it
> >>>>possible that ple gap and SPIN_THRESHOLD are not tuned properly?
> >>>>
> >>>
> >>>The one obvious reason I see is commit awareness inside the guest. for
> >>>under-commit there is no necessity to do PLE, but unfortunately we do.
> >>>
> >>>atleast we return back immediately in case of potential undercommits,
> >>>but we still incur vmexit delay.
> >>But why do we? If SPIN_THRESHOLD will be short enough (or ple windows
> >>long enough) to not generate PLE exit we will not go into PLE handler
> >>at all, no?
> >>
> >
> >Yes. you are right. dynamic ple window was an attempt to solve it.
> >
> >Probelm is, reducing the SPIN_THRESHOLD is resulting in excess halt
> >exits in under-commits and increasing ple_window may be sometimes
> >counter productive as it affects other busy-wait constructs such as
> >flush_tlb AFAIK.
> >So if we could have had a dynamically changing SPIN_THRESHOLD too, that
> >would be nice.
> >
>
> Gleb, Andrew,
> I tested with the global ple window change (similar to what I posted
> here https://lkml.org/lkml/2012/11/11/14 ),
This does not look global; it changes PLE per vCPU.
> But did not see good result. May be it is good to go with per VM
> ple_window.
>
> Gleb,
> Can you elaborate little more on what you have in mind regarding per
> VM ple_window. (maintaining part of it as a per vm variable is clear
> to
> me), but is it that we have to load that every time of guest entry?
>
Only when it changes; that shouldn't be too often, no?
> I 'll try that idea next.
>
> Ingo, Gleb,
>
> From the results perspective, Andrew Theurer, Vinod's test results are
> pro-pvspinlock.
> Could you please help me to know what will make it a mergeable
> candidate?.
>
I need to spend more time reviewing it :) The problem with PV interfaces
is that they are easy to add but hard to get rid of if a better solution
(HW or otherwise) appears.
> I agree that Jiannan's Preemptable Lock idea is promising and we could
> evaluate that approach, and make the best one get into kernel and also
> will carry on discussion with Jiannan to improve that patch.
That would be great. The work is stalled from what I can tell.
> Experiments so far have been good for smaller machine but it is not
> scaling for bigger machines.
--
Gleb.
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-10 10:33 ` Gleb Natapov
@ 2013-07-10 10:40 ` Peter Zijlstra
-1 siblings, 0 replies; 96+ messages in thread
From: Peter Zijlstra @ 2013-07-10 10:40 UTC (permalink / raw)
To: Gleb Natapov
Cc: Raghavendra K T, Andrew Jones, mingo, ouyang, habanero, jeremy,
x86, konrad.wilk, hpa, pbonzini, linux-doc, xen-devel, mtosatti,
stefano.stabellini, andi, attilio.rao, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri
On Wed, Jul 10, 2013 at 01:33:25PM +0300, Gleb Natapov wrote:
Here's an idea, trim the damn email ;-) -- not only directed at gleb.
> > Ingo, Gleb,
> >
> > From the results perspective, Andrew Theurer, Vinod's test results are
> > pro-pvspinlock.
> > Could you please help me to know what will make it a mergeable
> > candidate?.
> >
> I need to spend more time reviewing it :) The problem with PV interfaces
> is that they are easy to add but hard to get rid of if better solution
> (HW or otherwise) appears.
How so? Just make sure the registration for the PV interface is optional; that
is, allow it to fail. A guest that fails the PV setup will either have to try
another PV interface or fall back to 'native'.
> > I agree that Jiannan's Preemptable Lock idea is promising and we could
> > evaluate that approach, and make the best one get into kernel and also
> > will carry on discussion with Jiannan to improve that patch.
> That would be great. The work is stalled from what I can tell.
I absolutely hated that stuff because it wrecked the native code.
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-10 10:40 ` Peter Zijlstra
@ 2013-07-10 10:47 ` Gleb Natapov
-1 siblings, 0 replies; 96+ messages in thread
From: Gleb Natapov @ 2013-07-10 10:47 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Raghavendra K T, Andrew Jones, mingo, ouyang, habanero, jeremy,
x86, konrad.wilk, hpa, pbonzini, linux-doc, xen-devel, mtosatti,
stefano.stabellini, andi, attilio.rao, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri
On Wed, Jul 10, 2013 at 12:40:47PM +0200, Peter Zijlstra wrote:
> On Wed, Jul 10, 2013 at 01:33:25PM +0300, Gleb Natapov wrote:
>
> Here's an idea, trim the damn email ;-) -- not only directed at gleb.
>
Good idea.
> > > Ingo, Gleb,
> > >
> > > From the results perspective, Andrew Theurer, Vinod's test results are
> > > pro-pvspinlock.
> > > Could you please help me to know what will make it a mergeable
> > > candidate?.
> > >
> > I need to spend more time reviewing it :) The problem with PV interfaces
> > is that they are easy to add but hard to get rid of if better solution
> > (HW or otherwise) appears.
>
> How so? Just make sure the registration for the PV interface is optional; that
> is, allow it to fail. A guest that fails the PV setup will either have to try
> another PV interface or fall back to 'native'.
>
We have to carry the PV interface around for live migration purposes; it
cannot disappear under a running guest.
> > > I agree that Jiannan's Preemptable Lock idea is promising and we could
> > > evaluate that approach, and make the best one get into kernel and also
> > > will carry on discussion with Jiannan to improve that patch.
> > That would be great. The work is stalled from what I can tell.
>
> I absolutely hated that stuff because it wrecked the native code.
Yes, the idea was to hide it from native code behind PV hooks.
--
Gleb.
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-10 10:47 ` Gleb Natapov
@ 2013-07-10 11:28 ` Raghavendra K T
-1 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-07-10 11:28 UTC (permalink / raw)
To: Gleb Natapov
Cc: Peter Zijlstra, Andrew Jones, mingo, ouyang, habanero, jeremy,
x86, konrad.wilk, hpa, pbonzini, linux-doc, xen-devel, mtosatti,
stefano.stabellini, andi, attilio.rao, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri
On 07/10/2013 04:17 PM, Gleb Natapov wrote:
> On Wed, Jul 10, 2013 at 12:40:47PM +0200, Peter Zijlstra wrote:
>> On Wed, Jul 10, 2013 at 01:33:25PM +0300, Gleb Natapov wrote:
>>
>> Here's an idea, trim the damn email ;-) -- not only directed at gleb.
>>
> Good idea.
>
>>>> Ingo, Gleb,
>>>>
>>>> From the results perspective, Andrew Theurer, Vinod's test results are
>>>> pro-pvspinlock.
>>>> Could you please help me to know what will make it a mergeable
>>>> candidate?.
>>>>
>>> I need to spend more time reviewing it :) The problem with PV interfaces
>>> is that they are easy to add but hard to get rid of if better solution
>>> (HW or otherwise) appears.
>>
>> How so? Just make sure the registration for the PV interface is optional; that
>> is, allow it to fail. A guest that fails the PV setup will either have to try
>> another PV interface or fall back to 'native'.
>>
> We have to carry PV around for live migration purposes. PV interface
> cannot disappear under a running guest.
>
IIRC, the only requirement was that the running state of the vCPU be
retained. This was addressed by
[PATCH RFC V10 13/18] kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl
to aid migration.
I would like to know more if I missed something here.
>>>> I agree that Jiannan's Preemptable Lock idea is promising and we could
>>>> evaluate that approach, and make the best one get into kernel and also
>>>> will carry on discussion with Jiannan to improve that patch.
>>> That would be great. The work is stalled from what I can tell.
>>
>> I absolutely hated that stuff because it wrecked the native code.
> Yes, the idea was to hide it from native code behind PV hooks.
>
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-10 11:28 ` Raghavendra K T
@ 2013-07-10 11:29 ` Gleb Natapov
-1 siblings, 0 replies; 96+ messages in thread
From: Gleb Natapov @ 2013-07-10 11:29 UTC (permalink / raw)
To: Raghavendra K T
Cc: Peter Zijlstra, Andrew Jones, mingo, ouyang, habanero, jeremy,
x86, konrad.wilk, hpa, pbonzini, linux-doc, xen-devel, mtosatti,
stefano.stabellini, andi, attilio.rao, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri
On Wed, Jul 10, 2013 at 04:58:29PM +0530, Raghavendra K T wrote:
> On 07/10/2013 04:17 PM, Gleb Natapov wrote:
> >On Wed, Jul 10, 2013 at 12:40:47PM +0200, Peter Zijlstra wrote:
> >>On Wed, Jul 10, 2013 at 01:33:25PM +0300, Gleb Natapov wrote:
> >>
> >>Here's an idea, trim the damn email ;-) -- not only directed at gleb.
> >>
> >Good idea.
> >
> >>>>Ingo, Gleb,
> >>>>
> >>>> From the results perspective, Andrew Theurer, Vinod's test results are
> >>>>pro-pvspinlock.
> >>>>Could you please help me to know what will make it a mergeable
> >>>>candidate?.
> >>>>
> >>>I need to spend more time reviewing it :) The problem with PV interfaces
> >>>is that they are easy to add but hard to get rid of if better solution
> >>>(HW or otherwise) appears.
> >>
> >>How so? Just make sure the registration for the PV interface is optional; that
> >>is, allow it to fail. A guest that fails the PV setup will either have to try
> >>another PV interface or fall back to 'native'.
> >>
> >We have to carry PV around for live migration purposes. PV interface
> >cannot disappear under a running guest.
> >
>
> IIRC, The only requirement was running state of the vcpu to be retained.
> This was addressed by
> [PATCH RFC V10 13/18] kvm : Fold pv_unhalt flag into GET_MP_STATE
> ioctl to aid migration.
>
> I would have to know more if I missed something here.
>
I was not talking about the state that has to be migrated, but about the
HV<->guest interface that has to be preserved across migration.
--
Gleb.
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-10 11:28 ` Raghavendra K T
@ 2013-07-10 11:40 ` Raghavendra K T
-1 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-07-10 11:40 UTC (permalink / raw)
To: Gleb Natapov, Peter Zijlstra
Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
hpa, pbonzini, linux-doc, xen-devel, mtosatti,
stefano.stabellini, andi, attilio.rao, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel, riel,
virtualization, srivatsa.vaddagiri
dropping stephen because of bounce
On 07/10/2013 04:58 PM, Raghavendra K T wrote:
> On 07/10/2013 04:17 PM, Gleb Natapov wrote:
>> On Wed, Jul 10, 2013 at 12:40:47PM +0200, Peter Zijlstra wrote:
>>> On Wed, Jul 10, 2013 at 01:33:25PM +0300, Gleb Natapov wrote:
>>>
>>> Here's an idea, trim the damn email ;-) -- not only directed at gleb.
>>>
>> Good idea.
>>
>>>>> Ingo, Gleb,
>>>>>
>>>>> From the results perspective, Andrew Theurer, Vinod's test results
>>>>> are
>>>>> pro-pvspinlock.
>>>>> Could you please help me to know what will make it a mergeable
>>>>> candidate?.
>>>>>
>>>> I need to spend more time reviewing it :) The problem with PV
>>>> interfaces
>>>> is that they are easy to add but hard to get rid of if better solution
>>>> (HW or otherwise) appears.
>>>
>>> How so? Just make sure the registration for the PV interface is
>>> optional; that
>>> is, allow it to fail. A guest that fails the PV setup will either
>>> have to try
>>> another PV interface or fall back to 'native'.
>>>
Forgot to add: yes, currently pvspinlocks are not enabled by default, and
we also have the jump_label mechanism to enable them.
[...]
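[Editor's note: the jump_label gating mentioned above can be illustrated with a minimal userspace sketch. All names here (`kvm_spinlock_init_sim`, `lock_slowpath_sim`) are illustrative stand-ins, not the actual patch code; in the series a static_key defaults to off and is flipped at boot only when the host advertises the PV feature, so native guests never take the PV slow path.]

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Stand-in for a struct static_key that defaults to "off". */
static bool pv_ticketlocks_enabled = false;

/* Mirrors the boot-time decision: enable the key only when the host
 * offers the PV interface (static_key_slow_inc() in the kernel), so a
 * guest on an old host keeps the native behaviour. */
void kvm_spinlock_init_sim(bool host_has_pv_unhalt)
{
    if (host_has_pv_unhalt)
        pv_ticketlocks_enabled = true;
}

/* The lock slow path branches on the key (static_key_false() test). */
const char *lock_slowpath_sim(void)
{
    if (pv_ticketlocks_enabled)
        return "pv-halt";       /* halt in the host until kicked */
    return "native-spin";       /* plain ticket spinning */
}
```

With the key off (the default), the slow path is the native one; only after an explicit opt-in does the PV path get taken.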
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-10 10:47 ` Gleb Natapov
@ 2013-07-10 15:03 ` Konrad Rzeszutek Wilk
-1 siblings, 0 replies; 96+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-07-10 15:03 UTC (permalink / raw)
To: Gleb Natapov
Cc: Peter Zijlstra, Raghavendra K T, Andrew Jones, mingo, ouyang,
habanero, jeremy, x86, hpa, pbonzini, linux-doc, xen-devel,
mtosatti, stefano.stabellini, andi, attilio.rao, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri
On Wed, Jul 10, 2013 at 01:47:17PM +0300, Gleb Natapov wrote:
> On Wed, Jul 10, 2013 at 12:40:47PM +0200, Peter Zijlstra wrote:
> > On Wed, Jul 10, 2013 at 01:33:25PM +0300, Gleb Natapov wrote:
> >
> > Here's an idea, trim the damn email ;-) -- not only directed at gleb.
> >
> Good idea.
>
> > > > Ingo, Gleb,
> > > >
> > > > From the results perspective, Andrew Theurer, Vinod's test results are
> > > > pro-pvspinlock.
> > > > Could you please help me to know what will make it a mergeable
> > > > candidate?.
> > > >
> > > I need to spend more time reviewing it :) The problem with PV interfaces
> > > is that they are easy to add but hard to get rid of if better solution
> > > (HW or otherwise) appears.
> >
> > How so? Just make sure the registration for the PV interface is optional; that
> > is, allow it to fail. A guest that fails the PV setup will either have to try
> > another PV interface or fall back to 'native'.
> >
> We have to carry PV around for live migration purposes. PV interface
> cannot disappear under a running guest.
Why can't it? This is the same as handling say XSAVE operations. Some hosts
might have it - some might not. It is the job of the toolstack to make sure
to not migrate to the hosts which don't have it. Or bind the guest to the
lowest interface (so don't enable the PV interface if the other hosts in the
cluster can't support this flag)?
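[Editor's note: the toolstack-side policy suggested here amounts to exposing a feature bit to the guest only if every host in the migration pool supports it. A minimal sketch, with hypothetical names (`FEAT_PV_UNHALT`, `pool_guest_features`) not taken from any real toolstack:]

```c
#include <assert.h>

/* Hypothetical feature bit for the PV unhalt interface. */
#define FEAT_PV_UNHALT (1u << 0)

/* Compute the lowest-common-denominator feature mask across all hosts
 * in the pool; a guest is only given features every host can provide,
 * so the interface can never disappear under it after migration. */
unsigned int pool_guest_features(const unsigned int *host_feats, int n)
{
    unsigned int mask = ~0u;
    for (int i = 0; i < n; i++)
        mask &= host_feats[i];
    return mask;
}
```

If even one host in the pool lacks the bit, the guest never sees it and migration stays safe.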
>
> > > > I agree that Jiannan's Preemptable Lock idea is promising and we could
> > > > evaluate that approach, and make the best one get into kernel and also
> > > > will carry on discussion with Jiannan to improve that patch.
> > > That would be great. The work is stalled from what I can tell.
> >
> > I absolutely hated that stuff because it wrecked the native code.
> Yes, the idea was to hide it from native code behind PV hooks.
>
> --
> Gleb.
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-10 15:03 ` Konrad Rzeszutek Wilk
@ 2013-07-10 15:16 ` Gleb Natapov
-1 siblings, 0 replies; 96+ messages in thread
From: Gleb Natapov @ 2013-07-10 15:16 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk
Cc: Peter Zijlstra, Raghavendra K T, Andrew Jones, mingo, ouyang,
habanero, jeremy, x86, hpa, pbonzini, linux-doc, xen-devel,
mtosatti, stefano.stabellini, andi, attilio.rao, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri
On Wed, Jul 10, 2013 at 11:03:15AM -0400, Konrad Rzeszutek Wilk wrote:
> On Wed, Jul 10, 2013 at 01:47:17PM +0300, Gleb Natapov wrote:
> > On Wed, Jul 10, 2013 at 12:40:47PM +0200, Peter Zijlstra wrote:
> > > On Wed, Jul 10, 2013 at 01:33:25PM +0300, Gleb Natapov wrote:
> > >
> > > Here's an idea, trim the damn email ;-) -- not only directed at gleb.
> > >
> > Good idea.
> >
> > > > > Ingo, Gleb,
> > > > >
> > > > > From the results perspective, Andrew Theurer, Vinod's test results are
> > > > > pro-pvspinlock.
> > > > > Could you please help me to know what will make it a mergeable
> > > > > candidate?.
> > > > >
> > > > I need to spend more time reviewing it :) The problem with PV interfaces
> > > > is that they are easy to add but hard to get rid of if better solution
> > > > (HW or otherwise) appears.
> > >
> > > How so? Just make sure the registration for the PV interface is optional; that
> > > is, allow it to fail. A guest that fails the PV setup will either have to try
> > > another PV interface or fall back to 'native'.
> > >
> > We have to carry PV around for live migration purposes. PV interface
> > cannot disappear under a running guest.
>
> Why can't it? This is the same as handling say XSAVE operations. Some hosts
> might have it - some might not. It is the job of the toolstack to make sure
> to not migrate to the hosts which don't have it. Or bound the guest to the
> lowest interface (so don't enable the PV interface if the other hosts in the
> cluster can't support this flag)?
XSAVE is a HW feature and it is not going to disappear under you after a
software upgrade. Upgrading the kernel on part of your hosts and no
longer being able to migrate to them is not something people who use
live migration expect. In practice it means that updating all hosts in a
datacenter to a newer kernel is no longer possible without rebooting VMs.
--
Gleb.
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-10 15:16 ` Gleb Natapov
@ 2013-07-11 0:12 ` Konrad Rzeszutek Wilk
-1 siblings, 0 replies; 96+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-07-11 0:12 UTC (permalink / raw)
To: Gleb Natapov
Cc: Peter Zijlstra, Raghavendra K T, Andrew Jones, mingo, ouyang,
habanero, jeremy, x86, hpa, pbonzini, linux-doc, xen-devel,
mtosatti, stefano.stabellini, andi, attilio.rao, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri
Gleb Natapov <gleb@redhat.com> wrote:
>On Wed, Jul 10, 2013 at 11:03:15AM -0400, Konrad Rzeszutek Wilk wrote:
>> On Wed, Jul 10, 2013 at 01:47:17PM +0300, Gleb Natapov wrote:
>> > On Wed, Jul 10, 2013 at 12:40:47PM +0200, Peter Zijlstra wrote:
>> > > On Wed, Jul 10, 2013 at 01:33:25PM +0300, Gleb Natapov wrote:
>> > >
>> > > Here's an idea, trim the damn email ;-) -- not only directed at
>gleb.
>> > >
>> > Good idea.
>> >
>> > > > > Ingo, Gleb,
>> > > > >
>> > > > > From the results perspective, Andrew Theurer, Vinod's test
>results are
>> > > > > pro-pvspinlock.
>> > > > > Could you please help me to know what will make it a
>mergeable
>> > > > > candidate?.
>> > > > >
>> > > > I need to spend more time reviewing it :) The problem with PV
>interfaces
>> > > > is that they are easy to add but hard to get rid of if better
>solution
>> > > > (HW or otherwise) appears.
>> > >
>> > > How so? Just make sure the registration for the PV interface is
>optional; that
>> > > is, allow it to fail. A guest that fails the PV setup will either
>have to try
>> > > another PV interface or fall back to 'native'.
>> > >
>> > We have to carry PV around for live migration purposes. PV
>interface
>> > cannot disappear under a running guest.
>>
>> Why can't it? This is the same as handling say XSAVE operations. Some
>hosts
>> might have it - some might not. It is the job of the toolstack to
>make sure
>> to not migrate to the hosts which don't have it. Or bound the guest
>to the
>> lowest interface (so don't enable the PV interface if the other hosts
>in the
>> cluster can't support this flag)?
>XSAVE is HW feature and it is not going disappear under you after
>software
>upgrade. Upgrading kernel on part of your hosts and no longer been
>able to migrate to them is not something people who use live migration
>expect. In practise it means that updating all hosts in a datacenter to
>newer kernel is no longer possible without rebooting VMs.
>
>--
> Gleb.
I see. Perhaps then, if the hardware becomes much better at this, another PV interface can be provided which will use the static_key to turn off the PV spinlock and use the bare-metal version (or perhaps some form of super elision locks). That does mean the host has to do something when this PV interface is invoked for the older guests.
Anyhow, that said, I think the benefits are pretty neat right now, and worrying about whether the hardware vendors will provide something new is not benefiting users. What perhaps then needs to be addressed is how to obsolete this mechanism if the hardware becomes superb?
--
Sent from my Android phone. Please excuse my brevity.
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-10 10:33 ` Gleb Natapov
(?)
(?)
@ 2013-07-10 11:24 ` Raghavendra K T
-1 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-07-10 11:24 UTC (permalink / raw)
To: Gleb Natapov
Cc: jeremy, gregkh, kvm, linux-doc, peterz, riel, virtualization,
andi, hpa, stefano.stabellini, xen-devel, x86, mingo, habanero,
Andrew Jones, konrad.wilk, ouyang, avi.kivity, tglx, chegu_vinod,
linux-kernel, srivatsa.vaddagiri, attilio.rao, pbonzini,
torvalds, stephan.diestelhorst
On 07/10/2013 04:03 PM, Gleb Natapov wrote:
> On Tue, Jul 09, 2013 at 02:41:30PM +0530, Raghavendra K T wrote:
>> On 06/26/2013 11:24 PM, Raghavendra K T wrote:
>>> On 06/26/2013 09:41 PM, Gleb Natapov wrote:
>>>> On Wed, Jun 26, 2013 at 07:10:21PM +0530, Raghavendra K T wrote:
>>>>> On 06/26/2013 06:22 PM, Gleb Natapov wrote:
>>>>>> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>>>>>>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>>>>>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>>>>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>>>>>>> This series replaces the existing paravirtualized spinlock
>>>>>>>>>> mechanism
>>>>>>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>>>>>>> implementation for both Xen and KVM.
>>>>>>>>>>
>>>>>>>>>> Changes in V9:
>>>>>>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>>>>>>> causing undercommit degradation (after PLE handler
>>>>>>>>>> improvement).
>>>>>>>>>> - Added kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>>>>>>> - Optimized halt exit path to use PLE handler
>>>>>>>>>>
>>>>>>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions
>>>>>>>>>> to look
>>>>>>>>>> at PLE handler's improvements, various optimizations in PLE
>>>>>>>>>> handling
>>>>>>>>>> have been tried.
>>>>>>>>>
>>>>>>>>> Sorry for not posting this sooner. I have tested the v9
>>>>>>>>> pv-ticketlock
>>>>>>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I
>>>>>>>>> have
>>>>>>>>> tested these patches with and without PLE, as PLE is still not
>>>>>>>>> scalable
>>>>>>>>> with large VMs.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Hi Andrew,
>>>>>>>>
>>>>>>>> Thanks for testing.
>>>>>>>>
>>>>>>>>> System: x3850X5, 40 cores, 80 threads
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>>>>>>> ----------------------------------------------------------
>>>>>>>>> Total
>>>>>>>>> Configuration Throughput(MB/s) Notes
>>>>>>>>>
>>>>>>>>> 3.10-default-ple_on 22945 5% CPU in host
>>>>>>>>> kernel, 2% spin_lock in guests
>>>>>>>>> 3.10-default-ple_off 23184 5% CPU in host
>>>>>>>>> kernel, 2% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_on 22895 5% CPU in host
>>>>>>>>> kernel, 2% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_off 23051 5% CPU in host
>>>>>>>>> kernel, 2% spin_lock in guests
>>>>>>>>> [all 1x results look good here]
>>>>>>>>
>>>>>>>> Yes. The 1x results look too close
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>>>>>>> -----------------------------------------------------------
>>>>>>>>> Total
>>>>>>>>> Configuration Throughput Notes
>>>>>>>>>
>>>>>>>>> 3.10-default-ple_on 6287 55% CPU host
>>>>>>>>> kernel, 17% spin_lock in guests
>>>>>>>>> 3.10-default-ple_off 1849 2% CPU in host
>>>>>>>>> kernel, 95% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_on 6691 50% CPU in host
>>>>>>>>> kernel, 15% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_off 16464 8% CPU in host
>>>>>>>>> kernel, 33% spin_lock in guests
>>>>>>>>
>>>>>>>> I see 6.426% improvement with ple_on
>>>>>>>> and 161.87% improvement with ple_off. I think this is a very good
>>>>>>>> sign
>>>>>>>> for the patches
>>>>>>>>
>>>>>>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>>>>>>> we still off from ideal throughput (somewhere >20000)]
>>>>>>>>>
>>>>>>>>
>>>>>>>> Okay, The ideal throughput you are referring is getting around
>>>>>>>> atleast
>>>>>>>> 80% of 1x throughput for over-commit. Yes we are still far away from
>>>>>>>> there.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>>>>>>> ----------------------------------------------------------
>>>>>>>>> Total
>>>>>>>>> Configuration Throughput Notes
>>>>>>>>>
>>>>>>>>> 3.10-default-ple_on 22736 6% CPU in host
>>>>>>>>> kernel, 3% spin_lock in guests
>>>>>>>>> 3.10-default-ple_off 23377 5% CPU in host
>>>>>>>>> kernel, 3% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_on 22471 6% CPU in host
>>>>>>>>> kernel, 3% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_off 23445 5% CPU in host
>>>>>>>>> kernel, 3% spin_lock in guests
>>>>>>>>> [1x looking fine here]
>>>>>>>>>
>>>>>>>>
>>>>>>>> I see ple_off is little better here.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
>>>>>>>>> ----------------------------------------------------------
>>>>>>>>> Total
>>>>>>>>> Configuration Throughput Notes
>>>>>>>>>
>>>>>>>>> 3.10-default-ple_on 1965 70% CPU in host
>>>>>>>>> kernel, 34% spin_lock in guests
>>>>>>>>> 3.10-default-ple_off 226 2% CPU in host
>>>>>>>>> kernel, 94% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_on 1942 70% CPU in host
>>>>>>>>> kernel, 35% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_off 8003 11% CPU in host
>>>>>>>>> kernel, 70% spin_lock in guests
>>>>>>>>> [quite bad all around, but pv-tickets with PLE off the best so far.
>>>>>>>>> Still quite a bit off from ideal throughput]
>>>>>>>>
>>>>>>>> This is again a remarkable improvement (307%).
>>>>>>>> This motivates me to add a patch to disable ple when pvspinlock is
>>>>>>>> on.
>>>>>>>> probably we can add a hypercall that disables ple in kvm init patch.
>>>>>>>> but only problem I see is what if the guests are mixed.
>>>>>>>>
>>>>>>>> (i.e one guest has pvspinlock support but other does not. Host
>>>>>>>> supports pv)
>>>>>>>
>>>>>>> How about reintroducing the idea to create per-kvm ple_gap,ple_window
>>>>>>> state. We were headed down that road when considering a dynamic
>>>>>>> window at
>>>>>>> one point. Then you can just set a single guest's ple_gap to zero,
>>>>>>> which
>>>>>>> would lead to PLE being disabled for that guest. We could also revisit
>>>>>>> the dynamic window then.
>>>>>>>
>>>>>> Can be done, but lets understand why ple on is such a big problem.
>>>>>> Is it
>>>>>> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
>>>>>>
>>>>>
>>>>> The one obvious reason I see is commit awareness inside the guest. for
>>>>> under-commit there is no necessity to do PLE, but unfortunately we do.
>>>>>
>>>>> atleast we return back immediately in case of potential undercommits,
>>>>> but we still incur vmexit delay.
>>>> But why do we? If SPIN_THRESHOLD will be short enough (or ple windows
>>>> long enough) to not generate PLE exit we will not go into PLE handler
>>>> at all, no?
>>>>
>>>
>>> Yes. you are right. dynamic ple window was an attempt to solve it.
>>>
>>> Problem is, reducing the SPIN_THRESHOLD results in excess halt
>>> exits in under-commit, and increasing ple_window may sometimes be
>>> counterproductive as it affects other busy-wait constructs such as
>>> flush_tlb AFAIK.
>>> So if we could have had a dynamically changing SPIN_THRESHOLD too, that
>>> would be nice.
>>>
>>
>> Gleb, Andrew,
>> I tested with the global ple window change (similar to what I posted
>> here https://lkml.org/lkml/2012/11/11/14 ),
> This does not look global. It changes PLE per vcpu.
>
>> But did not see good result. May be it is good to go with per VM
>> ple_window.
>>
>> Gleb,
>> Can you elaborate little more on what you have in mind regarding per
>> VM ple_window. (maintaining part of it as a per vm variable is clear
>> to
>> me), but is it that we have to load that every time of guest entry?
>>
> Only when it changes, which shouldn't be too often, no?
>
>> I 'll try that idea next.
>>
>> Ingo, Gleb,
>>
>> From the results perspective, Andrew Theurer, Vinod's test results are
>> pro-pvspinlock.
>> Could you please help me to know what will make it a mergeable
>> candidate?.
>>
> I need to spend more time reviewing it :) The problem with PV interfaces
> is that they are easy to add but hard to get rid of if better solution
> (HW or otherwise) appears.
In fact, Avi had acked the whole V8 series, but it was delayed to see how
the PLE improvements would affect it.
The only addition from that series has been
1. tuning the SPIN_THRESHOLD to 32k (from 2k)
and
2. the halt handler now calls vcpu_on_spin to take the advantage of PLE
improvements. (this can also go as an independent patch into kvm)
The rationale for making SPIN_THRESHOLD 32k needs a longer explanation.
Before the PLE improvements, as you know, the kvm undercommit scenario
was much worse in PLE-enabled cases (compared to PLE-disabled cases).
The pvspinlock patches behaved equally badly in undercommit. Both had a
similar cause, so in the end there was no degradation w.r.t. base.
The reason for the bad performance in the PLE case was unneeded vcpu
iteration in the PLE handler, resulting in many yield_to calls and
double run-queue locks.
With pvspinlock applied, the same villain role was played by excessive
halt exits.
But after the PLE handler improved, we needed to throttle unnecessary
halts in undercommit for pvspinlock to be on par with the 1x result.
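[Editor's note: the spin-then-halt trade-off behind the SPIN_THRESHOLD tuning can be sketched in a simplified, single-threaded userspace form. This is an illustration of the policy, not the patch code; `ticket_wait_sim`, `now_serving`, and `halt_exits` are made-up names:]

```c
#include <assert.h>

#define SPIN_THRESHOLD (1 << 15)   /* 32k, the V9 value (was 2k) */

int now_serving;                   /* ticket currently being served */
int halt_exits;                    /* counts simulated halt exits */

/* A waiter spins a bounded number of iterations; only when the
 * threshold is exceeded does it take the expensive halt exit (the
 * halt hypercall in the guest, where the unlocker later kicks the
 * halted vcpu).  Raising the threshold trades longer spinning for
 * fewer halt exits, which is what fixed the undercommit regression. */
void ticket_wait_sim(int my_ticket)
{
    for (int i = 0; i < SPIN_THRESHOLD; i++)
        if (now_serving == my_ticket)
            return;                /* lock acquired while spinning */
    halt_exits++;                  /* threshold exceeded: halt path */
}
```

In undercommit the lock usually becomes free well within the threshold, so almost no halt exits are taken; in overcommit the waiter halts instead of burning its timeslice.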
>
>> I agree that Jiannan's Preemptable Lock idea is promising and we could
>> evaluate that approach, and make the best one get into kernel and also
>> will carry on discussion with Jiannan to improve that patch.
> That would be great. The work is stalled from what I can tell.
Jiannan is trying to improve that, and I am also helping with testing
etc. internally.
Despite being a great idea, somehow the hardcoded TIMEOUT to delay
checking the lock availability is not working great, and we are still
seeing some softlockups. AFAIK, Linus also hated the TIMEOUT idea in
Rik's spinlock-backoff patches because it is difficult to tune on bare
metal and can have some adverse effect on virtualization too.
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-10 10:33 ` Gleb Natapov
` (2 preceding siblings ...)
(?)
@ 2013-07-10 11:24 ` Raghavendra K T
2013-07-10 11:41 ` Gleb Natapov
-1 siblings, 1 reply; 96+ messages in thread
From: Raghavendra K T @ 2013-07-10 11:24 UTC (permalink / raw)
To: Gleb Natapov
Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
stefano.stabellini, andi, attilio.rao, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri
On 07/10/2013 04:03 PM, Gleb Natapov wrote:
> On Tue, Jul 09, 2013 at 02:41:30PM +0530, Raghavendra K T wrote:
>> On 06/26/2013 11:24 PM, Raghavendra K T wrote:
>>> On 06/26/2013 09:41 PM, Gleb Natapov wrote:
>>>> On Wed, Jun 26, 2013 at 07:10:21PM +0530, Raghavendra K T wrote:
>>>>> On 06/26/2013 06:22 PM, Gleb Natapov wrote:
>>>>>> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>>>>>>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>>>>>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>>>>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>>>>>>> This series replaces the existing paravirtualized spinlock
>>>>>>>>>> mechanism
>>>>>>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>>>>>>> implementation for both Xen and KVM.
>>>>>>>>>>
>>>>>>>>>> Changes in V9:
>>>>>>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>>>>>>> causing undercommit degradation (after PLE handler
>>>>>>>>>> improvement).
>>>>>>>>>> - Added kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>>>>>>> - Optimized halt exit path to use PLE handler
>>>>>>>>>>
>>>>>>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions
>>>>>>>>>> to look
>>>>>>>>>> at PLE handler's improvements, various optimizations in PLE
>>>>>>>>>> handling
>>>>>>>>>> have been tried.
>>>>>>>>>
>>>>>>>>> Sorry for not posting this sooner. I have tested the v9
>>>>>>>>> pv-ticketlock
>>>>>>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I
>>>>>>>>> have
>>>>>>>>> tested these patches with and without PLE, as PLE is still not
>>>>>>>>> scalable
>>>>>>>>> with large VMs.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Hi Andrew,
>>>>>>>>
>>>>>>>> Thanks for testing.
>>>>>>>>
>>>>>>>>> System: x3850X5, 40 cores, 80 threads
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>>>>>>> ----------------------------------------------------------
>>>>>>>>> Configuration          Total Throughput(MB/s)  Notes
>>>>>>>>> 3.10-default-ple_on    22945   5% CPU in host kernel, 2% spin_lock in guests
>>>>>>>>> 3.10-default-ple_off   23184   5% CPU in host kernel, 2% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_on   22895   5% CPU in host kernel, 2% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_off  23051   5% CPU in host kernel, 2% spin_lock in guests
>>>>>>>>> [all 1x results look good here]
>>>>>>>>
>>>>>>>> Yes. The 1x results look too close
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>>>>>>> -----------------------------------------------------------
>>>>>>>>> Configuration          Total Throughput   Notes
>>>>>>>>> 3.10-default-ple_on     6287   55% CPU host kernel, 17% spin_lock in guests
>>>>>>>>> 3.10-default-ple_off    1849    2% CPU in host kernel, 95% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_on    6691   50% CPU in host kernel, 15% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_off  16464    8% CPU in host kernel, 33% spin_lock in guests
>>>>>>>>
>>>>>>>> I see 6.426% improvement with ple_on
>>>>>>>> and 161.87% improvement with ple_off. I think this is a very good
>>>>>>>> sign
>>>>>>>> for the patches
>>>>>>>>
>>>>>>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>>>>>>> we still off from ideal throughput (somewhere >20000)]
>>>>>>>>>
>>>>>>>>
>>>>>>>> Okay, The ideal throughput you are referring is getting around
>>>>>>>> atleast
>>>>>>>> 80% of 1x throughput for over-commit. Yes we are still far away from
>>>>>>>> there.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>>>>>>> ----------------------------------------------------------
>>>>>>>>> Configuration          Total Throughput   Notes
>>>>>>>>> 3.10-default-ple_on    22736   6% CPU in host kernel, 3% spin_lock in guests
>>>>>>>>> 3.10-default-ple_off   23377   5% CPU in host kernel, 3% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_on   22471   6% CPU in host kernel, 3% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_off  23445   5% CPU in host kernel, 3% spin_lock in guests
>>>>>>>>> [1x looking fine here]
>>>>>>>>>
>>>>>>>>
>>>>>>>> I see ple_off is little better here.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
>>>>>>>>> ----------------------------------------------------------
>>>>>>>>> Configuration          Total Throughput   Notes
>>>>>>>>> 3.10-default-ple_on    1965   70% CPU in host kernel, 34% spin_lock in guests
>>>>>>>>> 3.10-default-ple_off    226    2% CPU in host kernel, 94% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_on   1942   70% CPU in host kernel, 35% spin_lock in guests
>>>>>>>>> 3.10-pvticket-ple_off  8003   11% CPU in host kernel, 70% spin_lock in guests
>>>>>>>>> [quite bad all around, but pv-tickets with PLE off the best so far.
>>>>>>>>> Still quite a bit off from ideal throughput]
>>>>>>>>
>>>>>>>> This is again a remarkable improvement (307%).
>>>>>>>> This motivates me to add a patch to disable ple when pvspinlock is
>>>>>>>> on.
>>>>>>>> probably we can add a hypercall that disables ple in kvm init patch.
>>>>>>>> but only problem I see is what if the guests are mixed.
>>>>>>>>
>>>>>>>> (i.e one guest has pvspinlock support but other does not. Host
>>>>>>>> supports pv)
>>>>>>>
>>>>>>> How about reintroducing the idea to create per-kvm ple_gap,ple_window
>>>>>>> state. We were headed down that road when considering a dynamic
>>>>>>> window at
>>>>>>> one point. Then you can just set a single guest's ple_gap to zero,
>>>>>>> which
>>>>>>> would lead to PLE being disabled for that guest. We could also revisit
>>>>>>> the dynamic window then.
>>>>>>>
>>>>>> Can be done, but lets understand why ple on is such a big problem.
>>>>>> Is it
>>>>>> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
>>>>>>
>>>>>
>>>>> The one obvious reason I see is commit awareness inside the guest. for
>>>>> under-commit there is no necessity to do PLE, but unfortunately we do.
>>>>>
>>>>> atleast we return back immediately in case of potential undercommits,
>>>>> but we still incur vmexit delay.
>>>> But why do we? If SPIN_THRESHOLD will be short enough (or ple windows
>>>> long enough) to not generate PLE exit we will not go into PLE handler
>>>> at all, no?
>>>>
>>>
>>> Yes. you are right. dynamic ple window was an attempt to solve it.
>>>
>>> Probelm is, reducing the SPIN_THRESHOLD is resulting in excess halt
>>> exits in under-commits and increasing ple_window may be sometimes
>>> counter productive as it affects other busy-wait constructs such as
>>> flush_tlb AFAIK.
>>> So if we could have had a dynamically changing SPIN_THRESHOLD too, that
>>> would be nice.
>>>
>>
>> Gleb, Andrew,
>> I tested with the global ple window change (similar to what I posted
>> here https://lkml.org/lkml/2012/11/11/14 ),
> This does not look global. It changes PLE per vcpu.
>
>> But did not see good result. May be it is good to go with per VM
>> ple_window.
>>
>> Gleb,
>> Can you elaborate little more on what you have in mind regarding per
>> VM ple_window. (maintaining part of it as a per vm variable is clear
>> to
>> me), but is it that we have to load that every time of guest entry?
>>
> Only when it changes, shouldn't be to often no?
>
>> I 'll try that idea next.
>>
>> Ingo, Gleb,
>>
>> From the results perspective, Andrew Theurer, Vinod's test results are
>> pro-pvspinlock.
>> Could you please help me to know what will make it a mergeable
>> candidate?.
>>
> I need to spend more time reviewing it :) The problem with PV interfaces
> is that they are easy to add but hard to get rid of if better solution
> (HW or otherwise) appears.
In fact, Avi had acked the whole V8 series, but delayed it to see how
the PLE improvements would affect it.
The only additions since that series have been
1. tuning the SPIN_THRESHOLD to 32k (from 2k), and
2. the halt handler now calls vcpu_on_spin to take advantage of the PLE
improvements (this can also go into kvm as an independent patch).
The rationale for making SPIN_THRESHOLD 32k needs a longer explanation.
Before the PLE improvements, as you know,
the KVM undercommit scenario was much worse in PLE-enabled cases
(compared to PLE-disabled cases).
The pvspinlock patches behaved equally badly in undercommit. Both had a
similar root cause, so in the end there was no degradation w.r.t. base.
The reason for the bad performance in the PLE case was unneeded vcpu
iteration in the PLE handler, resulting in many yield_to calls and
double run-queue locks.
With pvspinlock applied, the same villain role was played by excessive
halt exits.
But after the PLE handler improved, we needed to throttle unnecessary
halts in undercommit for pvspinlock to be on par with the 1x result.
>
>> I agree that Jiannan's Preemptable Lock idea is promising and we could
>> evaluate that approach, and make the best one get into kernel and also
>> will carry on discussion with Jiannan to improve that patch.
> That would be great. The work is stalled from what I can tell.
Jiannan is trying to improve that, and I am also helping with testing
etc. internally.
Despite being a great idea, the hardcoded TIMEOUT used to delay checking
the lock's availability is somehow not working well, and we are still
seeing some softlockups. AFAIK, Linus also hated the TIMEOUT idea
in Rik's spinlock backoff patches because it is difficult to tune on
bare metal and can have adverse effects on virtualization too.
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-10 11:24 ` Raghavendra K T
@ 2013-07-10 11:41 ` Gleb Natapov
0 siblings, 0 replies; 96+ messages in thread
From: Gleb Natapov @ 2013-07-10 11:41 UTC (permalink / raw)
To: Raghavendra K T
Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
stefano.stabellini, andi, attilio.rao, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri
On Wed, Jul 10, 2013 at 04:54:12PM +0530, Raghavendra K T wrote:
> >>Ingo, Gleb,
> >>
> >> From the results perspective, Andrew Theurer, Vinod's test results are
> >>pro-pvspinlock.
> >>Could you please help me to know what will make it a mergeable
> >>candidate?.
> >>
> >I need to spend more time reviewing it :) The problem with PV interfaces
> >is that they are easy to add but hard to get rid of if better solution
> >(HW or otherwise) appears.
>
> Infact Avi had acked the whole V8 series, but delayed for seeing how
> PLE improvement would affect it.
>
I see that Ingo was happy with it too.
> The only addition from that series has been
> 1. tuning the SPIN_THRESHOLD to 32k (from 2k)
> and
> 2. the halt handler now calls vcpu_on_spin to take the advantage of
> PLE improvements. (this can also go as an independent patch into
> kvm)
>
> The rationale for making SPIN_THERSHOLD 32k needs big explanation.
> Before PLE improvements, as you know,
> kvm undercommit scenario was very worse in ple enabled cases.
> (compared to ple disabled cases).
> pvspinlock patches behaved equally bad in undercommit. Both had
> similar reason so at the end there was no degradation w.r.t base.
>
> The reason for bad performance in PLE case was unneeded vcpu
> iteration in ple handler resulting in high yield_to calls and double
> run queue locks.
> With pvspinlock applied, same villain role was played by excessive
> halt exits.
>
> But after ple handler improved, we needed to throttle unnecessary halts
> in undercommit for pvspinlock to be on par with 1x result.
>
Make sense. I will review it ASAP. BTW the latest version is V10 right?
--
Gleb.
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-10 11:41 ` Gleb Natapov
@ 2013-07-10 11:50 ` Raghavendra K T
-1 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-07-10 11:50 UTC (permalink / raw)
To: Gleb Natapov
Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
stefano.stabellini, andi, attilio.rao, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel, riel,
virtualization, srivatsa.vaddagiri
On 07/10/2013 05:11 PM, Gleb Natapov wrote:
> On Wed, Jul 10, 2013 at 04:54:12PM +0530, Raghavendra K T wrote:
>>>> Ingo, Gleb,
>>>>
>>>> From the results perspective, Andrew Theurer, Vinod's test results are
>>>> pro-pvspinlock.
>>>> Could you please help me to know what will make it a mergeable
>>>> candidate?.
>>>>
>>> I need to spend more time reviewing it :) The problem with PV interfaces
>>> is that they are easy to add but hard to get rid of if better solution
>>> (HW or otherwise) appears.
>>
>> Infact Avi had acked the whole V8 series, but delayed for seeing how
>> PLE improvement would affect it.
>>
> I see that Ingo was happy with it too.
>
>> The only addition from that series has been
>> 1. tuning the SPIN_THRESHOLD to 32k (from 2k)
>> and
>> 2. the halt handler now calls vcpu_on_spin to take the advantage of
>> PLE improvements. (this can also go as an independent patch into
>> kvm)
>>
>> The rationale for making SPIN_THERSHOLD 32k needs big explanation.
>> Before PLE improvements, as you know,
>> kvm undercommit scenario was very worse in ple enabled cases.
>> (compared to ple disabled cases).
>> pvspinlock patches behaved equally bad in undercommit. Both had
>> similar reason so at the end there was no degradation w.r.t base.
>>
>> The reason for bad performance in PLE case was unneeded vcpu
>> iteration in ple handler resulting in high yield_to calls and double
>> run queue locks.
>> With pvspinlock applied, same villain role was played by excessive
>> halt exits.
>>
>> But after ple handler improved, we needed to throttle unnecessary halts
>> in undercommit for pvspinlock to be on par with 1x result.
>>
> Make sense. I will review it ASAP. BTW the latest version is V10 right?
>
Yes. Thank you.
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-10 10:33 ` Gleb Natapov
` (3 preceding siblings ...)
(?)
@ 2013-07-11 9:13 ` Raghavendra K T
-1 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-07-11 9:13 UTC (permalink / raw)
To: Gleb Natapov
Cc: jeremy, gregkh, kvm, linux-doc, peterz, riel, virtualization,
andi, hpa, stefano.stabellini, xen-devel, x86, mingo, habanero,
Andrew Jones, konrad.wilk, ouyang, avi.kivity, tglx, chegu_vinod,
linux-kernel, srivatsa.vaddagiri, attilio.rao, pbonzini,
torvalds
On 07/10/2013 04:03 PM, Gleb Natapov wrote:
[...] trimmed
>>> Yes. you are right. dynamic ple window was an attempt to solve it.
>>>
>>> Probelm is, reducing the SPIN_THRESHOLD is resulting in excess halt
>>> exits in under-commits and increasing ple_window may be sometimes
>>> counter productive as it affects other busy-wait constructs such as
>>> flush_tlb AFAIK.
>>> So if we could have had a dynamically changing SPIN_THRESHOLD too, that
>>> would be nice.
>>>
>>
>> Gleb, Andrew,
>> I tested with the global ple window change (similar to what I posted
>> here https://lkml.org/lkml/2012/11/11/14 ),
> This does not look global. It changes PLE per vcpu.
Okay, got it. I was thinking it would change the global value, but IIRC
it changes the global sysfs value and the per-vcpu ple_window.
Sorry, I missed this part yesterday.
>
>> But did not see good result. May be it is good to go with per VM
>> ple_window.
>>
>> Gleb,
>> Can you elaborate little more on what you have in mind regarding per
>> VM ple_window. (maintaining part of it as a per vm variable is clear
>> to
>> me), but is it that we have to load that every time of guest entry?
>>
> Only when it changes, shouldn't be to often no?
OK, thinking about how to do it: read the register and write it back
during guest entry if a change is needed?
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-11 9:13 ` Raghavendra K T
@ 2013-07-11 9:48 ` Gleb Natapov
0 siblings, 0 replies; 96+ messages in thread
From: Gleb Natapov @ 2013-07-11 9:48 UTC (permalink / raw)
To: Raghavendra K T
Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
stefano.stabellini, andi, attilio.rao, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel, riel,
virtualization, srivatsa.vaddagiri
On Thu, Jul 11, 2013 at 02:43:03PM +0530, Raghavendra K T wrote:
> On 07/10/2013 04:03 PM, Gleb Natapov wrote:
> [...] trimmed
>
> >>>Yes. you are right. dynamic ple window was an attempt to solve it.
> >>>
> >>>Probelm is, reducing the SPIN_THRESHOLD is resulting in excess halt
> >>>exits in under-commits and increasing ple_window may be sometimes
> >>>counter productive as it affects other busy-wait constructs such as
> >>>flush_tlb AFAIK.
> >>>So if we could have had a dynamically changing SPIN_THRESHOLD too, that
> >>>would be nice.
> >>>
> >>
> >>Gleb, Andrew,
> >>I tested with the global ple window change (similar to what I posted
> >>here https://lkml.org/lkml/2012/11/11/14 ),
> >This does not look global. It changes PLE per vcpu.
>
> Okay. Got it. I was thinking it would change the global value. But IIRC
> It is changing global sysfs value and per vcpu ple_window.
> Sorry. I missed this part yesterday.
>
Yes, it changes the sysfs value, but this does not affect
already-created vcpus.
> >
> >>But did not see good result. May be it is good to go with per VM
> >>ple_window.
> >>
> >>Gleb,
> >>Can you elaborate little more on what you have in mind regarding per
> >>VM ple_window. (maintaining part of it as a per vm variable is clear
> >>to
> >> me), but is it that we have to load that every time of guest entry?
> >>
> >Only when it changes, shouldn't be to often no?
>
> Ok. Thinking how to do. read the register and writeback if there need
> to be a change during guest entry?
>
Why not do it like in the patch you linked? When the value changes,
write it to the VMCS of the current vcpu.
--
Gleb.
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-11 9:48 ` Gleb Natapov
(?)
@ 2013-07-11 10:10 ` Raghavendra K T
2013-07-11 10:11 ` Gleb Natapov
-1 siblings, 1 reply; 96+ messages in thread
From: Raghavendra K T @ 2013-07-11 10:10 UTC (permalink / raw)
To: Gleb Natapov
Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
stefano.stabellini, andi, attilio.rao, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel, riel,
virtualization, srivatsa.vaddagiri
On 07/11/2013 03:18 PM, Gleb Natapov wrote:
> On Thu, Jul 11, 2013 at 02:43:03PM +0530, Raghavendra K T wrote:
>> On 07/10/2013 04:03 PM, Gleb Natapov wrote:
>> [...] trimmed
>>
>>>>> Yes. you are right. dynamic ple window was an attempt to solve it.
>>>>>
>>>>> Probelm is, reducing the SPIN_THRESHOLD is resulting in excess halt
>>>>> exits in under-commits and increasing ple_window may be sometimes
>>>>> counter productive as it affects other busy-wait constructs such as
>>>>> flush_tlb AFAIK.
>>>>> So if we could have had a dynamically changing SPIN_THRESHOLD too, that
>>>>> would be nice.
>>>>>
>>>>
>>>> Gleb, Andrew,
>>>> I tested with the global ple window change (similar to what I posted
>>>> here https://lkml.org/lkml/2012/11/11/14 ),
>>> This does not look global. It changes PLE per vcpu.
>>
>> Okay. Got it. I was thinking it would change the global value. But IIRC
>> It is changing global sysfs value and per vcpu ple_window.
>> Sorry. I missed this part yesterday.
>>
> Yes, it changes sysfs value but this does not affect already created
> vcpus.
>
>>>
>>>> But did not see good result. May be it is good to go with per VM
>>>> ple_window.
>>>>
>>>> Gleb,
>>>> Can you elaborate little more on what you have in mind regarding per
>>>> VM ple_window. (maintaining part of it as a per vm variable is clear
>>>> to
>>>> me), but is it that we have to load that every time of guest entry?
>>>>
>>> Only when it changes, shouldn't be to often no?
>>
>> Ok. Thinking how to do. read the register and writeback if there need
>> to be a change during guest entry?
>>
> Why not do it like in the patch you've linked? When value changes write it
> to VMCS of the current vcpu.
>
Yes, that can be done. So the running vcpu's ple_window gets updated
only after the next PLE exit, right?
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-11 10:10 ` Raghavendra K T
@ 2013-07-11 10:11 ` Gleb Natapov
0 siblings, 0 replies; 96+ messages in thread
From: Gleb Natapov @ 2013-07-11 10:11 UTC (permalink / raw)
To: Raghavendra K T
Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
stefano.stabellini, andi, attilio.rao, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel, riel,
virtualization, srivatsa.vaddagiri
On Thu, Jul 11, 2013 at 03:40:38PM +0530, Raghavendra K T wrote:
> >>>>Gleb,
> >>>>Can you elaborate little more on what you have in mind regarding per
> >>>>VM ple_window. (maintaining part of it as a per vm variable is clear
> >>>>to
> >>>> me), but is it that we have to load that every time of guest entry?
> >>>>
> >>>Only when it changes, shouldn't be to often no?
> >>
> >>Ok. Thinking how to do. read the register and writeback if there need
> >>to be a change during guest entry?
> >>
> >Why not do it like in the patch you've linked? When value changes write it
> >to VMCS of the current vcpu.
> >
>
> Yes. can be done. So the running vcpu's ple_window gets updated only
> after next pl-exit. right?
I am not sure what you mean. You cannot change a vcpu's ple_window
while the vcpu is in guest mode.
--
Gleb.
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-11 10:11 ` Gleb Natapov
@ 2013-07-11 10:53 ` Raghavendra K T
-1 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-07-11 10:53 UTC (permalink / raw)
To: Gleb Natapov
Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
stefano.stabellini, andi, attilio.rao, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel, riel,
virtualization, srivatsa.vaddagiri
On 07/11/2013 03:41 PM, Gleb Natapov wrote:
> On Thu, Jul 11, 2013 at 03:40:38PM +0530, Raghavendra K T wrote:
>>>>>> Gleb,
>>>>>> Can you elaborate little more on what you have in mind regarding per
>>>>>> VM ple_window. (maintaining part of it as a per vm variable is clear
>>>>>> to
>>>>>> me), but is it that we have to load that every time of guest entry?
>>>>>>
>>>>> Only when it changes, shouldn't be to often no?
>>>>
>>>> Ok. Thinking how to do. read the register and writeback if there need
>>>> to be a change during guest entry?
>>>>
>>> Why not do it like in the patch you've linked? When value changes write it
>>> to VMCS of the current vcpu.
>>>
>>
>> Yes. can be done. So the running vcpu's ple_window gets updated only
>> after next pl-exit. right?
> I am not sure what you mean. You cannot change vcpu's ple_window while
> vcpu is in a guest mode.
>
I agree with that; both of us are on the same page.
What I meant is: suppose the per-VM ple_window changes while a vcpu x of
that VM is running; that vcpu will get its ple_window updated during its
next run.
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-11 10:53 ` Raghavendra K T
@ 2013-07-11 10:56 ` Gleb Natapov
-1 siblings, 0 replies; 96+ messages in thread
From: Gleb Natapov @ 2013-07-11 10:56 UTC (permalink / raw)
To: Raghavendra K T
Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
stefano.stabellini, andi, attilio.rao, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel, riel,
virtualization, srivatsa.vaddagiri
On Thu, Jul 11, 2013 at 04:23:58PM +0530, Raghavendra K T wrote:
> On 07/11/2013 03:41 PM, Gleb Natapov wrote:
> >On Thu, Jul 11, 2013 at 03:40:38PM +0530, Raghavendra K T wrote:
> >>>>>>Gleb,
> >>>>>>Can you elaborate little more on what you have in mind regarding per
> >>>>>>VM ple_window. (maintaining part of it as a per vm variable is clear
> >>>>>>to
> >>>>>> me), but is it that we have to load that every time of guest entry?
> >>>>>>
> >>>>>Only when it changes, shouldn't be to often no?
> >>>>
> >>>>Ok. Thinking how to do. read the register and writeback if there need
> >>>>to be a change during guest entry?
> >>>>
> >>>Why not do it like in the patch you've linked? When value changes write it
> >>>to VMCS of the current vcpu.
> >>>
> >>
> >>Yes. can be done. So the running vcpu's ple_window gets updated only
> >>after next pl-exit. right?
> >I am not sure what you mean. You cannot change vcpu's ple_window while
> >vcpu is in a guest mode.
> >
>
> I agree with that. Both of us are on the same page.
> What I meant is,
> suppose the per VM ple_window changes when a vcpu x of that VM was running,
> it will get its ple_window updated during next run.
Ah, I think "per VM" is what confuses me. Why do you want a "per VM"
ple_window rather than a "per vcpu" one? With a per-vcpu one, the
ple_window cannot change while the vcpu is running.
--
Gleb.
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-11 10:56 ` Gleb Natapov
@ 2013-07-11 11:14 ` Raghavendra K T
-1 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-07-11 11:14 UTC (permalink / raw)
To: Gleb Natapov
Cc: Andrew Jones, mingo, ouyang, habanero, jeremy, x86, konrad.wilk,
hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
stefano.stabellini, andi, attilio.rao, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel, riel,
virtualization, srivatsa.vaddagiri
On 07/11/2013 04:26 PM, Gleb Natapov wrote:
> On Thu, Jul 11, 2013 at 04:23:58PM +0530, Raghavendra K T wrote:
>> On 07/11/2013 03:41 PM, Gleb Natapov wrote:
>>> On Thu, Jul 11, 2013 at 03:40:38PM +0530, Raghavendra K T wrote:
>>>>>>>> Gleb,
>>>>>>>> Can you elaborate little more on what you have in mind regarding per
>>>>>>>> VM ple_window. (maintaining part of it as a per vm variable is clear
>>>>>>>> to
>>>>>>>> me), but is it that we have to load that every time of guest entry?
>>>>>>>>
>>>>>>> Only when it changes, shouldn't be to often no?
>>>>>>
>>>>>> Ok. Thinking how to do. read the register and writeback if there need
>>>>>> to be a change during guest entry?
>>>>>>
>>>>> Why not do it like in the patch you've linked? When value changes write it
>>>>> to VMCS of the current vcpu.
>>>>>
>>>>
>>>> Yes. can be done. So the running vcpu's ple_window gets updated only
>>>> after next pl-exit. right?
>>> I am not sure what you mean. You cannot change vcpu's ple_window while
>>> vcpu is in a guest mode.
>>>
>>
>> I agree with that. Both of us are on the same page.
>> What I meant is,
>> suppose the per VM ple_window changes when a vcpu x of that VM was running,
>> it will get its ple_window updated during next run.
> Ah, I think "per VM" is what confuses me. Why do you want to have "per
> VM" ple_windows and not "per vcpu" one? With per vcpu one ple_windows
> cannot change while vcpu is running.
>
Okay, got that. My initial feeling was that a vcpu does not "feel" the
global load, but that should be no problem. Instead, we will not need
atomic operations to update ple_window, which is better.
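The scheme agreed on above — a per-vcpu copy of the PLE window that is only reloaded at the next VM entry, so the running vcpu is never touched and no atomics are needed — can be sketched as a toy model. The structures and function names below are illustrative stand-ins, not the real KVM/VMX code:

```c
#include <assert.h>

/* Toy model: the VM-wide ple_window may change at any time, but a vcpu
 * only picks the new value up at its next VM entry (i.e. after its next
 * exit). Nothing writes to a vcpu while it is "in guest mode". */

struct toy_vm {
	unsigned int ple_window;	/* VM-wide (sysfs-derived) value */
};

struct toy_vcpu {
	struct toy_vm *vm;
	unsigned int vmcs_ple_window;	/* the copy the hardware would see */
};

/* On VM entry, rewrite the VMCS field only if the value has changed,
 * mirroring "when value changes write it to VMCS of the current vcpu". */
void toy_vcpu_enter(struct toy_vcpu *v)
{
	if (v->vmcs_ple_window != v->vm->ple_window)
		v->vmcs_ple_window = v->vm->ple_window;	/* a vmcs_write32() */
}
```

While the vcpu is in guest mode, changing `vm.ple_window` has no effect on it; only the next `toy_vcpu_enter()` — that is, after the next exit — propagates the value, matching "it will get its ple_window updated during next run".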
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-07-11 9:48 ` Gleb Natapov
@ 2013-07-11 10:10 ` Raghavendra K T
-1 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-07-11 10:10 UTC (permalink / raw)
To: Gleb Natapov
Cc: jeremy, gregkh, kvm, linux-doc, peterz, riel, virtualization,
andi, hpa, stefano.stabellini, xen-devel, x86, mingo, habanero,
Andrew Jones, konrad.wilk, ouyang, avi.kivity, tglx, chegu_vinod,
linux-kernel, srivatsa.vaddagiri, attilio.rao, pbonzini,
torvalds
On 07/11/2013 03:18 PM, Gleb Natapov wrote:
> On Thu, Jul 11, 2013 at 02:43:03PM +0530, Raghavendra K T wrote:
>> On 07/10/2013 04:03 PM, Gleb Natapov wrote:
>> [...] trimmed
>>
>>>>> Yes. you are right. dynamic ple window was an attempt to solve it.
>>>>>
>>>>> Probelm is, reducing the SPIN_THRESHOLD is resulting in excess halt
>>>>> exits in under-commits and increasing ple_window may be sometimes
>>>>> counter productive as it affects other busy-wait constructs such as
>>>>> flush_tlb AFAIK.
>>>>> So if we could have had a dynamically changing SPIN_THRESHOLD too, that
>>>>> would be nice.
>>>>>
>>>>
>>>> Gleb, Andrew,
>>>> I tested with the global ple window change (similar to what I posted
>>>> here https://lkml.org/lkml/2012/11/11/14 ),
>>> This does not look global. It changes PLE per vcpu.
>>
>> Okay. Got it. I was thinking it would change the global value. But IIRC
>> It is changing global sysfs value and per vcpu ple_window.
>> Sorry. I missed this part yesterday.
>>
> Yes, it changes sysfs value but this does not affect already created
> vcpus.
>
>>>
>>>> But did not see good result. May be it is good to go with per VM
>>>> ple_window.
>>>>
>>>> Gleb,
>>>> Can you elaborate little more on what you have in mind regarding per
>>>> VM ple_window. (maintaining part of it as a per vm variable is clear
>>>> to
>>>> me), but is it that we have to load that every time of guest entry?
>>>>
>>> Only when it changes, shouldn't be to often no?
>>
>> Ok. Thinking how to do. read the register and writeback if there need
>> to be a change during guest entry?
>>
> Why not do it like in the patch you've linked? When value changes write it
> to VMCS of the current vcpu.
>
Yes, it can be done. So the running vcpu's ple_window gets updated only
after the next PLE exit, right?
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-26 16:11 ` Gleb Natapov
@ 2013-06-26 17:54 ` Raghavendra K T
-1 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-06-26 17:54 UTC (permalink / raw)
To: Gleb Natapov
Cc: jeremy, gregkh, kvm, linux-doc, peterz, riel, virtualization,
andi, hpa, stefano.stabellini, xen-devel, x86, mingo, habanero,
Andrew Jones, konrad.wilk, ouyang, avi.kivity, tglx, chegu_vinod,
linux-kernel, srivatsa.vaddagiri, attilio.rao, pbonzini,
torvalds, stephan.diestelhorst
On 06/26/2013 09:41 PM, Gleb Natapov wrote:
> On Wed, Jun 26, 2013 at 07:10:21PM +0530, Raghavendra K T wrote:
>> On 06/26/2013 06:22 PM, Gleb Natapov wrote:
>>> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>>>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>>>> This series replaces the existing paravirtualized spinlock mechanism
>>>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>>>> implementation for both Xen and KVM.
>>>>>>>
>>>>>>> Changes in V9:
>>>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>>>> causing undercommit degradation (after PLE handler improvement).
>>>>>>> - Added kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>>>> - Optimized halt exit path to use PLE handler
>>>>>>>
>>>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions to look
>>>>>>> at PLE handler's improvements, various optimizations in PLE handling
>>>>>>> have been tried.
>>>>>>
>>>>>> Sorry for not posting this sooner. I have tested the v9 pv-ticketlock
>>>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I have
>>>>>> tested these patches with and without PLE, as PLE is still not scalable
>>>>>> with large VMs.
>>>>>>
>>>>>
>>>>> Hi Andrew,
>>>>>
>>>>> Thanks for testing.
>>>>>
>>>>>> System: x3850X5, 40 cores, 80 threads
>>>>>>
>>>>>>
>>>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>>>> ----------------------------------------------------------
>>>>>> Total
>>>>>> Configuration Throughput(MB/s) Notes
>>>>>>
>>>>>> 3.10-default-ple_on 22945 5% CPU in host kernel, 2% spin_lock in guests
>>>>>> 3.10-default-ple_off 23184 5% CPU in host kernel, 2% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on 22895 5% CPU in host kernel, 2% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off 23051 5% CPU in host kernel, 2% spin_lock in guests
>>>>>> [all 1x results look good here]
>>>>>
>>>>> Yes. The 1x results look too close
>>>>>
>>>>>>
>>>>>>
>>>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>>>> -----------------------------------------------------------
>>>>>> Total
>>>>>> Configuration Throughput Notes
>>>>>>
>>>>>> 3.10-default-ple_on 6287 55% CPU host kernel, 17% spin_lock in guests
>>>>>> 3.10-default-ple_off 1849 2% CPU in host kernel, 95% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on 6691 50% CPU in host kernel, 15% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off 16464 8% CPU in host kernel, 33% spin_lock in guests
>>>>>
>>>>> I see 6.426% improvement with ple_on
>>>>> and 161.87% improvement with ple_off. I think this is a very good sign
>>>>> for the patches
>>>>>
>>>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>>>> we still off from ideal throughput (somewhere >20000)]
>>>>>>
>>>>>
>>>>> Okay, The ideal throughput you are referring is getting around atleast
>>>>> 80% of 1x throughput for over-commit. Yes we are still far away from
>>>>> there.
>>>>>
>>>>>>
>>>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>>>> ----------------------------------------------------------
>>>>>> Total
>>>>>> Configuration Throughput Notes
>>>>>>
>>>>>> 3.10-default-ple_on 22736 6% CPU in host kernel, 3% spin_lock in guests
>>>>>> 3.10-default-ple_off 23377 5% CPU in host kernel, 3% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on 22471 6% CPU in host kernel, 3% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off 23445 5% CPU in host kernel, 3% spin_lock in guests
>>>>>> [1x looking fine here]
>>>>>>
>>>>>
>>>>> I see ple_off is little better here.
>>>>>
>>>>>>
>>>>>> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
>>>>>> ----------------------------------------------------------
>>>>>> Total
>>>>>> Configuration Throughput Notes
>>>>>>
>>>>>> 3.10-default-ple_on 1965 70% CPU in host kernel, 34% spin_lock in guests
>>>>>> 3.10-default-ple_off 226 2% CPU in host kernel, 94% spin_lock in guests
>>>>>> 3.10-pvticket-ple_on 1942 70% CPU in host kernel, 35% spin_lock in guests
>>>>>> 3.10-pvticket-ple_off 8003 11% CPU in host kernel, 70% spin_lock in guests
>>>>>> [quite bad all around, but pv-tickets with PLE off the best so far.
>>>>>> Still quite a bit off from ideal throughput]
>>>>>
>>>>> This is again a remarkable improvement (307%).
>>>>> This motivates me to add a patch to disable ple when pvspinlock is on.
>>>>> probably we can add a hypercall that disables ple in kvm init patch.
>>>>> but only problem I see is what if the guests are mixed.
>>>>>
>>>>> (i.e one guest has pvspinlock support but other does not. Host
>>>>> supports pv)
>>>>
>>>> How about reintroducing the idea to create per-kvm ple_gap,ple_window
>>>> state. We were headed down that road when considering a dynamic window at
>>>> one point. Then you can just set a single guest's ple_gap to zero, which
>>>> would lead to PLE being disabled for that guest. We could also revisit
>>>> the dynamic window then.
>>>>
>>> Can be done, but lets understand why ple on is such a big problem. Is it
>>> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
>>>
>>
>> The one obvious reason I see is commit awareness inside the guest. for
>> under-commit there is no necessity to do PLE, but unfortunately we do.
>>
>> atleast we return back immediately in case of potential undercommits,
>> but we still incur vmexit delay.
> But why do we? If SPIN_THRESHOLD will be short enough (or ple windows
> long enough) to not generate PLE exit we will not go into PLE handler
> at all, no?
>
Yes, you are right; the dynamic PLE window was an attempt to solve it.
The problem is that reducing the SPIN_THRESHOLD results in excess halt
exits in under-commit, while increasing ple_window may sometimes be
counterproductive, as it affects other busy-wait constructs such as
flush_tlb AFAIK.
So if we could also have a dynamically changing SPIN_THRESHOLD, that
would be nice.
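The dynamic tuning wished for here can be sketched as a grow/shrink policy applied from the PLE handler. This is only a plausible illustration of the idea under discussion — the trigger condition (whether a directed yield succeeded) and all constants (min/max bounds, factor of 2) are assumptions, not values from any posted or merged patch:

```c
#define TOY_PLE_WINDOW_MIN   4096u	/* illustrative bounds */
#define TOY_PLE_WINDOW_MAX  65536u

unsigned int toy_grow_ple_window(unsigned int w)
{
	unsigned int nw = w * 2;	/* hypothetical growth factor */
	return nw > TOY_PLE_WINDOW_MAX ? TOY_PLE_WINDOW_MAX : nw;
}

unsigned int toy_shrink_ple_window(unsigned int w)
{
	unsigned int nw = w / 2;
	return nw < TOY_PLE_WINDOW_MIN ? TOY_PLE_WINDOW_MIN : nw;
}

/* Called from a PLE handler: if the exit found a preempted vcpu to
 * yield to (overcommit), keep exits cheap to trigger by shrinking the
 * window; if it found nobody (undercommit), the exit was wasted spin
 * time, so widen the window to avoid exiting at all. */
unsigned int toy_update_ple_window(unsigned int w, int yield_succeeded)
{
	return yield_succeeded ? toy_shrink_ple_window(w)
			       : toy_grow_ple_window(w);
}
```

The same shape could in principle drive a guest-side dynamic SPIN_THRESHOLD, which is the missing half lamented above.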
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-26 12:52 ` Gleb Natapov
@ 2013-06-26 14:13 ` Konrad Rzeszutek Wilk
-1 siblings, 0 replies; 96+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-06-26 14:13 UTC (permalink / raw)
To: Gleb Natapov
Cc: Andrew Jones, Raghavendra K T, habanero, mingo, jeremy, x86, hpa,
pbonzini, linux-doc, xen-devel, peterz, mtosatti,
stefano.stabellini, andi, attilio.rao, ouyang, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri
On Wed, Jun 26, 2013 at 03:52:40PM +0300, Gleb Natapov wrote:
> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
> > On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
> > > On 06/25/2013 08:20 PM, Andrew Theurer wrote:
> > > >On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
> > > >>This series replaces the existing paravirtualized spinlock mechanism
> > > >>with a paravirtualized ticketlock mechanism. The series provides
> > > >>implementation for both Xen and KVM.
> > > >>
> > > >>Changes in V9:
> > > >>- Changed spin_threshold to 32k to avoid excess halt exits that are
> > > >> causing undercommit degradation (after PLE handler improvement).
> > > >>- Added kvm_irq_delivery_to_apic (suggested by Gleb)
> > > >>- Optimized halt exit path to use PLE handler
> > > >>
> > > >>V8 of PVspinlock was posted last year. After Avi's suggestions to look
> > > >>at PLE handler's improvements, various optimizations in PLE handling
> > > >>have been tried.
> > > >
> > > >Sorry for not posting this sooner. I have tested the v9 pv-ticketlock
> > > >patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I have
> > > >tested these patches with and without PLE, as PLE is still not scalable
> > > >with large VMs.
> > > >
> > >
> > > Hi Andrew,
> > >
> > > Thanks for testing.
> > >
> > > >System: x3850X5, 40 cores, 80 threads
> > > >
> > > >
> > > >1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
> > > >----------------------------------------------------------
> > > > Total
> > > >Configuration Throughput(MB/s) Notes
> > > >
> > > >3.10-default-ple_on 22945 5% CPU in host kernel, 2% spin_lock in guests
> > > >3.10-default-ple_off 23184 5% CPU in host kernel, 2% spin_lock in guests
> > > >3.10-pvticket-ple_on 22895 5% CPU in host kernel, 2% spin_lock in guests
> > > >3.10-pvticket-ple_off 23051 5% CPU in host kernel, 2% spin_lock in guests
> > > >[all 1x results look good here]
> > >
> > > Yes. The 1x results look too close
> > >
> > > >
> > > >
> > > >2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
> > > >-----------------------------------------------------------
> > > > Total
> > > >Configuration Throughput Notes
> > > >
> > > >3.10-default-ple_on 6287 55% CPU host kernel, 17% spin_lock in guests
> > > >3.10-default-ple_off 1849 2% CPU in host kernel, 95% spin_lock in guests
> > > >3.10-pvticket-ple_on 6691 50% CPU in host kernel, 15% spin_lock in guests
> > > >3.10-pvticket-ple_off 16464 8% CPU in host kernel, 33% spin_lock in guests
> > >
> > > I see 6.426% improvement with ple_on
> > > and 161.87% improvement with ple_off. I think this is a very good sign
> > > for the patches
> > >
> > > >[PLE hinders pv-ticket improvements, but even with PLE off,
> > > > we still off from ideal throughput (somewhere >20000)]
> > > >
> > >
> > > Okay, The ideal throughput you are referring is getting around atleast
> > > 80% of 1x throughput for over-commit. Yes we are still far away from
> > > there.
> > >
> > > >
> > > >1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
> > > >----------------------------------------------------------
> > > > Total
> > > >Configuration Throughput Notes
> > > >
> > > >3.10-default-ple_on 22736 6% CPU in host kernel, 3% spin_lock in guests
> > > >3.10-default-ple_off 23377 5% CPU in host kernel, 3% spin_lock in guests
> > > >3.10-pvticket-ple_on 22471 6% CPU in host kernel, 3% spin_lock in guests
> > > >3.10-pvticket-ple_off 23445 5% CPU in host kernel, 3% spin_lock in guests
> > > >[1x looking fine here]
> > > >
> > >
> > > I see ple_off is little better here.
> > >
> > > >
> > > >2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
> > > >----------------------------------------------------------
> > > > Total
> > > >Configuration Throughput Notes
> > > >
> > > >3.10-default-ple_on 1965 70% CPU in host kernel, 34% spin_lock in guests
> > > >3.10-default-ple_off 226 2% CPU in host kernel, 94% spin_lock in guests
> > > >3.10-pvticket-ple_on 1942 70% CPU in host kernel, 35% spin_lock in guests
> > > >3.10-pvticket-ple_off 8003 11% CPU in host kernel, 70% spin_lock in guests
> > > >[quite bad all around, but pv-tickets with PLE off the best so far.
> > > > Still quite a bit off from ideal throughput]
> > >
> > > This is again a remarkable improvement (307%).
> > > This motivates me to add a patch to disable ple when pvspinlock is on.
> > > probably we can add a hypercall that disables ple in kvm init patch.
> > > but only problem I see is what if the guests are mixed.
> > >
> > > (i.e one guest has pvspinlock support but other does not. Host
> > > supports pv)
> >
> > How about reintroducing the idea to create per-kvm ple_gap,ple_window
> > state. We were headed down that road when considering a dynamic window at
> > one point. Then you can just set a single guest's ple_gap to zero, which
> > would lead to PLE being disabled for that guest. We could also revisit
> > the dynamic window then.
> >
> Can be done, but lets understand why ple on is such a big problem. Is it
> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
It could be, but it could also be a microcode issue. The earlier versions
of Intel (and AMD) CPUs did not have the best detection mechanism and had
a "jitter" to them. The ple gap and ple window values seem to have been
chosen based on microbenchmarks - and while they might work great with
Windows-type guests - the same cannot be said about Linux.
In which case, if you fiddle with the ple gap/window you might incur
worse performance with Windows guests :-( Or with older Linux guests
that use the byte-locking mechanism.
Perhaps the best option is to introduce - as a separate patchset -
said dynamic window, which will be off when pvticket lock is off - and
then, based on further CPU improvements, turn it on/off?
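For context, the guest-side behaviour that makes ple_gap/ple_window tuning interact with pvticket locks is the spin-then-block loop: a waiter polls the ticket head for SPIN_THRESHOLD iterations (32k in V9) and then halts until the lock holder kicks it. The model below only counts how often that threshold is crossed; `toy_halts_needed` and its parameter are hypothetical names, and the real series blocks via HLT plus a kick hypercall rather than counting:

```c
#define TOY_SPIN_THRESHOLD 32768ul	/* the 32k loop count chosen in V9 */

/* Count how many times a lock waiter would block, given how many polls
 * of the ticket head it takes until the lock becomes free. In a real
 * guest the "block" is a HLT that the previous holder ends with a kick,
 * so short waits never leave the guest and long waits stop burning CPU. */
unsigned int toy_halts_needed(unsigned long polls_until_free)
{
	unsigned int halts = 0;
	unsigned long polled = 0;

	while (polled < polls_until_free) {
		unsigned long budget = TOY_SPIN_THRESHOLD;

		while (budget-- && polled < polls_until_free)
			polled++;		/* one cpu_relax()-style poll */
		if (polled < polls_until_free)
			halts++;		/* threshold crossed: halt until kicked */
	}
	return halts;
}
```

With the threshold this large, uncontended and lightly contended waits finish inside the spin budget (zero halt exits), which is exactly the undercommit behaviour the thread worries about PLE disturbing.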
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
@ 2013-06-26 14:13 ` Konrad Rzeszutek Wilk
0 siblings, 0 replies; 96+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-06-26 14:13 UTC (permalink / raw)
To: Gleb Natapov
Cc: jeremy, x86, kvm, linux-doc, peterz, riel, virtualization, andi,
hpa, xen-devel, Raghavendra K T, mingo, habanero, Andrew Jones,
stefano.stabellini, ouyang, avi.kivity, tglx, chegu_vinod,
gregkh, linux-kernel, srivatsa.vaddagiri, attilio.rao, pbonzini,
torvalds, stephan.diestelhorst
On Wed, Jun 26, 2013 at 03:52:40PM +0300, Gleb Natapov wrote:
> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
> > On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
> > > On 06/25/2013 08:20 PM, Andrew Theurer wrote:
> > > >On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
> > > >>This series replaces the existing paravirtualized spinlock mechanism
> > > >>with a paravirtualized ticketlock mechanism. The series provides
> > > >>implementation for both Xen and KVM.
> > > >>
> > > >>Changes in V9:
> > > >>- Changed spin_threshold to 32k to avoid excess halt exits that are
> > > >> causing undercommit degradation (after PLE handler improvement).
> > > >>- Added kvm_irq_delivery_to_apic (suggested by Gleb)
> > > >>- Optimized halt exit path to use PLE handler
> > > >>
> > > >>V8 of PVspinlock was posted last year. After Avi's suggestions to look
> > > >>at PLE handler's improvements, various optimizations in PLE handling
> > > >>have been tried.
> > > >
> > > >Sorry for not posting this sooner. I have tested the v9 pv-ticketlock
> > > >patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I have
> > > >tested these patches with and without PLE, as PLE is still not scalable
> > > >with large VMs.
> > > >
> > >
> > > Hi Andrew,
> > >
> > > Thanks for testing.
> > >
> > > >System: x3850X5, 40 cores, 80 threads
> > > >
> > > >
> > > >1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
> > > >----------------------------------------------------------
> > > > Total
> > > >Configuration Throughput(MB/s) Notes
> > > >
> > > >3.10-default-ple_on 22945 5% CPU in host kernel, 2% spin_lock in guests
> > > >3.10-default-ple_off 23184 5% CPU in host kernel, 2% spin_lock in guests
> > > >3.10-pvticket-ple_on 22895 5% CPU in host kernel, 2% spin_lock in guests
> > > >3.10-pvticket-ple_off 23051 5% CPU in host kernel, 2% spin_lock in guests
> > > >[all 1x results look good here]
> > >
> > > Yes. The 1x results look too close
> > >
> > > >
> > > >
> > > >2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
> > > >-----------------------------------------------------------
> > > > Total
> > > >Configuration Throughput Notes
> > > >
> > > >3.10-default-ple_on 6287 55% CPU host kernel, 17% spin_lock in guests
> > > >3.10-default-ple_off 1849 2% CPU in host kernel, 95% spin_lock in guests
> > > >3.10-pvticket-ple_on 6691 50% CPU in host kernel, 15% spin_lock in guests
> > > >3.10-pvticket-ple_off 16464 8% CPU in host kernel, 33% spin_lock in guests
> > >
> > > I see 6.426% improvement with ple_on
> > > and 161.87% improvement with ple_off. I think this is a very good sign
> > > for the patches
> > >
> > > >[PLE hinders pv-ticket improvements, but even with PLE off,
> > > > we still off from ideal throughput (somewhere >20000)]
> > > >
> > >
> > > Okay, The ideal throughput you are referring is getting around atleast
> > > 80% of 1x throughput for over-commit. Yes we are still far away from
> > > there.
> > >
> > > >
> > > >1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
> > > >----------------------------------------------------------
> > > > Total
> > > >Configuration Throughput Notes
> > > >
> > > >3.10-default-ple_on 22736 6% CPU in host kernel, 3% spin_lock in guests
> > > >3.10-default-ple_off 23377 5% CPU in host kernel, 3% spin_lock in guests
> > > >3.10-pvticket-ple_on 22471 6% CPU in host kernel, 3% spin_lock in guests
> > > >3.10-pvticket-ple_off 23445 5% CPU in host kernel, 3% spin_lock in guests
> > > >[1x looking fine here]
> > > >
> > >
> > > I see ple_off is little better here.
> > >
> > > >
> > > >2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
> > > >----------------------------------------------------------
> > > > Total
> > > >Configuration Throughput Notes
> > > >
> > > >3.10-default-ple_on 1965 70% CPU in host kernel, 34% spin_lock in guests
> > > >3.10-default-ple_off 226 2% CPU in host kernel, 94% spin_lock in guests
> > > >3.10-pvticket-ple_on 1942 70% CPU in host kernel, 35% spin_lock in guests
> > > >3.10-pvticket-ple_off 8003 11% CPU in host kernel, 70% spin_lock in guests
> > > >[quite bad all around, but pv-tickets with PLE off the best so far.
> > > > Still quite a bit off from ideal throughput]
> > >
> > > This is again a remarkable improvement (307%).
> > > This motivates me to add a patch to disable ple when pvspinlock is on.
> > > probably we can add a hypercall that disables ple in kvm init patch.
> > > but only problem I see is what if the guests are mixed.
> > >
> > > (i.e one guest has pvspinlock support but other does not. Host
> > > supports pv)
> >
> > How about reintroducing the idea to create per-kvm ple_gap,ple_window
> > state. We were headed down that road when considering a dynamic window at
> > one point. Then you can just set a single guest's ple_gap to zero, which
> > would lead to PLE being disabled for that guest. We could also revisit
> > the dynamic window then.
> >
> Can be done, but lets understand why ple on is such a big problem. Is it
> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
It could be, but it could also be a microcode issue. Earlier versions
of Intel (and AMD) CPUs did not have the best detection mechanism and had
a "jitter" to them. The ple gap and ple window values seem to have been
chosen based on microbenchmarks - and while they might work great with
Windows-type guests, the same cannot be said about Linux.
In which case, if you fiddle with the ple gap/window you might incur
worse performance with Windows guests :-( Or older Linux guests
that use the byte-locking mechanism.
Perhaps the best option is to introduce - as a separate patchset - the
said dynamic window, which will be off when pvticket lock is off, and
then, based on further CPU improvements, turn it on/off?
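A per-VM dynamic window could be driven by a simple feedback rule. Below is a
minimal user-space sketch of one possible policy - the constants and the
ple_window_update() helper are invented for illustration and are not existing
KVM code:

```c
/* Hypothetical per-VM dynamic PLE window policy, as discussed above.
 * Grow the window after a PLE exit that found no yield candidate (the
 * spinner was likely making progress, so the exit was a false positive);
 * shrink it after a successful yield (spinning was wasteful).
 * A window of 0 would correspond to PLE being disabled for that VM. */

#define PLE_WINDOW_MIN      1024
#define PLE_WINDOW_MAX      (512 * 1024)
#define PLE_WINDOW_DEFAULT  4096

static int ple_window_update(int window, int yield_succeeded)
{
    if (yield_succeeded) {
        window /= 2;                  /* react sooner next time */
        if (window < PLE_WINDOW_MIN)
            window = PLE_WINDOW_MIN;
    } else {
        window *= 2;                  /* tolerate longer spins */
        if (window > PLE_WINDOW_MAX)
            window = PLE_WINDOW_MAX;
    }
    return window;
}
```

Setting a given guest's window to the maximum (or gap to zero) then
approximates "PLE off" for that guest only, without a global knob.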
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-26 12:52 ` Gleb Natapov
@ 2013-06-26 15:56 ` Andrew Theurer
-1 siblings, 0 replies; 96+ messages in thread
From: Andrew Theurer @ 2013-06-26 15:56 UTC (permalink / raw)
To: Gleb Natapov
Cc: Andrew Jones, Raghavendra K T, mingo, jeremy, x86, konrad.wilk,
hpa, pbonzini, linux-doc, xen-devel, peterz, mtosatti,
stefano.stabellini, andi, attilio.rao, ouyang, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri
On Wed, 2013-06-26 at 15:52 +0300, Gleb Natapov wrote:
> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
> > On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
> > > On 06/25/2013 08:20 PM, Andrew Theurer wrote:
> > > >On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
> > > >>This series replaces the existing paravirtualized spinlock mechanism
> > > >>with a paravirtualized ticketlock mechanism. The series provides
> > > >>implementation for both Xen and KVM.
> > > >>
> > > >>Changes in V9:
> > > >>- Changed spin_threshold to 32k to avoid excess halt exits that are
> > > >> causing undercommit degradation (after PLE handler improvement).
> > > >>- Added kvm_irq_delivery_to_apic (suggested by Gleb)
> > > >>- Optimized halt exit path to use PLE handler
> > > >>
> > > >>V8 of PVspinlock was posted last year. After Avi's suggestions to look
> > > >>at PLE handler's improvements, various optimizations in PLE handling
> > > >>have been tried.
> > > >
> > > >Sorry for not posting this sooner. I have tested the v9 pv-ticketlock
> > > >patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I have
> > > >tested these patches with and without PLE, as PLE is still not scalable
> > > >with large VMs.
> > > >
> > >
> > > Hi Andrew,
> > >
> > > Thanks for testing.
> > >
> > > >System: x3850X5, 40 cores, 80 threads
> > > >
> > > >
> > > >1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
> > > >----------------------------------------------------------
> > > > Total
> > > >Configuration Throughput(MB/s) Notes
> > > >
> > > >3.10-default-ple_on 22945 5% CPU in host kernel, 2% spin_lock in guests
> > > >3.10-default-ple_off 23184 5% CPU in host kernel, 2% spin_lock in guests
> > > >3.10-pvticket-ple_on 22895 5% CPU in host kernel, 2% spin_lock in guests
> > > >3.10-pvticket-ple_off 23051 5% CPU in host kernel, 2% spin_lock in guests
> > > >[all 1x results look good here]
> > >
> > > Yes. The 1x results look too close
> > >
> > > >
> > > >
> > > >2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
> > > >-----------------------------------------------------------
> > > > Total
> > > >Configuration Throughput Notes
> > > >
> > > >3.10-default-ple_on 6287 55% CPU host kernel, 17% spin_lock in guests
> > > >3.10-default-ple_off 1849 2% CPU in host kernel, 95% spin_lock in guests
> > > >3.10-pvticket-ple_on 6691 50% CPU in host kernel, 15% spin_lock in guests
> > > >3.10-pvticket-ple_off 16464 8% CPU in host kernel, 33% spin_lock in guests
> > >
> > > I see 6.426% improvement with ple_on
> > > and 161.87% improvement with ple_off. I think this is a very good sign
> > > for the patches
> > >
> > > >[PLE hinders pv-ticket improvements, but even with PLE off,
> > > > we still off from ideal throughput (somewhere >20000)]
> > > >
> > >
> > > Okay, The ideal throughput you are referring is getting around atleast
> > > 80% of 1x throughput for over-commit. Yes we are still far away from
> > > there.
> > >
> > > >
> > > >1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
> > > >----------------------------------------------------------
> > > > Total
> > > >Configuration Throughput Notes
> > > >
> > > >3.10-default-ple_on 22736 6% CPU in host kernel, 3% spin_lock in guests
> > > >3.10-default-ple_off 23377 5% CPU in host kernel, 3% spin_lock in guests
> > > >3.10-pvticket-ple_on 22471 6% CPU in host kernel, 3% spin_lock in guests
> > > >3.10-pvticket-ple_off 23445 5% CPU in host kernel, 3% spin_lock in guests
> > > >[1x looking fine here]
> > > >
> > >
> > > I see ple_off is little better here.
> > >
> > > >
> > > >2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
> > > >----------------------------------------------------------
> > > > Total
> > > >Configuration Throughput Notes
> > > >
> > > >3.10-default-ple_on 1965 70% CPU in host kernel, 34% spin_lock in guests
> > > >3.10-default-ple_off 226 2% CPU in host kernel, 94% spin_lock in guests
> > > >3.10-pvticket-ple_on 1942 70% CPU in host kernel, 35% spin_lock in guests
> > > >3.10-pvticket-ple_off 8003 11% CPU in host kernel, 70% spin_lock in guests
> > > >[quite bad all around, but pv-tickets with PLE off the best so far.
> > > > Still quite a bit off from ideal throughput]
> > >
> > > This is again a remarkable improvement (307%).
> > > This motivates me to add a patch to disable ple when pvspinlock is on.
> > > probably we can add a hypercall that disables ple in kvm init patch.
> > > but only problem I see is what if the guests are mixed.
> > >
> > > (i.e one guest has pvspinlock support but other does not. Host
> > > supports pv)
> >
> > How about reintroducing the idea to create per-kvm ple_gap,ple_window
> > state. We were headed down that road when considering a dynamic window at
> > one point. Then you can just set a single guest's ple_gap to zero, which
> > would lead to PLE being disabled for that guest. We could also revisit
> > the dynamic window then.
> >
> Can be done, but lets understand why ple on is such a big problem. Is it
> possible that ple gap and SPIN_THRESHOLD are not tuned properly?
The biggest problem currently is the double_runqueue_lock from
yield_to():
[2x overcommit with 20-vCPU VMs (8 VMs) all running dbench]
perf from host:
> 28.27% 396402 qemu-system-x86 [kernel.kallsyms] [k] _raw_spin_lock
> 4.65% 65667 qemu-system-x86 [kernel.kallsyms] [k] __schedule
> 3.87% 54802 qemu-system-x86 [kernel.kallsyms] [k] finish_task_switch
> 3.32% 47022 qemu-system-x86 [kernel.kallsyms] [k] perf_event_task_sched_out
> 2.84% 40093 qemu-system-x86 [kvm_intel] [k] vmx_vcpu_run
> 2.70% 37672 qemu-system-x86 [kernel.kallsyms] [k] yield_to
> 2.63% 36859 qemu-system-x86 [kvm] [k] kvm_vcpu_on_spin
> 2.18% 30810 qemu-system-x86 [kvm_intel] [k] __vmx_load_host_state
A tiny patch [included below] checks whether the target task is running
before taking double_runqueue_lock, and bails out if it is. This does
reduce the lock contention somewhat:
[2x overcommit with 20-vCPU VMs (8 VMs) all running dbench]
perf from host:
> 20.51% 284829 qemu-system-x86 [kernel.kallsyms] [k] _raw_spin_lock
> 5.21% 72949 qemu-system-x86 [kernel.kallsyms] [k] __schedule
> 3.70% 51962 qemu-system-x86 [kernel.kallsyms] [k] finish_task_switch
> 3.50% 48607 qemu-system-x86 [kvm] [k] kvm_vcpu_on_spin
> 3.22% 45214 qemu-system-x86 [kernel.kallsyms] [k] perf_event_task_sched_out
> 3.18% 44546 qemu-system-x86 [kvm_intel] [k] vmx_vcpu_run
> 3.13% 43176 qemu-system-x86 [kernel.kallsyms] [k] yield_to
> 2.37% 33349 qemu-system-x86 [kvm_intel] [k] __vmx_load_host_state
> 2.06% 28503 qemu-system-x86 [kernel.kallsyms] [k] get_pid_task
So, the lock contention is reduced, and the results improve slightly
over default PLE/yield_to (in this case 1942 -> 2161, 11%), but this is
still far off from no PLE at all (8003) and way off from an ideal
throughput (>20000).
One of the problems, IMO, is that we are chasing our tail and burning
too much CPU trying to fix the problem, but much of what is done is not
actually fixing the problem (getting the one vcpu holding the lock to
run again). We end up spending a lot of cycles getting a lot of vcpus
running again, and most of them are not holding that lock. One
indication of this is the context switches in the host:
[2x overcommit with 20-vCPU VMs (8 VMs) all running dbench]
pvticket with PLE on: 2579227.76/sec
pvticket with PLE off: 233711.30/sec
That's over 10x the context switches with PLE on. All of this is for
yield_to, but IMO most of the vcpus are probably yielding to vcpus which
are not actually holding the lock.
I would like to see how this changes by tracking the lock holder in the
pvticket lock structure, and when a vcpu spins beyond a threshold, the
vcpu makes a hypercall to yield_to a -vCPU-it-specifies-, the one it
knows to be holding the lock. Note that PLE is no longer needed for
this and the PLE detection should probably be disabled when the guest
has this ability.
Additionally, when other vcpus reach their spin threshold and also
identify the same target vcpu (the same lock), they may opt to not make
the yield_to hypercall, if another vcpu made the yield_to hypercall to
the same target vcpu -very-recently-, thus avoiding a redundant exit and
yield_to.
Another optimization may be to allow vcpu preemption to be visible
-inside- the guest. If a vcpu reaches the spin threshold and then
identifies the lock-holding vcpu, it checks to see if a preemption
bit is set for that vcpu. If it is not set, then it does nothing, and
if it is, it makes the yield_to hypercall. This should help for locks
which really do have a big critical section, and the vcpus really do
need to spin for a while.
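The three guest-side ideas above (directed yield to the lock holder,
suppression of redundant yields, and a host-exposed preemption bit) could be
combined roughly as below. This is a single-threaded sketch with stubbed
names - pv_ticket_lock, hvc_yield_to(), vcpu_preempted[] are all hypothetical,
not the actual pvticket implementation:

```c
#include <stdbool.h>

#define SPIN_THRESHOLD 32768   /* the 32k threshold from this series */

/* Hypothetical lock layout: the holder records its vcpu id on acquire. */
struct pv_ticket_lock {
    unsigned head, tail;
    int holder_vcpu;
};

static bool vcpu_preempted[64];      /* preemption bits exposed by the host */
static int  last_yield_target = -1;  /* vcpu yielded to -very-recently- */
static int  hypercalls_made;

static void hvc_yield_to(int vcpu)   /* stand-in for the real hypercall */
{
    hypercalls_made++;
    last_yield_target = vcpu;
}

/* Called once a spinner's loop counter passes SPIN_THRESHOLD. */
static void slowpath_yield(struct pv_ticket_lock *lock)
{
    int holder = lock->holder_vcpu;

    if (!vcpu_preempted[holder])     /* holder is running: keep spinning */
        return;
    if (last_yield_target == holder) /* someone just boosted it: skip exit */
        return;
    hvc_yield_to(holder);            /* directed yield, no PLE needed */
}
```

The point of the sketch is that every exit targets the one vcpu that matters,
instead of PLE's guess, and back-to-back spinners on the same lock collapse
into a single hypercall.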
OK, one last thing. This is a completely different approach to the
problem: automatically adjust the number of active vcpus from within a
guest, with some sort of daemon (vcpud?) to approximate the actual host
cpu resource available. The daemon would monitor steal time and hot-unplug
vcpus to reduce steal time to a small percentage, ending up with a slight
cpu overcommit. It would also have to online vcpus if more cpu resource is
made available, again looking at steal time and adding vcpus until steal
time increases to a small percentage. I am not sure if the overhead of
plugging/unplugging is worth it, but I would bet the guest would be far
more efficient, because (a) PLE and pvticket would be handling much
lower effective cpu overcommit (let's say ~1.1x) and (b) the guest and
its applications would have much better scalability because the active
vcpu count is much lower.
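The steal-time policy such a daemon would need can be sketched against the
real /proc/stat field order (user nice system idle iowait irq softirq steal);
the 2%/10% thresholds and the vcpud_decide() helper are invented for
illustration:

```c
#include <stdio.h>

/* Decide whether a hypothetical vcpud should plug or unplug a vcpu,
 * given the aggregate "cpu" line from /proc/stat. */
enum vcpud_action { VCPUD_KEEP, VCPUD_UNPLUG, VCPUD_PLUG };

static enum vcpud_action vcpud_decide(const char *stat_line)
{
    unsigned long long user, nice, sys, idle, iowait, irq, softirq, steal;
    unsigned long long total;
    double steal_pct;

    if (sscanf(stat_line, "cpu %llu %llu %llu %llu %llu %llu %llu %llu",
               &user, &nice, &sys, &idle, &iowait, &irq, &softirq,
               &steal) != 8)
        return VCPUD_KEEP;

    total = user + nice + sys + idle + iowait + irq + softirq + steal;
    steal_pct = 100.0 * steal / total;

    if (steal_pct > 10.0)   /* too much contention: shed a vcpu */
        return VCPUD_UNPLUG;
    if (steal_pct < 2.0)    /* host has spare cycles: try onlining one */
        return VCPUD_PLUG;
    return VCPUD_KEEP;      /* inside the target band (~1.1x overcommit) */
}
```

In practice the daemon would sample deltas between two readings rather than
cumulative counters, but the decision logic is the same.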
So, let's see what one of those situations would look like, without
actually writing something to do the unplugging/plugging for us. Let's
take one of the examples above, where we have 8 VMs, each defined
with 20 vcpus, for 2x overcommit, but let's unplug 9 vcpus in each of
the VMs, so we end up with a 1.1x effective overcommit (the last test
below).
[2x overcommit with 20-vCPU VMs (8 VMs) all running dbench]
Total
Configuration Throughput Notes
3.10-default-ple_on 1965 70% CPU in host kernel, 34% spin_lock in guests
3.10-default-ple_off 226 2% CPU in host kernel, 94% spin_lock in guests
3.10-pvticket-ple_on 1942 70% CPU in host kernel, 35% spin_lock in guests
3.10-pvticket-ple_off 8003 11% CPU in host kernel, 70% spin_lock in guests
3.10-pvticket-ple-on_doublerq-opt 2161 68% CPU in host kernel, 33% spin_lock in guests
3.10-pvticket-ple_on_doublerq-opt_9vcpus-unplugged 22534 6% CPU in host kernel, 9% steal in guests, 2% spin_lock in guests
Finally, we get a nice result! Note this is the lowest spin % in the guest. The spin_lock in the host is also quite a bit better:
> 6.77% 55421 qemu-system-x86 [kernel.kallsyms] [k] _raw_spin_lock
> 4.29% 57345 qemu-system-x86 [kvm_intel] [k] vmx_vcpu_run
> 3.87% 62049 qemu-system-x86 [kernel.kallsyms] [k] native_apic_msr_write
> 2.88% 45272 qemu-system-x86 [kernel.kallsyms] [k] atomic_dec_and_mutex_lock
> 2.71% 39276 qemu-system-x86 [kvm] [k] vcpu_enter_guest
> 2.48% 38886 qemu-system-x86 [kernel.kallsyms] [k] memset
> 2.22% 18331 qemu-system-x86 [kvm] [k] kvm_vcpu_on_spin
> 2.09% 32628 qemu-system-x86 [kernel.kallsyms] [k] perf_event_alloc
Also the host context switches dropped significantly (66%), to 38768/sec.
-Andrew
Patch to reduce double runqueue lock in yield_to():
Signed-off-by: Andrew Theurer <habanero@linux.vnet.ibm.com>
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 58453b8..795d324 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4454,6 +4454,9 @@ again:
 		goto out_irq;
 	}
 
+	if (task_running(p_rq, p) || p->state)
+		goto out_irq;
+
 	double_rq_lock(rq, p_rq);
 	while (task_rq(p) != p_rq) {
 		double_rq_unlock(rq, p_rq);
^ permalink raw reply related [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-26 15:56 ` Andrew Theurer
@ 2013-07-01 9:30 ` Raghavendra K T
-1 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-07-01 9:30 UTC (permalink / raw)
To: habanero
Cc: Gleb Natapov, Andrew Jones, mingo, jeremy, x86, konrad.wilk, hpa,
pbonzini, linux-doc, xen-devel, peterz, mtosatti,
stefano.stabellini, andi, attilio.rao, ouyang, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, virtualization, srivatsa.vaddagiri
On 06/26/2013 09:26 PM, Andrew Theurer wrote:
> On Wed, 2013-06-26 at 15:52 +0300, Gleb Natapov wrote:
>> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>>> This series replaces the existing paravirtualized spinlock mechanism
>>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>>> implementation for both Xen and KVM.
>>>>>>
>>>>>> Changes in V9:
>>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>>> causing undercommit degradation (after PLE handler improvement).
>>>>>> - Added kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>>> - Optimized halt exit path to use PLE handler
>>>>>>
>>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions to look
>>>>>> at PLE handler's improvements, various optimizations in PLE handling
>>>>>> have been tried.
>>>>>
>>>>> Sorry for not posting this sooner. I have tested the v9 pv-ticketlock
>>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I have
>>>>> tested these patches with and without PLE, as PLE is still not scalable
>>>>> with large VMs.
>>>>>
>>>>
>>>> Hi Andrew,
>>>>
>>>> Thanks for testing.
>>>>
>>>>> System: x3850X5, 40 cores, 80 threads
>>>>>
>>>>>
>>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>>> ----------------------------------------------------------
>>>>> Total
>>>>> Configuration Throughput(MB/s) Notes
>>>>>
>>>>> 3.10-default-ple_on 22945 5% CPU in host kernel, 2% spin_lock in guests
>>>>> 3.10-default-ple_off 23184 5% CPU in host kernel, 2% spin_lock in guests
>>>>> 3.10-pvticket-ple_on 22895 5% CPU in host kernel, 2% spin_lock in guests
>>>>> 3.10-pvticket-ple_off 23051 5% CPU in host kernel, 2% spin_lock in guests
>>>>> [all 1x results look good here]
>>>>
>>>> Yes. The 1x results look too close
>>>>
>>>>>
>>>>>
>>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>>> -----------------------------------------------------------
>>>>> Total
>>>>> Configuration Throughput Notes
>>>>>
>>>>> 3.10-default-ple_on 6287 55% CPU host kernel, 17% spin_lock in guests
>>>>> 3.10-default-ple_off 1849 2% CPU in host kernel, 95% spin_lock in guests
>>>>> 3.10-pvticket-ple_on 6691 50% CPU in host kernel, 15% spin_lock in guests
>>>>> 3.10-pvticket-ple_off 16464 8% CPU in host kernel, 33% spin_lock in guests
>>>>
>>>> I see 6.426% improvement with ple_on
>>>> and 161.87% improvement with ple_off. I think this is a very good sign
>>>> for the patches
>>>>
>>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>>> we still off from ideal throughput (somewhere >20000)]
>>>>>
>>>>
>>>> Okay, The ideal throughput you are referring is getting around atleast
>>>> 80% of 1x throughput for over-commit. Yes we are still far away from
>>>> there.
>>>>
>>>>>
>>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>>> ----------------------------------------------------------
>>>>> Total
>>>>> Configuration Throughput Notes
>>>>>
>>>>> 3.10-default-ple_on 22736 6% CPU in host kernel, 3% spin_lock in guests
>>>>> 3.10-default-ple_off 23377 5% CPU in host kernel, 3% spin_lock in guests
>>>>> 3.10-pvticket-ple_on 22471 6% CPU in host kernel, 3% spin_lock in guests
>>>>> 3.10-pvticket-ple_off 23445 5% CPU in host kernel, 3% spin_lock in guests
>>>>> [1x looking fine here]
>>>>>
>>>>
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
@ 2013-07-01 9:30 ` Raghavendra K T
0 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-07-01 9:30 UTC (permalink / raw)
To: habanero
Cc: jeremy, gregkh, linux-doc, peterz, riel, virtualization, andi,
hpa, stefano.stabellini, xen-devel, kvm, x86, mingo,
Andrew Jones, konrad.wilk, ouyang, avi.kivity, tglx, chegu_vinod,
linux-kernel, srivatsa.vaddagiri, attilio.rao, pbonzini,
torvalds, stephan.diestelhorst
On 06/26/2013 09:26 PM, Andrew Theurer wrote:
> On Wed, 2013-06-26 at 15:52 +0300, Gleb Natapov wrote:
>> On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
>>> On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
>>>> On 06/25/2013 08:20 PM, Andrew Theurer wrote:
>>>>> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>>>>>> This series replaces the existing paravirtualized spinlock mechanism
>>>>>> with a paravirtualized ticketlock mechanism. The series provides
>>>>>> implementation for both Xen and KVM.
>>>>>>
>>>>>> Changes in V9:
>>>>>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>>>>>> causing undercommit degradation (after PLE handler improvement).
>>>>>> - Added kvm_irq_delivery_to_apic (suggested by Gleb)
>>>>>> - Optimized halt exit path to use PLE handler
>>>>>>
>>>>>> V8 of PVspinlock was posted last year. After Avi's suggestions to look
>>>>>> at PLE handler's improvements, various optimizations in PLE handling
>>>>>> have been tried.
>>>>>
>>>>> Sorry for not posting this sooner. I have tested the v9 pv-ticketlock
>>>>> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I have
>>>>> tested these patches with and without PLE, as PLE is still not scalable
>>>>> with large VMs.
>>>>>
>>>>
>>>> Hi Andrew,
>>>>
>>>> Thanks for testing.
>>>>
>>>>> System: x3850X5, 40 cores, 80 threads
>>>>>
>>>>>
>>>>> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
>>>>> ----------------------------------------------------------
>>>>> Total
>>>>> Configuration Throughput(MB/s) Notes
>>>>>
>>>>> 3.10-default-ple_on 22945 5% CPU in host kernel, 2% spin_lock in guests
>>>>> 3.10-default-ple_off 23184 5% CPU in host kernel, 2% spin_lock in guests
>>>>> 3.10-pvticket-ple_on 22895 5% CPU in host kernel, 2% spin_lock in guests
>>>>> 3.10-pvticket-ple_off 23051 5% CPU in host kernel, 2% spin_lock in guests
>>>>> [all 1x results look good here]
>>>>
>>>> Yes. The 1x results look very close.
>>>>
>>>>>
>>>>>
>>>>> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
>>>>> -----------------------------------------------------------
>>>>> Total
>>>>> Configuration Throughput Notes
>>>>>
>>>>> 3.10-default-ple_on 6287 55% CPU host kernel, 17% spin_lock in guests
>>>>> 3.10-default-ple_off 1849 2% CPU in host kernel, 95% spin_lock in guests
>>>>> 3.10-pvticket-ple_on 6691 50% CPU in host kernel, 15% spin_lock in guests
>>>>> 3.10-pvticket-ple_off 16464 8% CPU in host kernel, 33% spin_lock in guests
>>>>
>>>> I see 6.426% improvement with ple_on
>>>> and 161.87% improvement with ple_off. I think this is a very good sign
>>>> for the patches.
>>>>
>>>>> [PLE hinders pv-ticket improvements, but even with PLE off,
>>>>> we are still off from ideal throughput (somewhere >20000)]
>>>>>
>>>>
>>>> Okay, the ideal throughput you are referring to is getting at least
>>>> 80% of the 1x throughput for over-commit. Yes, we are still far away
>>>> from there.
>>>>
>>>>>
>>>>> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
>>>>> ----------------------------------------------------------
>>>>> Total
>>>>> Configuration Throughput Notes
>>>>>
>>>>> 3.10-default-ple_on 22736 6% CPU in host kernel, 3% spin_lock in guests
>>>>> 3.10-default-ple_off 23377 5% CPU in host kernel, 3% spin_lock in guests
>>>>> 3.10-pvticket-ple_on 22471 6% CPU in host kernel, 3% spin_lock in guests
>>>>> 3.10-pvticket-ple_off 23445 5% CPU in host kernel, 3% spin_lock in guests
>>>>> [1x looking fine here]
>>>>>
>>>>
>>>> I see ple_off is little better here.
>>>>
>>>>>
>>>>> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
>>>>> ----------------------------------------------------------
>>>>> Total
>>>>> Configuration Throughput Notes
>>>>>
>>>>> 3.10-default-ple_on 1965 70% CPU in host kernel, 34% spin_lock in guests
>>>>> 3.10-default-ple_off 226 2% CPU in host kernel, 94% spin_lock in guests
>>>>> 3.10-pvticket-ple_on 1942 70% CPU in host kernel, 35% spin_lock in guests
>>>>> 3.10-pvticket-ple_off 8003 11% CPU in host kernel, 70% spin_lock in guests
>>>>> [quite bad all around, but pv-tickets with PLE off the best so far.
>>>>> Still quite a bit off from ideal throughput]
>>>>
>>>> This is again a remarkable improvement (307%).
>>>> This motivates me to add a patch to disable PLE when pvspinlock is on.
>>>> Probably we can add a hypercall that disables PLE in the KVM init patch,
>>>> but the only problem I see is what if the guests are mixed.
>>>>
>>>> (i.e., one guest has pvspinlock support but the other does not; the
>>>> host supports pv)
>>>
>>> How about reintroducing the idea to create per-kvm ple_gap,ple_window
>>> state. We were headed down that road when considering a dynamic window at
>>> one point. Then you can just set a single guest's ple_gap to zero, which
>>> would lead to PLE being disabled for that guest. We could also revisit
>>> the dynamic window then.
>>>
>> Can be done, but let's understand why PLE on is such a big problem. Is it
>> possible that ple_gap and SPIN_THRESHOLD are not tuned properly?
>
> The biggest problem currently is the double_runqueue_lock from
> yield_to():
> [2x overcommit with 20-vCPU VMs (8 VMs) all running dbench]
>
> perf from host:
>> 28.27% 396402 qemu-system-x86 [kernel.kallsyms] [k] _raw_spin_lock
>> 4.65% 65667 qemu-system-x86 [kernel.kallsyms] [k] __schedule
>> 3.87% 54802 qemu-system-x86 [kernel.kallsyms] [k] finish_task_switch
>> 3.32% 47022 qemu-system-x86 [kernel.kallsyms] [k] perf_event_task_sched_out
>> 2.84% 40093 qemu-system-x86 [kvm_intel] [k] vmx_vcpu_run
>> 2.70% 37672 qemu-system-x86 [kernel.kallsyms] [k] yield_to
>> 2.63% 36859 qemu-system-x86 [kvm] [k] kvm_vcpu_on_spin
>> 2.18% 30810 qemu-system-x86 [kvm_intel] [k] __vmx_load_host_state
>
> A tiny patch [included below] checks if the target task is running
> before double_runqueue_lock (then bails if it is running). This does
> reduce the lock contention somewhat:
>
> [2x overcommit with 20-vCPU VMs (8 VMs) all running dbench]
>
> perf from host:
>> 20.51% 284829 qemu-system-x86 [kernel.kallsyms] [k] _raw_spin_lock
>> 5.21% 72949 qemu-system-x86 [kernel.kallsyms] [k] __schedule
>> 3.70% 51962 qemu-system-x86 [kernel.kallsyms] [k] finish_task_switch
>> 3.50% 48607 qemu-system-x86 [kvm] [k] kvm_vcpu_on_spin
>> 3.22% 45214 qemu-system-x86 [kernel.kallsyms] [k] perf_event_task_sched_out
>> 3.18% 44546 qemu-system-x86 [kvm_intel] [k] vmx_vcpu_run
>> 3.13% 43176 qemu-system-x86 [kernel.kallsyms] [k] yield_to
>> 2.37% 33349 qemu-system-x86 [kvm_intel] [k] __vmx_load_host_state
>> 2.06% 28503 qemu-system-x86 [kernel.kallsyms] [k] get_pid_task
>
> So, the lock contention is reduced, and the results improve slightly
> over default PLE/yield_to (in this case 1942 -> 2161, 11%), but this is
> still far off from no PLE at all (8003) and way off from an ideal
> throughput (>20000).
>
> One of the problems, IMO, is that we are chasing our tail and burning
> too much CPU trying to fix the problem, but much of what is done is not
> actually fixing the problem (getting the one vcpu holding the lock to
> run again). We end up spending a lot of cycles getting a lot of vcpus
> running again, and most of them are not holding that lock. One
> indication of this is the context switches in the host:
>
> [2x overcommit with 20-vCPU VMs (8 VMs) all running dbench]
>
> pvticket with PLE on: 2579227.76/sec
> pvticket with PLE off: 233711.30/sec
>
> That's over 10x the context switches with PLE on. All of this is for
> yield_to, but IMO most of the vcpus are probably yielding to vcpus which
> are not actually holding the lock.
>
> I would like to see how this changes by tracking the lock holder in the
> pvticket lock structure, and when a vcpu spins beyond a threshold, the
> vcpu makes a hypercall to yield_to a -vCPU-it-specifies-, the one it
> knows to be holding the lock. Note that PLE is no longer needed for
> this and the PLE detection should probably be disabled when the guest
> has this ability.
>
> Additionally, when other vcpus reach their spin threshold and also
> identify the same target vcpu (the same lock), they may opt to not make
> the yield_to hypercall, if another vcpu made the yield_to hypercall to
> the same target vcpu -very-recently-, thus avoiding a redundant exit and
> yield_to.
>
> Another optimization may be to allow vcpu preemption to be visible
> -inside- the guest. If a vcpu reaches the spin threshold and then
> identifies the lock-holding vcpu, it checks to see if a preemption
> bit is set for that vcpu. If it is not set, it does nothing, and
> if it is set, it makes the yield_to hypercall. This should help for locks
> which really do have a big critical section, and the vcpus really do
> need to spin for a while.
>
> OK, one last thing. This is a completely different approach to the
> problem: automatically adjust active vcpus from within a guest, with
> some sort of daemon (vcpud?) to approximate the actual host cpu resource
> available. The daemon would monitor steal time and hot unplug vcpus to
> reduce steal time to a small percentage, ending up with a slight cpu
> overcommit. It would also have to online vcpus if more cpu resource is
> made available, again looking at steal time and adding vcpus until steal
> time increases to a small percentage. I am not sure if the overhead of
> plugging/unplugging is worth it, but I would bet the guest would be far
> more efficient, because (a) PLE and pvticket would be handling much
> lower effective cpu overcommit (let's say ~1.1x) and (b) the guest and
> its applications would have much better scalability because the active
> vcpu count is much lower.
>
> So, let's see what one of those situations would look like, without
> actually writing something to do the unplugging/plugging for us. Let's
> take one of the examples above, where we have 8 VMs, each defined
> with 20 vcpus, for 2x overcommit, but let's unplug 9 vcpus in each of
> the VMs, so we end up with a 1.1x effective overcommit (the last test
> below).
>
> [2x overcommit with 20-vCPU VMs (8 VMs) all running dbench]
>
> Total
> Configuration Throughput Notes
>
> 3.10-default-ple_on 1965 70% CPU in host kernel, 34% spin_lock in guests
> 3.10-default-ple_off 226 2% CPU in host kernel, 94% spin_lock in guests
> 3.10-pvticket-ple_on 1942 70% CPU in host kernel, 35% spin_lock in guests
> 3.10-pvticket-ple_off 8003 11% CPU in host kernel, 70% spin_lock in guests
> 3.10-pvticket-ple-on_doublerq-opt 2161 68% CPU in host kernel, 33% spin_lock in guests
> 3.10-pvticket-ple_on_doublerq-opt_9vcpus-unplugged 22534 6% CPU in host kernel, 9% steal in guests, 2% spin_lock in guests
>
> Finally, we get a nice result! Note this is the lowest spin % in the guest. The spin_lock in the host is also quite a bit better:
>
>
>> 6.77% 55421 qemu-system-x86 [kernel.kallsyms] [k] _raw_spin_lock
>> 4.29% 57345 qemu-system-x86 [kvm_intel] [k] vmx_vcpu_run
>> 3.87% 62049 qemu-system-x86 [kernel.kallsyms] [k] native_apic_msr_write
>> 2.88% 45272 qemu-system-x86 [kernel.kallsyms] [k] atomic_dec_and_mutex_lock
>> 2.71% 39276 qemu-system-x86 [kvm] [k] vcpu_enter_guest
>> 2.48% 38886 qemu-system-x86 [kernel.kallsyms] [k] memset
>> 2.22% 18331 qemu-system-x86 [kvm] [k] kvm_vcpu_on_spin
>> 2.09% 32628 qemu-system-x86 [kernel.kallsyms] [k] perf_event_alloc
>
> Also the host context switches dropped significantly (66%), to 38768/sec.
>
> -Andrew
>
>
>
>
>
> Patch to reduce double runqueue lock in yield_to():
>
> Signed-off-by: Andrew Theurer <habanero@linux.vnet.ibm.com>
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 58453b8..795d324 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4454,6 +4454,9 @@ again:
> goto out_irq;
> }
>
> + if (task_running(p_rq, p) || p->state)
> + goto out_irq;
> +
> double_rq_lock(rq, p_rq);
> while (task_rq(p) != p_rq) {
> double_rq_unlock(rq, p_rq);
>
>
Hi Andrew,
I found that this patch indeed helped to gain a little more on top of
the V10 pvspinlock patches in my test.
Here is the result of testing again with a 32-vcpu guest on a 32-core
machine (HT disabled).
patched kernel = 3.10-rc2 + v10 pvspinlock + reducing double rq patch
+---+-----------+-----------+-----------+------------+-----------+
ebizzy (rec/sec higher is better)
+---+-----------+-----------+-----------+------------+-----------+
base stdev patched stdev %improvement
+---+-----------+-----------+-----------+------------+-----------+
1x 5574.9000 237.4997 5494.6000 164.7451 -1.44038
2x 2741.5000 561.3090 3472.6000 98.6376 26.66788
3x 2146.2500 216.7718 2293.6667 56.7872 6.86857
4x 1663.0000 141.9235 1856.0000 120.7524 11.60553
+---+-----------+-----------+-----------+------------+-----------+
+---+-----------+-----------+-----------+------------+-----------+
dbench (throughput higher is better)
+---+-----------+-----------+-----------+------------+-----------+
base stdev patched stdev %improvement
+---+-----------+-----------+-----------+------------+-----------+
1x 14111.5600 754.4525 14695.3600 104.6816 4.13703
2x 2481.6270 71.2665 2774.8420 58.4845 11.81543
3x 1510.2483 31.8634 1539.7300 36.1814 1.95211
4x 1029.4875 16.9166 1059.9800 27.4114 2.96191
+---+-----------+-----------+-----------+------------+-----------+
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-01 19:21 ` Raghavendra K T
@ 2013-06-25 14:50 ` Andrew Theurer
-1 siblings, 0 replies; 96+ messages in thread
From: Andrew Theurer @ 2013-06-25 14:50 UTC (permalink / raw)
To: Raghavendra K T
Cc: jeremy, gregkh, kvm, linux-doc, peterz, drjones, virtualization,
andi, hpa, stefano.stabellini, xen-devel, x86, mingo, riel,
konrad.wilk, ouyang, avi.kivity, tglx, chegu_vinod, linux-kernel,
srivatsa.vaddagiri, attilio.rao, pbonzini, torvalds,
stephan.diestelhorst
On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
> This series replaces the existing paravirtualized spinlock mechanism
> with a paravirtualized ticketlock mechanism. The series provides
> implementation for both Xen and KVM.
>
> Changes in V9:
> - Changed spin_threshold to 32k to avoid excess halt exits that are
> causing undercommit degradation (after PLE handler improvement).
> - Added kvm_irq_delivery_to_apic (suggested by Gleb)
> - Optimized halt exit path to use PLE handler
>
> V8 of PVspinlock was posted last year. After Avi's suggestions to look
> at PLE handler's improvements, various optimizations in PLE handling
> have been tried.
Sorry for not posting this sooner. I have tested the v9 pv-ticketlock
patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I have
tested these patches with and without PLE, as PLE is still not scalable
with large VMs.
System: x3850X5, 40 cores, 80 threads
1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
----------------------------------------------------------
Total
Configuration Throughput(MB/s) Notes
3.10-default-ple_on 22945 5% CPU in host kernel, 2% spin_lock in guests
3.10-default-ple_off 23184 5% CPU in host kernel, 2% spin_lock in guests
3.10-pvticket-ple_on 22895 5% CPU in host kernel, 2% spin_lock in guests
3.10-pvticket-ple_off 23051 5% CPU in host kernel, 2% spin_lock in guests
[all 1x results look good here]
2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
-----------------------------------------------------------
Total
Configuration Throughput Notes
3.10-default-ple_on 6287 55% CPU host kernel, 17% spin_lock in guests
3.10-default-ple_off 1849 2% CPU in host kernel, 95% spin_lock in guests
3.10-pvticket-ple_on 6691 50% CPU in host kernel, 15% spin_lock in guests
3.10-pvticket-ple_off 16464 8% CPU in host kernel, 33% spin_lock in guests
[PLE hinders pv-ticket improvements, but even with PLE off,
we are still off from ideal throughput (somewhere >20000)]
1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
----------------------------------------------------------
Total
Configuration Throughput Notes
3.10-default-ple_on 22736 6% CPU in host kernel, 3% spin_lock in guests
3.10-default-ple_off 23377 5% CPU in host kernel, 3% spin_lock in guests
3.10-pvticket-ple_on 22471 6% CPU in host kernel, 3% spin_lock in guests
3.10-pvticket-ple_off 23445 5% CPU in host kernel, 3% spin_lock in guests
[1x looking fine here]
2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
----------------------------------------------------------
Total
Configuration Throughput Notes
3.10-default-ple_on 1965 70% CPU in host kernel, 34% spin_lock in guests
3.10-default-ple_off 226 2% CPU in host kernel, 94% spin_lock in guests
3.10-pvticket-ple_on 1942 70% CPU in host kernel, 35% spin_lock in guests
3.10-pvticket-ple_off 8003 11% CPU in host kernel, 70% spin_lock in guests
[quite bad all around, but pv-tickets with PLE off the best so far.
Still quite a bit off from ideal throughput]
In summary, I would state that the pv-ticket is an overall win, but the
current PLE handler tends to "get in the way" on these larger guests.
-Andrew
* [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
@ 2013-06-01 8:21 Raghavendra K T
0 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-06-01 8:21 UTC (permalink / raw)
To: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini
Cc: gregkh, kvm, linux-doc, peterz, drjones, virtualization, andi,
xen-devel, Raghavendra K T, habanero, riel, stefano.stabellini,
ouyang, avi.kivity, tglx, chegu_vinod, linux-kernel,
srivatsa.vaddagiri, attilio.rao, torvalds, stephan.diestelhorst
This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism. The series provides
implementation for both Xen and KVM.
Changes in V9:
- Changed spin_threshold to 32k to avoid excess halt exits that are
causing undercommit degradation (after PLE handler improvement).
- Added kvm_irq_delivery_to_apic (suggested by Gleb)
- Optimized halt exit path to use PLE handler
V8 of PVspinlock was posted last year. After Avi's suggestions to look
at PLE handler's improvements, various optimizations in PLE handling
have been tried.
With this series we see that we could get a little more improvement on
top of that.
Ticket locks have an inherent problem in a virtualized case, because
the vCPUs are scheduled rather than running concurrently (ignoring
gang scheduled vCPUs). This can result in catastrophic performance
collapses when the vCPU scheduler doesn't schedule the correct "next"
vCPU, and ends up scheduling a vCPU which burns its entire timeslice
spinning. (Note that this is not the same problem as lock-holder
preemption, which this series also addresses; that's also a problem,
but not catastrophic).
(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)
Currently we deal with this by having PV spinlocks, which adds a layer
of indirection in front of all the spinlock functions, and defining a
completely new implementation for Xen (and for other pvops users, but
there are none at present).
PV ticketlocks keep the existing ticketlock implementation
(fastpath) as-is, but add a couple of pvops for the slow paths:
- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
iterations, then call out to the __ticket_lock_spinning() pvop,
which allows a backend to block the vCPU rather than spinning. This
pvop can set the lock into "slowpath state".
- When releasing a lock, if it is in "slowpath state", then call
__ticket_unlock_kick() to kick the next vCPU in line awake. If the
lock is no longer in contention, it also clears the slowpath flag.
The "slowpath state" is stored in the LSB of the lock tail
ticket. This has the effect of reducing the max number of CPUs by
half (so, a "small ticket" can deal with 128 CPUs, and "large ticket"
32768).
For KVM, one hypercall is introduced in the hypervisor that allows a vcpu
to kick another vcpu out of halt state.
The blocking of a vcpu is done using halt() in the (lock_spinning) slowpath.
Overall, it results in a large reduction in code, it makes the native
and virtualized cases closer, and it removes a layer of indirection
around all the spinlock functions.
The fast path (taking an uncontended lock which isn't in "slowpath"
state) is optimal, identical to the non-paravirtualized case.
The inner part of ticket lock code becomes:
inc = xadd(&lock->tickets, inc);
inc.tail &= ~TICKET_SLOWPATH_FLAG;
if (likely(inc.head == inc.tail))
goto out;
for (;;) {
unsigned count = SPIN_THRESHOLD;
do {
if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
goto out;
cpu_relax();
} while (--count);
__ticket_lock_spinning(lock, inc.tail);
}
out: barrier();
which results in:
push %rbp
mov %rsp,%rbp
mov $0x200,%eax
lock xadd %ax,(%rdi)
movzbl %ah,%edx
cmp %al,%dl
jne 1f # Slowpath if lock in contention
pop %rbp
retq
### SLOWPATH START
1: and $-2,%edx
movzbl %dl,%esi
2: mov $0x800,%eax
jmp 4f
3: pause
sub $0x1,%eax
je 5f
4: movzbl (%rdi),%ecx
cmp %cl,%dl
jne 3b
pop %rbp
retq
5: callq *__ticket_lock_spinning
jmp 2b
### SLOWPATH END
With CONFIG_PARAVIRT_SPINLOCKS=n, the code changes slightly: the
fastpath case is straight through (taking the lock without
contention), and the spin loop is out of line:
push %rbp
mov %rsp,%rbp
mov $0x100,%eax
lock xadd %ax,(%rdi)
movzbl %ah,%edx
cmp %al,%dl
jne 1f
pop %rbp
retq
### SLOWPATH START
1: pause
movzbl (%rdi),%eax
cmp %dl,%al
jne 1b
pop %rbp
retq
### SLOWPATH END
The unlock code is complicated by the need to both add to the lock's
"head" and fetch the slowpath flag from "tail". This version of the
patch uses a locked add to do this, followed by a test to see if the
slowflag is set. The lock prefix acts as a full memory barrier, so we
can be sure that other CPUs will have seen the unlock before we read
the flag (without the barrier the read could be fetched from the
store queue before it hits memory, which could result in a deadlock).
This is all unnecessary complication if you're not using PV ticket
locks, so it also uses the jump-label machinery to use the standard
"add"-based unlock in the non-PV case.
if (TICKET_SLOWPATH_FLAG &&
static_key_false(&paravirt_ticketlocks_enabled)) {
arch_spinlock_t prev;
prev = *lock;
add_smp(&lock->tickets.head, TICKET_LOCK_INC);
/* add_smp() is a full mb() */
if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
__ticket_unlock_slowpath(lock, prev);
} else
__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
which generates:
push %rbp
mov %rsp,%rbp
nop5 # replaced by 5-byte jmp 2f when PV enabled
# non-PV unlock
addb $0x2,(%rdi)
1: pop %rbp
retq
### PV unlock ###
2: movzwl (%rdi),%esi # Fetch prev
lock addb $0x2,(%rdi) # Do unlock
testb $0x1,0x1(%rdi) # Test flag
je 1b # Finished if not set
### Slow path ###
add $2,%sil # Add "head" in old lock state
mov %esi,%edx
and $0xfe,%dh # clear slowflag for comparison
movzbl %dh,%eax
cmp %dl,%al # If head == tail (uncontended)
je 4f # clear slowpath flag
# Kick next CPU waiting for lock
3: movzbl %sil,%esi
callq *pv_lock_ops.kick
pop %rbp
retq
# Lock no longer contended - clear slowflag
4: mov %esi,%eax
lock cmpxchg %dx,(%rdi) # cmpxchg to clear flag
cmp %si,%ax
jne 3b # If clear failed, then kick
pop %rbp
retq
So when not using PV ticketlocks, the unlock sequence just has a
5-byte nop added to it, and the PV case is reasonably straightforward
aside from requiring a "lock add".
Results:
=======
base = 3.10-rc2 kernel
patched = base + this series
The test was on a 32-core machine (model: Intel(R) Xeon(R) CPU X7560), HT
disabled, with a 32-vcpu KVM guest with 8GB RAM.
+-----------+-----------+-----------+------------+-----------+
ebizzy (records/sec) higher is better
+-----------+-----------+-----------+------------+-----------+
base stdev patched stdev %improvement
+-----------+-----------+-----------+------------+-----------+
1x 5574.9000 237.4997 5618.0000 94.0366 0.77311
2x 2741.5000 561.3090 3332.0000 102.4738 21.53930
3x 2146.2500 216.7718 2302.3333 76.3870 7.27237
4x 1663.0000 141.9235 1753.7500 83.5220 5.45701
+-----------+-----------+-----------+------------+-----------+
+-----------+-----------+-----------+------------+-----------+
dbench (Throughput) higher is better
+-----------+-----------+-----------+------------+-----------+
base stdev patched stdev %improvement
+-----------+-----------+-----------+------------+-----------+
1x 14111.5600 754.4525 14645.9900 114.3087 3.78718
2x 2481.6270 71.2665 2667.1280 73.8193 7.47498
3x 1510.2483 31.8634 1503.8792 36.0777 -0.42173
4x 1029.4875 16.9166 1039.7069 43.8840 0.99267
+-----------+-----------+-----------+------------+-----------+
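The %improvement columns in both tables are the usual relative delta against base; a quick check against two rows above:

```python
# Verify the %improvement figures: 100 * (patched - base) / base.
def improvement(base, patched):
    return 100.0 * (patched - base) / base

# ebizzy 2x row
assert abs(improvement(2741.5, 3332.0) - 21.53930) < 1e-3
# dbench 3x row (the one small regression)
assert abs(improvement(1510.2483, 1503.8792) - (-0.42173)) < 1e-3
```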
Your suggestions and comments are welcome.
github link: https://github.com/ktraghavendra/linux/tree/pvspinlock_v9
Please note that we set SPIN_THRESHOLD = 32k with this series, which
costs a little overcommit performance on PLE machines and some overall
performance on non-PLE machines.
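The SPIN_THRESHOLD policy (spin for a bounded number of iterations, then invoke the blocking pvop, i.e. halt on KVM) can be sketched as follows; `is_free` and `block` are hypothetical stand-ins for the lock probe and for __ticket_lock_spinning():

```python
# Hypothetical sketch of the lock slowpath policy: spin up to
# SPIN_THRESHOLD probes, then block until kicked, then retry.
SPIN_THRESHOLD = 1 << 15   # 32k, as chosen in this series

def slow_lock(is_free, block):
    while True:
        for _ in range(SPIN_THRESHOLD):
            if is_free():              # ACCESS_ONCE(head) == my ticket
                return "acquired"
            # cpu_relax() here on real hardware
        block()                        # halt until the unlocker kicks us

# Example: the lock frees up on the 40001st probe, so the vCPU
# blocks exactly once before acquiring.
calls = {"probe": 0, "block": 0}
def is_free():
    calls["probe"] += 1
    return calls["probe"] > 40000
def block():
    calls["block"] += 1

assert slow_lock(is_free, block) == "acquired"
assert calls["block"] == 1
```

A larger threshold means fewer halt exits (better for undercommit), at the cost of more wasted spinning when overcommitted, which is the trade-off the note above refers to.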
The older series was tested by Attilio for Xen implementation [1].
Jeremy Fitzhardinge (9):
x86/spinlock: Replace pv spinlocks with pv ticketlocks
x86/ticketlock: Collapse a layer of functions
xen: Defer spinlock setup until boot CPU setup
xen/pvticketlock: Xen implementation for PV ticket locks
xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks
x86/pvticketlock: Use callee-save for lock_spinning
x86/pvticketlock: When paravirtualizing ticket locks, increment by 2
x86/ticketlock: Add slowpath logic
xen/pvticketlock: Allow interrupts to be enabled while blocking
Andrew Jones (1):
Split jumplabel ratelimit
Stefano Stabellini (1):
xen: Enable PV ticketlocks on HVM Xen
Srivatsa Vaddagiri (3):
kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks
kvm guest : Add configuration support to enable debug information for KVM Guests
kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor
Raghavendra K T (5):
x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks
kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration
Simplify kvm_for_each_vcpu with kvm_irq_delivery_to_apic
Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
Add directed yield in vcpu block path
---
Link in V8 has links to previous patch series and also whole history.
V8 PV Ticketspinlock for Xen/KVM link:
[1] https://lkml.org/lkml/2012/5/2/119
Documentation/virtual/kvm/cpuid.txt | 4 +
Documentation/virtual/kvm/hypercalls.txt | 13 ++
arch/ia64/include/asm/kvm_host.h | 5 +
arch/powerpc/include/asm/kvm_host.h | 5 +
arch/s390/include/asm/kvm_host.h | 5 +
arch/x86/Kconfig | 10 +
arch/x86/include/asm/kvm_host.h | 7 +-
arch/x86/include/asm/kvm_para.h | 14 +-
arch/x86/include/asm/paravirt.h | 32 +--
arch/x86/include/asm/paravirt_types.h | 10 +-
arch/x86/include/asm/spinlock.h | 128 +++++++----
arch/x86/include/asm/spinlock_types.h | 16 +-
arch/x86/include/uapi/asm/kvm_para.h | 1 +
arch/x86/kernel/kvm.c | 256 +++++++++++++++++++++
arch/x86/kernel/paravirt-spinlocks.c | 18 +-
arch/x86/kvm/cpuid.c | 3 +-
arch/x86/kvm/lapic.c | 5 +-
arch/x86/kvm/x86.c | 39 +++-
arch/x86/xen/smp.c | 3 +-
arch/x86/xen/spinlock.c | 384 ++++++++++---------------------
include/linux/jump_label.h | 26 +--
include/linux/jump_label_ratelimit.h | 34 +++
include/linux/kvm_host.h | 2 +-
include/linux/perf_event.h | 1 +
include/uapi/linux/kvm_para.h | 1 +
kernel/jump_label.c | 1 +
virt/kvm/kvm_main.c | 6 +-
27 files changed, 645 insertions(+), 384 deletions(-)
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-01 8:21 ` Raghavendra K T
(?)
@ 2013-06-01 19:21 ` Raghavendra KT
-1 siblings, 0 replies; 96+ messages in thread
From: Raghavendra KT @ 2013-06-01 19:21 UTC (permalink / raw)
To: Raghavendra K T
Cc: Jeremy Fitzhardinge, gregkh, linux-doc, peterz, drjones,
virtualization, andi, hpa, stefano.stabellini, xen-devel, kvm,
x86, mingo, habanero, Rik van Riel, konrad.wilk, ouyang,
avi.kivity, Thomas Gleixner, chegu_vinod,
Linux Kernel Mailing List, Srivatsa Vaddagiri, attilio.rao,
pbonzini, torvalds, stephan.diestelhorst
Sorry! Please ignore this thread. My sendmail script aborted in between and
resending whole series.
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-01 8:21 ` Raghavendra K T
` (2 preceding siblings ...)
(?)
@ 2013-06-01 20:14 ` Andi Kleen
2013-06-01 20:28 ` Jeremy Fitzhardinge
` (3 more replies)
-1 siblings, 4 replies; 96+ messages in thread
From: Andi Kleen @ 2013-06-01 20:14 UTC (permalink / raw)
To: Raghavendra K T
Cc: gleb, mingo, jeremy, x86, konrad.wilk, hpa, pbonzini, linux-doc,
habanero, xen-devel, peterz, mtosatti, stefano.stabellini, andi,
attilio.rao, ouyang, gregkh, agraf, chegu_vinod, torvalds,
avi.kivity, tglx, kvm, linux-kernel, stephan.diestelhorst, riel,
drjones, virtualization, srivatsa.vaddagiri
FWIW I use the paravirt spinlock ops for adding lock elision
to the spinlocks.
This needs to be done at the top level (so the level you're removing)
However I don't like the pv mechanism very much and would
be fine with using an static key hook in the main path
like I do for all the other lock types.
It also uses interrupt ops patching, for that it would
be still needed though.
-Andi
--
ak@linux.intel.com -- Speaking for myself only.
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-01 20:14 ` Andi Kleen
@ 2013-06-01 20:28 ` Jeremy Fitzhardinge
2013-06-01 20:28 ` Jeremy Fitzhardinge
` (2 subsequent siblings)
3 siblings, 0 replies; 96+ messages in thread
From: Jeremy Fitzhardinge @ 2013-06-01 20:28 UTC (permalink / raw)
To: Andi Kleen
Cc: Raghavendra K T, gleb, mingo, x86, konrad.wilk, hpa, pbonzini,
linux-doc, habanero, xen-devel, peterz, mtosatti,
stefano.stabellini, attilio.rao, ouyang, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, drjones, virtualization,
srivatsa.vaddagiri
On 06/01/2013 01:14 PM, Andi Kleen wrote:
> FWIW I use the paravirt spinlock ops for adding lock elision
> to the spinlocks.
Does lock elision still use the ticketlock algorithm/structure, or are
they different? If they're still basically ticketlocks, then it seems
to me that they're complementary - hle handles the fastpath, and pv the
slowpath.
> This needs to be done at the top level (so the level you're removing)
>
> However I don't like the pv mechanism very much and would
> be fine with using an static key hook in the main path
> like I do for all the other lock types.
Right.
J
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-01 20:28 ` Jeremy Fitzhardinge
@ 2013-06-01 20:46 ` Andi Kleen
-1 siblings, 0 replies; 96+ messages in thread
From: Andi Kleen @ 2013-06-01 20:46 UTC (permalink / raw)
To: Jeremy Fitzhardinge
Cc: Andi Kleen, Raghavendra K T, gleb, mingo, x86, konrad.wilk, hpa,
pbonzini, linux-doc, habanero, xen-devel, peterz, mtosatti,
stefano.stabellini, attilio.rao, ouyang, gregkh, agraf,
chegu_vinod, torvalds, avi.kivity, tglx, kvm, linux-kernel,
stephan.diestelhorst, riel, drjones, virtualization,
srivatsa.vaddagiri
On Sat, Jun 01, 2013 at 01:28:00PM -0700, Jeremy Fitzhardinge wrote:
> On 06/01/2013 01:14 PM, Andi Kleen wrote:
> > FWIW I use the paravirt spinlock ops for adding lock elision
> > to the spinlocks.
>
> Does lock elision still use the ticketlock algorithm/structure, or are
> they different? If they're still basically ticketlocks, then it seems
> to me that they're complimentary - hle handles the fastpath, and pv the
> slowpath.
It uses the ticketlock algorithm/structure, but:
- it needs to know that the lock is free, via an operation of its own
- it has an additional field for strong adaptation state
(but that field is independent of the low level lock implementation,
so can be used with any kind of lock)
So currently it inlines the ticket lock code into its own.
Doing pv on the slow path would be possible, but would need
some additional (minor) hooks I think.
-Andi
--
ak@linux.intel.com -- Speaking for myself only.
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-01 20:14 ` Andi Kleen
2013-06-01 20:28 ` Jeremy Fitzhardinge
@ 2013-06-01 20:28 ` Jeremy Fitzhardinge
2013-06-04 10:58 ` Raghavendra K T
2013-06-04 10:58 ` Raghavendra K T
3 siblings, 0 replies; 96+ messages in thread
From: Jeremy Fitzhardinge @ 2013-06-01 20:28 UTC (permalink / raw)
To: Andi Kleen
Cc: x86, kvm, linux-doc, peterz, drjones, virtualization,
srivatsa.vaddagiri, hpa, stefano.stabellini, xen-devel,
Raghavendra K T, mingo, habanero, riel, konrad.wilk, ouyang,
avi.kivity, tglx, chegu_vinod, gregkh, linux-kernel, attilio.rao,
pbonzini, torvalds, stephan.diestelhorst
On 06/01/2013 01:14 PM, Andi Kleen wrote:
> FWIW I use the paravirt spinlock ops for adding lock elision
> to the spinlocks.
Does lock elision still use the ticketlock algorithm/structure, or are
they different? If they're still basically ticketlocks, then it seems
to me that they're complementary - HLE handles the fastpath, and pv the
slowpath.
> This needs to be done at the top level (so the level you're removing)
>
> However I don't like the pv mechanism very much and would
> be fine with using a static key hook in the main path
> like I do for all the other lock types.
Right.
J
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-01 20:14 ` Andi Kleen
2013-06-01 20:28 ` Jeremy Fitzhardinge
2013-06-01 20:28 ` Jeremy Fitzhardinge
@ 2013-06-04 10:58 ` Raghavendra K T
2013-06-04 10:58 ` Raghavendra K T
3 siblings, 0 replies; 96+ messages in thread
From: Raghavendra K T @ 2013-06-04 10:58 UTC (permalink / raw)
To: Andi Kleen
Cc: jeremy, gregkh, kvm, linux-doc, peterz, drjones, virtualization,
srivatsa.vaddagiri, hpa, stefano.stabellini, xen-devel, x86,
mingo, habanero, riel, konrad.wilk, ouyang, avi.kivity, tglx,
chegu_vinod, linux-kernel, attilio.rao, pbonzini, torvalds
On 06/02/2013 01:44 AM, Andi Kleen wrote:
>
> FWIW I use the paravirt spinlock ops for adding lock elision
> to the spinlocks.
>
> This needs to be done at the top level (so the level you're removing)
>
> However I don't like the pv mechanism very much and would
> be fine with using a static key hook in the main path
> like I do for all the other lock types.
>
> It also uses interrupt ops patching; for that it would
> still be needed, though.
>
Hi Andi, IIUC, you are okay with the current approach overall right?
^ permalink raw reply [flat|nested] 96+ messages in thread
* Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
2013-06-01 8:21 ` Raghavendra K T
` (3 preceding siblings ...)
(?)
@ 2013-06-01 20:14 ` Andi Kleen
-1 siblings, 0 replies; 96+ messages in thread
From: Andi Kleen @ 2013-06-01 20:14 UTC (permalink / raw)
To: Raghavendra K T
Cc: jeremy, gregkh, linux-doc, peterz, drjones, virtualization, andi,
hpa, stefano.stabellini, xen-devel, kvm, x86, mingo, habanero,
riel, konrad.wilk, ouyang, avi.kivity, tglx, chegu_vinod,
linux-kernel, srivatsa.vaddagiri, attilio.rao, pbonzini,
torvalds, stephan.diestelhorst
FWIW I use the paravirt spinlock ops for adding lock elision
to the spinlocks.
This needs to be done at the top level (so the level you're removing)
However I don't like the pv mechanism very much and would
be fine with using a static key hook in the main path
like I do for all the other lock types.
It also uses interrupt ops patching; for that it would
still be needed, though.
-Andi
--
ak@linux.intel.com -- Speaking for myself only.
^ permalink raw reply [flat|nested] 96+ messages in thread
end of thread, other threads:[~2013-07-11 11:14 UTC | newest]
Thread overview: 96+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-06-01 19:21 [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks Raghavendra K T
-- strict thread matches above, loose matches on Subject: below --
2013-06-01 19:21 Raghavendra K T
2013-06-01 19:21 ` Raghavendra K T
2013-06-02 8:07 ` Gleb Natapov
2013-06-02 8:07 ` Gleb Natapov
2013-06-02 16:20 ` Jiannan Ouyang
2013-06-02 16:20 ` Jiannan Ouyang
2013-06-03 1:40 ` Raghavendra K T
2013-06-03 1:40 ` Raghavendra K T
2013-06-03 6:21 ` Raghavendra K T
2013-06-07 6:15 ` Raghavendra K T
2013-06-07 6:15 ` Raghavendra K T
2013-06-07 13:29 ` Andrew Theurer
2013-06-07 13:29 ` Andrew Theurer
2013-06-07 23:41 ` Jiannan Ouyang
2013-06-07 23:41 ` Jiannan Ouyang
2013-06-07 23:41 ` Jiannan Ouyang
2013-06-03 6:21 ` Raghavendra K T
2013-06-02 16:20 ` Jiannan Ouyang
2013-06-25 14:50 ` Andrew Theurer
2013-06-26 8:45 ` Raghavendra K T
2013-06-26 8:45 ` Raghavendra K T
2013-06-26 11:37 ` Andrew Jones
2013-06-26 11:37 ` Andrew Jones
2013-06-26 12:52 ` Gleb Natapov
2013-06-26 12:52 ` Gleb Natapov
2013-06-26 13:40 ` Raghavendra K T
2013-06-26 13:40 ` Raghavendra K T
2013-06-26 14:39 ` Chegu Vinod
2013-06-26 15:37 ` Raghavendra K T
2013-06-26 15:37 ` Raghavendra K T
2013-06-26 16:11 ` Gleb Natapov
2013-06-26 16:11 ` Gleb Natapov
2013-06-26 17:54 ` Raghavendra K T
2013-07-09 9:11 ` Raghavendra K T
2013-07-09 9:11 ` Raghavendra K T
2013-07-10 10:33 ` Gleb Natapov
2013-07-10 10:33 ` Gleb Natapov
2013-07-10 10:40 ` Peter Zijlstra
2013-07-10 10:40 ` Peter Zijlstra
2013-07-10 10:47 ` Gleb Natapov
2013-07-10 10:47 ` Gleb Natapov
2013-07-10 11:28 ` Raghavendra K T
2013-07-10 11:28 ` Raghavendra K T
2013-07-10 11:29 ` Gleb Natapov
2013-07-10 11:29 ` Gleb Natapov
2013-07-10 11:40 ` Raghavendra K T
2013-07-10 11:40 ` Raghavendra K T
2013-07-10 15:03 ` Konrad Rzeszutek Wilk
2013-07-10 15:03 ` Konrad Rzeszutek Wilk
2013-07-10 15:16 ` Gleb Natapov
2013-07-10 15:16 ` Gleb Natapov
2013-07-11 0:12 ` Konrad Rzeszutek Wilk
2013-07-11 0:12 ` Konrad Rzeszutek Wilk
2013-07-10 11:24 ` Raghavendra K T
2013-07-10 11:24 ` Raghavendra K T
2013-07-10 11:41 ` Gleb Natapov
2013-07-10 11:41 ` Gleb Natapov
2013-07-10 11:50 ` Raghavendra K T
2013-07-10 11:50 ` Raghavendra K T
2013-07-11 9:13 ` Raghavendra K T
2013-07-11 9:13 ` Raghavendra K T
2013-07-11 9:48 ` Gleb Natapov
2013-07-11 9:48 ` Gleb Natapov
2013-07-11 10:10 ` Raghavendra K T
2013-07-11 10:11 ` Gleb Natapov
2013-07-11 10:11 ` Gleb Natapov
2013-07-11 10:53 ` Raghavendra K T
2013-07-11 10:53 ` Raghavendra K T
2013-07-11 10:56 ` Gleb Natapov
2013-07-11 10:56 ` Gleb Natapov
2013-07-11 11:14 ` Raghavendra K T
2013-07-11 11:14 ` Raghavendra K T
2013-07-11 10:10 ` Raghavendra K T
2013-06-26 17:54 ` Raghavendra K T
2013-06-26 14:13 ` Konrad Rzeszutek Wilk
2013-06-26 14:13 ` Konrad Rzeszutek Wilk
2013-06-26 15:56 ` Andrew Theurer
2013-06-26 15:56 ` Andrew Theurer
2013-07-01 9:30 ` Raghavendra K T
2013-07-01 9:30 ` Raghavendra K T
2013-06-25 14:50 ` Andrew Theurer
2013-06-01 8:21 Raghavendra K T
2013-06-01 8:21 Raghavendra K T
2013-06-01 8:21 ` Raghavendra K T
2013-06-01 19:21 ` Raghavendra KT
2013-06-01 19:21 ` Raghavendra KT
2013-06-01 20:14 ` Andi Kleen
2013-06-01 20:28 ` Jeremy Fitzhardinge
2013-06-01 20:28 ` Jeremy Fitzhardinge
2013-06-01 20:46 ` Andi Kleen
2013-06-01 20:46 ` Andi Kleen
2013-06-01 20:28 ` Jeremy Fitzhardinge
2013-06-04 10:58 ` Raghavendra K T
2013-06-04 10:58 ` Raghavendra K T
2013-06-01 20:14 ` Andi Kleen