[PATCH RFC 0/2] kvm: Improving directed yield in PLE handler
From: Raghavendra K T @ 2012-07-09  6:20 UTC
  To: H. Peter Anvin, Thomas Gleixner, Marcelo Tosatti, Ingo Molnar,
	Avi Kivity, Rik van Riel
  Cc: S390, Carsten Otte, Christian Borntraeger, KVM, Raghavendra K T,
	chegu vinod, Andrew M. Theurer, LKML, X86, Gleb Natapov,
	linux390, Srivatsa Vaddagiri, Joerg Roedel


Currently the Pause Loop Exit (PLE) handler does a directed yield to a
random VCPU on a PL exit. Though we already do some filtering while
choosing the yield_to candidate, we can do better.

The problem is that, for guests with many VCPUs, we have a higher
probability of yielding to a bad VCPU. We are not able to prevent a
directed yield to the same VCPU that has done a PL exit recently and
will perhaps just spin again and waste CPU.

Fix that by keeping track of which VCPUs have done a PL exit. The
algorithm in this series gives a chance to a VCPU which has (see the
sketch after the list):

 (a) not done a PLE exit at all (it is probably a preempted lock holder)

 (b) been skipped in the last iteration because it did a PL exit, and
 probably become eligible now (the next eligible lock holder)
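
To make the bookkeeping concrete, here is a minimal sketch of the
eligibility check described above; the struct and helper names
(ple_state, pause_loop_exited, dy_eligible,
vcpu_eligible_for_directed_yield) are placeholders, not necessarily
the exact identifiers used in the patches:

#include <stdbool.h>

/* Per-VCPU PLE bookkeeping (hypothetical names). */
struct ple_state {
	bool pause_loop_exited;	/* set on every pause loop exit */
	bool dy_eligible;	/* skipped last iteration, may hold lock now */
};

/*
 * A VCPU is a good yield_to target if it never PL-exited (case a),
 * or if it PL-exited but was already skipped once (case b).
 * Flipping dy_eligible means a spinning VCPU is skipped at most
 * every other iteration.
 */
static bool vcpu_eligible_for_directed_yield(struct ple_state *s)
{
	bool eligible = !s->pause_loop_exited || s->dy_eligible;

	if (s->pause_loop_exited)
		s->dy_eligible = !s->dy_eligible;

	return eligible;
}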

Future enhancements:
  (1) Currently we use a boolean to decide on the eligibility of a VCPU.
    It would be nice to get feedback on large guests (> 32 VCPUs) on
    whether we can improve further with an integer counter (with
    counter = say f(log n)); a rough sketch of this variant follows
    the list below.
  
  (2) We have not considered system load during the iteration over VCPUs.
   With that information we could limit the scan and also decide whether
   schedule() is better. [ I am able to use the number of kicked VCPUs to
   decide on this, but there may be better ideas, like using information
   from the global loadavg. ]

  (3) We can exploit this further with the PV patches, since they also
   know about the next eligible lock holder.
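
Purely as illustration of the counter idea in (1), here is a rough
sketch; the helper name, the skip-counter field, and the threshold
choice are assumptions, not part of this series:

#include <stdbool.h>

/*
 * Counter-based variant (sketch): instead of flipping a single
 * boolean, a VCPU that did a PL exit becomes eligible again only
 * after it has been skipped 'threshold' times, with
 * threshold = f(log n) for an n-VCPU guest.
 */
static bool counter_eligible(unsigned int *skip_count,
			     unsigned int threshold)
{
	if (*skip_count >= threshold) {
		*skip_count = 0;	/* eligible again; reset */
		return true;
	}
	(*skip_count)++;		/* skipped once more */
	return false;
}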

Summary: There is a huge improvement in the moderate / no-overcommit
 scenario for a KVM-based guest on a PLE machine (which is the difficult
 case ;) ).

Result:
Base : kernel 3.5.0-rc5 with Rik's PLE handler fix

Machine : Intel(R) Xeon(R) CPU X7560  @ 2.27GHz, 4 numa node, 256GB RAM,
          32 core machine

Host: enterprise Linux, gcc version 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC),
  with the test kernels

Guest: Fedora 16 with 32 VCPUs, 8GB memory.

Benchmarks:
1) kernbench: kernbench-0.5 (kernbench -f -H -M -o 2*vcpu, i.e. -o 64 here)
The very first kernbench run is omitted.

2) sysbench: 0.4.12
sysbench --test=oltp --db-driver=pgsql prepare
sysbench --num-threads=2*vcpu --max-requests=100000 --test=oltp --oltp-table-size=500000 --db-driver=pgsql --oltp-read-only run
Note that the database driver used for this is pgsql.

3) ebizzy: release 0.3
cmd: ebizzy -S 120 

              1) kernbench (time in sec, lower is better)
+----+-----------+-----------+-----------+-----------+-----------+
|    | base_rik  |   stdev   |  patched  |   stdev   |  %improve |
+----+-----------+-----------+-----------+-----------+-----------+
| 1x |   49.2300 |    1.0171 |   38.3792 |    1.3659 | 28.27261% |
| 2x |   91.9358 |    1.7768 |   85.8842 |    1.6654 |  7.04623% |
+----+-----------+-----------+-----------+-----------+-----------+

              2) sysbench (time in sec, lower is better)
+----+-----------+-----------+-----------+-----------+-----------+
|    | base_rik  |   stdev   |  patched  |   stdev   |  %improve |
+----+-----------+-----------+-----------+-----------+-----------+
| 1x |   12.1623 |    0.0942 |   12.1674 |    0.3126 | -0.04192% |
| 2x |   14.3069 |    0.8520 |   14.1879 |    0.6811 |  0.83874% |
+----+-----------+-----------+-----------+-----------+-----------+

Note that the 1x results differ only in the third decimal place, and no
degradation/improvement for sysbench would be seen even with a higher
confidence interval.


              3) ebizzy (records/sec, higher is better)
+----+-----------+-----------+-----------+-----------+------------+
|    | base_rik  |   stdev   |  patched  |   stdev   |  %improve  |
+----+-----------+-----------+-----------+-----------+------------+
| 1x | 1129.2500 |   28.6793 | 2316.6250 |   53.0066 | 105.14722% |
| 2x | 1892.3750 |   75.1112 | 2386.5000 |  168.8033 |  26.11137% |
+----+-----------+-----------+-----------+-----------+------------+
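
For reference, the %improve column appears to be computed as
100 * (base_rik - patched) / patched for the time-based benchmarks
(kernbench, sysbench), and as 100 * (patched - base_rik) / base_rik
for ebizzy; e.g. kernbench 1x:
100 * (49.2300 - 38.3792) / 38.3792 = 28.27%.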

kernbench 1x: 4 fast runs = average of 12 runs
kernbench 2x: 4 fast runs = average of 12 runs

sysbench 1x: average of 8 runs
sysbench 2x: average of 8 runs

ebizzy 1x: average of 8 runs
ebizzy 2x: average of 8 runs

Thanks to Vatsa and Srikar for the brainstorming discussions regarding
optimizations.

 Raghavendra K T (2):
   kvm vcpu: Note down pause loop exit
   kvm PLE handler: Choose better candidate for directed yield

 arch/s390/include/asm/kvm_host.h |    5 +++++
 arch/x86/include/asm/kvm_host.h  |    9 ++++++++-
 arch/x86/kvm/svm.c               |    1 +
 arch/x86/kvm/vmx.c               |    1 +
 arch/x86/kvm/x86.c               |   18 +++++++++++++++++-
 virt/kvm/kvm_main.c              |    3 +++
 6 files changed, 35 insertions(+), 2 deletions(-)


