From: Jason Wang <jasowang@redhat.com>
To: zamsden@redhat.com, glommer@redhat.com, mtosatti@redhat.com,
	avi@redhat.com, kvm@vger.kernel.org
Subject: [PATCH kvm-unit-tests v2 0/8] Tests for kvmclock
Date: Tue, 31 Aug 2010 16:36:41 +0800
Message-ID: <20100831083216.10672.20413.stgit@FreeLancer>

The following series implements three tests for kvmclock:

Wall clock test: checks whether kvmclock returns the correct time of
day (TOD). The correct TOD is passed to the test, which compares it
with the value returned by kvmclock.
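
A minimal sketch of what such a check might look like (TOLERANCE and
kvm_get_wallclock() are illustrative assumptions, not necessarily the
code in this series; atol() is the helper introduced in patch 6):

    /* Sketch: compare the TOD supplied by the tester against the
     * TOD reported by the kvmclock driver. */
    long expected = atol(argv[1]);      /* correct TOD from the tester */
    long actual = kvm_get_wallclock();  /* TOD reported by kvmclock    */

    if (labs(actual - expected) > TOLERANCE)
        printf("Wall clock test FAILED: got %ld, expected %ld\n",
               actual, expected);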

Monotonic cycle test: checks whether kvmclock cycle readings grow
monotonically across vcpus in SMP guests.
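
The idea behind such a check is sketched below: every vcpu repeatedly
compares a fresh kvmclock reading against the last value any vcpu
published, flagging a warp when the new reading is behind it. This is
only an illustration built on the atomic helpers from patch 4;
record_warp() and record_stall() are hypothetical bookkeeping helpers:

    /* Sketch: a negative delta against the last published reading
     * is a "warp", a zero delta is a "stall". */
    static atomic64_t last;

    static void check_one_iteration(void)
    {
        long long t = kvm_clock_read();        /* assumed driver entry */
        long long prev = atomic64_read(&last);
        long long delta = t - prev;

        if (delta < 0)
            record_warp(delta);
        else if (delta == 0)
            record_stall();

        /* Publish our reading; if another vcpu got in first, its
         * value simply stays and we compare against it next time. */
        atomic64_cmpxchg(&last, prev, t);
    }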

Performance test: measures the cost (in guest TSC cycles) of the
kvmclock driver, both with and without the guest-side adjustment.
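
The measurement itself can be as simple as the following sketch, where
rdtsc() stands for the TSC helper exported in patch 5 and
kvm_clock_read() for either the raw or the adjusted read path (names
assumed for illustration):

    /* Sketch: cost of 'loops' kvmclock reads, in guest TSC cycles. */
    unsigned long long t0, t1;
    int i;

    t0 = rdtsc();
    for (i = 0; i < loops; i++)
        kvm_clock_read();
    t1 = rdtsc();

    printf("TSC cycles: %llu\n", t1 - t0);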

Changes since V1:

- Raise the maximum number of cpus to 64 so the tests can run on
  larger systems.
- Make the per-cpu hv_clock structure 4-byte aligned.
- Fix the clobber list bug in atomic64_cmpxchg() (see the sketch
  after this list).
- Add a pure performance test based on guest TSC cycles.
- Record the number of stalls during the monotonic cycle test.
- Introduce some new type definitions.
- Fix a bug in the wall clock calculation and test.
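
For reference, a 32-bit atomic64_cmpxchg() is built around cmpxchg8b,
which compares edx:eax with the memory operand and, on a match, stores
ecx:ebx into it. A minimal sketch of a correct constraint/clobber
setup follows; this is an illustration of the technique, not
necessarily the exact code in patch 4:

    /* Sketch: 64-bit compare-and-swap on 32-bit x86. "+A" ties
     * 'old' to the edx:eax pair, "b"/"c" load the new value into
     * ebx/ecx, and "memory" keeps the compiler from caching the
     * counter across the instruction. */
    static inline long long
    atomic64_cmpxchg(atomic64_t *v, long long old, long long new)
    {
        asm volatile("lock cmpxchg8b %1"
                     : "+A" (old), "+m" (v->counter)
                     : "b" ((unsigned int)new),
                       "c" ((unsigned int)(new >> 32))
                     : "memory");
        return old;
    }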

Here is a test result I got from a guest with 64 vcpus on a 64-core
Intel X7550:

The test command is:

    qemu-system-x86_64 -device testdev,chardev=testlog \
        -chardev file,id=testlog,path=msr.out \
        -kernel ./x86/kvmclock_test.flat -smp 64 --append 100000

The value passed via --append (100000) is the per-vcpu loop count,
which shows up as "Test  loops: 100000" in the output below.

For 64-bit guests:
The guest cycles spent getting raw cycles are about 21543417.
The guest cycles spent getting adjusted cycles are about 1067264988.

This means it takes about 50x more time (1067264988 / 21543417 ≈ 49.5)
to correct the cycles than to just return the cycles supplied by the
hypervisor.

For 32-bit guests:
The guest cycles spent getting raw cycles are about 26916562.
The guest cycles spent getting adjusted cycles are about 2020119174.

This means it takes about 75x more time (2020119174 / 26916562 ≈ 75.1)
to correct the cycles than to just return the cycles supplied by the
hypervisor.

And for 4-core guests, the correction only takes about 10x-15x more
time.

Notes:
1. The performance test just reads cycles and does not check anything
   else.
2. I have run the test several times; the results are similar.
3. The cycles were measured from inside the guest, so they may not be
   entirely accurate.

Here is a sample test result for a 64-bit guest with 64 vcpus:

......
Check the stability of raw cycle ...
Worst warp -10 
Worst warp -6537 
Worst warp -11170 
Worst warp -11370 
Worst warp -11372 
Worst warp -31754 
Worst warp -31876 
Worst warp -31904 
Worst warp -36771 
Worst warp -37382 
Worst warp -37688 
Worst warp -37707 
Total vcpus: 64
Test  loops: 100000
Total warps:  53811
Total stalls: 408
Worst warp:   -37707
Raw cycle is not stable
Monotonic cycle test:
Total vcpus: 64
Test  loops: 100000
Total warps:  0
Total stalls: 214
Worst warp:   0
Measure the performance of raw cycle ...
Total vcpus: 64
Test  loops: 100000
TSC cycles:  21543417
Measure the performance of adjusted cycle ...
Total vcpus: 64
Test  loops: 100000
TSC cycles:  1067264988

---

Jason Wang (8):
      Introduce some type definitions.
      Increase max_cpu to 64
      Introduce memory barriers.
      Introduce atomic operations
      Export tsc related helpers
      Introduce atol()
      Add a simple kvmclock driver
      Add tests for kvm-clock


 config-x86-common.mak |    7 +
 lib/libcflat.h        |    5 +
 lib/string.c          |   31 +++++
 lib/x86/atomic.c      |   37 ++++++
 lib/x86/atomic.h      |  164 +++++++++++++++++++++++++++++
 lib/x86/processor.h   |   22 ++++
 lib/x86/smp.h         |    4 +
 x86/README            |    1 
 x86/cstart.S          |    2 
 x86/cstart64.S        |    2 
 x86/kvmclock.c        |  279 +++++++++++++++++++++++++++++++++++++++++++++++++
 x86/kvmclock.h        |   60 +++++++++++
 x86/kvmclock_test.c   |  166 +++++++++++++++++++++++++++++
 x86/tsc.c             |   16 ---
 x86/unittests.cfg     |    5 +
 x86/vmexit.c          |   15 ---
 16 files changed, 783 insertions(+), 33 deletions(-)
 create mode 100644 lib/x86/atomic.c
 create mode 100644 lib/x86/atomic.h
 create mode 100644 x86/kvmclock.c
 create mode 100644 x86/kvmclock.h
 create mode 100644 x86/kvmclock_test.c

-- 
Jason Wang
