kvm.vger.kernel.org archive mirror
* [PATCH kvm-unit-tests v2 0/8] Tests for kvmclock
@ 2010-08-31  8:36 Jason Wang
From: Jason Wang @ 2010-08-31  8:36 UTC (permalink / raw)
  To: zamsden, glommer, mtosatti, avi, kvm

The following series implements three tests for kvmclock:

Wall clock test: check whether kvmclock returns a correct time of day
(TOD). The correct value is passed to the test, which compares it with
the value returned by kvmclock.

Monotonic cycle test: check whether the cycle count of kvmclock grows
monotonically in SMP guests.

Performance test: measure the performance (in guest TSC cycles) of the
kvmclock driver, both with and without the guest-side adjustment.

Changes since V1:

- Raise max_cpus to 64 in order to test on larger systems.
- Make the per-cpu hv_clock structure 4-byte aligned.
- Fix a bug in the clobber list of atomic64_cmpxchg().
- Add a pure performance test based on guest TSC cycles.
- Record the number of stalls during the monotonic cycle test.
- Introduce some new type definitions.
- Fix a bug in the wallclock calculation and test.

Here is a test result I got from a guest with 64 vcpus on a 64-core
Intel X7550:

The test command is: qemu-system-x86_64 -device testdev,chardev=testlog
-chardev file,id=testlog,path=msr.out -kernel ./x86/kvmclock_test.flat
-smp 64 --append 100000

For 64-bit guests:
The guest cycles spent on getting raw cycles are about 21543417.
The guest cycles spent on getting adjusted cycles are 1067264988.

This means it takes about 50x more time to correct the cycles than to
just return the cycles supplied by the hypervisor.

For 32-bit guests:
The guest cycles spent on getting raw cycles are about 26916562.
The guest cycles spent on getting adjusted cycles are 2020119174.

This means it takes about 75x more time to correct the cycles than to
just return the cycles supplied by the hypervisor.

For 4-core guests, the correction only takes about 10x-15x more
time.
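The slowdown figures above follow directly from the measured cycle counts; a minimal sketch of the arithmetic (the constants are the measurements quoted above, the helper name is just for illustration):

```c
#include <assert.h>

/* Integer slowdown factor of adjusted-cycle reads vs. raw reads. */
static long slowdown(long long adjusted, long long raw)
{
    return (long)(adjusted / raw);
}
```

For the 64-bit guest this gives 1067264988 / 21543417, roughly 50x; for the 32-bit guest, 2020119174 / 26916562, roughly 75x.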

Notes:
1. The performance test just reads cycles and does not check anything else.
2. I've run the test several times; the results were similar.
3. The cycles were measured from the guest, so they may not be perfectly accurate.

Here is a sample test result of a 64-bit guest with 64 vcpus:

......
Check the stability of raw cycle ...
Worst warp -10 
Worst warp -6537 
Worst warp -11170 
Worst warp -11370 
Worst warp -11372 
Worst warp -31754 
Worst warp -31876 
Worst warp -31904 
Worst warp -36771 
Worst warp -37382 
Worst warp -37688 
Worst warp -37707 
Total vcpus: 64
Test  loops: 100000
Total warps:  53811
Total stalls: 408
Worst warp:   -37707
Raw cycle is not stable
Monotonic cycle test:
Total vcpus: 64
Test  loops: 100000
Total warps:  0
Total stalls: 214
Worst warp:   0
Measure the performance of raw cycle ...
Total vcpus: 64
Test  loops: 100000
TSC cycles:  21543417
Measure the performance of adjusted cycle ...
Total vcpus: 64
Test  loops: 100000
TSC cycles:  1067264988

---

Jason Wang (8):
      Introduce some type definitions.
      Increase max_cpu to 64
      Introduce memory barriers.
      Introduce atomic operations
      Export tsc related helpers
      Introduce atol()
      Add a simple kvmclock driver
      Add tests for kvm-clock


 config-x86-common.mak |    7 +
 lib/libcflat.h        |    5 +
 lib/string.c          |   31 +++++
 lib/x86/atomic.c      |   37 ++++++
 lib/x86/atomic.h      |  164 +++++++++++++++++++++++++++++
 lib/x86/processor.h   |   22 ++++
 lib/x86/smp.h         |    4 +
 x86/README            |    1 
 x86/cstart.S          |    2 
 x86/cstart64.S        |    2 
 x86/kvmclock.c        |  279 +++++++++++++++++++++++++++++++++++++++++++++++++
 x86/kvmclock.h        |   60 +++++++++++
 x86/kvmclock_test.c   |  166 +++++++++++++++++++++++++++++
 x86/tsc.c             |   16 ---
 x86/unittests.cfg     |    5 +
 x86/vmexit.c          |   15 ---
 16 files changed, 783 insertions(+), 33 deletions(-)
 create mode 100644 lib/x86/atomic.c
 create mode 100644 lib/x86/atomic.h
 create mode 100644 x86/kvmclock.c
 create mode 100644 x86/kvmclock.h
 create mode 100644 x86/kvmclock_test.c

-- 
Jason Wang


* [PATCH kvm-unit-tests v2 1/8] Introduce some type definitions.
From: Jason Wang @ 2010-08-31  8:36 UTC (permalink / raw)
  To: zamsden, glommer, mtosatti, avi, kvm

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 lib/libcflat.h |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/lib/libcflat.h b/lib/libcflat.h
index d0d3df2..9cb90cf 100644
--- a/lib/libcflat.h
+++ b/lib/libcflat.h
@@ -23,10 +23,14 @@
 #include <stdarg.h>
 
 typedef unsigned char u8;
+typedef signed char s8;
 typedef unsigned short u16;
+typedef signed short s16;
 typedef unsigned u32;
+typedef signed s32;
 typedef unsigned long ulong;
 typedef unsigned long long u64;
+typedef signed long long s64;
 typedef unsigned long size_t;
 typedef _Bool bool;
 



* [PATCH kvm-unit-tests v2 2/8] Increase max_cpu to 64
From: Jason Wang @ 2010-08-31  8:36 UTC (permalink / raw)
  To: zamsden, glommer, mtosatti, avi, kvm

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 x86/cstart.S   |    2 +-
 x86/cstart64.S |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/x86/cstart.S b/x86/cstart.S
index ae1fdbb..bc8d563 100644
--- a/x86/cstart.S
+++ b/x86/cstart.S
@@ -6,7 +6,7 @@ boot_idt = 0
 
 ipi_vector = 0x20
 
-max_cpus = 4
+max_cpus = 64
 
 .bss
 
diff --git a/x86/cstart64.S b/x86/cstart64.S
index ef09d82..71014d8 100644
--- a/x86/cstart64.S
+++ b/x86/cstart64.S
@@ -6,7 +6,7 @@ boot_idt = 0
 
 ipi_vector = 0x20
 
-max_cpus = 4
+max_cpus = 64
 
 .bss
 



* [PATCH kvm-unit-tests v2 3/8] Introduce memory barriers.
From: Jason Wang @ 2010-08-31  8:37 UTC (permalink / raw)
  To: zamsden, glommer, mtosatti, avi, kvm

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 lib/x86/smp.h |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/lib/x86/smp.h b/lib/x86/smp.h
index c2e7350..df5fdba 100644
--- a/lib/x86/smp.h
+++ b/lib/x86/smp.h
@@ -1,6 +1,10 @@
 #ifndef __SMP_H
 #define __SMP_H
 
+#define mb() 	asm volatile("mfence":::"memory")
+#define rmb()	asm volatile("lfence":::"memory")
+#define wmb()	asm volatile("sfence" ::: "memory")
+
 struct spinlock {
     int v;
 };
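The intended use of these barriers is the usual publish/consume pattern: order a data store before the flag store on the writer, and the flag load before the data load on the reader. A minimal sketch, with `__sync_synchronize()` standing in for the x86 fences so it stays portable (the helper names are illustrative, not part of the patch):

```c
#include <assert.h>

/* Portable stand-ins for the mfence/lfence/sfence macros in the patch. */
#define wmb() __sync_synchronize()
#define rmb() __sync_synchronize()

static int data, ready;

static void publish(int v)
{
    data = v;
    wmb();              /* order the data store before the flag store */
    ready = 1;
}

static int consume(void)
{
    while (!ready)
        ;               /* spin until the writer publishes */
    rmb();              /* order the flag load before the data load */
    return data;
}
```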



* [PATCH kvm-unit-tests v2 4/8] Introduce atomic operations
From: Jason Wang @ 2010-08-31  8:37 UTC (permalink / raw)
  To: zamsden, glommer, mtosatti, avi, kvm

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 config-x86-common.mak |    1 
 lib/x86/atomic.c      |   37 +++++++++++
 lib/x86/atomic.h      |  164 +++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 202 insertions(+), 0 deletions(-)
 create mode 100644 lib/x86/atomic.c
 create mode 100644 lib/x86/atomic.h

diff --git a/config-x86-common.mak b/config-x86-common.mak
index fdb9e3e..b8ca859 100644
--- a/config-x86-common.mak
+++ b/config-x86-common.mak
@@ -10,6 +10,7 @@ cflatobjs += \
 
 cflatobjs += lib/x86/fwcfg.o
 cflatobjs += lib/x86/apic.o
+cflatobjs += lib/x86/atomic.o
 
 $(libcflat): LDFLAGS += -nostdlib
 $(libcflat): CFLAGS += -ffreestanding -I lib
diff --git a/lib/x86/atomic.c b/lib/x86/atomic.c
new file mode 100644
index 0000000..da74ff2
--- /dev/null
+++ b/lib/x86/atomic.c
@@ -0,0 +1,37 @@
+#include <libcflat.h>
+#include "atomic.h"
+
+#ifdef __i386__
+
+u64 atomic64_cmpxchg(atomic64_t *v, u64 old, u64 new)
+{
+        u32 low = new;
+        u32 high = new >> 32;
+
+        asm volatile("lock cmpxchg8b %1\n"
+                     : "+A" (old),
+                       "+m" (*(volatile long long *)&v->counter)
+                     : "b" (low), "c" (high)
+                     : "memory"
+                     );
+
+        return old;
+}
+
+#else
+
+u64 atomic64_cmpxchg(atomic64_t *v, u64 old, u64 new)
+{
+        u64 ret;
+        u64 _old = old;
+        u64 _new = new;
+
+        asm volatile("lock cmpxchgq %2,%1"
+                     : "=a" (ret), "+m" (*(volatile long *)&v->counter)
+                     : "r" (_new), "0" (_old)
+                     : "memory"
+                     );
+        return ret;
+}
+
+#endif
diff --git a/lib/x86/atomic.h b/lib/x86/atomic.h
new file mode 100644
index 0000000..de2f033
--- /dev/null
+++ b/lib/x86/atomic.h
@@ -0,0 +1,164 @@
+#ifndef __ATOMIC_H
+#define __ATOMIC_H
+
+typedef struct {
+	volatile int counter;
+} atomic_t;
+
+#ifdef __i386__
+
+/**
+ * atomic_read - read atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically reads the value of @v.
+ */
+static inline int atomic_read(const atomic_t *v)
+{
+	return v->counter;
+}
+
+/**
+ * atomic_set - set atomic variable
+ * @v: pointer of type atomic_t
+ * @i: required value
+ *
+ * Atomically sets the value of @v to @i.
+ */
+static inline void atomic_set(atomic_t *v, int i)
+{
+	v->counter = i;
+}
+
+/**
+ * atomic_inc - increment atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1.
+ */
+static inline void atomic_inc(atomic_t *v)
+{
+	asm volatile("lock incl %0"
+		     : "+m" (v->counter));
+}
+
+/**
+ * atomic_dec - decrement atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1.
+ */
+static inline void atomic_dec(atomic_t *v)
+{
+	asm volatile("lock decl %0"
+		     : "+m" (v->counter));
+}
+
+typedef struct {
+	u64 __attribute__((aligned(8))) counter;
+} atomic64_t;
+
+#define ATOMIC64_INIT(val)	{ (val) }
+
+/**
+ * atomic64_read - read atomic64 variable
+ * @ptr:      pointer to type atomic64_t
+ *
+ * Atomically reads the value of @ptr and returns it.
+ */
+static inline u64 atomic64_read(atomic64_t *ptr)
+{
+	u64 res;
+
+	/*
+	 * Note, we inline this atomic64_t primitive because
+	 * it only clobbers EAX/EDX and leaves the others
+	 * untouched. We also (somewhat subtly) rely on the
+	 * fact that cmpxchg8b returns the current 64-bit value
+	 * of the memory location we are touching:
+	 */
+	asm volatile("mov %%ebx, %%eax\n\t"
+                     "mov %%ecx, %%edx\n\t"
+                     "lock cmpxchg8b %1\n"
+                     : "=&A" (res)
+                     : "m" (*ptr)
+                     );
+	return res;
+}
+
+u64 atomic64_cmpxchg(atomic64_t *v, u64 old, u64 new);
+
+#elif defined(__x86_64__)
+
+/**
+ * atomic_read - read atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically reads the value of @v.
+ */
+static inline int atomic_read(const atomic_t *v)
+{
+	return v->counter;
+}
+
+/**
+ * atomic_set - set atomic variable
+ * @v: pointer of type atomic_t
+ * @i: required value
+ *
+ * Atomically sets the value of @v to @i.
+ */
+static inline void atomic_set(atomic_t *v, int i)
+{
+	v->counter = i;
+}
+
+/**
+ * atomic_inc - increment atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1.
+ */
+static inline void atomic_inc(atomic_t *v)
+{
+	asm volatile("lock incl %0"
+		     : "=m" (v->counter)
+		     : "m" (v->counter));
+}
+
+/**
+ * atomic_dec - decrement atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1.
+ */
+static inline void atomic_dec(atomic_t *v)
+{
+	asm volatile("lock decl %0"
+		     : "=m" (v->counter)
+		     : "m" (v->counter));
+}
+
+typedef struct {
+	long long counter;
+} atomic64_t;
+
+#define ATOMIC64_INIT(i)	{ (i) }
+
+/**
+ * atomic64_read - read atomic64 variable
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically reads the value of @v.
+ * Doesn't imply a read memory barrier.
+ */
+static inline long atomic64_read(const atomic64_t *v)
+{
+	return v->counter;
+}
+
+u64 atomic64_cmpxchg(atomic64_t *v, u64 old, u64 new);
+
+#endif
+
+#endif
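The i386 atomic64_read above relies on a subtlety of cmpxchg8b: it always returns the current 64-bit value of the memory location, whether or not the compare succeeds. The same trick can be sketched with a compiler intrinsic, comparing and "swapping" with values that never change memory (the helper name is hypothetical):

```c
#include <assert.h>

typedef unsigned long long u64;

/* Atomically read a 64-bit value by issuing a compare-and-swap that
 * cannot modify memory (old == new == 0): the intrinsic returns the
 * value currently stored, which is exactly what we want. */
static u64 atomic64_read_sketch(u64 *p)
{
    return __sync_val_compare_and_swap(p, 0ULL, 0ULL);
}
```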



* [PATCH kvm-unit-tests v2 5/8] Export tsc related helpers
From: Jason Wang @ 2010-08-31  8:37 UTC (permalink / raw)
  To: zamsden, glommer, mtosatti, avi, kvm

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 lib/x86/processor.h |   22 ++++++++++++++++++++++
 x86/tsc.c           |   16 +---------------
 x86/vmexit.c        |   15 ---------------
 3 files changed, 23 insertions(+), 30 deletions(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 67d7ca5..c348808 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -258,4 +258,26 @@ static inline void sti(void)
     asm volatile ("sti");
 }
 
+static inline unsigned long long rdtsc()
+{
+	long long r;
+
+#ifdef __x86_64__
+	unsigned a, d;
+
+	asm volatile ("rdtsc" : "=a"(a), "=d"(d));
+	r = a | ((long long)d << 32);
+#else
+	asm volatile ("rdtsc" : "=A"(r));
+#endif
+	return r;
+}
+
+static inline void wrtsc(u64 tsc)
+{
+	unsigned a = tsc, d = tsc >> 32;
+
+	asm volatile("wrmsr" : : "a"(a), "d"(d), "c"(0x10));
+}
+
 #endif
diff --git a/x86/tsc.c b/x86/tsc.c
index 394a9c6..58f332d 100644
--- a/x86/tsc.c
+++ b/x86/tsc.c
@@ -1,19 +1,5 @@
 #include "libcflat.h"
-
-u64 rdtsc(void)
-{
-	unsigned a, d;
-
-	asm volatile("rdtsc" : "=a"(a), "=d"(d));
-	return a | (u64)d << 32;
-}
-
-void wrtsc(u64 tsc)
-{
-	unsigned a = tsc, d = tsc >> 32;
-
-	asm volatile("wrmsr" : : "a"(a), "d"(d), "c"(0x10));
-}
+#include "processor.h"
 
 void test_wrtsc(u64 t1)
 {
diff --git a/x86/vmexit.c b/x86/vmexit.c
index 84c384d..551083d 100644
--- a/x86/vmexit.c
+++ b/x86/vmexit.c
@@ -3,21 +3,6 @@
 #include "smp.h"
 #include "processor.h"
 
-static inline unsigned long long rdtsc()
-{
-	long long r;
-
-#ifdef __x86_64__
-	unsigned a, d;
-
-	asm volatile ("rdtsc" : "=a"(a), "=d"(d));
-	r = a | ((long long)d << 32);
-#else
-	asm volatile ("rdtsc" : "=A"(r));
-#endif
-	return r;
-}
-
 static unsigned int inl(unsigned short port)
 {
     unsigned int val;
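On 64-bit, the exported rdtsc() reads the counter as two 32-bit halves (EAX low, EDX high) and composes them; the composition itself is just a shift-and-or, sketched here with a hypothetical helper:

```c
#include <assert.h>

typedef unsigned long long u64;

/* Combine the low (EAX) and high (EDX) halves returned by rdtsc. */
static u64 tsc_combine(unsigned lo, unsigned hi)
{
    return lo | ((u64)hi << 32);
}
```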



* [PATCH kvm-unit-tests v2 6/8] Introduce atol()
From: Jason Wang @ 2010-08-31  8:37 UTC (permalink / raw)
  To: zamsden, glommer, mtosatti, avi, kvm

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 lib/libcflat.h |    1 +
 lib/string.c   |   31 +++++++++++++++++++++++++++++++
 2 files changed, 32 insertions(+), 0 deletions(-)

diff --git a/lib/libcflat.h b/lib/libcflat.h
index 9cb90cf..d4ee2e0 100644
--- a/lib/libcflat.h
+++ b/lib/libcflat.h
@@ -50,6 +50,7 @@ extern void puts(const char *s);
 
 extern void *memset(void *s, int c, size_t n);
 
+extern long atol(const char *ptr);
 #define ARRAY_SIZE(_a)  (sizeof(_a)/sizeof((_a)[0]))
 
 #endif
diff --git a/lib/string.c b/lib/string.c
index acac3c0..1f19f5c 100644
--- a/lib/string.c
+++ b/lib/string.c
@@ -30,3 +30,34 @@ void *memset(void *s, int c, size_t n)
 
     return s;
 }
+
+long atol(const char *ptr)
+{
+    long acc = 0;
+    const char *s = ptr;
+    int neg, c;
+
+    while (*s == ' ' || *s == '\t')
+        s++;
+    if (*s == '-'){
+        neg = 1;
+        s++;
+    } else {
+        neg = 0;
+        if (*s == '+')
+            s++;
+    }
+
+    while (*s) {
+        if (*s < '0' || *s > '9')
+            break;
+        c = *s - '0';
+        acc = acc * 10 + c;
+        s++;
+    }
+
+    if (neg)
+        acc = -acc;
+
+    return acc;
+}
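The new atol() follows the usual C semantics: skip leading blanks, accept an optional sign, and stop at the first non-digit. A compact re-statement of the same parsing rules, for illustration only:

```c
#include <assert.h>

/* Same parsing rules as the atol() added above: skip spaces/tabs,
 * accept an optional '+'/'-', accumulate digits, stop at the first
 * non-digit character. */
static long atol_sketch(const char *s)
{
    long acc = 0;
    int neg = 0;

    while (*s == ' ' || *s == '\t')
        s++;
    if (*s == '-' || *s == '+')
        neg = (*s++ == '-');
    while (*s >= '0' && *s <= '9')
        acc = acc * 10 + (*s++ - '0');
    return neg ? -acc : acc;
}
```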



* [PATCH kvm-unit-tests v2 7/8] Add a simple kvmclock driver
From: Jason Wang @ 2010-08-31  8:37 UTC (permalink / raw)
  To: zamsden, glommer, mtosatti, avi, kvm

Most of the code was borrowed from arch/x86/kernel/kvmclock.c. A
special bit, PVCLOCK_RAW_CYCLE_BIT, is used to tell the driver to
return unadjusted cycles.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 x86/kvmclock.c |  279 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 x86/kvmclock.h |   60 ++++++++++++
 2 files changed, 339 insertions(+), 0 deletions(-)
 create mode 100644 x86/kvmclock.c
 create mode 100644 x86/kvmclock.h

diff --git a/x86/kvmclock.c b/x86/kvmclock.c
new file mode 100644
index 0000000..0624da3
--- /dev/null
+++ b/x86/kvmclock.c
@@ -0,0 +1,279 @@
+#include "libcflat.h"
+#include "smp.h"
+#include "atomic.h"
+#include "processor.h"
+#include "kvmclock.h"
+
+#define unlikely(x)	__builtin_expect(!!(x), 0)
+#define likely(x)	__builtin_expect(!!(x), 1)
+
+
+struct pvclock_vcpu_time_info __attribute__((aligned(4))) hv_clock[MAX_CPU];
+struct pvclock_wall_clock wall_clock;
+static unsigned char valid_flags = 0;
+static atomic64_t last_value = ATOMIC64_INIT(0);
+
+/*
+ * Scale a 64-bit delta by scaling and multiplying by a 32-bit fraction,
+ * yielding a 64-bit result.
+ */
+static inline u64 scale_delta(u64 delta, u32 mul_frac, int shift)
+{
+	u64 product;
+#ifdef __i386__
+	u32 tmp1, tmp2;
+#endif
+
+	if (shift < 0)
+		delta >>= -shift;
+	else
+		delta <<= shift;
+
+#ifdef __i386__
+	__asm__ (
+		"mul  %5       ; "
+		"mov  %4,%%eax ; "
+		"mov  %%edx,%4 ; "
+		"mul  %5       ; "
+		"xor  %5,%5    ; "
+		"add  %4,%%eax ; "
+		"adc  %5,%%edx ; "
+		: "=A" (product), "=r" (tmp1), "=r" (tmp2)
+		: "a" ((u32)delta), "1" ((u32)(delta >> 32)), "2" (mul_frac) );
+#elif defined(__x86_64__)
+	__asm__ (
+		"mul %%rdx ; shrd $32,%%rdx,%%rax"
+		: "=a" (product) : "0" (delta), "d" ((u64)mul_frac) );
+#else
+#error implement me!
+#endif
+
+	return product;
+}
+
+#ifdef __i386__
+# define do_div(n,base) ({					\
+	u32 __base = (base);    				\
+	u32 __rem;						\
+	__rem = ((u64)(n)) % __base;                            \
+	(n) = ((u64)(n)) / __base;				\
+	__rem;							\
+ })
+#else
+u32 __attribute__((weak)) __div64_32(u64 *n, u32 base)
+{
+	u64 rem = *n;
+	u64 b = base;
+	u64 res, d = 1;
+	u32 high = rem >> 32;
+
+	/* Reduce the thing a bit first */
+	res = 0;
+	if (high >= base) {
+		high /= base;
+		res = (u64) high << 32;
+		rem -= (u64) (high*base) << 32;
+	}
+
+	while ((s64)b > 0 && b < rem) {
+		b = b+b;
+		d = d+d;
+	}
+
+	do {
+		if (rem >= b) {
+			rem -= b;
+			res += d;
+		}
+		b >>= 1;
+		d >>= 1;
+	} while (d);
+
+	*n = res;
+	return rem;
+}
+
+# define do_div(n,base) ({				\
+	u32 __base = (base);    			\
+	u32 __rem;					\
+	(void)(((typeof((n)) *)0) == ((u64 *)0));	\
+	if (likely(((n) >> 32) == 0)) {			\
+		__rem = (u32)(n) % __base;		\
+		(n) = (u32)(n) / __base;		\
+	} else 						\
+		__rem = __div64_32(&(n), __base);	\
+	__rem;						\
+ })
+#endif
+
+/**
+ * set_normalized_timespec - set timespec sec and nsec parts and normalize
+ *
+ * @ts:		pointer to timespec variable to be set
+ * @sec:	seconds to set
+ * @nsec:	nanoseconds to set
+ *
+ * Set seconds and nanoseconds field of a timespec variable and
+ * normalize to the timespec storage format
+ *
+ * Note: The tv_nsec part is always in the range of
+ *	0 <= tv_nsec < NSEC_PER_SEC
+ * For negative values only the tv_sec field is negative !
+ */
+void set_normalized_timespec(struct timespec *ts, long sec, s64 nsec)
+{
+	while (nsec >= NSEC_PER_SEC) {
+		/*
+		 * The following asm() prevents the compiler from
+		 * optimising this loop into a modulo operation. See
+		 * also __iter_div_u64_rem() in include/linux/time.h
+		 */
+		asm("" : "+rm"(nsec));
+		nsec -= NSEC_PER_SEC;
+		++sec;
+	}
+	while (nsec < 0) {
+		asm("" : "+rm"(nsec));
+		nsec += NSEC_PER_SEC;
+		--sec;
+	}
+	ts->tv_sec = sec;
+	ts->tv_nsec = nsec;
+}
+
+static u64 pvclock_get_nsec_offset(struct pvclock_shadow_time *shadow)
+{
+	u64 delta = rdtsc() - shadow->tsc_timestamp;
+	return scale_delta(delta, shadow->tsc_to_nsec_mul, shadow->tsc_shift);
+}
+
+/*
+ * Reads a consistent set of time-base values from hypervisor,
+ * into a shadow data area.
+ */
+static unsigned pvclock_get_time_values(struct pvclock_shadow_time *dst,
+					struct pvclock_vcpu_time_info *src)
+{
+	do {
+		dst->version = src->version;
+		rmb();		/* fetch version before data */
+		dst->tsc_timestamp     = src->tsc_timestamp;
+		dst->system_timestamp  = src->system_time;
+		dst->tsc_to_nsec_mul   = src->tsc_to_system_mul;
+		dst->tsc_shift         = src->tsc_shift;
+		dst->flags             = src->flags;
+		rmb();		/* test version after fetching data */
+	} while ((src->version & 1) || (dst->version != src->version));
+
+	return dst->version;
+}
+
+cycle_t pvclock_clocksource_read(struct pvclock_vcpu_time_info *src)
+{
+	struct pvclock_shadow_time shadow;
+	unsigned version;
+	cycle_t ret, offset;
+	u64 last;
+
+	do {
+		version = pvclock_get_time_values(&shadow, src);
+		barrier();
+		offset = pvclock_get_nsec_offset(&shadow);
+		ret = shadow.system_timestamp + offset;
+		barrier();
+	} while (version != src->version);
+
+	if ((valid_flags & PVCLOCK_RAW_CYCLE_BIT) ||
+            ((valid_flags & PVCLOCK_TSC_STABLE_BIT) &&
+             (shadow.flags & PVCLOCK_TSC_STABLE_BIT)))
+                return ret;
+
+	/*
+	 * Assumption here is that last_value, a global accumulator, always goes
+	 * forward. If we are less than that, we should not be much smaller.
+	 * We assume there is an error margin we're inside, and that the correction
+	 * does not sacrifice accuracy.
+	 *
+	 * For reads: global may have changed between test and return,
+	 * but this means someone else poked the clock at a later time.
+	 * We just need to make sure we are not seeing a backwards event.
+	 *
+	 * For updates: last_value = ret is not enough, since two vcpus could be
+	 * updating at the same time, and one of them could be slightly behind,
+	 * making the assumption that last_value always goes forward fail to hold.
+	 */
+	last = atomic64_read(&last_value);
+	do {
+		if (ret < last)
+			return last;
+		last = atomic64_cmpxchg(&last_value, last, ret);
+	} while (unlikely(last != ret));
+
+	return ret;
+}
+
+cycle_t kvm_clock_read()
+{
+        struct pvclock_vcpu_time_info *src;
+        cycle_t ret;
+        int index = smp_id();
+
+        src = &hv_clock[index];
+        ret = pvclock_clocksource_read(src);
+        return ret;
+}
+
+void kvm_clock_init(void *data)
+{
+        int index = smp_id();
+        struct pvclock_vcpu_time_info *hvc = &hv_clock[index];
+
+        printf("kvm-clock: cpu %d, msr 0x%lx\n", index, (unsigned long)hvc);
+        wrmsr(MSR_KVM_SYSTEM_TIME, (unsigned long)hvc | 1);
+}
+
+void kvm_clock_clear(void *data)
+{
+        wrmsr(MSR_KVM_SYSTEM_TIME, 0LL);
+}
+
+void pvclock_read_wallclock(struct pvclock_wall_clock *wall_clock,
+			    struct pvclock_vcpu_time_info *vcpu_time,
+			    struct timespec *ts)
+{
+	u32 version;
+	u64 delta;
+	struct timespec now;
+
+	/* get wallclock at system boot */
+	do {
+		version = wall_clock->version;
+		rmb();		/* fetch version before time */
+		now.tv_sec  = wall_clock->sec;
+		now.tv_nsec = wall_clock->nsec;
+		rmb();		/* fetch time before checking version */
+	} while ((wall_clock->version & 1) || (version != wall_clock->version));
+
+	delta = pvclock_clocksource_read(vcpu_time);	/* time since system boot */
+	delta += now.tv_sec * (u64)NSEC_PER_SEC + now.tv_nsec;
+
+	now.tv_nsec = do_div(delta, NSEC_PER_SEC);
+	now.tv_sec = delta;
+
+	set_normalized_timespec(ts, now.tv_sec, now.tv_nsec);
+}
+
+void kvm_get_wallclock(struct timespec *ts)
+{
+        struct pvclock_vcpu_time_info *vcpu_time;
+        int index = smp_id();
+
+        wrmsr(MSR_KVM_WALL_CLOCK, (unsigned long)&wall_clock);
+        vcpu_time = &hv_clock[index];
+        pvclock_read_wallclock(&wall_clock, vcpu_time, ts);
+}
+
+void pvclock_set_flags(unsigned char flags)
+{
+        valid_flags = flags;
+}
diff --git a/x86/kvmclock.h b/x86/kvmclock.h
new file mode 100644
index 0000000..166a338
--- /dev/null
+++ b/x86/kvmclock.h
@@ -0,0 +1,60 @@
+#ifndef KVMCLOCK_H
+#define KVMCLOCK_H
+
+#define MSR_KVM_WALL_CLOCK  0x11
+#define MSR_KVM_SYSTEM_TIME 0x12
+
+#define MAX_CPU 64
+
+#define PVCLOCK_TSC_STABLE_BIT (1 << 0)
+#define PVCLOCK_RAW_CYCLE_BIT (1 << 7) /* Get raw cycle */
+
+# define NSEC_PER_SEC			1000000000ULL
+
+typedef u64 cycle_t;
+
+struct pvclock_vcpu_time_info {
+	u32   version;
+	u32   pad0;
+	u64   tsc_timestamp;
+	u64   system_time;
+	u32   tsc_to_system_mul;
+	s8    tsc_shift;
+	u8    flags;
+	u8    pad[2];
+} __attribute__((__packed__)); /* 32 bytes */
+
+struct pvclock_wall_clock {
+	u32   version;
+	u32   sec;
+	u32   nsec;
+} __attribute__((__packed__));
+
+/*
+ * These are periodically updated
+ *    xen: magic shared_info page
+ *    kvm: gpa registered via msr
+ * and then copied here.
+ */
+struct pvclock_shadow_time {
+	u64 tsc_timestamp;     /* TSC at last update of time vals.  */
+	u64 system_timestamp;  /* Time, in nanosecs, since boot.    */
+	u32 tsc_to_nsec_mul;
+	int tsc_shift;
+	u32 version;
+	u8  flags;
+};
+
+
+struct timespec {
+        long   tv_sec;
+        long   tv_nsec;
+};
+
+void pvclock_set_flags(unsigned char flags);
+cycle_t kvm_clock_read();
+void kvm_get_wallclock(struct timespec *ts);
+void kvm_clock_init(void *data);
+void kvm_clock_clear(void *data);
+
+#endif
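The tail of pvclock_clocksource_read() above clamps each freshly read value against the global last_value accumulator with a cmpxchg loop, so no vcpu ever observes time going backwards. The core of that loop can be sketched with a compiler intrinsic (the global and helper names are illustrative):

```c
#include <assert.h>

typedef unsigned long long u64;

static u64 last_value_sketch;   /* global monotonic floor, starts at 0 */

/* Clamp a freshly read cycle count against the global floor, as
 * pvclock_clocksource_read() does with last_value: if someone already
 * saw a later time, return that; otherwise publish ours. */
static u64 clamp_monotonic(u64 ret)
{
    u64 last = last_value_sketch;

    do {
        if (ret < last)
            return last;        /* another vcpu saw a later time */
        last = __sync_val_compare_and_swap(&last_value_sketch, last, ret);
    } while (last != ret);      /* retry until the stored value is ours */

    return ret;
}
```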



* [PATCH kvm-unit-tests v2 8/8] Add tests for kvm-clock
From: Jason Wang @ 2010-08-31  8:37 UTC (permalink / raw)
  To: zamsden, glommer, mtosatti, avi, kvm

This patch implements three tests for kvmclock. The first checks
whether the time of day returned by kvmclock matches the value got
from the host. The second checks whether the cycle count of kvmclock
grows monotonically in an SMP guest, recording the number of warps and
stalls. The last is a performance test measuring the guest cycles
spent reading the kvmclock cycle count.

The test accepts three parameters: the number of test loops, the
seconds since 1970-01-01 00:00:00 UTC (easily obtained with date +%s),
and the maximum accepted offset between the guest and host TOD.
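The three parameters might be passed like this (a sketch only: the chardev name and output path are illustrative, and the command is echoed rather than executed since qemu may not be available):

```shell
# Loops, host TOD in seconds (via date +%s), and max accepted offset.
LOOPS=100000
SECS=$(date +%s)
THRESHOLD=5

# Print the full command line that would be run.
echo qemu-system-x86_64 -device testdev,chardev=testlog \
    -chardev file,id=testlog,path=msr.out \
    -kernel ./x86/kvmclock_test.flat -smp 64 \
    --append "$LOOPS $SECS $THRESHOLD"
```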

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 config-x86-common.mak |    6 +-
 x86/README            |    1 
 x86/kvmclock_test.c   |  166 +++++++++++++++++++++++++++++++++++++++++++++++++
 x86/unittests.cfg     |    5 +
 4 files changed, 177 insertions(+), 1 deletions(-)
 create mode 100644 x86/kvmclock_test.c

diff --git a/config-x86-common.mak b/config-x86-common.mak
index b8ca859..b541c1c 100644
--- a/config-x86-common.mak
+++ b/config-x86-common.mak
@@ -26,7 +26,8 @@ FLATLIBS = lib/libcflat.a $(libgcc)
 tests-common = $(TEST_DIR)/vmexit.flat $(TEST_DIR)/tsc.flat \
                $(TEST_DIR)/smptest.flat  $(TEST_DIR)/port80.flat \
                $(TEST_DIR)/realmode.flat $(TEST_DIR)/msr.flat \
-               $(TEST_DIR)/hypercall.flat $(TEST_DIR)/sieve.flat
+               $(TEST_DIR)/hypercall.flat $(TEST_DIR)/sieve.flat \
+               $(TEST_DIR)/kvmclock_test.flat
 
 tests_and_config = $(TEST_DIR)/*.flat $(TEST_DIR)/unittests.cfg
 
@@ -70,6 +71,9 @@ $(TEST_DIR)/rmap_chain.flat: $(cstart.o) $(TEST_DIR)/rmap_chain.o \
 
 $(TEST_DIR)/svm.flat: $(cstart.o) $(TEST_DIR)/vm.o
 
+$(TEST_DIR)/kvmclock_test.flat: $(cstart.o) $(TEST_DIR)/kvmclock.o \
+                                $(TEST_DIR)/kvmclock_test.o
+
 arch_clean:
 	$(RM) $(TEST_DIR)/*.o $(TEST_DIR)/*.flat \
 	$(TEST_DIR)/.*.d $(TEST_DIR)/lib/.*.d $(TEST_DIR)/lib/*.o
diff --git a/x86/README b/x86/README
index ab5a2ae..215fb2f 100644
--- a/x86/README
+++ b/x86/README
@@ -12,3 +12,4 @@ sieve: heavy memory access with no paging and with paging static and with paging
 smptest: run smp_id() on every cpu and compares return value to number
 tsc: write to tsc(0) and write to tsc(100000000000) and read it back
 vmexit: long loops for each: cpuid, vmcall, mov_from_cr8, mov_to_cr8, inl_pmtimer, ipi, ipi+halt
+kvmclock_test: test of wallclock, monotonic cycle and performance of kvmclock
diff --git a/x86/kvmclock_test.c b/x86/kvmclock_test.c
new file mode 100644
index 0000000..5b14ae2
--- /dev/null
+++ b/x86/kvmclock_test.c
@@ -0,0 +1,166 @@
+#include "libcflat.h"
+#include "smp.h"
+#include "atomic.h"
+#include "processor.h"
+#include "kvmclock.h"
+
+#define DEFAULT_TEST_LOOPS 100000000L
+#define DEFAULT_THRESHOLD  5L
+
+struct test_info {
+        struct spinlock lock;
+        long loops;               /* test loops */
+        u64 warps;                /* warp count */
+        u64 stalls;               /* stall count */
+        long long worst;          /* worst warp */
+        volatile cycle_t last;    /* last cycle seen by test */
+        atomic_t ncpus;           /* number of cpu in the test*/
+        int check;                /* check cycle ? */
+};
+
+struct test_info ti[4];
+
+static int wallclock_test(long sec, long threshold)
+{
+        long ksec, offset;
+        struct timespec ts;
+
+        printf("Wallclock test, threshold %ld\n", threshold);
+        kvm_get_wallclock(&ts);
+        ksec = ts.tv_sec;
+
+        offset = ksec - sec;
+        printf("Seconds get from host:     %ld\n", sec);
+        printf("Seconds get from kvmclock: %ld\n", ksec);
+        printf("Offset:                    %ld\n", offset);
+
+        if (offset > threshold || offset < -threshold) {
+                printf("offset too large!\n");
+                return 1;
+        }
+
+        return 0;
+}
+
+static void kvm_clock_test(void *data)
+{
+        struct test_info *hv_test_info = (struct test_info *)data;
+        long i, check = hv_test_info->check;
+
+        for (i = 0; i < hv_test_info->loops; i++) {
+                cycle_t t0, t1;
+                long long delta;
+
+                if (check == 0) {
+                        kvm_clock_read();
+                        continue;
+                }
+
+                spin_lock(&hv_test_info->lock);
+                t1 = kvm_clock_read();
+                t0 = hv_test_info->last;
+                hv_test_info->last = kvm_clock_read();
+                spin_unlock(&hv_test_info->lock);
+
+                delta = t1 - t0;
+                if (delta < 0) {
+                        spin_lock(&hv_test_info->lock);
+                        ++hv_test_info->warps;
+                        if (delta < hv_test_info->worst) {
+                                hv_test_info->worst = delta;
+                                printf("Worst warp %lld\n", hv_test_info->worst);
+                        }
+                        spin_unlock(&hv_test_info->lock);
+                }
+                if (delta == 0)
+                        ++hv_test_info->stalls;
+
+                if (!((unsigned long)i & 31))
+                        asm volatile("rep; nop");
+        }
+
+        atomic_dec(&hv_test_info->ncpus);
+}
+
+static int cycle_test(int ncpus, long loops, int check, struct test_info *ti)
+{
+        int i;
+        unsigned long long begin, end;
+
+        begin = rdtsc();
+
+        atomic_set(&ti->ncpus, ncpus);
+        ti->loops = loops;
+        ti->check = check;
+        for (i = ncpus - 1; i >= 0; i--)
+                on_cpu_async(i, kvm_clock_test, (void *)ti);
+
+        /* Wait for the other vcpus to finish */
+        while (atomic_read(&ti->ncpus))
+                ;
+
+        end = rdtsc();
+
+        printf("Total vcpus: %d\n", ncpus);
+        printf("Test  loops: %ld\n", ti->loops);
+        if (check == 1) {
+                printf("Total warps:  %lld\n", ti->warps);
+                printf("Total stalls: %lld\n", ti->stalls);
+                printf("Worst warp:   %lld\n", ti->worst);
+        } else
+                printf("TSC cycles:  %lld\n", end - begin);
+
+        return ti->warps ? 1 : 0;
+}
+
+int main(int ac, char **av)
+{
+        int ncpus = cpu_count();
+        int nerr = 0, i;
+        long loops = DEFAULT_TEST_LOOPS;
+        long sec = 0;
+        long threshold = DEFAULT_THRESHOLD;
+
+        if (ac > 1)
+                loops = atol(av[1]);
+        if (ac > 2)
+                sec = atol(av[2]);
+        if (ac > 3)
+                threshold = atol(av[3]);
+
+        smp_init();
+
+        if (ncpus > MAX_CPU)
+                ncpus = MAX_CPU;
+        for (i = 0; i < ncpus; ++i)
+                on_cpu(i, kvm_clock_init, (void *)0);
+
+        if (ac > 2)
+                nerr += wallclock_test(sec, threshold);
+
+        printf("Check the stability of raw cycles ...\n");
+        pvclock_set_flags(PVCLOCK_TSC_STABLE_BIT
+                          | PVCLOCK_RAW_CYCLE_BIT);
+        if (cycle_test(ncpus, loops, 1, &ti[0]))
+                printf("Raw cycle is not stable\n");
+        else
+                printf("Raw cycle is stable\n");
+
+        pvclock_set_flags(PVCLOCK_TSC_STABLE_BIT);
+        printf("Monotonic cycle test:\n");
+        nerr += cycle_test(ncpus, loops, 1, &ti[1]);
+
+        printf("Measure the performance of raw cycles ...\n");
+        pvclock_set_flags(PVCLOCK_TSC_STABLE_BIT
+                          | PVCLOCK_RAW_CYCLE_BIT);
+        cycle_test(ncpus, loops, 0, &ti[2]);
+
+        printf("Measure the performance of adjusted cycles ...\n");
+        pvclock_set_flags(PVCLOCK_TSC_STABLE_BIT);
+        cycle_test(ncpus, loops, 0, &ti[3]);
+
+        for (i = 0; i < ncpus; ++i)
+                on_cpu(i, kvm_clock_clear, (void *)0);
+
+        return nerr > 0 ? 1 : 0;
+}
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index 7796e41..228ac1d 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -63,3 +63,8 @@ extra_params = -enable-nesting -cpu qemu64,+svm
 file = svm.flat
 smp = 2
 extra_params = -cpu qemu64,-svm
+
+[kvmclock_test]
+file = kvmclock_test.flat
+smp = 2
+extra_params = --append "10000000 `date +%s`"
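For context, the [kvmclock_test] entry above expands into a qemu command line
much like the one shown in the cover letter. The sketch below reconstructs it
in shell; the binary name, the testdev/chardev wiring, and the output path are
assumptions carried over from the cover letter, not part of this patch:

```shell
# Build the qemu invocation the [kvmclock_test] harness entry implies.
LOOPS=10000000            # test loops (argv[1] of the guest test)
SECS=$(date +%s)          # host TOD in seconds (argv[2]); enables the wallclock test
THRESHOLD=5               # allowed wallclock offset in seconds (argv[3]; 5 is the default)

CMD="qemu-system-x86_64 -device testdev,chardev=testlog \
 -chardev file,id=testlog,path=kvmclock.out \
 -kernel ./x86/kvmclock_test.flat -smp 2 \
 --append '$LOOPS $SECS $THRESHOLD'"

# Print rather than execute, since the .flat image must be built first.
echo "$CMD"
```

Passing the current `date +%s` as the second guest argument is what lets
wallclock_test() compare the TOD reported by kvmclock against the host's.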



Thread overview: 9 messages
2010-08-31  8:36 [PATCH kvm-unit-tests v2 0/8] Tests for kvmclock Jason Wang
2010-08-31  8:36 ` [PATCH kvm-unit-tests v2 1/8] Introduce some type definiations Jason Wang
2010-08-31  8:36 ` [PATCH kvm-unit-tests v2 2/8] Increase max_cpu to 64 Jason Wang
2010-08-31  8:37 ` [PATCH kvm-unit-tests v2 3/8] Introduce memory barriers Jason Wang
2010-08-31  8:37 ` [PATCH kvm-unit-tests v2 4/8] Introduce atomic operations Jason Wang
2010-08-31  8:37 ` [PATCH kvm-unit-tests v2 5/8] Export tsc related helpers Jason Wang
2010-08-31  8:37 ` [PATCH kvm-unit-tests v2 6/8] Introduce atol() Jason Wang
2010-08-31  8:37 ` [PATCH kvm-unit-tests v2 7/8] Add a simple kvmclock driver Jason Wang
2010-08-31  8:37 ` [PATCH kvm-unit-tests v2 8/8] Add tests for kvm-clock Jason Wang