From: Marcelo Tosatti <mtosatti@redhat.com>
To: kvm@vger.kernel.org
Cc: johnstul@us.ibm.com, jeremy@goop.org, glommer@parallels.com,
	zamsden@gmail.com, gleb@redhat.com, avi@redhat.com,
	pbonzini@redhat.com, Marcelo Tosatti <mtosatti@redhat.com>
Subject: [patch 13/16] KVM: x86: implement PVCLOCK_TSC_STABLE_BIT pvclock flag
Date: Wed, 31 Oct 2012 20:47:09 -0200
Message-ID: <20121031224824.430341485@redhat.com>
In-Reply-To: <20121031224656.417434866@redhat.com>


KVM added a global variable to guarantee monotonicity in the guest.
One of the reasons for that is that the time between

	1. ktime_get_ts(&timespec);
	2. rdtscll(tsc);

is variable. That is, given a host with a stable TSC, suppose that
two VCPUs read the same time via ktime_get_ts() above.

The time required to execute 2. is not the same on those two instances
executing on different VCPUs (cache misses, interrupts...).

If the TSC value that is used by the host to interpolate when
calculating the monotonic time is the same value used to calculate
the tsc_timestamp value stored in the pvclock data structure, and
a single <system_timestamp, tsc_timestamp> tuple is visible to all
VCPUs simultaneously, this problem disappears. See the comment on top
of pvclock_update_vm_gtod_copy for details.

Monotonicity is then guaranteed by the synchronicity of the host and
guest TSCs.

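As a concrete illustration (numbers invented for the example): suppose
VCPU1's <system_timestamp, tsc_timestamp> update samples monotonic time
N=100ns after VCPU0's, but the gap between ktime_get_ts() and rdtscll()
was 50ns longer for the second sample, so the TSC values backing the
two updates are M=150ns apart (in guest-ns terms). Then, following the
derivation in the pvclock_update_vm_gtod_copy comment below,
ret1 = ret0 + N - M = ret0 - 50ns: a guest thread migrating from VCPU0
to VCPU1 would see time jump 50ns backwards. With a single master
tuple, both VCPUs compute against the same <timespec0, tsc0>, and the
difference between their readings reduces to the delta between two
reads of the synchronized TSC.
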
Set the TSC stable pvclock flag in that case, allowing the guest to
read the clock from userspace.

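For context, a minimal sketch of the guest-side pvclock read that this
flag makes safe for direct per-VCPU use. Field names follow struct
pvclock_vcpu_time_info (asm/pvclock-abi.h); the scaling is a
simplified, 64-bit-only version of pvclock_scale_delta(), and error
handling is omitted:

	static u64 pvclock_read_ns(struct pvclock_vcpu_time_info *src)
	{
		u32 version;
		u64 tsc, delta, ns;

		do {
			version = src->version;
			rmb();		/* version before payload */
			tsc = native_read_tsc();
			delta = tsc - src->tsc_timestamp;
			if (src->tsc_shift >= 0)
				delta <<= src->tsc_shift;
			else
				delta >>= -src->tsc_shift;
			ns = src->system_time +
				((delta * src->tsc_to_system_mul) >> 32);
			rmb();		/* payload before version re-check */
		} while ((src->version & 1) || version != src->version);

		return ns;
	}

With PVCLOCK_TSC_STABLE_BIT set in src->flags, values returned by this
read are mutually monotonic across VCPUs, and the global last-value
variable mentioned above is no longer needed.
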
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

Index: vsyscall/arch/x86/kvm/x86.c
===================================================================
--- vsyscall.orig/arch/x86/kvm/x86.c
+++ vsyscall/arch/x86/kvm/x86.c
@@ -46,6 +46,7 @@
 #include <linux/uaccess.h>
 #include <linux/hash.h>
 #include <linux/pci.h>
+#include <linux/pvclock_gtod.h>
 #include <trace/events/kvm.h>
 
 #define CREATE_TRACE_POINTS
@@ -1135,8 +1136,149 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu
 
 EXPORT_SYMBOL_GPL(kvm_write_tsc);
 
+static cycle_t read_tsc(void)
+{
+	cycle_t ret;
+	u64 last;
+
+	/*
+	 * Empirically, a fence (of type that depends on the CPU)
+	 * before rdtsc is enough to ensure that rdtsc is ordered
+	 * with respect to loads.  The various CPU manuals are unclear
+	 * as to whether rdtsc can be reordered with later loads,
+	 * but no one has ever seen it happen.
+	 */
+	rdtsc_barrier();
+	ret = (cycle_t)vget_cycles();
+
+	last = pvclock_gtod_data.clock.cycle_last;
+
+	if (likely(ret >= last))
+		return ret;
+
+	/*
+	 * GCC likes to generate cmov here, but this branch is extremely
+	 * predictable (it's just a function of time and the likely is
+	 * very likely) and there's a data dependence, so force GCC
+	 * to generate a branch instead.  I don't barrier() because
+	 * we don't actually need a barrier, and if this function
+	 * ever gets inlined it will generate worse code.
+	 */
+	asm volatile ("");
+	return last;
+}
+
+static inline u64 vgettsc(cycle_t *cycle_now)
+{
+	long v;
+	struct pvclock_gtod_data *gtod = &pvclock_gtod_data;
+
+	*cycle_now = read_tsc();
+
+	v = (*cycle_now - gtod->clock.cycle_last) & gtod->clock.mask;
+	return v * gtod->clock.mult;
+}
+
+static int do_monotonic(struct timespec *ts, cycle_t *cycle_now)
+{
+	unsigned long seq;
+	u64 ns;
+	int mode;
+	struct pvclock_gtod_data *gtod = &pvclock_gtod_data;
+
+	ts->tv_nsec = 0;
+	do {
+		seq = read_seqcount_begin(&gtod->seq);
+		mode = gtod->clock.vclock_mode;
+		ts->tv_sec = gtod->monotonic_time_sec;
+		ns = gtod->monotonic_time_snsec;
+		ns += vgettsc(cycle_now);
+		ns >>= gtod->clock.shift;
+	} while (unlikely(read_seqcount_retry(&gtod->seq, seq)));
+	timespec_add_ns(ts, ns);
+
+	return mode;
+}
+
+/* returns true if host is using tsc clocksource */
+static bool kvm_get_time_and_clockread(s64 *kernel_ns, cycle_t *cycle_now)
+{
+	struct timespec ts;
+
+	/* checked again under seqlock below */
+	if (pvclock_gtod_data.clock.vclock_mode != VCLOCK_TSC)
+		return false;
+
+	if (do_monotonic(&ts, cycle_now) != VCLOCK_TSC)
+		return false;
+
+	monotonic_to_bootbased(&ts);
+	*kernel_ns = timespec_to_ns(&ts);
+
+	return true;
+}
+
+
+/*
+ *
+ * Assuming a stable TSC across physical CPUs, the following condition
+ * is possible. Each numbered line represents an event visible to both
+ * CPUs at the next numbered event.
+ *
+ * "timespecX" represents host monotonic time. "tscX" represents
+ * RDTSC value.
+ *
+ * 		VCPU0 on CPU0		|	VCPU1 on CPU1
+ *
+ * 1.  read timespec0,tsc0
+ * 2.					| timespec1 = timespec0 + N
+ * 					| tsc1 = tsc0 + M
+ * 3. transition to guest		| transition to guest
+ * 4. ret0 = timespec0 + (rdtsc - tsc0) |
+ * 5.				        | ret1 = timespec1 + (rdtsc - tsc1)
+ * 				        | ret1 = timespec0 + N + (rdtsc - (tsc0 + M))
+ *
+ * Since ret0 update is visible to VCPU1 at time 5, to obey monotonicity:
+ *
+ * 	- ret0 < ret1
+ *	- timespec0 + (rdtsc - tsc0) < timespec0 + N + (rdtsc - (tsc0 + M))
+ *		...
+ *	- 0 < N - M => M < N
+ *
+ * That is, when timespec0 != timespec1, M < N. Unfortunately that is not
+ * always the case (the difference between two distinct xtime instances
+ * might be smaller than the difference between corresponding TSC reads,
+ * when updating guest vcpus pvclock areas).
+ *
+ * To avoid that problem, do not allow visibility of distinct
+ * system_timestamp/tsc_timestamp values simultaneously: use a master
+ * copy of host monotonic time values. Update that master copy
+ * in lockstep.
+ *
+ * Rely on synchronization of host TSCs for monotonicity.
+ *
+ */
+
+static void pvclock_update_vm_gtod_copy(struct kvm *kvm)
+{
+	struct kvm_arch *ka = &kvm->arch;
+	int vclock_mode;
+
+	/*
+ 	 * If the host uses the TSC clock, pass the TSC through as stable
+	 * to the guest.
+	 */
+	ka->use_master_clock = kvm_get_time_and_clockread(
+					&ka->master_kernel_ns,
+					&ka->master_cycle_now);
+
+	vclock_mode = pvclock_gtod_data.clock.vclock_mode;
+	trace_kvm_update_master_clock(ka->use_master_clock, vclock_mode);
+}
+
 static void kvm_write_pvtime(struct kvm_vcpu *v, struct page *page,
-			     unsigned int offset_in_page, gpa_t gpa)
+			     unsigned int offset_in_page, gpa_t gpa,
+			     bool use_master_clock)
 {
 	struct kvm_vcpu_arch *vcpu = &v->arch;
 	void *shared_kaddr;
@@ -1155,6 +1297,10 @@ static void kvm_write_pvtime(struct kvm_
 		vcpu->pvclock_set_guest_stopped_request = false;
 	}
 
+	/* If the host uses TSC clocksource, then it is stable */
+	if (use_master_clock)
+		pvclock_flags |= PVCLOCK_TSC_STABLE_BIT;
+
 	vcpu->hv_clock.flags = pvclock_flags;
 
 	memcpy(shared_kaddr + offset_in_page, &vcpu->hv_clock,
@@ -1169,14 +1315,18 @@ static int kvm_guest_time_update(struct 
 {
 	unsigned long flags;
 	struct kvm_vcpu_arch *vcpu = &v->arch;
+	struct kvm_arch *ka = &v->kvm->arch;
 	unsigned long this_tsc_khz;
 	s64 kernel_ns, max_kernel_ns;
 	u64 tsc_timestamp;
+	u64 host_tsc;
+	bool use_master_clock;
+
+	kernel_ns = 0;
+	host_tsc = 0;
 
 	/* Keep irq disabled to prevent changes to the clock */
 	local_irq_save(flags);
-	tsc_timestamp = kvm_x86_ops->read_l1_tsc(v, native_read_tsc());
-	kernel_ns = get_kernel_ns();
 	this_tsc_khz = __get_cpu_var(cpu_tsc_khz);
 	if (unlikely(this_tsc_khz == 0)) {
 		local_irq_restore(flags);
@@ -1185,6 +1335,24 @@ static int kvm_guest_time_update(struct 
 	}
 
 	/*
+ 	 * If the host uses the TSC clock, pass the TSC through as stable
+	 * to the guest.
+	 */
+	spin_lock(&ka->pvclock_gtod_sync_lock);
+	use_master_clock = ka->use_master_clock;
+	if (use_master_clock) {
+		host_tsc = ka->master_cycle_now;
+		kernel_ns = ka->master_kernel_ns;
+	}
+	spin_unlock(&ka->pvclock_gtod_sync_lock);
+	if (!use_master_clock) {
+		host_tsc = native_read_tsc();
+		kernel_ns = get_kernel_ns();
+	}
+
+	tsc_timestamp = kvm_x86_ops->read_l1_tsc(v, host_tsc);
+
+	/*
 	 * We may have to catch up the TSC to match elapsed wall clock
 	 * time for two reasons, even if kvmclock is used.
 	 *   1) CPU could have been running below the maximum TSC rate
@@ -1245,8 +1413,14 @@ static int kvm_guest_time_update(struct 
 		vcpu->hw_tsc_khz = this_tsc_khz;
 	}
 
-	if (max_kernel_ns > kernel_ns)
-		kernel_ns = max_kernel_ns;
+	/* with a master <monotonic time, tsc value> tuple,
+ 	 * pvclock clock reads always increase at the (scaled) rate
+ 	 * of guest TSC - no need to deal with sampling errors.
+ 	 */
+	if (!use_master_clock) {
+		if (max_kernel_ns > kernel_ns)
+			kernel_ns = max_kernel_ns;
+	}
 
 	/* With all the info we got, fill in the values */
 	vcpu->hv_clock.tsc_timestamp = tsc_timestamp;
@@ -1262,10 +1436,12 @@ static int kvm_guest_time_update(struct 
 	 */
 	vcpu->hv_clock.version += 2;
 
- 	kvm_write_pvtime(v, vcpu->time_page, vcpu->time_offset, vcpu->time);
+ 	kvm_write_pvtime(v, vcpu->time_page, vcpu->time_offset, vcpu->time,
+			 use_master_clock);
  	if (vcpu->uspace_time_page)
  		kvm_write_pvtime(v, vcpu->uspace_time_page,
- 				 vcpu->uspace_time_offset, vcpu->uspace_time);
+ 				 vcpu->uspace_time_offset, vcpu->uspace_time,
+				 use_master_clock);
 
 	return 0;
 }
@@ -5302,6 +5478,28 @@ static void process_nmi(struct kvm_vcpu 
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 }
 
+static void kvm_gen_update_masterclock(struct kvm *kvm)
+{
+	int i;
+	struct kvm_vcpu *vcpu;
+	struct kvm_arch *ka = &kvm->arch;
+
+	spin_lock(&ka->pvclock_gtod_sync_lock);
+	kvm_make_mclock_inprogress_request(kvm);
+	/* no guest entries from this point */
+	pvclock_update_vm_gtod_copy(kvm);
+
+	kvm_for_each_vcpu(i, vcpu, kvm)
+		set_bit(KVM_REQ_CLOCK_UPDATE, &vcpu->requests);
+
+	/* guest entries allowed */
+	kvm_for_each_vcpu(i, vcpu, kvm)
+		clear_bit(KVM_REQ_MCLOCK_INPROGRESS, &vcpu->requests);
+
+	spin_unlock(&ka->pvclock_gtod_sync_lock);
+
+}
+
 static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 {
 	int r;
@@ -5314,6 +5512,8 @@ static int vcpu_enter_guest(struct kvm_v
 			kvm_mmu_unload(vcpu);
 		if (kvm_check_request(KVM_REQ_MIGRATE_TIMER, vcpu))
 			__kvm_migrate_timers(vcpu);
+		if (kvm_check_request(KVM_REQ_MASTERCLOCK_UPDATE, vcpu))
+			kvm_gen_update_masterclock(vcpu->kvm);
 		if (kvm_check_request(KVM_REQ_CLOCK_UPDATE, vcpu)) {
 			r = kvm_guest_time_update(vcpu);
 			if (unlikely(r))
@@ -6219,6 +6419,8 @@ int kvm_arch_hardware_enable(void *garba
 			kvm_for_each_vcpu(i, vcpu, kvm) {
 				vcpu->arch.tsc_offset_adjustment += delta_cyc;
 				vcpu->arch.last_host_tsc = local_tsc;
+				set_bit(KVM_REQ_MASTERCLOCK_UPDATE,
+					&vcpu->requests);
 			}
 
 			/*
@@ -6356,6 +6558,9 @@ int kvm_arch_init_vm(struct kvm *kvm, un
 
 	raw_spin_lock_init(&kvm->arch.tsc_write_lock);
 	mutex_init(&kvm->arch.apic_map_lock);
+	spin_lock_init(&kvm->arch.pvclock_gtod_sync_lock);
+
+	pvclock_update_vm_gtod_copy(kvm);
 
 	return 0;
 }
Index: vsyscall/arch/x86/include/asm/kvm_host.h
===================================================================
--- vsyscall.orig/arch/x86/include/asm/kvm_host.h
+++ vsyscall/arch/x86/include/asm/kvm_host.h
@@ -22,6 +22,7 @@
 #include <linux/kvm_para.h>
 #include <linux/kvm_types.h>
 #include <linux/perf_event.h>
+#include <linux/pvclock_gtod.h>
 
 #include <asm/pvclock-abi.h>
 #include <asm/desc.h>
@@ -563,6 +564,11 @@ struct kvm_arch {
 	u64 cur_tsc_offset;
 	u8  cur_tsc_generation;
 
+	spinlock_t pvclock_gtod_sync_lock;
+	bool use_master_clock;
+	u64 master_kernel_ns;
+	cycle_t master_cycle_now;
+
 	struct kvm_xen_hvm_config xen_hvm_config;
 
 	/* fields used by HYPER-V emulation */
Index: vsyscall/include/linux/kvm_host.h
===================================================================
--- vsyscall.orig/include/linux/kvm_host.h
+++ vsyscall/include/linux/kvm_host.h
@@ -118,6 +118,8 @@ static inline bool is_error_page(struct 
 #define KVM_REQ_IMMEDIATE_EXIT    15
 #define KVM_REQ_PMU               16
 #define KVM_REQ_PMI               17
+#define KVM_REQ_MASTERCLOCK_UPDATE  18
+#define KVM_REQ_MCLOCK_INPROGRESS 19
 
 #define KVM_USERSPACE_IRQ_SOURCE_ID		0
 #define KVM_IRQFD_RESAMPLE_IRQ_SOURCE_ID	1
@@ -527,6 +529,7 @@ void kvm_put_guest_fpu(struct kvm_vcpu *
 
 void kvm_flush_remote_tlbs(struct kvm *kvm);
 void kvm_reload_remote_mmus(struct kvm *kvm);
+void kvm_make_mclock_inprogress_request(struct kvm *kvm);
 
 long kvm_arch_dev_ioctl(struct file *filp,
 			unsigned int ioctl, unsigned long arg);
Index: vsyscall/virt/kvm/kvm_main.c
===================================================================
--- vsyscall.orig/virt/kvm/kvm_main.c
+++ vsyscall/virt/kvm/kvm_main.c
@@ -212,6 +212,11 @@ void kvm_reload_remote_mmus(struct kvm *
 	make_all_cpus_request(kvm, KVM_REQ_MMU_RELOAD);
 }
 
+void kvm_make_mclock_inprogress_request(struct kvm *kvm)
+{
+	make_all_cpus_request(kvm, KVM_REQ_MCLOCK_INPROGRESS);
+}
+
 int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
 {
 	struct page *page;
Index: vsyscall/arch/x86/kvm/trace.h
===================================================================
--- vsyscall.orig/arch/x86/kvm/trace.h
+++ vsyscall/arch/x86/kvm/trace.h
@@ -4,6 +4,7 @@
 #include <linux/tracepoint.h>
 #include <asm/vmx.h>
 #include <asm/svm.h>
+#include <asm/clocksource.h>
 
 #undef TRACE_SYSTEM
 #define TRACE_SYSTEM kvm
@@ -754,6 +755,31 @@ TRACE_EVENT(
 		  __entry->write ? "Write" : "Read",
 		  __entry->gpa_match ? "GPA" : "GVA")
 );
+
+#define host_clocks				\
+	{VCLOCK_NONE, "none"},			\
+	{VCLOCK_TSC,  "tsc"},			\
+	{VCLOCK_HPET, "hpet"}			\
+
+TRACE_EVENT(kvm_update_master_clock,
+	TP_PROTO(bool use_master_clock, unsigned int host_clock),
+	TP_ARGS(use_master_clock, host_clock),
+
+	TP_STRUCT__entry(
+		__field(		bool,	use_master_clock	)
+		__field(	unsigned int,	host_clock		)
+	),
+
+	TP_fast_assign(
+		__entry->use_master_clock	= use_master_clock;
+		__entry->host_clock		= host_clock;
+	),
+
+	TP_printk("masterclock %d hostclock %s",
+		  __entry->use_master_clock,
+		  __print_symbolic(__entry->host_clock, host_clocks))
+);
+
 #endif /* _TRACE_KVM_H */
 
 #undef TRACE_INCLUDE_PATH


