* [PATCH 1/2] kvm: x86: Refine kvm_write_tsc synchronization generations
@ 2020-06-15 23:07 Jim Mattson
  2020-06-15 23:07 ` [PATCH 2/2] kvm: x86: Track synchronized vCPUs in nr_vcpus_matched_tsc Jim Mattson
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Jim Mattson @ 2020-06-15 23:07 UTC (permalink / raw)
  To: kvm, Paolo Bonzini; +Cc: Jim Mattson, Peter Shier, Oliver Upton

Start a new TSC synchronization generation whenever the
IA32_TIME_STAMP_COUNTER MSR is written on a vCPU that has already
participated in the current TSC synchronization generation.

Previously, it was not possible to restore the IA32_TIME_STAMP_COUNTER
MSR to a value less than the TSC frequency. Since vCPU initialization
sets the IA32_TIME_STAMP_COUNTER MSR to zero, a subsequent
KVM_SET_MSRS ioctl that attempted to write a small value to the
IA32_TIME_STAMP_COUNTER MSR was viewed as an attempt at TSC
synchronization. Notably, this was the case even for single vCPU VMs,
which were always synchronized.
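
For readers unfamiliar with the heuristic being refined, the core "is this
write a synchronization attempt?" test can be sketched as standalone C. This
is a simplification for illustration, not the kernel code; the names mirror
those in the diff below:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Simplified sketch of KVM's "is this TSC write a synchronization
 * attempt?" test.  A write is treated as synchronizing when it lands
 * within roughly one second's worth of cycles of the value the TSC
 * would have reached by now: tsc_exp is the last synchronized write
 * plus the cycles elapsed since, and tsc_hz is the virtual TSC
 * frequency (cycles per second).
 */
static bool tsc_write_is_synchronizing(uint64_t data, uint64_t tsc_exp,
				       uint64_t tsc_hz)
{
	/* true iff data is within +/- tsc_hz cycles of tsc_exp */
	return data < tsc_exp + tsc_hz && data + tsc_hz > tsc_exp;
}
```

For example, a write of any value below the TSC frequency to a freshly
initialized vCPU (whose last synchronized value is 0) always falls inside
this window, which is why such writes were treated as synchronization
attempts even on single-vCPU VMs.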

Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Reviewed-by: Oliver Upton <oupton@google.com>
---
 arch/x86/kvm/x86.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9e41b5135340..2555ea2cd91e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2015,7 +2015,6 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr)
 	u64 offset, ns, elapsed;
 	unsigned long flags;
 	bool matched;
-	bool already_matched;
 	u64 data = msr->data;
 	bool synchronizing = false;
 
@@ -2032,7 +2031,8 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr)
 			 * kvm_clock stable after CPU hotplug
 			 */
 			synchronizing = true;
-		} else {
+		} else if (vcpu->arch.this_tsc_generation !=
+			   kvm->arch.cur_tsc_generation) {
 			u64 tsc_exp = kvm->arch.last_tsc_write +
 						nsec_to_cycles(vcpu, elapsed);
 			u64 tsc_hz = vcpu->arch.virtual_tsc_khz * 1000LL;
@@ -2062,7 +2062,6 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr)
 			offset = kvm_compute_tsc_offset(vcpu, data);
 		}
 		matched = true;
-		already_matched = (vcpu->arch.this_tsc_generation == kvm->arch.cur_tsc_generation);
 	} else {
 		/*
 		 * We split periods of matched TSC writes into generations.
@@ -2102,12 +2101,10 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr)
 	raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
 
 	spin_lock(&kvm->arch.pvclock_gtod_sync_lock);
-	if (!matched) {
-		kvm->arch.nr_vcpus_matched_tsc = 0;
-	} else if (!already_matched) {
+	if (matched)
 		kvm->arch.nr_vcpus_matched_tsc++;
-	}
-
+	else
+		kvm->arch.nr_vcpus_matched_tsc = 0;
 	kvm_track_tsc_matching(vcpu);
 	spin_unlock(&kvm->arch.pvclock_gtod_sync_lock);
 }
-- 
2.27.0.290.gba653c62da-goog


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH 2/2] kvm: x86: Track synchronized vCPUs in nr_vcpus_matched_tsc
  2020-06-15 23:07 [PATCH 1/2] kvm: x86: Refine kvm_write_tsc synchronization generations Jim Mattson
@ 2020-06-15 23:07 ` Jim Mattson
  2020-06-22 20:05 ` [PATCH 1/2] kvm: x86: Refine kvm_write_tsc synchronization generations Jim Mattson
  2020-06-22 22:32 ` Paolo Bonzini
  2 siblings, 0 replies; 8+ messages in thread
From: Jim Mattson @ 2020-06-15 23:07 UTC (permalink / raw)
  To: kvm, Paolo Bonzini; +Cc: Jim Mattson, Peter Shier, Oliver Upton

Use nr_vcpus_matched_tsc to track the number of vCPUs that have
synchronized TSCs in the current TSC synchronization
generation. Previously, we recorded the number of vCPUs with
synchronized TSCs minus one.
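
The change in convention amounts to the following (a simplified standalone
sketch for illustration, not the kernel code):

```c
#include <stdbool.h>

/*
 * Sketch of the "are all vCPUs synchronized?" check before and after
 * this patch.  Previously nr_vcpus_matched_tsc held (matched vCPUs - 1),
 * so the check had to add one; now the counter holds the count directly.
 */
static bool all_matched_old(int nr_vcpus_matched_tsc, int online_vcpus)
{
	return nr_vcpus_matched_tsc + 1 == online_vcpus;
}

static bool all_matched_new(int nr_vcpus_matched_tsc, int online_vcpus)
{
	return nr_vcpus_matched_tsc == online_vcpus;
}
```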

Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Reviewed-by: Oliver Upton <oupton@google.com>
---
 arch/x86/kvm/x86.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2555ea2cd91e..013265d61363 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1921,7 +1921,7 @@ static void kvm_track_tsc_matching(struct kvm_vcpu *vcpu)
 	struct kvm_arch *ka = &vcpu->kvm->arch;
 	struct pvclock_gtod_data *gtod = &pvclock_gtod_data;
 
-	vcpus_matched = (ka->nr_vcpus_matched_tsc + 1 ==
+	vcpus_matched = (ka->nr_vcpus_matched_tsc ==
 			 atomic_read(&vcpu->kvm->online_vcpus));
 
 	/*
@@ -2104,7 +2104,7 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr)
 	if (matched)
 		kvm->arch.nr_vcpus_matched_tsc++;
 	else
-		kvm->arch.nr_vcpus_matched_tsc = 0;
+		kvm->arch.nr_vcpus_matched_tsc = 1;
 	kvm_track_tsc_matching(vcpu);
 	spin_unlock(&kvm->arch.pvclock_gtod_sync_lock);
 }
@@ -2295,7 +2295,7 @@ static void pvclock_update_vm_gtod_copy(struct kvm *kvm)
 	int vclock_mode;
 	bool host_tsc_clocksource, vcpus_matched;
 
-	vcpus_matched = (ka->nr_vcpus_matched_tsc + 1 ==
+	vcpus_matched = (ka->nr_vcpus_matched_tsc ==
 			atomic_read(&kvm->online_vcpus));
 
 	/*
-- 
2.27.0.290.gba653c62da-goog



* Re: [PATCH 1/2] kvm: x86: Refine kvm_write_tsc synchronization generations
  2020-06-15 23:07 [PATCH 1/2] kvm: x86: Refine kvm_write_tsc synchronization generations Jim Mattson
  2020-06-15 23:07 ` [PATCH 2/2] kvm: x86: Track synchronized vCPUs in nr_vcpus_matched_tsc Jim Mattson
@ 2020-06-22 20:05 ` Jim Mattson
  2020-06-22 22:32 ` Paolo Bonzini
  2 siblings, 0 replies; 8+ messages in thread
From: Jim Mattson @ 2020-06-22 20:05 UTC (permalink / raw)
  To: kvm list, Paolo Bonzini; +Cc: Peter Shier, Oliver Upton

On Mon, Jun 15, 2020 at 4:07 PM Jim Mattson <jmattson@google.com> wrote:
>
> Start a new TSC synchronization generation whenever the
> IA32_TIME_STAMP_COUNTER MSR is written on a vCPU that has already
> participated in the current TSC synchronization generation.
>
> [...]
>
Ping.


* Re: [PATCH 1/2] kvm: x86: Refine kvm_write_tsc synchronization generations
  2020-06-15 23:07 [PATCH 1/2] kvm: x86: Refine kvm_write_tsc synchronization generations Jim Mattson
  2020-06-15 23:07 ` [PATCH 2/2] kvm: x86: Track synchronized vCPUs in nr_vcpus_matched_tsc Jim Mattson
  2020-06-22 20:05 ` [PATCH 1/2] kvm: x86: Refine kvm_write_tsc synchronization generations Jim Mattson
@ 2020-06-22 22:32 ` Paolo Bonzini
  2020-06-22 22:36   ` Jim Mattson
  2 siblings, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2020-06-22 22:32 UTC (permalink / raw)
  To: Jim Mattson, kvm; +Cc: Peter Shier, Oliver Upton

On 16/06/20 01:07, Jim Mattson wrote:
> +		} else if (vcpu->arch.this_tsc_generation !=
> +			   kvm->arch.cur_tsc_generation) {
>  			u64 tsc_exp = kvm->arch.last_tsc_write +
>  						nsec_to_cycles(vcpu, elapsed);
>  			u64 tsc_hz = vcpu->arch.virtual_tsc_khz * 1000LL;

Can this cause the same vCPU to be counted multiple times in
nr_vcpus_matched_tsc?  I think you need to keep already_matched (see
also the commit message for 0d3da0d26e3c, "KVM: x86: fix TSC matching",
2014-07-09, which introduced that variable).

Thanks,

Paolo

> @@ -2062,7 +2062,6 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr)
>  			offset = kvm_compute_tsc_offset(vcpu, data);
>  		}
>  		matched = true;
> -		already_matched = (vcpu->arch.this_tsc_generation == kvm->arch.cur_tsc_generation);
>  	} else {
>  		/*
>  		 * We split periods of matched TSC writes into generations.
> @@ -2102,12 +2101,10 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr)
>  	raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
>  
>  	spin_lock(&kvm->arch.pvclock_gtod_sync_lock);
> -	if (!matched) {
> -		kvm->arch.nr_vcpus_matched_tsc = 0;
> -	} else if (!already_matched) {
> +	if (matched)
>  		kvm->arch.nr_vcpus_matched_tsc++;
> -	}
> -
> +	else
> +		kvm->arch.nr_vcpus_matched_tsc = 0;



* Re: [PATCH 1/2] kvm: x86: Refine kvm_write_tsc synchronization generations
  2020-06-22 22:32 ` Paolo Bonzini
@ 2020-06-22 22:36   ` Jim Mattson
  2020-06-22 23:02     ` Paolo Bonzini
  0 siblings, 1 reply; 8+ messages in thread
From: Jim Mattson @ 2020-06-22 22:36 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm list, Peter Shier, Oliver Upton

On Mon, Jun 22, 2020 at 3:33 PM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 16/06/20 01:07, Jim Mattson wrote:
> > +             } else if (vcpu->arch.this_tsc_generation !=
> > +                        kvm->arch.cur_tsc_generation) {
> >                       u64 tsc_exp = kvm->arch.last_tsc_write +
> >                                               nsec_to_cycles(vcpu, elapsed);
> >                       u64 tsc_hz = vcpu->arch.virtual_tsc_khz * 1000LL;
>
> Can this cause the same vCPU to be counted multiple times in
> nr_vcpus_matched_tsc?  I think you need to keep already_matched (see
> also the commit message for 0d3da0d26e3c, "KVM: x86: fix TSC matching",
> 2014-07-09, which introduced that variable).

No. In the case where we previously might have counted the vCPU a
second time, we now start a brand-new generation, and the vCPU is the
first to be counted for the new generation.


* Re: [PATCH 1/2] kvm: x86: Refine kvm_write_tsc synchronization generations
  2020-06-22 22:36   ` Jim Mattson
@ 2020-06-22 23:02     ` Paolo Bonzini
  2020-06-23 17:00       ` Jim Mattson
  0 siblings, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2020-06-22 23:02 UTC (permalink / raw)
  To: Jim Mattson; +Cc: kvm list, Peter Shier, Oliver Upton

On 23/06/20 00:36, Jim Mattson wrote:
> On Mon, Jun 22, 2020 at 3:33 PM Paolo Bonzini <pbonzini@redhat.com> wrote:
>>
>> On 16/06/20 01:07, Jim Mattson wrote:
>>> +             } else if (vcpu->arch.this_tsc_generation !=
>>> +                        kvm->arch.cur_tsc_generation) {
>>>                       u64 tsc_exp = kvm->arch.last_tsc_write +
>>>                                               nsec_to_cycles(vcpu, elapsed);
>>>                       u64 tsc_hz = vcpu->arch.virtual_tsc_khz * 1000LL;
>>
>> Can this cause the same vCPU to be counted multiple times in
>> nr_vcpus_matched_tsc?  I think you need to keep already_matched (see
>> also the commit message for 0d3da0d26e3c, "KVM: x86: fix TSC matching",
>> 2014-07-09, which introduced that variable).
> 
> No. In the case where we previously might have counted the vCPU a
> second time, we now start a brand-new generation, and the vCPU is the
> first to be counted for the new generation.

Right, because synchronizing is false.  But I'm worried that a migration
at the wrong time would cause a wrong start of a new generation.

start:
	all TSCs are 0

mid of synchronization
	some TSCs are adjusted by a small amount, gen 1 is started

----------------- migration -------------

start:
	all TSCs are 0

restore state
	all TSCs are written with KVM_SET_MSR, gen 1 is started and
	completed

after execution restarts
	guest finishes updating TSCs, gen 2 starts

and now nr_vcpus_matched_tsc never reaches the maximum.
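
This scenario can be traced with a toy model of the patched logic
(standalone C for illustration, not kernel code; it assumes the combined
semantics of both patches, including patch 2's counting convention):

```c
#include <stdint.h>

/*
 * Toy model of the patched logic: a synchronizing write from a vCPU
 * that already belongs to the current generation starts a new
 * generation with that vCPU as its first member.
 */
struct toy_kvm { uint64_t cur_gen; int nr_matched; };
struct toy_vcpu { uint64_t this_gen; };

static void toy_sync_write(struct toy_kvm *kvm, struct toy_vcpu *vcpu)
{
	if (vcpu->this_gen == kvm->cur_gen) {
		kvm->cur_gen++;		/* vCPU seen before: new generation */
		kvm->nr_matched = 1;	/* first member of the new generation */
	} else {
		kvm->nr_matched++;	/* joins the current generation */
	}
	vcpu->this_gen = kvm->cur_gen;
}

/*
 * The migration scenario above, with two vCPUs: restore writes both
 * TSCs (completing a generation), then the guest, which was mid-sync
 * when it was migrated, rewrites only vcpu1's TSC.
 */
static int run_migration_scenario(void)
{
	struct toy_kvm kvm = { .cur_gen = 1, .nr_matched = 0 };
	struct toy_vcpu vcpu0 = { 0 }, vcpu1 = { 0 };

	toy_sync_write(&kvm, &vcpu0);	/* restore: nr_matched = 1 */
	toy_sync_write(&kvm, &vcpu1);	/* restore: nr_matched = 2, done */
	toy_sync_write(&kvm, &vcpu1);	/* guest resumes its own sync:
					   new generation, nr_matched = 1 */
	return kvm.nr_matched;
}
```

In this trace the guest's resumed write opens a new generation with a
single member, and since vcpu0 never writes again the count stays below
the number of online vCPUs.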

Paolo



* Re: [PATCH 1/2] kvm: x86: Refine kvm_write_tsc synchronization generations
  2020-06-22 23:02     ` Paolo Bonzini
@ 2020-06-23 17:00       ` Jim Mattson
  2020-06-23 17:02         ` Oliver Upton
  0 siblings, 1 reply; 8+ messages in thread
From: Jim Mattson @ 2020-06-23 17:00 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm list, Peter Shier, Oliver Upton

On Mon, Jun 22, 2020 at 4:02 PM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 23/06/20 00:36, Jim Mattson wrote:
> > On Mon, Jun 22, 2020 at 3:33 PM Paolo Bonzini <pbonzini@redhat.com> wrote:
> >>
> >> On 16/06/20 01:07, Jim Mattson wrote:
> >>> +             } else if (vcpu->arch.this_tsc_generation !=
> >>> +                        kvm->arch.cur_tsc_generation) {
> >>>                       u64 tsc_exp = kvm->arch.last_tsc_write +
> >>>                                               nsec_to_cycles(vcpu, elapsed);
> >>>                       u64 tsc_hz = vcpu->arch.virtual_tsc_khz * 1000LL;
> >>
> >> Can this cause the same vCPU to be counted multiple times in
> >> nr_vcpus_matched_tsc?  I think you need to keep already_matched (see
> >> also the commit message for 0d3da0d26e3c, "KVM: x86: fix TSC matching",
> >> 2014-07-09, which introduced that variable).
> >
> > No. In the case where we previously might have counted the vCPU a
> > second time, we now start a brand-new generation, and the vCPU is the
> > first to be counted for the new generation.
>
> Right, because synchronizing is false.  But I'm worried that a migration
> at the wrong time would cause a wrong start of a new generation.
>
> start:
>         all TSCs are 0
>
> mid of synchronization
>         some TSCs are adjusted by a small amount, gen 1 is started
>
> ----------------- migration -------------
>
> start:
>         all TSCs are 0
>
> restore state
>         all TSCs are written with KVM_SET_MSR, gen 1 is started and
>         completed
>
> after execution restarts
>         guest finishes updating TSCs, gen 2 starts
>
> and now nr_vcpus_matched_tsc never reaches the maximum.
>
> Paolo

Hmmm...

Perhaps these heuristics are, in fact, irreparable. I'll ask Oliver to
upstream our ioctls for {GET,SET}_TSC_OFFSET. These ioctls don't help
on ancient Intel CPUs without TSC offsetting, but they do the trick on
most CPUs.
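
As a rough illustration only, userspace migration code using such ioctls
might look like the following. The ioctl names, request encodings, and
numbers here are hypothetical placeholders; these ioctls were not part of
the upstream KVM ABI at the time:

```c
#include <stdint.h>
#include <sys/ioctl.h>

/*
 * HYPOTHETICAL sketch: explicit TSC-offset save/restore ioctls, as
 * suggested above.  The names and request encodings below are made up
 * for illustration and do not reflect any real KVM ABI.
 */
#define KVM_GET_TSC_OFFSET _IOR('k', 0xcc, uint64_t)	/* made-up number */
#define KVM_SET_TSC_OFFSET _IOW('k', 0xcd, uint64_t)	/* made-up number */

/* On the source: record the vCPU's TSC offset during save. */
static int save_tsc_offset(int vcpu_fd, uint64_t *offset)
{
	return ioctl(vcpu_fd, KVM_GET_TSC_OFFSET, offset);
}

/* On the destination: reinstall the offset directly, with no need for
 * the write-value heuristics in kvm_write_tsc. */
static int restore_tsc_offset(int vcpu_fd, uint64_t offset)
{
	return ioctl(vcpu_fd, KVM_SET_TSC_OFFSET, &offset);
}
```

The point of the design is that migration moves the offset itself rather
than a TSC value, so no guessing about synchronization intent is needed.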


* Re: [PATCH 1/2] kvm: x86: Refine kvm_write_tsc synchronization generations
  2020-06-23 17:00       ` Jim Mattson
@ 2020-06-23 17:02         ` Oliver Upton
  0 siblings, 0 replies; 8+ messages in thread
From: Oliver Upton @ 2020-06-23 17:02 UTC (permalink / raw)
  To: Jim Mattson; +Cc: Paolo Bonzini, kvm list, Peter Shier

On Tue, Jun 23, 2020 at 10:00 AM Jim Mattson <jmattson@google.com> wrote:
>
> [...]
>
> Hmmm...
>
> Perhaps these heuristics are, in fact, irreparable. I'll ask Oliver to
> upstream our ioctls for {GET,SET}_TSC_OFFSET. These ioctls don't help
> on ancient Intel CPUs without TSC offsetting, but they do the trick on
> most CPUs.

Indeed, I'll rebase and send these out soon.

