From: Roman Kagan
Subject: Re: [PATCH kvm-unit-tests] KVM: x86: add hyperv clock test case
Date: Wed, 25 May 2016 21:33:07 +0300
Message-ID: <20160525183306.GB18943@rkaganb.sw.ru>
References: <1453989899-30351-1-git-send-email-pbonzini@redhat.com>
 <20160128162206.GA29344@rkaganb.sw.ru>
 <56B22CB4.9090404@redhat.com>
 <20160421170157.GA16360@rkaganb.sw.ru>
 <20160422133240.GA9108@rkaganb.sw.ru>
 <571A68AF.1030907@redhat.com>
 <20160425084722.GA31039@rkaganb.sw.ru>
 <20160426103455.GA21656@rkaganb.sw.ru>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
To: Paolo Bonzini
Cc: , "Denis V. Lunev" , Marcelo Tosatti
In-Reply-To: <20160426103455.GA21656@rkaganb.sw.ru>

On Tue, Apr 26, 2016 at 01:34:56PM +0300, Roman Kagan wrote:
> On Mon, Apr 25, 2016 at 11:47:23AM +0300, Roman Kagan wrote:
> > On Fri, Apr 22, 2016 at 08:08:47PM +0200, Paolo Bonzini wrote:
> > > On 22/04/2016 15:32, Roman Kagan wrote:
> > > > The first value is derived from the kvm_clock's tsc_to_system_mul and
> > > > tsc_shift, and matches the host's vcpu->hw_tsc_khz.  The second is
> > > > calibrated using the emulated HPET.  The difference is those +14 ppm.
> > > >
> > > > This is on an i7-2600, invariant TSC present, TSC scaling not present.
> > > >
> > > > I'll dig further, but I'd appreciate any comment on whether it was
> > > > within tolerance or not.
> > >
> > > The solution to the bug is to change the Hyper-V reference time MSR to
> > > use the same formula as the Hyper-V TSC-based clock.  Likewise,
> > > KVM_GET_CLOCK and KVM_SET_CLOCK should not use ktime_get_ns().
> >
> > Umm, I'm not sure it's a good idea...
> >
> > E.g. the virtualized HPET sits in userspace and thus uses
> > clock_gettime(CLOCK_MONOTONIC), so the drift will remain.
> >
> > AFAICT the root cause is the following: the KVM master clock uses the
> > same multiplier/shift as the vsyscall time in host userspace.  However,
> > the offsets in vsyscall_gtod_data get updated all the time with
> > corrections from NTP and so on.  Therefore, even if the TSC rate is
> > somewhat miscalibrated, the error is kept small in the vsyscall time
> > functions.  OTOH the offsets in the KVM clock are basically never
> > updated, so the error keeps growing linearly over time.
>
> This seems to be due to a typo:
>
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5819,7 +5819,7 @@ static int pvclock_gtod_notify(struct notifier_block *nb, unsigned long unused,
>  	/* disable master clock if host does not trust, or does not
>  	 * use, TSC clocksource
>  	 */
> -	if (gtod->clock.vclock_mode != VCLOCK_TSC &&
> +	if (gtod->clock.vclock_mode == VCLOCK_TSC &&
>  	    atomic_read(&kvm_guest_has_master_clock) != 0)
>  		queue_work(system_long_wq, &pvclock_gtod_work);
>
> As a result, the global pvclock_gtod_data was kept up to date, but the
> requests to update the per-VM copies were never issued.
>
> With the patch I'm now seeing different test failures, which I'm looking
> into.
>
> Meanwhile I'm wondering if this scheme is not too costly: on my machine
> pvclock_gtod_notify() is called at a kHz rate, and the work it schedules
> does
>
> static void pvclock_gtod_update_fn(struct work_struct *work)
> {
> [...]
> 	spin_lock(&kvm_lock);
> 	list_for_each_entry(kvm, &vm_list, vm_list)
> 		kvm_for_each_vcpu(i, vcpu, kvm)
> 			kvm_make_request(KVM_REQ_MASTERCLOCK_UPDATE, vcpu);
> 	atomic_set(&kvm_guest_has_master_clock, 0);
> 	spin_unlock(&kvm_lock);
> }
>
> KVM_REQ_MASTERCLOCK_UPDATE makes all VCPUs synchronize:
>
> static void kvm_gen_update_masterclock(struct kvm *kvm)
> {
> [...]
> 	spin_lock(&ka->pvclock_gtod_sync_lock);
> 	kvm_make_mclock_inprogress_request(kvm);
> 	/* no guest entries from this point */
> 	pvclock_update_vm_gtod_copy(kvm);
>
> 	kvm_for_each_vcpu(i, vcpu, kvm)
> 		kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
>
> 	/* guest entries allowed */
> 	kvm_for_each_vcpu(i, vcpu, kvm)
> 		clear_bit(KVM_REQ_MCLOCK_INPROGRESS, &vcpu->requests);
>
> 	spin_unlock(&ka->pvclock_gtod_sync_lock);
> [...]
> }
>
> so on a host with many VMs it may become an issue.

Ping

Roman.