* [PATCH v3] KVM: X86: Fix softlockup when getting the current kvmclock
@ 2017-11-09 2:52 Wanpeng Li
2017-11-09 4:39 ` Konrad Rzeszutek Wilk
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Wanpeng Li @ 2017-11-09 2:52 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: Paolo Bonzini, Radim Krčmář, Wanpeng Li
From: Wanpeng Li <wanpeng.li@hotmail.com>
watchdog: BUG: soft lockup - CPU#6 stuck for 22s! [qemu-system-x86:10185]
CPU: 6 PID: 10185 Comm: qemu-system-x86 Tainted: G OE 4.14.0-rc4+ #4
RIP: 0010:kvm_get_time_scale+0x4e/0xa0 [kvm]
Call Trace:
? get_kvmclock_ns+0xa3/0x140 [kvm]
get_time_ref_counter+0x5a/0x80 [kvm]
kvm_hv_process_stimers+0x120/0x5f0 [kvm]
? kvm_hv_process_stimers+0x120/0x5f0 [kvm]
? preempt_schedule+0x27/0x30
? ___preempt_schedule+0x16/0x18
kvm_arch_vcpu_ioctl_run+0x4b4/0x1690 [kvm]
? kvm_arch_vcpu_load+0x47/0x230 [kvm]
kvm_vcpu_ioctl+0x33a/0x620 [kvm]
? kvm_vcpu_ioctl+0x33a/0x620 [kvm]
? kvm_vm_ioctl_check_extension_generic+0x3b/0x40 [kvm]
? kvm_dev_ioctl+0x279/0x6c0 [kvm]
do_vfs_ioctl+0xa1/0x5d0
? __fget+0x73/0xa0
SyS_ioctl+0x79/0x90
entry_SYSCALL_64_fastpath+0x1e/0xa9
This can be reproduced by running kvm-unit-tests/hyperv_stimer.flat and
cpu-hotplug stress tests simultaneously. __this_cpu_read(cpu_tsc_khz) returns 0
(cleared in kvmclock_cpu_down_prep()) while the pCPU is being unplugged, which
causes kvm_get_time_scale() to get stuck in an infinite loop.
This patch fixes it by treating a pCPU that is being unplugged as not using the
master clock.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
---
arch/x86/kvm/x86.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 03869eb..d61dcce3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1795,10 +1795,13 @@ u64 get_kvmclock_ns(struct kvm *kvm)
/* both __this_cpu_read() and rdtsc() should be on the same cpu */
get_cpu();
- kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
- &hv_clock.tsc_shift,
- &hv_clock.tsc_to_system_mul);
- ret = __pvclock_read_cycles(&hv_clock, rdtsc());
+ if (__this_cpu_read(cpu_tsc_khz)) {
+ kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
+ &hv_clock.tsc_shift,
+ &hv_clock.tsc_to_system_mul);
+ ret = __pvclock_read_cycles(&hv_clock, rdtsc());
+ } else
+ ret = ktime_get_boot_ns() + ka->kvmclock_offset;
put_cpu();
--
2.7.4
^ permalink raw reply related [flat|nested] 6+ messages in thread
* Re: [PATCH v3] KVM: X86: Fix softlockup when getting the current kvmclock
2017-11-09 2:52 [PATCH v3] KVM: X86: Fix softlockup when getting the current kvmclock Wanpeng Li
@ 2017-11-09 4:39 ` Konrad Rzeszutek Wilk
2017-11-09 5:47 ` Wanpeng Li
2017-11-15 1:17 ` Wanpeng Li
2017-11-15 10:29 ` David Hildenbrand
2 siblings, 1 reply; 6+ messages in thread
From: Konrad Rzeszutek Wilk @ 2017-11-09 4:39 UTC (permalink / raw)
To: Wanpeng Li
Cc: linux-kernel, kvm, Paolo Bonzini, Radim Krčmář,
Wanpeng Li
On Wed, Nov 08, 2017 at 06:52:12PM -0800, Wanpeng Li wrote:
> From: Wanpeng Li <wanpeng.li@hotmail.com>
>
> watchdog: BUG: soft lockup - CPU#6 stuck for 22s! [qemu-system-x86:10185]
> CPU: 6 PID: 10185 Comm: qemu-system-x86 Tainted: G OE 4.14.0-rc4+ #4
> RIP: 0010:kvm_get_time_scale+0x4e/0xa0 [kvm]
> Call Trace:
> ? get_kvmclock_ns+0xa3/0x140 [kvm]
> get_time_ref_counter+0x5a/0x80 [kvm]
> kvm_hv_process_stimers+0x120/0x5f0 [kvm]
> ? kvm_hv_process_stimers+0x120/0x5f0 [kvm]
> ? preempt_schedule+0x27/0x30
> ? ___preempt_schedule+0x16/0x18
> kvm_arch_vcpu_ioctl_run+0x4b4/0x1690 [kvm]
> ? kvm_arch_vcpu_load+0x47/0x230 [kvm]
> kvm_vcpu_ioctl+0x33a/0x620 [kvm]
> ? kvm_vcpu_ioctl+0x33a/0x620 [kvm]
> ? kvm_vm_ioctl_check_extension_generic+0x3b/0x40 [kvm]
> ? kvm_dev_ioctl+0x279/0x6c0 [kvm]
> do_vfs_ioctl+0xa1/0x5d0
> ? __fget+0x73/0xa0
> SyS_ioctl+0x79/0x90
> entry_SYSCALL_64_fastpath+0x1e/0xa9
>
> This can be reproduced by running kvm-unit-tests/hyperv_stimer.flat and
> cpu-hotplug stress tests simultaneously. __this_cpu_read(cpu_tsc_khz) returns 0
> (cleared in kvmclock_cpu_down_prep()) while the pCPU is being unplugged, which
> causes kvm_get_time_scale() to get stuck in an infinite loop.
>
> This patch fixes it by treating a pCPU that is being unplugged as not using the
> master clock.
>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krčmář <rkrcmar@redhat.com>
> Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
> ---
> arch/x86/kvm/x86.c | 11 +++++++----
> 1 file changed, 7 insertions(+), 4 deletions(-)
Could you include the v1, v2, and v3 changes? I kind of got lost
as to why the earlier versions were incorrect.
Usually one does it by saying:
v1: New version
v2: Changed xyz, etc.
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 03869eb..d61dcce3 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1795,10 +1795,13 @@ u64 get_kvmclock_ns(struct kvm *kvm)
> /* both __this_cpu_read() and rdtsc() should be on the same cpu */
> get_cpu();
>
> - kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
> - &hv_clock.tsc_shift,
> - &hv_clock.tsc_to_system_mul);
> - ret = __pvclock_read_cycles(&hv_clock, rdtsc());
> + if (__this_cpu_read(cpu_tsc_khz)) {
> + kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
> + &hv_clock.tsc_shift,
> + &hv_clock.tsc_to_system_mul);
> + ret = __pvclock_read_cycles(&hv_clock, rdtsc());
> + } else
> + ret = ktime_get_boot_ns() + ka->kvmclock_offset;
>
> put_cpu();
>
> --
> 2.7.4
>
* Re: [PATCH v3] KVM: X86: Fix softlockup when getting the current kvmclock
2017-11-09 4:39 ` Konrad Rzeszutek Wilk
@ 2017-11-09 5:47 ` Wanpeng Li
0 siblings, 0 replies; 6+ messages in thread
From: Wanpeng Li @ 2017-11-09 5:47 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk
Cc: linux-kernel, kvm, Paolo Bonzini, Radim Krčmář,
Wanpeng Li
2017-11-09 12:39 GMT+08:00 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>:
> On Wed, Nov 08, 2017 at 06:52:12PM -0800, Wanpeng Li wrote:
>> From: Wanpeng Li <wanpeng.li@hotmail.com>
>>
>> watchdog: BUG: soft lockup - CPU#6 stuck for 22s! [qemu-system-x86:10185]
>> CPU: 6 PID: 10185 Comm: qemu-system-x86 Tainted: G OE 4.14.0-rc4+ #4
>> RIP: 0010:kvm_get_time_scale+0x4e/0xa0 [kvm]
>> Call Trace:
>> ? get_kvmclock_ns+0xa3/0x140 [kvm]
>> get_time_ref_counter+0x5a/0x80 [kvm]
>> kvm_hv_process_stimers+0x120/0x5f0 [kvm]
>> ? kvm_hv_process_stimers+0x120/0x5f0 [kvm]
>> ? preempt_schedule+0x27/0x30
>> ? ___preempt_schedule+0x16/0x18
>> kvm_arch_vcpu_ioctl_run+0x4b4/0x1690 [kvm]
>> ? kvm_arch_vcpu_load+0x47/0x230 [kvm]
>> kvm_vcpu_ioctl+0x33a/0x620 [kvm]
>> ? kvm_vcpu_ioctl+0x33a/0x620 [kvm]
>> ? kvm_vm_ioctl_check_extension_generic+0x3b/0x40 [kvm]
>> ? kvm_dev_ioctl+0x279/0x6c0 [kvm]
>> do_vfs_ioctl+0xa1/0x5d0
>> ? __fget+0x73/0xa0
>> SyS_ioctl+0x79/0x90
>> entry_SYSCALL_64_fastpath+0x1e/0xa9
>>
>> This can be reproduced by running kvm-unit-tests/hyperv_stimer.flat and
>> cpu-hotplug stress tests simultaneously. __this_cpu_read(cpu_tsc_khz) returns 0
>> (cleared in kvmclock_cpu_down_prep()) while the pCPU is being unplugged, which
>> causes kvm_get_time_scale() to get stuck in an infinite loop.
>>
>> This patch fixes it by treating a pCPU that is being unplugged as not using the
>> master clock.
>>
>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>> Cc: Radim Krčmář <rkrcmar@redhat.com>
>> Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
>> ---
>> arch/x86/kvm/x86.c | 11 +++++++----
>> 1 file changed, 7 insertions(+), 4 deletions(-)
>
> Could you include the v1, v2, and v3 changes? I kind of got lost
> as to why the earlier versions were incorrect.
>
> Usually one does it by saying:
> v1: New version
> v2: Changed xyz, etc.
Yeah, I forgot to add this:
v2 -> v3:
* treat the unhotplug pCPU as not using master clock
v1 -> v2:
* avoid infinite loop in kvm_get_time_scale()
Regards,
Wanpeng Li
>
>>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 03869eb..d61dcce3 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -1795,10 +1795,13 @@ u64 get_kvmclock_ns(struct kvm *kvm)
>> /* both __this_cpu_read() and rdtsc() should be on the same cpu */
>> get_cpu();
>>
>> - kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
>> - &hv_clock.tsc_shift,
>> - &hv_clock.tsc_to_system_mul);
>> - ret = __pvclock_read_cycles(&hv_clock, rdtsc());
>> + if (__this_cpu_read(cpu_tsc_khz)) {
>> + kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
>> + &hv_clock.tsc_shift,
>> + &hv_clock.tsc_to_system_mul);
>> + ret = __pvclock_read_cycles(&hv_clock, rdtsc());
>> + } else
>> + ret = ktime_get_boot_ns() + ka->kvmclock_offset;
>>
>> put_cpu();
>>
>> --
>> 2.7.4
>>
* Re: [PATCH v3] KVM: X86: Fix softlockup when getting the current kvmclock
2017-11-09 2:52 [PATCH v3] KVM: X86: Fix softlockup when getting the current kvmclock Wanpeng Li
2017-11-09 4:39 ` Konrad Rzeszutek Wilk
@ 2017-11-15 1:17 ` Wanpeng Li
2017-11-16 18:25 ` Radim Krčmář
2017-11-15 10:29 ` David Hildenbrand
2 siblings, 1 reply; 6+ messages in thread
From: Wanpeng Li @ 2017-11-15 1:17 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: Paolo Bonzini, Radim Krčmář, Wanpeng Li
Ping, :)
2017-11-09 10:52 GMT+08:00 Wanpeng Li <kernellwp@gmail.com>:
> From: Wanpeng Li <wanpeng.li@hotmail.com>
>
> watchdog: BUG: soft lockup - CPU#6 stuck for 22s! [qemu-system-x86:10185]
> CPU: 6 PID: 10185 Comm: qemu-system-x86 Tainted: G OE 4.14.0-rc4+ #4
> RIP: 0010:kvm_get_time_scale+0x4e/0xa0 [kvm]
> Call Trace:
> ? get_kvmclock_ns+0xa3/0x140 [kvm]
> get_time_ref_counter+0x5a/0x80 [kvm]
> kvm_hv_process_stimers+0x120/0x5f0 [kvm]
> ? kvm_hv_process_stimers+0x120/0x5f0 [kvm]
> ? preempt_schedule+0x27/0x30
> ? ___preempt_schedule+0x16/0x18
> kvm_arch_vcpu_ioctl_run+0x4b4/0x1690 [kvm]
> ? kvm_arch_vcpu_load+0x47/0x230 [kvm]
> kvm_vcpu_ioctl+0x33a/0x620 [kvm]
> ? kvm_vcpu_ioctl+0x33a/0x620 [kvm]
> ? kvm_vm_ioctl_check_extension_generic+0x3b/0x40 [kvm]
> ? kvm_dev_ioctl+0x279/0x6c0 [kvm]
> do_vfs_ioctl+0xa1/0x5d0
> ? __fget+0x73/0xa0
> SyS_ioctl+0x79/0x90
> entry_SYSCALL_64_fastpath+0x1e/0xa9
>
> This can be reproduced by running kvm-unit-tests/hyperv_stimer.flat and
> cpu-hotplug stress tests simultaneously. __this_cpu_read(cpu_tsc_khz) returns 0
> (cleared in kvmclock_cpu_down_prep()) while the pCPU is being unplugged, which
> causes kvm_get_time_scale() to get stuck in an infinite loop.
>
> This patch fixes it by treating a pCPU that is being unplugged as not using the
> master clock.
>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krčmář <rkrcmar@redhat.com>
> Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
> ---
> arch/x86/kvm/x86.c | 11 +++++++----
> 1 file changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 03869eb..d61dcce3 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1795,10 +1795,13 @@ u64 get_kvmclock_ns(struct kvm *kvm)
> /* both __this_cpu_read() and rdtsc() should be on the same cpu */
> get_cpu();
>
> - kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
> - &hv_clock.tsc_shift,
> - &hv_clock.tsc_to_system_mul);
> - ret = __pvclock_read_cycles(&hv_clock, rdtsc());
> + if (__this_cpu_read(cpu_tsc_khz)) {
> + kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
> + &hv_clock.tsc_shift,
> + &hv_clock.tsc_to_system_mul);
> + ret = __pvclock_read_cycles(&hv_clock, rdtsc());
> + } else
> + ret = ktime_get_boot_ns() + ka->kvmclock_offset;
>
> put_cpu();
>
> --
> 2.7.4
>
* Re: [PATCH v3] KVM: X86: Fix softlockup when getting the current kvmclock
2017-11-09 2:52 [PATCH v3] KVM: X86: Fix softlockup when getting the current kvmclock Wanpeng Li
2017-11-09 4:39 ` Konrad Rzeszutek Wilk
2017-11-15 1:17 ` Wanpeng Li
@ 2017-11-15 10:29 ` David Hildenbrand
2 siblings, 0 replies; 6+ messages in thread
From: David Hildenbrand @ 2017-11-15 10:29 UTC (permalink / raw)
To: Wanpeng Li, linux-kernel, kvm
Cc: Paolo Bonzini, Radim Krčmář, Wanpeng Li
On 09.11.2017 03:52, Wanpeng Li wrote:
> From: Wanpeng Li <wanpeng.li@hotmail.com>
>
> watchdog: BUG: soft lockup - CPU#6 stuck for 22s! [qemu-system-x86:10185]
> CPU: 6 PID: 10185 Comm: qemu-system-x86 Tainted: G OE 4.14.0-rc4+ #4
> RIP: 0010:kvm_get_time_scale+0x4e/0xa0 [kvm]
> Call Trace:
> ? get_kvmclock_ns+0xa3/0x140 [kvm]
> get_time_ref_counter+0x5a/0x80 [kvm]
> kvm_hv_process_stimers+0x120/0x5f0 [kvm]
> ? kvm_hv_process_stimers+0x120/0x5f0 [kvm]
> ? preempt_schedule+0x27/0x30
> ? ___preempt_schedule+0x16/0x18
> kvm_arch_vcpu_ioctl_run+0x4b4/0x1690 [kvm]
> ? kvm_arch_vcpu_load+0x47/0x230 [kvm]
> kvm_vcpu_ioctl+0x33a/0x620 [kvm]
> ? kvm_vcpu_ioctl+0x33a/0x620 [kvm]
> ? kvm_vm_ioctl_check_extension_generic+0x3b/0x40 [kvm]
> ? kvm_dev_ioctl+0x279/0x6c0 [kvm]
> do_vfs_ioctl+0xa1/0x5d0
> ? __fget+0x73/0xa0
> SyS_ioctl+0x79/0x90
> entry_SYSCALL_64_fastpath+0x1e/0xa9
>
> This can be reproduced by running kvm-unit-tests/hyperv_stimer.flat and
> cpu-hotplug stress tests simultaneously. __this_cpu_read(cpu_tsc_khz) returns 0
> (cleared in kvmclock_cpu_down_prep()) while the pCPU is being unplugged, which
> causes kvm_get_time_scale() to get stuck in an infinite loop.
>
> This patch fixes it by treating a pCPU that is being unplugged as not using the
> master clock.
>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krčmář <rkrcmar@redhat.com>
> Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
> ---
> arch/x86/kvm/x86.c | 11 +++++++----
> 1 file changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 03869eb..d61dcce3 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1795,10 +1795,13 @@ u64 get_kvmclock_ns(struct kvm *kvm)
> /* both __this_cpu_read() and rdtsc() should be on the same cpu */
> get_cpu();
>
> - kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
> - &hv_clock.tsc_shift,
> - &hv_clock.tsc_to_system_mul);
> - ret = __pvclock_read_cycles(&hv_clock, rdtsc());
> + if (__this_cpu_read(cpu_tsc_khz)) {
> + kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
> + &hv_clock.tsc_shift,
> + &hv_clock.tsc_to_system_mul);
> + ret = __pvclock_read_cycles(&hv_clock, rdtsc());
> + } else
Can we add a comment like
/* unplugging a physical CPU -> avoid using the master clock */
> + ret = ktime_get_boot_ns() + ka->kvmclock_offset;
>
> put_cpu();
>
>
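With David's suggested comment folded in, the tail of the hunk would read
something like this (a sketch of the suggestion only, not the patch as
applied):

```
	} else {
		/* unplugging a physical CPU -> avoid using the master clock */
		ret = ktime_get_boot_ns() + ka->kvmclock_offset;
	}
```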
Reviewed-by: David Hildenbrand <david@redhat.com>
--
Thanks,
David / dhildenb
* Re: [PATCH v3] KVM: X86: Fix softlockup when getting the current kvmclock
2017-11-15 1:17 ` Wanpeng Li
@ 2017-11-16 18:25 ` Radim Krčmář
0 siblings, 0 replies; 6+ messages in thread
From: Radim Krčmář @ 2017-11-16 18:25 UTC (permalink / raw)
To: Wanpeng Li; +Cc: linux-kernel, kvm, Paolo Bonzini, Wanpeng Li
2017-11-15 09:17+0800, Wanpeng Li:
> Ping, :)
Ah, sorry, I got distracted while learning about the hotplug mechanism.
Indeed we cannot move the callback earlier, because the cpufreq
driver kvm uses on crappy hardware is registered in CPUHP_AP_ONLINE_DYN, which
is way too late.
> 2017-11-09 10:52 GMT+08:00 Wanpeng Li <kernellwp@gmail.com>:
> > From: Wanpeng Li <wanpeng.li@hotmail.com>
> >
> > watchdog: BUG: soft lockup - CPU#6 stuck for 22s! [qemu-system-x86:10185]
> > CPU: 6 PID: 10185 Comm: qemu-system-x86 Tainted: G OE 4.14.0-rc4+ #4
> > RIP: 0010:kvm_get_time_scale+0x4e/0xa0 [kvm]
> > Call Trace:
> > ? get_kvmclock_ns+0xa3/0x140 [kvm]
> > get_time_ref_counter+0x5a/0x80 [kvm]
> > kvm_hv_process_stimers+0x120/0x5f0 [kvm]
> > ? kvm_hv_process_stimers+0x120/0x5f0 [kvm]
> > ? preempt_schedule+0x27/0x30
> > ? ___preempt_schedule+0x16/0x18
> > kvm_arch_vcpu_ioctl_run+0x4b4/0x1690 [kvm]
> > ? kvm_arch_vcpu_load+0x47/0x230 [kvm]
> > kvm_vcpu_ioctl+0x33a/0x620 [kvm]
> > ? kvm_vcpu_ioctl+0x33a/0x620 [kvm]
> > ? kvm_vm_ioctl_check_extension_generic+0x3b/0x40 [kvm]
> > ? kvm_dev_ioctl+0x279/0x6c0 [kvm]
> > do_vfs_ioctl+0xa1/0x5d0
> > ? __fget+0x73/0xa0
> > SyS_ioctl+0x79/0x90
> > entry_SYSCALL_64_fastpath+0x1e/0xa9
> >
> > This can be reproduced by running kvm-unit-tests/hyperv_stimer.flat and
> > cpu-hotplug stress tests simultaneously. __this_cpu_read(cpu_tsc_khz) returns 0
> > (cleared in kvmclock_cpu_down_prep()) while the pCPU is being unplugged, which
> > causes kvm_get_time_scale() to get stuck in an infinite loop.
> >
> > This patch fixes it by treating a pCPU that is being unplugged as not using the
> > master clock.
> >
> > Cc: Paolo Bonzini <pbonzini@redhat.com>
> > Cc: Radim Krčmář <rkrcmar@redhat.com>
> > Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
> > ---
> > arch/x86/kvm/x86.c | 11 +++++++----
> > 1 file changed, 7 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 03869eb..d61dcce3 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -1795,10 +1795,13 @@ u64 get_kvmclock_ns(struct kvm *kvm)
> > /* both __this_cpu_read() and rdtsc() should be on the same cpu */
> > get_cpu();
> >
> > - kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
> > - &hv_clock.tsc_shift,
> > - &hv_clock.tsc_to_system_mul);
> > - ret = __pvclock_read_cycles(&hv_clock, rdtsc());
> > + if (__this_cpu_read(cpu_tsc_khz)) {
> > + kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
It would be safer to read __this_cpu_read(cpu_tsc_khz) only once, but I
think it works for now, as the unplug thread must be scheduled and get_cpu()
prevents changes.
> > + &hv_clock.tsc_shift,
> > + &hv_clock.tsc_to_system_mul);
> > + ret = __pvclock_read_cycles(&hv_clock, rdtsc());
> > + } else
> > + ret = ktime_get_boot_ns() + ka->kvmclock_offset;
Not pretty, but gets the job done ...
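For reference, the read-once variant suggested above would look roughly like
this (an untested sketch, not the patch as applied):

```
	u64 tsc_khz;

	get_cpu();

	tsc_khz = __this_cpu_read(cpu_tsc_khz);
	if (tsc_khz) {
		kvm_get_time_scale(NSEC_PER_SEC, tsc_khz * 1000LL,
				   &hv_clock.tsc_shift,
				   &hv_clock.tsc_to_system_mul);
		ret = __pvclock_read_cycles(&hv_clock, rdtsc());
	} else
		ret = ktime_get_boot_ns() + ka->kvmclock_offset;

	put_cpu();
```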
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
end of thread, other threads:[~2017-11-16 18:25 UTC | newest]
Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-11-09 2:52 [PATCH v3] KVM: X86: Fix softlockup when getting the current kvmclock Wanpeng Li
2017-11-09 4:39 ` Konrad Rzeszutek Wilk
2017-11-09 5:47 ` Wanpeng Li
2017-11-15 1:17 ` Wanpeng Li
2017-11-16 18:25 ` Radim Krčmář
2017-11-15 10:29 ` David Hildenbrand