* Another nice lockdep print in nested SVM code
@ 2022-04-26 16:39 Maxim Levitsky
2022-04-26 16:43 ` Maxim Levitsky
2022-04-26 17:29 ` Paolo Bonzini
0 siblings, 2 replies; 7+ messages in thread
From: Maxim Levitsky @ 2022-04-26 16:39 UTC (permalink / raw)
To: kvm; +Cc: Sean Christopherson
Just for reference, noticed today:
[ 1355.788381] ------------[ cut here ]------------
[ 1355.788820] do not call blocking ops when !TASK_RUNNING; state=1 set at [<00000000cce298a6>] kvm_vcpu_block+0x54/0xa0 [kvm]
[ 1355.789843] WARNING: CPU: 0 PID: 8927 at kernel/sched/core.c:9662 __might_sleep+0x68/0x70
[ 1355.790573] Modules linked in: kvm_amd(O) ccp kvm(O) irqbypass tun xt_CHECKSUM xt_MASQUERADE xt_conntrack ipt_REJECT uinput snd_seq_dummy snd_hrtimer ip6table_mangle ip6table_nat ip6table_filter ip6_tables iptable_mangle iptable_nat nf_nat bridge rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace rfkill sunrpc vfat fat snd_hda_codec_generic snd_hda_intel snd_intel_dspcfg snd_hda_codec snd_hwdep snd_hda_core rng_core snd_seq snd_seq_device input_leds snd_pcm joydev snd_timer snd lpc_ich mfd_core virtio_input efi_pstore pcspkr rtc_cmos button sch_fq_codel ext4 mbcache jbd2 hid_generic usbhid hid virtio_gpu virtio_dma_buf drm_shmem_helper drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops virtio_blk virtio_net drm net_failover virtio_console failover i2c_core crc32_pclmul crc32c_intel xhci_pci xhci_hcd virtio_pci virtio virtio_pci_legacy_dev virtio_pci_modern_dev virtio_ring dm_mirror dm_region_hash dm_log fuse ipv6 autofs4 [last unloaded: ccp]
[ 1355.797870] CPU: 0 PID: 8927 Comm: CPU 0/KVM Tainted: G W O 5.18.0-rc4.unstable #5
[ 1355.798637] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
[ 1355.799295] RIP: 0010:__might_sleep+0x68/0x70
[ 1355.799730] Code: e8 1d fe ff ff 41 5c 41 5d 5d c3 c6 05 5c b3 4c 01 01 90 48 8b 90 80 14 00 00 48 c7 c7 a8 4e 2c 82 48 89 d1 e8 b4 f8 8a 00 90 <0f> 0b 90 90 eb c8 66 90 0f 1f 44 00 00 55 48 89 e5 41 56 41 be 08
[ 1355.801313] RSP: 0018:ffffc90002183be8 EFLAGS: 00010282
[ 1355.801783] RAX: 0000000000000000 RBX: ffffc90002183c40 RCX: 0000000000000000
[ 1355.802402] RDX: ffff88846ce2c1c0 RSI: ffff88846ce1f7a0 RDI: ffff88846ce1f7a0
[ 1355.803046] RBP: ffffc90002183bf8 R08: ffff88847df6efe8 R09: 0000000000013ffb
[ 1355.803690] R10: 0000000000000553 R11: ffff88847df0f000 R12: ffffffffa0921898
[ 1355.804294] R13: 00000000000000a1 R14: 00000000003e0688 R15: ffffea0000000000
[ 1355.804901] FS: 00007f630ffff640(0000) GS:ffff88846ce00000(0000) knlGS:0000000000000000
[ 1355.805641] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1355.806145] CR2: 0000000000000000 CR3: 000000011dc16000 CR4: 0000000000350ef0
[ 1355.806759] Call Trace:
[ 1355.806971] <TASK>
[ 1355.807187] kvm_vcpu_map+0x159/0x190 [kvm]
[ 1355.807628] nested_svm_vmexit+0x4c/0x7f0 [kvm_amd]
[ 1355.808036] ? kvm_vcpu_block+0x54/0xa0 [kvm]
[ 1355.808450] svm_check_nested_events+0x97/0x390 [kvm_amd]
[ 1355.808920] kvm_check_nested_events+0x1c/0x40 [kvm]
[ 1355.809396] kvm_arch_vcpu_runnable+0x4e/0x190 [kvm]
[ 1355.809892] kvm_vcpu_check_block+0x4f/0x100 [kvm]
[ 1355.810349] ? kvm_vcpu_check_block+0x5/0x100 [kvm]
[ 1355.810806] ? kvm_vcpu_block+0x54/0xa0 [kvm]
[ 1355.811259] kvm_vcpu_block+0x6b/0xa0 [kvm]
[ 1355.811666] kvm_vcpu_halt+0x3f/0x490 [kvm]
[ 1355.812049] kvm_arch_vcpu_ioctl_run+0xb0b/0x1d00 [kvm]
[ 1355.812539] ? rcu_read_lock_sched_held+0x16/0x80
[ 1355.813013] ? lock_release+0x1c4/0x270
[ 1355.813365] ? __wake_up_common+0x8d/0x180
[ 1355.813743] ? _raw_spin_unlock_irq+0x28/0x40
[ 1355.814139] kvm_vcpu_ioctl+0x289/0x750 [kvm]
[ 1355.814537] ? kvm_vcpu_ioctl+0x289/0x750 [kvm]
[ 1355.814963] ? lock_release+0x1c4/0x270
[ 1355.815321] ? __fget_files+0xe1/0x1a0
[ 1355.815690] __x64_sys_ioctl+0x8e/0xc0
[ 1355.816019] do_syscall_64+0x36/0x80
[ 1355.816336] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 1355.816776] RIP: 0033:0x7f632cc340ab
[ 1355.817109] Code: ff ff ff 85 c0 79 9b 49 c7 c4 ff ff ff ff 5b 5d 4c 89 e0 41 5c c3 66 0f 1f 84 00 00 00 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 9d bd 0c 00 f7 d8 64 89 01 48
[ 1355.818713] RSP: 002b:00007f630fffd5c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 1355.819381] RAX: ffffffffffffffda RBX: 0000560c199db3e0 RCX: 00007f632cc340ab
[ 1355.820006] RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 0000000000000024
[ 1355.820610] RBP: 00007f630fffd6c0 R08: 0000560c18a71e60 R09: 000000000000ffff
[ 1355.821238] R10: 0000560c185cac1a R11: 0000000000000246 R12: 00007fff68dfeb0e
[ 1355.821866] R13: 00007fff68dfeb0f R14: 0000000000000000 R15: 00007f630ffff640
[ 1355.822490] </TASK>
[ 1355.822690] irq event stamp: 0
[ 1355.823011] hardirqs last enabled at (0): [<0000000000000000>] 0x0
[ 1355.823817] hardirqs last disabled at (0): [<ffffffff811339cb>] copy_process+0x94b/0x1ec0
[ 1355.824820] softirqs last enabled at (0): [<ffffffff811339cb>] copy_process+0x94b/0x1ec0
[ 1355.825856] softirqs last disabled at (0): [<0000000000000000>] 0x0
[ 1355.826633] ---[ end trace 0000000000000000 ]---
I never liked the fact that we map vmcb12 on nested vmexit; we should keep it mapped from nested vmentry to nested vmexit.
I can make a patch for this.
Also, injecting #GP if either of the maps fails is no doubt wrong - something I have known for a while but never got around to fixing.
Best regards,
Maxim Levitsky
* Re: Another nice lockdep print in nested SVM code
From: Maxim Levitsky @ 2022-04-26 16:43 UTC (permalink / raw)
To: kvm; +Cc: Sean Christopherson
On Tue, 2022-04-26 at 19:39 +0300, Maxim Levitsky wrote:
> I never liked the fact that we map vmcb12 on nested vmexit; we should
> keep it mapped from nested vmentry to nested vmexit.
> I can make a patch for this.
>
> Also, injecting #GP if either of the maps fails is no doubt wrong -
> something I have known for a while but never got around to fixing.
Actually for vmrun, that #GP is, I think, sort of correct now - that is what AMD CPUs do on an 'invalid physical address' - but for VM exit we just need to have the vmcb already mapped instead of mapping it again; injecting a #GP at that point, which will go to the nested guest, is just wrong.
Best regards,
Maxim Levitsky
* Re: Another nice lockdep print in nested SVM code
From: Paolo Bonzini @ 2022-04-26 16:47 UTC (permalink / raw)
To: Maxim Levitsky, kvm; +Cc: Sean Christopherson
On 4/26/22 18:43, Maxim Levitsky wrote:
> Actually for vmrun, that #GP is, I think, sort of correct now - that
> is what AMD CPUs do on an 'invalid physical address' - but for VM
> exit we just need to have the vmcb already mapped instead of mapping
> it again; injecting a #GP at that point, which will go to the nested
> guest, is just wrong.
Yeah, in that case I can't think of anything better than triple fault.
Paolo
* Re: Another nice lockdep print in nested SVM code
From: Maxim Levitsky @ 2022-04-26 16:56 UTC (permalink / raw)
To: Paolo Bonzini, kvm; +Cc: Sean Christopherson
On Tue, 2022-04-26 at 18:47 +0200, Paolo Bonzini wrote:
> On 4/26/22 18:43, Maxim Levitsky wrote:
> > Actually for vmrun, that #GP is, I think, sort of correct now - that
> > is what AMD CPUs do on an 'invalid physical address' - but for VM
> > exit we just need to have the vmcb already mapped instead of mapping
> > it again; injecting a #GP at that point, which will go to the nested
> > guest, is just wrong.
>
> Yeah, in that case I can't think of anything better than triple fault.
>
> Paolo
>
But do you think that it would be better to keep the vmcb12 page mapped/pinned while doing the nested run, which would both solve the problem of calling a sleeping function and let us remove the case of the map failing on VM exit?
If I remember correctly, vmx does something like that.
Best regards,
Maxim Levitsky
* Re: Another nice lockdep print in nested SVM code
From: Paolo Bonzini @ 2022-04-26 17:05 UTC (permalink / raw)
To: Maxim Levitsky, kvm; +Cc: Sean Christopherson
On 4/26/22 18:56, Maxim Levitsky wrote:
>> Yeah, in that case I can't think of anything better than triple fault.
>>
>> Paolo
>>
> But do you think that it would be better to keep the vmcb12 page
> mapped/pinned while doing the nested run, which would both solve the
> problem of calling a sleeping function and let us remove the case of
> the map failing on VM exit?
>
> If I remember correctly, vmx does something like that.
Yes, it does, but that's because vmx would have to do a lot of mapping
and unmapping for vmread/vmwrite on highmem (32-bit) kernels. So it
maps at vmptrld and unmaps at vmclear. I wouldn't consider it an
example to follow.
Paolo
* Re: Another nice lockdep print in nested SVM code
From: Maxim Levitsky @ 2022-04-26 17:08 UTC (permalink / raw)
To: Paolo Bonzini, kvm; +Cc: Sean Christopherson
On Tue, 2022-04-26 at 19:05 +0200, Paolo Bonzini wrote:
> On 4/26/22 18:56, Maxim Levitsky wrote:
> > > Yeah, in that case I can't think of anything better than triple fault.
> > >
> > > Paolo
> > >
> > But do you think that it would be better to keep the vmcb12 page
> > mapped/pinned while doing the nested run, which would both solve the
> > problem of calling a sleeping function and let us remove the case of
> > the map failing on VM exit?
> >
> > If I remember correctly, vmx does something like that.
>
> Yes, it does, but that's because vmx would have to do a lot of mapping
> and unmapping for vmread/vmwrite on highmem (32-bit) kernels. So it
> maps at vmptrld and unmaps at vmclear. I wouldn't consider it an
> example to follow.
All right, but what are the downsides of keeping this page mapped?
Best regards,
Maxim Levitsky
>
> Paolo
>
* Re: Another nice lockdep print in nested SVM code
From: Paolo Bonzini @ 2022-04-26 17:29 UTC (permalink / raw)
To: Maxim Levitsky, kvm; +Cc: Sean Christopherson
On 4/26/22 18:39, Maxim Levitsky wrote:
> [ 1355.807187] kvm_vcpu_map+0x159/0x190 [kvm]
> [ 1355.807628] nested_svm_vmexit+0x4c/0x7f0 [kvm_amd]
> [ 1355.808036] ? kvm_vcpu_block+0x54/0xa0 [kvm]
> [ 1355.808450] svm_check_nested_events+0x97/0x390 [kvm_amd]
> [ 1355.808920] kvm_check_nested_events+0x1c/0x40 [kvm]
When called from kvm_vcpu_halt, it is not even necessary to do the
vmexit immediately. kvm_arch_vcpu_runnable should do the right thing
anyway (e.g. kvm_arch_interrupt_allowed checks is_guest_mode for both
VMX and SVM).
The only case that is missing is MTF; it needs to be added to
hv_timer_pending, like this:
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4ff36610af6a..e2e4f60159e9 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1504,7 +1504,7 @@ struct kvm_x86_ops {
 struct kvm_x86_nested_ops {
 	void (*leave_nested)(struct kvm_vcpu *vcpu);
 	int (*check_events)(struct kvm_vcpu *vcpu);
-	bool (*hv_timer_pending)(struct kvm_vcpu *vcpu);
+	bool (*has_events)(struct kvm_vcpu *vcpu);
 	void (*triple_fault)(struct kvm_vcpu *vcpu);
 	int (*get_state)(struct kvm_vcpu *vcpu,
 			 struct kvm_nested_state __user *user_kvm_nested_state,
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 856c87563883..2744b905865c 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3857,6 +3857,10 @@ static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu)
 	       to_vmx(vcpu)->nested.preemption_timer_expired;
 }
 
+static bool nested_vmx_has_events(struct kvm_vcpu *vcpu)
+{
+	return nested_vmx_preemption_timer_pending(vcpu) || to_vmx(vcpu)->nested.mtf_pending;
+}
+
 static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -6809,7 +6813,7 @@ __init int nested_vmx_hardware_setup(int (*exit_handlers[])(struct kvm_vcpu *))
 struct kvm_x86_nested_ops vmx_nested_ops = {
 	.leave_nested = vmx_leave_nested,
 	.check_events = vmx_check_nested_events,
-	.hv_timer_pending = nested_vmx_preemption_timer_pending,
+	.has_events = nested_vmx_has_events,
 	.triple_fault = nested_vmx_triple_fault,
 	.get_state = vmx_get_nested_state,
 	.set_state = vmx_set_nested_state,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 7f21d9fe816f..231c55c4b33d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9471,8 +9471,8 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
 	}
 
 	if (is_guest_mode(vcpu) &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
+	    kvm_x86_ops.nested_ops->has_events &&
+	    kvm_x86_ops.nested_ops->has_events(vcpu))
 		*req_immediate_exit = true;
 
 	WARN_ON(vcpu->arch.exception.pending);
@@ -12185,8 +12185,8 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 		return true;
 
 	if (is_guest_mode(vcpu) &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
+	    kvm_x86_ops.nested_ops->has_events &&
+	    kvm_x86_ops.nested_ops->has_events(vcpu))
 		return true;
 
 	return false;
> [ 1355.809396] kvm_arch_vcpu_runnable+0x4e/0x190 [kvm]
> [ 1355.809892] kvm_vcpu_check_block+0x4f/0x100 [kvm]
> [ 1355.810349] ? kvm_vcpu_check_block+0x5/0x100 [kvm]
> [ 1355.810806] ? kvm_vcpu_block+0x54/0xa0 [kvm]
> [ 1355.811259] kvm_vcpu_block+0x6b/0xa0 [kvm]
> [ 1355.811666] kvm_vcpu_halt+0x3f/0x490 [kvm]
> [ 1355.812049] kvm_arch_vcpu_ioctl_run+0xb0b/0x1d00 [kvm]
> [ 1355.812539] ? rcu_read_lock_sched_held+0x16/0x80
> [ 1355.813013] ? lock_release+0x1c4/0x270
> [ 1355.813365] ? __wake_up_common+0x8d/0x180
> [ 1355.813743] ? _raw_spin_unlock_irq+0x28/0x40