From: Brian Rak <brak@vultr.com>
To: Sean Christopherson <seanjc@google.com>
Cc: kvm@vger.kernel.org
Subject: Re: Deadlock due to EPT_VIOLATION
Date: Fri, 26 May 2023 12:59:29 -0400
Message-ID: <bce4b387-638d-7f3c-ca9b-12ff6e020bad@vultr.com>
In-Reply-To: <44ba516b-afe0-505d-1a87-90d489f9e03f@gameservers.com>


On 5/24/2023 9:39 AM, Brian Rak wrote:
>
> On 5/23/2023 12:22 PM, Sean Christopherson wrote:
>> Nit, this isn't deadlock.  It may or may not even be a livelock AFAICT.
>> The vCPUs are simply stuck and not making forward progress; _why_ they
>> aren't making forward progress is unknown at this point (obviously :-) ).
>>
>> On Tue, May 23, 2023, Brian Rak wrote:
>>> We've been hitting an issue lately where KVM guests (w/ qemu) have been
>>> getting stuck in a loop of EPT_VIOLATIONs, and end up requiring a guest
>>> reboot to fix.
>>>
>>> On Intel machines the trace ends up looking like:
>>>
>>>     CPU-2386625 [094] 6598425.465404: kvm_entry:            vcpu 1, rip 0xffffffffc0771aa2
>>>     CPU-2386625 [094] 6598425.465405: kvm_exit:             reason EPT_VIOLATION rip 0xffffffffc0771aa2 info 683 800000ec
>>>     CPU-2386625 [094] 6598425.465405: kvm_page_fault:       vcpu 1 rip 0xffffffffc0771aa2 address 0x0000000002594fe0 error_code 0x683
>>>     CPU-2386625 [094] 6598425.465406: kvm_inj_virq:         IRQ 0xec [reinjected]
>>>     CPU-2386625 [094] 6598425.465406: kvm_entry:            vcpu 1, rip 0xffffffffc0771aa2
>>>     CPU-2386625 [094] 6598425.465407: kvm_exit:             reason EPT_VIOLATION rip 0xffffffffc0771aa2 info 683 800000ec
>>>     CPU-2386625 [094] 6598425.465407: kvm_page_fault:       vcpu 1 rip 0xffffffffc0771aa2 address 0x0000000002594fe0 error_code 0x683
>>>     CPU-2386625 [094] 6598425.465408: kvm_inj_virq:         IRQ 0xec [reinjected]
>>>     CPU-2386625 [094] 6598425.465408: kvm_entry:            vcpu 1, rip 0xffffffffc0771aa2
>>>     CPU-2386625 [094] 6598425.465409: kvm_exit:             reason EPT_VIOLATION rip 0xffffffffc0771aa2 info 683 800000ec
>>>     CPU-2386625 [094] 6598425.465410: kvm_page_fault:       vcpu 1 rip 0xffffffffc0771aa2 address 0x0000000002594fe0 error_code 0x683
>>>     CPU-2386625 [094] 6598425.465410: kvm_inj_virq:         IRQ 0xec [reinjected]
>>>     CPU-2386625 [094] 6598425.465410: kvm_entry:            vcpu 1, rip 0xffffffffc0771aa2
>>>     CPU-2386625 [094] 6598425.465411: kvm_exit:             reason EPT_VIOLATION rip 0xffffffffc0771aa2 info 683 800000ec
>>>     CPU-2386625 [094] 6598425.465412: kvm_page_fault:       vcpu 1 rip 0xffffffffc0771aa2 address 0x0000000002594fe0 error_code 0x683
>>>     CPU-2386625 [094] 6598425.465413: kvm_inj_virq:         IRQ 0xec [reinjected]
>>>     CPU-2386625 [094] 6598425.465413: kvm_entry:            vcpu 1, rip 0xffffffffc0771aa2
>>>     CPU-2386625 [094] 6598425.465414: kvm_exit:             reason EPT_VIOLATION rip 0xffffffffc0771aa2 info 683 800000ec
>>>     CPU-2386625 [094] 6598425.465414: kvm_page_fault:       vcpu 1 rip 0xffffffffc0771aa2 address 0x0000000002594fe0 error_code 0x683
>>>     CPU-2386625 [094] 6598425.465415: kvm_inj_virq:         IRQ 0xec [reinjected]
>>>     CPU-2386625 [094] 6598425.465415: kvm_entry:            vcpu 1, rip 0xffffffffc0771aa2
>>>     CPU-2386625 [094] 6598425.465417: kvm_exit:             reason EPT_VIOLATION rip 0xffffffffc0771aa2 info 683 800000ec
>>>     CPU-2386625 [094] 6598425.465417: kvm_page_fault:       vcpu 1 rip 0xffffffffc0771aa2 address 0x0000000002594fe0 error_code 0x683
>>>
>>> On AMD machines, we end up with:
>>>
>>>     CPU-14414 [063] 3039492.055571: kvm_page_fault:       vcpu 0 rip 0xffffffffb172ab2b address 0x0000000f88eb2ff8 error_code 0x200000006
>>>     CPU-14414 [063] 3039492.055571: kvm_entry:            vcpu 0, rip 0xffffffffb172ab2b
>>>     CPU-14414 [063] 3039492.055572: kvm_exit:             reason EXIT_NPF rip 0xffffffffb172ab2b info 200000006 f88eb2ff8
>>>     CPU-14414 [063] 3039492.055572: kvm_page_fault:       vcpu 0 rip 0xffffffffb172ab2b address 0x0000000f88eb2ff8 error_code 0x200000006
>>>     CPU-14414 [063] 3039492.055573: kvm_entry:            vcpu 0, rip 0xffffffffb172ab2b
>>>     CPU-14414 [063] 3039492.055574: kvm_exit:             reason EXIT_NPF rip 0xffffffffb172ab2b info 200000006 f88eb2ff8
>>>     CPU-14414 [063] 3039492.055574: kvm_page_fault:       vcpu 0 rip 0xffffffffb172ab2b address 0x0000000f88eb2ff8 error_code 0x200000006
>>>     CPU-14414 [063] 3039492.055575: kvm_entry:            vcpu 0, rip 0xffffffffb172ab2b
>>>     CPU-14414 [063] 3039492.055575: kvm_exit:             reason EXIT_NPF rip 0xffffffffb172ab2b info 200000006 f88eb2ff8
>>>     CPU-14414 [063] 3039492.055576: kvm_page_fault:       vcpu 0 rip 0xffffffffb172ab2b address 0x0000000f88eb2ff8 error_code 0x200000006
>>>     CPU-14414 [063] 3039492.055576: kvm_entry:            vcpu 0, rip 0xffffffffb172ab2b
>>>     CPU-14414 [063] 3039492.055577: kvm_exit:             reason EXIT_NPF rip 0xffffffffb172ab2b info 200000006 f88eb2ff8
>>>     CPU-14414 [063] 3039492.055577: kvm_page_fault:       vcpu 0 rip 0xffffffffb172ab2b address 0x0000000f88eb2ff8 error_code 0x200000006
>>>     CPU-14414 [063] 3039492.055578: kvm_entry:            vcpu 0, rip 0xffffffffb172ab2b
>>>     CPU-14414 [063] 3039492.055579: kvm_exit:             reason EXIT_NPF rip 0xffffffffb172ab2b info 200000006 f88eb2ff8
>>>     CPU-14414 [063] 3039492.055579: kvm_page_fault:       vcpu 0 rip 0xffffffffb172ab2b address 0x0000000f88eb2ff8 error_code 0x200000006
>>>     CPU-14414 [063] 3039492.055580: kvm_entry:            vcpu 0, rip 0xffffffffb172ab2b
>> In both cases, the TDP fault (EPT violation on Intel, #NPF on AMD) is
>> occurring when translating a guest paging structure.  I can't glean much
>> from the AMD case, but in the Intel trace, the fault occurs during delivery
>> of the timer interrupt (vector 0xec).  That may or may not be relevant to
>> what's going on.
>>
>> It's definitely suspicious that both traces show that the guest is stuck
>> faulting on a guest paging structure.  Purely from a probability
>> perspective, the odds of that being a coincidence are low, though
>> definitely not impossible.
>>
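> For reference, here's how I read the Intel exit qualification (the
> "info 683") against the SDM's table for EPT violations.  This is my own
> decode of the bits, not anything the trace reports, so treat it as an
> assumption:
>
>     q=0x683
>     echo "bit 0, data read:          $(( (q >> 0) & 1 ))"   # 1
>     echo "bit 1, data write:         $(( (q >> 1) & 1 ))"   # 1
>     echo "bit 2, instruction fetch:  $(( (q >> 2) & 1 ))"   # 0
>     echo "bit 7, linear addr valid:  $(( (q >> 7) & 1 ))"   # 1
>     echo "bit 8, final translation:  $(( (q >> 8) & 1 ))"   # 0
>
> Bit 7 set with bit 8 clear is the "fault on a guest paging structure"
> case you described (bits 9 and 10 are also set; I haven't tried to
> decode those).
>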
>>> The qemu process ends up looking like this once it happens:
>>>
>>>     0x00007fdc6a51be26 in internal_fallocate64 (fd=-514841856, offset=16, len=140729021657088) at ../sysdeps/posix/posix_fallocate64.c:36
>>>     36        return EINVAL;
>>>     (gdb) thread apply all bt
>>>
>>>     Thread 6 (Thread 0x7fdbdefff700 (LWP 879746) "vnc_worker"):
>>>     #0  futex_wait_cancelable (private=0, expected=0, futex_word=0x7fdc688f66cc) at ../sysdeps/nptl/futex-internal.h:186
>>>     #1  __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x7fdc688f66d8, cond=0x7fdc688f66a0) at pthread_cond_wait.c:508
>>>     #2  __pthread_cond_wait (cond=cond@entry=0x7fdc688f66a0, mutex=mutex@entry=0x7fdc688f66d8) at pthread_cond_wait.c:638
>>>     #3  0x0000563424cbd32b in qemu_cond_wait_impl (cond=0x7fdc688f66a0, mutex=0x7fdc688f66d8, file=0x563424d302b4 "../../ui/vnc-jobs.c", line=248) at ../../util/qemu-thread-posix.c:220
>>>     #4  0x00005634247dac33 in vnc_worker_thread_loop (queue=0x7fdc688f66a0) at ../../ui/vnc-jobs.c:248
>>>     #5  0x00005634247db8f8 in vnc_worker_thread (arg=arg@entry=0x7fdc688f66a0) at ../../ui/vnc-jobs.c:361
>>>     #6  0x0000563424cbc7e9 in qemu_thread_start (args=0x7fdbdeffcf30) at ../../util/qemu-thread-posix.c:505
>>>     #7  0x00007fdc6a8e1ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
>>>     #8  0x00007fdc6a527a2f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
>>>
>>>     Thread 5 (Thread 0x7fdbe5dff700 (LWP 879738) "CPU 1/KVM"):
>>>     #0  0x00007fdc6a51d5f7 in preadv64v2 (fd=1756258112, vector=0x563424b5f007 <kvm_vcpu_ioctl+103>, count=0, offset=0, flags=44672) at ../sysdeps/unix/sysv/linux/preadv64v2.c:31
>>>     #1  0x0000000000000000 in ?? ()
>>>
>>>     Thread 4 (Thread 0x7fdbe6fff700 (LWP 879737) "CPU 0/KVM"):
>>>     #0  0x00007fdc6a51d5f7 in preadv64v2 (fd=1755834304, vector=0x563424b5f007 <kvm_vcpu_ioctl+103>, count=0, offset=0, flags=44672) at ../sysdeps/unix/sysv/linux/preadv64v2.c:31
>>>     #1  0x0000000000000000 in ?? ()
>>>
>>>     Thread 3 (Thread 0x7fdbe83ff700 (LWP 879735) "IO mon_iothread"):
>>>     #0  0x00007fdc6a51bd2f in internal_fallocate64 (fd=-413102080, offset=3, len=4294967295) at ../sysdeps/posix/posix_fallocate64.c:32
>>>     #1  0x000d5572b9bb0764 in ?? ()
>>>     #2  0x000000016891db00 in ?? ()
>>>     #3  0xffffffff7fffffff in ?? ()
>>>     #4  0xf6b8254512850600 in ?? ()
>>>     #5  0x0000000000000000 in ?? ()
>>>
>>>     Thread 2 (Thread 0x7fdc693ff700 (LWP 879730) "qemu-kvm"):
>>>     #0  0x00007fdc6a5212e9 in ?? () from target:/lib/x86_64-linux-gnu/libc.so.6
>>>     #1  0x0000563424cbd9aa in qemu_futex_wait (val=<optimized out>, f=<optimized out>) at ./include/qemu/futex.h:29
>>>     #2  qemu_event_wait (ev=ev@entry=0x5634254bd1a8 <rcu_call_ready_event>) at ../../util/qemu-thread-posix.c:430
>>>     #3  0x0000563424cc6d80 in call_rcu_thread (opaque=opaque@entry=0x0) at ../../util/rcu.c:261
>>>     #4  0x0000563424cbc7e9 in qemu_thread_start (args=0x7fdc693fcf30) at ../../util/qemu-thread-posix.c:505
>>>     #5  0x00007fdc6a8e1ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
>>>     #6  0x00007fdc6a527a2f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
>>>
>>>     Thread 1 (Thread 0x7fdc69c3a680 (LWP 879712) "qemu-kvm"):
>>>     #0  0x00007fdc6a51be26 in internal_fallocate64 (fd=-514841856, offset=16, len=140729021657088) at ../sysdeps/posix/posix_fallocate64.c:36
>>>     #1  0x0000000000000000 in ?? ()
>>>
>>> We first started seeing this back in 5.19, and we're still seeing it as of
>>> 6.1.24 (likely later too, we don't have a ton of data for newer versions).
>>> We haven't been able to link this to any specific hardware.
>> Just to double check, this is the host kernel version, correct?  When you
>> upgraded to kernel 5.19, did you change anything else in the stack?  E.g.
>> did you upgrade QEMU at the same time?  And what kernel were you upgrading
>> from?
> Those are host kernel versions, correct.  We went from 5.15 -> 5.19.  
> That was the only change at the time.
>>
>>> It appears to happen more often on Intel, but our sample size is much
>>> larger there.  Guest operating system type/version doesn't appear to
>>> matter.  This usually happens to guests with a heavy network/disk
>>> workload, but it can happen to even idle guests.  This has happened on
>>> qemu 7.0 and 7.2 (upgrading to 7.2.2 is on our list to do).
>>>
>>> Where do we go from here?  We haven't really made a lot of progress in
>>> figuring out why this keeps happening, nor have we been able to come up
>>> with a reliable way to reproduce it.
>> Is it possible to capture a failure with the trace_kvm_unmap_hva_range,
>> kvm_mmu_spte_requested and kvm_mmu_set_spte tracepoints enabled?  That
>> will hopefully provide insight into why the vCPU keeps faulting, e.g. it
>> should show if KVM is installing a "bad" SPTE, or if KVM is doing nothing
>> and intentionally retrying the fault because there are constant and/or
>> unresolved mmu_notifier events.  My guess is that it's the latter (KVM
>> doing nothing) due to the fallocate() calls in the stack, but that's
>> really just a guess.
>
> In that trace, I had all the kvm/kvmmmu events enabled (trace-cmd 
> record -e kvm -e kvmmmu).  Just to be sure, I repeated this with 
> `trace-cmd record -e all`:
>
>     CPU-1365880 [038] 5559771.610941: rcu_utilization:      Start context switch
>     CPU-1365880 [038] 5559771.610941: rcu_utilization:      End context switch
>     CPU-1365880 [038] 5559771.610942: write_msr:            1d9, value 4000
>     CPU-1365880 [038] 5559771.610942: kvm_exit:             reason EPT_VIOLATION rip 0xffffffffb0e9bed0 info 83 0
>     CPU-1365880 [038] 5559771.610942: kvm_page_fault:       vcpu 0 rip 0xffffffffb0e9bed0 address 0x000000016e5b2ff8 error_code 0x83
>     CPU-1365880 [038] 5559771.610943: kvm_entry:            vcpu 0, rip 0xffffffffb0e9bed0
>     CPU-1365880 [038] 5559771.610943: rcu_utilization:      Start context switch
>     CPU-1365880 [038] 5559771.610943: rcu_utilization:      End context switch
>     CPU-1365880 [038] 5559771.610944: write_msr:            1d9, value 4000
>     CPU-1365880 [038] 5559771.610944: kvm_exit:             reason EPT_VIOLATION rip 0xffffffffb0e9bed0 info 83 0
>     CPU-1365880 [038] 5559771.610944: kvm_page_fault:       vcpu 0 rip 0xffffffffb0e9bed0 address 0x000000016e5b2ff8 error_code 0x83
>     CPU-1365880 [038] 5559771.610945: kvm_entry:            vcpu 0, rip 0xffffffffb0e9bed0
>     CPU-1365880 [038] 5559771.610945: rcu_utilization:      Start context switch
>     CPU-1365880 [038] 5559771.610945: rcu_utilization:      End context switch
>     CPU-1365880 [038] 5559771.610946: write_msr:            1d9, value 4000
>     CPU-1365880 [038] 5559771.610946: kvm_exit:             reason EPT_VIOLATION rip 0xffffffffb0e9bed0 info 83 0
>     CPU-1365880 [038] 5559771.610946: kvm_page_fault:       vcpu 0 rip 0xffffffffb0e9bed0 address 0x000000016e5b2ff8 error_code 0x83
>     CPU-1365880 [038] 5559771.610947: kvm_entry:            vcpu 0, rip 0xffffffffb0e9bed0
>     CPU-1365880 [038] 5559771.610947: rcu_utilization:      Start context switch
>     CPU-1365880 [038] 5559771.610947: rcu_utilization:      End context switch
>     CPU-1365880 [038] 5559771.610948: write_msr:            1d9, value 4000
>     CPU-1365880 [038] 5559771.610948: kvm_exit:             reason EPT_VIOLATION rip 0xffffffffb0e9bed0 info 83 0
>     CPU-1365880 [038] 5559771.610948: kvm_page_fault:       vcpu 0 rip 0xffffffffb0e9bed0 address 0x000000016e5b2ff8 error_code 0x83
>     CPU-1365880 [038] 5559771.610948: kvm_entry:            vcpu 0, rip 0xffffffffb0e9bed0
>     CPU-1365880 [038] 5559771.610949: rcu_utilization:      Start context switch
>     CPU-1365880 [038] 5559771.610949: rcu_utilization:      End context switch
>     CPU-1365880 [038] 5559771.610949: write_msr:            1d9, value 4000
>     CPU-1365880 [038] 5559771.610950: kvm_exit:             reason EPT_VIOLATION rip 0xffffffffb0e9bed0 info 83 0
>     CPU-1365880 [038] 5559771.610950: kvm_page_fault:       vcpu 0 rip 0xffffffffb0e9bed0 address 0x000000016e5b2ff8 error_code 0x83
>     CPU-1365880 [038] 5559771.610950: kvm_entry:            vcpu 0, rip 0xffffffffb0e9bed0
>     CPU-1365880 [038] 5559771.610950: rcu_utilization:      Start context switch
>     CPU-1365880 [038] 5559771.610951: rcu_utilization:      End context switch
>     CPU-1365880 [038] 5559771.610951: write_msr:            1d9, value 4000
>     CPU-1365880 [038] 5559771.610951: kvm_exit:             reason EPT_VIOLATION rip 0xffffffffb0e9bed0 info 83 0
>     CPU-1365880 [038] 5559771.610952: kvm_page_fault:       vcpu 0 rip 0xffffffffb0e9bed0 address 0x000000016e5b2ff8 error_code 0x83
>     CPU-1365880 [038] 5559771.610952: kvm_entry:            vcpu 0, rip 0xffffffffb0e9bed0
>     CPU-1365880 [038] 5559771.610952: rcu_utilization:      Start context switch
>     CPU-1365880 [038] 5559771.610952: rcu_utilization:      End context switch
>     CPU-1365880 [038] 5559771.610953: write_msr:            1d9, value 4000
>
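> If another capture would help, I'll also try limiting the recording to
> the events you called out, roughly like this (untested; the event-system
> prefixes and the -P pid filter are my guesses at the right invocation):
>
>     trace-cmd record -e kvm:kvm_exit -e kvm:kvm_page_fault \
>         -e kvm:kvm_unmap_hva_range -e kvmmmu:kvm_mmu_spte_requested \
>         -e kvmmmu:kvm_mmu_set_spte -P <qemu-pid>
>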
>> The other thing that would be helpful would be getting kernel stack traces
>> of the relevant tasks/threads.  The vCPU stack traces won't be interesting,
>> but it'll likely help to see what the fallocate() tasks are doing.
> I'll see what I can come up with here; I was running into some difficulty
> getting useful stack traces out of the VM.
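>
> For the host-side stacks, my plan is roughly the following (a sketch;
> needs root, and <qemu-pid> stands in for the actual qemu-kvm process),
> which should dump the kernel stack of every thread, including the
> fallocate() ones:
>
>     for t in /proc/<qemu-pid>/task/*; do
>         echo "== $(cat $t/comm) ($t) =="
>         cat $t/stack
>     done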

I didn't have any luck gathering guest-level stack traces - kaslr makes 
it pretty difficult even if I have the guest kernel symbols.
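
If I can catch a guest while it's still responsive, my understanding is
that the KASLR slide can be recovered by comparing a symbol's runtime
address against its link-time address, e.g. (a sketch, run as root inside
the guest; assumes kptr_restrict permits real addresses and that I have
the matching System.map):

    # randomized runtime address
    grep ' _stext$' /proc/kallsyms
    # link-time address from the matching kernel build
    grep ' _stext$' System.map
    # slide = runtime - link-time; subtract it from the RIPs in the traces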

Interestingly, we've been seeing this on Windows guests as well, which
would rule out a bug within the guest kernel.  This was from Windows:

              CPU-5638  [034] 5446408.762619: rcu_utilization:      End context switch
              CPU-5638  [034] 5446408.762620: write_msr:            1d9, value 4000
              CPU-5638  [034] 5446408.762620: kvm_exit:             reason EPT_VIOLATION rip 0xfffff802057fee67 info 683 800000d1
              CPU-5638  [034] 5446408.762621: kvm_page_fault:       vcpu 0 rip 0xfffff802057fee67 address 0x0000000054200f80 error_code 0x683
              CPU-5638  [034] 5446408.762621: kvm_inj_virq:         IRQ 0xd1 [reinjected]
              CPU-5638  [034] 5446408.762622: kvm_entry:            vcpu 0, rip 0xfffff802057fee67
              CPU-5638  [034] 5446408.762622: rcu_utilization:      Start context switch
              CPU-5638  [034] 5446408.762622: rcu_utilization:      End context switch
              CPU-5638  [034] 5446408.762623: write_msr:            1d9, value 4000
              CPU-5638  [034] 5446408.762624: kvm_exit:             reason EPT_VIOLATION rip 0xfffff802057fee67 info 683 800000d1
              CPU-5638  [034] 5446408.762624: kvm_page_fault:       vcpu 0 rip 0xfffff802057fee67 address 0x0000000054200f80 error_code 0x683
              CPU-5638  [034] 5446408.762625: kvm_inj_virq:         IRQ 0xd1 [reinjected]
              CPU-5638  [034] 5446408.762625: kvm_entry:            vcpu 0, rip 0xfffff802057fee67
              CPU-5638  [034] 5446408.762625: rcu_utilization:      Start context switch
              CPU-5638  [034] 5446408.762625: rcu_utilization:      End context switch
              CPU-5638  [034] 5446408.762626: write_msr:            1d9, value 4000
              CPU-5638  [034] 5446408.762627: kvm_exit:             reason EPT_VIOLATION rip 0xfffff802057fee67 info 683 800000d1
              CPU-5638  [034] 5446408.762627: kvm_page_fault:       vcpu 0 rip 0xfffff802057fee67 address 0x0000000054200f80 error_code 0x683
              CPU-5638  [034] 5446408.762628: kvm_inj_virq:         IRQ 0xd1 [reinjected]
              CPU-5638  [034] 5446408.762628: kvm_entry:            vcpu 0, rip 0xfffff802057fee67
              CPU-5638  [034] 5446408.762628: rcu_utilization:      Start context switch
              CPU-5638  [034] 5446408.762628: rcu_utilization:      End context switch
              CPU-5638  [034] 5446408.762629: write_msr:            1d9, value 4000
              CPU-5638  [034] 5446408.762630: kvm_exit:             reason EPT_VIOLATION rip 0xfffff802057fee67 info 683 800000d1
              CPU-5638  [034] 5446408.762630: kvm_page_fault:       vcpu 0 rip 0xfffff802057fee67 address 0x0000000054200f80 error_code 0x683
              CPU-5638  [034] 5446408.762631: kvm_inj_virq:         IRQ 0xd1 [reinjected]
              CPU-5638  [034] 5446408.762631: kvm_entry:            vcpu 0, rip 0xfffff802057fee67
              CPU-5638  [034] 5446408.762631: rcu_utilization:      Start context switch
              CPU-5638  [034] 5446408.762632: rcu_utilization:      End context switch
              CPU-5638  [034] 5446408.762633: write_msr:            1d9, value 4000
              CPU-5638  [034] 5446408.762633: kvm_exit:             reason EPT_VIOLATION rip 0xfffff802057fee67 info 683 800000d1
              CPU-5638  [034] 5446408.762633: kvm_page_fault:       vcpu 0 rip 0xfffff802057fee67 address 0x0000000054200f80 error_code 0x683
              CPU-5638  [034] 5446408.762634: kvm_inj_virq:         IRQ 0xd1 [reinjected]
              CPU-5638  [034] 5446408.762634: kvm_entry:            vcpu 0, rip 0xfffff802057fee67
              CPU-5638  [034] 5446408.762635: rcu_utilization:      Start context switch
              CPU-5638  [034] 5446408.762635: rcu_utilization:      End context switch
              CPU-5638  [034] 5446408.762636: write_msr:            1d9, value 4000
              CPU-5638  [034] 5446408.762636: kvm_exit:             reason EPT_VIOLATION rip 0xfffff802057fee67 info 683 800000d1
              CPU-5638  [034] 5446408.762636: kvm_page_fault:       vcpu 0 rip 0xfffff802057fee67 address 0x0000000054200f80 error_code 0x683
              CPU-5638  [034] 5446408.762637: kvm_inj_virq:         IRQ 0xd1 [reinjected]
              CPU-5638  [034] 5446408.762638: kvm_entry:            vcpu 0, rip 0xfffff802057fee67
              CPU-5638  [034] 5446408.762638: rcu_utilization:      Start context switch
              CPU-5638  [034] 5446408.762638: rcu_utilization:      End context switch
              CPU-5638  [034] 5446408.762639: write_msr:            1d9, value 4000
              CPU-5638  [034] 5446408.762639: kvm_exit:             reason EPT_VIOLATION rip 0xfffff802057fee67 info 683 800000d1
              CPU-5638  [034] 5446408.762640: kvm_page_fault:       vcpu 0 rip 0xfffff802057fee67 address 0x0000000054200f80 error_code 0x683
              CPU-5638  [034] 5446408.762640: kvm_inj_virq:         IRQ 0xd1 [reinjected]

