* [PATCH 0/2] KVM: x86: Fix and cleanup for recent AVIC changes
@ 2021-10-09  1:01 Sean Christopherson
  2021-10-09  1:01 ` [PATCH 1/2] KVM: x86/mmu: Use vCPU's APICv status when handling APIC_ACCESS memslot Sean Christopherson
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Sean Christopherson @ 2021-10-09  1:01 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Maxim Levitsky

Belated "code review" for Maxim's recent series to rework the AVIC inhibit
code.  Using the global APICv status in the page fault path is wrong as
the correct status is always the vCPU's, since that status is accurate
with respect to the time of the page fault.  In a similar vein, the code
to change the inhibit can be cleaned up since KVM can't rely on ordering
between the update and the request for anything except consumers of the
request.

Sean Christopherson (2):
  KVM: x86/mmu: Use vCPU's APICv status when handling APIC_ACCESS
    memslot
  KVM: x86: Simplify APICv update request logic

 arch/x86/kvm/mmu/mmu.c |  2 +-
 arch/x86/kvm/x86.c     | 16 +++++++---------
 2 files changed, 8 insertions(+), 10 deletions(-)

-- 
2.33.0.882.g93a45727a2-goog


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 1/2] KVM: x86/mmu: Use vCPU's APICv status when handling APIC_ACCESS memslot
  2021-10-09  1:01 [PATCH 0/2] KVM: x86: Fix and cleanup for recent AVIC changes Sean Christopherson
@ 2021-10-09  1:01 ` Sean Christopherson
  2021-10-10 12:47   ` Maxim Levitsky
  2021-10-09  1:01 ` [PATCH 2/2] KVM: x86: Simplify APICv update request logic Sean Christopherson
  2021-10-10 12:37 ` [PATCH 0/2] KVM: x86: Fix and cleanup for recent AVIC changes Maxim Levitsky
  2 siblings, 1 reply; 14+ messages in thread
From: Sean Christopherson @ 2021-10-09  1:01 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Maxim Levitsky

Query the vCPU's APICv status, not the overall VM's status, when handling
a page fault that hit the APIC Access Page memslot.  If an APICv status
update is pending, using the VM's status is non-deterministic as the
initiating vCPU may or may not have updated the overall VM's status.  E.g. if
a vCPU hits an APIC Access page fault with APICv disabled and a different
vCPU is simultaneously performing an APICv update, the page fault handler
will incorrectly skip the special APIC access page MMIO handling.

Using the vCPU's status in the page fault handler is correct regardless
of any pending APICv updates, as the vCPU's status is accurate with
respect to the last VM-Enter, and thus reflects the context in which the
page fault occurred.

Cc: Maxim Levitsky <mlevitsk@redhat.com>
Fixes: 9cc13d60ba6b ("KVM: x86/mmu: allow APICv memslot to be enabled but invisible")
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 24a9f4c3f5e7..d36e205b90a5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3853,7 +3853,7 @@ static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 		 * when the AVIC is re-enabled.
 		 */
 		if (slot && slot->id == APIC_ACCESS_PAGE_PRIVATE_MEMSLOT &&
-		    !kvm_apicv_activated(vcpu->kvm)) {
+		    !kvm_vcpu_apicv_active(vcpu)) {
 			*r = RET_PF_EMULATE;
 			return true;
 		}
-- 
2.33.0.882.g93a45727a2-goog



* [PATCH 2/2] KVM: x86: Simplify APICv update request logic
  2021-10-09  1:01 [PATCH 0/2] KVM: x86: Fix and cleanup for recent AVIC changes Sean Christopherson
  2021-10-09  1:01 ` [PATCH 1/2] KVM: x86/mmu: Use vCPU's APICv status when handling APIC_ACCESS memslot Sean Christopherson
@ 2021-10-09  1:01 ` Sean Christopherson
  2021-10-10 12:49   ` Maxim Levitsky
  2021-10-10 12:37 ` [PATCH 0/2] KVM: x86: Fix and cleanup for recent AVIC changes Maxim Levitsky
  2 siblings, 1 reply; 14+ messages in thread
From: Sean Christopherson @ 2021-10-09  1:01 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Maxim Levitsky

Drop confusing and flawed code that intentionally sets the per-VM APICv
inhibit mask after sending KVM_REQ_APICV_UPDATE to all vCPUs.  The code
is confusing because it's not obvious that there's no race between a CPU
seeing the request and consuming the new mask.  The code works only
because the request handling path takes the same lock, i.e. responding
vCPUs will be blocked until the full update completes.

The concept is flawed because ordering the mask update after the request
can't be relied upon for correct behavior.  The only guarantee provided
by kvm_make_all_cpus_request() is that all vCPUs have exited the guest.  It
does not guarantee all vCPUs are waiting on the lock.  E.g. a vCPU could
be in the process of handling an emulated MMIO APIC access page fault
that occurred before the APICv update was initiated, and thus toggling
and reading the per-VM field would be racy.  If correctness matters, KVM
either needs to use the per-vCPU status (if appropriate), take the lock,
or have some other mechanism that guarantees the per-VM status is correct.

Cc: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/x86.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4a52a08707de..960c2d196843 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9431,29 +9431,27 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_update_apicv);
 
 void __kvm_request_apicv_update(struct kvm *kvm, bool activate, ulong bit)
 {
-	unsigned long old, new;
+	unsigned long old;
 
 	if (!kvm_x86_ops.check_apicv_inhibit_reasons ||
 	    !static_call(kvm_x86_check_apicv_inhibit_reasons)(bit))
 		return;
 
-	old = new = kvm->arch.apicv_inhibit_reasons;
+	old = kvm->arch.apicv_inhibit_reasons;
 
 	if (activate)
-		__clear_bit(bit, &new);
+		__clear_bit(bit, &kvm->arch.apicv_inhibit_reasons);
 	else
-		__set_bit(bit, &new);
+		__set_bit(bit, &kvm->arch.apicv_inhibit_reasons);
 
-	if (!!old != !!new) {
+	if (!!old != !!kvm->arch.apicv_inhibit_reasons) {
 		trace_kvm_apicv_update_request(activate, bit);
 		kvm_make_all_cpus_request(kvm, KVM_REQ_APICV_UPDATE);
-		kvm->arch.apicv_inhibit_reasons = new;
-		if (new) {
+		if (kvm->arch.apicv_inhibit_reasons) {
 			unsigned long gfn = gpa_to_gfn(APIC_DEFAULT_PHYS_BASE);
 			kvm_zap_gfn_range(kvm, gfn, gfn+1);
 		}
-	} else
-		kvm->arch.apicv_inhibit_reasons = new;
+	}
 }
 EXPORT_SYMBOL_GPL(__kvm_request_apicv_update);
 
-- 
2.33.0.882.g93a45727a2-goog



* Re: [PATCH 0/2] KVM: x86: Fix and cleanup for recent AVIC changes
  2021-10-09  1:01 [PATCH 0/2] KVM: x86: Fix and cleanup for recent AVIC changes Sean Christopherson
  2021-10-09  1:01 ` [PATCH 1/2] KVM: x86/mmu: Use vCPU's APICv status when handling APIC_ACCESS memslot Sean Christopherson
  2021-10-09  1:01 ` [PATCH 2/2] KVM: x86: Simplify APICv update request logic Sean Christopherson
@ 2021-10-10 12:37 ` Maxim Levitsky
  2021-10-11 14:27   ` Sean Christopherson
  2 siblings, 1 reply; 14+ messages in thread
From: Maxim Levitsky @ 2021-10-10 12:37 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On Fri, 2021-10-08 at 18:01 -0700, Sean Christopherson wrote:
> Belated "code review" for Maxim's recent series to rework the AVIC inhibit
> code.  Using the global APICv status in the page fault path is wrong as
> the correct status is always the vCPU's, since that status is accurate
> with respect to the time of the page fault.  In a similar vein, the code
> to change the inhibit can be cleaned up since KVM can't rely on ordering
> between the update and the request for anything except consumers of the
> request.
> 
> Sean Christopherson (2):
>   KVM: x86/mmu: Use vCPU's APICv status when handling APIC_ACCESS
>     memslot
>   KVM: x86: Simplify APICv update request logic
> 
>  arch/x86/kvm/mmu/mmu.c |  2 +-
>  arch/x86/kvm/x86.c     | 16 +++++++---------
>  2 files changed, 8 insertions(+), 10 deletions(-)
> 

Are you sure about it? Let me explain how the algorithm works:

- kvm_request_apicv_update:

	- take kvm->arch.apicv_update_lock

	- if inhibition state doesn't really change (kvm->arch.apicv_inhibit_reasons still zero or non zero)
		- update kvm->arch.apicv_inhibit_reasons
		- release the lock

	- raise KVM_REQ_APICV_UPDATE
		* since kvm->arch.apicv_update_lock is taken, all vCPUs will be kicked out of guest
		  mode and will be either doing something in KVM (like handling a page fault) or stuck trying to process that request.
                  The important thing is that no vCPU will be able to get back to guest mode.

	- update the kvm->arch.apicv_inhibit_reasons
		* since we hold kvm->arch.apicv_update_lock vCPUs can't see the new value

	- update the SPTE that covers the APIC's mmio window:

		- if we enable AVIC, then do nothing.
			
			* First vCPU to access it will page fault and populate that SPTE

			* If we race with the page fault again, no problem; worst case the page fault
			  doesn't populate the SPTE, and we will get another page fault later
			  and it will.

			  -> SPTE not present + AVIC enabled is not a problem, it just causes
			  a spurious page fault, which is then retried, at which point AVIC is used.

			  It is nice to re-install the SPTE as fast as possible to avoid such
			  faults for performance reasons.

		- if we disable AVIC, then we zap the spte:

			* a page fault should not happen just before the zapping, as AVIC is still enabled on the vCPUs at that point;
			  even if it does happen, it doesn't matter if it populates the SPTE, as we will zap it anyway.

			* during the zapping we take the mmu lock and use the mmu notifier counter hack
			  to avoid racing with a page fault that can happen concurrently with it.

			* if a page fault on another vCPU happens after the zapping, it will see the correct
			  kvm->arch.apicv_inhibit_reasons (but likely an incorrect view of its own vCPU's AVIC inhibit state)
			  and will not re-populate the SPTE.

			  -> SPTE present + AVIC inhibited on this vCPU is the problem,
			  as this will cause writes to the AVIC to disappear into the dummy page mapped by that SPTE.

			  That is why patch 1 IMHO is wrong.

	- release the kvm->arch.apicv_update_lock
		* at that point all vCPUs can re-enter the guest, but they will all process KVM_REQ_APICV_UPDATE
		  prior to that, which will update their AVIC state.


Best regards,
	Maxim Levitsky





* Re: [PATCH 1/2] KVM: x86/mmu: Use vCPU's APICv status when handling APIC_ACCESS memslot
  2021-10-09  1:01 ` [PATCH 1/2] KVM: x86/mmu: Use vCPU's APICv status when handling APIC_ACCESS memslot Sean Christopherson
@ 2021-10-10 12:47   ` Maxim Levitsky
  0 siblings, 0 replies; 14+ messages in thread
From: Maxim Levitsky @ 2021-10-10 12:47 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On Fri, 2021-10-08 at 18:01 -0700, Sean Christopherson wrote:
> Query the vCPU's APICv status, not the overall VM's status, when handling
> a page fault that hit the APIC Access Page memslot.  If an APICv status
> update is pending, using the VM's status is non-deterministic as the
> initiating vCPU may or may not have updated the overall VM's status.  E.g. if
> a vCPU hits an APIC Access page fault with APICv disabled and a different
> vCPU is simultaneously performing an APICv update, the page fault handler
> will incorrectly skip the special APIC access page MMIO handling.
> 
> Using the vCPU's status in the page fault handler is correct regardless
> of any pending APICv updates, as the vCPU's status is accurate with
> respect to the last VM-Enter, and thus reflects the context in which the
> page fault occurred.

Actually I don't think that this patch is correct, and the current code is correct.

- The page fault can happen if one of the following is true:

	- AVIC is currently inhibited.
	
	- AVIC is currently inhibited but is in the process of being uninhibited.

	- AVIC is not inhibited but has never been accessed by a vCPU after it was uninhibited.

	This will *usually* cause this code to populate the corresponding SPTE entry and re-enter the guest, which
	  will make the AVIC work on instruction re-execution without a page fault.

        It depends on whether the page fault code sees the new or the old value of the global inhibition state, which is impossible
	to avoid, as the page fault can happen at any time.

        If the code doesn't populate the SPTE entry, the access will be emulated (which is correct too), and the next access
	will page fault again and that fault will re-install the SPTE.


Note that AVIC's SPTE is *VM global*, just like all other SPTEs.

- The decision here is to populate the SPTE and retry, or to just emulate the APIC read/write without populating it.

  Since the AVIC reads/writes the same APIC register page, emulating the access now, or populating the SPTE, enabling the AVIC, and letting the AVIC access it, should read/write the same values.

  Thus the real decision here is if to populate the SPTE or not.

- If AVIC is currently inhibited on this vCPU, but the global AVIC inhibit is already OFF, we do want
  to populate the SPTE, and prior to guest entry we will update the vCPU's inhibit state to disable inhibition on this vCPU.

So it's the global AVIC inhibit state that is correct to use for this decision, IMHO.

Best regards,
	Maxim Levitsky


> 
> Cc: Maxim Levitsky <mlevitsk@redhat.com>
> Fixes: 9cc13d60ba6b ("KVM: x86/mmu: allow APICv memslot to be enabled but invisible")
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 24a9f4c3f5e7..d36e205b90a5 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3853,7 +3853,7 @@ static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
>  		 * when the AVIC is re-enabled.
>  		 */
>  		if (slot && slot->id == APIC_ACCESS_PAGE_PRIVATE_MEMSLOT &&
> -		    !kvm_apicv_activated(vcpu->kvm)) {
> +		    !kvm_vcpu_apicv_active(vcpu)) {
>  			*r = RET_PF_EMULATE;
>  			return true;
>  		}




* Re: [PATCH 2/2] KVM: x86: Simplify APICv update request logic
  2021-10-09  1:01 ` [PATCH 2/2] KVM: x86: Simplify APICv update request logic Sean Christopherson
@ 2021-10-10 12:49   ` Maxim Levitsky
  2021-10-11 17:55     ` Sean Christopherson
  0 siblings, 1 reply; 14+ messages in thread
From: Maxim Levitsky @ 2021-10-10 12:49 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On Fri, 2021-10-08 at 18:01 -0700, Sean Christopherson wrote:
> Drop confusing and flawed code that intentionally sets the per-VM APICv
> inhibit mask after sending KVM_REQ_APICV_UPDATE to all vCPUs.  The code
> is confusing because it's not obvious that there's no race between a CPU
> seeing the request and consuming the new mask.  The code works only
> because the request handling path takes the same lock, i.e. responding
> vCPUs will be blocked until the full update completes.

Actually this code is here on purpose:

While it is true that the main reader of apicv_inhibit_reasons (KVM_REQ_APICV_UPDATE handler)
does take the kvm->arch.apicv_update_lock lock, so it will see the correct value
regardless of this patch, the reason why this code first raises the KVM_REQ_APICV_UPDATE
and only then updates the arch.apicv_inhibit_reasons is that I put a warning into svm_vcpu_run
which checks that the per-vCPU AVIC inhibit state matches the global AVIC inhibit state.

That warning proved to be very useful to ensure that AVIC inhibit works correctly.

If this patch is applied, the warning can no longer work reliably unless
it takes the apicv_update_lock which will have a performance hit.

The reason is that if we just update apicv_inhibit_reasons, we can race
with a vCPU which is about to re-enter guest mode and trigger this warning.

Best regards,
	Maxim Levitsky

> 
> The concept is flawed because ordering the mask update after the request
> can't be relied upon for correct behavior.  The only guarantee provided
> by kvm_make_all_cpus_request() is that all vCPUs have exited the guest.  It
> does not guarantee all vCPUs are waiting on the lock.  E.g. a vCPU could
> be in the process of handling an emulated MMIO APIC access page fault
> that occurred before the APICv update was initiated, and thus toggling
> and reading the per-VM field would be racy.  If correctness matters, KVM
> either needs to use the per-vCPU status (if appropriate), take the lock,
> or have some other mechanism that guarantees the per-VM status is correct.
> 
> Cc: Maxim Levitsky <mlevitsk@redhat.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/x86.c | 16 +++++++---------
>  1 file changed, 7 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 4a52a08707de..960c2d196843 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9431,29 +9431,27 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_update_apicv);
>  
>  void __kvm_request_apicv_update(struct kvm *kvm, bool activate, ulong bit)
>  {
> -	unsigned long old, new;
> +	unsigned long old;
>  
>  	if (!kvm_x86_ops.check_apicv_inhibit_reasons ||
>  	    !static_call(kvm_x86_check_apicv_inhibit_reasons)(bit))
>  		return;
>  
> -	old = new = kvm->arch.apicv_inhibit_reasons;
> +	old = kvm->arch.apicv_inhibit_reasons;
>  
>  	if (activate)
> -		__clear_bit(bit, &new);
> +		__clear_bit(bit, &kvm->arch.apicv_inhibit_reasons);
>  	else
> -		__set_bit(bit, &new);
> +		__set_bit(bit, &kvm->arch.apicv_inhibit_reasons);
>  
> -	if (!!old != !!new) {
> +	if (!!old != !!kvm->arch.apicv_inhibit_reasons) {
>  		trace_kvm_apicv_update_request(activate, bit);
>  		kvm_make_all_cpus_request(kvm, KVM_REQ_APICV_UPDATE);
> -		kvm->arch.apicv_inhibit_reasons = new;
> -		if (new) {
> +		if (kvm->arch.apicv_inhibit_reasons) {
>  			unsigned long gfn = gpa_to_gfn(APIC_DEFAULT_PHYS_BASE);
>  			kvm_zap_gfn_range(kvm, gfn, gfn+1);
>  		}
> -	} else
> -		kvm->arch.apicv_inhibit_reasons = new;
> +	}
>  }
>  EXPORT_SYMBOL_GPL(__kvm_request_apicv_update);
>  




* Re: [PATCH 0/2] KVM: x86: Fix and cleanup for recent AVIC changes
  2021-10-10 12:37 ` [PATCH 0/2] KVM: x86: Fix and cleanup for recent AVIC changes Maxim Levitsky
@ 2021-10-11 14:27   ` Sean Christopherson
  2021-10-11 16:58     ` Sean Christopherson
  0 siblings, 1 reply; 14+ messages in thread
From: Sean Christopherson @ 2021-10-11 14:27 UTC (permalink / raw)
  To: Maxim Levitsky
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

On Sun, Oct 10, 2021, Maxim Levitsky wrote:
> On Fri, 2021-10-08 at 18:01 -0700, Sean Christopherson wrote:
> > Belated "code review" for Maxim's recent series to rework the AVIC inhibit
> > code.  Using the global APICv status in the page fault path is wrong as
> > the correct status is always the vCPU's, since that status is accurate
> > with respect to the time of the page fault.  In a similar vein, the code
> > to change the inhibit can be cleaned up since KVM can't rely on ordering
> > between the update and the request for anything except consumers of the
> > request.
> > 
> > Sean Christopherson (2):
> >   KVM: x86/mmu: Use vCPU's APICv status when handling APIC_ACCESS
> >     memslot
> >   KVM: x86: Simplify APICv update request logic
> > 
> >  arch/x86/kvm/mmu/mmu.c |  2 +-
> >  arch/x86/kvm/x86.c     | 16 +++++++---------
> >  2 files changed, 8 insertions(+), 10 deletions(-)
> > 
> 
> Are you sure about it? Let me explain how the algorithm works:
> 
> - kvm_request_apicv_update:
> 
> 	- take kvm->arch.apicv_update_lock
> 
> 	- if inhibition state doesn't really change (kvm->arch.apicv_inhibit_reasons still zero or non zero)
> 		- update kvm->arch.apicv_inhibit_reasons
> 		- release the lock
> 
> 	- raise KVM_REQ_APICV_UPDATE
> 		* since kvm->arch.apicv_update_lock is taken, all vCPUs will be
> 		kicked out of guest mode and will be either doing something in
> 		KVM (like handling a page fault) or stuck trying to process that
> 		request.  The important thing is that no vCPU will be able to get
> 		back to guest mode.
> 
> 	- update the kvm->arch.apicv_inhibit_reasons
> 		* since we hold kvm->arch.apicv_update_lock vCPUs can't see the new value

This assertion is incorrect, kvm_apicv_activated() is not guarded by the lock.

> 	- update the SPTE that covers the APIC's mmio window:

This won't affect in-flight page faults.


   vCPU0                               vCPU1
   =====                               =====
   Disabled APICv
   #NPT                                Acquire apicv_update_lock
                                       Re-enable APICv
   kvm_apicv_activated() == false
   incorrectly handle as regular MMIO
                                       zap APIC pages
   MMIO cache has bad entry


* Re: [PATCH 0/2] KVM: x86: Fix and cleanup for recent AVIC changes
  2021-10-11 14:27   ` Sean Christopherson
@ 2021-10-11 16:58     ` Sean Christopherson
  2021-10-12  9:53       ` Maxim Levitsky
  0 siblings, 1 reply; 14+ messages in thread
From: Sean Christopherson @ 2021-10-11 16:58 UTC (permalink / raw)
  To: Maxim Levitsky
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

On Mon, Oct 11, 2021, Sean Christopherson wrote:
> On Sun, Oct 10, 2021, Maxim Levitsky wrote:
> > On Fri, 2021-10-08 at 18:01 -0700, Sean Christopherson wrote:
> > > Belated "code review" for Maxim's recent series to rework the AVIC inhibit
> > > code.  Using the global APICv status in the page fault path is wrong as
> > > the correct status is always the vCPU's, since that status is accurate
> > > with respect to the time of the page fault.  In a similar vein, the code
> > > to change the inhibit can be cleaned up since KVM can't rely on ordering
> > > between the update and the request for anything except consumers of the
> > > request.
> > > 
> > > Sean Christopherson (2):
> > >   KVM: x86/mmu: Use vCPU's APICv status when handling APIC_ACCESS
> > >     memslot
> > >   KVM: x86: Simplify APICv update request logic
> > > 
> > >  arch/x86/kvm/mmu/mmu.c |  2 +-
> > >  arch/x86/kvm/x86.c     | 16 +++++++---------
> > >  2 files changed, 8 insertions(+), 10 deletions(-)
> > > 
> > 
> > Are you sure about it? Let me explain how the algorithm works:
> > 
> > - kvm_request_apicv_update:
> > 
> > 	- take kvm->arch.apicv_update_lock
> > 
> > 	- if inhibition state doesn't really change (kvm->arch.apicv_inhibit_reasons still zero or non zero)
> > 		- update kvm->arch.apicv_inhibit_reasons
> > 		- release the lock
> > 
> > 	- raise KVM_REQ_APICV_UPDATE
> > 		* since kvm->arch.apicv_update_lock is taken, all vCPUs will be
> > 		kicked out of guest mode and will be either doing something in
> > 		KVM (like handling a page fault) or stuck trying to process that
> > 		request.  The important thing is that no vCPU will be able to get
> > 		back to guest mode.
> > 
> > 	- update the kvm->arch.apicv_inhibit_reasons
> > 		* since we hold kvm->arch.apicv_update_lock vCPUs can't see the new value
> 
> This assertion is incorrect, kvm_apicv_activated() is not guarded by the lock.
> 
> > 	- update the SPTE that covers the APIC's mmio window:
> 
> This won't affect in-flight page faults.
> 
> 
>    vCPU0                               vCPU1
>    =====                               =====
>    Disabled APICv
>    #NPT                                Acquire apicv_update_lock
>                                        Re-enable APICv
>    kvm_apicv_activated() == false

Doh, that's supposed to be "true".

>    incorrectly handle as regular MMIO
>                                        zap APIC pages
>    MMIO cache has bad entry

Argh, I forgot the memslot is still there, so the access won't be treated as MMIO
and thus won't end up in the MMIO cache.

So I agree that the code is functionally ok, but I'd still prefer to switch to
kvm_vcpu_apicv_active() so that this code is coherent with respect to the APICv
status at the time the fault occurred.

My objection to using kvm_apicv_activated() is that the result is completely
non-deterministic with respect to the vCPU's APICv status at the time of the
fault.  It works because of all the other mechanisms that are in place, e.g.
elevating the MMU notifier count, but the fact that the result is non-deterministic
means that using the per-vCPU status is also functionally ok.

At a minimum, I'd like to add a blurb in the kvm_faultin_pfn() comment to call out
the reliance on mmu_notifier_seq.

E.g. if kvm_zap_gfn_range() wins the race to acquire mmu_lock() after APICv is
inhibited/disabled by __kvm_request_apicv_update(), then direct_page_fault() will
retry the fault due to the change in mmu_notifier_seq.  If direct_page_fault()
wins the race, then kvm_zap_gfn_range() will zap the freshly-installed SPTEs.
For the uninhibit/enable case, at worst KVM will emulate an access that could have
been accelerated by retrying the instruction.


* Re: [PATCH 2/2] KVM: x86: Simplify APICv update request logic
  2021-10-10 12:49   ` Maxim Levitsky
@ 2021-10-11 17:55     ` Sean Christopherson
  0 siblings, 0 replies; 14+ messages in thread
From: Sean Christopherson @ 2021-10-11 17:55 UTC (permalink / raw)
  To: Maxim Levitsky
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

On Sun, Oct 10, 2021, Maxim Levitsky wrote:
> On Fri, 2021-10-08 at 18:01 -0700, Sean Christopherson wrote:
> > Drop confusing and flawed code that intentionally sets the per-VM APICv
> > inhibit mask after sending KVM_REQ_APICV_UPDATE to all vCPUs.  The code
> > is confusing because it's not obvious that there's no race between a CPU
> > seeing the request and consuming the new mask.  The code works only
> > because the request handling path takes the same lock, i.e. responding
> > vCPUs will be blocked until the full update completes.
> 
> Actually this code is here on purpose:
>
> While it is true that the main reader of apicv_inhibit_reasons (KVM_REQ_APICV_UPDATE handler)
> does take the kvm->arch.apicv_update_lock lock, so it will see the correct value
> regardless of this patch, the reason why this code first raises the KVM_REQ_APICV_UPDATE
> and only then updates the arch.apicv_inhibit_reasons is that I put a warning into svm_vcpu_run
> which checks that per cpu AVIC inhibit state matches the global AVIC inhibit state.
> 
> That warning proved to be very useful to ensure that AVIC inhibit works correctly.
> 
> If this patch is applied, the warning can no longer work reliably unless
> it takes the apicv_update_lock which will have a performance hit.
> 
> The reason is that if we just update apicv_inhibit_reasons, we can race
> with vCPU which is about to re-enter the guest mode and trigger this warning.

Ah, and it relies on kvm_make_all_cpus_request() to wait for vCPUs to ack the
IRQ before updating apicv_inhibit_reasons, and then relies on kvm_vcpu_update_apicv()
to stall on acquiring apicv_update_lock() so that the vCPU can't redo svm_vcpu_run()
without seeing the new inhibit state.

I'll drop this patch and send one to add comments; there are a lot of subtle/hidden
dependencies here.  Setting the inhibit _after_ the request in particular needs a
comment as it goes directly against the behavior of pretty much every other request
flow.

Thanks!


* Re: [PATCH 0/2] KVM: x86: Fix and cleanup for recent AVIC changes
  2021-10-11 16:58     ` Sean Christopherson
@ 2021-10-12  9:53       ` Maxim Levitsky
  2021-10-15 16:15         ` Sean Christopherson
  0 siblings, 1 reply; 14+ messages in thread
From: Maxim Levitsky @ 2021-10-12  9:53 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

On Mon, 2021-10-11 at 16:58 +0000, Sean Christopherson wrote:
> On Mon, Oct 11, 2021, Sean Christopherson wrote:
> > On Sun, Oct 10, 2021, Maxim Levitsky wrote:
> > > On Fri, 2021-10-08 at 18:01 -0700, Sean Christopherson wrote:
> > > > Belated "code review" for Maxim's recent series to rework the AVIC inhibit
> > > > code.  Using the global APICv status in the page fault path is wrong as
> > > > the correct status is always the vCPU's, since that status is accurate
> > > > with respect to the time of the page fault.  In a similar vein, the code
> > > > to change the inhibit can be cleaned up since KVM can't rely on ordering
> > > > between the update and the request for anything except consumers of the
> > > > request.
> > > > 
> > > > Sean Christopherson (2):
> > > >   KVM: x86/mmu: Use vCPU's APICv status when handling APIC_ACCESS
> > > >     memslot
> > > >   KVM: x86: Simplify APICv update request logic
> > > > 
> > > >  arch/x86/kvm/mmu/mmu.c |  2 +-
> > > >  arch/x86/kvm/x86.c     | 16 +++++++---------
> > > >  2 files changed, 8 insertions(+), 10 deletions(-)
> > > > 
> > > 
> > > Are you sure about it? Let me explain how the algorithm works:
> > > 
> > > - kvm_request_apicv_update:
> > > 
> > > 	- take kvm->arch.apicv_update_lock
> > > 
> > > 	- if inhibition state doesn't really change (kvm->arch.apicv_inhibit_reasons still zero or non zero)
> > > 		- update kvm->arch.apicv_inhibit_reasons
> > > 		- release the lock
> > > 
> > > 	- raise KVM_REQ_APICV_UPDATE
> > > 		* since kvm->arch.apicv_update_lock is taken, all vCPUs will be
> > > 		kicked out of guest mode and will be either doing something in
> > > 		KVM (like handling a page fault) or stuck trying to process that
> > > 		request.  The important thing is that no vCPU will be able to get
> > > 		back to guest mode.
> > > 
> > > 	- update the kvm->arch.apicv_inhibit_reasons
> > > 		* since we hold kvm->arch.apicv_update_lock vCPUs can't see the new value
> > 
> > This assertion is incorrect, kvm_apicv_activated() is not guarded by the lock.
> > 
> > > 	- update the SPTE that covers the APIC's mmio window:
> > 
> > This won't affect in-flight page faults.
> > 
> > 
> >    vCPU0                               vCPU1
> >    =====                               =====
> >    Disabled APICv
> >    #NPT                                Acquire apicv_update_lock
> >                                        Re-enable APICv
> >    kvm_apicv_activated() == false
> 
> Doh, that's supposed to be "true".
> 
> >    incorrectly handle as regular MMIO
> >                                        zap APIC pages
> >    MMIO cache has bad entry
> 
> Argh, I forgot the memslot is still there, so the access won't be treated as MMIO
> and thus won't end up in the MMIO cache.
> 
> So I agree that the code is functionally ok, but I'd still prefer to switch to
> kvm_vcpu_apicv_active() so that this code is coherent with respect to the APICv
> status at the time the fault occurred.
> 
> My objection to using kvm_apicv_activated() is that the result is completely
> non-deterministic with respect to the vCPU's APICv status at the time of the
> fault.  It works because of all the other mechanisms that are in place, e.g.
> elevating the MMU notifier count, but the fact that the result is non-deterministic
> means that using the per-vCPU status is also functionally ok.

The problem is that it is just not correct to use the local AVIC enable state
to determine whether we want to populate the SPTE or just jump to emulation.


For example, assuming that the AVIC is now enabled on all vCPUs,
we can have this scenario:

    vCPU0                                   vCPU1
    =====                                   =====

- disable AVIC
- VMRUN
                                        - #NPT on AVIC MMIO access
                                        - *stuck on something prior to the page fault code*
- enable AVIC
- VMRUN
                                        - *still stuck on something prior to the page fault code*

- disable AVIC:

  - raise KVM_REQ_APICV_UPDATE request
					
  - set global avic state to disable

  - zap the SPTE (does nothing, doesn't race
	with anything either)

  - handle KVM_REQ_APICV_UPDATE -
    - disable vCPU0 AVIC

- VMRUN
					- *still stuck on something prior to the page fault code*

                                                            ...
                                                            ...
                                                            ...

                                        - now vCPU1 finally starts running the page fault code.

                                        - vCPU1 AVIC is still enabled 
                                          (because vCPU1 never handled KVM_REQ_APICV_UPDATE),
                                          so the page fault code will populate the SPTE.
                                          

                                        - handle KVM_REQ_APICV_UPDATE
                                           - finally disable vCPU1 AVIC

                                        - VMRUN (vCPU1 AVIC disabled, SPTE populated)

					                 ***boom***
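The failure mode in this interleaving is the window in which the per-vCPU AVIC
state lags the global state until the vCPU gets around to processing
KVM_REQ_APICV_UPDATE.  That window can be sketched as a toy model (plain C,
names loosely borrowed from KVM; this is an illustration of the request
mechanism, not the actual kernel code):

```c
#include <stdbool.h>
#include <stdint.h>

#define KVM_REQ_APICV_UPDATE 0

struct toy_vcpu {
	uint64_t requests;
	bool avic_active;	/* per-vCPU state, can lag the global state */
};

static void toy_make_request(int req, struct toy_vcpu *vcpu)
{
	vcpu->requests |= 1ull << req;
}

static bool toy_check_request(int req, struct toy_vcpu *vcpu)
{
	if (!(vcpu->requests & (1ull << req)))
		return false;
	vcpu->requests &= ~(1ull << req);
	return true;
}

/* Requests are processed only on the way back into the guest. */
static bool toy_vcpu_enter_guest(struct toy_vcpu *vcpu, bool global_avic_active)
{
	if (toy_check_request(KVM_REQ_APICV_UPDATE, vcpu))
		vcpu->avic_active = global_avic_active;
	return vcpu->avic_active;
}
```

Until toy_vcpu_enter_guest() runs, anything that consults vcpu->avic_active
(like the page fault path) sees the stale value, which is exactly the window
the scenario above exploits.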



> 
> At a minimum, I'd like to add a blurb in the kvm_faultin_pfn() comment to call out
> the reliance on mmu_notifier_seq.

This is a very good idea!


> 
> E.g. if kvm_zap_gfn_range() wins the race to acquire mmu_lock() after APICv is
> inhibited/disabled by __kvm_request_apicv_update(), then direct_page_fault() will
> retry the fault due to the change in mmu_notifier_seq.  If direct_page_fault()
> wins the race, then kvm_zap_gfn_range() will zap the freshly-installed SPTEs.
> For the uninhibit/enable case, at worst KVM will emulate an access that could have
> been accelerated by retrying the instruction.

Yes, 100% agree. 

The thing was super tricky to implement in a way that avoids the races that
otherwise happen one way or another.
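The mmu_notifier_seq interplay described above can be sketched in a few lines
(an illustrative toy model, not the real kvm_faultin_pfn()/kvm_zap_gfn_range()
code): the fault path snapshots the sequence before its sleepable work, the zap
path bumps it, and the fault only installs a SPTE if the sequence is unchanged
once mmu_lock is held.

```c
/* Toy model of the mmu_notifier_seq retry protocol. */
struct toy_mmu {
	unsigned long mmu_notifier_seq;
};

/* Fault path, step 1: snapshot the sequence before dropping into
 * sleepable context (e.g. the host pfn lookup). */
static unsigned long toy_fault_begin(struct toy_mmu *mmu)
{
	return mmu->mmu_notifier_seq;
}

/* Zap path: bump the sequence (done under mmu_lock in real KVM). */
static void toy_zap_gfn_range(struct toy_mmu *mmu)
{
	mmu->mmu_notifier_seq++;
}

/* Fault path, step 2: with mmu_lock held, only install the SPTE if no
 * zap/invalidation happened in the meantime; otherwise retry the fault. */
static int toy_fault_finish(struct toy_mmu *mmu, unsigned long snapshot)
{
	return mmu->mmu_notifier_seq == snapshot;	/* 1 = install, 0 = retry */
}
```

If the zap wins the race, the fault retries and picks up the new state; if the
fault wins, the subsequent zap is guaranteed to find the freshly-installed SPTE.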



Best regards,
	Maxim Levitsky

> 





^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 0/2] KVM: x86: Fix and cleanup for recent AVIC changes
  2021-10-12  9:53       ` Maxim Levitsky
@ 2021-10-15 16:15         ` Sean Christopherson
  2021-10-15 16:23           ` Paolo Bonzini
  0 siblings, 1 reply; 14+ messages in thread
From: Sean Christopherson @ 2021-10-15 16:15 UTC (permalink / raw)
  To: Maxim Levitsky
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

On Tue, Oct 12, 2021, Maxim Levitsky wrote:
> On Mon, 2021-10-11 at 16:58 +0000, Sean Christopherson wrote:
> > Argh, I forgot the memslot is still there, so the access won't be treated as MMIO
> > and thus won't end up in the MMIO cache.
> > 
> > So I agree that the code is functionally ok, but I'd still prefer to switch to
> > kvm_vcpu_apicv_active() so that this code is coherent with respect to the APICv
> > status at the time the fault occurred.
> > 
> > My objection to using kvm_apicv_activated() is that the result is completely
> > non-deterministic with respect to the vCPU's APICv status at the time of the
> > fault.  It works because of all the other mechanisms that are in place, e.g.
> > elevating the MMU notifier count, but the fact that the result is non-deterministic
> > means that using the per-vCPU status is also functionally ok.
> 
> The problem is that it is just not correct to use the local AVIC enable state
> to determine whether we want to populate the SPTE or just jump to emulation.
> 
> 
> For example, assuming that the AVIC is now enabled on all vCPUs,
> we can have this scenario:
> 
>     vCPU0                                   vCPU1
>     =====                                   =====
> 
> - disable AVIC
> - VMRUN
>                                         - #NPT on AVIC MMIO access
>                                         - *stuck on something prior to the page fault code*
> - enable AVIC
> - VMRUN
>                                         - *still stuck on something prior to the page fault code*
> 
> - disable AVIC:
> 
>   - raise KVM_REQ_APICV_UPDATE request
> 					
>   - set global avic state to disable
> 
>   - zap the SPTE (does nothing, doesn't race
> 	with anything either)
> 
>   - handle KVM_REQ_APICV_UPDATE -
>     - disable vCPU0 AVIC
> 
> - VMRUN
> 					- *still stuck on something prior to the page fault code*
> 
>                                                             ...
>                                                             ...
>                                                             ...
> 
>                                         - now vCPU1 finally starts running the page fault code.
> 
>                                         - vCPU1 AVIC is still enabled 
>                                           (because vCPU1 never handled KVM_REQ_APICV_UPDATE),
>                                           so the page fault code will populate the SPTE.

But vCPU1 won't install the SPTE if it loses the race to acquire mmu_lock, because
kvm_zap_gfn_range() bumps the notifier sequence and so vCPU1 will retry the fault.
If vCPU1 wins the race, i.e. sees the same sequence number, then the zap is
guaranteed to find the newly-installed SPTE.

And IMO, retrying is the desired behavior.  Installing a SPTE based on the global
state works, but it's all kinds of weird to knowingly take an action that directly
contradicts the current vCPU state.

FWIW, I had gone so far as to type this up to handle the situation you described
before remembering the sequence interaction.

		/*
		 * If the APIC access page exists but is disabled, go directly
		 * to emulation without caching the MMIO access or creating a
		 * MMIO SPTE.  That way the cache doesn't need to be purged
		 * when the AVIC is re-enabled.
		 */
		if (slot && slot->id == APIC_ACCESS_PAGE_PRIVATE_MEMSLOT) {
			/*
			 * Retry the fault if an APICv update is pending, as
			 * the kvm_zap_gfn_range() when APICv becomes inhibited
			 * may have already occurred, in which case installing
			 * a SPTE would be incorrect.
			 */
			if (!kvm_vcpu_apicv_active(vcpu)) {
				*r = RET_PF_EMULATE;
				return true;
			} else if (kvm_test_request(KVM_REQ_APICV_UPDATE, vcpu)) {
				*r = RET_PF_RETRY;
				return true;
			}
		}
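For what it's worth, the decision tree of that (ultimately abandoned) snippet
can be expressed as a pure function, which makes the three outcomes easier to
compare side by side (a sketch only; the names mirror the snippet above, not
actual KVM helpers):

```c
#include <stdbool.h>

enum toy_pf_ret { TOY_PF_CONTINUE, TOY_PF_EMULATE, TOY_PF_RETRY };

static enum toy_pf_ret toy_apic_access_fault(bool is_apic_access_slot,
					     bool vcpu_apicv_active,
					     bool apicv_update_pending)
{
	if (!is_apic_access_slot)
		return TOY_PF_CONTINUE;	/* not the APIC access page */
	if (!vcpu_apicv_active)
		return TOY_PF_EMULATE;	/* APICv off: emulate, no MMIO caching */
	if (apicv_update_pending)
		return TOY_PF_RETRY;	/* inhibit in flight: don't install a SPTE */
	return TOY_PF_CONTINUE;		/* APICv on and stable: install the SPTE */
}
```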

>                                         - handle KVM_REQ_APICV_UPDATE
>                                            - finally disable vCPU1 AVIC
> 
>                                         - VMRUN (vCPU1 AVIC disabled, SPTE populated)
> 
> 					                 ***boom***


* Re: [PATCH 0/2] KVM: x86: Fix and cleanup for recent AVIC changes
  2021-10-15 16:15         ` Sean Christopherson
@ 2021-10-15 16:23           ` Paolo Bonzini
  2021-10-15 16:36             ` Sean Christopherson
  0 siblings, 1 reply; 14+ messages in thread
From: Paolo Bonzini @ 2021-10-15 16:23 UTC (permalink / raw)
  To: Sean Christopherson, Maxim Levitsky
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On 15/10/21 18:15, Sean Christopherson wrote:
>>
>>                                          - now vCPU1 finally starts running the page fault code.
>>
>>                                          - vCPU1 AVIC is still enabled
>>                                            (because vCPU1 never handled KVM_REQ_APICV_UPDATE),
>>                                            so the page fault code will populate the SPTE.
> But vCPU1 won't install the SPTE if it loses the race to acquire mmu_lock, because
> kvm_zap_gfn_range() bumps the notifier sequence and so vCPU1 will retry the fault.
> If vCPU1 wins the race, i.e. sees the same sequence number, then the zap is
> guaranteed to find the newly-installed SPTE.
> 
> And IMO, retrying is the desired behavior.  Installing a SPTE based on the global
> state works, but it's all kinds of weird to knowingly take an action that directly
> contradicts the current vCPU state.

I think both of you are correct. :)

Installing a SPTE based on global state is weird because this is a vCPU 
action; installing it based on vCPU state is weird because it is 
knowingly out of date.  I tend to be more on Maxim's side, but that may 
be simply because I have reviewed the code earlier and the various 
interleavings are still somewhere in my brain.

It certainly deserves a comment though.  The behavior wrt the sequence 
number is particularly important if you use the vCPU state, but it's 
worth pointing out even with the current code; this exchange shows that 
it can be confusing.

Paolo



* Re: [PATCH 0/2] KVM: x86: Fix and cleanup for recent AVIC changes
  2021-10-15 16:23           ` Paolo Bonzini
@ 2021-10-15 16:36             ` Sean Christopherson
  2021-10-15 17:50               ` Paolo Bonzini
  0 siblings, 1 reply; 14+ messages in thread
From: Sean Christopherson @ 2021-10-15 16:36 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Maxim Levitsky, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

On Fri, Oct 15, 2021, Paolo Bonzini wrote:
> On 15/10/21 18:15, Sean Christopherson wrote:
> > > 
> > >                                          - now vCPU1 finally starts running the page fault code.
> > > 
> > >                                          - vCPU1 AVIC is still enabled
> > >                                            (because vCPU1 never handled KVM_REQ_APICV_UPDATE),
> > >                                            so the page fault code will populate the SPTE.
> > But vCPU1 won't install the SPTE if it loses the race to acquire mmu_lock, because
> > kvm_zap_gfn_range() bumps the notifier sequence and so vCPU1 will retry the fault.
> > If vCPU1 wins the race, i.e. sees the same sequence number, then the zap is
> > guaranteed to find the newly-installed SPTE.
> > 
> > And IMO, retrying is the desired behavior.  Installing a SPTE based on the global
> > state works, but it's all kinds of weird to knowingly take an action that directly
> > contradicts the current vCPU state.
> 
> I think both of you are correct. :)
> 
> Installing a SPTE based on global state is weird because this is a vCPU
> action; installing it based on vCPU state is weird because it is knowingly
> out of date.

If that's the argument, then kvm_faultin_page() should explicitly check for a
pending KVM_REQ_APICV_UPDATE, because I would then argue that continuing on when
KVM _knows_ its new SPTE will either get zapped (page fault wins the race) or
will get rejected (kvm_zap_gfn_range() wins the race) is just as wrong.  The SPTE
_cannot_ be used even if the page fault wins the race, because all vCPUs need to
process KVM_REQ_APICV_UPDATE and thus will be blocked until the initiating vCPU
zaps the range and drops the APICv lock.

And I personally do _not_ want to add a check for the request because it implies
the check is sufficient, which it is not, because the page fault doesn't yet hold
mmu_lock.

Since all answers are some form of wrong, IMO we should at least be coherent with
respect to the original page fault.


* Re: [PATCH 0/2] KVM: x86: Fix and cleanup for recent AVIC changes
  2021-10-15 16:36             ` Sean Christopherson
@ 2021-10-15 17:50               ` Paolo Bonzini
  0 siblings, 0 replies; 14+ messages in thread
From: Paolo Bonzini @ 2021-10-15 17:50 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Maxim Levitsky, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

On 15/10/21 18:36, Sean Christopherson wrote:
>> Installing a SPTE based on global state is weird because this is a vCPU
>> action; installing it based on vCPU state is weird because it is knowingly
>> out of date.
> If that's the argument, then kvm_faultin_page() should explicitly check for a
> pending KVM_REQ_APICV_UPDATE, because I would then argue that continuing on when
> KVM _knows_ its new SPTE will either get zapped (page fault wins the race) or
> will get rejected (kvm_zap_gfn_range() wins the race) is just as wrong.  The SPTE
> _cannot_ be used even if the page fault wins the race, because all vCPUs need to
> process KVM_REQ_APICV_UPDATE and thus will be blocked until the initiating vCPU
> zaps the range and drops the APICv lock.

Right, that was my counter-argument - no need to check for the request 
because the request "synchronizes" with the actual use of the PTE, via 
kvm_make_all_cpus_request + kvm_zap_gfn_range.

> And I personally do _not_ want to add a check for the request because it implies
> the check is sufficient, which it is not, because the page fault doesn't yet hold
> mmu_lock.

Of course, that would be even worse.

> Since all answers are some form of wrong, IMO we should at least be coherent with
> respect to the original page fault.

Okay, you win if you send a patch with a comment. :)

Paolo



end of thread, other threads:[~2021-10-15 17:50 UTC | newest]

Thread overview: 14+ messages
2021-10-09  1:01 [PATCH 0/2] KVM: x86: Fix and cleanup for recent AVIC changes Sean Christopherson
2021-10-09  1:01 ` [PATCH 1/2] KVM: x86/mmu: Use vCPU's APICv status when handling APIC_ACCESS memslot Sean Christopherson
2021-10-10 12:47   ` Maxim Levitsky
2021-10-09  1:01 ` [PATCH 2/2] KVM: x86: Simplify APICv update request logic Sean Christopherson
2021-10-10 12:49   ` Maxim Levitsky
2021-10-11 17:55     ` Sean Christopherson
2021-10-10 12:37 ` [PATCH 0/2] KVM: x86: Fix and cleanup for recent AVIC changes Maxim Levitsky
2021-10-11 14:27   ` Sean Christopherson
2021-10-11 16:58     ` Sean Christopherson
2021-10-12  9:53       ` Maxim Levitsky
2021-10-15 16:15         ` Sean Christopherson
2021-10-15 16:23           ` Paolo Bonzini
2021-10-15 16:36             ` Sean Christopherson
2021-10-15 17:50               ` Paolo Bonzini
