* [PATCH v4 00/16] My AVIC patch queue
@ 2021-08-10 20:52 Maxim Levitsky
From: Maxim Levitsky @ 2021-08-10 20:52 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Maxim Levitsky

Hi!

This is a series of bug fixes for the AVIC dynamic inhibition, written
while trying to fix as many bugs as possible in this area and to make
the conditional AVIC+SynIC enablement work.

* Patches 1,3-8 are code from Sean Christopherson which
  implements an alternative approach of inhibiting AVIC without
  disabling its memslot.

  V4: addressed review feedback.

* Patch 2 is new; it fixes a bug in the parameters passed to
  kvm_flush_remote_tlbs_with_address.

* Patches 9-10 in this series fix a race condition which can cause
  a guest's write to the APIC to be lost when that write races with
  the AVIC un-inhibition, and add a warning to catch this problem
  if it re-emerges.

  V4: applied review feedback from Paolo

* Patch 11 is the patch from Vitaly about allowing AVIC with SynIC
  as long as the guest doesn't use the AutoEOI feature. I only slightly
  changed it to expose the AutoEOI CPUID bit regardless of AVIC enablement.

  V4: fixed a race that Paolo pointed out.

* Patch 12 is a refactoring that is now possible in the SVM AVIC inhibition
  code, because the SRCU lock no longer needs to be dropped.

* Patches 13-15 fix another issue I found in the AVIC inhibition code:

  Currently avic_vcpu_load/avic_vcpu_put are called on userspace entry/exit
  from KVM (aka kvm_vcpu_get/kvm_vcpu_put), and these functions update the
  "is running" bit in the AVIC physical ID remap table and update the
  target vCPU in the iommu code.

  However, both of these functions do nothing when AVIC is inhibited,
  so the "is running" bit stays set during the exit to userspace.
  This shouldn't be a big issue, as the caller doesn't use the AVIC
  while it is inhibited, but it is still inconsistent and can trigger
  the warning about it in avic_vcpu_load.

  To be on the safe side I think it makes sense to call
  avic_vcpu_put/avic_vcpu_load when inhibiting/uninhibiting the AVIC.
  This ensures that the work these functions do is always matched
  (see the sketch after this item).

  V4: I split the single patch into 3 patches to make it easier
      to review, and applied Paolo's review feedback.
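
  The shape of the change (a minimal sketch; patch 15 below has the
  actual diff):

	/* in svm_refresh_apicv_exec_ctrl(), after updating the VMCB: */
	if (activated)
		avic_vcpu_load(vcpu, vcpu->cpu); /* sets "is running", updates IOMMU */
	else
		avic_vcpu_put(vcpu);             /* clears "is running" */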

* Patch 16 removes the pointless APIC base
  relocation from AVIC to make it consistent with the rest of KVM.

  (Both AVIC and APICv only support the default APIC base, while regular
  KVM sort of supports any APIC base as long as it is not in RAM.
  If the guest attempts to relocate the APIC base to a non-RAM area
  while APICv/AVIC is active, the new base will not be accelerated,
  while the default base will continue to be AVIC/APICv backed.)

  On top of that, if the guest uses different APIC bases on different vCPUs,
  KVM doesn't honour the fact that the MMIO range should only be active
  on the vCPU that uses that base.

Best regards,
	Maxim Levitsky

Maxim Levitsky (14):
  KVM: x86/mmu: fix parameters to kvm_flush_remote_tlbs_with_address
  KVM: x86/mmu: add comment explaining arguments to kvm_zap_gfn_range
  KVM: x86/mmu: bump mmu notifier count in kvm_zap_gfn_range
  KVM: x86/mmu: rename try_async_pf to kvm_faultin_pfn
  KVM: x86/mmu: allow kvm_faultin_pfn to return page fault handling code
  KVM: x86/mmu: allow APICv memslot to be enabled but invisible
  KVM: x86: don't disable APICv memslot when inhibited
  KVM: x86: APICv: fix race in kvm_request_apicv_update on SVM
  KVM: SVM: add warning for mismatch between AVIC vcpu state and AVIC
    inhibition
  KVM: SVM: remove svm_toggle_avic_for_irq_window
  KVM: SVM: avoid refreshing avic if its state didn't change
  KVM: SVM: move check for kvm_vcpu_apicv_active outside of
    avic_vcpu_{put|load}
  KVM: SVM: call avic_vcpu_load/avic_vcpu_put when enabling/disabling
    AVIC
  KVM: SVM: AVIC: drop unsupported AVIC base relocation code

Sean Christopherson (1):
  Revert "KVM: x86/mmu: Allow zap gfn range to operate under the mmu
    read lock"

Vitaly Kuznetsov (1):
  KVM: x86: hyper-v: Deactivate APICv only when AutoEOI feature is in
    use

 arch/x86/include/asm/kvm-x86-ops.h |  1 -
 arch/x86/include/asm/kvm_host.h    | 13 +++++-
 arch/x86/kvm/hyperv.c              | 32 ++++++++++---
 arch/x86/kvm/mmu/mmu.c             | 75 ++++++++++++++++++++----------
 arch/x86/kvm/mmu/paging_tmpl.h     |  6 +--
 arch/x86/kvm/mmu/tdp_mmu.c         | 15 ++----
 arch/x86/kvm/mmu/tdp_mmu.h         | 11 ++---
 arch/x86/kvm/svm/avic.c            | 49 +++++++------------
 arch/x86/kvm/svm/svm.c             | 21 ++++-----
 arch/x86/kvm/svm/svm.h             |  8 ----
 arch/x86/kvm/x86.c                 | 67 +++++++++++++++-----------
 include/linux/kvm_host.h           |  5 ++
 virt/kvm/kvm_main.c                |  7 ++-
 13 files changed, 174 insertions(+), 136 deletions(-)

-- 
2.26.3




* [PATCH v4 01/16] Revert "KVM: x86/mmu: Allow zap gfn range to operate under the mmu read lock"
From: Maxim Levitsky @ 2021-08-10 20:52 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Maxim Levitsky

From: Sean Christopherson <seanjc@google.com>

This, together with the next patch, will fix a future race between
kvm_zap_gfn_range and the page fault handler, which will happen
once the AVIC memslot is only partially disabled.

The performance impact is minimal since kvm_zap_gfn_range is only
called by two users, update_mtrr() and kvm_post_set_cr0().

Both only use it if the guest has non-coherent DMA, in order to
honor the guest's UC memtype.

MTRR and CD setup only happens at boot, and generally in an area
where the page tables should be small (for CD) or should not
include the affected GFNs at all (for MTRRs).

This is based on a patch suggested by Sean Christopherson:
https://lkml.org/lkml/2021/7/22/1025

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c     | 19 ++++++++-----------
 arch/x86/kvm/mmu/tdp_mmu.c | 15 ++++-----------
 arch/x86/kvm/mmu/tdp_mmu.h | 11 ++++-------
 3 files changed, 16 insertions(+), 29 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d574c68cbc5c..9e3839971844 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5654,8 +5654,9 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 	int i;
 	bool flush = false;
 
+	write_lock(&kvm->mmu_lock);
+
 	if (kvm_memslots_have_rmaps(kvm)) {
-		write_lock(&kvm->mmu_lock);
 		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
 			slots = __kvm_memslots(kvm, i);
 			kvm_for_each_memslot(memslot, slots) {
@@ -5675,22 +5676,18 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 		}
 		if (flush)
 			kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end);
-		write_unlock(&kvm->mmu_lock);
 	}
 
 	if (is_tdp_mmu_enabled(kvm)) {
-		flush = false;
-
-		read_lock(&kvm->mmu_lock);
 		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
 			flush = kvm_tdp_mmu_zap_gfn_range(kvm, i, gfn_start,
-							  gfn_end, flush, true);
-		if (flush)
-			kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
-							   gfn_end);
-
-		read_unlock(&kvm->mmu_lock);
+							  gfn_end, flush);
 	}
+
+	if (flush)
+		kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end);
+
+	write_unlock(&kvm->mmu_lock);
 }
 
 static bool slot_rmap_write_protect(struct kvm *kvm,
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index dab6cb46cdb2..133d94e30c2b 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -802,21 +802,15 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
  * non-root pages mapping GFNs strictly within that range. Returns true if
  * SPTEs have been cleared and a TLB flush is needed before releasing the
  * MMU lock.
- *
- * If shared is true, this thread holds the MMU lock in read mode and must
- * account for the possibility that other threads are modifying the paging
- * structures concurrently. If shared is false, this thread should hold the
- * MMU in write mode.
  */
 bool __kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, int as_id, gfn_t start,
-				 gfn_t end, bool can_yield, bool flush,
-				 bool shared)
+				 gfn_t end, bool can_yield, bool flush)
 {
 	struct kvm_mmu_page *root;
 
-	for_each_tdp_mmu_root_yield_safe(kvm, root, as_id, shared)
+	for_each_tdp_mmu_root_yield_safe(kvm, root, as_id, false)
 		flush = zap_gfn_range(kvm, root, start, end, can_yield, flush,
-				      shared);
+				      false);
 
 	return flush;
 }
@@ -828,8 +822,7 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
 	int i;
 
 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
-		flush = kvm_tdp_mmu_zap_gfn_range(kvm, i, 0, max_gfn,
-						  flush, false);
+		flush = kvm_tdp_mmu_zap_gfn_range(kvm, i, 0, max_gfn, flush);
 
 	if (flush)
 		kvm_flush_remote_tlbs(kvm);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index b224d126adf9..358f447d4012 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -20,14 +20,11 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
 			  bool shared);
 
 bool __kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, int as_id, gfn_t start,
-				 gfn_t end, bool can_yield, bool flush,
-				 bool shared);
+				 gfn_t end, bool can_yield, bool flush);
 static inline bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, int as_id,
-					     gfn_t start, gfn_t end, bool flush,
-					     bool shared)
+					     gfn_t start, gfn_t end, bool flush)
 {
-	return __kvm_tdp_mmu_zap_gfn_range(kvm, as_id, start, end, true, flush,
-					   shared);
+	return __kvm_tdp_mmu_zap_gfn_range(kvm, as_id, start, end, true, flush);
 }
 static inline bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
@@ -44,7 +41,7 @@ static inline bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 	 */
 	lockdep_assert_held_write(&kvm->mmu_lock);
 	return __kvm_tdp_mmu_zap_gfn_range(kvm, kvm_mmu_page_as_id(sp),
-					   sp->gfn, end, false, false, false);
+					   sp->gfn, end, false, false);
 }
 
 void kvm_tdp_mmu_zap_all(struct kvm *kvm);
-- 
2.26.3



* [PATCH v4 02/16] KVM: x86/mmu: fix parameters to kvm_flush_remote_tlbs_with_address
From: Maxim Levitsky @ 2021-08-10 20:52 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Maxim Levitsky

kvm_flush_remote_tlbs_with_address expects (start gfn, number of pages),
and not (start gfn, end gfn).
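
For example, with gfn_end exclusive, flushing the range [gfn_start, gfn_end)
needs the page count, not the end GFN:

	/* wrong: passes the end GFN where a page count is expected */
	kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end);

	/* right: passes the number of pages in the range */
	kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end - gfn_start);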

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 9e3839971844..3080e25c8a3a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5675,13 +5675,17 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 			}
 		}
 		if (flush)
-			kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end);
+			kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
+							   gfn_end - gfn_start);
 	}
 
 	if (is_tdp_mmu_enabled(kvm)) {
 		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
 			flush = kvm_tdp_mmu_zap_gfn_range(kvm, i, gfn_start,
 							  gfn_end, flush);
+		if (flush)
+			kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
+							   gfn_end - gfn_start);
 	}
 
 	if (flush)
-- 
2.26.3



* [PATCH v4 03/16] KVM: x86/mmu: add comment explaining arguments to kvm_zap_gfn_range
From: Maxim Levitsky @ 2021-08-10 20:52 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Maxim Levitsky

This comment makes it clear that the range of GFNs that this
function receives is exclusive of the end GFN.
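
For example, zapping the single GFN 'gfn' requires passing an exclusive
end, as a later patch in this series does for the APIC base:

	kvm_zap_gfn_range(kvm, gfn, gfn + 1);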

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3080e25c8a3a..d4e22a9635a9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5647,6 +5647,10 @@ void kvm_mmu_uninit_vm(struct kvm *kvm)
 	kvm_mmu_uninit_tdp_mmu(kvm);
 }
 
+/*
+ * Invalidate (zap) SPTEs that cover GFNs from gfn_start and up to gfn_end
+ * (not including it)
+ */
 void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 {
 	struct kvm_memslots *slots;
-- 
2.26.3



* [PATCH v4 04/16] KVM: x86/mmu: bump mmu notifier count in kvm_zap_gfn_range
From: Maxim Levitsky @ 2021-08-10 20:52 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Maxim Levitsky

This, together with the previous patch, ensures that
kvm_zap_gfn_range doesn't race with a page fault
running on another vCPU, and makes that page fault code
retry instead.

This is based on a patch suggested by Sean Christopherson:
https://lkml.org/lkml/2021/7/22/1025
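
For reference, this is roughly how the page fault path then detects the
in-progress zap and retries (a simplified sketch of the check, not the
exact upstream helper): the fault handler samples mmu_notifier_seq before
faulting the pfn in, and re-checks under mmu_lock:

	/* under mmu_lock, before installing the SPTE */
	if (kvm->mmu_notifier_count || kvm->mmu_notifier_seq != mmu_seq)
		return RET_PF_RETRY; /* invalidation in flight, e.g. kvm_zap_gfn_range */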

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c   | 4 ++++
 include/linux/kvm_host.h | 5 +++++
 virt/kvm/kvm_main.c      | 7 +++++--
 3 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d4e22a9635a9..abaf8e661c61 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5660,6 +5660,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 
 	write_lock(&kvm->mmu_lock);
 
+	kvm_inc_notifier_count(kvm, gfn_start, gfn_end);
+
 	if (kvm_memslots_have_rmaps(kvm)) {
 		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
 			slots = __kvm_memslots(kvm, i);
@@ -5695,6 +5697,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 	if (flush)
 		kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end);
 
+	kvm_dec_notifier_count(kvm, gfn_start, gfn_end);
+
 	write_unlock(&kvm->mmu_lock);
 }
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f50bfcf225f0..4e43843fe0d7 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -991,6 +991,11 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
 void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 #endif
 
+void kvm_inc_notifier_count(struct kvm *kvm, unsigned long start,
+				   unsigned long end);
+void kvm_dec_notifier_count(struct kvm *kvm, unsigned long start,
+				   unsigned long end);
+
 long kvm_arch_dev_ioctl(struct file *filp,
 			unsigned int ioctl, unsigned long arg);
 long kvm_arch_vcpu_ioctl(struct file *filp,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a438a7a3774a..46f55e860b8b 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -610,7 +610,7 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
 	kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn);
 }
 
-static void kvm_inc_notifier_count(struct kvm *kvm, unsigned long start,
+void kvm_inc_notifier_count(struct kvm *kvm, unsigned long start,
 				   unsigned long end)
 {
 	/*
@@ -638,6 +638,7 @@ static void kvm_inc_notifier_count(struct kvm *kvm, unsigned long start,
 			max(kvm->mmu_notifier_range_end, end);
 	}
 }
+EXPORT_SYMBOL_GPL(kvm_inc_notifier_count);
 
 static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 					const struct mmu_notifier_range *range)
@@ -672,7 +673,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 	return 0;
 }
 
-static void kvm_dec_notifier_count(struct kvm *kvm, unsigned long start,
+void kvm_dec_notifier_count(struct kvm *kvm, unsigned long start,
 				   unsigned long end)
 {
 	/*
@@ -689,6 +690,8 @@ static void kvm_dec_notifier_count(struct kvm *kvm, unsigned long start,
 	 */
 	kvm->mmu_notifier_count--;
 }
+EXPORT_SYMBOL_GPL(kvm_dec_notifier_count);
+
 
 static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
 					const struct mmu_notifier_range *range)
-- 
2.26.3



* [PATCH v4 05/16] KVM: x86/mmu: rename try_async_pf to kvm_faultin_pfn
From: Maxim Levitsky @ 2021-08-10 20:52 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Maxim Levitsky

try_async_pf is a misleading name for this function, since this code
is also used when asynchronous page faults are not enabled.

This code is based on a patch from Sean Christopherson:
https://lkml.org/lkml/2021/7/19/2970

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c         | 4 ++--
 arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index abaf8e661c61..15a396af75e7 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3858,7 +3858,7 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 				  kvm_vcpu_gfn_to_hva(vcpu, gfn), &arch);
 }
 
-static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
+static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 			 gpa_t cr2_or_gpa, kvm_pfn_t *pfn, hva_t *hva,
 			 bool write, bool *writable)
 {
@@ -3928,7 +3928,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 	mmu_seq = vcpu->kvm->mmu_notifier_seq;
 	smp_rmb();
 
-	if (try_async_pf(vcpu, prefault, gfn, gpa, &pfn, &hva,
+	if (kvm_faultin_pfn(vcpu, prefault, gfn, gpa, &pfn, &hva,
 			 write, &map_writable))
 		return RET_PF_RETRY;
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index ee044d357b5f..f349eae69bf3 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -881,7 +881,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
 	mmu_seq = vcpu->kvm->mmu_notifier_seq;
 	smp_rmb();
 
-	if (try_async_pf(vcpu, prefault, walker.gfn, addr, &pfn, &hva,
+	if (kvm_faultin_pfn(vcpu, prefault, walker.gfn, addr, &pfn, &hva,
 			 write_fault, &map_writable))
 		return RET_PF_RETRY;
 
-- 
2.26.3



* [PATCH v4 06/16] KVM: x86/mmu: allow kvm_faultin_pfn to return page fault handling code
From: Maxim Levitsky @ 2021-08-10 20:52 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Maxim Levitsky

This will allow it to return RET_PF_EMULATE for APIC MMIO
emulation.

This code is based on a patch from Sean Christopherson:
https://lkml.org/lkml/2021/7/19/2970

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c         | 17 ++++++++++-------
 arch/x86/kvm/mmu/paging_tmpl.h |  4 ++--
 2 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 15a396af75e7..6d6ad222f114 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3860,7 +3860,7 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 
 static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 			 gpa_t cr2_or_gpa, kvm_pfn_t *pfn, hva_t *hva,
-			 bool write, bool *writable)
+			 bool write, bool *writable, int *r)
 {
 	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	bool async;
@@ -3871,7 +3871,7 @@ static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 	 * be zapped before KVM inserts a new MMIO SPTE for the gfn.
 	 */
 	if (slot && (slot->flags & KVM_MEMSLOT_INVALID))
-		return true;
+		goto out_retry;
 
 	/* Don't expose private memslots to L2. */
 	if (is_guest_mode(vcpu) && !kvm_is_visible_memslot(slot)) {
@@ -3891,14 +3891,17 @@ static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 		if (kvm_find_async_pf_gfn(vcpu, gfn)) {
 			trace_kvm_async_pf_doublefault(cr2_or_gpa, gfn);
 			kvm_make_request(KVM_REQ_APF_HALT, vcpu);
-			return true;
+			goto out_retry;
 		} else if (kvm_arch_setup_async_pf(vcpu, cr2_or_gpa, gfn))
-			return true;
+			goto out_retry;
 	}
 
 	*pfn = __gfn_to_pfn_memslot(slot, gfn, false, NULL,
 				    write, writable, hva);
-	return false;
+
+out_retry:
+	*r = RET_PF_RETRY;
+	return true;
 }
 
 static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
@@ -3929,8 +3932,8 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 	smp_rmb();
 
 	if (kvm_faultin_pfn(vcpu, prefault, gfn, gpa, &pfn, &hva,
-			 write, &map_writable))
-		return RET_PF_RETRY;
+			 write, &map_writable, &r))
+		return r;
 
 	if (handle_abnormal_pfn(vcpu, is_tdp ? 0 : gpa, gfn, pfn, ACC_ALL, &r))
 		return r;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index f349eae69bf3..7d03e9b7ccfa 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -882,8 +882,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
 	smp_rmb();
 
 	if (kvm_faultin_pfn(vcpu, prefault, walker.gfn, addr, &pfn, &hva,
-			 write_fault, &map_writable))
-		return RET_PF_RETRY;
+			 write_fault, &map_writable, &r))
+		return r;
 
 	if (handle_abnormal_pfn(vcpu, addr, walker.gfn, pfn, walker.pte_access, &r))
 		return r;
-- 
2.26.3



* [PATCH v4 07/16] KVM: x86/mmu: allow APICv memslot to be enabled but invisible
From: Maxim Levitsky @ 2021-08-10 20:52 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Maxim Levitsky

On AMD, APIC virtualization needs to dynamically inhibit the AVIC in
response to some events, and doing that by enabling/disabling the memslot
that covers the APIC's MMIO range is problematic and inefficient.

In addition, due to SRCU locking, toggling the memslot makes it more
complex to request AVIC inhibition.

Instead, the APIC memslot will always be enabled, but be invisible
to the guest, such that the MMU code will not install an SPTE for it
while the AVIC is inhibited, and will instead jump straight to emulating
the access.

When inhibiting the AVIC, this SPTE will be zapped.
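
The zap itself is a one-page kvm_zap_gfn_range call on the APIC base,
done in a later patch of this series:

	unsigned long gfn = gpa_to_gfn(APIC_DEFAULT_PHYS_BASE);

	kvm_zap_gfn_range(kvm, gfn, gfn + 1);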

This code is based on a suggestion from Sean Christopherson:
https://lkml.org/lkml/2021/7/19/2970

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6d6ad222f114..bfc94d8bd9f2 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3873,11 +3873,24 @@ static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 	if (slot && (slot->flags & KVM_MEMSLOT_INVALID))
 		goto out_retry;
 
-	/* Don't expose private memslots to L2. */
-	if (is_guest_mode(vcpu) && !kvm_is_visible_memslot(slot)) {
-		*pfn = KVM_PFN_NOSLOT;
-		*writable = false;
-		return false;
+	if (!kvm_is_visible_memslot(slot)) {
+		/* Don't expose private memslots to L2. */
+		if (is_guest_mode(vcpu)) {
+			*pfn = KVM_PFN_NOSLOT;
+			*writable = false;
+			return false;
+		}
+		/*
+		 * If the APIC access page exists but is disabled, go directly
+		 * to emulation without caching the MMIO access or creating a
+		 * MMIO SPTE.  That way the cache doesn't need to be purged
+		 * when the AVIC is re-enabled.
+		 */
+		if (slot && slot->id == APIC_ACCESS_PAGE_PRIVATE_MEMSLOT &&
+		    !kvm_apicv_activated(vcpu->kvm)) {
+			*r = RET_PF_EMULATE;
+			return true;
+		}
 	}
 
 	async = false;
-- 
2.26.3



* [PATCH v4 08/16] KVM: x86: don't disable APICv memslot when inhibited
From: Maxim Levitsky @ 2021-08-10 20:52 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Maxim Levitsky

Thanks to the previous patches, it is now possible to keep the APICv
memslot always enabled; it will simply be invisible to the guest
while APICv is inhibited.

This code is based on a suggestion from Sean Christopherson:
https://lkml.org/lkml/2021/7/19/2970

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 -
 arch/x86/include/asm/kvm_host.h    |  1 -
 arch/x86/kvm/svm/avic.c            | 21 ++++++---------------
 arch/x86/kvm/svm/svm.c             |  1 -
 arch/x86/kvm/svm/svm.h             |  1 -
 arch/x86/kvm/x86.c                 | 21 ++++++++-------------
 6 files changed, 14 insertions(+), 32 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index a12a4987154e..cefe1d81e2e8 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -72,7 +72,6 @@ KVM_X86_OP(enable_nmi_window)
 KVM_X86_OP(enable_irq_window)
 KVM_X86_OP(update_cr8_intercept)
 KVM_X86_OP(check_apicv_inhibit_reasons)
-KVM_X86_OP_NULL(pre_update_apicv_exec_ctrl)
 KVM_X86_OP(refresh_apicv_exec_ctrl)
 KVM_X86_OP(hwapic_irr_update)
 KVM_X86_OP(hwapic_isr_update)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4c567b05edad..d800ee894c92 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1346,7 +1346,6 @@ struct kvm_x86_ops {
 	void (*enable_irq_window)(struct kvm_vcpu *vcpu);
 	void (*update_cr8_intercept)(struct kvm_vcpu *vcpu, int tpr, int irr);
 	bool (*check_apicv_inhibit_reasons)(ulong bit);
-	void (*pre_update_apicv_exec_ctrl)(struct kvm *kvm, bool activate);
 	void (*refresh_apicv_exec_ctrl)(struct kvm_vcpu *vcpu);
 	void (*hwapic_irr_update)(struct kvm_vcpu *vcpu, int max_irr);
 	void (*hwapic_isr_update)(struct kvm_vcpu *vcpu, int isr);
diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index a8ad78a2faa1..d0acbeeab3d6 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -225,31 +225,26 @@ static u64 *avic_get_physical_id_entry(struct kvm_vcpu *vcpu,
  * field of the VMCB. Therefore, we set up the
  * APIC_ACCESS_PAGE_PRIVATE_MEMSLOT (4KB) here.
  */
-static int avic_update_access_page(struct kvm *kvm, bool activate)
+static int avic_alloc_access_page(struct kvm *kvm)
 {
 	void __user *ret;
 	int r = 0;
 
 	mutex_lock(&kvm->slots_lock);
-	/*
-	 * During kvm_destroy_vm(), kvm_pit_set_reinject() could trigger
-	 * APICv mode change, which update APIC_ACCESS_PAGE_PRIVATE_MEMSLOT
-	 * memory region. So, we need to ensure that kvm->mm == current->mm.
-	 */
-	if ((kvm->arch.apic_access_memslot_enabled == activate) ||
-	    (kvm->mm != current->mm))
+
+	if (kvm->arch.apic_access_memslot_enabled)
 		goto out;
 
 	ret = __x86_set_memory_region(kvm,
 				      APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
 				      APIC_DEFAULT_PHYS_BASE,
-				      activate ? PAGE_SIZE : 0);
+				      PAGE_SIZE);
 	if (IS_ERR(ret)) {
 		r = PTR_ERR(ret);
 		goto out;
 	}
 
-	kvm->arch.apic_access_memslot_enabled = activate;
+	kvm->arch.apic_access_memslot_enabled = true;
 out:
 	mutex_unlock(&kvm->slots_lock);
 	return r;
@@ -270,7 +265,7 @@ static int avic_init_backing_page(struct kvm_vcpu *vcpu)
 	if (kvm_apicv_activated(vcpu->kvm)) {
 		int ret;
 
-		ret = avic_update_access_page(vcpu->kvm, true);
+		ret = avic_alloc_access_page(vcpu->kvm);
 		if (ret)
 			return ret;
 	}
@@ -918,10 +913,6 @@ bool svm_check_apicv_inhibit_reasons(ulong bit)
 	return supported & BIT(bit);
 }
 
-void svm_pre_update_apicv_exec_ctrl(struct kvm *kvm, bool activate)
-{
-	avic_update_access_page(kvm, activate);
-}
 
 static inline int
 avic_update_iommu_vcpu_affinity(struct kvm_vcpu *vcpu, int cpu, bool r)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 9d72b1df426e..4feff53dd1d3 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4583,7 +4583,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.set_virtual_apic_mode = svm_set_virtual_apic_mode,
 	.refresh_apicv_exec_ctrl = svm_refresh_apicv_exec_ctrl,
 	.check_apicv_inhibit_reasons = svm_check_apicv_inhibit_reasons,
-	.pre_update_apicv_exec_ctrl = svm_pre_update_apicv_exec_ctrl,
 	.load_eoi_exitmap = svm_load_eoi_exitmap,
 	.hwapic_irr_update = svm_hwapic_irr_update,
 	.hwapic_isr_update = svm_hwapic_isr_update,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index bd0fe94c2920..bd41f2a32838 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -534,7 +534,6 @@ void avic_post_state_restore(struct kvm_vcpu *vcpu);
 void svm_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
 void svm_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu);
 bool svm_check_apicv_inhibit_reasons(ulong bit);
-void svm_pre_update_apicv_exec_ctrl(struct kvm *kvm, bool activate);
 void svm_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
 void svm_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr);
 void svm_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index df71f5e3e23b..b8952001ee44 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9255,13 +9255,6 @@ void kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_update_apicv);
 
-/*
- * NOTE: Do not hold any lock prior to calling this.
- *
- * In particular, kvm_request_apicv_update() expects kvm->srcu not to be
- * locked, because it calls __x86_set_memory_region() which does
- * synchronize_srcu(&kvm->srcu).
- */
 void kvm_request_apicv_update(struct kvm *kvm, bool activate, ulong bit)
 {
 	unsigned long old, new, expected;
@@ -9282,14 +9275,16 @@ void kvm_request_apicv_update(struct kvm *kvm, bool activate, ulong bit)
 		old = cmpxchg(&kvm->arch.apicv_inhibit_reasons, expected, new);
 	} while (old != expected);
 
-	if (!!old == !!new)
-		return;
+	if (!!old != !!new) {
+		trace_kvm_apicv_update_request(activate, bit);
+		kvm_make_all_cpus_request(kvm, KVM_REQ_APICV_UPDATE);
+		if (new) {
+			unsigned long gfn = gpa_to_gfn(APIC_DEFAULT_PHYS_BASE);
 
-	trace_kvm_apicv_update_request(activate, bit);
-	if (kvm_x86_ops.pre_update_apicv_exec_ctrl)
-		static_call(kvm_x86_pre_update_apicv_exec_ctrl)(kvm, activate);
+			kvm_zap_gfn_range(kvm, gfn, gfn+1);
+		}
+	}
 
-	kvm_make_all_cpus_request(kvm, KVM_REQ_APICV_UPDATE);
 }
 EXPORT_SYMBOL_GPL(kvm_request_apicv_update);
 
-- 
2.26.3



* [PATCH v4 09/16] KVM: x86: APICv: fix race in kvm_request_apicv_update on SVM
From: Maxim Levitsky @ 2021-08-10 20:52 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Maxim Levitsky

Currently on SVM, kvm_request_apicv_update toggles the APICv
memslot without doing any synchronization.

If there is a mismatch between that memslot state and the AVIC state
on one of the vCPUs, an APIC MMIO access can be lost:

For example:

VCPU0: enable the APIC_ACCESS_PAGE_PRIVATE_MEMSLOT
VCPU1: access an APIC MMIO register.

Since AVIC is still disabled on VCPU1, the access will not be intercepted
by it, and neither will it cause an MMIO fault; instead it will just be
read/written from/to the dummy page mapped into the
APIC_ACCESS_PAGE_PRIVATE_MEMSLOT.

Fix that by adding a lock guarding the AVIC state changes, and by
carefully ordering the operations of kvm_request_apicv_update to avoid
this race:

1. Take the lock
2. Send KVM_REQ_APICV_UPDATE
3. Update the apic inhibit reason
4. Release the lock

This ensures that at (2) all vCPUs are kicked out of guest mode,
but don't yet see the new AVIC state.
Then only after (4) can all other vCPUs update their AVIC state and resume.
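
In code form, the ordering is roughly (the diff below is the real
implementation):

	mutex_lock(&kvm->arch.apicv_update_lock);
	kvm_make_all_cpus_request(kvm, KVM_REQ_APICV_UPDATE); /* kick vCPUs out of the guest */
	kvm->arch.apicv_inhibit_reasons = new;  /* only then publish the new state */
	mutex_unlock(&kvm->arch.apicv_update_lock);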

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |  6 +++++
 arch/x86/kvm/x86.c              | 39 ++++++++++++++++++++-------------
 2 files changed, 30 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d800ee894c92..27627e32f362 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1047,6 +1047,9 @@ struct kvm_arch {
 	struct kvm_apic_map __rcu *apic_map;
 	atomic_t apic_map_dirty;
 
+	/* Protects apic_access_memslot_enabled and apicv_inhibit_reasons */
+	struct mutex apicv_update_lock;
+
 	bool apic_access_memslot_enabled;
 	unsigned long apicv_inhibit_reasons;
 
@@ -1730,6 +1733,9 @@ void kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu);
 void kvm_request_apicv_update(struct kvm *kvm, bool activate,
 			      unsigned long bit);
 
+void __kvm_request_apicv_update(struct kvm *kvm, bool activate,
+				unsigned long bit);
+
 int kvm_emulate_hypercall(struct kvm_vcpu *vcpu);
 
 int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 error_code,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b8952001ee44..b6185921fe44 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8579,6 +8579,8 @@ EXPORT_SYMBOL_GPL(kvm_apicv_activated);
 
 static void kvm_apicv_init(struct kvm *kvm)
 {
+	mutex_init(&kvm->arch.apicv_update_lock);
+
 	if (enable_apicv)
 		clear_bit(APICV_INHIBIT_REASON_DISABLE,
 			  &kvm->arch.apicv_inhibit_reasons);
@@ -9240,6 +9242,8 @@ void kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu)
 	if (!lapic_in_kernel(vcpu))
 		return;
 
+	mutex_lock(&vcpu->kvm->arch.apicv_update_lock);
+
 	vcpu->arch.apicv_active = kvm_apicv_activated(vcpu->kvm);
 	kvm_apic_update_apicv(vcpu);
 	static_call(kvm_x86_refresh_apicv_exec_ctrl)(vcpu);
@@ -9252,39 +9256,44 @@ void kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu)
 	 */
 	if (!vcpu->arch.apicv_active)
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
+
+	mutex_unlock(&vcpu->kvm->arch.apicv_update_lock);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_update_apicv);
 
-void kvm_request_apicv_update(struct kvm *kvm, bool activate, ulong bit)
+void __kvm_request_apicv_update(struct kvm *kvm, bool activate, ulong bit)
 {
-	unsigned long old, new, expected;
+	unsigned long old, new;
 
 	if (!kvm_x86_ops.check_apicv_inhibit_reasons ||
 	    !static_call(kvm_x86_check_apicv_inhibit_reasons)(bit))
 		return;
 
-	old = READ_ONCE(kvm->arch.apicv_inhibit_reasons);
-	do {
-		expected = new = old;
-		if (activate)
-			__clear_bit(bit, &new);
-		else
-			__set_bit(bit, &new);
-		if (new == old)
-			break;
-		old = cmpxchg(&kvm->arch.apicv_inhibit_reasons, expected, new);
-	} while (old != expected);
+	old = new = kvm->arch.apicv_inhibit_reasons;
+
+	if (activate)
+		__clear_bit(bit, &new);
+	else
+		__set_bit(bit, &new);
 
 	if (!!old != !!new) {
 		trace_kvm_apicv_update_request(activate, bit);
 		kvm_make_all_cpus_request(kvm, KVM_REQ_APICV_UPDATE);
+		kvm->arch.apicv_inhibit_reasons = new;
 		if (new) {
 			unsigned long gfn = gpa_to_gfn(APIC_DEFAULT_PHYS_BASE);
-
 			kvm_zap_gfn_range(kvm, gfn, gfn+1);
 		}
-	}
+	} else
+		kvm->arch.apicv_inhibit_reasons = new;
+}
+EXPORT_SYMBOL_GPL(__kvm_request_apicv_update);
 
+void kvm_request_apicv_update(struct kvm *kvm, bool activate, ulong bit)
+{
+	mutex_lock(&kvm->arch.apicv_update_lock);
+	__kvm_request_apicv_update(kvm, activate, bit);
+	mutex_unlock(&kvm->arch.apicv_update_lock);
 }
 EXPORT_SYMBOL_GPL(kvm_request_apicv_update);
 
-- 
2.26.3



* [PATCH v4 10/16] KVM: SVM: add warning for mismatch between AVIC vcpu state and AVIC inhibition
From: Maxim Levitsky @ 2021-08-10 20:52 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Maxim Levitsky

It is never a good idea to enter a guest on a vCPU when the
AVIC inhibition state doesn't match the enablement of
the AVIC on the vCPU.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4feff53dd1d3..6cb7ffbde03b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3781,6 +3781,8 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	pre_svm_run(vcpu);
 
+	WARN_ON_ONCE(kvm_apicv_activated(vcpu->kvm) != kvm_vcpu_apicv_active(vcpu));
+
 	sync_lapic_to_cr8(vcpu);
 
 	if (unlikely(svm->asid != svm->vmcb->control.asid)) {
-- 
2.26.3



* [PATCH v4 11/16] KVM: x86: hyper-v: Deactivate APICv only when AutoEOI feature is in use
From: Maxim Levitsky @ 2021-08-10 20:52 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Maxim Levitsky

From: Vitaly Kuznetsov <vkuznets@redhat.com>

APICV_INHIBIT_REASON_HYPERV is currently unconditionally forced upon
SynIC activation as SynIC's AutoEOI is incompatible with APICv/AVIC. It is,
however, possible to track whether the feature was actually used by the
guest and only inhibit APICv/AVIC when needed.
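
The gist of the tracking (condensed from the diff below): count how many
SynIC vectors have AutoEOI enabled, and keep APICv/AVIC inhibited only
while that count is non-zero:

	mutex_lock(&vcpu->kvm->arch.apicv_update_lock);

	if (auto_eoi_new)
		hv->synic_auto_eoi_used++;
	else
		hv->synic_auto_eoi_used--;

	__kvm_request_apicv_update(vcpu->kvm,
				   !hv->synic_auto_eoi_used,
				   APICV_INHIBIT_REASON_HYPERV);

	mutex_unlock(&vcpu->kvm->arch.apicv_update_lock);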

TLFS suggests a dedicated 'HV_DEPRECATING_AEOI_RECOMMENDED' flag to let
Windows know that the AutoEOI feature should be avoided. While it's up to
KVM userspace to set the flag, KVM can help a bit by exposing global
APICv/AVIC enablement.

Maxim:
   - always set HV_DEPRECATING_AEOI_RECOMMENDED in kvm_get_hv_cpuid,
     since this feature can be used regardless of AVIC

Paolo:
   - use arch.apicv_update_lock to protect the hv->synic_auto_eoi_used
     instead of atomic ops

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |  6 ++++++
 arch/x86/kvm/hyperv.c           | 32 ++++++++++++++++++++++++++------
 2 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 27627e32f362..a0ddff8d13c4 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -982,6 +982,12 @@ struct kvm_hv {
 	/* How many vCPUs have VP index != vCPU index */
 	atomic_t num_mismatched_vp_indexes;
 
+	/*
+	 * How many SynICs use 'AutoEOI' feature
+	 * (protected by arch.apicv_update_lock)
+	 */
+	unsigned int synic_auto_eoi_used;
+
 	struct hv_partition_assist_pg *hv_pa_pg;
 	struct kvm_hv_syndbg hv_syndbg;
 };
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index b07592ca92f0..031ea0fb3e2f 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -88,6 +88,10 @@ static bool synic_has_vector_auto_eoi(struct kvm_vcpu_hv_synic *synic,
 static void synic_update_vector(struct kvm_vcpu_hv_synic *synic,
 				int vector)
 {
+	struct kvm_vcpu *vcpu = hv_synic_to_vcpu(synic);
+	struct kvm_hv *hv = to_kvm_hv(vcpu->kvm);
+	int auto_eoi_old, auto_eoi_new;
+
 	if (vector < HV_SYNIC_FIRST_VALID_VECTOR)
 		return;
 
@@ -96,10 +100,30 @@ static void synic_update_vector(struct kvm_vcpu_hv_synic *synic,
 	else
 		__clear_bit(vector, synic->vec_bitmap);
 
+	auto_eoi_old = bitmap_weight(synic->auto_eoi_bitmap, 256);
+
 	if (synic_has_vector_auto_eoi(synic, vector))
 		__set_bit(vector, synic->auto_eoi_bitmap);
 	else
 		__clear_bit(vector, synic->auto_eoi_bitmap);
+
+	auto_eoi_new = bitmap_weight(synic->auto_eoi_bitmap, 256);
+
+	if (!!auto_eoi_old == !!auto_eoi_new)
+		return;
+
+	mutex_lock(&vcpu->kvm->arch.apicv_update_lock);
+
+	if (auto_eoi_new)
+		hv->synic_auto_eoi_used++;
+	else
+		hv->synic_auto_eoi_used--;
+
+	__kvm_request_apicv_update(vcpu->kvm,
+				   !hv->synic_auto_eoi_used,
+				   APICV_INHIBIT_REASON_HYPERV);
+
+	mutex_unlock(&vcpu->kvm->arch.apicv_update_lock);
 }
 
 static int synic_set_sint(struct kvm_vcpu_hv_synic *synic, int sint,
@@ -933,12 +957,6 @@ int kvm_hv_activate_synic(struct kvm_vcpu *vcpu, bool dont_zero_synic_pages)
 
 	synic = to_hv_synic(vcpu);
 
-	/*
-	 * Hyper-V SynIC auto EOI SINT's are
-	 * not compatible with APICV, so request
-	 * to deactivate APICV permanently.
-	 */
-	kvm_request_apicv_update(vcpu->kvm, false, APICV_INHIBIT_REASON_HYPERV);
 	synic->active = true;
 	synic->dont_zero_synic_pages = dont_zero_synic_pages;
 	synic->control = HV_SYNIC_CONTROL_ENABLE;
@@ -2466,6 +2484,8 @@ int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
 				ent->eax |= HV_X64_ENLIGHTENED_VMCS_RECOMMENDED;
 			if (!cpu_smt_possible())
 				ent->eax |= HV_X64_NO_NONARCH_CORESHARING;
+
+			ent->eax |= HV_DEPRECATING_AEOI_RECOMMENDED;
 			/*
 			 * Default number of spinlock retry attempts, matches
 			 * HyperV 2016.
-- 
2.26.3



* [PATCH v4 12/16] KVM: SVM: remove svm_toggle_avic_for_irq_window
From: Maxim Levitsky @ 2021-08-10 20:52 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Maxim Levitsky

Now that kvm_request_apicv_update doesn't need to drop the kvm->srcu lock,
we can call kvm_request_apicv_update directly.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/svm/avic.c | 11 -----------
 arch/x86/kvm/svm/svm.c  |  4 ++--
 arch/x86/kvm/svm/svm.h  |  1 -
 3 files changed, 2 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index d0acbeeab3d6..1def54c26259 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -582,17 +582,6 @@ void avic_post_state_restore(struct kvm_vcpu *vcpu)
 	avic_handle_ldr_update(vcpu);
 }
 
-void svm_toggle_avic_for_irq_window(struct kvm_vcpu *vcpu, bool activate)
-{
-	if (!enable_apicv || !lapic_in_kernel(vcpu))
-		return;
-
-	srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
-	kvm_request_apicv_update(vcpu->kvm, activate,
-				 APICV_INHIBIT_REASON_IRQWIN);
-	vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
-}
-
 void svm_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
 {
 	return;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 6cb7ffbde03b..1da12d700436 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2994,7 +2994,7 @@ static int interrupt_window_interception(struct kvm_vcpu *vcpu)
 	 * In this case AVIC was temporarily disabled for
 	 * requesting the IRQ window and we have to re-enable it.
 	 */
-	svm_toggle_avic_for_irq_window(vcpu, true);
+	kvm_request_apicv_update(vcpu->kvm, true, APICV_INHIBIT_REASON_IRQWIN);
 
 	++vcpu->stat.irq_window_exits;
 	return 1;
@@ -3546,7 +3546,7 @@ static void svm_enable_irq_window(struct kvm_vcpu *vcpu)
 		 * via AVIC. In such case, we need to temporarily disable AVIC,
 		 * and fallback to injecting IRQ via V_IRQ.
 		 */
-		svm_toggle_avic_for_irq_window(vcpu, false);
+		kvm_request_apicv_update(vcpu->kvm, false, APICV_INHIBIT_REASON_IRQWIN);
 		svm_set_vintr(svm);
 	}
 }
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index bd41f2a32838..aae851762b59 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -524,7 +524,6 @@ int avic_ga_log_notifier(u32 ga_tag);
 void avic_vm_destroy(struct kvm *kvm);
 int avic_vm_init(struct kvm *kvm);
 void avic_init_vmcb(struct vcpu_svm *svm);
-void svm_toggle_avic_for_irq_window(struct kvm_vcpu *vcpu, bool activate);
 int avic_incomplete_ipi_interception(struct kvm_vcpu *vcpu);
 int avic_unaccelerated_access_interception(struct kvm_vcpu *vcpu);
 int avic_init_vcpu(struct vcpu_svm *svm);
-- 
2.26.3



* [PATCH v4 13/16] KVM: SVM: avoid refreshing avic if its state didn't change
From: Maxim Levitsky @ 2021-08-10 20:52 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Maxim Levitsky

Since AVIC can be inhibited and uninhibited rapidly, it is possible that
we have nothing to do by the time svm_refresh_apicv_exec_ctrl
is called.

Detect and avoid this, which will be useful when we start calling
avic_vcpu_load/avic_vcpu_put when the AVIC inhibition state changes.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/x86.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b6185921fe44..e2ec45e3dc0a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9239,12 +9239,18 @@ void kvm_make_scan_ioapic_request(struct kvm *kvm)
 
 void kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu)
 {
+	bool activate;
+
 	if (!lapic_in_kernel(vcpu))
 		return;
 
 	mutex_lock(&vcpu->kvm->arch.apicv_update_lock);
 
-	vcpu->arch.apicv_active = kvm_apicv_activated(vcpu->kvm);
+	activate = kvm_apicv_activated(vcpu->kvm);
+	if (vcpu->arch.apicv_active == activate)
+		goto out;
+
+	vcpu->arch.apicv_active = activate;
 	kvm_apic_update_apicv(vcpu);
 	static_call(kvm_x86_refresh_apicv_exec_ctrl)(vcpu);
 
@@ -9257,6 +9263,7 @@ void kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu)
 	if (!vcpu->arch.apicv_active)
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
 
+out:
 	mutex_unlock(&vcpu->kvm->arch.apicv_update_lock);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_update_apicv);
-- 
2.26.3



* [PATCH v4 14/16] KVM: SVM: move check for kvm_vcpu_apicv_active outside of avic_vcpu_{put|load}
From: Maxim Levitsky @ 2021-08-10 20:52 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Maxim Levitsky

No functional change intended.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/avic.c | 10 ++++------
 arch/x86/kvm/svm/svm.c  |  7 +++++--
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index 1def54c26259..e7728b16a46f 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -940,9 +940,6 @@ void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	int h_physical_id = kvm_cpu_get_apicid(cpu);
 	struct vcpu_svm *svm = to_svm(vcpu);
 
-	if (!kvm_vcpu_apicv_active(vcpu))
-		return;
-
 	/*
 	 * Since the host physical APIC id is 8 bits,
 	 * we can support host APIC ID upto 255.
@@ -970,9 +967,6 @@ void avic_vcpu_put(struct kvm_vcpu *vcpu)
 	u64 entry;
 	struct vcpu_svm *svm = to_svm(vcpu);
 
-	if (!kvm_vcpu_apicv_active(vcpu))
-		return;
-
 	entry = READ_ONCE(*(svm->avic_physical_id_cache));
 	if (entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK)
 		avic_update_iommu_vcpu_affinity(vcpu, -1, 0);
@@ -989,6 +983,10 @@ static void avic_set_running(struct kvm_vcpu *vcpu, bool is_run)
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	svm->avic_is_running = is_run;
+
+	if (!kvm_vcpu_apicv_active(vcpu))
+		return;
+
 	if (is_run)
 		avic_vcpu_load(vcpu, vcpu->cpu);
 	else
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 1da12d700436..700bc188a650 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1483,12 +1483,15 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		sd->current_vmcb = svm->vmcb;
 		indirect_branch_prediction_barrier();
 	}
-	avic_vcpu_load(vcpu, cpu);
+	if (kvm_vcpu_apicv_active(vcpu))
+		avic_vcpu_load(vcpu, cpu);
 }
 
 static void svm_vcpu_put(struct kvm_vcpu *vcpu)
 {
-	avic_vcpu_put(vcpu);
+	if (kvm_vcpu_apicv_active(vcpu))
+		avic_vcpu_put(vcpu);
+
 	svm_prepare_host_switch(vcpu);
 
 	++vcpu->stat.host_state_reload;
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v4 15/16] KVM: SVM: call avic_vcpu_load/avic_vcpu_put when enabling/disabling AVIC
  2021-08-10 20:52 [PATCH v4 00/16] My AVIC patch queue Maxim Levitsky
                   ` (13 preceding siblings ...)
  2021-08-10 20:52 ` [PATCH v4 14/16] KVM: SVM: move check for kvm_vcpu_apicv_active outside of avic_vcpu_{put|load} Maxim Levitsky
@ 2021-08-10 20:52 ` Maxim Levitsky
  2021-08-10 20:52 ` [PATCH v4 16/16] KVM: SVM: AVIC: drop unsupported AVIC base relocation code Maxim Levitsky
  2021-08-10 21:21 ` [PATCH v4 00/16] My AVIC patch queue Maxim Levitsky
  16 siblings, 0 replies; 19+ messages in thread
From: Maxim Levitsky @ 2021-08-10 20:52 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Maxim Levitsky

Currently it is possible to have the following scenario:

1. AVIC is disabled by svm_refresh_apicv_exec_ctrl
2. svm_vcpu_blocking calls avic_vcpu_put, which does nothing
3. svm_vcpu_unblocking enables the AVIC (due to KVM_REQ_APICV_UPDATE)
   and then calls avic_vcpu_load
4. a warning is triggered in avic_vcpu_load since
   AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK was never cleared

While it is possible to just remove the warning, it seems more robust
to fully disable/enable AVIC in svm_refresh_apicv_exec_ctrl by calling
avic_vcpu_load/avic_vcpu_put directly.
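
As a rough illustration of how the IS_RUNNING bit leaks (a toy
userspace model of the sequence above, written for this note; the
flag names are my simplification, this is not KVM code):

#include <stdbool.h>
#include <stdio.h>

static bool apicv_active;   /* stands in for kvm_vcpu_apicv_active() */
static bool is_running_bit; /* AVIC physical ID table IS_RUNNING bit */

static void vcpu_put(void)
{
	if (!apicv_active)
		return;          /* inhibited: the bit is never cleared */
	is_running_bit = false;
}

static void vcpu_load(void)
{
	if (!apicv_active)
		return;
	if (is_running_bit)
		printf("WARN: IS_RUNNING was never cleared\n");
	is_running_bit = true;
}

int main(void)
{
	apicv_active = true;
	vcpu_load();             /* bit set while AVIC is active */

	apicv_active = false;    /* 1. AVIC inhibited */
	vcpu_put();              /* 2. does nothing, bit leaks */

	apicv_active = true;     /* 3. AVIC un-inhibited */
	vcpu_load();             /* 4. fires the warning */
	return 0;
}

Calling avic_vcpu_put/avic_vcpu_load at the inhibit/uninhibit point
itself, as the hunk below does, keeps the bit consistent with the
inhibition state, so the blocking path can no longer observe stale state.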

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/avic.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index e7728b16a46f..01c0e83e1b71 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -651,6 +651,11 @@ void svm_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
 	}
 	vmcb_mark_dirty(vmcb, VMCB_AVIC);
 
+	if (activated)
+		avic_vcpu_load(vcpu, vcpu->cpu);
+	else
+		avic_vcpu_put(vcpu);
+
 	svm_set_pi_irte_mode(vcpu, activated);
 }
 
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v4 16/16] KVM: SVM: AVIC: drop unsupported AVIC base relocation code
  2021-08-10 20:52 [PATCH v4 00/16] My AVIC patch queue Maxim Levitsky
                   ` (14 preceding siblings ...)
  2021-08-10 20:52 ` [PATCH v4 15/16] KVM: SVM: call avic_vcpu_load/avic_vcpu_put when enabling/disabling AVIC Maxim Levitsky
@ 2021-08-10 20:52 ` Maxim Levitsky
  2021-08-10 21:21 ` [PATCH v4 00/16] My AVIC patch queue Maxim Levitsky
  16 siblings, 0 replies; 19+ messages in thread
From: Maxim Levitsky @ 2021-08-10 20:52 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Maxim Levitsky

APIC base relocation is not supported anyway and won't work
correctly, so just drop the code that handles it and keep the
AVIC MMIO bar at the default APIC base.
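
For reference, the BAR value that avic_init_vmcb() now programs
unconditionally can be sanity-checked with a tiny standalone snippet
(my illustration; VMCB_AVIC_APIC_BAR_MASK is copied from svm.h and
0xfee00000 is the architectural x86 default APIC base):

#include <stdio.h>
#include <stdint.h>

#define APIC_DEFAULT_PHYS_BASE  0xfee00000ULL      /* architectural default */
#define VMCB_AVIC_APIC_BAR_MASK 0xFFFFFFFFFF000ULL /* from svm.h */

int main(void)
{
	/* same computation as the avic_init_vmcb() hunk below */
	uint64_t bar = APIC_DEFAULT_PHYS_BASE & VMCB_AVIC_APIC_BAR_MASK;

	printf("avic_vapic_bar = %#llx\n", (unsigned long long)bar);
	return 0;
}

This prints 0xfee00000, i.e. the default base survives the mask
unchanged: the mask only strips the low 12 bits and bits above the
supported physical address range.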

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/avic.c | 2 ++
 arch/x86/kvm/svm/svm.c  | 7 -------
 arch/x86/kvm/svm/svm.h  | 6 ------
 3 files changed, 2 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index 01c0e83e1b71..8052d92069e0 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -197,6 +197,8 @@ void avic_init_vmcb(struct vcpu_svm *svm)
 	vmcb->control.avic_logical_id = lpa & AVIC_HPA_MASK;
 	vmcb->control.avic_physical_id = ppa & AVIC_HPA_MASK;
 	vmcb->control.avic_physical_id |= AVIC_MAX_PHYSICAL_ID_COUNT;
+	vmcb->control.avic_vapic_bar = APIC_DEFAULT_PHYS_BASE & VMCB_AVIC_APIC_BAR_MASK;
+
 	if (kvm_apicv_activated(svm->vcpu.kvm))
 		vmcb->control.int_ctl |= AVIC_ENABLE_MASK;
 	else
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 700bc188a650..160b10ca4f62 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1316,9 +1316,6 @@ static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	svm->virt_spec_ctrl = 0;
 
 	init_vmcb(vcpu);
-
-	if (kvm_vcpu_apicv_active(vcpu) && !init_event)
-		avic_update_vapic_bar(svm, APIC_DEFAULT_PHYS_BASE);
 }
 
 void svm_switch_vmcb(struct vcpu_svm *svm, struct kvm_vmcb_info *target_vmcb)
@@ -2969,10 +2966,6 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 		svm->msr_decfg = data;
 		break;
 	}
-	case MSR_IA32_APICBASE:
-		if (kvm_vcpu_apicv_active(vcpu))
-			avic_update_vapic_bar(to_svm(vcpu), data);
-		fallthrough;
 	default:
 		return kvm_set_msr_common(vcpu, msr);
 	}
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index aae851762b59..524d943f3efc 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -503,12 +503,6 @@ extern struct kvm_x86_nested_ops svm_nested_ops;
 
 #define VMCB_AVIC_APIC_BAR_MASK		0xFFFFFFFFFF000ULL
 
-static inline void avic_update_vapic_bar(struct vcpu_svm *svm, u64 data)
-{
-	svm->vmcb->control.avic_vapic_bar = data & VMCB_AVIC_APIC_BAR_MASK;
-	vmcb_mark_dirty(svm->vmcb, VMCB_AVIC);
-}
-
 static inline bool avic_vcpu_is_running(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 00/16] My AVIC patch queue
  2021-08-10 20:52 [PATCH v4 00/16] My AVIC patch queue Maxim Levitsky
                   ` (15 preceding siblings ...)
  2021-08-10 20:52 ` [PATCH v4 16/16] KVM: SVM: AVIC: drop unsupported AVIC base relocation code Maxim Levitsky
@ 2021-08-10 21:21 ` Maxim Levitsky
  2021-08-11  8:06   ` Paolo Bonzini
  16 siblings, 1 reply; 19+ messages in thread
From: Maxim Levitsky @ 2021-08-10 21:21 UTC (permalink / raw)
  To: kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)

On Tue, 2021-08-10 at 23:52 +0300, Maxim Levitsky wrote:
> Hi!
> 
> This is a series of bugfixes to the AVIC dynamic inhibition, which was
> made while trying to fix bugs as much as possible in this area and trying
> to make the AVIC+SYNIC conditional enablement work.
> 
> * Patches 1,3-8 are code from Sean Christopherson which
>   implement an alternative approach of inhibiting AVIC without
>   disabling its memslot.

I meant patches 1,4-8. I forgot about patch 3, which I also added; it
just adds a comment about the parameters of kvm_flush_remote_tlbs_with_address.

Best regards,
	Maxim Levitsky


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 00/16] My AVIC patch queue
  2021-08-10 21:21 ` [PATCH v4 00/16] My AVIC patch queue Maxim Levitsky
@ 2021-08-11  8:06   ` Paolo Bonzini
  0 siblings, 0 replies; 19+ messages in thread
From: Paolo Bonzini @ 2021-08-11  8:06 UTC (permalink / raw)
  To: Maxim Levitsky, kvm
  Cc: Jim Mattson, linux-kernel, Wanpeng Li, Borislav Petkov,
	Joerg Roedel, Suravee Suthikulpanit, H. Peter Anvin,
	Thomas Gleixner, Ingo Molnar, Vitaly Kuznetsov,
	Sean Christopherson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)

On 10/08/21 23:21, Maxim Levitsky wrote:
> On Tue, 2021-08-10 at 23:52 +0300, Maxim Levitsky wrote:
>> Hi!
>>
>> This is a series of bugfixes to the AVIC dynamic inhibition, which was
>> made while trying to fix bugs as much as possible in this area and trying
>> to make the AVIC+SYNIC conditional enablement work.
>>
>> * Patches 1,3-8 are code from Sean Christopherson which
>>    implement an alternative approach of inhibiting AVIC without
>>    disabling its memslot.
> 
> I meant patches 1,4-8. I forgot about patch 3, which I also added; it
> just adds a comment about the parameters of kvm_flush_remote_tlbs_with_address.
> 
> Best regards,
> 	Maxim Levitsky

No problem, b4 diff is my friend. :)  Queued, thanks.

Paolo


^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2021-08-11  8:08 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-08-10 20:52 [PATCH v4 00/16] My AVIC patch queue Maxim Levitsky
2021-08-10 20:52 ` [PATCH v4 01/16] Revert "KVM: x86/mmu: Allow zap gfn range to operate under the mmu read lock" Maxim Levitsky
2021-08-10 20:52 ` [PATCH v4 02/16] KVM: x86/mmu: fix parameters to kvm_flush_remote_tlbs_with_address Maxim Levitsky
2021-08-10 20:52 ` [PATCH v4 03/16] KVM: x86/mmu: add comment explaining arguments to kvm_zap_gfn_range Maxim Levitsky
2021-08-10 20:52 ` [PATCH v4 04/16] KVM: x86/mmu: bump mmu notifier count in kvm_zap_gfn_range Maxim Levitsky
2021-08-10 20:52 ` [PATCH v4 05/16] KVM: x86/mmu: rename try_async_pf to kvm_faultin_pfn Maxim Levitsky
2021-08-10 20:52 ` [PATCH v4 06/16] KVM: x86/mmu: allow kvm_faultin_pfn to return page fault handling code Maxim Levitsky
2021-08-10 20:52 ` [PATCH v4 07/16] KVM: x86/mmu: allow APICv memslot to be enabled but invisible Maxim Levitsky
2021-08-10 20:52 ` [PATCH v4 08/16] KVM: x86: don't disable APICv memslot when inhibited Maxim Levitsky
2021-08-10 20:52 ` [PATCH v4 09/16] KVM: x86: APICv: fix race in kvm_request_apicv_update on SVM Maxim Levitsky
2021-08-10 20:52 ` [PATCH v4 10/16] KVM: SVM: add warning for mismatch between AVIC vcpu state and AVIC inhibition Maxim Levitsky
2021-08-10 20:52 ` [PATCH v4 11/16] KVM: x86: hyper-v: Deactivate APICv only when AutoEOI feature is in use Maxim Levitsky
2021-08-10 20:52 ` [PATCH v4 12/16] KVM: SVM: remove svm_toggle_avic_for_irq_window Maxim Levitsky
2021-08-10 20:52 ` [PATCH v4 13/16] KVM: SVM: avoid refreshing avic if its state didn't change Maxim Levitsky
2021-08-10 20:52 ` [PATCH v4 14/16] KVM: SVM: move check for kvm_vcpu_apicv_active outside of avic_vcpu_{put|load} Maxim Levitsky
2021-08-10 20:52 ` [PATCH v4 15/16] KVM: SVM: call avic_vcpu_load/avic_vcpu_put when enabling/disabling AVIC Maxim Levitsky
2021-08-10 20:52 ` [PATCH v4 16/16] KVM: SVM: AVIC: drop unsupported AVIC base relocation code Maxim Levitsky
2021-08-10 21:21 ` [PATCH v4 00/16] My AVIC patch queue Maxim Levitsky
2021-08-11  8:06   ` Paolo Bonzini

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).