linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/4] KVM: x86/mmu: Memtype related cleanups
@ 2022-07-15 23:00 Sean Christopherson
  2022-07-15 23:00 ` [PATCH 1/4] KVM: x86: Reject loading KVM if host.PAT[0] != WB Sean Christopherson
                   ` (4 more replies)
  0 siblings, 5 replies; 14+ messages in thread
From: Sean Christopherson @ 2022-07-15 23:00 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel

Minor cleanups for KVM's handling of the memtype that's shoved into SPTEs.

Patch 1 enforces that entry '0' of the host's IA32_PAT is configured for WB
memtype.  KVM subtly relies on this behavior (silently shoves '0' into the
SPTE PAT field).  Check this at KVM load time so that if that doesn't hold
true, KVM will refuse to load instead of running the guest with weird and
potentially dangerous memtypes.

Patch 2 is a pure code cleanup (ordered after patch 1 in case someone wants
to backport the PAT check).

Patch 3 adds a mask to track whether or not KVM may use a non-zero memtype
value in SPTEs.  Essentially, it's an "is EPT enabled" flag without being an
explicit "is EPT enabled" flag.  This avoids some minor work when not using
EPT, e.g. technically KVM could drop the RET0 implementation that's used for
SVM's get_mt_mask(), but IMO that's an unnecessary risk.

Patch 4 modifies the TDP page fault path to restrict the mapping level
based on guest MTRRs if and only if KVM might actually consume them.  The
guest MTRRs are purely software constructs (not directly consumed by
hardware), and KVM only honors them when EPT is enabled (host MTRRs are
overridden by EPT) and the guest has non-coherent DMA.  I doubt this will
move the needle on whether or not KVM can create huge pages, but it does
save having to do MTRR lookups on every page fault for guests without
a non-coherent DMA device attached.

Sean Christopherson (4):
  KVM: x86: Reject loading KVM if host.PAT[0] != WB
  KVM: x86: Drop unnecessary goto+label in kvm_arch_init()
  KVM: x86/mmu: Add shadow mask for effective host MTRR memtype
  KVM: x86/mmu: Restrict mapping level based on guest MTRR iff they're
    used

 arch/x86/kvm/mmu/mmu.c  | 26 +++++++++++++++++++-------
 arch/x86/kvm/mmu/spte.c | 21 ++++++++++++++++++---
 arch/x86/kvm/mmu/spte.h |  1 +
 arch/x86/kvm/x86.c      | 33 ++++++++++++++++++++-------------
 4 files changed, 58 insertions(+), 23 deletions(-)


base-commit: 8031d87aa9953ddeb047a5356ebd0b240c30f233
-- 
2.37.0.170.g444d1eabd0-goog


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 1/4] KVM: x86: Reject loading KVM if host.PAT[0] != WB
  2022-07-15 23:00 [PATCH 0/4] KVM: x86/mmu: Memtype related cleanups Sean Christopherson
@ 2022-07-15 23:00 ` Sean Christopherson
  2022-07-15 23:06   ` Jim Mattson
  2022-07-15 23:00 ` [PATCH 2/4] KVM: x86: Drop unnecessary goto+label in kvm_arch_init() Sean Christopherson
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Sean Christopherson @ 2022-07-15 23:00 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel

Reject KVM if entry '0' in the host's IA32_PAT MSR is not programmed to
writeback (WB) memtype.  KVM subtly relies on IA32_PAT entry '0' to be
programmed to WB by leaving the PAT bits in shadow paging and NPT SPTEs
as '0'.  If something other than WB is in PAT[0], at _best_ guests will
suffer very poor performance, and at worst KVM will crash the system by
breaking cache-coherency expectations (e.g. using WC for guest memory).

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/x86.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f389691d8c04..12199c40f2bc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9141,6 +9141,7 @@ static struct notifier_block pvclock_gtod_notifier = {
 int kvm_arch_init(void *opaque)
 {
 	struct kvm_x86_init_ops *ops = opaque;
+	u64 host_pat;
 	int r;
 
 	if (kvm_x86_ops.hardware_enable) {
@@ -9179,6 +9180,20 @@ int kvm_arch_init(void *opaque)
 		goto out;
 	}
 
+	/*
+	 * KVM assumes that PAT entry '0' encodes WB memtype and simply zeroes
+	 * the PAT bits in SPTEs.  Bail if PAT[0] is programmed to something
+	 * other than WB.  Note, EPT doesn't utilize the PAT, but don't bother
+	 * with an exception.  PAT[0] is set to WB on RESET and also by the
+	 * kernel, i.e. failure indicates a kernel bug or broken firmware.
+	 */
+	if (rdmsrl_safe(MSR_IA32_CR_PAT, &host_pat) ||
+	    (host_pat & GENMASK(2, 0)) != 6) {
+		pr_err("kvm: host PAT[0] is not WB\n");
+		r = -EIO;
+		goto out;
+	}
+
 	r = -ENOMEM;
 
 	x86_emulator_cache = kvm_alloc_emulator_cache();
-- 
2.37.0.170.g444d1eabd0-goog



* [PATCH 2/4] KVM: x86: Drop unnecessary goto+label in kvm_arch_init()
  2022-07-15 23:00 [PATCH 0/4] KVM: x86/mmu: Memtype related cleanups Sean Christopherson
  2022-07-15 23:00 ` [PATCH 1/4] KVM: x86: Reject loading KVM if host.PAT[0] != WB Sean Christopherson
@ 2022-07-15 23:00 ` Sean Christopherson
  2022-07-18 10:03   ` Maxim Levitsky
  2022-07-15 23:00 ` [PATCH 3/4] KVM: x86/mmu: Add shadow mask for effective host MTRR memtype Sean Christopherson
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Sean Christopherson @ 2022-07-15 23:00 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel

Return directly if kvm_arch_init() detects an error before doing any real
work; jumping through a label obfuscates what's happening and carries the
unnecessary risk of leaving 'r' uninitialized.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/x86.c | 24 ++++++++----------------
 1 file changed, 8 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 12199c40f2bc..41aa3137665c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9146,21 +9146,18 @@ int kvm_arch_init(void *opaque)
 
 	if (kvm_x86_ops.hardware_enable) {
 		pr_err("kvm: already loaded vendor module '%s'\n", kvm_x86_ops.name);
-		r = -EEXIST;
-		goto out;
+		return -EEXIST;
 	}
 
 	if (!ops->cpu_has_kvm_support()) {
 		pr_err_ratelimited("kvm: no hardware support for '%s'\n",
 				   ops->runtime_ops->name);
-		r = -EOPNOTSUPP;
-		goto out;
+		return -EOPNOTSUPP;
 	}
 	if (ops->disabled_by_bios()) {
 		pr_err_ratelimited("kvm: support for '%s' disabled by bios\n",
 				   ops->runtime_ops->name);
-		r = -EOPNOTSUPP;
-		goto out;
+		return -EOPNOTSUPP;
 	}
 
 	/*
@@ -9170,14 +9167,12 @@ int kvm_arch_init(void *opaque)
 	 */
 	if (!boot_cpu_has(X86_FEATURE_FPU) || !boot_cpu_has(X86_FEATURE_FXSR)) {
 		printk(KERN_ERR "kvm: inadequate fpu\n");
-		r = -EOPNOTSUPP;
-		goto out;
+		return -EOPNOTSUPP;
 	}
 
 	if (IS_ENABLED(CONFIG_PREEMPT_RT) && !boot_cpu_has(X86_FEATURE_CONSTANT_TSC)) {
 		pr_err("RT requires X86_FEATURE_CONSTANT_TSC\n");
-		r = -EOPNOTSUPP;
-		goto out;
+		return -EOPNOTSUPP;
 	}
 
 	/*
@@ -9190,21 +9185,19 @@ int kvm_arch_init(void *opaque)
 	if (rdmsrl_safe(MSR_IA32_CR_PAT, &host_pat) ||
 	    (host_pat & GENMASK(2, 0)) != 6) {
 		pr_err("kvm: host PAT[0] is not WB\n");
-		r = -EIO;
-		goto out;
+		return -EIO;
 	}
 
-	r = -ENOMEM;
-
 	x86_emulator_cache = kvm_alloc_emulator_cache();
 	if (!x86_emulator_cache) {
 		pr_err("kvm: failed to allocate cache for x86 emulator\n");
-		goto out;
+		return -ENOMEM;
 	}
 
 	user_return_msrs = alloc_percpu(struct kvm_user_return_msrs);
 	if (!user_return_msrs) {
 		printk(KERN_ERR "kvm: failed to allocate percpu kvm_user_return_msrs\n");
+		r = -ENOMEM;
 		goto out_free_x86_emulator_cache;
 	}
 	kvm_nr_uret_msrs = 0;
@@ -9235,7 +9228,6 @@ int kvm_arch_init(void *opaque)
 	free_percpu(user_return_msrs);
 out_free_x86_emulator_cache:
 	kmem_cache_destroy(x86_emulator_cache);
-out:
 	return r;
 }
 
-- 
2.37.0.170.g444d1eabd0-goog



* [PATCH 3/4] KVM: x86/mmu: Add shadow mask for effective host MTRR memtype
  2022-07-15 23:00 [PATCH 0/4] KVM: x86/mmu: Memtype related cleanups Sean Christopherson
  2022-07-15 23:00 ` [PATCH 1/4] KVM: x86: Reject loading KVM if host.PAT[0] != WB Sean Christopherson
  2022-07-15 23:00 ` [PATCH 2/4] KVM: x86: Drop unnecessary goto+label in kvm_arch_init() Sean Christopherson
@ 2022-07-15 23:00 ` Sean Christopherson
  2022-07-18 12:08   ` Maxim Levitsky
  2022-07-15 23:00 ` [PATCH 4/4] KVM: x86/mmu: Restrict mapping level based on guest MTRR iff they're used Sean Christopherson
  2022-07-19 17:59 ` [PATCH 0/4] KVM: x86/mmu: Memtype related cleanups Paolo Bonzini
  4 siblings, 1 reply; 14+ messages in thread
From: Sean Christopherson @ 2022-07-15 23:00 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel

Add shadow_memtype_mask to capture that EPT needs a non-zero memtype mask
instead of relying on TDP being enabled, as NPT doesn't need a non-zero
mask.  This is a glorified nop as kvm_x86_ops.get_mt_mask() returns zero
for NPT anyway.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/spte.c | 21 ++++++++++++++++++---
 arch/x86/kvm/mmu/spte.h |  1 +
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index fb1f17504138..7314d27d57a4 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -33,6 +33,7 @@ u64 __read_mostly shadow_mmio_value;
 u64 __read_mostly shadow_mmio_mask;
 u64 __read_mostly shadow_mmio_access_mask;
 u64 __read_mostly shadow_present_mask;
+u64 __read_mostly shadow_memtype_mask;
 u64 __read_mostly shadow_me_value;
 u64 __read_mostly shadow_me_mask;
 u64 __read_mostly shadow_acc_track_mask;
@@ -161,10 +162,10 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 
 	if (level > PG_LEVEL_4K)
 		spte |= PT_PAGE_SIZE_MASK;
-	if (tdp_enabled)
+
+	if (shadow_memtype_mask)
 		spte |= static_call(kvm_x86_get_mt_mask)(vcpu, gfn,
-			kvm_is_mmio_pfn(pfn));
-
+							 kvm_is_mmio_pfn(pfn));
 	if (host_writable)
 		spte |= shadow_host_writable_mask;
 	else
@@ -391,6 +392,13 @@ void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only)
 	shadow_nx_mask		= 0ull;
 	shadow_x_mask		= VMX_EPT_EXECUTABLE_MASK;
 	shadow_present_mask	= has_exec_only ? 0ull : VMX_EPT_READABLE_MASK;
+	/*
+	 * EPT overrides the host MTRRs, and so KVM must program the desired
+	 * memtype directly into the SPTEs.  Note, this mask is just the mask
+	 * of all bits that factor into the memtype, the actual memtype must be
+	 * dynamically calculated, e.g. to ensure host MMIO is mapped UC.
+	 */
+	shadow_memtype_mask	= VMX_EPT_MT_MASK | VMX_EPT_IPAT_BIT;
 	shadow_acc_track_mask	= VMX_EPT_RWX_MASK;
 	shadow_host_writable_mask = EPT_SPTE_HOST_WRITABLE;
 	shadow_mmu_writable_mask  = EPT_SPTE_MMU_WRITABLE;
@@ -441,6 +449,13 @@ void kvm_mmu_reset_all_pte_masks(void)
 	shadow_nx_mask		= PT64_NX_MASK;
 	shadow_x_mask		= 0;
 	shadow_present_mask	= PT_PRESENT_MASK;
+
+	/*
+	 * For shadow paging and NPT, KVM uses PAT entry '0' to encode WB
+	 * memtype in the SPTEs, i.e. relies on host MTRRs to provide the
+	 * correct memtype (WB is the "weakest" memtype).
+	 */
+	shadow_memtype_mask	= 0;
 	shadow_acc_track_mask	= 0;
 	shadow_me_mask		= 0;
 	shadow_me_value		= 0;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index ba3dccb202bc..cabe3fbb4f39 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -147,6 +147,7 @@ extern u64 __read_mostly shadow_mmio_value;
 extern u64 __read_mostly shadow_mmio_mask;
 extern u64 __read_mostly shadow_mmio_access_mask;
 extern u64 __read_mostly shadow_present_mask;
+extern u64 __read_mostly shadow_memtype_mask;
 extern u64 __read_mostly shadow_me_value;
 extern u64 __read_mostly shadow_me_mask;
 
-- 
2.37.0.170.g444d1eabd0-goog



* [PATCH 4/4] KVM: x86/mmu: Restrict mapping level based on guest MTRR iff they're used
  2022-07-15 23:00 [PATCH 0/4] KVM: x86/mmu: Memtype related cleanups Sean Christopherson
                   ` (2 preceding siblings ...)
  2022-07-15 23:00 ` [PATCH 3/4] KVM: x86/mmu: Add shadow mask for effective host MTRR memtype Sean Christopherson
@ 2022-07-15 23:00 ` Sean Christopherson
  2022-07-18 12:08   ` Maxim Levitsky
  2022-07-19 17:59 ` [PATCH 0/4] KVM: x86/mmu: Memtype related cleanups Paolo Bonzini
  4 siblings, 1 reply; 14+ messages in thread
From: Sean Christopherson @ 2022-07-15 23:00 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel

Restrict the mapping level for SPTEs based on the guest MTRRs if and only
if KVM may actually use the guest MTRRs to compute the "real" memtype.
For all forms of paging, guest MTRRs are purely virtual in the sense that
they are completely ignored by hardware, i.e. they affect the memtype
only if software manually consumes them.  The only scenario where KVM
consumes the guest MTRRs is when shadow_memtype_mask is non-zero and the
guest has non-coherent DMA, in all other cases KVM simply leaves the PAT
field in SPTEs as '0' to encode WB memtype.

Note, KVM may still ultimately ignore guest MTRRs, e.g. if the backing
pfn is host MMIO, but false positives are ok as they only cause a slight
performance blip (unless the guest is doing weird things with its MTRRs,
which is extremely unlikely).

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 52664c3caaab..82f38af06f5c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4295,14 +4295,26 @@ EXPORT_SYMBOL_GPL(kvm_handle_page_fault);
 
 int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
-	while (fault->max_level > PG_LEVEL_4K) {
-		int page_num = KVM_PAGES_PER_HPAGE(fault->max_level);
-		gfn_t base = (fault->addr >> PAGE_SHIFT) & ~(page_num - 1);
+	/*
+	 * If the guest's MTRRs may be used to compute the "real" memtype,
+	 * restrict the mapping level to ensure KVM uses a consistent memtype
+	 * across the entire mapping.  If the host MTRRs are ignored by TDP
+	 * (shadow_memtype_mask is non-zero), and the VM has non-coherent DMA
+	 * (DMA doesn't snoop CPU caches), KVM's ABI is to honor the memtype
+	 * from the guest's MTRRs so that guest accesses to memory that is
+	 * DMA'd aren't cached against the guest's wishes.
+	 *
+	 * Note, KVM may still ultimately ignore guest MTRRs for certain PFNs,
+	 * e.g. KVM will force UC memtype for host MMIO.
+	 */
+	if (shadow_memtype_mask && kvm_arch_has_noncoherent_dma(vcpu->kvm)) {
+		for ( ; fault->max_level > PG_LEVEL_4K; --fault->max_level) {
+			int page_num = KVM_PAGES_PER_HPAGE(fault->max_level);
+			gfn_t base = (fault->addr >> PAGE_SHIFT) & ~(page_num - 1);
 
-		if (kvm_mtrr_check_gfn_range_consistency(vcpu, base, page_num))
-			break;
-
-		--fault->max_level;
+			if (kvm_mtrr_check_gfn_range_consistency(vcpu, base, page_num))
+				break;
+		}
 	}
 
 	return direct_page_fault(vcpu, fault);
-- 
2.37.0.170.g444d1eabd0-goog



* Re: [PATCH 1/4] KVM: x86: Reject loading KVM if host.PAT[0] != WB
  2022-07-15 23:00 ` [PATCH 1/4] KVM: x86: Reject loading KVM if host.PAT[0] != WB Sean Christopherson
@ 2022-07-15 23:06   ` Jim Mattson
  2022-07-15 23:18     ` Sean Christopherson
  0 siblings, 1 reply; 14+ messages in thread
From: Jim Mattson @ 2022-07-15 23:06 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel

On Fri, Jul 15, 2022 at 4:02 PM Sean Christopherson <seanjc@google.com> wrote:
>
> Reject KVM if entry '0' in the host's IA32_PAT MSR is not programmed to
> writeback (WB) memtype.  KVM subtly relies on IA32_PAT entry '0' to be
> programmed to WB by leaving the PAT bits in shadow paging and NPT SPTEs
> as '0'.  If something other than WB is in PAT[0], at _best_ guests will
> suffer very poor performance, and at worst KVM will crash the system by
> breaking cache-coherency expectations (e.g. using WC for guest memory).
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
What if someone changes the host's PAT to violate this rule *after*
kvm is loaded?


* Re: [PATCH 1/4] KVM: x86: Reject loading KVM if host.PAT[0] != WB
  2022-07-15 23:06   ` Jim Mattson
@ 2022-07-15 23:18     ` Sean Christopherson
  2022-07-18  9:42       ` Maxim Levitsky
  0 siblings, 1 reply; 14+ messages in thread
From: Sean Christopherson @ 2022-07-15 23:18 UTC (permalink / raw)
  To: Jim Mattson; +Cc: Paolo Bonzini, kvm, linux-kernel

On Fri, Jul 15, 2022, Jim Mattson wrote:
> On Fri, Jul 15, 2022 at 4:02 PM Sean Christopherson <seanjc@google.com> wrote:
> >
> > Reject KVM if entry '0' in the host's IA32_PAT MSR is not programmed to
> > writeback (WB) memtype.  KVM subtly relies on IA32_PAT entry '0' to be
> > programmed to WB by leaving the PAT bits in shadow paging and NPT SPTEs
> > as '0'.  If something other than WB is in PAT[0], at _best_ guests will
> > suffer very poor performance, and at worst KVM will crash the system by
> > breaking cache-coherency expectations (e.g. using WC for guest memory).
> >
> > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > ---
> What if someone changes the host's PAT to violate this rule *after*
> kvm is loaded?

Then KVM (and probably many other things in the kernel) is hosed.  The same argument
(that KVM isn't paranoid enough) can likely be made for a number of MSRs and critical
registers.


* Re: [PATCH 1/4] KVM: x86: Reject loading KVM if host.PAT[0] != WB
  2022-07-15 23:18     ` Sean Christopherson
@ 2022-07-18  9:42       ` Maxim Levitsky
  0 siblings, 0 replies; 14+ messages in thread
From: Maxim Levitsky @ 2022-07-18  9:42 UTC (permalink / raw)
  To: Sean Christopherson, Jim Mattson; +Cc: Paolo Bonzini, kvm, linux-kernel

On Fri, 2022-07-15 at 23:18 +0000, Sean Christopherson wrote:
> On Fri, Jul 15, 2022, Jim Mattson wrote:
> > On Fri, Jul 15, 2022 at 4:02 PM Sean Christopherson <seanjc@google.com> wrote:
> > > 
> > > Reject KVM if entry '0' in the host's IA32_PAT MSR is not programmed to
> > > writeback (WB) memtype.  KVM subtly relies on IA32_PAT entry '0' to be
> > > programmed to WB by leaving the PAT bits in shadow paging and NPT SPTEs
> > > as '0'.  If something other than WB is in PAT[0], at _best_ guests will
> > > suffer very poor performance, and at worst KVM will crash the system by
> > > breaking cache-coherency expectations (e.g. using WC for guest memory).
> > > 
> > > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > > ---
> > What if someone changes the host's PAT to violate this rule *after*
> > kvm is loaded?
> 
> Then KVM (and probably many other things in the kernel) is hosed.  The same argument
> (that KVM isn't paranoid enough) can likely be made for a number of MSRs and critical
> registers.
> 

I was thinking about the same thing and I also 100% agree with the above.

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>

Best regards,
	Maxim Levitsky



* Re: [PATCH 2/4] KVM: x86: Drop unnecessary goto+label in kvm_arch_init()
  2022-07-15 23:00 ` [PATCH 2/4] KVM: x86: Drop unnecessary goto+label in kvm_arch_init() Sean Christopherson
@ 2022-07-18 10:03   ` Maxim Levitsky
  2022-07-18 15:10     ` Sean Christopherson
  0 siblings, 1 reply; 14+ messages in thread
From: Maxim Levitsky @ 2022-07-18 10:03 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel

On Fri, 2022-07-15 at 23:00 +0000, Sean Christopherson wrote:
> Return directly if kvm_arch_init() detects an error before doing any real
> work; jumping through a label obfuscates what's happening and carries the
> unnecessary risk of leaving 'r' uninitialized.
> 
> No functional change intended.
> 
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/x86.c | 24 ++++++++----------------
>  1 file changed, 8 insertions(+), 16 deletions(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 12199c40f2bc..41aa3137665c 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9146,21 +9146,18 @@ int kvm_arch_init(void *opaque)
>  
>         if (kvm_x86_ops.hardware_enable) {
>                 pr_err("kvm: already loaded vendor module '%s'\n", kvm_x86_ops.name);
> -               r = -EEXIST;
> -               goto out;
> +               return -EEXIST;
>         }
>  
>         if (!ops->cpu_has_kvm_support()) {
>                 pr_err_ratelimited("kvm: no hardware support for '%s'\n",
>                                    ops->runtime_ops->name);
> -               r = -EOPNOTSUPP;
> -               goto out;
> +               return -EOPNOTSUPP;
>         }
>         if (ops->disabled_by_bios()) {
>                 pr_err_ratelimited("kvm: support for '%s' disabled by bios\n",
>                                    ops->runtime_ops->name);
> -               r = -EOPNOTSUPP;
> -               goto out;
> +               return -EOPNOTSUPP;
>         }
>  
>         /*
> @@ -9170,14 +9167,12 @@ int kvm_arch_init(void *opaque)
>          */
>         if (!boot_cpu_has(X86_FEATURE_FPU) || !boot_cpu_has(X86_FEATURE_FXSR)) {
>                 printk(KERN_ERR "kvm: inadequate fpu\n");
> -               r = -EOPNOTSUPP;
> -               goto out;
> +               return -EOPNOTSUPP;
>         }
>  
>         if (IS_ENABLED(CONFIG_PREEMPT_RT) && !boot_cpu_has(X86_FEATURE_CONSTANT_TSC)) {
>                 pr_err("RT requires X86_FEATURE_CONSTANT_TSC\n");
> -               r = -EOPNOTSUPP;
> -               goto out;
> +               return -EOPNOTSUPP;
>         }
>  
>         /*
> @@ -9190,21 +9185,19 @@ int kvm_arch_init(void *opaque)
>         if (rdmsrl_safe(MSR_IA32_CR_PAT, &host_pat) ||
>             (host_pat & GENMASK(2, 0)) != 6) {
>                 pr_err("kvm: host PAT[0] is not WB\n");
> -               r = -EIO;
> -               goto out;
> +               return -EIO;
>         }
>  
> -       r = -ENOMEM;
> -
>         x86_emulator_cache = kvm_alloc_emulator_cache();
>         if (!x86_emulator_cache) {
>                 pr_err("kvm: failed to allocate cache for x86 emulator\n");
> -               goto out;
> +               return -ENOMEM;
>         }
>  
>         user_return_msrs = alloc_percpu(struct kvm_user_return_msrs);
>         if (!user_return_msrs) {
>                 printk(KERN_ERR "kvm: failed to allocate percpu kvm_user_return_msrs\n");
> +               r = -ENOMEM;
>                 goto out_free_x86_emulator_cache;
>         }
>         kvm_nr_uret_msrs = 0;
> @@ -9235,7 +9228,6 @@ int kvm_arch_init(void *opaque)
>         free_percpu(user_return_msrs);
>  out_free_x86_emulator_cache:
>         kmem_cache_destroy(x86_emulator_cache);
> -out:
>         return r;
>  }
>  


I honestly don't see much value in this change, but I don't mind it either.

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>

Best regards,
	Maxim Levitsky




* Re: [PATCH 4/4] KVM: x86/mmu: Restrict mapping level based on guest MTRR iff they're used
  2022-07-15 23:00 ` [PATCH 4/4] KVM: x86/mmu: Restrict mapping level based on guest MTRR iff they're used Sean Christopherson
@ 2022-07-18 12:08   ` Maxim Levitsky
  0 siblings, 0 replies; 14+ messages in thread
From: Maxim Levitsky @ 2022-07-18 12:08 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel

On Fri, 2022-07-15 at 23:00 +0000, Sean Christopherson wrote:
> Restrict the mapping level for SPTEs based on the guest MTRRs if and only
> if KVM may actually use the guest MTRRs to compute the "real" memtype.
> For all forms of paging, guest MTRRs are purely virtual in the sense that
> they are completely ignored by hardware, i.e. they affect the memtype
> only if software manually consumes them.  The only scenario where KVM
> consumes the guest MTRRs is when shadow_memtype_mask is non-zero and the
> guest has non-coherent DMA, in all other cases KVM simply leaves the PAT
> field in SPTEs as '0' to encode WB memtype.
> 
> Note, KVM may still ultimately ignore guest MTRRs, e.g. if the backing
> pfn is host MMIO, but false positives are ok as they only cause a slight
> performance blip (unless the guest is doing weird things with its MTRRs,
> which is extremely unlikely).
> 
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 26 +++++++++++++++++++-------
>  1 file changed, 19 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 52664c3caaab..82f38af06f5c 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -4295,14 +4295,26 @@ EXPORT_SYMBOL_GPL(kvm_handle_page_fault);
>  
>  int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  {
> -       while (fault->max_level > PG_LEVEL_4K) {
> -               int page_num = KVM_PAGES_PER_HPAGE(fault->max_level);
> -               gfn_t base = (fault->addr >> PAGE_SHIFT) & ~(page_num - 1);
> +       /*
> +        * If the guest's MTRRs may be used to compute the "real" memtype,
> +        * restrict the mapping level to ensure KVM uses a consistent memtype
> +        * across the entire mapping.  If the host MTRRs are ignored by TDP
> +        * (shadow_memtype_mask is non-zero), and the VM has non-coherent DMA
> +        * (DMA doesn't snoop CPU caches), KVM's ABI is to honor the memtype
> +        * from the guest's MTRRs so that guest accesses to memory that is
> +        * DMA'd aren't cached against the guest's wishes.
> +        *
> +        * Note, KVM may still ultimately ignore guest MTRRs for certain PFNs,
> +        * e.g. KVM will force UC memtype for host MMIO.
> +        */
> +       if (shadow_memtype_mask && kvm_arch_has_noncoherent_dma(vcpu->kvm)) {
> +               for ( ; fault->max_level > PG_LEVEL_4K; --fault->max_level) {
> +                       int page_num = KVM_PAGES_PER_HPAGE(fault->max_level);
> +                       gfn_t base = (fault->addr >> PAGE_SHIFT) & ~(page_num - 1);
>  
> -               if (kvm_mtrr_check_gfn_range_consistency(vcpu, base, page_num))
> -                       break;
> -
> -               --fault->max_level;
> +                       if (kvm_mtrr_check_gfn_range_consistency(vcpu, base, page_num))
> +                               break;
> +               }
>         }
>  
>         return direct_page_fault(vcpu, fault);


Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>

Best regards,
	Maxim Levitsky



* Re: [PATCH 3/4] KVM: x86/mmu: Add shadow mask for effective host MTRR memtype
  2022-07-15 23:00 ` [PATCH 3/4] KVM: x86/mmu: Add shadow mask for effective host MTRR memtype Sean Christopherson
@ 2022-07-18 12:08   ` Maxim Levitsky
  2022-07-18 16:07     ` Sean Christopherson
  0 siblings, 1 reply; 14+ messages in thread
From: Maxim Levitsky @ 2022-07-18 12:08 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel

On Fri, 2022-07-15 at 23:00 +0000, Sean Christopherson wrote:
> Add shadow_memtype_mask to capture that EPT needs a non-zero memtype mask
> instead of relying on TDP being enabled, as NPT doesn't need a non-zero
> mask.  This is a glorified nop as kvm_x86_ops.get_mt_mask() returns zero
> for NPT anyway.
> 
> No functional change intended.
> 
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/mmu/spte.c | 21 ++++++++++++++++++---
>  arch/x86/kvm/mmu/spte.h |  1 +
>  2 files changed, 19 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index fb1f17504138..7314d27d57a4 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -33,6 +33,7 @@ u64 __read_mostly shadow_mmio_value;
>  u64 __read_mostly shadow_mmio_mask;
>  u64 __read_mostly shadow_mmio_access_mask;
>  u64 __read_mostly shadow_present_mask;
> +u64 __read_mostly shadow_memtype_mask;
>  u64 __read_mostly shadow_me_value;
>  u64 __read_mostly shadow_me_mask;
>  u64 __read_mostly shadow_acc_track_mask;
> @@ -161,10 +162,10 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
>  
>         if (level > PG_LEVEL_4K)
>                 spte |= PT_PAGE_SIZE_MASK;
> -       if (tdp_enabled)
> +
> +       if (shadow_memtype_mask)
>                 spte |= static_call(kvm_x86_get_mt_mask)(vcpu, gfn,
> -                       kvm_is_mmio_pfn(pfn));
> -
> +                                                        kvm_is_mmio_pfn(pfn));
>         if (host_writable)
>                 spte |= shadow_host_writable_mask;
>         else
> @@ -391,6 +392,13 @@ void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only)
>         shadow_nx_mask          = 0ull;
>         shadow_x_mask           = VMX_EPT_EXECUTABLE_MASK;
>         shadow_present_mask     = has_exec_only ? 0ull : VMX_EPT_READABLE_MASK;
> +       /*
> +        * EPT overrides the host MTRRs, and so KVM must program the desired
> +        * memtype directly into the SPTEs.  Note, this mask is just the mask
> +        * of all bits that factor into the memtype, the actual memtype must be
> +        * dynamically calculated, e.g. to ensure host MMIO is mapped UC.
> +        */
> +       shadow_memtype_mask     = VMX_EPT_MT_MASK | VMX_EPT_IPAT_BIT;
>         shadow_acc_track_mask   = VMX_EPT_RWX_MASK;
>         shadow_host_writable_mask = EPT_SPTE_HOST_WRITABLE;
>         shadow_mmu_writable_mask  = EPT_SPTE_MMU_WRITABLE;
> @@ -441,6 +449,13 @@ void kvm_mmu_reset_all_pte_masks(void)
>         shadow_nx_mask          = PT64_NX_MASK;
>         shadow_x_mask           = 0;
>         shadow_present_mask     = PT_PRESENT_MASK;
> +
> +       /*
> +        * For shadow paging and NPT, KVM uses PAT entry '0' to encode WB
> +        * memtype in the SPTEs, i.e. relies on host MTRRs to provide the
> +        * correct memtype (WB is the "weakest" memtype).
> +        */
> +       shadow_memtype_mask     = 0;
>         shadow_acc_track_mask   = 0;
>         shadow_me_mask          = 0;
>         shadow_me_value         = 0;
> diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
> index ba3dccb202bc..cabe3fbb4f39 100644
> --- a/arch/x86/kvm/mmu/spte.h
> +++ b/arch/x86/kvm/mmu/spte.h
> @@ -147,6 +147,7 @@ extern u64 __read_mostly shadow_mmio_value;
>  extern u64 __read_mostly shadow_mmio_mask;
>  extern u64 __read_mostly shadow_mmio_access_mask;
>  extern u64 __read_mostly shadow_present_mask;
> +extern u64 __read_mostly shadow_memtype_mask;
>  extern u64 __read_mostly shadow_me_value;
>  extern u64 __read_mostly shadow_me_mask;
>  


So if I understand correctly:


VMX:

- host MTRRs are ignored.

- all *host* MMIO ranges (can only be VFIO's PCI BARs) are mapped UC in EPT,
 but the guest can override this with its PAT to WC


- all regular memory is mapped WB + guest PAT ignored unless there is noncoherent dma,
 (an older Intel IOMMU? I think current Intel IOMMUs are coherent?)


- In case of noncoherent dma guest MTRRs and PAT are respected.



SVM:

- host MTRRs are respected, and can enforce UC on *host* mmio areas.


- WB is always used in NPT, *always*, however NPT doesn't have the 'IPAT'
 bit, so the guest is free to override it for its MMIO areas to any memory type it wishes,
 using its own PAT, and we do allow the guest to change IA32_PAT to any value it wishes to.

 (e.g VFIO's PCI bars, memory which a VFIO devices needs to access, etc)

 (This reminds me that PAT is somewhat broken in regard to nesting; we ignore L2's PAT)


With all this said, it makes sense.


Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>

Best regards,
	Maxim Levitsky


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 2/4] KVM: x86: Drop unnecessary goto+label in kvm_arch_init()
  2022-07-18 10:03   ` Maxim Levitsky
@ 2022-07-18 15:10     ` Sean Christopherson
  0 siblings, 0 replies; 14+ messages in thread
From: Sean Christopherson @ 2022-07-18 15:10 UTC (permalink / raw)
  To: Maxim Levitsky; +Cc: Paolo Bonzini, kvm, linux-kernel

On Mon, Jul 18, 2022, Maxim Levitsky wrote:
> I honestly don't see much value in this change, but I don't mind it either.

Yeah, this particular instance isn't a significant improvement, but I really dislike
the pattern (if the target is a raw return) and want to discourage its use in KVM.

For longer functions, having to scroll down to see that the target is nothing more
than a raw return is quite annoying.  And for more complex usage, the pattern sometimes
leads to setting the return value well ahead of the "goto", which combined with the
scrolling is very unfriendly to readers.

E.g. prior to commit 71a4c30bf0d3 ("KVM: Refactor error handling for setting memory
region"), the memslot code input validation was written as follows.  The "r = 0" in the
"Nothing to change" path was especially gross.

        r = -EINVAL;
        as_id = mem->slot >> 16;
        id = (u16)mem->slot;

        /* General sanity checks */
        if (mem->memory_size & (PAGE_SIZE - 1))
                goto out;
        if (mem->guest_phys_addr & (PAGE_SIZE - 1))
                goto out;
        /* We can read the guest memory with __xxx_user() later on. */
        if ((id < KVM_USER_MEM_SLOTS) &&
            ((mem->userspace_addr & (PAGE_SIZE - 1)) ||
             !access_ok((void __user *)(unsigned long)mem->userspace_addr,
                        mem->memory_size)))
                goto out;
        if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM)
                goto out;
        if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
                goto out;

        slot = id_to_memslot(__kvm_memslots(kvm, as_id), id);
        base_gfn = mem->guest_phys_addr >> PAGE_SHIFT;
        npages = mem->memory_size >> PAGE_SHIFT;

        if (npages > KVM_MEM_MAX_NR_PAGES)
                goto out;

        new = old = *slot;

        new.id = id;
        new.base_gfn = base_gfn;
        new.npages = npages;
        new.flags = mem->flags;
        new.userspace_addr = mem->userspace_addr;

        if (npages) {
                if (!old.npages)
                        change = KVM_MR_CREATE;
                else { /* Modify an existing slot. */
                        if ((new.userspace_addr != old.userspace_addr) ||
                            (npages != old.npages) ||
                            ((new.flags ^ old.flags) & KVM_MEM_READONLY))
                                goto out;

                        if (base_gfn != old.base_gfn)
                                change = KVM_MR_MOVE;
                        else if (new.flags != old.flags)
                                change = KVM_MR_FLAGS_ONLY;
                        else { /* Nothing to change. */
                                r = 0;
                                goto out;
                        }
                }
        } else {
                if (!old.npages)
                        goto out;

                change = KVM_MR_DELETE;
                new.base_gfn = 0;
                new.flags = 0;
        }



* Re: [PATCH 3/4] KVM: x86/mmu: Add shadow mask for effective host MTRR memtype
  2022-07-18 12:08   ` Maxim Levitsky
@ 2022-07-18 16:07     ` Sean Christopherson
  0 siblings, 0 replies; 14+ messages in thread
From: Sean Christopherson @ 2022-07-18 16:07 UTC (permalink / raw)
  To: Maxim Levitsky; +Cc: Paolo Bonzini, kvm, linux-kernel

On Mon, Jul 18, 2022, Maxim Levitsky wrote:
> On Fri, 2022-07-15 at 23:00 +0000, Sean Christopherson wrote:
> > diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
> > index ba3dccb202bc..cabe3fbb4f39 100644
> > --- a/arch/x86/kvm/mmu/spte.h
> > +++ b/arch/x86/kvm/mmu/spte.h
> > @@ -147,6 +147,7 @@ extern u64 __read_mostly shadow_mmio_value;
> >  extern u64 __read_mostly shadow_mmio_mask;
> >  extern u64 __read_mostly shadow_mmio_access_mask;
> >  extern u64 __read_mostly shadow_present_mask;
> > +extern u64 __read_mostly shadow_memtype_mask;
> >  extern u64 __read_mostly shadow_me_value;
> >  extern u64 __read_mostly shadow_me_mask;
> >  
> 
> 
> So if I understand correctly:
> 
> 
> VMX:
> 
> - host MTRRs are ignored.
> 
> - all *host* MMIO ranges (which can only be VFIO's PCI BARs) are mapped UC in EPT,
>  but the guest can override this to WC with its PAT.
> 
> 
> - all regular memory is mapped WB and the guest PAT is ignored, unless there is non-coherent DMA
>  (an older Intel IOMMU? I think current Intel IOMMUs are coherent?)

Effectively, yes.

My understanding is that on x86, everything is cache-coherent by default, but devices
can set a no-snoop flag, which breaks cache coherency.  But then the IOMMU, except for
old Intel IOMMUs, can block such packets, and VFIO forces the block setting in the IOMMU
when it's supported by hardware.

Note, at first glance, commit e8ae0e140c05 ("vfio: Require that devices support DMA
cache coherence") makes it seem like exposing non-coherent DMA to KVM is impossible,
but IIUC that's just enforcing that the _default_ device behavior provides coherency.
I.e. VFIO will still allow an old Intel IOMMU plus a device that sets no-snoop.


* Re: [PATCH 0/4] KVM: x86/mmu: Memtype related cleanups
  2022-07-15 23:00 [PATCH 0/4] KVM: x86/mmu: Memtype related cleanups Sean Christopherson
                   ` (3 preceding siblings ...)
  2022-07-15 23:00 ` [PATCH 4/4] KVM: x86/mmu: Restrict mapping level based on guest MTRR iff they're used Sean Christopherson
@ 2022-07-19 17:59 ` Paolo Bonzini
  4 siblings, 0 replies; 14+ messages in thread
From: Paolo Bonzini @ 2022-07-19 17:59 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: kvm, linux-kernel

On 7/16/22 01:00, Sean Christopherson wrote:
> Minor cleanups for KVM's handling of the memtype that's shoved into SPTEs.
> 
> Patch 1 enforces that entry '0' of the host's IA32_PAT is configured for WB
> memtype.  KVM subtly relies on this behavior (silently shoves '0' into the
> SPTE PAT field).  Check this at KVM load time so that if that doesn't hold
> true, KVM will refuse to load instead of running the guest with weird and
> potentially dangerous memtypes.
> 
> Patch 2 is a pure code cleanup (ordered after patch 1 in case someone wants
> to backport the PAT check).
> 
> Patch 3 adds a mask to track whether or not KVM may use a non-zero memtype
> value in SPTEs.  Essentially, it's an "is EPT enabled" flag without being an
> explicit "is EPT enabled" flag.  This avoids some minor work when not using
> EPT, e.g. technically KVM could drop the RET0 implementation that's used for
> SVM's get_mt_mask(), but IMO that's an unnecessary risk.
> 
> Patch 4 modifies the TDP page fault path to restrict the mapping level
> based on guest MTRRs if and only if KVM might actually consume them.  The
> guest MTRRs are purely software constructs (not directly consumed by
> hardware), and KVM only honors them when EPT is enabled (host MTRRs are
> overridden by EPT) and the guest has non-coherent DMA.  I doubt this will
> move the needle on whether or not KVM can create huge pages, but it does
> save having to do MTRR lookups on every page fault for guests without
> a non-coherent DMA device attached.
> 
> Sean Christopherson (4):
>    KVM: x86: Reject loading KVM if host.PAT[0] != WB
>    KVM: x86: Drop unnecessary goto+label in kvm_arch_init()
>    KVM: x86/mmu: Add shadow mask for effective host MTRR memtype
>    KVM: x86/mmu: Restrict mapping level based on guest MTRR iff they're
>      used
> 
>   arch/x86/kvm/mmu/mmu.c  | 26 +++++++++++++++++++-------
>   arch/x86/kvm/mmu/spte.c | 21 ++++++++++++++++++---
>   arch/x86/kvm/mmu/spte.h |  1 +
>   arch/x86/kvm/x86.c      | 33 ++++++++++++++++++++-------------
>   4 files changed, 58 insertions(+), 23 deletions(-)
> 
> 
> base-commit: 8031d87aa9953ddeb047a5356ebd0b240c30f233

Queued, thanks.

Paolo



end of thread, other threads:[~2022-07-19 18:00 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-07-15 23:00 [PATCH 0/4] KVM: x86/mmu: Memtype related cleanups Sean Christopherson
2022-07-15 23:00 ` [PATCH 1/4] KVM: x86: Reject loading KVM if host.PAT[0] != WB Sean Christopherson
2022-07-15 23:06   ` Jim Mattson
2022-07-15 23:18     ` Sean Christopherson
2022-07-18  9:42       ` Maxim Levitsky
2022-07-15 23:00 ` [PATCH 2/4] KVM: x86: Drop unnecessary goto+label in kvm_arch_init() Sean Christopherson
2022-07-18 10:03   ` Maxim Levitsky
2022-07-18 15:10     ` Sean Christopherson
2022-07-15 23:00 ` [PATCH 3/4] KVM: x86/mmu: Add shadow mask for effective host MTRR memtype Sean Christopherson
2022-07-18 12:08   ` Maxim Levitsky
2022-07-18 16:07     ` Sean Christopherson
2022-07-15 23:00 ` [PATCH 4/4] KVM: x86/mmu: Restrict mapping level based on guest MTRR iff they're used Sean Christopherson
2022-07-18 12:08   ` Maxim Levitsky
2022-07-19 17:59 ` [PATCH 0/4] KVM: x86/mmu: Memtype related cleanups Paolo Bonzini
