* [PATCH 0/2] Fix reserved bits calculation errors caused by MKTME
@ 2019-05-03 10:08 Kai Huang
From: Kai Huang @ 2019-05-03 10:08 UTC
  To: kvm, pbonzini, rkrcmar
  Cc: sean.j.christopherson, junaids, thomas.lendacky, brijesh.singh,
	guangrong.xiao, tglx, bp, hpa, kai.huang

This series fixes reserved-bits calculation errors caused by MKTME. MKTME
repurposes the high bits of the physical address as a 'keyID', so those bits
are not reserved. To honor this hardware behavior, the repurposed bits are
subtracted from boot_cpu_data.x86_phys_bits when MKTME is detected (exactly
how many bits are taken away is configured by the BIOS). KVM currently assumes
that bits boot_cpu_data.x86_phys_bits through 51 are reserved, which is no
longer true with MKTME, and needs to be fixed.
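
To make the breakage concrete, here is a minimal standalone sketch (not part
of the patches; the macro and helper names are illustrative) of the assumption
that goes wrong -- KVM derives the reserved physical address bits from
boot_cpu_data.x86_phys_bits, but with MKTME the keyID bits sit inside that
range even though they are legal address bits:

  #include <stdint.h>

  /* Bits [h:l] set, equivalent to the kernel's GENMASK_ULL(). */
  #define GENMASK_ULL(h, l) \
  	((~0ULL << (l)) & (~0ULL >> (63 - (h))))

  /* KVM's current assumption: bits [51:x86_phys_bits] are reserved. */
  static uint64_t rsvd_pa_mask(unsigned int x86_phys_bits)
  {
  	return GENMASK_ULL(51, x86_phys_bits);
  }

  /*
   * With MKTME and e.g. 6 keyID bits configured by the BIOS, CPU detection
   * reduces x86_phys_bits from 46 to 40, so rsvd_pa_mask(40) wrongly
   * includes bits [45:40] -- the keyID bits -- as reserved.
   */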

This series was split from the old patch I sent out around two weeks ago:

kvm: x86: Fix several SPTE mask calculation errors caused by MKTME

Changes from the old patch:

  - Split the single patch into two. Patch 1 moves kvm_set_mmio_spte_mask()
    as a prerequisite and doesn't change functionality; patch 2 contains the
    actual fix.

  - Renamed shadow_first_rsvd_bits to shadow_phys_bits, as suggested by Sean.

  - Refined the comments and commit messages to be more concise.

Btw, sorry that I will be out next week and won't be able to reply to email.

Kai Huang (2):
  kvm: x86: Move kvm_set_mmio_spte_mask() from x86.c to mmu.c
  kvm: x86: Fix reserved bits related calculation errors caused by MKTME

 arch/x86/kvm/mmu.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++++++----
 arch/x86/kvm/x86.c | 31 ---------------------------
 2 files changed, 57 insertions(+), 35 deletions(-)

-- 
2.13.6



* [PATCH 1/2] kvm: x86: Move kvm_set_mmio_spte_mask() from x86.c to mmu.c
From: Kai Huang @ 2019-05-03 10:08 UTC
  To: kvm, pbonzini, rkrcmar
  Cc: sean.j.christopherson, junaids, thomas.lendacky, brijesh.singh,
	guangrong.xiao, tglx, bp, hpa, kai.huang, Kai Huang

Move kvm_set_mmio_spte_mask() from x86.c to mmu.c as a prerequisite for
fixing several SPTE reserved-bits calculation errors caused by MKTME; the fix
requires kvm_set_mmio_spte_mask() to use a static variable local to mmu.c.

Also move the call site of kvm_set_mmio_spte_mask() from kvm_arch_init() to
kvm_mmu_module_init() so that kvm_set_mmio_spte_mask() can be static.

Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
---
 arch/x86/kvm/mmu.c | 31 +++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c | 31 -------------------------------
 2 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b1f6451022e5..c0a3d120167b 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5989,6 +5989,35 @@ static void mmu_destroy_caches(void)
 	kmem_cache_destroy(mmu_page_header_cache);
 }
 
+static void kvm_set_mmio_spte_mask(void)
+{
+	u64 mask;
+	int maxphyaddr = boot_cpu_data.x86_phys_bits;
+
+	/*
+	 * Set the reserved bits and the present bit of a paging-structure
+	 * entry to generate a page fault with PFEC.RSVD = 1.
+	 */
+
+	/*
+	 * Mask the uppermost physical address bit, which would be reserved as
+	 * long as the supported physical address width is less than 52.
+	 */
+	mask = 1ull << 51;
+
+	/* Set the present bit. */
+	mask |= 1ull;
+
+	/*
+	 * If the reserved bit is not supported, clear the present bit to
+	 * disable MMIO page faults.
+	 */
+	if (IS_ENABLED(CONFIG_X86_64) && maxphyaddr == 52)
+		mask &= ~1ull;
+
+	kvm_mmu_set_mmio_spte_mask(mask, mask);
+}
+
 int kvm_mmu_module_init(void)
 {
 	int ret = -ENOMEM;
@@ -6005,6 +6034,8 @@ int kvm_mmu_module_init(void)
 
 	kvm_mmu_reset_all_pte_masks();
 
+	kvm_set_mmio_spte_mask();
+
 	pte_list_desc_cache = kmem_cache_create("pte_list_desc",
 					    sizeof(struct pte_list_desc),
 					    0, SLAB_ACCOUNT, NULL);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index dc621f73e96b..80ab4a46ddfd 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6877,35 +6877,6 @@ static struct perf_guest_info_callbacks kvm_guest_cbs = {
 	.handle_intel_pt_intr	= kvm_handle_intel_pt_intr,
 };
 
-static void kvm_set_mmio_spte_mask(void)
-{
-	u64 mask;
-	int maxphyaddr = boot_cpu_data.x86_phys_bits;
-
-	/*
-	 * Set the reserved bits and the present bit of an paging-structure
-	 * entry to generate page fault with PFER.RSV = 1.
-	 */
-
-	/*
-	 * Mask the uppermost physical address bit, which would be reserved as
-	 * long as the supported physical address width is less than 52.
-	 */
-	mask = 1ull << 51;
-
-	/* Set the present bit. */
-	mask |= 1ull;
-
-	/*
-	 * If reserved bit is not supported, clear the present bit to disable
-	 * mmio page fault.
-	 */
-	if (IS_ENABLED(CONFIG_X86_64) && maxphyaddr == 52)
-		mask &= ~1ull;
-
-	kvm_mmu_set_mmio_spte_mask(mask, mask);
-}
-
 #ifdef CONFIG_X86_64
 static void pvclock_gtod_update_fn(struct work_struct *work)
 {
@@ -7002,8 +6973,6 @@ int kvm_arch_init(void *opaque)
 	if (r)
 		goto out_free_percpu;
 
-	kvm_set_mmio_spte_mask();
-
 	kvm_x86_ops = ops;
 
 	kvm_mmu_set_mask_ptes(PT_USER_MASK, PT_ACCESSED_MASK,
-- 
2.13.6



* [PATCH 2/2] kvm: x86: Fix reserved bits related calculation errors caused by MKTME
From: Kai Huang @ 2019-05-03 10:08 UTC
  To: kvm, pbonzini, rkrcmar
  Cc: sean.j.christopherson, junaids, thomas.lendacky, brijesh.singh,
	guangrong.xiao, tglx, bp, hpa, kai.huang, Kai Huang

Intel MKTME repurposes several high bits of the physical address as a 'keyID'
for memory encryption, thus effectively reducing the platform's maximum
physical address bits. Exactly how many bits are reduced is configured by the
BIOS. To honor this hardware behavior, the repurposed bits are subtracted from
cpuinfo_x86.x86_phys_bits when MKTME is detected during CPU detection.
Similarly, AMD SME/SEV also reduces the physical address bits for memory
encryption, and cpuinfo_x86.x86_phys_bits is reduced as well when SME/SEV is
detected. Therefore, for both MKTME and SME/SEV, boot_cpu_data.x86_phys_bits
no longer holds the physical address bits reported by CPUID.
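
For reference, the detection-time adjustment described above looks roughly
like the following (a simplified sketch modeled on detect_tme() in
arch/x86/kernel/cpu/intel.c; treat the exact MSR field and macro names as
illustrative):

  static void detect_tme_sketch(struct cpuinfo_x86 *c)
  {
  	u64 tme_activate;
  	int keyid_bits;

  	rdmsrl(MSR_IA32_TME_ACTIVATE, tme_activate);

  	/* High physical address bits repurposed as the MKTME keyID. */
  	keyid_bits = TME_ACTIVATE_KEYID_BITS(tme_activate);
  	if (keyid_bits) {
  		/*
  		 * KeyID bits are not usable address bits; shrink the
  		 * CPU's usable physical address width accordingly.
  		 */
  		c->x86_phys_bits -= keyid_bits;
  	}
  }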

KVM currently treats bits from boot_cpu_data.x86_phys_bits to 51 as reserved
bits, but that is no longer true with MKTME, which treats the reduced bits as
a 'keyID' rather than as reserved bits. Therefore boot_cpu_data.x86_phys_bits
can no longer be used to calculate reserved bits. It can still be used for
AMD SME/SEV, since SME/SEV treats the reduced bits differently -- they are
treated as reserved bits, the same as the other reserved bits in a page table
entry [1].

Fix this by introducing a new 'shadow_phys_bits' variable in the KVM x86 MMU
code to store the effective physical address bits excluding reserved bits:
for MKTME it equals the physical address bits reported by CPUID, and for
SME/SEV it is boot_cpu_data.x86_phys_bits. For example (illustrative
numbers), on an MKTME system where CPUID reports 46 physical address bits and
the BIOS configures 6 keyID bits, boot_cpu_data.x86_phys_bits is 40 but
shadow_phys_bits should be 46.

Note that the physical address bits reported to the guest should remain
unchanged -- KVM should report the physical address bits from CPUID to the
guest, not boot_cpu_data.x86_phys_bits. For Intel MKTME there is no harm if
the guest sets up 'keyID' bits in its page tables (since MKTME only works at
the physical address level), and KVM doesn't even expose MKTME to the guest.
Arguably, for AMD SME/SEV, the guest is aware of SEV and should itself adjust
boot_cpu_data.x86_phys_bits when it detects SEV, so KVM should still report
the physical address bits from CPUID to the guest.
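
For contrast, the MAXPHYADDR that KVM reports to the guest is derived from
the guest's own CPUID entries rather than from boot_cpu_data -- a simplified
sketch, modeled on cpuid_query_maxphyaddr() in arch/x86/kvm/cpuid.c:

  static int query_guest_maxphyaddr(struct kvm_vcpu *vcpu)
  {
  	struct kvm_cpuid_entry2 *best;

  	best = kvm_find_cpuid_entry(vcpu, 0x80000000, 0);
  	if (best && best->eax >= 0x80000008) {
  		best = kvm_find_cpuid_entry(vcpu, 0x80000008, 0);
  		if (best)
  			return best->eax & 0xff; /* guest MAXPHYADDR */
  	}

  	return 36; /* architectural default when the leaf is absent */
  }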

[1] Section 7.10.1 Determining Support for Secure Memory Encryption,
    AMD Architecture Programmer's Manual Volume 2: System Programming.

Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
---
 arch/x86/kvm/mmu.c | 34 ++++++++++++++++++++++++++++------
 1 file changed, 28 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c0a3d120167b..b0899f175db9 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -262,6 +262,12 @@ static const u64 shadow_nonpresent_or_rsvd_mask_len = 5;
  */
 static u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
 
+/*
+ * The number of non-reserved physical address bits irrespective of features
+ * that repurpose legal bits, e.g. MKTME.
+ */
+static u8 __read_mostly shadow_phys_bits;
+
 
 static void mmu_spte_set(u64 *sptep, u64 spte);
 static union kvm_mmu_page_role
@@ -471,6 +477,21 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_set_mask_ptes);
 
+static u8 kvm_get_shadow_phys_bits(void)
+{
+	/*
+	 * boot_cpu_data.x86_phys_bits is reduced when MKTME is detected
+	 * in CPU detection code, but MKTME treats those reduced bits as
+	 * 'keyID', so they are not reserved bits. Therefore, for MKTME,
+	 * we should still return the physical address bits reported by CPUID.
+	 */
+	if (!boot_cpu_has(X86_FEATURE_TME) ||
+	    WARN_ON_ONCE(boot_cpu_data.extended_cpuid_level < 0x80000008))
+		return boot_cpu_data.x86_phys_bits;
+
+	return cpuid_eax(0x80000008) & 0xff;
+}
+
 static void kvm_mmu_reset_all_pte_masks(void)
 {
 	u8 low_phys_bits;
@@ -484,6 +505,8 @@ static void kvm_mmu_reset_all_pte_masks(void)
 	shadow_present_mask = 0;
 	shadow_acc_track_mask = 0;
 
+	shadow_phys_bits = kvm_get_shadow_phys_bits();
+
 	/*
 	 * If the CPU has 46 or less physical address bits, then set an
 	 * appropriate mask to guard against L1TF attacks. Otherwise, it is
@@ -4489,7 +4512,7 @@ reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
 	 */
 	shadow_zero_check = &context->shadow_zero_check;
 	__reset_rsvds_bits_mask(vcpu, shadow_zero_check,
-				boot_cpu_data.x86_phys_bits,
+				shadow_phys_bits,
 				context->shadow_root_level, uses_nx,
 				guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES),
 				is_pse(vcpu), true);
@@ -4526,13 +4549,13 @@ reset_tdp_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
 
 	if (boot_cpu_is_amd())
 		__reset_rsvds_bits_mask(vcpu, shadow_zero_check,
-					boot_cpu_data.x86_phys_bits,
+					shadow_phys_bits,
 					context->shadow_root_level, false,
 					boot_cpu_has(X86_FEATURE_GBPAGES),
 					true, true);
 	else
 		__reset_rsvds_bits_mask_ept(shadow_zero_check,
-					    boot_cpu_data.x86_phys_bits,
+					    shadow_phys_bits,
 					    false);
 
 	if (!shadow_me_mask)
@@ -4553,7 +4576,7 @@ reset_ept_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
 				struct kvm_mmu *context, bool execonly)
 {
 	__reset_rsvds_bits_mask_ept(&context->shadow_zero_check,
-				    boot_cpu_data.x86_phys_bits, execonly);
+				    shadow_phys_bits, execonly);
 }
 
 #define BYTE_MASK(access) \
@@ -5992,7 +6015,6 @@ static void mmu_destroy_caches(void)
 static void kvm_set_mmio_spte_mask(void)
 {
 	u64 mask;
-	int maxphyaddr = boot_cpu_data.x86_phys_bits;
 
 	/*
 	 * Set the reserved bits and the present bit of a paging-structure
@@ -6012,7 +6034,7 @@ static void kvm_set_mmio_spte_mask(void)
 	 * If the reserved bit is not supported, clear the present bit to
 	 * disable MMIO page faults.
 	 */
-	if (IS_ENABLED(CONFIG_X86_64) && maxphyaddr == 52)
+	if (IS_ENABLED(CONFIG_X86_64) && shadow_phys_bits == 52)
 		mask &= ~1ull;
 
 	kvm_mmu_set_mmio_spte_mask(mask, mask);
-- 
2.13.6



* RE: [PATCH 0/2] Fix reserved bits calculation errors caused by MKTME
From: Huang, Kai @ 2019-05-13  3:31 UTC
  To: Kai Huang, kvm, pbonzini, rkrcmar
  Cc: Christopherson, Sean J, junaids, thomas.lendacky, brijesh.singh,
	guangrong.xiao, tglx, bp, hpa

Hi Paolo/Radim,

Would you take a look?

Thanks,
-Kai


* Re: [PATCH 0/2] Fix reserved bits calculation errors caused by MKTME
From: Kai Huang @ 2019-05-27  0:12 UTC
  To: kvm, pbonzini, rkrcmar
  Cc: Christopherson, Sean J, junaids, thomas.lendacky, brijesh.singh,
	guangrong.xiao, tglx, bp, hpa

Hi Paolo,

Kindly ping.

Thanks,
-Kai

On Mon, 2019-05-13 at 03:31 +0000, Huang, Kai wrote:
> Hi Paolo/Radim,
> 
> Would you take a look?
> 
> Thanks,
> -Kai

* Re: [PATCH 0/2] Fix reserved bits calculation errors caused by MKTME
From: Paolo Bonzini @ 2019-06-04 17:04 UTC
  To: Kai Huang, kvm, rkrcmar
  Cc: sean.j.christopherson, junaids, thomas.lendacky, brijesh.singh,
	guangrong.xiao, tglx, bp, hpa, kai.huang

On 03/05/19 12:08, Kai Huang wrote:
> This series fixes reserved-bits calculation errors caused by MKTME. [...]

Queued, thanks.

Paolo
