* [PATCH v1 0/2] MMIO mask fix for AMD memory encryption support
@ 2019-12-18 19:45 Tom Lendacky
  2019-12-18 19:45 ` [PATCH v1 1/2] KVM: x86/mmu: Allow for overriding MMIO SPTE mask Tom Lendacky
  2019-12-18 19:45 ` [PATCH v1 2/2] KVM: SVM: Implement reserved bit callback to set " Tom Lendacky
  0 siblings, 2 replies; 7+ messages in thread
From: Tom Lendacky @ 2019-12-18 19:45 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Brijesh Singh

This patch series ensures that a valid MMIO SPTE mask can be generated
under SVM when memory encryption is enabled.

The patchset includes:
- Add an optional callback to return a reserved bit(s) mask
- Implement the callback in SVM
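
For context, the underlying issue in rough terms: the encryption bit
position is defined by CPUID, so bit 51 is not guaranteed to be a safe
reserved bit once memory encryption is factored in. The series instead
derives the mask from the reduced physical address width, where the
first bit above the limit is always reserved. A minimal sketch of that
derivation (illustrative only, not part of the series; it assumes, as
the SVM patch notes, that boot_cpu_data.x86_phys_bits already reflects
the memory encryption reduction):

        /*
         * Illustrative sketch only: with memory encryption active,
         * x86_phys_bits has already been reduced, so the bit just
         * above it is guaranteed to be reserved in a PTE.
         */
        u64 mmio_reserved_bit = 1ull << boot_cpu_data.x86_phys_bits;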

---

Patches based on https://git.kernel.org/pub/scm/virt/kvm/kvm.git next
commit:
  7d73710d9ca2 ("kvm: vmx: Stop wasting a page for guest_msrs")

Tom Lendacky (2):
  KVM: x86/mmu: Allow for overriding MMIO SPTE mask
  KVM: SVM: Implement reserved bit callback to set MMIO SPTE mask

 arch/x86/include/asm/kvm_host.h |  4 ++-
 arch/x86/kvm/mmu/mmu.c          | 54 +++++++++++++++++++++------------
 arch/x86/kvm/svm.c              | 42 +++++++++++++++++++++++++
 arch/x86/kvm/x86.c              |  2 +-
 4 files changed, 80 insertions(+), 22 deletions(-)

-- 
2.17.1



* [PATCH v1 1/2] KVM: x86/mmu: Allow for overriding MMIO SPTE mask
  2019-12-18 19:45 [PATCH v1 0/2] MMIO mask fix for AMD memory encryption support Tom Lendacky
@ 2019-12-18 19:45 ` Tom Lendacky
  2019-12-18 19:51   ` Tom Lendacky
  2019-12-18 19:45 ` [PATCH v1 2/2] KVM: SVM: Implement reserved bit callback to set " Tom Lendacky
  1 sibling, 1 reply; 7+ messages in thread
From: Tom Lendacky @ 2019-12-18 19:45 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Brijesh Singh

The KVM MMIO support uses bit 51 as the reserved bit to cause nested page
faults when a guest performs MMIO. The AMD memory encryption support uses
CPUID functions to define the encryption bit position. Given this, KVM
can't assume that bit 51 will be safe all the time.

Add a callback to return a reserved bit(s) mask that can be used for the
MMIO pagetable entries. The callback is not responsible for setting the
present bit.

If a callback is registered:
  - any non-zero mask returned is updated with the present bit and used
    as the MMIO SPTE mask.
  - a zero mask returned results in a mask with only bit 51 set (i.e. no
    present bit) as the MMIO SPTE mask, similar to the way 52-bit physical
    addressing is handled.

If no callback is registered, the current method of setting the MMIO SPTE
mask is used.
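
(For readers skimming the diff, the selection logic being added condenses
to roughly the following; this restates the kvm_set_mmio_spte_mask() hunk
below rather than introducing anything new:)

        mask = ops->get_reserved_mask();
        if (mask)
                mask |= 1ull;           /* reserved bit(s) plus the present bit */
        else
                mask = 1ull << 51;      /* no reserved bits: bit 51, present bit clear */

        kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK);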

Fixes: 28a1f3ac1d0c ("kvm: x86: Set highest physical address bits in non-present/reserved SPTEs")
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/kvm_host.h |  4 ++-
 arch/x86/kvm/mmu/mmu.c          | 54 +++++++++++++++++++++------------
 arch/x86/kvm/x86.c              |  2 +-
 3 files changed, 38 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b79cd6aa4075..0c666c10f1a2 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1233,6 +1233,8 @@ struct kvm_x86_ops {
 
 	bool (*apic_init_signal_blocked)(struct kvm_vcpu *vcpu);
 	int (*enable_direct_tlbflush)(struct kvm_vcpu *vcpu);
+
+	u64 (*get_reserved_mask)(void);
 };
 
 struct kvm_arch_async_pf {
@@ -1266,7 +1268,7 @@ static inline int kvm_arch_flush_remote_tlb(struct kvm *kvm)
 		return -ENOTSUPP;
 }
 
-int kvm_mmu_module_init(void);
+int kvm_mmu_module_init(struct kvm_x86_ops *ops);
 void kvm_mmu_module_exit(void);
 
 void kvm_mmu_destroy(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6f92b40d798c..d419df7a4056 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6227,30 +6227,44 @@ static void mmu_destroy_caches(void)
 	kmem_cache_destroy(mmu_page_header_cache);
 }
 
-static void kvm_set_mmio_spte_mask(void)
+static void kvm_set_mmio_spte_mask(struct kvm_x86_ops *ops)
 {
 	u64 mask;
 
-	/*
-	 * Set the reserved bits and the present bit of an paging-structure
-	 * entry to generate page fault with PFER.RSV = 1.
-	 */
+	if (ops->get_reserved_mask) {
+		mask = ops->get_reserved_mask();
 
-	/*
-	 * Mask the uppermost physical address bit, which would be reserved as
-	 * long as the supported physical address width is less than 52.
-	 */
-	mask = 1ull << 51;
+		/*
+		 * If there are reserved bits available, add the present bit
+		 * to the mask to generate a page fault with PFER.RSV = 1.
+		 * If there are no reserved bits available, mask the uppermost
+		 * physical address bit, but keep the present bit cleared.
+		 */
+		if (mask)
+			mask |= 1ull;
+		else
+			mask = 1ull << 51;
+	} else {
+		/*
+		 * Set the reserved bits and the present bit of a
+		 * paging-structure entry to generate page fault with
+		 * PFER.RSV = 1.
+		 */
 
-	/* Set the present bit. */
-	mask |= 1ull;
+		/*
+		 * Mask the uppermost physical address bit, which would be
+		 * reserved as long as the supported physical address width
+		 * is less than 52.
+		 */
+		mask = 1ull << 51;
 
-	/*
-	 * If reserved bit is not supported, clear the present bit to disable
-	 * mmio page fault.
-	 */
-	if (IS_ENABLED(CONFIG_X86_64) && shadow_phys_bits == 52)
-		mask &= ~1ull;
+		/*
+		 * If reserved bit is not supported, don't set the present bit
+		 * to disable mmio page fault.
+		 */
+		if (!IS_ENABLED(CONFIG_X86_64) || shadow_phys_bits != 52)
+			mask |= 1ull;
+	}
 
 	kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK);
 }
@@ -6301,7 +6315,7 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
 	return 0;
 }
 
-int kvm_mmu_module_init(void)
+int kvm_mmu_module_init(struct kvm_x86_ops *ops)
 {
 	int ret = -ENOMEM;
 
@@ -6320,7 +6334,7 @@ int kvm_mmu_module_init(void)
 
 	kvm_mmu_reset_all_pte_masks();
 
-	kvm_set_mmio_spte_mask();
+	kvm_set_mmio_spte_mask(ops);
 
 	pte_list_desc_cache = kmem_cache_create("pte_list_desc",
 					    sizeof(struct pte_list_desc),
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3ed167e039e5..311da4ed423d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7234,7 +7234,7 @@ int kvm_arch_init(void *opaque)
 		goto out_free_x86_fpu_cache;
 	}
 
-	r = kvm_mmu_module_init();
+	r = kvm_mmu_module_init(ops);
 	if (r)
 		goto out_free_percpu;
 
-- 
2.17.1



* [PATCH v1 2/2] KVM: SVM: Implement reserved bit callback to set MMIO SPTE mask
  2019-12-18 19:45 [PATCH v1 0/2] MMIO mask fix for AMD memory encryption support Tom Lendacky
  2019-12-18 19:45 ` [PATCH v1 1/2] KVM: x86/mmu: Allow for overriding MMIO SPTE mask Tom Lendacky
@ 2019-12-18 19:45 ` Tom Lendacky
  1 sibling, 0 replies; 7+ messages in thread
From: Tom Lendacky @ 2019-12-18 19:45 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Brijesh Singh

Register a reserved bit(s) mask callback that checks whether memory
encryption is supported and enabled:
  If enabled, the physical address width is reduced and the first bit
  above the reduced physical address limit will always be reserved.

  If disabled, the physical address width is not reduced, so bit 51 can
  be used, unless the physical address width is 52, in which case there
  are no reserved bits and zero is returned for the mask.
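
(At a glance, the callback below returns the following; this restates the
diff, and the example width is purely illustrative:)

        /*
         * no 0x8000001f CPUID leaf                   -> 1ull << 51
         * MSR_K8_SYSCFG_MEM_ENCRYPT set              -> 1ull << x86_phys_bits
         *                                               (e.g. 1ull << 43 if the
         *                                                width was reduced to 43)
         * MSR_K8_SYSCFG_MEM_ENCRYPT clear, 52 bits   -> 0 (no reserved bits)
         * MSR_K8_SYSCFG_MEM_ENCRYPT clear, < 52 bits -> 1ull << 51
         */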

Fixes: 28a1f3ac1d0c ("kvm: x86: Set highest physical address bits in non-present/reserved SPTEs")
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/kvm/svm.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 122d4ce3b1ab..a769aab45841 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -7242,6 +7242,46 @@ static bool svm_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
 		   (svm->vmcb->control.intercept & (1ULL << INTERCEPT_INIT));
 }
 
+static u64 svm_get_reserved_mask(void)
+{
+	u64 mask, msr;
+
+	/* The default mask, used when memory encryption is not enabled */
+	mask = 1ull << 51;
+
+	/* No support for memory encryption, use the default */
+	if (cpuid_eax(0x80000000) < 0x8000001f)
+		return mask;
+
+	/*
+	 * Check for memory encryption support. If memory encryption support
+	 * is enabled:
+	 *   The physical addressing width is reduced. The first bit above the
+	 *   new physical addressing limit will always be reserved.
+	 */
+	rdmsrl(MSR_K8_SYSCFG, msr);
+	if (msr & MSR_K8_SYSCFG_MEM_ENCRYPT) {
+		/*
+		 * x86_phys_bits has been adjusted as part of the memory
+		 * encryption support.
+		 */
+		mask = 1ull << boot_cpu_data.x86_phys_bits;
+
+		return mask;
+	}
+
+	/*
+	 * If memory encryption support is disabled:
+	 *   The physical addressing width is not reduced, so the default mask
+	 *   will always be reserved unless the physical addressing width is 52,
+	 *   in which case there are no reserved bits, so return an empty mask.
+	 */
+	if (IS_ENABLED(CONFIG_X86_64) && boot_cpu_data.x86_phys_bits == 52)
+		mask = 0;
+
+	return mask;
+}
+
 static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.cpu_has_kvm_support = has_svm,
 	.disabled_by_bios = is_disabled,
@@ -7379,6 +7419,8 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.need_emulation_on_page_fault = svm_need_emulation_on_page_fault,
 
 	.apic_init_signal_blocked = svm_apic_init_signal_blocked,
+
+	.get_reserved_mask = svm_get_reserved_mask,
 };
 
 static int __init svm_init(void)
-- 
2.17.1



* Re: [PATCH v1 1/2] KVM: x86/mmu: Allow for overriding MMIO SPTE mask
  2019-12-18 19:45 ` [PATCH v1 1/2] KVM: x86/mmu: Allow for overriding MMIO SPTE mask Tom Lendacky
@ 2019-12-18 19:51   ` Tom Lendacky
  2019-12-18 20:27     ` Sean Christopherson
  0 siblings, 1 reply; 7+ messages in thread
From: Tom Lendacky @ 2019-12-18 19:51 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Brijesh Singh

On 12/18/19 1:45 PM, Tom Lendacky wrote:
> The KVM MMIO support uses bit 51 as the reserved bit to cause nested page
> faults when a guest performs MMIO. The AMD memory encryption support uses
> CPUID functions to define the encryption bit position. Given this, KVM
> can't assume that bit 51 will be safe all the time.
> 
> Add a callback to return a reserved bit(s) mask that can be used for the
> MMIO pagetable entries. The callback is not responsible for setting the
> present bit.
> 
> If a callback is registered:
>   - any non-zero mask returned is updated with the present bit and used
>     as the MMIO SPTE mask.
>   - a zero mask returned results in a mask with only bit 51 set (i.e. no
>     present bit) as the MMIO SPTE mask, similar to the way 52-bit physical
>     addressing is handled.
> 
> If no callback is registered, the current method of setting the MMIO SPTE
> mask is used.
> 
> Fixes: 28a1f3ac1d0c ("kvm: x86: Set highest physical address bits in non-present/reserved SPTEs")
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  4 ++-
>  arch/x86/kvm/mmu/mmu.c          | 54 +++++++++++++++++++++------------
>  arch/x86/kvm/x86.c              |  2 +-
>  3 files changed, 38 insertions(+), 22 deletions(-)

This patch has some extra churn because kvm_x86_ops isn't set yet when the
call to kvm_set_mmio_spte_mask() is made. If it's not a problem to move
setting kvm_x86_ops just a bit earlier in kvm_arch_init(), some of the
churn can be avoided.
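
(For illustration only, the reordering would roughly look like the
following in kvm_arch_init(); whether anything in between relies on
kvm_x86_ops still being NULL is exactly the open question:)

        kvm_x86_ops = ops;              /* moved up so the MMU init can see the callbacks */

        r = kvm_mmu_module_init();      /* could then keep its current signature */
        if (r)
                goto out_free_percpu;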

Thanks,
Tom

> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index b79cd6aa4075..0c666c10f1a2 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1233,6 +1233,8 @@ struct kvm_x86_ops {
>  
>  	bool (*apic_init_signal_blocked)(struct kvm_vcpu *vcpu);
>  	int (*enable_direct_tlbflush)(struct kvm_vcpu *vcpu);
> +
> +	u64 (*get_reserved_mask)(void);
>  };
>  
>  struct kvm_arch_async_pf {
> @@ -1266,7 +1268,7 @@ static inline int kvm_arch_flush_remote_tlb(struct kvm *kvm)
>  		return -ENOTSUPP;
>  }
>  
> -int kvm_mmu_module_init(void);
> +int kvm_mmu_module_init(struct kvm_x86_ops *ops);
>  void kvm_mmu_module_exit(void);
>  
>  void kvm_mmu_destroy(struct kvm_vcpu *vcpu);
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 6f92b40d798c..d419df7a4056 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -6227,30 +6227,44 @@ static void mmu_destroy_caches(void)
>  	kmem_cache_destroy(mmu_page_header_cache);
>  }
>  
> -static void kvm_set_mmio_spte_mask(void)
> +static void kvm_set_mmio_spte_mask(struct kvm_x86_ops *ops)
>  {
>  	u64 mask;
>  
> -	/*
> -	 * Set the reserved bits and the present bit of an paging-structure
> -	 * entry to generate page fault with PFER.RSV = 1.
> -	 */
> +	if (ops->get_reserved_mask) {
> +		mask = ops->get_reserved_mask();
>  
> -	/*
> -	 * Mask the uppermost physical address bit, which would be reserved as
> -	 * long as the supported physical address width is less than 52.
> -	 */
> -	mask = 1ull << 51;
> +		/*
> +		 * If there are reserved bits available, add the present bit
> +		 * to the mask to generate a page fault with PFER.RSV = 1.
> +		 * If there are no reserved bits available, mask the uppermost
> +		 * physical address bit, but keep the present bit cleared.
> +		 */
> +		if (mask)
> +			mask |= 1ull;
> +		else
> +			mask = 1ull << 51;
> +	} else {
> +		/*
> +		 * Set the reserved bits and the present bit of a
> +		 * paging-structure entry to generate page fault with
> +		 * PFER.RSV = 1.
> +		 */
>  
> -	/* Set the present bit. */
> -	mask |= 1ull;
> +		/*
> +		 * Mask the uppermost physical address bit, which would be
> +		 * reserved as long as the supported physical address width
> +		 * is less than 52.
> +		 */
> +		mask = 1ull << 51;
>  
> -	/*
> -	 * If reserved bit is not supported, clear the present bit to disable
> -	 * mmio page fault.
> -	 */
> -	if (IS_ENABLED(CONFIG_X86_64) && shadow_phys_bits == 52)
> -		mask &= ~1ull;
> +		/*
> +		 * If reserved bit is not supported, don't set the present bit
> +		 * to disable mmio page fault.
> +		 */
> +		if (!IS_ENABLED(CONFIG_X86_64) || shadow_phys_bits != 52)
> +			mask |= 1ull;
> +	}
>  
>  	kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK);
>  }
> @@ -6301,7 +6315,7 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
>  	return 0;
>  }
>  
> -int kvm_mmu_module_init(void)
> +int kvm_mmu_module_init(struct kvm_x86_ops *ops)
>  {
>  	int ret = -ENOMEM;
>  
> @@ -6320,7 +6334,7 @@ int kvm_mmu_module_init(void)
>  
>  	kvm_mmu_reset_all_pte_masks();
>  
> -	kvm_set_mmio_spte_mask();
> +	kvm_set_mmio_spte_mask(ops);
>  
>  	pte_list_desc_cache = kmem_cache_create("pte_list_desc",
>  					    sizeof(struct pte_list_desc),
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 3ed167e039e5..311da4ed423d 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7234,7 +7234,7 @@ int kvm_arch_init(void *opaque)
>  		goto out_free_x86_fpu_cache;
>  	}
>  
> -	r = kvm_mmu_module_init();
> +	r = kvm_mmu_module_init(ops);
>  	if (r)
>  		goto out_free_percpu;
>  
> 


* Re: [PATCH v1 1/2] KVM: x86/mmu: Allow for overriding MMIO SPTE mask
  2019-12-18 19:51   ` Tom Lendacky
@ 2019-12-18 20:27     ` Sean Christopherson
  2019-12-18 21:18       ` Tom Lendacky
  0 siblings, 1 reply; 7+ messages in thread
From: Sean Christopherson @ 2019-12-18 20:27 UTC (permalink / raw)
  To: Tom Lendacky
  Cc: kvm, linux-kernel, Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Brijesh Singh

On Wed, Dec 18, 2019 at 01:51:23PM -0600, Tom Lendacky wrote:
> On 12/18/19 1:45 PM, Tom Lendacky wrote:
> > The KVM MMIO support uses bit 51 as the reserved bit to cause nested page
> > faults when a guest performs MMIO. The AMD memory encryption support uses
> > CPUID functions to define the encryption bit position. Given this, KVM
> > can't assume that bit 51 will be safe all the time.
> > 
> > Add a callback to return a reserved bit(s) mask that can be used for the
> > MMIO pagetable entries. The callback is not responsible for setting the
> > present bit.
> > 
> > If a callback is registered:
> >   - any non-zero mask returned is updated with the present bit and used
> >     as the MMIO SPTE mask.
> >   - a zero mask returned results in a mask with only bit 51 set (i.e. no
> >     present bit) as the MMIO SPTE mask, similar to the way 52-bit physical
> >     addressing is handled.
> > 
> > If no callback is registered, the current method of setting the MMIO SPTE
> > mask is used.
> > 
> > Fixes: 28a1f3ac1d0c ("kvm: x86: Set highest physical address bits in non-present/reserved SPTEs")
> > Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> > ---
> >  arch/x86/include/asm/kvm_host.h |  4 ++-
> >  arch/x86/kvm/mmu/mmu.c          | 54 +++++++++++++++++++++------------
> >  arch/x86/kvm/x86.c              |  2 +-
> >  3 files changed, 38 insertions(+), 22 deletions(-)
> 
> This patch has some extra churn because kvm_x86_ops isn't set yet when the
> call to kvm_set_mmio_spte_mask() is made. If it's not a problem to move
> setting kvm_x86_ops just a bit earlier in kvm_arch_init(), some of the
> churn can be avoided.

As a completely different alternative, what about handling this purely
within SVM code by overriding the masks during svm_hardware_setup(),
similar to how VMX handles EPT's custom masks, e.g.:

	/*
	 * Override the MMIO masks if memory encryption support is enabled:
	 *   The physical addressing width is reduced. The first bit above the
	 *   new physical addressing limit will always be reserved.
	 */
	if (cpuid_eax(0x80000000) >= 0x8000001f) {
		rdmsrl(MSR_K8_SYSCFG, msr);
		if (msr & MSR_K8_SYSCFG_MEM_ENCRYPT) {
			mask = BIT_ULL(boot_cpu_data.x86_phys_bits) | BIT_ULL(0);
			kvm_mmu_set_mmio_spte_mask(mask, mask,
						   ACC_WRITE_MASK | ACC_USER_MASK);
		}
	}


* Re: [PATCH v1 1/2] KVM: x86/mmu: Allow for overriding MMIO SPTE mask
  2019-12-18 20:27     ` Sean Christopherson
@ 2019-12-18 21:18       ` Tom Lendacky
  2019-12-24 18:14         ` Tom Lendacky
  0 siblings, 1 reply; 7+ messages in thread
From: Tom Lendacky @ 2019-12-18 21:18 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, linux-kernel, Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Brijesh Singh

On 12/18/19 2:27 PM, Sean Christopherson wrote:
> On Wed, Dec 18, 2019 at 01:51:23PM -0600, Tom Lendacky wrote:
>> On 12/18/19 1:45 PM, Tom Lendacky wrote:
>>> The KVM MMIO support uses bit 51 as the reserved bit to cause nested page
>>> faults when a guest performs MMIO. The AMD memory encryption support uses
>>> CPUID functions to define the encryption bit position. Given this, KVM
>>> can't assume that bit 51 will be safe all the time.
>>>
>>> Add a callback to return a reserved bit(s) mask that can be used for the
>>> MMIO pagetable entries. The callback is not responsible for setting the
>>> present bit.
>>>
>>> If a callback is registered:
>>>   - any non-zero mask returned is updated with the present bit and used
>>>     as the MMIO SPTE mask.
>>>   - a zero mask returned results in a mask with only bit 51 set (i.e. no
>>>     present bit) as the MMIO SPTE mask, similar to the way 52-bit physical
>>>     addressing is handled.
>>>
>>> If no callback is registered, the current method of setting the MMIO SPTE
>>> mask is used.
>>>
>>> Fixes: 28a1f3ac1d0c ("kvm: x86: Set highest physical address bits in non-present/reserved SPTEs")
>>> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
>>> ---
>>>  arch/x86/include/asm/kvm_host.h |  4 ++-
>>>  arch/x86/kvm/mmu/mmu.c          | 54 +++++++++++++++++++++------------
>>>  arch/x86/kvm/x86.c              |  2 +-
>>>  3 files changed, 38 insertions(+), 22 deletions(-)
>>
>> This patch has some extra churn because kvm_x86_ops isn't set yet when the
>> call to kvm_set_mmio_spte_mask() is made. If it's not a problem to move
>> setting kvm_x86_ops just a bit earlier in kvm_arch_init(), some of the
>> churn can be avoided.
> 
> As a completely different alternative, what about handling this purely
> within SVM code by overriding the masks during svm_hardware_setup(),
> similar to how VMX handles EPT's custom masks, e.g.:
> 
> 	/*
> 	 * Override the MMIO masks if memory encryption support is enabled:
> 	 *   The physical addressing width is reduced. The first bit above the
> 	 *   new physical addressing limit will always be reserved.
> 	 */
> 	if (cpuid_eax(0x80000000) >= 0x8000001f) {
> 		rdmsrl(MSR_K8_SYSCFG, msr);
> 		if (msr & MSR_K8_SYSCFG_MEM_ENCRYPT) {
> 			mask = BIT_ULL(boot_cpu_data.x86_phys_bits) | BIT_ULL(0);
> 			kvm_mmu_set_mmio_spte_mask(mask, mask,
> 						   ACC_WRITE_MASK | ACC_USER_MASK);
> 		}
> 	}

Works for me if no one has objections to doing it that way (and will
actually make going into stable much easier).

Thanks,
Tom

> 


* Re: [PATCH v1 1/2] KVM: x86/mmu: Allow for overriding MMIO SPTE mask
  2019-12-18 21:18       ` Tom Lendacky
@ 2019-12-24 18:14         ` Tom Lendacky
  0 siblings, 0 replies; 7+ messages in thread
From: Tom Lendacky @ 2019-12-24 18:14 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, linux-kernel, Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Brijesh Singh

On 12/18/19 3:18 PM, Tom Lendacky wrote:
> On 12/18/19 2:27 PM, Sean Christopherson wrote:
>> On Wed, Dec 18, 2019 at 01:51:23PM -0600, Tom Lendacky wrote:
>>> On 12/18/19 1:45 PM, Tom Lendacky wrote:
>>>> The KVM MMIO support uses bit 51 as the reserved bit to cause nested page
>>>> faults when a guest performs MMIO. The AMD memory encryption support uses
>>>> CPUID functions to define the encryption bit position. Given this, KVM
>>>> can't assume that bit 51 will be safe all the time.
>>>>
>>>> Add a callback to return a reserved bit(s) mask that can be used for the
>>>> MMIO pagetable entries. The callback is not responsible for setting the
>>>> present bit.
>>>>
>>>> If a callback is registered:
>>>>   - any non-zero mask returned is updated with the present bit and used
>>>>     as the MMIO SPTE mask.
>>>>   - a zero mask returned results in a mask with only bit 51 set (i.e. no
>>>>     present bit) as the MMIO SPTE mask, similar to the way 52-bit physical
>>>>     addressing is handled.
>>>>
>>>> If no callback is registered, the current method of setting the MMIO SPTE
>>>> mask is used.
>>>>
>>>> Fixes: 28a1f3ac1d0c ("kvm: x86: Set highest physical address bits in non-present/reserved SPTEs")
>>>> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
>>>> ---
>>>>  arch/x86/include/asm/kvm_host.h |  4 ++-
>>>>  arch/x86/kvm/mmu/mmu.c          | 54 +++++++++++++++++++++------------
>>>>  arch/x86/kvm/x86.c              |  2 +-
>>>>  3 files changed, 38 insertions(+), 22 deletions(-)
>>>
>>> This patch has some extra churn because kvm_x86_ops isn't set yet when the
>>> call to kvm_set_mmio_spte_mask() is made. If it's not a problem to move
>>> setting kvm_x86_ops just a bit earlier in kvm_arch_init(), some of the
>>> churn can be avoided.
>>
>> As a completely different alternative, what about handling this purely
>> within SVM code by overriding the masks during svm_hardware_setup(),
>> similar to how VMX handles EPT's custom masks, e.g.:
>>
>> 	/*
>> 	 * Override the MMIO masks if memory encryption support is enabled:
>> 	 *   The physical addressing width is reduced. The first bit above the
>> 	 *   new physical addressing limit will always be reserved.
>> 	 */
>> 	if (cpuid_eax(0x80000000) >= 0x8000001f) {
>> 		rdmsrl(MSR_K8_SYSCFG, msr);
>> 		if (msr & MSR_K8_SYSCFG_MEM_ENCRYPT) {
>> 			mask = BIT_ULL(boot_cpu_data.x86_phys_bits) | BIT_ULL(0);
>> 			kvm_mmu_set_mmio_spte_mask(mask, mask,
>> 						   ACC_WRITE_MASK | ACC_USER_MASK);
>> 		}
>> 	}
> 
> Works for me if no one has objections to doing it that way (and will
> actually make going into stable much easier).

No objections, so I'll re-submit using Sean's suggested override method.

Thanks,
Tom

> 
> Thanks,
> Tom
> 
>>
