* [PATCH 0/5 v4] KVM: nSVM: Check addresses of MSR bitmap and IO bitmap tables on vmrun of nested guests
@ 2021-03-24 17:50 Krish Sadhukhan
  2021-03-24 17:50 ` [PATCH 1/5 v4] KVM: SVM: Move IOPM_ALLOC_ORDER and MSRPM_ALLOC_ORDER #defines to svm.h Krish Sadhukhan
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Krish Sadhukhan @ 2021-03-24 17:50 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc

v3 -> v4:
	1. There were some issues with the checks added in
	   nested_vmcb_check_controls() in patch# 2. Those have been fixed.
	   Also, instead of using page_address_valid() for the checks, a new
	   function is now used. The new function does not check that the
	   addresses of the intercept tables are page-aligned.
	2. In patch# 4, the tests for alignment of the intercept table
	   addresses have been removed.


[PATCH 1/5 v4] KVM: SVM: Move IOPM_ALLOC_ORDER and MSRPM_ALLOC_ORDER
[PATCH 2/5 v4] KVM: nSVM: Check addresses of MSR and IO permission maps
[PATCH 3/5 v4] KVM: nSVM: Cleanup in nested_svm_vmrun()
[PATCH 4/5 v4] nSVM: Test addresses of MSR and IO permissions maps
[PATCH 5/5 v4] SVM: Use ALIGN macro when aligning 'io_bitmap_area'

 arch/x86/kvm/svm/nested.c | 59 +++++++++++++++++++++++++++++------------------
 arch/x86/kvm/svm/svm.c    |  3 ---
 arch/x86/kvm/svm/svm.h    |  3 +++
 3 files changed, 40 insertions(+), 25 deletions(-)

Krish Sadhukhan (3):
      KVM: SVM: Move IOPM_ALLOC_ORDER and MSRPM_ALLOC_ORDER #defines to svm.h
      nSVM: Check addresses of MSR and IO permission maps
      KVM: nSVM: Cleanup in nested_svm_vmrun()

 x86/svm.c       |  2 +-
 x86/svm_tests.c | 28 ++++++++++++++++++++++++++++
 2 files changed, 29 insertions(+), 1 deletion(-)

Krish Sadhukhan (2):
      nSVM: Test addresses of MSR and IO permissions maps
      SVM: Use ALIGN macro when aligning 'io_bitmap_area'


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/5 v4] KVM: SVM: Move IOPM_ALLOC_ORDER and MSRPM_ALLOC_ORDER #defines to svm.h
  2021-03-24 17:50 [PATCH 0/5 v4] KVM: nSVM: Check addresses of MSR bitmap and IO bitmap tables on vmrun of nested guests Krish Sadhukhan
@ 2021-03-24 17:50 ` Krish Sadhukhan
  2021-03-24 17:50 ` [PATCH 2/5 v4] KVM: nSVM: Check addresses of MSR and IO permission maps Krish Sadhukhan
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Krish Sadhukhan @ 2021-03-24 17:50 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc

These #defines will be used by nested.c in the next patch, so move them to
svm.h.

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 arch/x86/kvm/svm/svm.c | 3 ---
 arch/x86/kvm/svm/svm.h | 3 +++
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 58a45bb139f8..a3d8699e70d6 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -56,9 +56,6 @@ static const struct x86_cpu_id svm_cpu_id[] = {
 MODULE_DEVICE_TABLE(x86cpu, svm_cpu_id);
 #endif
 
-#define IOPM_ALLOC_ORDER 2
-#define MSRPM_ALLOC_ORDER 1
-
 #define SEG_TYPE_LDT 2
 #define SEG_TYPE_BUSY_TSS16 3
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 39e071fdab0c..ae0a629b3ec6 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -28,6 +28,9 @@ static const u32 host_save_user_msrs[] = {
 };
 #define NR_HOST_SAVE_USER_MSRS ARRAY_SIZE(host_save_user_msrs)
 
+#define IOPM_ALLOC_ORDER 2
+#define MSRPM_ALLOC_ORDER 1
+
 #define MAX_DIRECT_ACCESS_MSRS	18
 #define MSRPM_OFFSETS	16
 extern u32 msrpm_offsets[MSRPM_OFFSETS] __read_mostly;
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 2/5 v4] KVM: nSVM: Check addresses of MSR and IO permission maps
  2021-03-24 17:50 [PATCH 0/5 v4] KVM: nSVM: Check addresses of MSR bitmap and IO bitmap tables on vmrun of nested guests Krish Sadhukhan
  2021-03-24 17:50 ` [PATCH 1/5 v4] KVM: SVM: Move IOPM_ALLOC_ORDER and MSRPM_ALLOC_ORDER #defines to svm.h Krish Sadhukhan
@ 2021-03-24 17:50 ` Krish Sadhukhan
  2021-03-24 19:15   ` Sean Christopherson
  2021-03-24 17:50 ` [PATCH 3/5 v4] KVM: nSVM: Cleanup in nested_svm_vmrun() Krish Sadhukhan
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Krish Sadhukhan @ 2021-03-24 17:50 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc

According to section "Canonicalization and Consistency Checks" in APM vol 2,
the following guest state is illegal:

    "The MSR or IOIO intercept tables extend to a physical address that
     is greater than or equal to the maximum supported physical address."

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 arch/x86/kvm/svm/nested.c | 28 +++++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 35891d9a1099..b08d1c595736 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -231,7 +231,15 @@ static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu)
 	return true;
 }
 
-static bool nested_vmcb_check_controls(struct vmcb_control_area *control)
+static bool nested_svm_check_bitmap_pa(struct kvm_vcpu *vcpu, u64 pa,
+				       u8 order)
+{
+	u64 last_pa = PAGE_ALIGN(pa) + (PAGE_SIZE << order) - 1;
+	return last_pa > pa && !(last_pa >> cpuid_maxphyaddr(vcpu));
+}
+
+static bool nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
+				       struct vmcb_control_area *control)
 {
 	if ((vmcb_is_intercept(control, INTERCEPT_VMRUN)) == 0)
 		return false;
@@ -243,12 +251,18 @@ static bool nested_vmcb_check_controls(struct vmcb_control_area *control)
 	    !npt_enabled)
 		return false;
 
+	if (!nested_svm_check_bitmap_pa(vcpu, control->msrpm_base_pa,
+	    MSRPM_ALLOC_ORDER))
+		return false;
+	if (!nested_svm_check_bitmap_pa(vcpu, control->iopm_base_pa,
+	    IOPM_ALLOC_ORDER))
+		return false;
+
 	return true;
 }
 
-static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12)
+static bool nested_vmcb_checks(struct kvm_vcpu *vcpu, struct vmcb *vmcb12)
 {
-	struct kvm_vcpu *vcpu = &svm->vcpu;
 	bool vmcb12_lma;
 
 	if ((vmcb12->save.efer & EFER_SVME) == 0)
@@ -268,10 +282,10 @@ static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12)
 		    kvm_vcpu_is_illegal_gpa(vcpu, vmcb12->save.cr3))
 			return false;
 	}
-	if (!kvm_is_valid_cr4(&svm->vcpu, vmcb12->save.cr4))
+	if (!kvm_is_valid_cr4(vcpu, vmcb12->save.cr4))
 		return false;
 
-	return nested_vmcb_check_controls(&vmcb12->control);
+	return nested_vmcb_check_controls(vcpu, &vmcb12->control);
 }
 
 static void load_nested_vmcb_control(struct vcpu_svm *svm,
@@ -515,7 +529,7 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
 	if (WARN_ON_ONCE(!svm->nested.initialized))
 		return -EINVAL;
 
-	if (!nested_vmcb_checks(svm, vmcb12)) {
+	if (!nested_vmcb_checks(&svm->vcpu, vmcb12)) {
 		vmcb12->control.exit_code    = SVM_EXIT_ERR;
 		vmcb12->control.exit_code_hi = 0;
 		vmcb12->control.exit_info_1  = 0;
@@ -1191,7 +1205,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 		goto out_free;
 
 	ret = -EINVAL;
-	if (!nested_vmcb_check_controls(ctl))
+	if (!nested_vmcb_check_controls(vcpu, ctl))
 		goto out_free;
 
 	/*
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 3/5 v4] KVM: nSVM: Cleanup in nested_svm_vmrun()
  2021-03-24 17:50 [PATCH 0/5 v4] KVM: nSVM: Check addresses of MSR bitmap and IO bitmap tables on vmrun of nested guests Krish Sadhukhan
  2021-03-24 17:50 ` [PATCH 1/5 v4] KVM: SVM: Move IOPM_ALLOC_ORDER and MSRPM_ALLOC_ORDER #defines to svm.h Krish Sadhukhan
  2021-03-24 17:50 ` [PATCH 2/5 v4] KVM: nSVM: Check addresses of MSR and IO permission maps Krish Sadhukhan
@ 2021-03-24 17:50 ` Krish Sadhukhan
  2021-03-24 17:50 ` [PATCH 4/5 v4] nSVM: Test addresses of MSR and IO permissions maps Krish Sadhukhan
  2021-03-24 17:50 ` [PATCH 5/5 v4] SVM: Use ALIGN macro when aligning 'io_bitmap_area' Krish Sadhukhan
  4 siblings, 0 replies; 9+ messages in thread
From: Krish Sadhukhan @ 2021-03-24 17:50 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc

Use local variables to dereference svm->vcpu and svm->vmcb, as they make the
code tidier.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 arch/x86/kvm/svm/nested.c | 33 +++++++++++++++++----------------
 1 file changed, 17 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index b08d1c595736..a02a4e01e308 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -503,33 +503,34 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
 {
 	int ret;
 	struct vmcb *vmcb12;
+	struct kvm_vcpu *vcpu = &svm->vcpu;
 	struct vmcb *hsave = svm->nested.hsave;
 	struct vmcb *vmcb = svm->vmcb;
 	struct kvm_host_map map;
 	u64 vmcb12_gpa;
 
-	if (is_smm(&svm->vcpu)) {
-		kvm_queue_exception(&svm->vcpu, UD_VECTOR);
+	if (is_smm(vcpu)) {
+		kvm_queue_exception(vcpu, UD_VECTOR);
 		return 1;
 	}
 
-	vmcb12_gpa = svm->vmcb->save.rax;
-	ret = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb12_gpa), &map);
+	vmcb12_gpa = vmcb->save.rax;
+	ret = kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map);
 	if (ret == -EINVAL) {
-		kvm_inject_gp(&svm->vcpu, 0);
+		kvm_inject_gp(vcpu, 0);
 		return 1;
 	} else if (ret) {
-		return kvm_skip_emulated_instruction(&svm->vcpu);
+		return kvm_skip_emulated_instruction(vcpu);
 	}
 
-	ret = kvm_skip_emulated_instruction(&svm->vcpu);
+	ret = kvm_skip_emulated_instruction(vcpu);
 
 	vmcb12 = map.hva;
 
 	if (WARN_ON_ONCE(!svm->nested.initialized))
 		return -EINVAL;
 
-	if (!nested_vmcb_checks(&svm->vcpu, vmcb12)) {
+	if (!nested_vmcb_checks(vcpu, vmcb12)) {
 		vmcb12->control.exit_code    = SVM_EXIT_ERR;
 		vmcb12->control.exit_code_hi = 0;
 		vmcb12->control.exit_info_1  = 0;
@@ -539,8 +540,8 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
 
 
 	/* Clear internal status */
-	kvm_clear_exception_queue(&svm->vcpu);
-	kvm_clear_interrupt_queue(&svm->vcpu);
+	kvm_clear_exception_queue(vcpu);
+	kvm_clear_interrupt_queue(vcpu);
 
 	/*
 	 * Save the old vmcb, so we don't need to pick what we save, but can
@@ -552,17 +553,17 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
 	hsave->save.ds     = vmcb->save.ds;
 	hsave->save.gdtr   = vmcb->save.gdtr;
 	hsave->save.idtr   = vmcb->save.idtr;
-	hsave->save.efer   = svm->vcpu.arch.efer;
-	hsave->save.cr0    = kvm_read_cr0(&svm->vcpu);
+	hsave->save.efer   = vcpu->arch.efer;
+	hsave->save.cr0    = kvm_read_cr0(vcpu);
 	hsave->save.cr4    = svm->vcpu.arch.cr4;
-	hsave->save.rflags = kvm_get_rflags(&svm->vcpu);
-	hsave->save.rip    = kvm_rip_read(&svm->vcpu);
+	hsave->save.rflags = kvm_get_rflags(vcpu);
+	hsave->save.rip    = kvm_rip_read(vcpu);
 	hsave->save.rsp    = vmcb->save.rsp;
 	hsave->save.rax    = vmcb->save.rax;
 	if (npt_enabled)
 		hsave->save.cr3    = vmcb->save.cr3;
 	else
-		hsave->save.cr3    = kvm_read_cr3(&svm->vcpu);
+		hsave->save.cr3    = kvm_read_cr3(vcpu);
 
 	copy_vmcb_control_area(&hsave->control, &vmcb->control);
 
@@ -585,7 +586,7 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
 	nested_svm_vmexit(svm);
 
 out:
-	kvm_vcpu_unmap(&svm->vcpu, &map, true);
+	kvm_vcpu_unmap(vcpu, &map, true);
 
 	return ret;
 }
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 4/5 v4] nSVM: Test addresses of MSR and IO permissions maps
  2021-03-24 17:50 [PATCH 0/5 v4] KVM: nSVM: Check addresses of MSR bitmap and IO bitmap tables on vmrun of nested guests Krish Sadhukhan
                   ` (2 preceding siblings ...)
  2021-03-24 17:50 ` [PATCH 3/5 v4] KVM: nSVM: Cleanup in nested_svm_vmrun() Krish Sadhukhan
@ 2021-03-24 17:50 ` Krish Sadhukhan
  2021-03-24 19:21   ` Sean Christopherson
  2021-03-24 17:50 ` [PATCH 5/5 v4] SVM: Use ALIGN macro when aligning 'io_bitmap_area' Krish Sadhukhan
  4 siblings, 1 reply; 9+ messages in thread
From: Krish Sadhukhan @ 2021-03-24 17:50 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc

According to section "Canonicalization and Consistency Checks" in APM vol 2,
the following guest state is illegal:

    "The MSR or IOIO intercept tables extend to a physical address that
     is greater than or equal to the maximum supported physical address."

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 x86/svm_tests.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 29a0b59..70442d2 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -2304,6 +2304,33 @@ static void test_dr(void)
 	vmcb->save.dr7 = dr_saved;
 }
 
+/*
+ * If the MSR or IOIO intercept table extends to a physical address that
+ * is greater than or equal to the maximum supported physical address, the
+ * guest state is illegal.
+ *
+ * [ APM vol 2]
+ */
+static void test_msrpm_iopm_bitmap_addrs(void)
+{
+	u64 addr_spill_beyond_ram =
+	    (u64)(((u64)1 << cpuid_maxphyaddr()) - 4096);
+
+	/* MSR bitmap address */
+	vmcb->control.intercept |= 1ULL << INTERCEPT_MSR_PROT;
+	vmcb->control.msrpm_base_pa = addr_spill_beyond_ram;
+	report(svm_vmrun() == SVM_EXIT_ERR, "Test MSRPM address: %lx",
+	    addr_spill_beyond_ram);
+	vmcb->control.intercept &= ~(1ULL << INTERCEPT_MSR_PROT);
+
+	/* MSR bitmap address */
+	vmcb->control.intercept |= 1ULL << INTERCEPT_IOIO_PROT;
+	vmcb->control.msrpm_base_pa = addr_spill_beyond_ram;
+	report(svm_vmrun() == SVM_EXIT_ERR, "Test IOPM address: %lx",
+	    addr_spill_beyond_ram);
+	vmcb->control.intercept &= ~(1ULL << INTERCEPT_IOIO_PROT);
+}
+
 static void svm_guest_state_test(void)
 {
 	test_set_guest(basic_guest_main);
@@ -2313,6 +2340,7 @@ static void svm_guest_state_test(void)
 	test_cr3();
 	test_cr4();
 	test_dr();
+	test_msrpm_iopm_bitmap_addrs();
 }
 
 
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 5/5 v4] SVM: Use ALIGN macro when aligning 'io_bitmap_area'
  2021-03-24 17:50 [PATCH 0/5 v4] KVM: nSVM: Check addresses of MSR bitmap and IO bitmap tables on vmrun of nested guests Krish Sadhukhan
                   ` (3 preceding siblings ...)
  2021-03-24 17:50 ` [PATCH 4/5 v4] nSVM: Test addresses of MSR and IO permissions maps Krish Sadhukhan
@ 2021-03-24 17:50 ` Krish Sadhukhan
  4 siblings, 0 replies; 9+ messages in thread
From: Krish Sadhukhan @ 2021-03-24 17:50 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc

Since the macro is available and we already use it for the MSR bitmap table,
use it to align the IO bitmap table as well.
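
For reference, the two forms are equivalent assuming the usual definition of
the macro, roughly:

	#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

so ALIGN((ulong)io_bitmap_area, PAGE_SIZE) rounds the pointer up to the next
4kb boundary, exactly like ((ulong)io_bitmap_area + 4095) & ~4095.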

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 x86/svm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/x86/svm.c b/x86/svm.c
index a1808c7..846cf2a 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -298,7 +298,7 @@ static void setup_svm(void)
 	wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
 	wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_NX);
 
-	io_bitmap = (void *) (((ulong)io_bitmap_area + 4095) & ~4095);
+	io_bitmap = (void *) ALIGN((ulong)io_bitmap_area, PAGE_SIZE);
 
 	msr_bitmap = (void *) ALIGN((ulong)msr_bitmap_area, PAGE_SIZE);
 
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH 2/5 v4] KVM: nSVM: Check addresses of MSR and IO permission maps
  2021-03-24 17:50 ` [PATCH 2/5 v4] KVM: nSVM: Check addresses of MSR and IO permission maps Krish Sadhukhan
@ 2021-03-24 19:15   ` Sean Christopherson
  2021-03-25  1:16     ` Krish Sadhukhan
  0 siblings, 1 reply; 9+ messages in thread
From: Sean Christopherson @ 2021-03-24 19:15 UTC (permalink / raw)
  To: Krish Sadhukhan; +Cc: kvm, pbonzini, jmattson

On Wed, Mar 24, 2021, Krish Sadhukhan wrote:
> According to section "Canonicalization and Consistency Checks" in APM vol 2,
> the following guest state is illegal:
> 
>     "The MSR or IOIO intercept tables extend to a physical address that
>      is greater than or equal to the maximum supported physical address."
> 
> Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
> ---
>  arch/x86/kvm/svm/nested.c | 28 +++++++++++++++++++++-------
>  1 file changed, 21 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index 35891d9a1099..b08d1c595736 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -231,7 +231,15 @@ static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu)
>  	return true;
>  }
>  
> -static bool nested_vmcb_check_controls(struct vmcb_control_area *control)
> +static bool nested_svm_check_bitmap_pa(struct kvm_vcpu *vcpu, u64 pa,
> +				       u8 order)
> +{
> +	u64 last_pa = PAGE_ALIGN(pa) + (PAGE_SIZE << order) - 1;

Ugh, I really wish things that "must" happen were actually enforced by hardware.

  The MSRPM must be aligned on a 4KB boundary... The VMRUN instruction ignores
  the lower 12 bits of the address specified in the VMCB.

So, ignoring an unaligned @pa is correct, but that means
nested_svm_exit_handled_msr() and nested_svm_intercept_ioio() are busted.
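
Roughly, the consumers of the address would need to mirror VMRUN and drop the
low 12 bits themselves.  A sketch (untested, simplified from memory) of what
that would look like for the MSRPM lookup:

	/* offset/value as computed in nested_svm_exit_handled_msr() */
	u32 value;
	u64 base = svm->nested.ctl.msrpm_base_pa & PAGE_MASK;

	if (kvm_vcpu_read_guest(&svm->vcpu, base + offset, &value, 4))
		return NESTED_EXIT_DONE;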

> +	return last_pa > pa && !(last_pa >> cpuid_maxphyaddr(vcpu));

Please use kvm_vcpu_is_legal_gpa().
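
Something along these lines, perhaps (a sketch, with the map size passed in
bytes instead of an allocation order):

	static bool nested_svm_check_bitmap_pa(struct kvm_vcpu *vcpu, u64 pa,
					       u32 size)
	{
		u64 last_pa = PAGE_ALIGN(pa) + size - 1;

		return last_pa > pa && kvm_vcpu_is_legal_gpa(vcpu, last_pa);
	}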

> +}
> +
> +static bool nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
> +				       struct vmcb_control_area *control)
>  {
>  	if ((vmcb_is_intercept(control, INTERCEPT_VMRUN)) == 0)
>  		return false;
> @@ -243,12 +251,18 @@ static bool nested_vmcb_check_controls(struct vmcb_control_area *control)
>  	    !npt_enabled)
>  		return false;
>  
> +	if (!nested_svm_check_bitmap_pa(vcpu, control->msrpm_base_pa,
> +	    MSRPM_ALLOC_ORDER))
> +		return false;
> +	if (!nested_svm_check_bitmap_pa(vcpu, control->iopm_base_pa,
> +	    IOPM_ALLOC_ORDER))

I strongly dislike using the alloc orders; relying on kernel behavior to
represent architectural values is sketchy.  Case in point, IOPM_ALLOC_ORDER is a
16kb size, whereas the actual size of the IOPM is 12kb.  I also called this out
in v1...

https://lkml.kernel.org/r/YAd9MBkpDjC1MY6E@google.com
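
I.e. size the checks off the architectural map sizes.  A sketch, where
MSRPM_SIZE/IOPM_SIZE are illustrative #defines for the 8kb MSR map and the
12kb IO map (they don't exist yet):

	#define MSRPM_SIZE	(8 * 1024)
	#define IOPM_SIZE	(12 * 1024)

	if (!nested_svm_check_bitmap_pa(vcpu, control->msrpm_base_pa,
	    MSRPM_SIZE))
		return false;
	if (!nested_svm_check_bitmap_pa(vcpu, control->iopm_base_pa,
	    IOPM_SIZE))
		return false;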

> +		return false;
> +
>  	return true;
>  }
>  
> -static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12)
> +static bool nested_vmcb_checks(struct kvm_vcpu *vcpu, struct vmcb *vmcb12)
>  {
> -	struct kvm_vcpu *vcpu = &svm->vcpu;
>  	bool vmcb12_lma;
>  
>  	if ((vmcb12->save.efer & EFER_SVME) == 0)
> @@ -268,10 +282,10 @@ static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12)
>  		    kvm_vcpu_is_illegal_gpa(vcpu, vmcb12->save.cr3))
>  			return false;
>  	}
> -	if (!kvm_is_valid_cr4(&svm->vcpu, vmcb12->save.cr4))
> +	if (!kvm_is_valid_cr4(vcpu, vmcb12->save.cr4))
>  		return false;
>  
> -	return nested_vmcb_check_controls(&vmcb12->control);
> +	return nested_vmcb_check_controls(vcpu, &vmcb12->control);
>  }
>  
>  static void load_nested_vmcb_control(struct vcpu_svm *svm,
> @@ -515,7 +529,7 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
>  	if (WARN_ON_ONCE(!svm->nested.initialized))
>  		return -EINVAL;
>  
> -	if (!nested_vmcb_checks(svm, vmcb12)) {
> +	if (!nested_vmcb_checks(&svm->vcpu, vmcb12)) {

Please use @vcpu directly.  Looks like this needs a rebase, as the prototype for
nested_svm_vmrun() is wrong relative to kvm/queue.

>  		vmcb12->control.exit_code    = SVM_EXIT_ERR;
>  		vmcb12->control.exit_code_hi = 0;
>  		vmcb12->control.exit_info_1  = 0;
> @@ -1191,7 +1205,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
>  		goto out_free;
>  
>  	ret = -EINVAL;
> -	if (!nested_vmcb_check_controls(ctl))
> +	if (!nested_vmcb_check_controls(vcpu, ctl))
>  		goto out_free;
>  
>  	/*
> -- 
> 2.27.0
> 

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 4/5 v4] nSVM: Test addresses of MSR and IO permissions maps
  2021-03-24 17:50 ` [PATCH 4/5 v4] nSVM: Test addresses of MSR and IO permissions maps Krish Sadhukhan
@ 2021-03-24 19:21   ` Sean Christopherson
  0 siblings, 0 replies; 9+ messages in thread
From: Sean Christopherson @ 2021-03-24 19:21 UTC (permalink / raw)
  To: Krish Sadhukhan; +Cc: kvm, pbonzini, jmattson

On Wed, Mar 24, 2021, Krish Sadhukhan wrote:
> According to section "Canonicalization and Consistency Checks" in APM vol 2,
> the following guest state is illegal:
> 
>     "The MSR or IOIO intercept tables extend to a physical address that
>      is greater than or equal to the maximum supported physical address."
> 
> Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
> ---
>  x86/svm_tests.c | 28 ++++++++++++++++++++++++++++
>  1 file changed, 28 insertions(+)
> 
> diff --git a/x86/svm_tests.c b/x86/svm_tests.c
> index 29a0b59..70442d2 100644
> --- a/x86/svm_tests.c
> +++ b/x86/svm_tests.c
> @@ -2304,6 +2304,33 @@ static void test_dr(void)
>  	vmcb->save.dr7 = dr_saved;
>  }
>  
> +/*
> + * If the MSR or IOIO intercept table extends to a physical address that
> + * is greater than or equal to the maximum supported physical address, the
> + * guest state is illegal.
> + *
> + * [ APM vol 2]
> + */
> +static void test_msrpm_iopm_bitmap_addrs(void)
> +{
> +	u64 addr_spill_beyond_ram =

FWIW, it's not "beyond ram", it's beyond the legal physical address space.  E.g.
the address can point at stuff other than RAM and be perfectly legal from a
consistency check perspective.

> +	    (u64)(((u64)1 << cpuid_maxphyaddr()) - 4096);

It'd be nice to also check a straight legal address, and an address that
straddles the high address => 0.
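
E.g. something like this (just a sketch, the names are made up; it assumes the
intercept is set up as in the existing test and that a clean run exits with
SVM_EXIT_VMMCALL like the neighboring tests):

	u64 addr_legal = 1ull << 20;		/* comfortably below MAXPHYADDR */
	u64 addr_wraps = -(u64)PAGE_SIZE;	/* last byte wraps past the top */

	vmcb->control.msrpm_base_pa = addr_legal;
	report(svm_vmrun() == SVM_EXIT_VMMCALL,
	    "Test MSRPM at legal address: %lx", addr_legal);

	vmcb->control.msrpm_base_pa = addr_wraps;
	report(svm_vmrun() == SVM_EXIT_ERR,
	    "Test MSRPM address wrapping past the top: %lx", addr_wraps);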

> +
> +	/* MSR bitmap address */
> +	vmcb->control.intercept |= 1ULL << INTERCEPT_MSR_PROT;
> +	vmcb->control.msrpm_base_pa = addr_spill_beyond_ram;
> +	report(svm_vmrun() == SVM_EXIT_ERR, "Test MSRPM address: %lx",
> +	    addr_spill_beyond_ram);
> +	vmcb->control.intercept &= ~(1ULL << INTERCEPT_MSR_PROT);
> +
> +	/* MSR bitmap address */
> +	vmcb->control.intercept |= 1ULL << INTERCEPT_IOIO_PROT;
> +	vmcb->control.msrpm_base_pa = addr_spill_beyond_ram;

Wrong bitmap.

> +	report(svm_vmrun() == SVM_EXIT_ERR, "Test IOPM address: %lx",
> +	    addr_spill_beyond_ram);
> +	vmcb->control.intercept &= ~(1ULL << INTERCEPT_IOIO_PROT);

The controls should be saved/restored; assuming the intercepts were clear will
cause reproducibility issues for other tests.
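
E.g. (a sketch, reusing the test's own variables):

	u64 saved_intercept = vmcb->control.intercept;

	vmcb->control.intercept |= 1ULL << INTERCEPT_MSR_PROT;
	vmcb->control.msrpm_base_pa = addr_spill_beyond_ram;
	report(svm_vmrun() == SVM_EXIT_ERR, "Test MSRPM address: %lx",
	    addr_spill_beyond_ram);
	vmcb->control.intercept = saved_intercept;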

> +}
> +
>  static void svm_guest_state_test(void)
>  {
>  	test_set_guest(basic_guest_main);
> @@ -2313,6 +2340,7 @@ static void svm_guest_state_test(void)
>  	test_cr3();
>  	test_cr4();
>  	test_dr();
> +	test_msrpm_iopm_bitmap_addrs();
>  }
>  
>  
> -- 
> 2.27.0
> 

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 2/5 v4] KVM: nSVM: Check addresses of MSR and IO permission maps
  2021-03-24 19:15   ` Sean Christopherson
@ 2021-03-25  1:16     ` Krish Sadhukhan
  0 siblings, 0 replies; 9+ messages in thread
From: Krish Sadhukhan @ 2021-03-25  1:16 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: kvm, pbonzini, jmattson


On 3/24/21 12:15 PM, Sean Christopherson wrote:
> On Wed, Mar 24, 2021, Krish Sadhukhan wrote:
>> According to section "Canonicalization and Consistency Checks" in APM vol 2,
>> the following guest state is illegal:
>>
>>      "The MSR or IOIO intercept tables extend to a physical address that
>>       is greater than or equal to the maximum supported physical address."
>>
>> Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
>> Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
>> ---
>>   arch/x86/kvm/svm/nested.c | 28 +++++++++++++++++++++-------
>>   1 file changed, 21 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
>> index 35891d9a1099..b08d1c595736 100644
>> --- a/arch/x86/kvm/svm/nested.c
>> +++ b/arch/x86/kvm/svm/nested.c
>> @@ -231,7 +231,15 @@ static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu)
>>   	return true;
>>   }
>>   
>> -static bool nested_vmcb_check_controls(struct vmcb_control_area *control)
>> +static bool nested_svm_check_bitmap_pa(struct kvm_vcpu *vcpu, u64 pa,
>> +				       u8 order)
>> +{
>> +	u64 last_pa = PAGE_ALIGN(pa) + (PAGE_SIZE << order) - 1;
> Ugh, I really wish things that "must" happen were actually enforced by hardware.
>
>    The MSRPM must be aligned on a 4KB boundary... The VMRUN instruction ignores
>    the lower 12 bits of the address specified in the VMCB.
>
> So, ignoring an unaligned @pa is correct, but that means
> nested_svm_exit_handled_msr() and nested_svm_intercept_ioio() are busted.


How about we call PAGE_ALIGN() on the addresses where they are allocated,
i.e., in svm_vcpu_alloc_msrpm() and in svm_hardware_setup()?  That way,
even if we are not checking for alignment here, we are still good.

>
>> +	return last_pa > pa && !(last_pa >> cpuid_maxphyaddr(vcpu));
> Please use kvm_vcpu_is_legal_gpa().
>
>> +}
>> +
>> +static bool nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
>> +				       struct vmcb_control_area *control)
>>   {
>>   	if ((vmcb_is_intercept(control, INTERCEPT_VMRUN)) == 0)
>>   		return false;
>> @@ -243,12 +251,18 @@ static bool nested_vmcb_check_controls(struct vmcb_control_area *control)
>>   	    !npt_enabled)
>>   		return false;
>>   
>> +	if (!nested_svm_check_bitmap_pa(vcpu, control->msrpm_base_pa,
>> +	    MSRPM_ALLOC_ORDER))
>> +		return false;
>> +	if (!nested_svm_check_bitmap_pa(vcpu, control->iopm_base_pa,
>> +	    IOPM_ALLOC_ORDER))
> I strongly dislike using the alloc orders; relying on kernel behavior to
> represent architectural values is sketchy.  Case in point, IOPM_ALLOC_ORDER is a
> 16kb size, whereas the actual size of the IOPM is 12kb.


You're right, the IOPM check is wrong.

>   I also called this out
> in v1...
>
> https://lkml.kernel.org/r/YAd9MBkpDjC1MY6E@google.com


OK, I will define the actual size.

BTW, can we switch to alloc_pages_exact() instead of alloc_pages() for
allocating the IOPM bitmap?  The IOPM stays allocated throughout the
lifetime of the guest, so it won't impact performance much.
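
E.g. a sketch for svm_hardware_setup(), where IOPM_SIZE is an illustrative
12kb #define rather than an existing symbol:

	void *iopm_va = alloc_pages_exact(IOPM_SIZE, GFP_KERNEL);

	if (!iopm_va)
		return -ENOMEM;
	memset(iopm_va, 0xff, IOPM_SIZE);
	iopm_base = __pa(iopm_va);

	/* ... and free_pages_exact(iopm_va, IOPM_SIZE) on teardown. */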

>> +		return false;
>> +
>>   	return true;
>>   }
>>   
>> -static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12)
>> +static bool nested_vmcb_checks(struct kvm_vcpu *vcpu, struct vmcb *vmcb12)
>>   {
>> -	struct kvm_vcpu *vcpu = &svm->vcpu;
>>   	bool vmcb12_lma;
>>   
>>   	if ((vmcb12->save.efer & EFER_SVME) == 0)
>> @@ -268,10 +282,10 @@ static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12)
>>   		    kvm_vcpu_is_illegal_gpa(vcpu, vmcb12->save.cr3))
>>   			return false;
>>   	}
>> -	if (!kvm_is_valid_cr4(&svm->vcpu, vmcb12->save.cr4))
>> +	if (!kvm_is_valid_cr4(vcpu, vmcb12->save.cr4))
>>   		return false;
>>   
>> -	return nested_vmcb_check_controls(&vmcb12->control);
>> +	return nested_vmcb_check_controls(vcpu, &vmcb12->control);
>>   }
>>   
>>   static void load_nested_vmcb_control(struct vcpu_svm *svm,
>> @@ -515,7 +529,7 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
>>   	if (WARN_ON_ONCE(!svm->nested.initialized))
>>   		return -EINVAL;
>>   
>> -	if (!nested_vmcb_checks(svm, vmcb12)) {
>> +	if (!nested_vmcb_checks(&svm->vcpu, vmcb12)) {
> Please use @vcpu directly.


It's all cleaned up in patch# 3.

>    Looks like this needs a rebase, as the prototype for
> nested_svm_vmrun() is wrong relative to kvm/queue.
>
>>   		vmcb12->control.exit_code    = SVM_EXIT_ERR;
>>   		vmcb12->control.exit_code_hi = 0;
>>   		vmcb12->control.exit_info_1  = 0;
>> @@ -1191,7 +1205,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
>>   		goto out_free;
>>   
>>   	ret = -EINVAL;
>> -	if (!nested_vmcb_check_controls(ctl))
>> +	if (!nested_vmcb_check_controls(vcpu, ctl))
>>   		goto out_free;
>>   
>>   	/*
>> -- 
>> 2.27.0
>>

^ permalink raw reply	[flat|nested] 9+ messages in thread
