* [PATCH KVM 0/3] nSVM: Fix PAT in VMCB02 and check PAT in VMCB12
@ 2022-01-18 19:14 Krish Sadhukhan
  2022-01-18 19:14 ` [PATCH KVM 1/3] " Krish Sadhukhan
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Krish Sadhukhan @ 2022-01-18 19:14 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, mlevitsk

According to section "Nested Paging and VMRUN/#VMEXIT" in APM vol 2:

    "When VMRUN is executed with nested paging enabled (NP_ENABLE = 1),
     the paging registers are affected as follows:

        • VMRUN loads the guest paging state from the guest VMCB into the
          guest registers (i.e., VMRUN loads CR3 with the VMCB CR3 field,
          etc.). The guest PAT register is loaded from G_PAT field in the
          VMCB.

    The following guest state is illegal:

        • Any G_PAT.PA field has an unsupported type encoding or any
          reserved field in G_PAT has a nonzero value."


Patch# 1 does the following:
	i) Fixes the PAT value in VMCB02 before launching nested guests as
	   follows:

		If nested paging is enabled in VMCB12, use PAT from VMCB12.
		Otherwise, use PAT from VMCB01.

	ii) When a nested guest writes MSR_IA32_CR_PAT, updates the register
	    only if nested paging is disabled; in that case the PAT value from
	    VMCB12 is used for the update. (Both cases are summarized in the
	    sketch below.)

	iii) Adds checks for the PAT fields in VMCB12.

Patch# 2 adds a helper to check if PAT is supported by the VCPU.
Patch# 3 adds tests for all the PAT fields.
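
For reference, the VMCB02 G_PAT selection that Patch# 1 implements can be
summarized as follows. This is a simplified sketch, not the exact kernel
code; vmcb01_g_pat, vmcb02_g_pat, vmcb12_g_pat and msr_write_value are
illustrative names for the respective values:

    /* On nested VMRUN: */
    if (vmcb12_nested_ctl & SVM_NESTED_CTL_NP_ENABLE)
            vmcb02_g_pat = vmcb12_g_pat;    /* nested paging enabled in VMCB12  */
    else
            vmcb02_g_pat = vmcb01_g_pat;    /* nested paging disabled in VMCB12 */

    /* On a nested guest's write to MSR_IA32_CR_PAT: */
    if (!(vmcb12_nested_ctl & SVM_NESTED_CTL_NP_ENABLE))
            vmcb02_g_pat = msr_write_value;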


[PATCH KVM 1/3] nSVM: Fix PAT value in VMCB02
[PATCH kvm-unit-tests 2/3] SVM: Add a helper function for checking if PAT is supported by the VCPU
[PATCH kvm-unit-tests 3/3] nSVM: Test G_PAT fields

 arch/x86/kvm/svm/nested.c | 34 +++++++++++++++++++++++++++-------
 arch/x86/kvm/svm/svm.c    |  3 ++-
 arch/x86/kvm/svm/svm.h    |  3 ++-
 3 files changed, 31 insertions(+), 9 deletions(-)

Krish Sadhukhan (1):
      nSVM: Fix PAT value in VMCB02

 lib/x86/asm/page.h  | 11 +++++++++
 lib/x86/processor.h |  1 +
 x86/svm.c           | 13 +++++++++++
 x86/svm.h           |  2 ++
 x86/svm_tests.c     | 66 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 93 insertions(+)

Krish Sadhukhan (2):
      SVM: Add a helper function for checking if PAT is supported by the VCPU
      nSVM: Test G_PAT fields



* [PATCH KVM 1/3] nSVM: Fix PAT in VMCB02 and check PAT in VMCB12
  2022-01-18 19:14 [PATCH KVM 0/3] nSVM: Fix PAT in VMCB02 and check PAT in VMCB12 Krish Sadhukhan
@ 2022-01-18 19:14 ` Krish Sadhukhan
  2022-01-18 19:14 ` [PATCH kvm-unit-tests 2/3] SVM: Add a helper function for checking if PAT is supported by the VCPU Krish Sadhukhan
  2022-01-18 19:14 ` [PATCH kvm-unit-tests 3/3] nSVM: Test G_PAT fields Krish Sadhukhan
  2 siblings, 0 replies; 4+ messages in thread
From: Krish Sadhukhan @ 2022-01-18 19:14 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, mlevitsk

Currently, KVM uses the PAT value from VMCB01 both when launching nested
guests and when nested guests write MSR_IA32_CR_PAT. But section "Nested
Paging and VMRUN/#VMEXIT" in APM vol 2 states the following:

    "When VMRUN is executed with nested paging enabled (NP_ENABLE = 1),
     the paging registers are affected as follows:
	• VMRUN loads the guest paging state from the guest VMCB into the
	  guest registers (i.e., VMRUN loads CR3 with the VMCB CR3 field,
	  etc.). The guest PAT register is loaded from G_PAT field in the
	  VMCB."

Therefore, when launching a nested guest, the PAT value from VMCB12 needs
to be used in VMCB02 if nested paging is enabled in VMCB12, whereas the
PAT value from VMCB01 needs to be used in VMCB02 if nested paging is
disabled in VMCB12.

However, when a nested guest writes MSR_IA32_CR_PAT, that register needs
to be updated only if nested paging is disabled in the nested guest, and
the PAT value to be used is the one from VMCB12.

According to the same section in APM vol 2, the following guest state is
illegal:

	• Any G_PAT.PA field has an unsupported type encoding or any
	  reserved field in G_PAT has a nonzero value.

So, add checks for the PAT fields in VMCB12.
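
For reference, the kvm_pat_valid() helper used in the new check rejects
exactly the two illegal cases quoted above; its logic matches the
pat_valid() helper added to kvm-unit-tests in Patch# 3 (sketch shown here
on the assumption that the in-tree helper is unchanged):

    static inline bool kvm_pat_valid(u64 data)
    {
            /* Bits 7:3 of every PA field are reserved and must be zero. */
            if (data & 0xF8F8F8F8F8F8F8F8ull)
                    return false;
            /* 0, 1, 4, 5, 6, 7 are valid type encodings; 2 and 3 are not. */
            return (data | ((data & 0x0202020202020202ull) << 1)) == data;
    }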


Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Suggested-by: Jim Mattson <jmattson@google.com>
---
 arch/x86/kvm/svm/nested.c | 34 +++++++++++++++++++++++++++-------
 arch/x86/kvm/svm/svm.c    |  3 ++-
 arch/x86/kvm/svm/svm.h    |  3 ++-
 3 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index f8b7bc04b3e7..3283a58d5b0f 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -326,6 +326,17 @@ static bool nested_vmcb_valid_sregs(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+noinline bool nested_vmcb_check_save_area(struct kvm_vcpu *vcpu,
+                                       struct vmcb_control_area *control,
+                                       struct vmcb_save_area *save)
+{
+	if (CC((control->nested_ctl  & SVM_NESTED_CTL_NP_ENABLE) &&
+	    !kvm_pat_valid(save->g_pat)))
+		return false;
+
+	return nested_vmcb_valid_sregs(vcpu, save);
+}
+
 void nested_load_control_from_vmcb12(struct vcpu_svm *svm,
 				     struct vmcb_control_area *control)
 {
@@ -452,21 +463,28 @@ static int nested_svm_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
 	return 0;
 }
 
-void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm)
+void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm, u64 g_pat,
+				 u64 nested_ctl, bool from_vmrun)
 {
 	if (!svm->nested.vmcb02.ptr)
 		return;
 
-	/* FIXME: merge g_pat from vmcb01 and vmcb12.  */
-	svm->nested.vmcb02.ptr->save.g_pat = svm->vmcb01.ptr->save.g_pat;
+	if (from_vmrun) {
+		if (nested_ctl & SVM_NESTED_CTL_NP_ENABLE)
+			svm->nested.vmcb02.ptr->save.g_pat = g_pat;
+		else
+			svm->nested.vmcb02.ptr->save.g_pat =
+			    svm->vmcb01.ptr->save.g_pat;
+	} else {
+	    if (!(nested_ctl & SVM_NESTED_CTL_NP_ENABLE))
+		svm->nested.vmcb02.ptr->save.g_pat = g_pat;
+	}
 }
 
 static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12)
 {
 	bool new_vmcb12 = false;
 
-	nested_vmcb02_compute_g_pat(svm);
-
 	/* Load the nested guest state */
 	if (svm->nested.vmcb12_gpa != svm->nested.last_vmcb12_gpa) {
 		new_vmcb12 = true;
@@ -679,8 +697,10 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 		return -EINVAL;
 
 	nested_load_control_from_vmcb12(svm, &vmcb12->control);
+	nested_vmcb02_compute_g_pat(svm, vmcb12->save.g_pat,
+	    vmcb12->control.nested_ctl, true);
 
-	if (!nested_vmcb_valid_sregs(vcpu, &vmcb12->save) ||
+	if (!nested_vmcb_check_save_area(vcpu, &vmcb12->control, &vmcb12->save) ||
 	    !nested_vmcb_check_controls(vcpu, &svm->nested.ctl)) {
 		vmcb12->control.exit_code    = SVM_EXIT_ERR;
 		vmcb12->control.exit_code_hi = 0;
@@ -1386,7 +1406,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 	if (!(save->cr0 & X86_CR0_PG) ||
 	    !(save->cr0 & X86_CR0_PE) ||
 	    (save->rflags & X86_EFLAGS_VM) ||
-	    !nested_vmcb_valid_sregs(vcpu, save))
+	    !nested_vmcb_check_save_area(vcpu, ctl, save))
 		goto out_free;
 
 	/*
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 5151efa424ac..e08e55082e77 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2907,7 +2907,8 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 		vcpu->arch.pat = data;
 		svm->vmcb01.ptr->save.g_pat = data;
 		if (is_guest_mode(vcpu))
-			nested_vmcb02_compute_g_pat(svm);
+			nested_vmcb02_compute_g_pat(svm, data,
+			    svm->vmcb->control.nested_ctl, false);
 		vmcb_mark_dirty(svm->vmcb, VMCB_NPT);
 		break;
 	case MSR_IA32_SPEC_CTRL:
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 1c7306c370fa..872d6c72d937 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -497,7 +497,8 @@ void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier);
 void nested_load_control_from_vmcb12(struct vcpu_svm *svm,
 				     struct vmcb_control_area *control);
 void nested_sync_control_from_vmcb02(struct vcpu_svm *svm);
-void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm);
+void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm, u64 g_pat,
+				 u64 nested_ctl, bool from_vmrun);
 void svm_switch_vmcb(struct vcpu_svm *svm, struct kvm_vmcb_info *target_vmcb);
 
 extern struct kvm_x86_nested_ops svm_nested_ops;
-- 
2.27.0



* [PATCH kvm-unit-tests 2/3] SVM: Add a helper function for checking if PAT is supported by the VCPU
  2022-01-18 19:14 [PATCH KVM 0/3] nSVM: Fix PAT in VMCB02 and check PAT in VMCB12 Krish Sadhukhan
  2022-01-18 19:14 ` [PATCH KVM 1/3] " Krish Sadhukhan
@ 2022-01-18 19:14 ` Krish Sadhukhan
  2022-01-18 19:14 ` [PATCH kvm-unit-tests 3/3] nSVM: Test G_PAT fields Krish Sadhukhan
  2 siblings, 0 replies; 4+ messages in thread
From: Krish Sadhukhan @ 2022-01-18 19:14 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, mlevitsk
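
Add a pat_supported() helper, along with an X86_FEATURE_PAT CPUID
definition (Fn8000_0001, EDX bit 16), so that tests can be skipped
gracefully when PAT is not supported by the VCPU.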

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 lib/x86/processor.h | 1 +
 x86/svm.c           | 5 +++++
 x86/svm.h           | 1 +
 3 files changed, 7 insertions(+)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index fe5add5..ad892b7 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -180,6 +180,7 @@ static inline bool is_intel(void)
  * Extended Leafs, a.k.a. AMD defined
  */
 #define	X86_FEATURE_SVM			(CPUID(0x80000001, 0, ECX, 2))
+#define	X86_FEATURE_PAT			(CPUID(0x80000001, 0, EDX, 16))
 #define	X86_FEATURE_NX			(CPUID(0x80000001, 0, EDX, 20))
 #define	X86_FEATURE_GBPAGES		(CPUID(0x80000001, 0, EDX, 26))
 #define	X86_FEATURE_RDTSCP		(CPUID(0x80000001, 0, EDX, 27))
diff --git a/x86/svm.c b/x86/svm.c
index 3f94b2a..d03f011 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -89,6 +89,11 @@ bool npt_supported(void)
 	return this_cpu_has(X86_FEATURE_NPT);
 }
 
+bool pat_supported(void)
+{
+	return this_cpu_has(X86_FEATURE_PAT);
+}
+
 int get_test_stage(struct svm_test *test)
 {
 	barrier();
diff --git a/x86/svm.h b/x86/svm.h
index f74b13a..d4db4c1 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -409,6 +409,7 @@ void default_prepare(struct svm_test *test);
 void default_prepare_gif_clear(struct svm_test *test);
 bool default_finished(struct svm_test *test);
 bool npt_supported(void);
+bool pat_supported(void);
 int get_test_stage(struct svm_test *test);
 void set_test_stage(struct svm_test *test, int s);
 void inc_test_stage(struct svm_test *test);
-- 
2.27.0



* [PATCH kvm-unit-tests 3/3] nSVM: Test G_PAT fields
  2022-01-18 19:14 [PATCH KVM 0/3] nSVM: Fix PAT in VMCB02 and check PAT in VMCB12 Krish Sadhukhan
  2022-01-18 19:14 ` [PATCH KVM 1/3] " Krish Sadhukhan
  2022-01-18 19:14 ` [PATCH kvm-unit-tests 2/3] SVM: Add a helper function for checking if PAT is supported by the VCPU Krish Sadhukhan
@ 2022-01-18 19:14 ` Krish Sadhukhan
  2 siblings, 0 replies; 4+ messages in thread
From: Krish Sadhukhan @ 2022-01-18 19:14 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, mlevitsk

According to section "Nested Paging and VMRUN/#VMEXIT" in APM vol 2, the
following guest state is illegal:

    "Any G_PAT.PA field has an unsupported type encoding or any
     reserved field in G_PAT has a nonzero value."
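
A few illustrative checks (not part of this patch; pat_valid() is the
helper added below, and the first value is the architectural reset value
of the PAT MSR):

    assert(pat_valid(0x0007040600070406ull));   /* WB, WT, UC-, UC: all legal encodings */
    assert(!pat_valid(0x0000000000000002ull));  /* PA0 = 2 is a reserved type encoding  */
    assert(!pat_valid(0x0000000000000008ull));  /* bit 3 of PA0 is a reserved bit       */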

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 lib/x86/asm/page.h | 11 ++++++++
 x86/svm.c          |  8 ++++++
 x86/svm.h          |  1 +
 x86/svm_tests.c    | 66 ++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 86 insertions(+)

diff --git a/lib/x86/asm/page.h b/lib/x86/asm/page.h
index fc14160..9ff9329 100644
--- a/lib/x86/asm/page.h
+++ b/lib/x86/asm/page.h
@@ -57,5 +57,16 @@ typedef unsigned long pgd_t;
 #define PGDIR_BITS(lvl)        (((lvl) - 1) * PGDIR_WIDTH + PAGE_SHIFT)
 #define PGDIR_OFFSET(va, lvl)  (((va) >> PGDIR_BITS(lvl)) & PGDIR_MASK)
 
+#ifdef __x86_64__
+enum {
+	PAT_UC = 0,             /* uncached */
+	PAT_WC = 1,             /* Write combining */
+	PAT_WT = 4,             /* Write Through */
+	PAT_WP = 5,             /* Write Protected */
+	PAT_WB = 6,             /* Write Back (default) */
+	PAT_UC_MINUS = 7,       /* UC, but can be overridden by MTRR */
+};
+#endif
+
 #endif /* !__ASSEMBLY__ */
 #endif
diff --git a/x86/svm.c b/x86/svm.c
index d03f011..c949003 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -94,6 +94,14 @@ bool pat_supported(void)
 	return this_cpu_has(X86_FEATURE_PAT);
 }
 
+bool pat_valid(u64 data)
+{
+	if (data & 0xF8F8F8F8F8F8F8F8ull)
+		return false;
+	/* 0, 1, 4, 5, 6, 7 are valid values.  */
+	return (data | ((data & 0x0202020202020202ull) << 1)) == data;
+}
+
 int get_test_stage(struct svm_test *test)
 {
 	barrier();
diff --git a/x86/svm.h b/x86/svm.h
index d4db4c1..d4c6e1c 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -410,6 +410,7 @@ void default_prepare_gif_clear(struct svm_test *test);
 bool default_finished(struct svm_test *test);
 bool npt_supported(void);
 bool pat_supported(void);
+bool pat_valid(u64 data);
 int get_test_stage(struct svm_test *test);
 void set_test_stage(struct svm_test *test, int s);
 void inc_test_stage(struct svm_test *test);
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 8ad6122..4536362 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -2547,6 +2547,71 @@ static void guest_rflags_test_db_handler(struct ex_regs *r)
 	r->rflags &= ~X86_EFLAGS_TF;
 }
 
+#define G_PAT_VMRUN(nested_ctl, val, i, field_val)		\
+{								\
+	u32 ret, xret;						\
+								\
+	if (nested_ctl) {					\
+		if (pat_valid(val))				\
+			xret = SVM_EXIT_VMMCALL;		\
+		else						\
+			xret = SVM_EXIT_ERR;			\
+	} else {						\
+		xret = SVM_EXIT_VMMCALL;			\
+	}							\
+	vmcb->save.g_pat = val;					\
+	ret = svm_vmrun();					\
+	report (ret == xret, "Test G_PAT[%d]: %lx, wanted "	\
+           "exit 0x%x, got 0x%x", i, field_val, xret, ret);	\
+}
+
+#define TEST_G_PAT(g_pat_saved, nested_ctl)			\
+{								\
+	int i, field_shift;					\
+	u64 g_pat_mask, field_val, val, j;			\
+								\
+	for (i = 0; i < 8; i++) {				\
+		/*						\
+		 * Test each PAT field's encodings and		\
+		 * reserved values				\
+		 */						\
+		field_shift = i * 8;				\
+		g_pat_mask = ~(0x7ul << field_shift) &		\
+				g_pat_saved;			\
+		for (j = PAT_UC; j <= PAT_UC_MINUS; j++) {	\
+			val = g_pat_mask | j << field_shift;	\
+			G_PAT_VMRUN(nested_ctl, val, i, j);	\
+		}						\
+		field_shift = i * 8 + 3;			\
+		g_pat_mask = ~(0x1ful << field_shift) &		\
+				g_pat_saved;			\
+		for (j = 0; j < 5; j++) {			\
+			field_val = 1ul << j;			\
+			val = g_pat_mask |			\
+			      field_val << field_shift;		\
+			G_PAT_VMRUN(nested_ctl, val, i,		\
+				    field_val);			\
+		}						\
+	}							\
+}
+
+static void test_g_pat(void)
+{
+	u64 g_pat_saved = vmcb->save.g_pat;
+	u64 nested_ctl_saved = vmcb->control.nested_ctl;
+
+	if (!npt_supported() || !pat_supported()) {
+		report_skip("NPT or PAT or both not supported");
+		return;
+	}
+
+	TEST_G_PAT(g_pat_saved, (vmcb->control.nested_ctl = 0));
+	TEST_G_PAT(g_pat_saved, (vmcb->control.nested_ctl = 1));
+
+	vmcb->control.nested_ctl = nested_ctl_saved;
+	vmcb->save.g_pat = g_pat_saved;
+}
+
 static void svm_guest_state_test(void)
 {
 	test_set_guest(basic_guest_main);
@@ -2557,6 +2622,7 @@ static void svm_guest_state_test(void)
 	test_dr();
 	test_msrpm_iopm_bitmap_addrs();
 	test_canonicalization();
+	test_g_pat();
 }
 
 extern void guest_rflags_test_guest(struct svm_test *test);
-- 
2.27.0



