* [kvm-unit-tests RESEND PATCH v3 0/8] Move npt test cases and NPT code improvements
@ 2022-04-25 11:44 Manali Shukla
From: Manali Shukla @ 2022-04-25 11:44 UTC (permalink / raw)
  To: pbonzini, seanjc; +Cc: kvm

If __setup_vm() is changed to setup_vm(), KUT will build tests with
PT_USER_MASK set on all PTEs. It is therefore better to move the nNPT tests
to their own file so that the other tests don't need to fiddle with page
tables midway through.

The quick approach is to turn the current main() into a small helper,
run_svm_tests(), without calling __setup_vm() from the helper.
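
After the split, the entry point reduces to the following (taken from
patch 1/8 below):

	int main(int ac, char **av)
	{
		pteval_t opt_mask = 0;

		__setup_vm(&opt_mask);
		return run_svm_tests(ac, av);
	}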

The setup_mmu_range() function in vm.c is modified to allocate new user
pages when building a nested page table.

The current implementation builds the nested page table statically, with
2048 PTEs and one PML4 entry. With the newly implemented routine, the
nested page table is built dynamically, based on the RAM size of the VM,
which enables us to have separate memory ranges to test various NPT
test cases.

Based on this implementation, only minimal changes were needed in the
existing APIs npt_get_pte(), npt_get_pde() and npt_get_pdpe().
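
For example, after this series the accessors walk the dynamically built
table through the generic page-table walker instead of indexing static
arrays; npt_get_pdpe() now takes an address (as introduced in patch 5/8,
shown here with the indentation fix from patch 6/8):

	u64 *npt_get_pdpe(u64 address)
	{
		struct pte_search search;

		search = find_pte_level(npt_get_pml4e(), (void *)address, 3);
		return search.pte;
	}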

v1 -> v2
Added a new patch that builds the nested page table dynamically, with
the minimal changes needed to keep the old test cases working.

v2 -> v3
Added a new patch that changes setup_mmu_range() so it can be used to
build the nested page table.
Added new patches to correct indentation errors in svm.c, svm_npt.c and
svm_tests.c, using scripts/Lindent from the Linux source tree.

The last patch of this series bounced because it exceeded the maximum
patch size, so the whole series is resent with that patch split into
multiple patches.

Manali Shukla (8):
  x86: nSVM: Move common functionality of the main() to helper
    run_svm_tests
  x86: nSVM: Move all nNPT test cases from svm_tests.c to a separate
    file.
  x86: nSVM: Allow nSVM tests run with PT_USER_MASK enabled
  x86: Improve set_mmu_range() to implement npt
  x86: nSVM: Build up the nested page table dynamically
  x86: nSVM: Correct indentation for svm.c
  x86: nSVM: Correct indentation for svm_tests.c part-1
  x86: nSVM: Correct indentation for svm_tests.c part-2

 lib/x86/vm.c        |   37 +-
 lib/x86/vm.h        |    3 +
 x86/Makefile.common |    2 +
 x86/Makefile.x86_64 |    2 +
 x86/svm.c           |  272 ++--
 x86/svm.h           |    5 +-
 x86/svm_npt.c       |  391 +++++
 x86/svm_tests.c     | 3535 ++++++++++++++++++++-----------------------
 x86/unittests.cfg   |    6 +
 9 files changed, 2157 insertions(+), 2096 deletions(-)
 create mode 100644 x86/svm_npt.c

-- 
2.30.2



* [kvm-unit-tests RESEND PATCH v3 1/8] x86: nSVM: Move common functionality of the main() to helper run_svm_tests
@ 2022-04-25 11:44 ` Manali Shukla
From: Manali Shukla @ 2022-04-25 11:44 UTC (permalink / raw)
  To: pbonzini, seanjc; +Cc: kvm

nSVM tests are "incompatible" with usermode due to the __setup_vm()
call in the main function.

If __setup_vm() is replaced with setup_vm() in the main function, KUT
will build the test with PT_USER_MASK set on all PTEs.

nNPT tests will be moved to their own file so that the tests
don't need to fiddle with page tables midway through.

The quick and dirty approach is to turn the current main() into a small
helper, minus its call to __setup_vm(), and to call the helper function
run_svm_tests() from main().

No functional change intended.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Manali Shukla <manali.shukla@amd.com>
---
 x86/svm.c | 14 +++++++++-----
 x86/svm.h |  1 +
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/x86/svm.c b/x86/svm.c
index f6896f0..299383c 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -397,17 +397,13 @@ test_wanted(const char *name, char *filters[], int filter_count)
         }
 }
 
-int main(int ac, char **av)
+int run_svm_tests(int ac, char **av)
 {
-	/* Omit PT_USER_MASK to allow tested host.CR4.SMEP=1. */
-	pteval_t opt_mask = 0;
 	int i = 0;
 
 	ac--;
 	av++;
 
-	__setup_vm(&opt_mask);
-
 	if (!this_cpu_has(X86_FEATURE_SVM)) {
 		printf("SVM not availble\n");
 		return report_summary();
@@ -444,3 +440,11 @@ int main(int ac, char **av)
 
 	return report_summary();
 }
+
+int main(int ac, char **av)
+{
+    pteval_t opt_mask = 0;
+
+    __setup_vm(&opt_mask);
+    return run_svm_tests(ac, av);
+}
diff --git a/x86/svm.h b/x86/svm.h
index e93822b..123e64f 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -403,6 +403,7 @@ struct regs {
 
 typedef void (*test_guest_func)(struct svm_test *);
 
+int run_svm_tests(int ac, char **av);
 u64 *npt_get_pte(u64 address);
 u64 *npt_get_pde(u64 address);
 u64 *npt_get_pdpe(void);
-- 
2.30.2



* [kvm-unit-tests RESEND PATCH v3 2/8] x86: nSVM: Move all nNPT test cases from svm_tests.c to a separate file.
@ 2022-04-25 11:44 ` Manali Shukla
From: Manali Shukla @ 2022-04-25 11:44 UTC (permalink / raw)
  To: pbonzini, seanjc; +Cc: kvm

The nNPT test cases are moved to a separate file, svm_npt.c, so that
they can be run independently with PT_USER_MASK disabled.

The rest of the test cases can then run with PT_USER_MASK enabled.

No functional change intended.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Manali Shukla <manali.shukla@amd.com>
---
 x86/Makefile.common |   2 +
 x86/Makefile.x86_64 |   2 +
 x86/svm.c           |   8 -
 x86/svm_npt.c       | 390 ++++++++++++++++++++++++++++++++++++++++++++
 x86/svm_tests.c     | 371 +----------------------------------------
 x86/unittests.cfg   |   6 +
 6 files changed, 409 insertions(+), 370 deletions(-)
 create mode 100644 x86/svm_npt.c

diff --git a/x86/Makefile.common b/x86/Makefile.common
index b903988..5590afe 100644
--- a/x86/Makefile.common
+++ b/x86/Makefile.common
@@ -107,6 +107,8 @@ $(TEST_DIR)/access_test.$(bin): $(TEST_DIR)/access.o
 
 $(TEST_DIR)/vmx.$(bin): $(TEST_DIR)/access.o
 
+$(TEST_DIR)/svm_npt.$(bin): $(TEST_DIR)/svm.o
+
 $(TEST_DIR)/kvmclock_test.$(bin): $(TEST_DIR)/kvmclock.o
 
 $(TEST_DIR)/hyperv_synic.$(bin): $(TEST_DIR)/hyperv.o
diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64
index f18c1e2..dbe5967 100644
--- a/x86/Makefile.x86_64
+++ b/x86/Makefile.x86_64
@@ -42,6 +42,7 @@ endif
 ifneq ($(CONFIG_EFI),y)
 tests += $(TEST_DIR)/access_test.$(exe)
 tests += $(TEST_DIR)/svm.$(exe)
+tests += $(TEST_DIR)/svm_npt.$(exe)
 tests += $(TEST_DIR)/vmx.$(exe)
 endif
 
@@ -55,3 +56,4 @@ $(TEST_DIR)/hyperv_clock.$(bin): $(TEST_DIR)/hyperv_clock.o
 
 $(TEST_DIR)/vmx.$(bin): $(TEST_DIR)/vmx_tests.o
 $(TEST_DIR)/svm.$(bin): $(TEST_DIR)/svm_tests.o
+$(TEST_DIR)/svm_npt.$(bin): $(TEST_DIR)/svm_npt.o
diff --git a/x86/svm.c b/x86/svm.c
index 299383c..ec825c7 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -440,11 +440,3 @@ int run_svm_tests(int ac, char **av)
 
 	return report_summary();
 }
-
-int main(int ac, char **av)
-{
-    pteval_t opt_mask = 0;
-
-    __setup_vm(&opt_mask);
-    return run_svm_tests(ac, av);
-}
diff --git a/x86/svm_npt.c b/x86/svm_npt.c
new file mode 100644
index 0000000..53e8a90
--- /dev/null
+++ b/x86/svm_npt.c
@@ -0,0 +1,390 @@
+#include "svm.h"
+#include "vm.h"
+#include "alloc_page.h"
+#include "vmalloc.h"
+
+static void *scratch_page;
+
+static void null_test(struct svm_test *test)
+{
+}
+
+static void npt_np_prepare(struct svm_test *test)
+{
+	u64 *pte;
+
+	scratch_page = alloc_page();
+	pte = npt_get_pte((u64) scratch_page);
+
+	*pte &= ~1ULL;
+}
+
+static void npt_np_test(struct svm_test *test)
+{
+	(void)*(volatile u64 *)scratch_page;
+}
+
+static bool npt_np_check(struct svm_test *test)
+{
+	u64 *pte = npt_get_pte((u64) scratch_page);
+
+	*pte |= 1ULL;
+
+	return (vmcb->control.exit_code == SVM_EXIT_NPF)
+	    && (vmcb->control.exit_info_1 == 0x100000004ULL);
+}
+
+static void npt_nx_prepare(struct svm_test *test)
+{
+	u64 *pte;
+
+	test->scratch = rdmsr(MSR_EFER);
+	wrmsr(MSR_EFER, test->scratch | EFER_NX);
+
+	/* Clear the guest's EFER.NX, it should not affect NPT behavior. */
+	vmcb->save.efer &= ~EFER_NX;
+
+	pte = npt_get_pte((u64) null_test);
+
+	*pte |= PT64_NX_MASK;
+}
+
+static bool npt_nx_check(struct svm_test *test)
+{
+	u64 *pte = npt_get_pte((u64) null_test);
+
+	wrmsr(MSR_EFER, test->scratch);
+
+	*pte &= ~PT64_NX_MASK;
+
+	return (vmcb->control.exit_code == SVM_EXIT_NPF)
+	    && (vmcb->control.exit_info_1 == 0x100000015ULL);
+}
+
+static void npt_us_prepare(struct svm_test *test)
+{
+	u64 *pte;
+
+	scratch_page = alloc_page();
+	pte = npt_get_pte((u64) scratch_page);
+
+	*pte &= ~(1ULL << 2);
+}
+
+static void npt_us_test(struct svm_test *test)
+{
+	(void)*(volatile u64 *)scratch_page;
+}
+
+static bool npt_us_check(struct svm_test *test)
+{
+	u64 *pte = npt_get_pte((u64) scratch_page);
+
+	*pte |= (1ULL << 2);
+
+	return (vmcb->control.exit_code == SVM_EXIT_NPF)
+	    && (vmcb->control.exit_info_1 == 0x100000005ULL);
+}
+
+static void npt_rw_prepare(struct svm_test *test)
+{
+
+	u64 *pte;
+
+	pte = npt_get_pte(0x80000);
+
+	*pte &= ~(1ULL << 1);
+}
+
+static void npt_rw_test(struct svm_test *test)
+{
+	u64 *data = (void *)(0x80000);
+
+	*data = 0;
+}
+
+static bool npt_rw_check(struct svm_test *test)
+{
+	u64 *pte = npt_get_pte(0x80000);
+
+	*pte |= (1ULL << 1);
+
+	return (vmcb->control.exit_code == SVM_EXIT_NPF)
+	    && (vmcb->control.exit_info_1 == 0x100000007ULL);
+}
+
+static void npt_rw_pfwalk_prepare(struct svm_test *test)
+{
+
+	u64 *pte;
+
+	pte = npt_get_pte(read_cr3());
+
+	*pte &= ~(1ULL << 1);
+}
+
+static bool npt_rw_pfwalk_check(struct svm_test *test)
+{
+	u64 *pte = npt_get_pte(read_cr3());
+
+	*pte |= (1ULL << 1);
+
+	return (vmcb->control.exit_code == SVM_EXIT_NPF)
+	    && (vmcb->control.exit_info_1 == 0x200000007ULL)
+	    && (vmcb->control.exit_info_2 == read_cr3());
+}
+
+static void npt_l1mmio_prepare(struct svm_test *test)
+{
+}
+
+u32 nested_apic_version1;
+u32 nested_apic_version2;
+
+static void npt_l1mmio_test(struct svm_test *test)
+{
+	volatile u32 *data = (volatile void *)(0xfee00030UL);
+
+	nested_apic_version1 = *data;
+	nested_apic_version2 = *data;
+}
+
+static bool npt_l1mmio_check(struct svm_test *test)
+{
+	volatile u32 *data = (volatile void *)(0xfee00030);
+	u32 lvr = *data;
+
+	return nested_apic_version1 == lvr && nested_apic_version2 == lvr;
+}
+
+static void npt_rw_l1mmio_prepare(struct svm_test *test)
+{
+
+	u64 *pte;
+
+	pte = npt_get_pte(0xfee00080);
+
+	*pte &= ~(1ULL << 1);
+}
+
+static void npt_rw_l1mmio_test(struct svm_test *test)
+{
+	volatile u32 *data = (volatile void *)(0xfee00080);
+
+	*data = *data;
+}
+
+static bool npt_rw_l1mmio_check(struct svm_test *test)
+{
+	u64 *pte = npt_get_pte(0xfee00080);
+
+	*pte |= (1ULL << 1);
+
+	return (vmcb->control.exit_code == SVM_EXIT_NPF)
+	    && (vmcb->control.exit_info_1 == 0x100000007ULL);
+}
+
+static void basic_guest_main(struct svm_test *test)
+{
+}
+
+static void __svm_npt_rsvd_bits_test(u64 * pxe, u64 rsvd_bits, u64 efer,
+				     ulong cr4, u64 guest_efer, ulong guest_cr4)
+{
+	u64 pxe_orig = *pxe;
+	int exit_reason;
+	u64 pfec;
+
+	wrmsr(MSR_EFER, efer);
+	write_cr4(cr4);
+
+	vmcb->save.efer = guest_efer;
+	vmcb->save.cr4 = guest_cr4;
+
+	*pxe |= rsvd_bits;
+
+	exit_reason = svm_vmrun();
+
+	report(exit_reason == SVM_EXIT_NPF,
+	       "Wanted #NPF on rsvd bits = 0x%lx, got exit = 0x%x", rsvd_bits,
+	       exit_reason);
+
+	if (pxe == npt_get_pdpe() || pxe == npt_get_pml4e()) {
+		/*
+		 * The guest's page tables will blow up on a bad PDPE/PML4E,
+		 * before starting the final walk of the guest page.
+		 */
+		pfec = 0x20000000full;
+	} else {
+		/* RSVD #NPF on final walk of guest page. */
+		pfec = 0x10000000dULL;
+
+		/* PFEC.FETCH=1 if NX=1 *or* SMEP=1. */
+		if ((cr4 & X86_CR4_SMEP) || (efer & EFER_NX))
+			pfec |= 0x10;
+
+	}
+
+	report(vmcb->control.exit_info_1 == pfec,
+	       "Wanted PFEC = 0x%lx, got PFEC = %lx, PxE = 0x%lx.  "
+	       "host.NX = %u, host.SMEP = %u, guest.NX = %u, guest.SMEP = %u",
+	       pfec, vmcb->control.exit_info_1, *pxe,
+	       !!(efer & EFER_NX), !!(cr4 & X86_CR4_SMEP),
+	       !!(guest_efer & EFER_NX), !!(guest_cr4 & X86_CR4_SMEP));
+
+	*pxe = pxe_orig;
+}
+
+static void _svm_npt_rsvd_bits_test(u64 * pxe, u64 pxe_rsvd_bits, u64 efer,
+				    ulong cr4, u64 guest_efer, ulong guest_cr4)
+{
+	u64 rsvd_bits;
+	int i;
+
+	/*
+	 * RDTSC or RDRAND can sometimes fail to generate a valid reserved bits
+	 */
+	if (!pxe_rsvd_bits) {
+		report_skip
+		    ("svm_npt_rsvd_bits_test: Reserved bits are not valid");
+		return;
+	}
+
+	/*
+	 * Test all combinations of guest/host EFER.NX and CR4.SMEP.  If host
+	 * EFER.NX=0, use NX as the reserved bit, otherwise use the passed in
+	 * @pxe_rsvd_bits.
+	 */
+	for (i = 0; i < 16; i++) {
+		if (i & 1) {
+			rsvd_bits = pxe_rsvd_bits;
+			efer |= EFER_NX;
+		} else {
+			rsvd_bits = PT64_NX_MASK;
+			efer &= ~EFER_NX;
+		}
+		if (i & 2)
+			cr4 |= X86_CR4_SMEP;
+		else
+			cr4 &= ~X86_CR4_SMEP;
+		if (i & 4)
+			guest_efer |= EFER_NX;
+		else
+			guest_efer &= ~EFER_NX;
+		if (i & 8)
+			guest_cr4 |= X86_CR4_SMEP;
+		else
+			guest_cr4 &= ~X86_CR4_SMEP;
+
+		__svm_npt_rsvd_bits_test(pxe, rsvd_bits, efer, cr4,
+					 guest_efer, guest_cr4);
+	}
+}
+
+static u64 get_random_bits(u64 hi, u64 low)
+{
+	unsigned retry = 5;
+	u64 rsvd_bits = 0;
+
+	if (this_cpu_has(X86_FEATURE_RDRAND)) {
+		do {
+			rsvd_bits = (rdrand() << low) & GENMASK_ULL(hi, low);
+			retry--;
+		} while (!rsvd_bits && retry);
+	}
+
+	if (!rsvd_bits) {
+		retry = 5;
+		do {
+			rsvd_bits = (rdtsc() << low) & GENMASK_ULL(hi, low);
+			retry--;
+		} while (!rsvd_bits && retry);
+	}
+
+	return rsvd_bits;
+}
+
+static void svm_npt_rsvd_bits_test(void)
+{
+	u64 saved_efer, host_efer, sg_efer, guest_efer;
+	ulong saved_cr4, host_cr4, sg_cr4, guest_cr4;
+
+	if (!npt_supported()) {
+		report_skip("NPT not supported");
+		return;
+	}
+
+	saved_efer = host_efer = rdmsr(MSR_EFER);
+	saved_cr4 = host_cr4 = read_cr4();
+	sg_efer = guest_efer = vmcb->save.efer;
+	sg_cr4 = guest_cr4 = vmcb->save.cr4;
+
+	test_set_guest(basic_guest_main);
+
+	/*
+	 * 4k PTEs don't have reserved bits if MAXPHYADDR >= 52, just skip the
+	 * sub-test.  The NX test is still valid, but the extra bit of coverage
+	 * isn't worth the extra complexity.
+	 */
+	if (cpuid_maxphyaddr() >= 52)
+		goto skip_pte_test;
+
+	_svm_npt_rsvd_bits_test(npt_get_pte((u64) basic_guest_main),
+				get_random_bits(51, cpuid_maxphyaddr()),
+				host_efer, host_cr4, guest_efer, guest_cr4);
+
+skip_pte_test:
+	_svm_npt_rsvd_bits_test(npt_get_pde((u64) basic_guest_main),
+				get_random_bits(20, 13) | PT_PAGE_SIZE_MASK,
+				host_efer, host_cr4, guest_efer, guest_cr4);
+
+	_svm_npt_rsvd_bits_test(npt_get_pdpe(),
+				PT_PAGE_SIZE_MASK |
+				(this_cpu_has(X86_FEATURE_GBPAGES) ?
+				 get_random_bits(29, 13) : 0), host_efer,
+				host_cr4, guest_efer, guest_cr4);
+
+	_svm_npt_rsvd_bits_test(npt_get_pml4e(), BIT_ULL(8),
+				host_efer, host_cr4, guest_efer, guest_cr4);
+
+	wrmsr(MSR_EFER, saved_efer);
+	write_cr4(saved_cr4);
+	vmcb->save.efer = sg_efer;
+	vmcb->save.cr4 = sg_cr4;
+}
+
+int main(int ac, char **av)
+{
+	pteval_t opt_mask = 0;
+
+	__setup_vm(&opt_mask);
+	return run_svm_tests(ac, av);
+}
+
+#define TEST(name) { #name, .v2 = name }
+
+struct svm_test svm_tests[] = {
+	{ "npt_nx", npt_supported, npt_nx_prepare,
+	 default_prepare_gif_clear, null_test,
+	 default_finished, npt_nx_check },
+	{ "npt_np", npt_supported, npt_np_prepare,
+	 default_prepare_gif_clear, npt_np_test,
+	 default_finished, npt_np_check },
+	{ "npt_us", npt_supported, npt_us_prepare,
+	 default_prepare_gif_clear, npt_us_test,
+	 default_finished, npt_us_check },
+	{ "npt_rw", npt_supported, npt_rw_prepare,
+	 default_prepare_gif_clear, npt_rw_test,
+	 default_finished, npt_rw_check },
+	{ "npt_rw_pfwalk", npt_supported, npt_rw_pfwalk_prepare,
+	 default_prepare_gif_clear, null_test,
+	 default_finished, npt_rw_pfwalk_check },
+	{ "npt_l1mmio", npt_supported, npt_l1mmio_prepare,
+	 default_prepare_gif_clear, npt_l1mmio_test,
+	 default_finished, npt_l1mmio_check },
+	{ "npt_rw_l1mmio", npt_supported, npt_rw_l1mmio_prepare,
+	 default_prepare_gif_clear, npt_rw_l1mmio_test,
+	 default_finished, npt_rw_l1mmio_check },
+	TEST(svm_npt_rsvd_bits_test),
+	{ NULL, NULL, NULL, NULL, NULL, NULL, NULL }
+};
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 6a9b03b..f0eeb1d 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -10,11 +10,10 @@
 #include "isr.h"
 #include "apic.h"
 #include "delay.h"
+#include "vmalloc.h"
 
 #define SVM_EXIT_MAX_DR_INTERCEPT 0x3f
 
-static void *scratch_page;
-
 #define LATENCY_RUNS 1000000
 
 extern u16 cpu_online_count;
@@ -698,181 +697,6 @@ static bool sel_cr0_bug_check(struct svm_test *test)
     return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
 }
 
-static void npt_nx_prepare(struct svm_test *test)
-{
-    u64 *pte;
-
-    test->scratch = rdmsr(MSR_EFER);
-    wrmsr(MSR_EFER, test->scratch | EFER_NX);
-
-    /* Clear the guest's EFER.NX, it should not affect NPT behavior. */
-    vmcb->save.efer &= ~EFER_NX;
-
-    pte = npt_get_pte((u64)null_test);
-
-    *pte |= PT64_NX_MASK;
-}
-
-static bool npt_nx_check(struct svm_test *test)
-{
-    u64 *pte = npt_get_pte((u64)null_test);
-
-    wrmsr(MSR_EFER, test->scratch);
-
-    *pte &= ~PT64_NX_MASK;
-
-    return (vmcb->control.exit_code == SVM_EXIT_NPF)
-           && (vmcb->control.exit_info_1 == 0x100000015ULL);
-}
-
-static void npt_np_prepare(struct svm_test *test)
-{
-    u64 *pte;
-
-    scratch_page = alloc_page();
-    pte = npt_get_pte((u64)scratch_page);
-
-    *pte &= ~1ULL;
-}
-
-static void npt_np_test(struct svm_test *test)
-{
-    (void) *(volatile u64 *)scratch_page;
-}
-
-static bool npt_np_check(struct svm_test *test)
-{
-    u64 *pte = npt_get_pte((u64)scratch_page);
-
-    *pte |= 1ULL;
-
-    return (vmcb->control.exit_code == SVM_EXIT_NPF)
-           && (vmcb->control.exit_info_1 == 0x100000004ULL);
-}
-
-static void npt_us_prepare(struct svm_test *test)
-{
-    u64 *pte;
-
-    scratch_page = alloc_page();
-    pte = npt_get_pte((u64)scratch_page);
-
-    *pte &= ~(1ULL << 2);
-}
-
-static void npt_us_test(struct svm_test *test)
-{
-    (void) *(volatile u64 *)scratch_page;
-}
-
-static bool npt_us_check(struct svm_test *test)
-{
-    u64 *pte = npt_get_pte((u64)scratch_page);
-
-    *pte |= (1ULL << 2);
-
-    return (vmcb->control.exit_code == SVM_EXIT_NPF)
-           && (vmcb->control.exit_info_1 == 0x100000005ULL);
-}
-
-static void npt_rw_prepare(struct svm_test *test)
-{
-
-    u64 *pte;
-
-    pte = npt_get_pte(0x80000);
-
-    *pte &= ~(1ULL << 1);
-}
-
-static void npt_rw_test(struct svm_test *test)
-{
-    u64 *data = (void*)(0x80000);
-
-    *data = 0;
-}
-
-static bool npt_rw_check(struct svm_test *test)
-{
-    u64 *pte = npt_get_pte(0x80000);
-
-    *pte |= (1ULL << 1);
-
-    return (vmcb->control.exit_code == SVM_EXIT_NPF)
-           && (vmcb->control.exit_info_1 == 0x100000007ULL);
-}
-
-static void npt_rw_pfwalk_prepare(struct svm_test *test)
-{
-
-    u64 *pte;
-
-    pte = npt_get_pte(read_cr3());
-
-    *pte &= ~(1ULL << 1);
-}
-
-static bool npt_rw_pfwalk_check(struct svm_test *test)
-{
-    u64 *pte = npt_get_pte(read_cr3());
-
-    *pte |= (1ULL << 1);
-
-    return (vmcb->control.exit_code == SVM_EXIT_NPF)
-           && (vmcb->control.exit_info_1 == 0x200000007ULL)
-	   && (vmcb->control.exit_info_2 == read_cr3());
-}
-
-static void npt_l1mmio_prepare(struct svm_test *test)
-{
-}
-
-u32 nested_apic_version1;
-u32 nested_apic_version2;
-
-static void npt_l1mmio_test(struct svm_test *test)
-{
-    volatile u32 *data = (volatile void*)(0xfee00030UL);
-
-    nested_apic_version1 = *data;
-    nested_apic_version2 = *data;
-}
-
-static bool npt_l1mmio_check(struct svm_test *test)
-{
-    volatile u32 *data = (volatile void*)(0xfee00030);
-    u32 lvr = *data;
-
-    return nested_apic_version1 == lvr && nested_apic_version2 == lvr;
-}
-
-static void npt_rw_l1mmio_prepare(struct svm_test *test)
-{
-
-    u64 *pte;
-
-    pte = npt_get_pte(0xfee00080);
-
-    *pte &= ~(1ULL << 1);
-}
-
-static void npt_rw_l1mmio_test(struct svm_test *test)
-{
-    volatile u32 *data = (volatile void*)(0xfee00080);
-
-    *data = *data;
-}
-
-static bool npt_rw_l1mmio_check(struct svm_test *test)
-{
-    u64 *pte = npt_get_pte(0xfee00080);
-
-    *pte |= (1ULL << 1);
-
-    return (vmcb->control.exit_code == SVM_EXIT_NPF)
-           && (vmcb->control.exit_info_1 == 0x100000007ULL);
-}
-
 #define TSC_ADJUST_VALUE    (1ll << 32)
 #define TSC_OFFSET_VALUE    (~0ull << 48)
 static bool ok;
@@ -2672,169 +2496,6 @@ static void svm_test_singlestep(void)
 		vmcb->save.rip == (u64)&guest_end, "Test EFLAGS.TF on VMRUN: guest execution completion");
 }
 
-static void __svm_npt_rsvd_bits_test(u64 *pxe, u64 rsvd_bits, u64 efer,
-				     ulong cr4, u64 guest_efer, ulong guest_cr4)
-{
-	u64 pxe_orig = *pxe;
-	int exit_reason;
-	u64 pfec;
-
-	wrmsr(MSR_EFER, efer);
-	write_cr4(cr4);
-
-	vmcb->save.efer = guest_efer;
-	vmcb->save.cr4  = guest_cr4;
-
-	*pxe |= rsvd_bits;
-
-	exit_reason = svm_vmrun();
-
-	report(exit_reason == SVM_EXIT_NPF,
-	       "Wanted #NPF on rsvd bits = 0x%lx, got exit = 0x%x", rsvd_bits, exit_reason);
-
-	if (pxe == npt_get_pdpe() || pxe == npt_get_pml4e()) {
-		/*
-		 * The guest's page tables will blow up on a bad PDPE/PML4E,
-		 * before starting the final walk of the guest page.
-		 */
-		pfec = 0x20000000full;
-	} else {
-		/* RSVD #NPF on final walk of guest page. */
-		pfec = 0x10000000dULL;
-
-		/* PFEC.FETCH=1 if NX=1 *or* SMEP=1. */
-		if ((cr4 & X86_CR4_SMEP) || (efer & EFER_NX))
-			pfec |= 0x10;
-
-	}
-
-	report(vmcb->control.exit_info_1 == pfec,
-	       "Wanted PFEC = 0x%lx, got PFEC = %lx, PxE = 0x%lx.  "
-	       "host.NX = %u, host.SMEP = %u, guest.NX = %u, guest.SMEP = %u",
-	       pfec, vmcb->control.exit_info_1, *pxe,
-	       !!(efer & EFER_NX), !!(cr4 & X86_CR4_SMEP),
-	       !!(guest_efer & EFER_NX), !!(guest_cr4 & X86_CR4_SMEP));
-
-	*pxe = pxe_orig;
-}
-
-static void _svm_npt_rsvd_bits_test(u64 *pxe, u64 pxe_rsvd_bits,  u64 efer,
-				    ulong cr4, u64 guest_efer, ulong guest_cr4)
-{
-	u64 rsvd_bits;
-	int i;
-
-	/*
-	 * RDTSC or RDRAND can sometimes fail to generate a valid reserved bits
-	 */
-	if (!pxe_rsvd_bits) {
-		report_skip("svm_npt_rsvd_bits_test: Reserved bits are not valid");
-		return;
-	}
-
-	/*
-	 * Test all combinations of guest/host EFER.NX and CR4.SMEP.  If host
-	 * EFER.NX=0, use NX as the reserved bit, otherwise use the passed in
-	 * @pxe_rsvd_bits.
-	 */
-	for (i = 0; i < 16; i++) {
-		if (i & 1) {
-			rsvd_bits = pxe_rsvd_bits;
-			efer |= EFER_NX;
-		} else {
-			rsvd_bits = PT64_NX_MASK;
-			efer &= ~EFER_NX;
-		}
-		if (i & 2)
-			cr4 |= X86_CR4_SMEP;
-		else
-			cr4 &= ~X86_CR4_SMEP;
-		if (i & 4)
-			guest_efer |= EFER_NX;
-		else
-			guest_efer &= ~EFER_NX;
-		if (i & 8)
-			guest_cr4 |= X86_CR4_SMEP;
-		else
-			guest_cr4 &= ~X86_CR4_SMEP;
-
-		__svm_npt_rsvd_bits_test(pxe, rsvd_bits, efer, cr4,
-					 guest_efer, guest_cr4);
-	}
-}
-
-static u64 get_random_bits(u64 hi, u64 low)
-{
-	unsigned retry = 5;
-	u64 rsvd_bits = 0;
-
-	if (this_cpu_has(X86_FEATURE_RDRAND)) {
-		do {
-			rsvd_bits = (rdrand() << low) & GENMASK_ULL(hi, low);
-			retry--;
-		} while (!rsvd_bits && retry);
-	}
-
-	if (!rsvd_bits) {
-		retry = 5;
-		do {
-			rsvd_bits = (rdtsc() << low) & GENMASK_ULL(hi, low);
-			retry--;
-		} while (!rsvd_bits && retry);
-	}
-
-	return rsvd_bits;
-}
-
-
-static void svm_npt_rsvd_bits_test(void)
-{
-	u64   saved_efer, host_efer, sg_efer, guest_efer;
-	ulong saved_cr4,  host_cr4,  sg_cr4,  guest_cr4;
-
-	if (!npt_supported()) {
-		report_skip("NPT not supported");
-		return;
-	}
-
-	saved_efer = host_efer  = rdmsr(MSR_EFER);
-	saved_cr4  = host_cr4   = read_cr4();
-	sg_efer    = guest_efer = vmcb->save.efer;
-	sg_cr4     = guest_cr4  = vmcb->save.cr4;
-
-	test_set_guest(basic_guest_main);
-
-	/*
-	 * 4k PTEs don't have reserved bits if MAXPHYADDR >= 52, just skip the
-	 * sub-test.  The NX test is still valid, but the extra bit of coverage
-	 * isn't worth the extra complexity.
-	 */
-	if (cpuid_maxphyaddr() >= 52)
-		goto skip_pte_test;
-
-	_svm_npt_rsvd_bits_test(npt_get_pte((u64)basic_guest_main),
-				get_random_bits(51, cpuid_maxphyaddr()),
-				host_efer, host_cr4, guest_efer, guest_cr4);
-
-skip_pte_test:
-	_svm_npt_rsvd_bits_test(npt_get_pde((u64)basic_guest_main),
-				get_random_bits(20, 13) | PT_PAGE_SIZE_MASK,
-				host_efer, host_cr4, guest_efer, guest_cr4);
-
-	_svm_npt_rsvd_bits_test(npt_get_pdpe(),
-				PT_PAGE_SIZE_MASK |
-					(this_cpu_has(X86_FEATURE_GBPAGES) ? get_random_bits(29, 13) : 0),
-				host_efer, host_cr4, guest_efer, guest_cr4);
-
-	_svm_npt_rsvd_bits_test(npt_get_pml4e(), BIT_ULL(8),
-				host_efer, host_cr4, guest_efer, guest_cr4);
-
-	wrmsr(MSR_EFER, saved_efer);
-	write_cr4(saved_cr4);
-	vmcb->save.efer = sg_efer;
-	vmcb->save.cr4  = sg_cr4;
-}
-
 static bool volatile svm_errata_reproduced = false;
 static unsigned long volatile physical = 0;
 
@@ -3634,6 +3295,14 @@ static void svm_intr_intercept_mix_smi(void)
 	svm_intr_intercept_mix_run_guest(NULL, SVM_EXIT_SMI);
 }
 
+int main(int ac, char **av)
+{
+    pteval_t opt_mask = 0;
+
+    __setup_vm(&opt_mask);
+    return run_svm_tests(ac, av);
+}
+
 struct svm_test svm_tests[] = {
     { "null", default_supported, default_prepare,
       default_prepare_gif_clear, null_test,
@@ -3677,27 +3346,6 @@ struct svm_test svm_tests[] = {
     { "sel_cr0_bug", default_supported, sel_cr0_bug_prepare,
       default_prepare_gif_clear, sel_cr0_bug_test,
        sel_cr0_bug_finished, sel_cr0_bug_check },
-    { "npt_nx", npt_supported, npt_nx_prepare,
-      default_prepare_gif_clear, null_test,
-      default_finished, npt_nx_check },
-    { "npt_np", npt_supported, npt_np_prepare,
-      default_prepare_gif_clear, npt_np_test,
-      default_finished, npt_np_check },
-    { "npt_us", npt_supported, npt_us_prepare,
-      default_prepare_gif_clear, npt_us_test,
-      default_finished, npt_us_check },
-    { "npt_rw", npt_supported, npt_rw_prepare,
-      default_prepare_gif_clear, npt_rw_test,
-      default_finished, npt_rw_check },
-    { "npt_rw_pfwalk", npt_supported, npt_rw_pfwalk_prepare,
-      default_prepare_gif_clear, null_test,
-      default_finished, npt_rw_pfwalk_check },
-    { "npt_l1mmio", npt_supported, npt_l1mmio_prepare,
-      default_prepare_gif_clear, npt_l1mmio_test,
-      default_finished, npt_l1mmio_check },
-    { "npt_rw_l1mmio", npt_supported, npt_rw_l1mmio_prepare,
-      default_prepare_gif_clear, npt_rw_l1mmio_test,
-      default_finished, npt_rw_l1mmio_check },
     { "tsc_adjust", tsc_adjust_supported, tsc_adjust_prepare,
       default_prepare_gif_clear, tsc_adjust_test,
       default_finished, tsc_adjust_check },
@@ -3749,7 +3397,6 @@ struct svm_test svm_tests[] = {
       vgif_check },
     TEST(svm_cr4_osxsave_test),
     TEST(svm_guest_state_test),
-    TEST(svm_npt_rsvd_bits_test),
     TEST(svm_vmrun_errata_test),
     TEST(svm_vmload_vmsave),
     TEST(svm_test_singlestep),
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index 3701797..1828d2c 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -258,6 +258,12 @@ file = svm.flat
 extra_params = -cpu max,+svm -overcommit cpu-pm=on -m 4g -append pause_filter_test
 arch = x86_64
 
+[svm_npt]
+file = svm_npt.flat
+smp = 2
+extra_params = -cpu max,+svm -m 4g
+arch = x86_64
+
 [taskswitch]
 file = taskswitch.flat
 arch = i386
-- 
2.30.2



* [kvm-unit-tests RESEND PATCH v3 3/8] x86: nSVM: Allow nSVM tests run with PT_USER_MASK enabled
@ 2022-04-25 11:44 ` Manali Shukla
From: Manali Shukla @ 2022-04-25 11:44 UTC (permalink / raw)
  To: pbonzini, seanjc; +Cc: kvm

Commit 916635a813e975600335c6c47250881b7a328971
(nSVM: Add test for NPT reserved bit and #NPF error code behavior)
clears PT_USER_MASK for all SVM test cases. Any test that requires
usermode access will fail after this commit.

The above-mentioned commit changed main() in a way that made the other
nSVM tests "incompatible" with usermode.

The solution to this problem is to set PT_USER_MASK on all PTEs, so
that KUT builds the other tests with usermode access allowed.
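
A minimal sketch of the difference, assuming setup_vm() is the wrapper
that supplies a default option mask including PT_USER_MASK (the
comments are illustrative, not the verbatim library code):

	pteval_t opt_mask = 0;

	__setup_vm(&opt_mask);	/* U/S bit clear everywhere: usermode
				 * accesses fault */

	setup_vm();		/* default mask includes PT_USER_MASK,
				 * so usermode tests keep working */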

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Manali Shukla <manali.shukla@amd.com>
---
 x86/svm_tests.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index f0eeb1d..3b3b990 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -10,7 +10,6 @@
 #include "isr.h"
 #include "apic.h"
 #include "delay.h"
-#include "vmalloc.h"
 
 #define SVM_EXIT_MAX_DR_INTERCEPT 0x3f
 
@@ -3297,9 +3296,7 @@ static void svm_intr_intercept_mix_smi(void)
 
 int main(int ac, char **av)
 {
-    pteval_t opt_mask = 0;
-
-    __setup_vm(&opt_mask);
+    setup_vm();
     return run_svm_tests(ac, av);
 }
 
-- 
2.30.2



* [kvm-unit-tests RESEND PATCH v3 4/8] x86: Improve set_mmu_range() to implement npt
@ 2022-04-25 11:44 ` Manali Shukla
From: Manali Shukla @ 2022-04-25 11:44 UTC (permalink / raw)
  To: pbonzini, seanjc; +Cc: kvm

If the U/S bit is "0" in a page table entry, the page is treated as a
supervisor page. By default, pte_opt_mask is set to "0" for all NPT test
cases, which clears the U/S bit in all PTEs.

Any nested page table access performed by the MMU is treated as a user
access, so when implementing a nested page table dynamically,
PT_USER_MASK needs to be enabled for all NPT entries.

The setup_mmu_range() function is improved based on the above analysis.
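
With this change, patch 5/8 populates a nested page table as follows
(from setup_npt() there):

	pml4e = alloc_page();

	end_of_memory = fwcfg_get_u64(FW_CFG_RAM_SIZE);
	if (end_of_memory < (1ul << 32))
		end_of_memory = (1ul << 32);

	setup_mmu_range(pml4e, 0, end_of_memory, true);

where the nested_mmu == true path installs 4K pages with PT_USER_MASK
set.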

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Manali Shukla <manali.shukla@amd.com>
---
 lib/x86/vm.c | 37 +++++++++++++++++++++++++++----------
 lib/x86/vm.h |  3 +++
 2 files changed, 30 insertions(+), 10 deletions(-)

diff --git a/lib/x86/vm.c b/lib/x86/vm.c
index 25a4f5f..b555d5b 100644
--- a/lib/x86/vm.c
+++ b/lib/x86/vm.c
@@ -4,7 +4,7 @@
 #include "alloc_page.h"
 #include "smp.h"
 
-static pteval_t pte_opt_mask;
+static pteval_t pte_opt_mask, prev_pte_opt_mask;
 
 pteval_t *install_pte(pgd_t *cr3,
 		      int pte_level,
@@ -140,16 +140,33 @@ bool any_present_pages(pgd_t *cr3, void *virt, size_t len)
 	return false;
 }
 
-static void setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len)
+void set_pte_opt_mask()
+{
+        prev_pte_opt_mask = pte_opt_mask;
+        pte_opt_mask = PT_USER_MASK;
+}
+
+void reset_pte_opt_mask()
+{
+        pte_opt_mask = prev_pte_opt_mask;
+}
+
+void setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len, bool nested_mmu)
 {
 	u64 max = (u64)len + (u64)start;
 	u64 phys = start;
 
-	while (phys + LARGE_PAGE_SIZE <= max) {
-		install_large_page(cr3, phys, (void *)(ulong)phys);
-		phys += LARGE_PAGE_SIZE;
-	}
-	install_pages(cr3, phys, max - phys, (void *)(ulong)phys);
+        if (nested_mmu == false) {
+                while (phys + LARGE_PAGE_SIZE <= max) {
+                        install_large_page(cr3, phys, (void *)(ulong)phys);
+		        phys += LARGE_PAGE_SIZE;
+	        }
+	        install_pages(cr3, phys, max - phys, (void *)(ulong)phys);
+        } else {
+                set_pte_opt_mask();
+                install_pages(cr3, phys, len, (void *)(ulong)phys);
+                reset_pte_opt_mask();
+        }
 }
 
 static void set_additional_vcpu_vmregs(struct vm_vcpu_info *info)
@@ -176,10 +193,10 @@ void *setup_mmu(phys_addr_t end_of_memory, void *opt_mask)
     if (end_of_memory < (1ul << 32))
         end_of_memory = (1ul << 32);  /* map mmio 1:1 */
 
-    setup_mmu_range(cr3, 0, end_of_memory);
+    setup_mmu_range(cr3, 0, end_of_memory, false);
 #else
-    setup_mmu_range(cr3, 0, (2ul << 30));
-    setup_mmu_range(cr3, 3ul << 30, (1ul << 30));
+    setup_mmu_range(cr3, 0, (2ul << 30), false);
+    setup_mmu_range(cr3, 3ul << 30, (1ul << 30), false);
     init_alloc_vpage((void*)(3ul << 30));
 #endif
 
diff --git a/lib/x86/vm.h b/lib/x86/vm.h
index 4c6dff9..fbb657f 100644
--- a/lib/x86/vm.h
+++ b/lib/x86/vm.h
@@ -37,6 +37,9 @@ pteval_t *install_pte(pgd_t *cr3,
 pteval_t *install_large_page(pgd_t *cr3, phys_addr_t phys, void *virt);
 void install_pages(pgd_t *cr3, phys_addr_t phys, size_t len, void *virt);
 bool any_present_pages(pgd_t *cr3, void *virt, size_t len);
+void set_pte_opt_mask(void);
+void reset_pte_opt_mask(void);
+void setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len, bool nested_mmu);
 
 static inline void *current_page_table(void)
 {
-- 
2.30.2



* [kvm-unit-tests RESEND PATCH v3 5/8] x86: nSVM: Build up the nested page table dynamically
@ 2022-04-25 11:44 ` Manali Shukla
From: Manali Shukla @ 2022-04-25 11:44 UTC (permalink / raw)
  To: pbonzini, seanjc; +Cc: kvm

The current implementation builds the nested page table statically,
with 2048 PTEs and one PML4 entry. That is why the current
implementation is not extensible.

The new implementation builds the page table dynamically, based on the
RAM size of the VM, which enables us to have separate memory ranges to
test various NPT test cases.
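
The test code keeps using the same accessors; for example,
npt_us_prepare() in svm_npt.c still does

	scratch_page = alloc_page();
	pte = npt_get_pte((u64) scratch_page);

	*pte &= ~(1ULL << 2);

only the lookup underneath now walks the dynamically allocated table.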

Signed-off-by: Manali Shukla <manali.shukla@amd.com>
---
 x86/svm.c     | 75 ++++++++++++++++-----------------------------------
 x86/svm.h     |  4 ++-
 x86/svm_npt.c |  5 ++--
 3 files changed, 29 insertions(+), 55 deletions(-)

diff --git a/x86/svm.c b/x86/svm.c
index ec825c7..e66c801 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -8,6 +8,7 @@
 #include "desc.h"
 #include "msr.h"
 #include "vm.h"
+#include "fwcfg.h"
 #include "smp.h"
 #include "types.h"
 #include "alloc_page.h"
@@ -16,43 +17,32 @@
 #include "vmalloc.h"
 
 /* for the nested page table*/
-u64 *pte[2048];
-u64 *pde[4];
-u64 *pdpe;
 u64 *pml4e;
 
 struct vmcb *vmcb;
 
 u64 *npt_get_pte(u64 address)
 {
-	int i1, i2;
-
-	address >>= 12;
-	i1 = (address >> 9) & 0x7ff;
-	i2 = address & 0x1ff;
-
-	return &pte[i1][i2];
+        return get_pte(npt_get_pml4e(), (void*)address);
 }
 
 u64 *npt_get_pde(u64 address)
 {
-	int i1, i2;
-
-	address >>= 21;
-	i1 = (address >> 9) & 0x3;
-	i2 = address & 0x1ff;
-
-	return &pde[i1][i2];
+    struct pte_search search;
+    search = find_pte_level(npt_get_pml4e(), (void*)address, 2);
+    return search.pte;
 }
 
-u64 *npt_get_pdpe(void)
+u64 *npt_get_pdpe(u64 address)
 {
-	return pdpe;
+    struct pte_search search;
+    search = find_pte_level(npt_get_pml4e(), (void*)address, 3);
+    return search.pte;
 }
 
 u64 *npt_get_pml4e(void)
 {
-	return pml4e;
+    return pml4e;
 }
 
 bool smp_supported(void)
@@ -300,11 +290,21 @@ static void set_additional_vcpu_msr(void *msr_efer)
 	wrmsr(MSR_EFER, (ulong)msr_efer | EFER_SVME);
 }
 
+void setup_npt(void) {
+    u64 end_of_memory;
+    pml4e = alloc_page();
+
+    end_of_memory = fwcfg_get_u64(FW_CFG_RAM_SIZE);
+    if (end_of_memory < (1ul << 32))
+        end_of_memory = (1ul << 32);
+
+    setup_mmu_range(pml4e, 0, end_of_memory, true);
+}
+
 static void setup_svm(void)
 {
 	void *hsave = alloc_page();
-	u64 *page, address;
-	int i,j;
+	int i;
 
 	wrmsr(MSR_VM_HSAVE_PA, virt_to_phys(hsave));
 	wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
@@ -327,36 +327,7 @@ static void setup_svm(void)
 	* pages to get enough granularity for the NPT unit-tests.
 	*/
 
-	address = 0;
-
-	/* PTE level */
-	for (i = 0; i < 2048; ++i) {
-		page = alloc_page();
-
-		for (j = 0; j < 512; ++j, address += 4096)
-	    		page[j] = address | 0x067ULL;
-
-		pte[i] = page;
-	}
-
-	/* PDE level */
-	for (i = 0; i < 4; ++i) {
-		page = alloc_page();
-
-	for (j = 0; j < 512; ++j)
-	    page[j] = (u64)pte[(i * 512) + j] | 0x027ULL;
-
-		pde[i] = page;
-	}
-
-	/* PDPe level */
-	pdpe   = alloc_page();
-	for (i = 0; i < 4; ++i)
-		pdpe[i] = ((u64)(pde[i])) | 0x27;
-
-	/* PML4e level */
-	pml4e    = alloc_page();
-	pml4e[0] = ((u64)pdpe) | 0x27;
+  setup_npt();
 }
 
 int matched;
diff --git a/x86/svm.h b/x86/svm.h
index 123e64f..85eff3f 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -406,7 +406,7 @@ typedef void (*test_guest_func)(struct svm_test *);
 int run_svm_tests(int ac, char **av);
 u64 *npt_get_pte(u64 address);
 u64 *npt_get_pde(u64 address);
-u64 *npt_get_pdpe(void);
+u64 *npt_get_pdpe(u64 address);
 u64 *npt_get_pml4e(void);
 bool smp_supported(void);
 bool default_supported(void);
@@ -429,6 +429,8 @@ int __svm_vmrun(u64 rip);
 void __svm_bare_vmrun(void);
 int svm_vmrun(void);
 void test_set_guest(test_guest_func func);
+void setup_npt(void);
+u64* get_npt_pte(u64 *pml4, u64 guest_addr, int level);
 
 extern struct vmcb *vmcb;
 extern struct svm_test svm_tests[];
diff --git a/x86/svm_npt.c b/x86/svm_npt.c
index 53e8a90..ab4dcf4 100644
--- a/x86/svm_npt.c
+++ b/x86/svm_npt.c
@@ -209,7 +209,8 @@ static void __svm_npt_rsvd_bits_test(u64 * pxe, u64 rsvd_bits, u64 efer,
 	       "Wanted #NPF on rsvd bits = 0x%lx, got exit = 0x%x", rsvd_bits,
 	       exit_reason);
 
-	if (pxe == npt_get_pdpe() || pxe == npt_get_pml4e()) {
+	if (pxe == npt_get_pdpe((u64) basic_guest_main)
+	    || pxe == npt_get_pml4e()) {
 		/*
 		 * The guest's page tables will blow up on a bad PDPE/PML4E,
 		 * before starting the final walk of the guest page.
@@ -338,7 +339,7 @@ skip_pte_test:
 				get_random_bits(20, 13) | PT_PAGE_SIZE_MASK,
 				host_efer, host_cr4, guest_efer, guest_cr4);
 
-	_svm_npt_rsvd_bits_test(npt_get_pdpe(),
+	_svm_npt_rsvd_bits_test(npt_get_pdpe((u64) basic_guest_main),
 				PT_PAGE_SIZE_MASK |
 				(this_cpu_has(X86_FEATURE_GBPAGES) ?
 				 get_random_bits(29, 13) : 0), host_efer,
-- 
2.30.2



* [kvm-unit-tests RESEND PATCH v3 6/8] x86: nSVM: Correct indentation for svm.c
@ 2022-04-25 11:44 ` Manali Shukla
From: Manali Shukla @ 2022-04-25 11:44 UTC (permalink / raw)
  To: pbonzini, seanjc; +Cc: kvm

Used the ./scripts/Lindent script from the Linux kernel source tree to
correct the indentation in the svm.c file.

No functional changes intended.
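
For reference, the reindentation amounts to running (the invocation is
an assumption based on the script's usual interface, which passes its
arguments through to indent(1)):

	$ ./scripts/Lindent x86/svm.c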

Signed-off-by: Manali Shukla <manali.shukla@amd.com>
---
 x86/svm.c | 225 +++++++++++++++++++++++++++---------------------------
 1 file changed, 111 insertions(+), 114 deletions(-)

diff --git a/x86/svm.c b/x86/svm.c
index e66c801..081a167 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -23,26 +23,26 @@ struct vmcb *vmcb;
 
 u64 *npt_get_pte(u64 address)
 {
-        return get_pte(npt_get_pml4e(), (void*)address);
+	return get_pte(npt_get_pml4e(), (void *)address);
 }
 
 u64 *npt_get_pde(u64 address)
 {
-    struct pte_search search;
-    search = find_pte_level(npt_get_pml4e(), (void*)address, 2);
-    return search.pte;
+	struct pte_search search;
+	search = find_pte_level(npt_get_pml4e(), (void *)address, 2);
+	return search.pte;
 }
 
 u64 *npt_get_pdpe(u64 address)
 {
-    struct pte_search search;
-    search = find_pte_level(npt_get_pml4e(), (void*)address, 3);
-    return search.pte;
+	struct pte_search search;
+	search = find_pte_level(npt_get_pml4e(), (void *)address, 3);
+	return search.pte;
 }
 
 u64 *npt_get_pml4e(void)
 {
-    return pml4e;
+	return pml4e;
 }
 
 bool smp_supported(void)
@@ -52,7 +52,7 @@ bool smp_supported(void)
 
 bool default_supported(void)
 {
-    return true;
+	return true;
 }
 
 bool vgif_supported(void)
@@ -62,25 +62,24 @@ bool vgif_supported(void)
 
 bool lbrv_supported(void)
 {
-    return this_cpu_has(X86_FEATURE_LBRV);
+	return this_cpu_has(X86_FEATURE_LBRV);
 }
 
 bool tsc_scale_supported(void)
 {
-    return this_cpu_has(X86_FEATURE_TSCRATEMSR);
+	return this_cpu_has(X86_FEATURE_TSCRATEMSR);
 }
 
 bool pause_filter_supported(void)
 {
-    return this_cpu_has(X86_FEATURE_PAUSEFILTER);
+	return this_cpu_has(X86_FEATURE_PAUSEFILTER);
 }
 
 bool pause_threshold_supported(void)
 {
-    return this_cpu_has(X86_FEATURE_PFTHRESHOLD);
+	return this_cpu_has(X86_FEATURE_PFTHRESHOLD);
 }
 
-
 void default_prepare(struct svm_test *test)
 {
 	vmcb_ident(vmcb);
@@ -92,7 +91,7 @@ void default_prepare_gif_clear(struct svm_test *test)
 
 bool default_finished(struct svm_test *test)
 {
-	return true; /* one vmexit */
+	return true;		/* one vmexit */
 }
 
 bool npt_supported(void)
@@ -121,7 +120,7 @@ void inc_test_stage(struct svm_test *test)
 }
 
 static void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
-                         u64 base, u32 limit, u32 attr)
+			 u64 base, u32 limit, u32 attr)
 {
 	seg->selector = selector;
 	seg->attrib = attr;
@@ -131,7 +130,7 @@ static void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
 
 inline void vmmcall(void)
 {
-	asm volatile ("vmmcall" : : : "memory");
+	asm volatile ("vmmcall":::"memory");
 }
 
 static test_guest_func guest_main;
@@ -165,15 +164,17 @@ void vmcb_ident(struct vmcb *vmcb)
 	struct descriptor_table_ptr desc_table_ptr;
 
 	memset(vmcb, 0, sizeof(*vmcb));
-	asm volatile ("vmsave %0" : : "a"(vmcb_phys) : "memory");
+	asm volatile ("vmsave %0"::"a" (vmcb_phys):"memory");
 	vmcb_set_seg(&save->es, read_es(), 0, -1U, data_seg_attr);
 	vmcb_set_seg(&save->cs, read_cs(), 0, -1U, code_seg_attr);
 	vmcb_set_seg(&save->ss, read_ss(), 0, -1U, data_seg_attr);
 	vmcb_set_seg(&save->ds, read_ds(), 0, -1U, data_seg_attr);
 	sgdt(&desc_table_ptr);
-	vmcb_set_seg(&save->gdtr, 0, desc_table_ptr.base, desc_table_ptr.limit, 0);
+	vmcb_set_seg(&save->gdtr, 0, desc_table_ptr.base, desc_table_ptr.limit,
+		     0);
 	sidt(&desc_table_ptr);
-	vmcb_set_seg(&save->idtr, 0, desc_table_ptr.base, desc_table_ptr.limit, 0);
+	vmcb_set_seg(&save->idtr, 0, desc_table_ptr.base, desc_table_ptr.limit,
+		     0);
 	ctrl->asid = 1;
 	save->cpl = 0;
 	save->efer = rdmsr(MSR_EFER);
@@ -186,14 +187,13 @@ void vmcb_ident(struct vmcb *vmcb)
 	save->g_pat = rdmsr(MSR_IA32_CR_PAT);
 	save->dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
 	ctrl->intercept = (1ULL << INTERCEPT_VMRUN) |
-			  (1ULL << INTERCEPT_VMMCALL) |
-			  (1ULL << INTERCEPT_SHUTDOWN);
+	    (1ULL << INTERCEPT_VMMCALL) | (1ULL << INTERCEPT_SHUTDOWN);
 	ctrl->iopm_base_pa = virt_to_phys(io_bitmap);
 	ctrl->msrpm_base_pa = virt_to_phys(msr_bitmap);
 
 	if (npt_supported()) {
 		ctrl->nested_ctl = 1;
-		ctrl->nested_cr3 = (u64)pml4e;
+		ctrl->nested_cr3 = (u64) pml4e;
 		ctrl->tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
 	}
 }
@@ -207,32 +207,29 @@ struct regs get_regs(void)
 
 // rax handled specially below
 
-
 struct svm_test *v2_test;
 
-
 u64 guest_stack[10000];
 
 int __svm_vmrun(u64 rip)
 {
-	vmcb->save.rip = (ulong)rip;
-	vmcb->save.rsp = (ulong)(guest_stack + ARRAY_SIZE(guest_stack));
-	regs.rdi = (ulong)v2_test;
+	vmcb->save.rip = (ulong) rip;
+	vmcb->save.rsp = (ulong) (guest_stack + ARRAY_SIZE(guest_stack));
+	regs.rdi = (ulong) v2_test;
 
-	asm volatile (
-		ASM_PRE_VMRUN_CMD
-                "vmrun %%rax\n\t"               \
-		ASM_POST_VMRUN_CMD
-		:
-		: "a" (virt_to_phys(vmcb))
-		: "memory", "r15");
+	asm volatile (ASM_PRE_VMRUN_CMD
+			  "vmrun %%rax\n\t" \
+			  ASM_POST_VMRUN_CMD
+			  :
+			  :"a"(virt_to_phys(vmcb))
+			  :"memory", "r15");
 
 	return (vmcb->control.exit_code);
 }
 
 int svm_vmrun(void)
 {
-	return __svm_vmrun((u64)test_thunk);
+	return __svm_vmrun((u64) test_thunk);
 }
 
 extern u8 vmrun_rip;
@@ -246,40 +243,38 @@ static noinline void test_run(struct svm_test *test)
 
 	test->prepare(test);
 	guest_main = test->guest_func;
-	vmcb->save.rip = (ulong)test_thunk;
-	vmcb->save.rsp = (ulong)(guest_stack + ARRAY_SIZE(guest_stack));
-	regs.rdi = (ulong)test;
+	vmcb->save.rip = (ulong) test_thunk;
+	vmcb->save.rsp = (ulong) (guest_stack + ARRAY_SIZE(guest_stack));
+	regs.rdi = (ulong) test;
 	do {
 		struct svm_test *the_test = test;
 		u64 the_vmcb = vmcb_phys;
-		asm volatile (
-			"clgi;\n\t" // semi-colon needed for LLVM compatibility
-			"sti \n\t"
-			"call *%c[PREPARE_GIF_CLEAR](%[test]) \n \t"
-			"mov %[vmcb_phys], %%rax \n\t"
-			ASM_PRE_VMRUN_CMD
-			".global vmrun_rip\n\t"		\
-			"vmrun_rip: vmrun %%rax\n\t"    \
-			ASM_POST_VMRUN_CMD
-			"cli \n\t"
-			"stgi"
-			: // inputs clobbered by the guest:
-			"=D" (the_test),            // first argument register
-			"=b" (the_vmcb)             // callee save register!
-			: [test] "0" (the_test),
-			[vmcb_phys] "1"(the_vmcb),
-			[PREPARE_GIF_CLEAR] "i" (offsetof(struct svm_test, prepare_gif_clear))
-			: "rax", "rcx", "rdx", "rsi",
-			"r8", "r9", "r10", "r11" , "r12", "r13", "r14", "r15",
-			"memory");
+		asm volatile ("clgi;\n\t"	// semi-colon needed for LLVM compatibility
+			      "sti \n\t"
+			      "call *%c[PREPARE_GIF_CLEAR](%[test]) \n \t"
+			      "mov %[vmcb_phys], %%rax \n\t"
+			      ASM_PRE_VMRUN_CMD
+			      ".global vmrun_rip\n\t"       \
+			      "vmrun_rip: vmrun %%rax\n\t"  \
+			      ASM_POST_VMRUN_CMD "cli \n\t"
+			      "stgi"
+			      :	// inputs clobbered by the guest:
+			      "=D"(the_test),	// first argument register
+			      "=b"(the_vmcb)	// callee save register!
+			      :[test] "0"(the_test),
+			      [vmcb_phys] "1"(the_vmcb),
+			      [PREPARE_GIF_CLEAR]
+			      "i"(offsetof(struct svm_test, prepare_gif_clear))
+			      :"rax", "rcx", "rdx", "rsi", "r8", "r9", "r10",
+			      "r11", "r12", "r13", "r14", "r15", "memory");
 		++test->exits;
 	} while (!test->finished(test));
 	irq_enable();
 
 	report(test->succeeded(test), "%s", test->name);
 
-        if (test->on_vcpu)
-	    test->on_vcpu_done = true;
+	if (test->on_vcpu)
+		test->on_vcpu_done = true;
 }
 
 static void set_additional_vcpu_msr(void *msr_efer)
@@ -287,18 +282,19 @@ static void set_additional_vcpu_msr(void *msr_efer)
 	void *hsave = alloc_page();
 
 	wrmsr(MSR_VM_HSAVE_PA, virt_to_phys(hsave));
-	wrmsr(MSR_EFER, (ulong)msr_efer | EFER_SVME);
+	wrmsr(MSR_EFER, (ulong) msr_efer | EFER_SVME);
 }
 
-void setup_npt(void) {
-    u64 end_of_memory;
-    pml4e = alloc_page();
+void setup_npt(void)
+{
+	u64 end_of_memory;
+	pml4e = alloc_page();
 
-    end_of_memory = fwcfg_get_u64(FW_CFG_RAM_SIZE);
-    if (end_of_memory < (1ul << 32))
-        end_of_memory = (1ul << 32);
+	end_of_memory = fwcfg_get_u64(FW_CFG_RAM_SIZE);
+	if (end_of_memory < (1ul << 32))
+		end_of_memory = (1ul << 32);
 
-    setup_mmu_range(pml4e, 0, end_of_memory, true);
+	setup_mmu_range(pml4e, 0, end_of_memory, true);
 }
 
 static void setup_svm(void)
@@ -309,63 +305,64 @@ static void setup_svm(void)
 	wrmsr(MSR_VM_HSAVE_PA, virt_to_phys(hsave));
 	wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
 
-	io_bitmap = (void *) ALIGN((ulong)io_bitmap_area, PAGE_SIZE);
+	io_bitmap = (void *)ALIGN((ulong) io_bitmap_area, PAGE_SIZE);
 
-	msr_bitmap = (void *) ALIGN((ulong)msr_bitmap_area, PAGE_SIZE);
+	msr_bitmap = (void *)ALIGN((ulong) msr_bitmap_area, PAGE_SIZE);
 
 	if (!npt_supported())
 		return;
 
 	for (i = 1; i < cpu_count(); i++)
-		on_cpu(i, (void *)set_additional_vcpu_msr, (void *)rdmsr(MSR_EFER));
+		on_cpu(i, (void *)set_additional_vcpu_msr,
+		       (void *)rdmsr(MSR_EFER));
 
 	printf("NPT detected - running all tests with NPT enabled\n");
 
 	/*
-	* Nested paging supported - Build a nested page table
-	* Build the page-table bottom-up and map everything with 4k
-	* pages to get enough granularity for the NPT unit-tests.
-	*/
+	 * Nested paging supported - Build a nested page table
+	 * Build the page-table bottom-up and map everything with 4k
+	 * pages to get enough granularity for the NPT unit-tests.
+	 */
 
-  setup_npt();
+	setup_npt();
 }
 
 int matched;
 
-static bool
-test_wanted(const char *name, char *filters[], int filter_count)
-{
-        int i;
-        bool positive = false;
-        bool match = false;
-        char clean_name[strlen(name) + 1];
-        char *c;
-        const char *n;
-
-        /* Replace spaces with underscores. */
-        n = name;
-        c = &clean_name[0];
-        do *c++ = (*n == ' ') ? '_' : *n;
-        while (*n++);
-
-        for (i = 0; i < filter_count; i++) {
-                const char *filter = filters[i];
-
-                if (filter[0] == '-') {
-                        if (simple_glob(clean_name, filter + 1))
-                                return false;
-                } else {
-                        positive = true;
-                        match |= simple_glob(clean_name, filter);
-                }
-        }
-
-        if (!positive || match) {
-                matched++;
-                return true;
-        } else {
-                return false;
-        }
+static bool test_wanted(const char *name, char *filters[], int filter_count)
+{
+	int i;
+	bool positive = false;
+	bool match = false;
+	char clean_name[strlen(name) + 1];
+	char *c;
+	const char *n;
+
+	/* Replace spaces with underscores. */
+	n = name;
+	c = &clean_name[0];
+	do
+		*c++ = (*n == ' ') ? '_' : *n;
+	while (*n++);
+
+	for (i = 0; i < filter_count; i++) {
+		const char *filter = filters[i];
+
+		if (filter[0] == '-') {
+			if (simple_glob(clean_name, filter + 1))
+				return false;
+		} else {
+			positive = true;
+			match |= simple_glob(clean_name, filter);
+		}
+	}
+
+	if (!positive || match) {
+		matched++;
+		return true;
+	} else {
+		return false;
+	}
 }
 
 int run_svm_tests(int ac, char **av)
@@ -393,11 +390,11 @@ int run_svm_tests(int ac, char **av)
 			if (svm_tests[i].on_vcpu) {
 				if (cpu_count() <= svm_tests[i].on_vcpu)
 					continue;
-				on_cpu_async(svm_tests[i].on_vcpu, (void *)test_run, &svm_tests[i]);
+				on_cpu_async(svm_tests[i].on_vcpu,
+					     (void *)test_run, &svm_tests[i]);
 				while (!svm_tests[i].on_vcpu_done)
 					cpu_relax();
-			}
-			else
+			} else
 				test_run(&svm_tests[i]);
 		} else {
 			vmcb_ident(vmcb);
-- 
2.30.2



* [kvm-unit-tests RESEND PATCH v3 7/8] x86: nSVM: Correct indentation for svm_tests.c part-1
@ 2022-04-25 11:44 ` Manali Shukla
From: Manali Shukla @ 2022-04-25 11:44 UTC (permalink / raw)
  To: pbonzini, seanjc; +Cc: kvm

Used the ./scripts/Lindent script from the Linux kernel source tree to
correct the indentation in the svm_tests.c file.

No functional changes intended.
---
 x86/svm_tests.c | 2223 ++++++++++++++++++++++++-----------------------
 1 file changed, 1129 insertions(+), 1094 deletions(-)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 3b3b990..1813b97 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -43,492 +43,506 @@ static void null_test(struct svm_test *test)
 
 static bool null_check(struct svm_test *test)
 {
-    return vmcb->control.exit_code == SVM_EXIT_VMMCALL;
+	return vmcb->control.exit_code == SVM_EXIT_VMMCALL;
 }
 
 static void prepare_no_vmrun_int(struct svm_test *test)
 {
-    vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMRUN);
+	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMRUN);
 }
 
 static bool check_no_vmrun_int(struct svm_test *test)
 {
-    return vmcb->control.exit_code == SVM_EXIT_ERR;
+	return vmcb->control.exit_code == SVM_EXIT_ERR;
 }
 
 static void test_vmrun(struct svm_test *test)
 {
-    asm volatile ("vmrun %0" : : "a"(virt_to_phys(vmcb)));
+	asm volatile ("vmrun %0"::"a" (virt_to_phys(vmcb)));
 }
 
 static bool check_vmrun(struct svm_test *test)
 {
-    return vmcb->control.exit_code == SVM_EXIT_VMRUN;
+	return vmcb->control.exit_code == SVM_EXIT_VMRUN;
 }
 
 static void prepare_rsm_intercept(struct svm_test *test)
 {
-    default_prepare(test);
-    vmcb->control.intercept |= 1 << INTERCEPT_RSM;
-    vmcb->control.intercept_exceptions |= (1ULL << UD_VECTOR);
+	default_prepare(test);
+	vmcb->control.intercept |= 1 << INTERCEPT_RSM;
+	vmcb->control.intercept_exceptions |= (1ULL << UD_VECTOR);
 }
 
 static void test_rsm_intercept(struct svm_test *test)
 {
-    asm volatile ("rsm" : : : "memory");
+	asm volatile ("rsm":::"memory");
 }
 
 static bool check_rsm_intercept(struct svm_test *test)
 {
-    return get_test_stage(test) == 2;
+	return get_test_stage(test) == 2;
 }
 
 static bool finished_rsm_intercept(struct svm_test *test)
 {
-    switch (get_test_stage(test)) {
-    case 0:
-        if (vmcb->control.exit_code != SVM_EXIT_RSM) {
-            report_fail("VMEXIT not due to rsm. Exit reason 0x%x",
-                        vmcb->control.exit_code);
-            return true;
-        }
-        vmcb->control.intercept &= ~(1 << INTERCEPT_RSM);
-        inc_test_stage(test);
-        break;
+	switch (get_test_stage(test)) {
+	case 0:
+		if (vmcb->control.exit_code != SVM_EXIT_RSM) {
+			report_fail("VMEXIT not due to rsm. Exit reason 0x%x",
+				    vmcb->control.exit_code);
+			return true;
+		}
+		vmcb->control.intercept &= ~(1 << INTERCEPT_RSM);
+		inc_test_stage(test);
+		break;
 
-    case 1:
-        if (vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + UD_VECTOR) {
-            report_fail("VMEXIT not due to #UD. Exit reason 0x%x",
-                        vmcb->control.exit_code);
-            return true;
-        }
-        vmcb->save.rip += 2;
-        inc_test_stage(test);
-        break;
+	case 1:
+		if (vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + UD_VECTOR) {
+			report_fail("VMEXIT not due to #UD. Exit reason 0x%x",
+				    vmcb->control.exit_code);
+			return true;
+		}
+		vmcb->save.rip += 2;
+		inc_test_stage(test);
+		break;
 
-    default:
-        return true;
-    }
-    return get_test_stage(test) == 2;
+	default:
+		return true;
+	}
+	return get_test_stage(test) == 2;
 }
 
 static void prepare_cr3_intercept(struct svm_test *test)
 {
-    default_prepare(test);
-    vmcb->control.intercept_cr_read |= 1 << 3;
+	default_prepare(test);
+	vmcb->control.intercept_cr_read |= 1 << 3;
 }
 
 static void test_cr3_intercept(struct svm_test *test)
 {
-    asm volatile ("mov %%cr3, %0" : "=r"(test->scratch) : : "memory");
+	asm volatile ("mov %%cr3, %0":"=r" (test->scratch)::"memory");
 }
 
 static bool check_cr3_intercept(struct svm_test *test)
 {
-    return vmcb->control.exit_code == SVM_EXIT_READ_CR3;
+	return vmcb->control.exit_code == SVM_EXIT_READ_CR3;
 }
 
 static bool check_cr3_nointercept(struct svm_test *test)
 {
-    return null_check(test) && test->scratch == read_cr3();
+	return null_check(test) && test->scratch == read_cr3();
 }
 
 static void corrupt_cr3_intercept_bypass(void *_test)
 {
-    struct svm_test *test = _test;
-    extern volatile u32 mmio_insn;
+	struct svm_test *test = _test;
+	extern volatile u32 mmio_insn;
 
-    while (!__sync_bool_compare_and_swap(&test->scratch, 1, 2))
-        pause();
-    pause();
-    pause();
-    pause();
-    mmio_insn = 0x90d8200f;  // mov %cr3, %rax; nop
+	while (!__sync_bool_compare_and_swap(&test->scratch, 1, 2))
+		pause();
+	pause();
+	pause();
+	pause();
+	mmio_insn = 0x90d8200f;	// mov %cr3, %rax; nop
 }
 
 static void prepare_cr3_intercept_bypass(struct svm_test *test)
 {
-    default_prepare(test);
-    vmcb->control.intercept_cr_read |= 1 << 3;
-    on_cpu_async(1, corrupt_cr3_intercept_bypass, test);
+	default_prepare(test);
+	vmcb->control.intercept_cr_read |= 1 << 3;
+	on_cpu_async(1, corrupt_cr3_intercept_bypass, test);
 }
 
 static void test_cr3_intercept_bypass(struct svm_test *test)
 {
-    ulong a = 0xa0000;
+	ulong a = 0xa0000;
 
-    test->scratch = 1;
-    while (test->scratch != 2)
-        barrier();
+	test->scratch = 1;
+	while (test->scratch != 2)
+		barrier();
 
-    asm volatile ("mmio_insn: mov %0, (%0); nop"
-                  : "+a"(a) : : "memory");
-    test->scratch = a;
+	asm volatile ("mmio_insn: mov %0, (%0); nop":"+a" (a)::"memory");
+	test->scratch = a;
 }
 
 static void prepare_dr_intercept(struct svm_test *test)
 {
-    default_prepare(test);
-    vmcb->control.intercept_dr_read = 0xff;
-    vmcb->control.intercept_dr_write = 0xff;
+	default_prepare(test);
+	vmcb->control.intercept_dr_read = 0xff;
+	vmcb->control.intercept_dr_write = 0xff;
 }
 
 static void test_dr_intercept(struct svm_test *test)
 {
-    unsigned int i, failcnt = 0;
+	unsigned int i, failcnt = 0;
 
-    /* Loop testing debug register reads */
-    for (i = 0; i < 8; i++) {
+	/* Loop testing debug register reads */
+	for (i = 0; i < 8; i++) {
 
-        switch (i) {
-        case 0:
-            asm volatile ("mov %%dr0, %0" : "=r"(test->scratch) : : "memory");
-            break;
-        case 1:
-            asm volatile ("mov %%dr1, %0" : "=r"(test->scratch) : : "memory");
-            break;
-        case 2:
-            asm volatile ("mov %%dr2, %0" : "=r"(test->scratch) : : "memory");
-            break;
-        case 3:
-            asm volatile ("mov %%dr3, %0" : "=r"(test->scratch) : : "memory");
-            break;
-        case 4:
-            asm volatile ("mov %%dr4, %0" : "=r"(test->scratch) : : "memory");
-            break;
-        case 5:
-            asm volatile ("mov %%dr5, %0" : "=r"(test->scratch) : : "memory");
-            break;
-        case 6:
-            asm volatile ("mov %%dr6, %0" : "=r"(test->scratch) : : "memory");
-            break;
-        case 7:
-            asm volatile ("mov %%dr7, %0" : "=r"(test->scratch) : : "memory");
-            break;
-        }
+		switch (i) {
+		case 0:
+			asm volatile ("mov %%dr0, %0":"=r"
+				      (test->scratch)::"memory");
+			break;
+		case 1:
+			asm volatile ("mov %%dr1, %0":"=r"
+				      (test->scratch)::"memory");
+			break;
+		case 2:
+			asm volatile ("mov %%dr2, %0":"=r"
+				      (test->scratch)::"memory");
+			break;
+		case 3:
+			asm volatile ("mov %%dr3, %0":"=r"
+				      (test->scratch)::"memory");
+			break;
+		case 4:
+			asm volatile ("mov %%dr4, %0":"=r"
+				      (test->scratch)::"memory");
+			break;
+		case 5:
+			asm volatile ("mov %%dr5, %0":"=r"
+				      (test->scratch)::"memory");
+			break;
+		case 6:
+			asm volatile ("mov %%dr6, %0":"=r"
+				      (test->scratch)::"memory");
+			break;
+		case 7:
+			asm volatile ("mov %%dr7, %0":"=r"
+				      (test->scratch)::"memory");
+			break;
+		}
 
-        if (test->scratch != i) {
-            report_fail("dr%u read intercept", i);
-            failcnt++;
-        }
-    }
+		if (test->scratch != i) {
+			report_fail("dr%u read intercept", i);
+			failcnt++;
+		}
+	}
 
-    /* Loop testing debug register writes */
-    for (i = 0; i < 8; i++) {
+	/* Loop testing debug register writes */
+	for (i = 0; i < 8; i++) {
 
-        switch (i) {
-        case 0:
-            asm volatile ("mov %0, %%dr0" : : "r"(test->scratch) : "memory");
-            break;
-        case 1:
-            asm volatile ("mov %0, %%dr1" : : "r"(test->scratch) : "memory");
-            break;
-        case 2:
-            asm volatile ("mov %0, %%dr2" : : "r"(test->scratch) : "memory");
-            break;
-        case 3:
-            asm volatile ("mov %0, %%dr3" : : "r"(test->scratch) : "memory");
-            break;
-        case 4:
-            asm volatile ("mov %0, %%dr4" : : "r"(test->scratch) : "memory");
-            break;
-        case 5:
-            asm volatile ("mov %0, %%dr5" : : "r"(test->scratch) : "memory");
-            break;
-        case 6:
-            asm volatile ("mov %0, %%dr6" : : "r"(test->scratch) : "memory");
-            break;
-        case 7:
-            asm volatile ("mov %0, %%dr7" : : "r"(test->scratch) : "memory");
-            break;
-        }
+		switch (i) {
+		case 0:
+			asm volatile ("mov %0, %%dr0"::"r"
+				      (test->scratch):"memory");
+			break;
+		case 1:
+			asm volatile ("mov %0, %%dr1"::"r"
+				      (test->scratch):"memory");
+			break;
+		case 2:
+			asm volatile ("mov %0, %%dr2"::"r"
+				      (test->scratch):"memory");
+			break;
+		case 3:
+			asm volatile ("mov %0, %%dr3"::"r"
+				      (test->scratch):"memory");
+			break;
+		case 4:
+			asm volatile ("mov %0, %%dr4"::"r"
+				      (test->scratch):"memory");
+			break;
+		case 5:
+			asm volatile ("mov %0, %%dr5"::"r"
+				      (test->scratch):"memory");
+			break;
+		case 6:
+			asm volatile ("mov %0, %%dr6"::"r"
+				      (test->scratch):"memory");
+			break;
+		case 7:
+			asm volatile ("mov %0, %%dr7"::"r"
+				      (test->scratch):"memory");
+			break;
+		}
 
-        if (test->scratch != i) {
-            report_fail("dr%u write intercept", i);
-            failcnt++;
-        }
-    }
+		if (test->scratch != i) {
+			report_fail("dr%u write intercept", i);
+			failcnt++;
+		}
+	}
 
-    test->scratch = failcnt;
+	test->scratch = failcnt;
 }
 
 static bool dr_intercept_finished(struct svm_test *test)
 {
-    ulong n = (vmcb->control.exit_code - SVM_EXIT_READ_DR0);
+	ulong n = (vmcb->control.exit_code - SVM_EXIT_READ_DR0);
 
-    /* Only expect DR intercepts */
-    if (n > (SVM_EXIT_MAX_DR_INTERCEPT - SVM_EXIT_READ_DR0))
-        return true;
+	/* Only expect DR intercepts */
+	if (n > (SVM_EXIT_MAX_DR_INTERCEPT - SVM_EXIT_READ_DR0))
+		return true;
 
-    /*
-     * Compute debug register number.
-     * Per Appendix C "SVM Intercept Exit Codes" of AMD64 Architecture
-     * Programmer's Manual Volume 2 - System Programming:
-     * http://support.amd.com/TechDocs/24593.pdf
-     * there are 16 VMEXIT codes each for DR read and write.
-     */
-    test->scratch = (n % 16);
+	/*
+	 * Compute debug register number.
+	 * Per Appendix C "SVM Intercept Exit Codes" of AMD64 Architecture
+	 * Programmer's Manual Volume 2 - System Programming:
+	 * http://support.amd.com/TechDocs/24593.pdf
+	 * there are 16 VMEXIT codes each for DR read and write.
+	 */
+	test->scratch = (n % 16);
 
-    /* Jump over MOV instruction */
-    vmcb->save.rip += 3;
+	/* Jump over MOV instruction */
+	vmcb->save.rip += 3;
 
-    return false;
+	return false;
 }
 
 static bool check_dr_intercept(struct svm_test *test)
 {
-    return !test->scratch;
+	return !test->scratch;
 }
 
 static bool next_rip_supported(void)
 {
-    return this_cpu_has(X86_FEATURE_NRIPS);
+	return this_cpu_has(X86_FEATURE_NRIPS);
 }
 
 static void prepare_next_rip(struct svm_test *test)
 {
-    vmcb->control.intercept |= (1ULL << INTERCEPT_RDTSC);
+	vmcb->control.intercept |= (1ULL << INTERCEPT_RDTSC);
 }
 
-
 static void test_next_rip(struct svm_test *test)
 {
-    asm volatile ("rdtsc\n\t"
-                  ".globl exp_next_rip\n\t"
-                  "exp_next_rip:\n\t" ::: "eax", "edx");
+	asm volatile ("rdtsc\n\t"
+		      ".globl exp_next_rip\n\t"
+		      "exp_next_rip:\n\t":::"eax", "edx");
 }
 
 static bool check_next_rip(struct svm_test *test)
 {
-    extern char exp_next_rip;
-    unsigned long address = (unsigned long)&exp_next_rip;
+	extern char exp_next_rip;
+	unsigned long address = (unsigned long)&exp_next_rip;
 
-    return address == vmcb->control.next_rip;
+	return address == vmcb->control.next_rip;
 }
 
 extern u8 *msr_bitmap;
 
 static void prepare_msr_intercept(struct svm_test *test)
 {
-    default_prepare(test);
-    vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT);
-    vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
-    memset(msr_bitmap, 0xff, MSR_BITMAP_SIZE);
+	default_prepare(test);
+	vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT);
+	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
+	memset(msr_bitmap, 0xff, MSR_BITMAP_SIZE);
 }
 
 static void test_msr_intercept(struct svm_test *test)
 {
-    unsigned long msr_value = 0xef8056791234abcd; /* Arbitrary value */
-    unsigned long msr_index;
-
-    for (msr_index = 0; msr_index <= 0xc0011fff; msr_index++) {
-        if (msr_index == 0xC0010131 /* MSR_SEV_STATUS */) {
-            /*
-             * Per section 15.34.10 "SEV_STATUS MSR" of AMD64 Architecture
-             * Programmer's Manual volume 2 - System Programming:
-             * http://support.amd.com/TechDocs/24593.pdf
-             * SEV_STATUS MSR (C001_0131) is a non-interceptable MSR.
-             */
-            continue;
-        }
+	unsigned long msr_value = 0xef8056791234abcd;	/* Arbitrary value */
+	unsigned long msr_index;
+
+	for (msr_index = 0; msr_index <= 0xc0011fff; msr_index++) {
+		if (msr_index == 0xC0010131 /* MSR_SEV_STATUS */ ) {
+			/*
+			 * Per section 15.34.10 "SEV_STATUS MSR" of AMD64 Architecture
+			 * Programmer's Manual volume 2 - System Programming:
+			 * http://support.amd.com/TechDocs/24593.pdf
+			 * SEV_STATUS MSR (C001_0131) is a non-interceptable MSR.
+			 */
+			continue;
+		}
 
-        /* Skips gaps between supported MSR ranges */
-        if (msr_index == 0x2000)
-            msr_index = 0xc0000000;
-        else if (msr_index == 0xc0002000)
-            msr_index = 0xc0010000;
+		/* Skips gaps between supported MSR ranges */
+		if (msr_index == 0x2000)
+			msr_index = 0xc0000000;
+		else if (msr_index == 0xc0002000)
+			msr_index = 0xc0010000;
 
-        test->scratch = -1;
+		test->scratch = -1;
 
-        rdmsr(msr_index);
+		rdmsr(msr_index);
 
-        /* Check that a read intercept occurred for MSR at msr_index */
-        if (test->scratch != msr_index)
-            report_fail("MSR 0x%lx read intercept", msr_index);
+		/* Check that a read intercept occurred for MSR at msr_index */
+		if (test->scratch != msr_index)
+			report_fail("MSR 0x%lx read intercept", msr_index);
 
-        /*
-         * Poor man approach to generate a value that
-         * seems arbitrary each time around the loop.
-         */
-        msr_value += (msr_value << 1);
+		/*
+		 * Poor man's approach to generate a value that
+		 * seems arbitrary each time around the loop.
+		 */
+		msr_value += (msr_value << 1);
 
-        wrmsr(msr_index, msr_value);
+		wrmsr(msr_index, msr_value);
 
-        /* Check that a write intercept occurred for MSR with msr_value */
-        if (test->scratch != msr_value)
-            report_fail("MSR 0x%lx write intercept", msr_index);
-    }
+		/* Check that a write intercept occurred for MSR with msr_value */
+		if (test->scratch != msr_value)
+			report_fail("MSR 0x%lx write intercept", msr_index);
+	}
 
-    test->scratch = -2;
+	test->scratch = -2;
 }
 
 static bool msr_intercept_finished(struct svm_test *test)
 {
-    u32 exit_code = vmcb->control.exit_code;
-    u64 exit_info_1;
-    u8 *opcode;
+	u32 exit_code = vmcb->control.exit_code;
+	u64 exit_info_1;
+	u8 *opcode;
 
-    if (exit_code == SVM_EXIT_MSR) {
-        exit_info_1 = vmcb->control.exit_info_1;
-    } else {
-        /*
-         * If #GP exception occurs instead, check that it was
-         * for RDMSR/WRMSR and set exit_info_1 accordingly.
-         */
+	if (exit_code == SVM_EXIT_MSR) {
+		exit_info_1 = vmcb->control.exit_info_1;
+	} else {
+		/*
+		 * If #GP exception occurs instead, check that it was
+		 * for RDMSR/WRMSR and set exit_info_1 accordingly.
+		 */
 
-        if (exit_code != (SVM_EXIT_EXCP_BASE + GP_VECTOR))
-            return true;
+		if (exit_code != (SVM_EXIT_EXCP_BASE + GP_VECTOR))
+			return true;
 
-        opcode = (u8 *)vmcb->save.rip;
-        if (opcode[0] != 0x0f)
-            return true;
+		opcode = (u8 *) vmcb->save.rip;
+		if (opcode[0] != 0x0f)
+			return true;
 
-        switch (opcode[1]) {
-        case 0x30: /* WRMSR */
-            exit_info_1 = 1;
-            break;
-        case 0x32: /* RDMSR */
-            exit_info_1 = 0;
-            break;
-        default:
-            return true;
-        }
+		switch (opcode[1]) {
+		case 0x30:	/* WRMSR */
+			exit_info_1 = 1;
+			break;
+		case 0x32:	/* RDMSR */
+			exit_info_1 = 0;
+			break;
+		default:
+			return true;
+		}
 
-        /*
-         * Warn that #GP exception occured instead.
-         * RCX holds the MSR index.
-         */
-        printf("%s 0x%lx #GP exception\n",
-            exit_info_1 ? "WRMSR" : "RDMSR", get_regs().rcx);
-    }
+		/*
+		 * Warn that #GP exception occurred instead.
+		 * RCX holds the MSR index.
+		 */
+		printf("%s 0x%lx #GP exception\n",
+		       exit_info_1 ? "WRMSR" : "RDMSR", get_regs().rcx);
+	}
 
-    /* Jump over RDMSR/WRMSR instruction */
-    vmcb->save.rip += 2;
-
-    /*
-     * Test whether the intercept was for RDMSR/WRMSR.
-     * For RDMSR, test->scratch is set to the MSR index;
-     *      RCX holds the MSR index.
-     * For WRMSR, test->scratch is set to the MSR value;
-     *      RDX holds the upper 32 bits of the MSR value,
-     *      while RAX hold its lower 32 bits.
-     */
-    if (exit_info_1)
-        test->scratch =
-            ((get_regs().rdx << 32) | (vmcb->save.rax & 0xffffffff));
-    else
-        test->scratch = get_regs().rcx;
+	/* Jump over RDMSR/WRMSR instruction */
+	vmcb->save.rip += 2;
+
+	/*
+	 * Test whether the intercept was for RDMSR/WRMSR.
+	 * For RDMSR, test->scratch is set to the MSR index;
+	 *      RCX holds the MSR index.
+	 * For WRMSR, test->scratch is set to the MSR value;
+	 *      RDX holds the upper 32 bits of the MSR value,
+	 *      while RAX hold its lower 32 bits.
+	 */
+	if (exit_info_1)
+		test->scratch =
+		    ((get_regs().rdx << 32) | (vmcb->save.rax & 0xffffffff));
+	else
+		test->scratch = get_regs().rcx;
 
-    return false;
+	return false;
 }
 
 static bool check_msr_intercept(struct svm_test *test)
 {
-    memset(msr_bitmap, 0, MSR_BITMAP_SIZE);
-    return (test->scratch == -2);
+	memset(msr_bitmap, 0, MSR_BITMAP_SIZE);
+	return (test->scratch == -2);
 }
 
 static void prepare_mode_switch(struct svm_test *test)
 {
-    vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR)
-                                             |  (1ULL << UD_VECTOR)
-                                             |  (1ULL << DF_VECTOR)
-                                             |  (1ULL << PF_VECTOR);
-    test->scratch = 0;
+	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR)
+	    | (1ULL << UD_VECTOR)
+	    | (1ULL << DF_VECTOR)
+	    | (1ULL << PF_VECTOR);
+	test->scratch = 0;
 }
 
 static void test_mode_switch(struct svm_test *test)
 {
-    asm volatile("	cli\n"
-		 "	ljmp *1f\n" /* jump to 32-bit code segment */
-		 "1:\n"
-		 "	.long 2f\n"
-		 "	.long " xstr(KERNEL_CS32) "\n"
-		 ".code32\n"
-		 "2:\n"
-		 "	movl %%cr0, %%eax\n"
-		 "	btcl  $31, %%eax\n" /* clear PG */
-		 "	movl %%eax, %%cr0\n"
-		 "	movl $0xc0000080, %%ecx\n" /* EFER */
-		 "	rdmsr\n"
-		 "	btcl $8, %%eax\n" /* clear LME */
-		 "	wrmsr\n"
-		 "	movl %%cr4, %%eax\n"
-		 "	btcl $5, %%eax\n" /* clear PAE */
-		 "	movl %%eax, %%cr4\n"
-		 "	movw %[ds16], %%ax\n"
-		 "	movw %%ax, %%ds\n"
-		 "	ljmpl %[cs16], $3f\n" /* jump to 16 bit protected-mode */
-		 ".code16\n"
-		 "3:\n"
-		 "	movl %%cr0, %%eax\n"
-		 "	btcl $0, %%eax\n" /* clear PE  */
-		 "	movl %%eax, %%cr0\n"
-		 "	ljmpl $0, $4f\n"   /* jump to real-mode */
-		 "4:\n"
-		 "	vmmcall\n"
-		 "	movl %%cr0, %%eax\n"
-		 "	btsl $0, %%eax\n" /* set PE  */
-		 "	movl %%eax, %%cr0\n"
-		 "	ljmpl %[cs32], $5f\n" /* back to protected mode */
-		 ".code32\n"
-		 "5:\n"
-		 "	movl %%cr4, %%eax\n"
-		 "	btsl $5, %%eax\n" /* set PAE */
-		 "	movl %%eax, %%cr4\n"
-		 "	movl $0xc0000080, %%ecx\n" /* EFER */
-		 "	rdmsr\n"
-		 "	btsl $8, %%eax\n" /* set LME */
-		 "	wrmsr\n"
-		 "	movl %%cr0, %%eax\n"
-		 "	btsl  $31, %%eax\n" /* set PG */
-		 "	movl %%eax, %%cr0\n"
-		 "	ljmpl %[cs64], $6f\n"    /* back to long mode */
-		 ".code64\n\t"
-		 "6:\n"
-		 "	vmmcall\n"
-		 :: [cs16] "i"(KERNEL_CS16), [ds16] "i"(KERNEL_DS16),
-		    [cs32] "i"(KERNEL_CS32), [cs64] "i"(KERNEL_CS64)
-		 : "rax", "rbx", "rcx", "rdx", "memory");
+	asm volatile("	  cli\n"
+		     "	  ljmp *1f\n" /* jump to 32-bit code segment */
+		     "1:\n"
+		     "	  .long 2f\n"
+		     "	  .long " xstr(KERNEL_CS32) "\n"
+		     ".code32\n"
+		     "2:\n"
+		     "	  movl %%cr0, %%eax\n"
+		     "	  btcl  $31, %%eax\n" /* clear PG */
+		     "	  movl %%eax, %%cr0\n"
+		     "	  movl $0xc0000080, %%ecx\n" /* EFER */
+		     "	  rdmsr\n"
+		     "	  btcl $8, %%eax\n" /* clear LME */
+		     "	  wrmsr\n"
+		     "	  movl %%cr4, %%eax\n"
+		     "	  btcl $5, %%eax\n" /* clear PAE */
+		     "	  movl %%eax, %%cr4\n"
+		     "	  movw %[ds16], %%ax\n"
+		     "	  movw %%ax, %%ds\n"
+		     "	  ljmpl %[cs16], $3f\n" /* jump to 16 bit protected-mode */
+		     ".code16\n"
+		     "3:\n"
+		     "	  movl %%cr0, %%eax\n"
+		     "	  btcl $0, %%eax\n" /* clear PE  */
+		     "	  movl %%eax, %%cr0\n"
+		     "	  ljmpl $0, $4f\n"   /* jump to real-mode */
+		     "4:\n"
+		     "	  vmmcall\n"
+		     "	  movl %%cr0, %%eax\n"
+		     "	  btsl $0, %%eax\n" /* set PE  */
+		     "	  movl %%eax, %%cr0\n"
+		     "	  ljmpl %[cs32], $5f\n" /* back to protected mode */
+		     ".code32\n"
+		     "5:\n"
+		     "	  movl %%cr4, %%eax\n"
+		     "	  btsl $5, %%eax\n" /* set PAE */
+		     "	  movl %%eax, %%cr4\n"
+		     "	  movl $0xc0000080, %%ecx\n" /* EFER */
+		     "	  rdmsr\n"
+		     "	  btsl $8, %%eax\n" /* set LME */
+		     "	  wrmsr\n"
+		     "	  movl %%cr0, %%eax\n"
+		     "	  btsl  $31, %%eax\n" /* set PG */
+		     "	  movl %%eax, %%cr0\n"
+		     "	  ljmpl %[cs64], $6f\n"	/* back to long mode */
+		     ".code64\n\t"
+		     "6:\n"
+		     "	  vmmcall\n"
+		     :: [cs16] "i"(KERNEL_CS16), [ds16] "i"(KERNEL_DS16),
+			[cs32] "i"(KERNEL_CS32), [cs64] "i"(KERNEL_CS64)
+		     : "rax", "rbx", "rcx", "rdx", "memory");
 }
 
 static bool mode_switch_finished(struct svm_test *test)
 {
-    u64 cr0, cr4, efer;
+	u64 cr0, cr4, efer;
 
-    cr0  = vmcb->save.cr0;
-    cr4  = vmcb->save.cr4;
-    efer = vmcb->save.efer;
+	cr0 = vmcb->save.cr0;
+	cr4 = vmcb->save.cr4;
+	efer = vmcb->save.efer;
 
-    /* Only expect VMMCALL intercepts */
-    if (vmcb->control.exit_code != SVM_EXIT_VMMCALL)
-	    return true;
+	/* Only expect VMMCALL intercepts */
+	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL)
+		return true;
 
-    /* Jump over VMMCALL instruction */
-    vmcb->save.rip += 3;
+	/* Jump over VMMCALL instruction */
+	vmcb->save.rip += 3;
 
-    /* Do sanity checks */
-    switch (test->scratch) {
-    case 0:
-        /* Test should be in real mode now - check for this */
-        if ((cr0  & 0x80000001) || /* CR0.PG, CR0.PE */
-            (cr4  & 0x00000020) || /* CR4.PAE */
-            (efer & 0x00000500))   /* EFER.LMA, EFER.LME */
-                return true;
-        break;
-    case 2:
-        /* Test should be back in long-mode now - check for this */
-        if (((cr0  & 0x80000001) != 0x80000001) || /* CR0.PG, CR0.PE */
-            ((cr4  & 0x00000020) != 0x00000020) || /* CR4.PAE */
-            ((efer & 0x00000500) != 0x00000500))   /* EFER.LMA, EFER.LME */
-		    return true;
-	break;
-    }
+	/* Do sanity checks */
+	switch (test->scratch) {
+	case 0:
+		/* Test should be in real mode now - check for this */
+		if ((cr0 & 0x80000001) ||	/* CR0.PG, CR0.PE */
+		    (cr4 & 0x00000020) ||	/* CR4.PAE */
+		    (efer & 0x00000500))	/* EFER.LMA, EFER.LME */
+			return true;
+		break;
+	case 2:
+		/* Test should be back in long-mode now - check for this */
+		if (((cr0 & 0x80000001) != 0x80000001) ||	/* CR0.PG, CR0.PE */
+		    ((cr4 & 0x00000020) != 0x00000020) ||	/* CR4.PAE */
+		    ((efer & 0x00000500) != 0x00000500))	/* EFER.LMA, EFER.LME */
+			return true;
+		break;
+	}
 
-    /* one step forward */
-    test->scratch += 1;
+	/* one step forward */
+	test->scratch += 1;
 
-    return test->scratch == 2;
+	return test->scratch == 2;
 }
 
 static bool check_mode_switch(struct svm_test *test)
@@ -540,132 +554,132 @@ extern u8 *io_bitmap;
 
 static void prepare_ioio(struct svm_test *test)
 {
-    vmcb->control.intercept |= (1ULL << INTERCEPT_IOIO_PROT);
-    test->scratch = 0;
-    memset(io_bitmap, 0, 8192);
-    io_bitmap[8192] = 0xFF;
+	vmcb->control.intercept |= (1ULL << INTERCEPT_IOIO_PROT);
+	test->scratch = 0;
+	memset(io_bitmap, 0, 8192);
+	io_bitmap[8192] = 0xFF;
 }
 
 static void test_ioio(struct svm_test *test)
 {
-    // stage 0, test IO pass
-    inb(0x5000);
-    outb(0x0, 0x5000);
-    if (get_test_stage(test) != 0)
-        goto fail;
-
-    // test IO width, in/out
-    io_bitmap[0] = 0xFF;
-    inc_test_stage(test);
-    inb(0x0);
-    if (get_test_stage(test) != 2)
-        goto fail;
-
-    outw(0x0, 0x0);
-    if (get_test_stage(test) != 3)
-        goto fail;
-
-    inl(0x0);
-    if (get_test_stage(test) != 4)
-        goto fail;
-
-    // test low/high IO port
-    io_bitmap[0x5000 / 8] = (1 << (0x5000 % 8));
-    inb(0x5000);
-    if (get_test_stage(test) != 5)
-        goto fail;
-
-    io_bitmap[0x9000 / 8] = (1 << (0x9000 % 8));
-    inw(0x9000);
-    if (get_test_stage(test) != 6)
-        goto fail;
-
-    // test partial pass
-    io_bitmap[0x5000 / 8] = (1 << (0x5000 % 8));
-    inl(0x4FFF);
-    if (get_test_stage(test) != 7)
-        goto fail;
-
-    // test across pages
-    inc_test_stage(test);
-    inl(0x7FFF);
-    if (get_test_stage(test) != 8)
-        goto fail;
-
-    inc_test_stage(test);
-    io_bitmap[0x8000 / 8] = 1 << (0x8000 % 8);
-    inl(0x7FFF);
-    if (get_test_stage(test) != 10)
-        goto fail;
-
-    io_bitmap[0] = 0;
-    inl(0xFFFF);
-    if (get_test_stage(test) != 11)
-        goto fail;
-
-    io_bitmap[0] = 0xFF;
-    io_bitmap[8192] = 0;
-    inl(0xFFFF);
-    inc_test_stage(test);
-    if (get_test_stage(test) != 12)
-        goto fail;
+	// stage 0, test IO pass
+	inb(0x5000);
+	outb(0x0, 0x5000);
+	if (get_test_stage(test) != 0)
+		goto fail;
 
-    return;
+	// test IO width, in/out
+	io_bitmap[0] = 0xFF;
+	inc_test_stage(test);
+	inb(0x0);
+	if (get_test_stage(test) != 2)
+		goto fail;
+
+	outw(0x0, 0x0);
+	if (get_test_stage(test) != 3)
+		goto fail;
+
+	inl(0x0);
+	if (get_test_stage(test) != 4)
+		goto fail;
+
+	// test low/high IO port
+	io_bitmap[0x5000 / 8] = (1 << (0x5000 % 8));
+	inb(0x5000);
+	if (get_test_stage(test) != 5)
+		goto fail;
+
+	io_bitmap[0x9000 / 8] = (1 << (0x9000 % 8));
+	inw(0x9000);
+	if (get_test_stage(test) != 6)
+		goto fail;
+
+	// test partial pass
+	io_bitmap[0x5000 / 8] = (1 << (0x5000 % 8));
+	inl(0x4FFF);
+	if (get_test_stage(test) != 7)
+		goto fail;
+
+	// test across pages
+	inc_test_stage(test);
+	inl(0x7FFF);
+	if (get_test_stage(test) != 8)
+		goto fail;
+
+	inc_test_stage(test);
+	io_bitmap[0x8000 / 8] = 1 << (0x8000 % 8);
+	inl(0x7FFF);
+	if (get_test_stage(test) != 10)
+		goto fail;
+
+	io_bitmap[0] = 0;
+	inl(0xFFFF);
+	if (get_test_stage(test) != 11)
+		goto fail;
+
+	io_bitmap[0] = 0xFF;
+	io_bitmap[8192] = 0;
+	inl(0xFFFF);
+	inc_test_stage(test);
+	if (get_test_stage(test) != 12)
+		goto fail;
+
+	return;
 
 fail:
-    report_fail("stage %d", get_test_stage(test));
-    test->scratch = -1;
+	report_fail("stage %d", get_test_stage(test));
+	test->scratch = -1;
 }
 
 static bool ioio_finished(struct svm_test *test)
 {
-    unsigned port, size;
+	unsigned port, size;
 
-    /* Only expect IOIO intercepts */
-    if (vmcb->control.exit_code == SVM_EXIT_VMMCALL)
-        return true;
+	/* Only expect IOIO intercepts */
+	if (vmcb->control.exit_code == SVM_EXIT_VMMCALL)
+		return true;
 
-    if (vmcb->control.exit_code != SVM_EXIT_IOIO)
-        return true;
+	if (vmcb->control.exit_code != SVM_EXIT_IOIO)
+		return true;
 
-    /* one step forward */
-    test->scratch += 1;
+	/* one step forward */
+	test->scratch += 1;
 
-    port = vmcb->control.exit_info_1 >> 16;
-    size = (vmcb->control.exit_info_1 >> SVM_IOIO_SIZE_SHIFT) & 7;
+	port = vmcb->control.exit_info_1 >> 16;
+	size = (vmcb->control.exit_info_1 >> SVM_IOIO_SIZE_SHIFT) & 7;
 
-    while (size--) {
-        io_bitmap[port / 8] &= ~(1 << (port & 7));
-        port++;
-    }
+	while (size--) {
+		io_bitmap[port / 8] &= ~(1 << (port & 7));
+		port++;
+	}
 
-    return false;
+	return false;
 }
 
 static bool check_ioio(struct svm_test *test)
 {
-    memset(io_bitmap, 0, 8193);
-    return test->scratch != -1;
+	memset(io_bitmap, 0, 8193);
+	return test->scratch != -1;
 }
 
 static void prepare_asid_zero(struct svm_test *test)
 {
-    vmcb->control.asid = 0;
+	vmcb->control.asid = 0;
 }
 
 static void test_asid_zero(struct svm_test *test)
 {
-    asm volatile ("vmmcall\n\t");
+	asm volatile ("vmmcall\n\t");
 }
 
 static bool check_asid_zero(struct svm_test *test)
 {
-    return vmcb->control.exit_code == SVM_EXIT_ERR;
+	return vmcb->control.exit_code == SVM_EXIT_ERR;
 }
 
 static void sel_cr0_bug_prepare(struct svm_test *test)
 {
-    vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
+	vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
 }
 
 static bool sel_cr0_bug_finished(struct svm_test *test)
@@ -675,25 +689,25 @@ static bool sel_cr0_bug_finished(struct svm_test *test)
 
 static void sel_cr0_bug_test(struct svm_test *test)
 {
-    unsigned long cr0;
+	unsigned long cr0;
 
-    /* read cr0, clear CD, and write back */
-    cr0  = read_cr0();
-    cr0 |= (1UL << 30);
-    write_cr0(cr0);
+	/* read cr0, clear CD, and write back */
+	cr0 = read_cr0();
+	cr0 |= (1UL << 30);
+	write_cr0(cr0);
 
-    /*
-     * If we are here the test failed, not sure what to do now because we
-     * are not in guest-mode anymore so we can't trigger an intercept.
-     * Trigger a tripple-fault for now.
-     */
-    report_fail("sel_cr0 test. Can not recover from this - exiting");
-    exit(report_summary());
+	/*
+	 * If we are here, the test failed and, since we are no longer in
+	 * guest mode, we cannot trigger an intercept to report it.
+	 * Trigger a triple fault for now.
+	 */
+	report_fail("sel_cr0 test. Can not recover from this - exiting");
+	exit(report_summary());
 }
 
 static bool sel_cr0_bug_check(struct svm_test *test)
 {
-    return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
+	return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
 }
 
 #define TSC_ADJUST_VALUE    (1ll << 32)
@@ -702,46 +716,45 @@ static bool ok;
 
 static bool tsc_adjust_supported(void)
 {
-    return this_cpu_has(X86_FEATURE_TSC_ADJUST);
+	return this_cpu_has(X86_FEATURE_TSC_ADJUST);
 }
 
 static void tsc_adjust_prepare(struct svm_test *test)
 {
-    default_prepare(test);
-    vmcb->control.tsc_offset = TSC_OFFSET_VALUE;
+	default_prepare(test);
+	vmcb->control.tsc_offset = TSC_OFFSET_VALUE;
 
-    wrmsr(MSR_IA32_TSC_ADJUST, -TSC_ADJUST_VALUE);
-    int64_t adjust = rdmsr(MSR_IA32_TSC_ADJUST);
-    ok = adjust == -TSC_ADJUST_VALUE;
+	wrmsr(MSR_IA32_TSC_ADJUST, -TSC_ADJUST_VALUE);
+	int64_t adjust = rdmsr(MSR_IA32_TSC_ADJUST);
+	ok = adjust == -TSC_ADJUST_VALUE;
 }
 
 static void tsc_adjust_test(struct svm_test *test)
 {
-    int64_t adjust = rdmsr(MSR_IA32_TSC_ADJUST);
-    ok &= adjust == -TSC_ADJUST_VALUE;
+	int64_t adjust = rdmsr(MSR_IA32_TSC_ADJUST);
+	ok &= adjust == -TSC_ADJUST_VALUE;
 
-    uint64_t l1_tsc = rdtsc() - TSC_OFFSET_VALUE;
-    wrmsr(MSR_IA32_TSC, l1_tsc - TSC_ADJUST_VALUE);
+	uint64_t l1_tsc = rdtsc() - TSC_OFFSET_VALUE;
+	wrmsr(MSR_IA32_TSC, l1_tsc - TSC_ADJUST_VALUE);
 
-    adjust = rdmsr(MSR_IA32_TSC_ADJUST);
-    ok &= adjust <= -2 * TSC_ADJUST_VALUE;
+	adjust = rdmsr(MSR_IA32_TSC_ADJUST);
+	ok &= adjust <= -2 * TSC_ADJUST_VALUE;
 
-    uint64_t l1_tsc_end = rdtsc() - TSC_OFFSET_VALUE;
-    ok &= (l1_tsc_end + TSC_ADJUST_VALUE - l1_tsc) < TSC_ADJUST_VALUE;
+	uint64_t l1_tsc_end = rdtsc() - TSC_OFFSET_VALUE;
+	ok &= (l1_tsc_end + TSC_ADJUST_VALUE - l1_tsc) < TSC_ADJUST_VALUE;
 
-    uint64_t l1_tsc_msr = rdmsr(MSR_IA32_TSC) - TSC_OFFSET_VALUE;
-    ok &= (l1_tsc_msr + TSC_ADJUST_VALUE - l1_tsc) < TSC_ADJUST_VALUE;
+	uint64_t l1_tsc_msr = rdmsr(MSR_IA32_TSC) - TSC_OFFSET_VALUE;
+	ok &= (l1_tsc_msr + TSC_ADJUST_VALUE - l1_tsc) < TSC_ADJUST_VALUE;
 }
 
 static bool tsc_adjust_check(struct svm_test *test)
 {
-    int64_t adjust = rdmsr(MSR_IA32_TSC_ADJUST);
+	int64_t adjust = rdmsr(MSR_IA32_TSC_ADJUST);
 
-    wrmsr(MSR_IA32_TSC_ADJUST, 0);
-    return ok && adjust <= -2 * TSC_ADJUST_VALUE;
+	wrmsr(MSR_IA32_TSC_ADJUST, 0);
+	return ok && adjust <= -2 * TSC_ADJUST_VALUE;
 }
 
-
 static u64 guest_tsc_delay_value;
 /* number of bits to shift tsc right for stable result */
 #define TSC_SHIFT 24
@@ -749,917 +762,941 @@ static u64 guest_tsc_delay_value;
 
 static void svm_tsc_scale_guest(struct svm_test *test)
 {
-    u64 start_tsc = rdtsc();
+	u64 start_tsc = rdtsc();
 
-    while (rdtsc() - start_tsc < guest_tsc_delay_value)
-        cpu_relax();
+	while (rdtsc() - start_tsc < guest_tsc_delay_value)
+		cpu_relax();
 }
 
 static void svm_tsc_scale_run_testcase(u64 duration,
-        double tsc_scale, u64 tsc_offset)
+				       double tsc_scale, u64 tsc_offset)
 {
-    u64 start_tsc, actual_duration;
+	u64 start_tsc, actual_duration;
 
-    guest_tsc_delay_value = (duration << TSC_SHIFT) * tsc_scale;
+	guest_tsc_delay_value = (duration << TSC_SHIFT) * tsc_scale;
 
-    test_set_guest(svm_tsc_scale_guest);
-    vmcb->control.tsc_offset = tsc_offset;
-    wrmsr(MSR_AMD64_TSC_RATIO, (u64)(tsc_scale * (1ULL << 32)));
+	test_set_guest(svm_tsc_scale_guest);
+	vmcb->control.tsc_offset = tsc_offset;
+	wrmsr(MSR_AMD64_TSC_RATIO, (u64) (tsc_scale * (1ULL << 32)));
 
-    start_tsc = rdtsc();
+	start_tsc = rdtsc();
 
-    if (svm_vmrun() != SVM_EXIT_VMMCALL)
-        report_fail("unexpected vm exit code 0x%x", vmcb->control.exit_code);
+	if (svm_vmrun() != SVM_EXIT_VMMCALL)
+		report_fail("unexpected vm exit code 0x%x",
+			    vmcb->control.exit_code);
 
-    actual_duration = (rdtsc() - start_tsc) >> TSC_SHIFT;
+	actual_duration = (rdtsc() - start_tsc) >> TSC_SHIFT;
 
-    report(duration == actual_duration, "tsc delay (expected: %lu, actual: %lu)",
-            duration, actual_duration);
+	report(duration == actual_duration,
+	       "tsc delay (expected: %lu, actual: %lu)", duration,
+	       actual_duration);
 }
 
 static void svm_tsc_scale_test(void)
 {
-    int i;
+	int i;
 
-    if (!tsc_scale_supported()) {
-        report_skip("TSC scale not supported in the guest");
-        return;
-    }
+	if (!tsc_scale_supported()) {
+		report_skip("TSC scale not supported in the guest");
+		return;
+	}
 
-    report(rdmsr(MSR_AMD64_TSC_RATIO) == TSC_RATIO_DEFAULT,
-           "initial TSC scale ratio");
+	report(rdmsr(MSR_AMD64_TSC_RATIO) == TSC_RATIO_DEFAULT,
+	       "initial TSC scale ratio");
 
-    for (i = 0 ; i < TSC_SCALE_ITERATIONS; i++) {
+	for (i = 0; i < TSC_SCALE_ITERATIONS; i++) {
 
-        double tsc_scale = (double)(rdrand() % 100 + 1) / 10;
-        int duration = rdrand() % 50 + 1;
-        u64 tsc_offset = rdrand();
+		double tsc_scale = (double)(rdrand() % 100 + 1) / 10;
+		int duration = rdrand() % 50 + 1;
+		u64 tsc_offset = rdrand();
 
-        report_info("duration=%d, tsc_scale=%d, tsc_offset=%ld",
-                    duration, (int)(tsc_scale * 100), tsc_offset);
+		report_info("duration=%d, tsc_scale=%d, tsc_offset=%ld",
+			    duration, (int)(tsc_scale * 100), tsc_offset);
 
-        svm_tsc_scale_run_testcase(duration, tsc_scale, tsc_offset);
-    }
+		svm_tsc_scale_run_testcase(duration, tsc_scale, tsc_offset);
+	}
 
-    svm_tsc_scale_run_testcase(50, 255, rdrand());
-    svm_tsc_scale_run_testcase(50, 0.0001, rdrand());
+	svm_tsc_scale_run_testcase(50, 255, rdrand());
+	svm_tsc_scale_run_testcase(50, 0.0001, rdrand());
 }
 
 static void latency_prepare(struct svm_test *test)
 {
-    default_prepare(test);
-    runs = LATENCY_RUNS;
-    latvmrun_min = latvmexit_min = -1ULL;
-    latvmrun_max = latvmexit_max = 0;
-    vmrun_sum = vmexit_sum = 0;
-    tsc_start = rdtsc();
+	default_prepare(test);
+	runs = LATENCY_RUNS;
+	latvmrun_min = latvmexit_min = -1ULL;
+	latvmrun_max = latvmexit_max = 0;
+	vmrun_sum = vmexit_sum = 0;
+	tsc_start = rdtsc();
 }
 
 static void latency_test(struct svm_test *test)
 {
-    u64 cycles;
+	u64 cycles;
 
 start:
-    tsc_end = rdtsc();
+	tsc_end = rdtsc();
 
-    cycles = tsc_end - tsc_start;
+	cycles = tsc_end - tsc_start;
 
-    if (cycles > latvmrun_max)
-        latvmrun_max = cycles;
+	if (cycles > latvmrun_max)
+		latvmrun_max = cycles;
 
-    if (cycles < latvmrun_min)
-        latvmrun_min = cycles;
+	if (cycles < latvmrun_min)
+		latvmrun_min = cycles;
 
-    vmrun_sum += cycles;
+	vmrun_sum += cycles;
 
-    tsc_start = rdtsc();
+	tsc_start = rdtsc();
 
-    asm volatile ("vmmcall" : : : "memory");
-    goto start;
+	asm volatile ("vmmcall":::"memory");
+	goto start;
 }
 
 static bool latency_finished(struct svm_test *test)
 {
-    u64 cycles;
+	u64 cycles;
 
-    tsc_end = rdtsc();
+	tsc_end = rdtsc();
 
-    cycles = tsc_end - tsc_start;
+	cycles = tsc_end - tsc_start;
 
-    if (cycles > latvmexit_max)
-        latvmexit_max = cycles;
+	if (cycles > latvmexit_max)
+		latvmexit_max = cycles;
 
-    if (cycles < latvmexit_min)
-        latvmexit_min = cycles;
+	if (cycles < latvmexit_min)
+		latvmexit_min = cycles;
 
-    vmexit_sum += cycles;
+	vmexit_sum += cycles;
 
-    vmcb->save.rip += 3;
+	vmcb->save.rip += 3;
 
-    runs -= 1;
+	runs -= 1;
 
-    tsc_end = rdtsc();
+	tsc_end = rdtsc();
 
-    return runs == 0;
+	return runs == 0;
 }
 
 static bool latency_finished_clean(struct svm_test *test)
 {
-    vmcb->control.clean = VMCB_CLEAN_ALL;
-    return latency_finished(test);
+	vmcb->control.clean = VMCB_CLEAN_ALL;
+	return latency_finished(test);
 }
 
 static bool latency_check(struct svm_test *test)
 {
-    printf("    Latency VMRUN : max: %ld min: %ld avg: %ld\n", latvmrun_max,
-            latvmrun_min, vmrun_sum / LATENCY_RUNS);
-    printf("    Latency VMEXIT: max: %ld min: %ld avg: %ld\n", latvmexit_max,
-            latvmexit_min, vmexit_sum / LATENCY_RUNS);
-    return true;
+	printf("    Latency VMRUN : max: %ld min: %ld avg: %ld\n", latvmrun_max,
+	       latvmrun_min, vmrun_sum / LATENCY_RUNS);
+	printf("    Latency VMEXIT: max: %ld min: %ld avg: %ld\n",
+	       latvmexit_max, latvmexit_min, vmexit_sum / LATENCY_RUNS);
+	return true;
 }
 
 static void lat_svm_insn_prepare(struct svm_test *test)
 {
-    default_prepare(test);
-    runs = LATENCY_RUNS;
-    latvmload_min = latvmsave_min = latstgi_min = latclgi_min = -1ULL;
-    latvmload_max = latvmsave_max = latstgi_max = latclgi_max = 0;
-    vmload_sum = vmsave_sum = stgi_sum = clgi_sum;
+	default_prepare(test);
+	runs = LATENCY_RUNS;
+	latvmload_min = latvmsave_min = latstgi_min = latclgi_min = -1ULL;
+	latvmload_max = latvmsave_max = latstgi_max = latclgi_max = 0;
+	vmload_sum = vmsave_sum = stgi_sum = clgi_sum = 0;
 }
 
 static bool lat_svm_insn_finished(struct svm_test *test)
 {
-    u64 vmcb_phys = virt_to_phys(vmcb);
-    u64 cycles;
-
-    for ( ; runs != 0; runs--) {
-        tsc_start = rdtsc();
-        asm volatile("vmload %0\n\t" : : "a"(vmcb_phys) : "memory");
-        cycles = rdtsc() - tsc_start;
-        if (cycles > latvmload_max)
-            latvmload_max = cycles;
-        if (cycles < latvmload_min)
-            latvmload_min = cycles;
-        vmload_sum += cycles;
-
-        tsc_start = rdtsc();
-        asm volatile("vmsave %0\n\t" : : "a"(vmcb_phys) : "memory");
-        cycles = rdtsc() - tsc_start;
-        if (cycles > latvmsave_max)
-            latvmsave_max = cycles;
-        if (cycles < latvmsave_min)
-            latvmsave_min = cycles;
-        vmsave_sum += cycles;
-
-        tsc_start = rdtsc();
-        asm volatile("stgi\n\t");
-        cycles = rdtsc() - tsc_start;
-        if (cycles > latstgi_max)
-            latstgi_max = cycles;
-        if (cycles < latstgi_min)
-            latstgi_min = cycles;
-        stgi_sum += cycles;
-
-        tsc_start = rdtsc();
-        asm volatile("clgi\n\t");
-        cycles = rdtsc() - tsc_start;
-        if (cycles > latclgi_max)
-            latclgi_max = cycles;
-        if (cycles < latclgi_min)
-            latclgi_min = cycles;
-        clgi_sum += cycles;
-    }
+	u64 vmcb_phys = virt_to_phys(vmcb);
+	u64 cycles;
+
+	for (; runs != 0; runs--) {
+		tsc_start = rdtsc();
+		asm volatile ("vmload %0\n\t"::"a" (vmcb_phys):"memory");
+		cycles = rdtsc() - tsc_start;
+		if (cycles > latvmload_max)
+			latvmload_max = cycles;
+		if (cycles < latvmload_min)
+			latvmload_min = cycles;
+		vmload_sum += cycles;
+
+		tsc_start = rdtsc();
+		asm volatile ("vmsave %0\n\t"::"a" (vmcb_phys):"memory");
+		cycles = rdtsc() - tsc_start;
+		if (cycles > latvmsave_max)
+			latvmsave_max = cycles;
+		if (cycles < latvmsave_min)
+			latvmsave_min = cycles;
+		vmsave_sum += cycles;
+
+		tsc_start = rdtsc();
+		asm volatile ("stgi\n\t");
+		cycles = rdtsc() - tsc_start;
+		if (cycles > latstgi_max)
+			latstgi_max = cycles;
+		if (cycles < latstgi_min)
+			latstgi_min = cycles;
+		stgi_sum += cycles;
+
+		tsc_start = rdtsc();
+		asm volatile ("clgi\n\t");
+		cycles = rdtsc() - tsc_start;
+		if (cycles > latclgi_max)
+			latclgi_max = cycles;
+		if (cycles < latclgi_min)
+			latclgi_min = cycles;
+		clgi_sum += cycles;
+	}
 
-    tsc_end = rdtsc();
+	tsc_end = rdtsc();
 
-    return true;
+	return true;
 }
 
 static bool lat_svm_insn_check(struct svm_test *test)
 {
-    printf("    Latency VMLOAD: max: %ld min: %ld avg: %ld\n", latvmload_max,
-            latvmload_min, vmload_sum / LATENCY_RUNS);
-    printf("    Latency VMSAVE: max: %ld min: %ld avg: %ld\n", latvmsave_max,
-            latvmsave_min, vmsave_sum / LATENCY_RUNS);
-    printf("    Latency STGI:   max: %ld min: %ld avg: %ld\n", latstgi_max,
-            latstgi_min, stgi_sum / LATENCY_RUNS);
-    printf("    Latency CLGI:   max: %ld min: %ld avg: %ld\n", latclgi_max,
-            latclgi_min, clgi_sum / LATENCY_RUNS);
-    return true;
+	printf("    Latency VMLOAD: max: %ld min: %ld avg: %ld\n",
+	       latvmload_max, latvmload_min, vmload_sum / LATENCY_RUNS);
+	printf("    Latency VMSAVE: max: %ld min: %ld avg: %ld\n",
+	       latvmsave_max, latvmsave_min, vmsave_sum / LATENCY_RUNS);
+	printf("    Latency STGI:   max: %ld min: %ld avg: %ld\n", latstgi_max,
+	       latstgi_min, stgi_sum / LATENCY_RUNS);
+	printf("    Latency CLGI:   max: %ld min: %ld avg: %ld\n", latclgi_max,
+	       latclgi_min, clgi_sum / LATENCY_RUNS);
+	return true;
 }
 
 bool pending_event_ipi_fired;
 bool pending_event_guest_run;
 
-static void pending_event_ipi_isr(isr_regs_t *regs)
+static void pending_event_ipi_isr(isr_regs_t *regs)
 {
-    pending_event_ipi_fired = true;
-    eoi();
+	pending_event_ipi_fired = true;
+	eoi();
 }
 
 static void pending_event_prepare(struct svm_test *test)
 {
-    int ipi_vector = 0xf1;
+	int ipi_vector = 0xf1;
 
-    default_prepare(test);
+	default_prepare(test);
 
-    pending_event_ipi_fired = false;
+	pending_event_ipi_fired = false;
 
-    handle_irq(ipi_vector, pending_event_ipi_isr);
+	handle_irq(ipi_vector, pending_event_ipi_isr);
 
-    pending_event_guest_run = false;
+	pending_event_guest_run = false;
 
-    vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
-    vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
+	vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
+	vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
 
-    apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL |
-                  APIC_DM_FIXED | ipi_vector, 0);
+	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL |
+		       APIC_DM_FIXED | ipi_vector, 0);
 
-    set_test_stage(test, 0);
+	set_test_stage(test, 0);
 }
 
 static void pending_event_test(struct svm_test *test)
 {
-    pending_event_guest_run = true;
+	pending_event_guest_run = true;
 }
 
 static bool pending_event_finished(struct svm_test *test)
 {
-    switch (get_test_stage(test)) {
-    case 0:
-        if (vmcb->control.exit_code != SVM_EXIT_INTR) {
-            report_fail("VMEXIT not due to pending interrupt. Exit reason 0x%x",
-                        vmcb->control.exit_code);
-            return true;
-        }
+	switch (get_test_stage(test)) {
+	case 0:
+		if (vmcb->control.exit_code != SVM_EXIT_INTR) {
+			report_fail
+			    ("VMEXIT not due to pending interrupt. Exit reason 0x%x",
+			     vmcb->control.exit_code);
+			return true;
+		}
 
-        vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
-        vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+		vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
+		vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 
-        if (pending_event_guest_run) {
-            report_fail("Guest ran before host received IPI\n");
-            return true;
-        }
+		if (pending_event_guest_run) {
+			report_fail("Guest ran before host received IPI\n");
+			return true;
+		}
 
-        irq_enable();
-        asm volatile ("nop");
-        irq_disable();
+		irq_enable();
+		asm volatile ("nop");
+		irq_disable();
 
-        if (!pending_event_ipi_fired) {
-            report_fail("Pending interrupt not dispatched after IRQ enabled\n");
-            return true;
-        }
-        break;
+		if (!pending_event_ipi_fired) {
+			report_fail
+			    ("Pending interrupt not dispatched after IRQ enabled\n");
+			return true;
+		}
+		break;
 
-    case 1:
-        if (!pending_event_guest_run) {
-            report_fail("Guest did not resume when no interrupt\n");
-            return true;
-        }
-        break;
-    }
+	case 1:
+		if (!pending_event_guest_run) {
+			report_fail("Guest did not resume when no interrupt\n");
+			return true;
+		}
+		break;
+	}
 
-    inc_test_stage(test);
+	inc_test_stage(test);
 
-    return get_test_stage(test) == 2;
+	return get_test_stage(test) == 2;
 }
 
 static bool pending_event_check(struct svm_test *test)
 {
-    return get_test_stage(test) == 2;
+	return get_test_stage(test) == 2;
 }
 
 static void pending_event_cli_prepare(struct svm_test *test)
 {
-    default_prepare(test);
+	default_prepare(test);
 
-    pending_event_ipi_fired = false;
+	pending_event_ipi_fired = false;
 
-    handle_irq(0xf1, pending_event_ipi_isr);
+	handle_irq(0xf1, pending_event_ipi_isr);
 
-    apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL |
-              APIC_DM_FIXED | 0xf1, 0);
+	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL |
+		       APIC_DM_FIXED | 0xf1, 0);
 
-    set_test_stage(test, 0);
+	set_test_stage(test, 0);
 }
 
 static void pending_event_cli_prepare_gif_clear(struct svm_test *test)
 {
-    asm("cli");
+	asm("cli");
 }
 
 static void pending_event_cli_test(struct svm_test *test)
 {
-    if (pending_event_ipi_fired == true) {
-        set_test_stage(test, -1);
-        report_fail("Interrupt preceeded guest");
-        vmmcall();
-    }
+	if (pending_event_ipi_fired == true) {
+		set_test_stage(test, -1);
+		report_fail("Interrupt preceeded guest");
+		vmmcall();
+	}
 
-    /* VINTR_MASKING is zero.  This should cause the IPI to fire.  */
-    irq_enable();
-    asm volatile ("nop");
-    irq_disable();
+	/* VINTR_MASKING is zero.  This should cause the IPI to fire.  */
+	irq_enable();
+	asm volatile ("nop");
+	irq_disable();
 
-    if (pending_event_ipi_fired != true) {
-        set_test_stage(test, -1);
-        report_fail("Interrupt not triggered by guest");
-    }
+	if (pending_event_ipi_fired != true) {
+		set_test_stage(test, -1);
+		report_fail("Interrupt not triggered by guest");
+	}
 
-    vmmcall();
+	vmmcall();
 
-    /*
-     * Now VINTR_MASKING=1, but no interrupt is pending so
-     * the VINTR interception should be clear in VMCB02.  Check
-     * that L0 did not leave a stale VINTR in the VMCB.
-     */
-    irq_enable();
-    asm volatile ("nop");
-    irq_disable();
+	/*
+	 * Now VINTR_MASKING=1, but no interrupt is pending so
+	 * the VINTR interception should be clear in VMCB02.  Check
+	 * that L0 did not leave a stale VINTR in the VMCB.
+	 */
+	irq_enable();
+	asm volatile ("nop");
+	irq_disable();
 }
 
 static bool pending_event_cli_finished(struct svm_test *test)
 {
-    if ( vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
-        report_fail("VM_EXIT return to host is not EXIT_VMMCALL exit reason 0x%x",
-                    vmcb->control.exit_code);
-        return true;
-    }
+	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		report_fail
+		    ("VM_EXIT return to host is not EXIT_VMMCALL exit reason 0x%x",
+		     vmcb->control.exit_code);
+		return true;
+	}
 
-    switch (get_test_stage(test)) {
-    case 0:
-        vmcb->save.rip += 3;
+	switch (get_test_stage(test)) {
+	case 0:
+		vmcb->save.rip += 3;
 
-        pending_event_ipi_fired = false;
+		pending_event_ipi_fired = false;
 
-        vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
+		vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
 
-	/* Now entering again with VINTR_MASKING=1.  */
-        apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL |
-              APIC_DM_FIXED | 0xf1, 0);
+		/* Now entering again with VINTR_MASKING=1.  */
+		apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL |
+			       APIC_DM_FIXED | 0xf1, 0);
 
-        break;
+		break;
 
-    case 1:
-        if (pending_event_ipi_fired == true) {
-            report_fail("Interrupt triggered by guest");
-            return true;
-        }
+	case 1:
+		if (pending_event_ipi_fired == true) {
+			report_fail("Interrupt triggered by guest");
+			return true;
+		}
 
-        irq_enable();
-        asm volatile ("nop");
-        irq_disable();
+		irq_enable();
+		asm volatile ("nop");
+		irq_disable();
 
-        if (pending_event_ipi_fired != true) {
-            report_fail("Interrupt not triggered by host");
-            return true;
-        }
+		if (pending_event_ipi_fired != true) {
+			report_fail("Interrupt not triggered by host");
+			return true;
+		}
 
-        break;
+		break;
 
-    default:
-        return true;
-    }
+	default:
+		return true;
+	}
 
-    inc_test_stage(test);
+	inc_test_stage(test);
 
-    return get_test_stage(test) == 2;
+	return get_test_stage(test) == 2;
 }
 
 static bool pending_event_cli_check(struct svm_test *test)
 {
-    return get_test_stage(test) == 2;
+	return get_test_stage(test) == 2;
 }
 
 #define TIMER_VECTOR    222
 
 static volatile bool timer_fired;
 
-static void timer_isr(isr_regs_t *regs)
+static void timer_isr(isr_regs_t *regs)
 {
-    timer_fired = true;
-    apic_write(APIC_EOI, 0);
+	timer_fired = true;
+	apic_write(APIC_EOI, 0);
 }
 
 static void interrupt_prepare(struct svm_test *test)
 {
-    default_prepare(test);
-    handle_irq(TIMER_VECTOR, timer_isr);
-    timer_fired = false;
-    set_test_stage(test, 0);
+	default_prepare(test);
+	handle_irq(TIMER_VECTOR, timer_isr);
+	timer_fired = false;
+	set_test_stage(test, 0);
 }
 
 static void interrupt_test(struct svm_test *test)
 {
-    long long start, loops;
+	long long start, loops;
 
-    apic_write(APIC_LVTT, TIMER_VECTOR);
-    irq_enable();
-    apic_write(APIC_TMICT, 1); //Timer Initial Count Register 0x380 one-shot
-    for (loops = 0; loops < 10000000 && !timer_fired; loops++)
-        asm volatile ("nop");
+	apic_write(APIC_LVTT, TIMER_VECTOR);
+	irq_enable();
+	apic_write(APIC_TMICT, 1);	// Timer Initial Count Register 0x380, one-shot
+	for (loops = 0; loops < 10000000 && !timer_fired; loops++)
+		asm volatile ("nop");
 
-    report(timer_fired, "direct interrupt while running guest");
+	report(timer_fired, "direct interrupt while running guest");
 
-    if (!timer_fired) {
-        set_test_stage(test, -1);
-        vmmcall();
-    }
+	if (!timer_fired) {
+		set_test_stage(test, -1);
+		vmmcall();
+	}
 
-    apic_write(APIC_TMICT, 0);
-    irq_disable();
-    vmmcall();
+	apic_write(APIC_TMICT, 0);
+	irq_disable();
+	vmmcall();
 
-    timer_fired = false;
-    apic_write(APIC_TMICT, 1);
-    for (loops = 0; loops < 10000000 && !timer_fired; loops++)
-        asm volatile ("nop");
+	timer_fired = false;
+	apic_write(APIC_TMICT, 1);
+	for (loops = 0; loops < 10000000 && !timer_fired; loops++)
+		asm volatile ("nop");
 
-    report(timer_fired, "intercepted interrupt while running guest");
+	report(timer_fired, "intercepted interrupt while running guest");
 
-    if (!timer_fired) {
-        set_test_stage(test, -1);
-        vmmcall();
-    }
+	if (!timer_fired) {
+		set_test_stage(test, -1);
+		vmmcall();
+	}
 
-    irq_enable();
-    apic_write(APIC_TMICT, 0);
-    irq_disable();
+	irq_enable();
+	apic_write(APIC_TMICT, 0);
+	irq_disable();
 
-    timer_fired = false;
-    start = rdtsc();
-    apic_write(APIC_TMICT, 1000000);
-    safe_halt();
+	timer_fired = false;
+	start = rdtsc();
+	apic_write(APIC_TMICT, 1000000);
+	safe_halt();
 
-    report(rdtsc() - start > 10000 && timer_fired,
-          "direct interrupt + hlt");
+	report(rdtsc() - start > 10000 && timer_fired,
+	       "direct interrupt + hlt");
 
-    if (!timer_fired) {
-        set_test_stage(test, -1);
-        vmmcall();
-    }
+	if (!timer_fired) {
+		set_test_stage(test, -1);
+		vmmcall();
+	}
 
-    apic_write(APIC_TMICT, 0);
-    irq_disable();
-    vmmcall();
+	apic_write(APIC_TMICT, 0);
+	irq_disable();
+	vmmcall();
 
-    timer_fired = false;
-    start = rdtsc();
-    apic_write(APIC_TMICT, 1000000);
-    asm volatile ("hlt");
+	timer_fired = false;
+	start = rdtsc();
+	apic_write(APIC_TMICT, 1000000);
+	asm volatile ("hlt");
 
-    report(rdtsc() - start > 10000 && timer_fired,
-           "intercepted interrupt + hlt");
+	report(rdtsc() - start > 10000 && timer_fired,
+	       "intercepted interrupt + hlt");
 
-    if (!timer_fired) {
-        set_test_stage(test, -1);
-        vmmcall();
-    }
+	if (!timer_fired) {
+		set_test_stage(test, -1);
+		vmmcall();
+	}
 
-    apic_write(APIC_TMICT, 0);
-    irq_disable();
+	apic_write(APIC_TMICT, 0);
+	irq_disable();
 }
 
 static bool interrupt_finished(struct svm_test *test)
 {
-    switch (get_test_stage(test)) {
-    case 0:
-    case 2:
-        if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
-            report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-                        vmcb->control.exit_code);
-            return true;
-        }
-        vmcb->save.rip += 3;
+	switch (get_test_stage(test)) {
+	case 0:
+	case 2:
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			report_fail
+			    ("VMEXIT not due to vmmcall. Exit reason 0x%x",
+			     vmcb->control.exit_code);
+			return true;
+		}
+		vmcb->save.rip += 3;
 
-        vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
-        vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
-        break;
+		vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
+		vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
+		break;
 
-    case 1:
-    case 3:
-        if (vmcb->control.exit_code != SVM_EXIT_INTR) {
-            report_fail("VMEXIT not due to intr intercept. Exit reason 0x%x",
-                        vmcb->control.exit_code);
-            return true;
-        }
+	case 1:
+	case 3:
+		if (vmcb->control.exit_code != SVM_EXIT_INTR) {
+			report_fail
+			    ("VMEXIT not due to intr intercept. Exit reason 0x%x",
+			     vmcb->control.exit_code);
+			return true;
+		}
 
-        irq_enable();
-        asm volatile ("nop");
-        irq_disable();
+		irq_enable();
+		asm volatile ("nop");
+		irq_disable();
 
-        vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
-        vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
-        break;
+		vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
+		vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+		break;
 
-    case 4:
-        break;
+	case 4:
+		break;
 
-    default:
-        return true;
-    }
+	default:
+		return true;
+	}
 
-    inc_test_stage(test);
+	inc_test_stage(test);
 
-    return get_test_stage(test) == 5;
+	return get_test_stage(test) == 5;
 }
 
 static bool interrupt_check(struct svm_test *test)
 {
-    return get_test_stage(test) == 5;
+	return get_test_stage(test) == 5;
 }
 
 static volatile bool nmi_fired;
 
 static void nmi_handler(struct ex_regs *regs)
 {
-    nmi_fired = true;
+	nmi_fired = true;
 }
 
 static void nmi_prepare(struct svm_test *test)
 {
-    default_prepare(test);
-    nmi_fired = false;
-    handle_exception(NMI_VECTOR, nmi_handler);
-    set_test_stage(test, 0);
+	default_prepare(test);
+	nmi_fired = false;
+	handle_exception(NMI_VECTOR, nmi_handler);
+	set_test_stage(test, 0);
 }
 
 static void nmi_test(struct svm_test *test)
 {
-    apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_NMI | APIC_INT_ASSERT, 0);
+	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_NMI |
+		       APIC_INT_ASSERT, 0);
 
-    report(nmi_fired, "direct NMI while running guest");
+	report(nmi_fired, "direct NMI while running guest");
 
-    if (!nmi_fired)
-        set_test_stage(test, -1);
+	if (!nmi_fired)
+		set_test_stage(test, -1);
 
-    vmmcall();
+	vmmcall();
 
-    nmi_fired = false;
+	nmi_fired = false;
 
-    apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_NMI | APIC_INT_ASSERT, 0);
+	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_NMI |
+		       APIC_INT_ASSERT, 0);
 
-    if (!nmi_fired) {
-        report(nmi_fired, "intercepted pending NMI not dispatched");
-        set_test_stage(test, -1);
-    }
+	if (!nmi_fired) {
+		report(nmi_fired, "intercepted pending NMI not dispatched");
+		set_test_stage(test, -1);
+	}
 
 }
 
 static bool nmi_finished(struct svm_test *test)
 {
-    switch (get_test_stage(test)) {
-    case 0:
-        if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
-            report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-                        vmcb->control.exit_code);
-            return true;
-        }
-        vmcb->save.rip += 3;
+	switch (get_test_stage(test)) {
+	case 0:
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			report_fail
+			    ("VMEXIT not due to vmmcall. Exit reason 0x%x",
+			     vmcb->control.exit_code);
+			return true;
+		}
+		vmcb->save.rip += 3;
 
-        vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
-        break;
+		vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
+		break;
 
-    case 1:
-        if (vmcb->control.exit_code != SVM_EXIT_NMI) {
-            report_fail("VMEXIT not due to NMI intercept. Exit reason 0x%x",
-                        vmcb->control.exit_code);
-            return true;
-        }
+	case 1:
+		if (vmcb->control.exit_code != SVM_EXIT_NMI) {
+			report_fail
+			    ("VMEXIT not due to NMI intercept. Exit reason 0x%x",
+			     vmcb->control.exit_code);
+			return true;
+		}
 
-        report_pass("NMI intercept while running guest");
-        break;
+		report_pass("NMI intercept while running guest");
+		break;
 
-    case 2:
-        break;
+	case 2:
+		break;
 
-    default:
-        return true;
-    }
+	default:
+		return true;
+	}
 
-    inc_test_stage(test);
+	inc_test_stage(test);
 
-    return get_test_stage(test) == 3;
+	return get_test_stage(test) == 3;
 }
 
 static bool nmi_check(struct svm_test *test)
 {
-    return get_test_stage(test) == 3;
+	return get_test_stage(test) == 3;
 }
 
 #define NMI_DELAY 100000000ULL
 
 static void nmi_message_thread(void *_test)
 {
-    struct svm_test *test = _test;
+	struct svm_test *test = _test;
 
-    while (get_test_stage(test) != 1)
-        pause();
+	while (get_test_stage(test) != 1)
+		pause();
 
-    delay(NMI_DELAY);
+	delay(NMI_DELAY);
 
-    apic_icr_write(APIC_DEST_PHYSICAL | APIC_DM_NMI | APIC_INT_ASSERT, id_map[0]);
+	apic_icr_write(APIC_DEST_PHYSICAL | APIC_DM_NMI | APIC_INT_ASSERT,
+		       id_map[0]);
 
-    while (get_test_stage(test) != 2)
-        pause();
+	while (get_test_stage(test) != 2)
+		pause();
 
-    delay(NMI_DELAY);
+	delay(NMI_DELAY);
 
-    apic_icr_write(APIC_DEST_PHYSICAL | APIC_DM_NMI | APIC_INT_ASSERT, id_map[0]);
+	apic_icr_write(APIC_DEST_PHYSICAL | APIC_DM_NMI | APIC_INT_ASSERT,
+		       id_map[0]);
 }
 
 static void nmi_hlt_test(struct svm_test *test)
 {
-    long long start;
+	long long start;
 
-    on_cpu_async(1, nmi_message_thread, test);
+	on_cpu_async(1, nmi_message_thread, test);
 
-    start = rdtsc();
+	start = rdtsc();
 
-    set_test_stage(test, 1);
+	set_test_stage(test, 1);
 
-    asm volatile ("hlt");
+	asm volatile ("hlt");
 
-    report((rdtsc() - start > NMI_DELAY) && nmi_fired,
-          "direct NMI + hlt");
+	report((rdtsc() - start > NMI_DELAY) && nmi_fired, "direct NMI + hlt");
 
-    if (!nmi_fired)
-        set_test_stage(test, -1);
+	if (!nmi_fired)
+		set_test_stage(test, -1);
 
-    nmi_fired = false;
+	nmi_fired = false;
 
-    vmmcall();
+	vmmcall();
 
-    start = rdtsc();
+	start = rdtsc();
 
-    set_test_stage(test, 2);
+	set_test_stage(test, 2);
 
-    asm volatile ("hlt");
+	asm volatile ("hlt");
 
-    report((rdtsc() - start > NMI_DELAY) && nmi_fired,
-           "intercepted NMI + hlt");
+	report((rdtsc() - start > NMI_DELAY) && nmi_fired,
+	       "intercepted NMI + hlt");
 
-    if (!nmi_fired) {
-        report(nmi_fired, "intercepted pending NMI not dispatched");
-        set_test_stage(test, -1);
-        vmmcall();
-    }
+	if (!nmi_fired) {
+		report(nmi_fired, "intercepted pending NMI not dispatched");
+		set_test_stage(test, -1);
+		vmmcall();
+	}
 
-    set_test_stage(test, 3);
+	set_test_stage(test, 3);
 }
 
 static bool nmi_hlt_finished(struct svm_test *test)
 {
-    switch (get_test_stage(test)) {
-    case 1:
-        if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
-            report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-                        vmcb->control.exit_code);
-            return true;
-        }
-        vmcb->save.rip += 3;
+	switch (get_test_stage(test)) {
+	case 1:
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			report_fail
+			    ("VMEXIT not due to vmmcall. Exit reason 0x%x",
+			     vmcb->control.exit_code);
+			return true;
+		}
+		vmcb->save.rip += 3;
 
-        vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
-        break;
+		vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
+		break;
 
-    case 2:
-        if (vmcb->control.exit_code != SVM_EXIT_NMI) {
-            report_fail("VMEXIT not due to NMI intercept. Exit reason 0x%x",
-                        vmcb->control.exit_code);
-            return true;
-        }
+	case 2:
+		if (vmcb->control.exit_code != SVM_EXIT_NMI) {
+			report_fail
+			    ("VMEXIT not due to NMI intercept. Exit reason 0x%x",
+			     vmcb->control.exit_code);
+			return true;
+		}
 
-        report_pass("NMI intercept while running guest");
-        break;
+		report_pass("NMI intercept while running guest");
+		break;
 
-    case 3:
-        break;
+	case 3:
+		break;
 
-    default:
-        return true;
-    }
+	default:
+		return true;
+	}
 
-    return get_test_stage(test) == 3;
+	return get_test_stage(test) == 3;
 }
 
 static bool nmi_hlt_check(struct svm_test *test)
 {
-    return get_test_stage(test) == 3;
+	return get_test_stage(test) == 3;
 }
 
 static volatile int count_exc = 0;
 
 static void my_isr(struct ex_regs *r)
 {
-        count_exc++;
+	count_exc++;
 }
 
 static void exc_inject_prepare(struct svm_test *test)
 {
-    default_prepare(test);
-    handle_exception(DE_VECTOR, my_isr);
-    handle_exception(NMI_VECTOR, my_isr);
+	default_prepare(test);
+	handle_exception(DE_VECTOR, my_isr);
+	handle_exception(NMI_VECTOR, my_isr);
 }
 
-
 static void exc_inject_test(struct svm_test *test)
 {
-    asm volatile ("vmmcall\n\tvmmcall\n\t");
+	asm volatile ("vmmcall\n\tvmmcall\n\t");
 }
 
 static bool exc_inject_finished(struct svm_test *test)
 {
-    switch (get_test_stage(test)) {
-    case 0:
-        if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
-            report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-                        vmcb->control.exit_code);
-            return true;
-        }
-        vmcb->save.rip += 3;
-        vmcb->control.event_inj = NMI_VECTOR | SVM_EVTINJ_TYPE_EXEPT | SVM_EVTINJ_VALID;
-        break;
+	switch (get_test_stage(test)) {
+	case 0:
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			report_fail
+			    ("VMEXIT not due to vmmcall. Exit reason 0x%x",
+			     vmcb->control.exit_code);
+			return true;
+		}
+		vmcb->save.rip += 3;
+		vmcb->control.event_inj =
+		    NMI_VECTOR | SVM_EVTINJ_TYPE_EXEPT | SVM_EVTINJ_VALID;
+		break;
 
-    case 1:
-        if (vmcb->control.exit_code != SVM_EXIT_ERR) {
-            report_fail("VMEXIT not due to error. Exit reason 0x%x",
-                        vmcb->control.exit_code);
-            return true;
-        }
-        report(count_exc == 0, "exception with vector 2 not injected");
-        vmcb->control.event_inj = DE_VECTOR | SVM_EVTINJ_TYPE_EXEPT | SVM_EVTINJ_VALID;
-        break;
+	case 1:
+		if (vmcb->control.exit_code != SVM_EXIT_ERR) {
+			report_fail("VMEXIT not due to error. Exit reason 0x%x",
+				    vmcb->control.exit_code);
+			return true;
+		}
+		report(count_exc == 0, "exception with vector 2 not injected");
+		vmcb->control.event_inj =
+		    DE_VECTOR | SVM_EVTINJ_TYPE_EXEPT | SVM_EVTINJ_VALID;
+		break;
 
-    case 2:
-        if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
-            report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-                        vmcb->control.exit_code);
-            return true;
-        }
-        vmcb->save.rip += 3;
-        report(count_exc == 1, "divide overflow exception injected");
-        report(!(vmcb->control.event_inj & SVM_EVTINJ_VALID), "eventinj.VALID cleared");
-        break;
+	case 2:
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			report_fail
+			    ("VMEXIT not due to vmmcall. Exit reason 0x%x",
+			     vmcb->control.exit_code);
+			return true;
+		}
+		vmcb->save.rip += 3;
+		report(count_exc == 1, "divide overflow exception injected");
+		report(!(vmcb->control.event_inj & SVM_EVTINJ_VALID),
+		       "eventinj.VALID cleared");
+		break;
 
-    default:
-        return true;
-    }
+	default:
+		return true;
+	}
 
-    inc_test_stage(test);
+	inc_test_stage(test);
 
-    return get_test_stage(test) == 3;
+	return get_test_stage(test) == 3;
 }
 
 static bool exc_inject_check(struct svm_test *test)
 {
-    return count_exc == 1 && get_test_stage(test) == 3;
+	return count_exc == 1 && get_test_stage(test) == 3;
 }
 
 static volatile bool virq_fired;
 
-static void virq_isr(isr_regs_t *regs)
+static void virq_isr(isr_regs_t * regs)
 {
-    virq_fired = true;
+	virq_fired = true;
 }
 
 static void virq_inject_prepare(struct svm_test *test)
 {
-    handle_irq(0xf1, virq_isr);
-    default_prepare(test);
-    vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
-                            (0x0f << V_INTR_PRIO_SHIFT); // Set to the highest priority
-    vmcb->control.int_vector = 0xf1;
-    virq_fired = false;
-    set_test_stage(test, 0);
+	handle_irq(0xf1, virq_isr);
+	default_prepare(test);
+	vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK | (0x0f << V_INTR_PRIO_SHIFT);	// Set to the highest priority
+	vmcb->control.int_vector = 0xf1;
+	virq_fired = false;
+	set_test_stage(test, 0);
 }
 
 static void virq_inject_test(struct svm_test *test)
 {
-    if (virq_fired) {
-        report_fail("virtual interrupt fired before L2 sti");
-        set_test_stage(test, -1);
-        vmmcall();
-    }
+	if (virq_fired) {
+		report_fail("virtual interrupt fired before L2 sti");
+		set_test_stage(test, -1);
+		vmmcall();
+	}
 
-    irq_enable();
-    asm volatile ("nop");
-    irq_disable();
+	irq_enable();
+	asm volatile ("nop");
+	irq_disable();
 
-    if (!virq_fired) {
-        report_fail("virtual interrupt not fired after L2 sti");
-        set_test_stage(test, -1);
-    }
+	if (!virq_fired) {
+		report_fail("virtual interrupt not fired after L2 sti");
+		set_test_stage(test, -1);
+	}
 
-    vmmcall();
+	vmmcall();
 
-    if (virq_fired) {
-        report_fail("virtual interrupt fired before L2 sti after VINTR intercept");
-        set_test_stage(test, -1);
-        vmmcall();
-    }
+	if (virq_fired) {
+		report_fail
+		    ("virtual interrupt fired before L2 sti after VINTR intercept");
+		set_test_stage(test, -1);
+		vmmcall();
+	}
 
-    irq_enable();
-    asm volatile ("nop");
-    irq_disable();
+	irq_enable();
+	asm volatile ("nop");
+	irq_disable();
 
-    if (!virq_fired) {
-        report_fail("virtual interrupt not fired after return from VINTR intercept");
-        set_test_stage(test, -1);
-    }
+	if (!virq_fired) {
+		report_fail
+		    ("virtual interrupt not fired after return from VINTR intercept");
+		set_test_stage(test, -1);
+	}
 
-    vmmcall();
+	vmmcall();
 
-    irq_enable();
-    asm volatile ("nop");
-    irq_disable();
+	irq_enable();
+	asm volatile ("nop");
+	irq_disable();
 
-    if (virq_fired) {
-        report_fail("virtual interrupt fired when V_IRQ_PRIO less than V_TPR");
-        set_test_stage(test, -1);
-    }
+	if (virq_fired) {
+		report_fail
+		    ("virtual interrupt fired when V_IRQ_PRIO less than V_TPR");
+		set_test_stage(test, -1);
+	}
 
-    vmmcall();
-    vmmcall();
+	vmmcall();
+	vmmcall();
 }
 
 static bool virq_inject_finished(struct svm_test *test)
 {
-    vmcb->save.rip += 3;
+	vmcb->save.rip += 3;
 
-    switch (get_test_stage(test)) {
-    case 0:
-        if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
-            report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-                        vmcb->control.exit_code);
-            return true;
-        }
-        if (vmcb->control.int_ctl & V_IRQ_MASK) {
-            report_fail("V_IRQ not cleared on VMEXIT after firing");
-            return true;
-        }
-        virq_fired = false;
-        vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
-        vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
-                            (0x0f << V_INTR_PRIO_SHIFT);
-        break;
+	switch (get_test_stage(test)) {
+	case 0:
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			report_fail
+			    ("VMEXIT not due to vmmcall. Exit reason 0x%x",
+			     vmcb->control.exit_code);
+			return true;
+		}
+		if (vmcb->control.int_ctl & V_IRQ_MASK) {
+			report_fail("V_IRQ not cleared on VMEXIT after firing");
+			return true;
+		}
+		virq_fired = false;
+		vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
+		vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
+		    (0x0f << V_INTR_PRIO_SHIFT);
+		break;
 
-    case 1:
-        if (vmcb->control.exit_code != SVM_EXIT_VINTR) {
-            report_fail("VMEXIT not due to vintr. Exit reason 0x%x",
-                        vmcb->control.exit_code);
-            return true;
-        }
-        if (virq_fired) {
-            report_fail("V_IRQ fired before SVM_EXIT_VINTR");
-            return true;
-        }
-        vmcb->control.intercept &= ~(1ULL << INTERCEPT_VINTR);
-        break;
+	case 1:
+		if (vmcb->control.exit_code != SVM_EXIT_VINTR) {
+			report_fail("VMEXIT not due to vintr. Exit reason 0x%x",
+				    vmcb->control.exit_code);
+			return true;
+		}
+		if (virq_fired) {
+			report_fail("V_IRQ fired before SVM_EXIT_VINTR");
+			return true;
+		}
+		vmcb->control.intercept &= ~(1ULL << INTERCEPT_VINTR);
+		break;
 
-    case 2:
-        if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
-            report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-                        vmcb->control.exit_code);
-            return true;
-        }
-        virq_fired = false;
-        // Set irq to lower priority
-        vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
-                            (0x08 << V_INTR_PRIO_SHIFT);
-        // Raise guest TPR
-        vmcb->control.int_ctl |= 0x0a & V_TPR_MASK;
-        break;
+	case 2:
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			report_fail
+			    ("VMEXIT not due to vmmcall. Exit reason 0x%x",
+			     vmcb->control.exit_code);
+			return true;
+		}
+		virq_fired = false;
+		// Set irq to lower priority
+		vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
+		    (0x08 << V_INTR_PRIO_SHIFT);
+		// Raise guest TPR
+		vmcb->control.int_ctl |= 0x0a & V_TPR_MASK;
+		break;
 
-    case 3:
-        if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
-            report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-                        vmcb->control.exit_code);
-            return true;
-        }
-        vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
-        break;
+	case 3:
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			report_fail
+			    ("VMEXIT not due to vmmcall. Exit reason 0x%x",
+			     vmcb->control.exit_code);
+			return true;
+		}
+		vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
+		break;
 
-    case 4:
-        // INTERCEPT_VINTR should be ignored because V_INTR_PRIO < V_TPR
-        if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
-            report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-                        vmcb->control.exit_code);
-            return true;
-        }
-        break;
+	case 4:
+		// INTERCEPT_VINTR should be ignored because V_INTR_PRIO < V_TPR
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			report_fail
+			    ("VMEXIT not due to vmmcall. Exit reason 0x%x",
+			     vmcb->control.exit_code);
+			return true;
+		}
+		break;
 
-    default:
-        return true;
-    }
+	default:
+		return true;
+	}
 
-    inc_test_stage(test);
+	inc_test_stage(test);
 
-    return get_test_stage(test) == 5;
+	return get_test_stage(test) == 5;
 }
 
 static bool virq_inject_check(struct svm_test *test)
 {
-    return get_test_stage(test) == 5;
+	return get_test_stage(test) == 5;
 }
 
 /*
@@ -1686,159 +1723,157 @@ static volatile int isr_cnt = 0;
 static volatile uint8_t io_port_var = 0xAA;
 extern const char insb_instruction_label[];
 
-static void reg_corruption_isr(isr_regs_t *regs)
+static void reg_corruption_isr(isr_regs_t * regs)
 {
-    isr_cnt++;
-    apic_write(APIC_EOI, 0);
+	isr_cnt++;
+	apic_write(APIC_EOI, 0);
 }
 
 static void reg_corruption_prepare(struct svm_test *test)
 {
-    default_prepare(test);
-    set_test_stage(test, 0);
+	default_prepare(test);
+	set_test_stage(test, 0);
 
-    vmcb->control.int_ctl = V_INTR_MASKING_MASK;
-    vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
+	vmcb->control.int_ctl = V_INTR_MASKING_MASK;
+	vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
 
-    handle_irq(TIMER_VECTOR, reg_corruption_isr);
+	handle_irq(TIMER_VECTOR, reg_corruption_isr);
 
-    /* set local APIC to inject external interrupts */
-    apic_write(APIC_TMICT, 0);
-    apic_write(APIC_TDCR, 0);
-    apic_write(APIC_LVTT, TIMER_VECTOR | APIC_LVT_TIMER_PERIODIC);
-    apic_write(APIC_TMICT, 1000);
+	/* set local APIC to inject external interrupts */
+	apic_write(APIC_TMICT, 0);
+	apic_write(APIC_TDCR, 0);
+	apic_write(APIC_LVTT, TIMER_VECTOR | APIC_LVT_TIMER_PERIODIC);
+	apic_write(APIC_TMICT, 1000);
 }
 
 static void reg_corruption_test(struct svm_test *test)
 {
-    /* this is an endless loop, which is interrupted by the timer interrupt */
-    asm volatile (
-            "1:\n\t"
-            "movw $0x4d0, %%dx\n\t" // IO port
-            "lea %[io_port_var], %%rdi\n\t"
-            "movb $0xAA, %[io_port_var]\n\t"
-            "insb_instruction_label:\n\t"
-            "insb\n\t"
-            "jmp 1b\n\t"
-
-            : [io_port_var] "=m" (io_port_var)
-            : /* no inputs*/
-            : "rdx", "rdi"
-    );
+	/* this is an endless loop, which is interrupted by the timer interrupt */
+	asm volatile ("1:\n\t" "movw $0x4d0, %%dx\n\t"	// IO port
+		      "lea %[io_port_var], %%rdi\n\t"
+		      "movb $0xAA, %[io_port_var]\n\t"
+		      "insb_instruction_label:\n\t"
+		      "insb\n\t" "jmp 1b\n\t":[io_port_var] "=m"(io_port_var)
+		      :		/* no inputs */
+		      :"rdx", "rdi");
 }
 
 static bool reg_corruption_finished(struct svm_test *test)
 {
-    if (isr_cnt == 10000) {
-        report_pass("No RIP corruption detected after %d timer interrupts",
-                    isr_cnt);
-        set_test_stage(test, 1);
-        goto cleanup;
-    }
+	if (isr_cnt == 10000) {
+		report_pass
+		    ("No RIP corruption detected after %d timer interrupts",
+		     isr_cnt);
+		set_test_stage(test, 1);
+		goto cleanup;
+	}
 
-    if (vmcb->control.exit_code == SVM_EXIT_INTR) {
+	if (vmcb->control.exit_code == SVM_EXIT_INTR) {
 
-        void* guest_rip = (void*)vmcb->save.rip;
+		void *guest_rip = (void *)vmcb->save.rip;
 
-        irq_enable();
-        asm volatile ("nop");
-        irq_disable();
+		irq_enable();
+		asm volatile ("nop");
+		irq_disable();
 
-        if (guest_rip == insb_instruction_label && io_port_var != 0xAA) {
-            report_fail("RIP corruption detected after %d timer interrupts",
-                        isr_cnt);
-            goto cleanup;
-        }
+		if (guest_rip == insb_instruction_label && io_port_var != 0xAA) {
+			report_fail
+			    ("RIP corruption detected after %d timer interrupts",
+			     isr_cnt);
+			goto cleanup;
+		}
 
-    }
-    return false;
+	}
+	return false;
 cleanup:
-    apic_write(APIC_LVTT, APIC_LVT_TIMER_MASK);
-    apic_write(APIC_TMICT, 0);
-    return true;
+	apic_write(APIC_LVTT, APIC_LVT_TIMER_MASK);
+	apic_write(APIC_TMICT, 0);
+	return true;
 
 }
 
 static bool reg_corruption_check(struct svm_test *test)
 {
-    return get_test_stage(test) == 1;
+	return get_test_stage(test) == 1;
 }
 
 static void get_tss_entry(void *data)
 {
-    *((gdt_entry_t **)data) = get_tss_descr();
+	*((gdt_entry_t **) data) = get_tss_descr();
 }
 
 static int orig_cpu_count;
 
 static void init_startup_prepare(struct svm_test *test)
 {
-    gdt_entry_t *tss_entry;
-    int i;
+	gdt_entry_t *tss_entry;
+	int i;
 
-    on_cpu(1, get_tss_entry, &tss_entry);
+	on_cpu(1, get_tss_entry, &tss_entry);
 
-    orig_cpu_count = cpu_online_count;
+	orig_cpu_count = cpu_online_count;
 
-    apic_icr_write(APIC_DEST_PHYSICAL | APIC_DM_INIT | APIC_INT_ASSERT,
-                   id_map[1]);
+	apic_icr_write(APIC_DEST_PHYSICAL | APIC_DM_INIT | APIC_INT_ASSERT,
+		       id_map[1]);
 
-    delay(100000000ULL);
+	delay(100000000ULL);
 
-    --cpu_online_count;
+	--cpu_online_count;
 
-    tss_entry->type &= ~DESC_BUSY;
+	tss_entry->type &= ~DESC_BUSY;
 
-    apic_icr_write(APIC_DEST_PHYSICAL | APIC_DM_STARTUP, id_map[1]);
+	apic_icr_write(APIC_DEST_PHYSICAL | APIC_DM_STARTUP, id_map[1]);
 
-    for (i = 0; i < 5 && cpu_online_count < orig_cpu_count; i++)
-       delay(100000000ULL);
+	for (i = 0; i < 5 && cpu_online_count < orig_cpu_count; i++)
+		delay(100000000ULL);
 }
 
 static bool init_startup_finished(struct svm_test *test)
 {
-    return true;
+	return true;
 }
 
 static bool init_startup_check(struct svm_test *test)
 {
-    return cpu_online_count == orig_cpu_count;
+	return cpu_online_count == orig_cpu_count;
 }
 
 static volatile bool init_intercept;
 
 static void init_intercept_prepare(struct svm_test *test)
 {
-    init_intercept = false;
-    vmcb->control.intercept |= (1ULL << INTERCEPT_INIT);
+	init_intercept = false;
+	vmcb->control.intercept |= (1ULL << INTERCEPT_INIT);
 }
 
 static void init_intercept_test(struct svm_test *test)
 {
-    apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_INIT | APIC_INT_ASSERT, 0);
+	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_INIT |
+		       APIC_INT_ASSERT, 0);
 }
 
 static bool init_intercept_finished(struct svm_test *test)
 {
-    vmcb->save.rip += 3;
+	vmcb->save.rip += 3;
 
-    if (vmcb->control.exit_code != SVM_EXIT_INIT) {
-        report_fail("VMEXIT not due to init intercept. Exit reason 0x%x",
-                    vmcb->control.exit_code);
+	if (vmcb->control.exit_code != SVM_EXIT_INIT) {
+		report_fail
+		    ("VMEXIT not due to init intercept. Exit reason 0x%x",
+		     vmcb->control.exit_code);
 
-        return true;
-        }
+		return true;
+	}
 
-    init_intercept = true;
+	init_intercept = true;
 
-    report_pass("INIT to vcpu intercepted");
+	report_pass("INIT to vcpu intercepted");
 
-    return true;
+	return true;
 }
 
 static bool init_intercept_check(struct svm_test *test)
 {
-    return init_intercept;
+	return init_intercept;
 }
 
 /*
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [kvm-unit-tests RESEND PATCH v3 8/8] x86: nSVM: Correct indentation for svm_tests.c part-2
  2022-04-25 11:44 [kvm-unit-tests RESEND PATCH v3 0/8] Move npt test cases and NPT code improvements Manali Shukla
                   ` (6 preceding siblings ...)
  2022-04-25 11:44 ` [kvm-unit-tests RESEND PATCH v3 7/8] x86: nSVM: Correct indentation for svm_tests.c part-1 Manali Shukla
@ 2022-04-25 11:44 ` Manali Shukla
  2022-04-25 13:29   ` Paolo Bonzini
  2022-04-25 13:31 ` [kvm-unit-tests RESEND PATCH v3 0/8] Move npt test cases and NPT code improvements Paolo Bonzini
  8 siblings, 1 reply; 13+ messages in thread
From: Manali Shukla @ 2022-04-25 11:44 UTC (permalink / raw)
  To: pbonzini, seanjc; +Cc: kvm

Used the ./scripts/Lindent script from the Linux kernel source tree to
correct the indentation in svm_tests.c.

No functional change intended.
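
For reference, the reindentation can be reproduced with something like the
sketch below (paths are illustrative; Lindent is essentially a wrapper that
runs GNU indent with kernel-style settings):

  $ cd kvm-unit-tests
  $ /path/to/linux/scripts/Lindent x86/svm_tests.c
  $ git diff x86/svm_tests.c    # review that no semantic change slipped in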

Signed-off-by: Manali Shukla <manali.shukla@amd.com>
---
 x86/svm_tests.c | 884 ++++++++++++++++++++++++------------------------
 1 file changed, 439 insertions(+), 445 deletions(-)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 1813b97..52896fd 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -1908,7 +1908,7 @@ static void host_rflags_db_handler(struct ex_regs *r)
 				++host_rflags_db_handler_flag;
 			}
 		} else {
-			if (r->rip == (u64)&vmrun_rip) {
+			if (r->rip == (u64) & vmrun_rip) {
 				host_rflags_vmrun_reached = true;
 
 				if (host_rflags_set_rf) {
@@ -1944,8 +1944,10 @@ static void host_rflags_test(struct svm_test *test)
 {
 	while (1) {
 		if (get_test_stage(test) > 0) {
-			if ((host_rflags_set_tf && !host_rflags_ss_on_vmrun && !host_rflags_db_handler_flag) ||
-			    (host_rflags_set_rf && host_rflags_db_handler_flag == 1))
+			if ((host_rflags_set_tf && !host_rflags_ss_on_vmrun
+			     && !host_rflags_db_handler_flag)
+			    || (host_rflags_set_rf
+				&& host_rflags_db_handler_flag == 1))
 				host_rflags_guest_main_flag = 1;
 		}
 
@@ -1989,11 +1991,11 @@ static bool host_rflags_finished(struct svm_test *test)
 		break;
 	case 2:
 		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
-		    rip_detected != (u64)&vmrun_rip + 3) {
+		    rip_detected != (u64) & vmrun_rip + 3) {
 			report_fail("Unexpected VMEXIT or RIP mismatch."
 				    " Exit reason 0x%x, RIP actual: %lx, RIP expected: "
 				    "%lx", vmcb->control.exit_code,
-				    (u64)&vmrun_rip + 3, rip_detected);
+				    (u64) & vmrun_rip + 3, rip_detected);
 			return true;
 		}
 		host_rflags_set_rf = true;
@@ -2003,7 +2005,7 @@ static bool host_rflags_finished(struct svm_test *test)
 		break;
 	case 3:
 		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
-		    rip_detected != (u64)&vmrun_rip ||
+		    rip_detected != (u64) & vmrun_rip ||
 		    host_rflags_guest_main_flag != 1 ||
 		    host_rflags_db_handler_flag > 1 ||
 		    read_rflags() & X86_EFLAGS_RF) {
@@ -2011,7 +2013,7 @@ static bool host_rflags_finished(struct svm_test *test)
 				    "EFLAGS.RF not cleared."
 				    " Exit reason 0x%x, RIP actual: %lx, RIP expected: "
 				    "%lx", vmcb->control.exit_code,
-				    (u64)&vmrun_rip, rip_detected);
+				    (u64) & vmrun_rip, rip_detected);
 			return true;
 		}
 		host_rflags_set_tf = false;
@@ -2074,7 +2076,6 @@ static void basic_guest_main(struct svm_test *test)
 {
 }
 
-
 #define SVM_TEST_REG_RESERVED_BITS(start, end, inc, str_name, reg, val,	\
 				   resv_mask)				\
 {									\
@@ -2128,10 +2129,10 @@ static void test_efer(void)
 	u64 efer_saved = vmcb->save.efer;
 	u64 efer = efer_saved;
 
-	report (svm_vmrun() == SVM_EXIT_VMMCALL, "EFER.SVME: %lx", efer);
+	report(svm_vmrun() == SVM_EXIT_VMMCALL, "EFER.SVME: %lx", efer);
 	efer &= ~EFER_SVME;
 	vmcb->save.efer = efer;
-	report (svm_vmrun() == SVM_EXIT_ERR, "EFER.SVME: %lx", efer);
+	report(svm_vmrun() == SVM_EXIT_ERR, "EFER.SVME: %lx", efer);
 	vmcb->save.efer = efer_saved;
 
 	/*
@@ -2140,9 +2141,9 @@ static void test_efer(void)
 	efer_saved = vmcb->save.efer;
 
 	SVM_TEST_REG_RESERVED_BITS(8, 9, 1, "EFER", vmcb->save.efer,
-	    efer_saved, SVM_EFER_RESERVED_MASK);
+				   efer_saved, SVM_EFER_RESERVED_MASK);
 	SVM_TEST_REG_RESERVED_BITS(16, 63, 4, "EFER", vmcb->save.efer,
-	    efer_saved, SVM_EFER_RESERVED_MASK);
+				   efer_saved, SVM_EFER_RESERVED_MASK);
 
 	/*
 	 * EFER.LME and CR0.PG are both set and CR4.PAE is zero.
@@ -2159,7 +2160,7 @@ static void test_efer(void)
 	cr4 = cr4_saved & ~X86_CR4_PAE;
 	vmcb->save.cr4 = cr4;
 	report(svm_vmrun() == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
-	    "CR0.PG=1 (%lx) and CR4.PAE=0 (%lx)", efer, cr0, cr4);
+	       "CR0.PG=1 (%lx) and CR4.PAE=0 (%lx)", efer, cr0, cr4);
 
 	/*
 	 * EFER.LME and CR0.PG are both set and CR0.PE is zero.
@@ -2172,7 +2173,7 @@ static void test_efer(void)
 	cr0 &= ~X86_CR0_PE;
 	vmcb->save.cr0 = cr0;
 	report(svm_vmrun() == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
-	    "CR0.PG=1 and CR0.PE=0 (%lx)", efer, cr0);
+	       "CR0.PG=1 and CR0.PE=0 (%lx)", efer, cr0);
 
 	/*
 	 * EFER.LME, CR0.PG, CR4.PAE, CS.L, and CS.D are all non-zero.
@@ -2186,8 +2187,8 @@ static void test_efer(void)
 	    SVM_SELECTOR_DB_MASK;
 	vmcb->save.cs.attrib = cs_attrib;
 	report(svm_vmrun() == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
-	    "CR0.PG=1 (%lx), CR4.PAE=1 (%lx), CS.L=1 and CS.D=1 (%x)",
-	    efer, cr0, cr4, cs_attrib);
+	       "CR0.PG=1 (%lx), CR4.PAE=1 (%lx), CS.L=1 and CS.D=1 (%x)",
+	       efer, cr0, cr4, cs_attrib);
 
 	vmcb->save.cr0 = cr0_saved;
 	vmcb->save.cr4 = cr4_saved;
@@ -2206,21 +2207,17 @@ static void test_cr0(void)
 	cr0 |= X86_CR0_CD;
 	cr0 &= ~X86_CR0_NW;
 	vmcb->save.cr0 = cr0;
-	report (svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=0: %lx",
-	    cr0);
+	report(svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=0: %lx", cr0);
 	cr0 |= X86_CR0_NW;
 	vmcb->save.cr0 = cr0;
-	report (svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=1: %lx",
-	    cr0);
+	report(svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=1: %lx", cr0);
 	cr0 &= ~X86_CR0_NW;
 	cr0 &= ~X86_CR0_CD;
 	vmcb->save.cr0 = cr0;
-	report (svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR0 CD=0,NW=0: %lx",
-	    cr0);
+	report(svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR0 CD=0,NW=0: %lx", cr0);
 	cr0 |= X86_CR0_NW;
 	vmcb->save.cr0 = cr0;
-	report (svm_vmrun() == SVM_EXIT_ERR, "Test CR0 CD=0,NW=1: %lx",
-	    cr0);
+	report(svm_vmrun() == SVM_EXIT_ERR, "Test CR0 CD=0,NW=1: %lx", cr0);
 	vmcb->save.cr0 = cr0_saved;
 
 	/*
@@ -2229,7 +2226,7 @@ static void test_cr0(void)
 	cr0 = cr0_saved;
 
 	SVM_TEST_REG_RESERVED_BITS(32, 63, 4, "CR0", vmcb->save.cr0, cr0_saved,
-	    SVM_CR0_RESERVED_MASK);
+				   SVM_CR0_RESERVED_MASK);
 	vmcb->save.cr0 = cr0_saved;
 }
 
@@ -2242,11 +2239,11 @@ static void test_cr3(void)
 	u64 cr3_saved = vmcb->save.cr3;
 
 	SVM_TEST_CR_RESERVED_BITS(0, 63, 1, 3, cr3_saved,
-	    SVM_CR3_LONG_MBZ_MASK, SVM_EXIT_ERR, "");
+				  SVM_CR3_LONG_MBZ_MASK, SVM_EXIT_ERR, "");
 
 	vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_MBZ_MASK;
 	report(svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx",
-	    vmcb->save.cr3);
+	       vmcb->save.cr3);
 
 	/*
 	 * CR3 non-MBZ reserved bits based on different modes:
@@ -2262,11 +2259,12 @@ static void test_cr3(void)
 	if (this_cpu_has(X86_FEATURE_PCID)) {
 		vmcb->save.cr4 = cr4_saved | X86_CR4_PCIDE;
 		SVM_TEST_CR_RESERVED_BITS(0, 11, 1, 3, cr3_saved,
-		    SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_VMMCALL, "(PCIDE=1) ");
+					  SVM_CR3_LONG_RESERVED_MASK,
+					  SVM_EXIT_VMMCALL, "(PCIDE=1) ");
 
 		vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_RESERVED_MASK;
 		report(svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx",
-		    vmcb->save.cr3);
+		       vmcb->save.cr3);
 	}
 
 	vmcb->save.cr4 = cr4_saved & ~X86_CR4_PCIDE;
@@ -2278,7 +2276,8 @@ static void test_cr3(void)
 	pdpe[0] &= ~1ULL;
 
 	SVM_TEST_CR_RESERVED_BITS(0, 11, 1, 3, cr3_saved,
-	    SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_NPF, "(PCIDE=0) ");
+				  SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_NPF,
+				  "(PCIDE=0) ");
 
 	pdpe[0] |= 1ULL;
 	vmcb->save.cr3 = cr3_saved;
@@ -2289,7 +2288,8 @@ static void test_cr3(void)
 	pdpe[0] &= ~1ULL;
 	vmcb->save.cr4 = cr4_saved | X86_CR4_PAE;
 	SVM_TEST_CR_RESERVED_BITS(0, 2, 1, 3, cr3_saved,
-	    SVM_CR3_PAE_LEGACY_RESERVED_MASK, SVM_EXIT_NPF, "(PAE) ");
+				  SVM_CR3_PAE_LEGACY_RESERVED_MASK,
+				  SVM_EXIT_NPF, "(PAE) ");
 
 	pdpe[0] |= 1ULL;
 
@@ -2308,14 +2308,15 @@ static void test_cr4(void)
 	efer &= ~EFER_LME;
 	vmcb->save.efer = efer;
 	SVM_TEST_CR_RESERVED_BITS(12, 31, 1, 4, cr4_saved,
-	    SVM_CR4_LEGACY_RESERVED_MASK, SVM_EXIT_ERR, "");
+				  SVM_CR4_LEGACY_RESERVED_MASK, SVM_EXIT_ERR,
+				  "");
 
 	efer |= EFER_LME;
 	vmcb->save.efer = efer;
 	SVM_TEST_CR_RESERVED_BITS(12, 31, 1, 4, cr4_saved,
-	    SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, "");
+				  SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, "");
 	SVM_TEST_CR_RESERVED_BITS(32, 63, 4, 4, cr4_saved,
-	    SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, "");
+				  SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, "");
 
 	vmcb->save.cr4 = cr4_saved;
 	vmcb->save.efer = efer_saved;
@@ -2329,12 +2330,12 @@ static void test_dr(void)
 	u64 dr_saved = vmcb->save.dr6;
 
 	SVM_TEST_REG_RESERVED_BITS(32, 63, 4, "DR6", vmcb->save.dr6, dr_saved,
-	    SVM_DR6_RESERVED_MASK);
+				   SVM_DR6_RESERVED_MASK);
 	vmcb->save.dr6 = dr_saved;
 
 	dr_saved = vmcb->save.dr7;
 	SVM_TEST_REG_RESERVED_BITS(32, 63, 4, "DR7", vmcb->save.dr7, dr_saved,
-	    SVM_DR7_RESERVED_MASK);
+				   SVM_DR7_RESERVED_MASK);
 
 	vmcb->save.dr7 = dr_saved;
 }
@@ -2374,41 +2375,39 @@ static void test_msrpm_iopm_bitmap_addrs(void)
 	u64 addr = virt_to_phys(msr_bitmap) & (~((1ull << 12) - 1));
 
 	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT,
-			addr_beyond_limit - 2 * PAGE_SIZE, SVM_EXIT_ERR,
-			"MSRPM");
+			 addr_beyond_limit - 2 * PAGE_SIZE, SVM_EXIT_ERR,
+			 "MSRPM");
 	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT,
-			addr_beyond_limit - 2 * PAGE_SIZE + 1, SVM_EXIT_ERR,
-			"MSRPM");
+			 addr_beyond_limit - 2 * PAGE_SIZE + 1, SVM_EXIT_ERR,
+			 "MSRPM");
 	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT,
-			addr_beyond_limit - PAGE_SIZE, SVM_EXIT_ERR,
-			"MSRPM");
+			 addr_beyond_limit - PAGE_SIZE, SVM_EXIT_ERR, "MSRPM");
 	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT, addr,
-			SVM_EXIT_VMMCALL, "MSRPM");
+			 SVM_EXIT_VMMCALL, "MSRPM");
 	addr |= (1ull << 12) - 1;
 	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT, addr,
-			SVM_EXIT_VMMCALL, "MSRPM");
+			 SVM_EXIT_VMMCALL, "MSRPM");
 
 	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT,
-			addr_beyond_limit - 4 * PAGE_SIZE, SVM_EXIT_VMMCALL,
-			"IOPM");
+			 addr_beyond_limit - 4 * PAGE_SIZE, SVM_EXIT_VMMCALL,
+			 "IOPM");
 	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT,
-			addr_beyond_limit - 3 * PAGE_SIZE, SVM_EXIT_VMMCALL,
-			"IOPM");
+			 addr_beyond_limit - 3 * PAGE_SIZE, SVM_EXIT_VMMCALL,
+			 "IOPM");
 	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT,
-			addr_beyond_limit - 2 * PAGE_SIZE - 2, SVM_EXIT_VMMCALL,
-			"IOPM");
+			 addr_beyond_limit - 2 * PAGE_SIZE - 2,
+			 SVM_EXIT_VMMCALL, "IOPM");
 	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT,
-			addr_beyond_limit - 2 * PAGE_SIZE, SVM_EXIT_ERR,
-			"IOPM");
+			 addr_beyond_limit - 2 * PAGE_SIZE, SVM_EXIT_ERR,
+			 "IOPM");
 	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT,
-			addr_beyond_limit - PAGE_SIZE, SVM_EXIT_ERR,
-			"IOPM");
+			 addr_beyond_limit - PAGE_SIZE, SVM_EXIT_ERR, "IOPM");
 	addr = virt_to_phys(io_bitmap) & (~((1ull << 11) - 1));
 	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT, addr,
-			SVM_EXIT_VMMCALL, "IOPM");
+			 SVM_EXIT_VMMCALL, "IOPM");
 	addr |= (1ull << 12) - 1;
 	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT, addr,
-			SVM_EXIT_VMMCALL, "IOPM");
+			 SVM_EXIT_VMMCALL, "IOPM");
 
 	vmcb->control.intercept = saved_intercept;
 }
@@ -2425,7 +2424,6 @@ static void test_msrpm_iopm_bitmap_addrs(void)
 			"Successful VMRUN with noncanonical %s.base", msg); \
 	seg_base = saved_addr;
 
-
 #define TEST_CANONICAL_VMLOAD(seg_base, msg)					\
 	saved_addr = seg_base;					\
 	seg_base = (seg_base & ((1ul << addr_limit) - 1)) | noncanonical_mask; \
@@ -2497,10 +2495,7 @@ asm("guest_rflags_test_guest:\n\t"
     "vmmcall\n\t"
     "vmmcall\n\t"
     ".global guest_end\n\t"
-    "guest_end:\n\t"
-    "vmmcall\n\t"
-    "pop %rbp\n\t"
-    "ret");
+    "guest_end:\n\t" "vmmcall\n\t" "pop %rbp\n\t" "ret");
 
 static void svm_test_singlestep(void)
 {
@@ -2510,30 +2505,31 @@ static void svm_test_singlestep(void)
 	 * Trap expected after completion of first guest instruction
 	 */
 	vmcb->save.rflags |= X86_EFLAGS_TF;
-	report (__svm_vmrun((u64)guest_rflags_test_guest) == SVM_EXIT_VMMCALL &&
-		guest_rflags_test_trap_rip == (u64)&insn2,
-               "Test EFLAGS.TF on VMRUN: trap expected  after completion of first guest instruction");
+	report(__svm_vmrun((u64) guest_rflags_test_guest) == SVM_EXIT_VMMCALL &&
+	       guest_rflags_test_trap_rip == (u64) & insn2,
+	       "Test EFLAGS.TF on VMRUN: trap expected  after completion of first guest instruction");
 	/*
 	 * No trap expected
 	 */
 	guest_rflags_test_trap_rip = 0;
 	vmcb->save.rip += 3;
 	vmcb->save.rflags |= X86_EFLAGS_TF;
-	report (__svm_vmrun(vmcb->save.rip) == SVM_EXIT_VMMCALL &&
-		guest_rflags_test_trap_rip == 0, "Test EFLAGS.TF on VMRUN: trap not expected");
+	report(__svm_vmrun(vmcb->save.rip) == SVM_EXIT_VMMCALL &&
+	       guest_rflags_test_trap_rip == 0,
+	       "Test EFLAGS.TF on VMRUN: trap not expected");
 
 	/*
 	 * Let guest finish execution
 	 */
 	vmcb->save.rip += 3;
-	report (__svm_vmrun(vmcb->save.rip) == SVM_EXIT_VMMCALL &&
-		vmcb->save.rip == (u64)&guest_end, "Test EFLAGS.TF on VMRUN: guest execution completion");
+	report(__svm_vmrun(vmcb->save.rip) == SVM_EXIT_VMMCALL &&
+	       vmcb->save.rip == (u64) & guest_end,
+	       "Test EFLAGS.TF on VMRUN: guest execution completion");
 }
 
 static bool volatile svm_errata_reproduced = false;
 static unsigned long volatile physical = 0;
 
-
 /*
  *
  * Test the following errata:
@@ -2548,60 +2544,58 @@ static unsigned long volatile physical = 0;
 
 static void gp_isr(struct ex_regs *r)
 {
-    svm_errata_reproduced = true;
-    /* skip over the vmsave instruction*/
-    r->rip += 3;
+	svm_errata_reproduced = true;
+	/* skip over the vmsave instruction */
+	r->rip += 3;
 }
 
 static void svm_vmrun_errata_test(void)
 {
-    unsigned long *last_page = NULL;
-
-    handle_exception(GP_VECTOR, gp_isr);
+	unsigned long *last_page = NULL;
 
-    while (!svm_errata_reproduced) {
+	handle_exception(GP_VECTOR, gp_isr);
 
-        unsigned long *page = alloc_pages(1);
+	while (!svm_errata_reproduced) {
 
-        if (!page) {
-            report_pass("All guest memory tested, no bug found");
-            break;
-        }
+		unsigned long *page = alloc_pages(1);
 
-        physical = virt_to_phys(page);
+		if (!page) {
+			report_pass("All guest memory tested, no bug found");
+			break;
+		}
 
-        asm volatile (
-            "mov %[_physical], %%rax\n\t"
-            "vmsave %%rax\n\t"
+		physical = virt_to_phys(page);
 
-            : [_physical] "=m" (physical)
-            : /* no inputs*/
-            : "rax" /*clobbers*/
-        );
+		asm volatile ("mov %[_physical], %%rax\n\t"
+			      "vmsave %%rax\n\t":[_physical] "=m"(physical)
+			      :	/* no inputs */
+			      :"rax"	/*clobbers */
+		    );
 
-        if (svm_errata_reproduced) {
-            report_fail("Got #GP exception - svm errata reproduced at 0x%lx",
-                        physical);
-            break;
-        }
+		if (svm_errata_reproduced) {
+			report_fail
+			    ("Got #GP exception - svm errata reproduced at 0x%lx",
+			     physical);
+			break;
+		}
 
-        *page = (unsigned long)last_page;
-        last_page = page;
-    }
+		*page = (unsigned long)last_page;
+		last_page = page;
+	}
 
-    while (last_page) {
-        unsigned long *page = last_page;
-        last_page = (unsigned long *)*last_page;
-        free_pages_by_order(page, 1);
-    }
+	while (last_page) {
+		unsigned long *page = last_page;
+		last_page = (unsigned long *)*last_page;
+		free_pages_by_order(page, 1);
+	}
 }
 
 static void vmload_vmsave_guest_main(struct svm_test *test)
 {
 	u64 vmcb_phys = virt_to_phys(vmcb);
 
-	asm volatile ("vmload %0" : : "a"(vmcb_phys));
-	asm volatile ("vmsave %0" : : "a"(vmcb_phys));
+	asm volatile ("vmload %0"::"a" (vmcb_phys));
+	asm volatile ("vmsave %0"::"a" (vmcb_phys));
 }
 
 static void svm_vmload_vmsave(void)
@@ -2618,7 +2612,7 @@ static void svm_vmload_vmsave(void)
 	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
-	    "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
+	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
 	/*
 	 * Enabling intercept for VMLOAD and VMSAVE causes respective
@@ -2627,252 +2621,248 @@ static void svm_vmload_vmsave(void)
 	vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
-	    "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
+	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
 	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
 	vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
-	    "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
+	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
 	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
-	    "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
+	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
 	vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
-	    "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
+	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
 	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
-	    "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
+	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
 	vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
-	    "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
+	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
 	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
-	    "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
+	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
 	vmcb->control.intercept = intercept_saved;
 }
 
 static void prepare_vgif_enabled(struct svm_test *test)
 {
-    default_prepare(test);
+	default_prepare(test);
 }
 
 static void test_vgif(struct svm_test *test)
 {
-    asm volatile ("vmmcall\n\tstgi\n\tvmmcall\n\tclgi\n\tvmmcall\n\t");
+	asm volatile ("vmmcall\n\tstgi\n\tvmmcall\n\tclgi\n\tvmmcall\n\t");
 
 }
 
 static bool vgif_finished(struct svm_test *test)
 {
-    switch (get_test_stage(test))
-    {
-    case 0:
-        if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
-            report_fail("VMEXIT not due to vmmcall.");
-            return true;
-        }
-        vmcb->control.int_ctl |= V_GIF_ENABLED_MASK;
-        vmcb->save.rip += 3;
-        inc_test_stage(test);
-        break;
-    case 1:
-        if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
-            report_fail("VMEXIT not due to vmmcall.");
-            return true;
-        }
-        if (!(vmcb->control.int_ctl & V_GIF_MASK)) {
-            report_fail("Failed to set VGIF when executing STGI.");
-            vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
-            return true;
-        }
-        report_pass("STGI set VGIF bit.");
-        vmcb->save.rip += 3;
-        inc_test_stage(test);
-        break;
-    case 2:
-        if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
-            report_fail("VMEXIT not due to vmmcall.");
-            return true;
-        }
-        if (vmcb->control.int_ctl & V_GIF_MASK) {
-            report_fail("Failed to clear VGIF when executing CLGI.");
-            vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
-            return true;
-        }
-        report_pass("CLGI cleared VGIF bit.");
-        vmcb->save.rip += 3;
-        inc_test_stage(test);
-        vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
-        break;
-    default:
-        return true;
-        break;
-    }
-
-    return get_test_stage(test) == 3;
+	switch (get_test_stage(test)) {
+	case 0:
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			report_fail("VMEXIT not due to vmmcall.");
+			return true;
+		}
+		vmcb->control.int_ctl |= V_GIF_ENABLED_MASK;
+		vmcb->save.rip += 3;
+		inc_test_stage(test);
+		break;
+	case 1:
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			report_fail("VMEXIT not due to vmmcall.");
+			return true;
+		}
+		if (!(vmcb->control.int_ctl & V_GIF_MASK)) {
+			report_fail("Failed to set VGIF when executing STGI.");
+			vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
+			return true;
+		}
+		report_pass("STGI set VGIF bit.");
+		vmcb->save.rip += 3;
+		inc_test_stage(test);
+		break;
+	case 2:
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			report_fail("VMEXIT not due to vmmcall.");
+			return true;
+		}
+		if (vmcb->control.int_ctl & V_GIF_MASK) {
+			report_fail
+			    ("Failed to clear VGIF when executing CLGI.");
+			vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
+			return true;
+		}
+		report_pass("CLGI cleared VGIF bit.");
+		vmcb->save.rip += 3;
+		inc_test_stage(test);
+		vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
+		break;
+	default:
+		return true;
+		break;
+	}
+
+	return get_test_stage(test) == 3;
 }
 
 static bool vgif_check(struct svm_test *test)
 {
-    return get_test_stage(test) == 3;
+	return get_test_stage(test) == 3;
 }
 
-
 static int pause_test_counter;
 static int wait_counter;
 
 static void pause_filter_test_guest_main(struct svm_test *test)
 {
-    int i;
-    for (i = 0 ; i < pause_test_counter ; i++)
-        pause();
+	int i;
+	for (i = 0; i < pause_test_counter; i++)
+		pause();
 
-    if (!wait_counter)
-        return;
+	if (!wait_counter)
+		return;
 
-    for (i = 0; i < wait_counter; i++)
-        ;
+	for (i = 0; i < wait_counter; i++) ;
 
-    for (i = 0 ; i < pause_test_counter ; i++)
-        pause();
+	for (i = 0; i < pause_test_counter; i++)
+		pause();
 
 }
 
-static void pause_filter_run_test(int pause_iterations, int filter_value, int wait_iterations, int threshold)
+static void pause_filter_run_test(int pause_iterations, int filter_value,
+				  int wait_iterations, int threshold)
 {
-    test_set_guest(pause_filter_test_guest_main);
+	test_set_guest(pause_filter_test_guest_main);
 
-    pause_test_counter = pause_iterations;
-    wait_counter = wait_iterations;
+	pause_test_counter = pause_iterations;
+	wait_counter = wait_iterations;
 
-    vmcb->control.pause_filter_count = filter_value;
-    vmcb->control.pause_filter_thresh = threshold;
-    svm_vmrun();
+	vmcb->control.pause_filter_count = filter_value;
+	vmcb->control.pause_filter_thresh = threshold;
+	svm_vmrun();
 
-    if (filter_value <= pause_iterations || wait_iterations < threshold)
-        report(vmcb->control.exit_code == SVM_EXIT_PAUSE, "expected PAUSE vmexit");
-    else
-        report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "no expected PAUSE vmexit");
+	if (filter_value <= pause_iterations || wait_iterations < threshold)
+		report(vmcb->control.exit_code == SVM_EXIT_PAUSE,
+		       "expected PAUSE vmexit");
+	else
+		report(vmcb->control.exit_code == SVM_EXIT_VMMCALL,
+		       "no expected PAUSE vmexit");
 }
 
 static void pause_filter_test(void)
 {
-    if (!pause_filter_supported()) {
-            report_skip("PAUSE filter not supported in the guest");
-            return;
-    }
-
-    vmcb->control.intercept |= (1 << INTERCEPT_PAUSE);
+	if (!pause_filter_supported()) {
+		report_skip("PAUSE filter not supported in the guest");
+		return;
+	}
 
-    // filter count smaller than pause count - expect a PAUSE vmexit
-    pause_filter_run_test(10, 9, 0, 0);
+	vmcb->control.intercept |= (1 << INTERCEPT_PAUSE);
 
-    // filter count larger than pause count - no VMexit
-    pause_filter_run_test(20, 21, 0, 0);
+	// filter count smaller than pause count - expect a PAUSE vmexit
+	pause_filter_run_test(10, 9, 0, 0);
 
+	// filter count larger than pause count - no VMexit
+	pause_filter_run_test(20, 21, 0, 0);
 
-    if (pause_threshold_supported()) {
-        // filter count larger than pause count, and the wait is long enough
-        // for the filter counter to reset - no VMexit
-        pause_filter_run_test(20, 21, 1000, 10);
+	if (pause_threshold_supported()) {
+		// filter count larger than pause count, and the wait is long enough
+		// for the filter counter to reset - no VMexit
+		pause_filter_run_test(20, 21, 1000, 10);
 
-        // filter count larger than pause count, but the wait is too short
-        // for the filter counter to reset - expect a PAUSE vmexit
-        pause_filter_run_test(20, 21, 10, 1000);
-    } else {
-        report_skip("PAUSE threshold not supported in the guest");
-        return;
-    }
+		// filter count larger than pause count, but the wait is too short
+		// for the filter counter to reset - expect a PAUSE vmexit
+		pause_filter_run_test(20, 21, 10, 1000);
+	} else {
+		report_skip("PAUSE threshold not supported in the guest");
+		return;
+	}
 }
 
-
 static int of_test_counter;
 
 static void guest_test_of_handler(struct ex_regs *r)
 {
-    of_test_counter++;
+	of_test_counter++;
 }
 
 static void svm_of_test_guest(struct svm_test *test)
 {
-    struct far_pointer32 fp = {
-        .offset = (uintptr_t)&&into,
-        .selector = KERNEL_CS32,
-    };
-    uintptr_t rsp;
+	struct far_pointer32 fp = {
+		.offset = (uintptr_t) && into,
+		.selector = KERNEL_CS32,
+	};
+	uintptr_t rsp;
 
-    asm volatile ("mov %%rsp, %0" : "=r"(rsp));
+	asm volatile ("mov %%rsp, %0":"=r" (rsp));
 
-    if (fp.offset != (uintptr_t)&&into) {
-        printf("Codee address too high.\n");
-        return;
-    }
+	if (fp.offset != (uintptr_t) && into) {
+		printf("Codee address too high.\n");
+		return;
+	}
 
-    if ((u32)rsp != rsp) {
-        printf("Stack address too high.\n");
-    }
+	if ((u32) rsp != rsp) {
+		printf("Stack address too high.\n");
+	}
 
-    asm goto("lcall *%0" : : "m" (fp) : "rax" : into);
-    return;
+	asm goto ("lcall *%0"::"m" (fp):"rax":into);
+	return;
 into:
 
-    asm volatile (".code32;"
-            "movl $0x7fffffff, %eax;"
-            "addl %eax, %eax;"
-            "into;"
-            "lret;"
-            ".code64");
-    __builtin_unreachable();
+	asm volatile (".code32;"
+		      "movl $0x7fffffff, %eax;"
+		      "addl %eax, %eax;" "into;" "lret;" ".code64");
+	__builtin_unreachable();
 }
 
 static void svm_into_test(void)
 {
-    handle_exception(OF_VECTOR, guest_test_of_handler);
-    test_set_guest(svm_of_test_guest);
-    report(svm_vmrun() == SVM_EXIT_VMMCALL && of_test_counter == 1,
-        "#OF is generated in L2 exception handler0");
+	handle_exception(OF_VECTOR, guest_test_of_handler);
+	test_set_guest(svm_of_test_guest);
+	report(svm_vmrun() == SVM_EXIT_VMMCALL && of_test_counter == 1,
+	       "#OF is generated in L2 exception handler0");
 }
 
 static int bp_test_counter;
 
 static void guest_test_bp_handler(struct ex_regs *r)
 {
-    bp_test_counter++;
+	bp_test_counter++;
 }
 
 static void svm_bp_test_guest(struct svm_test *test)
 {
-    asm volatile("int3");
+	asm volatile ("int3");
 }
 
 static void svm_int3_test(void)
 {
-    handle_exception(BP_VECTOR, guest_test_bp_handler);
-    test_set_guest(svm_bp_test_guest);
-    report(svm_vmrun() == SVM_EXIT_VMMCALL && bp_test_counter == 1,
-        "#BP is handled in L2 exception handler");
+	handle_exception(BP_VECTOR, guest_test_bp_handler);
+	test_set_guest(svm_bp_test_guest);
+	report(svm_vmrun() == SVM_EXIT_VMMCALL && bp_test_counter == 1,
+	       "#BP is handled in L2 exception handler");
 }
 
 static int nm_test_counter;
 
 static void guest_test_nm_handler(struct ex_regs *r)
 {
-    nm_test_counter++; 
-    write_cr0(read_cr0() & ~X86_CR0_TS);
-    write_cr0(read_cr0() & ~X86_CR0_EM);
+	nm_test_counter++;
+	write_cr0(read_cr0() & ~X86_CR0_TS);
+	write_cr0(read_cr0() & ~X86_CR0_EM);
 }
 
 static void svm_nm_test_guest(struct svm_test *test)
 {
-    asm volatile("fnop");
+	asm volatile ("fnop");
 }
 
 /* This test checks that:
@@ -2889,38 +2879,39 @@ static void svm_nm_test_guest(struct svm_test *test)
 
 static void svm_nm_test(void)
 {
-    handle_exception(NM_VECTOR, guest_test_nm_handler);
-    write_cr0(read_cr0() & ~X86_CR0_TS);
-    test_set_guest(svm_nm_test_guest);
+	handle_exception(NM_VECTOR, guest_test_nm_handler);
+	write_cr0(read_cr0() & ~X86_CR0_TS);
+	test_set_guest(svm_nm_test_guest);
 
-    vmcb->save.cr0 = vmcb->save.cr0 | X86_CR0_TS;
-    report(svm_vmrun() == SVM_EXIT_VMMCALL && nm_test_counter == 1,
-        "fnop with CR0.TS set in L2, #NM is triggered");
+	vmcb->save.cr0 = vmcb->save.cr0 | X86_CR0_TS;
+	report(svm_vmrun() == SVM_EXIT_VMMCALL && nm_test_counter == 1,
+	       "fnop with CR0.TS set in L2, #NM is triggered");
 
-    vmcb->save.cr0 = (vmcb->save.cr0 & ~X86_CR0_TS) | X86_CR0_EM;
-    report(svm_vmrun() == SVM_EXIT_VMMCALL && nm_test_counter == 2,
-        "fnop with CR0.EM set in L2, #NM is triggered");
+	vmcb->save.cr0 = (vmcb->save.cr0 & ~X86_CR0_TS) | X86_CR0_EM;
+	report(svm_vmrun() == SVM_EXIT_VMMCALL && nm_test_counter == 2,
+	       "fnop with CR0.EM set in L2, #NM is triggered");
 
-    vmcb->save.cr0 = vmcb->save.cr0 & ~(X86_CR0_TS | X86_CR0_EM);
-    report(svm_vmrun() == SVM_EXIT_VMMCALL && nm_test_counter == 2,
-        "fnop with CR0.TS and CR0.EM unset no #NM excpetion");
+	vmcb->save.cr0 = vmcb->save.cr0 & ~(X86_CR0_TS | X86_CR0_EM);
+	report(svm_vmrun() == SVM_EXIT_VMMCALL && nm_test_counter == 2,
+	       "fnop with CR0.TS and CR0.EM unset no #NM excpetion");
 }
 
-
-static bool check_lbr(u64 *from_expected, u64 *to_expected)
+static bool check_lbr(u64 * from_expected, u64 * to_expected)
 {
 	u64 from = rdmsr(MSR_IA32_LASTBRANCHFROMIP);
 	u64 to = rdmsr(MSR_IA32_LASTBRANCHTOIP);
 
-	if ((u64)from_expected != from) {
-		report(false, "MSR_IA32_LASTBRANCHFROMIP, expected=0x%lx, actual=0x%lx",
-			(u64)from_expected, from);
+	if ((u64) from_expected != from) {
+		report(false,
+		       "MSR_IA32_LASTBRANCHFROMIP, expected=0x%lx, actual=0x%lx",
+		       (u64) from_expected, from);
 		return false;
 	}
 
-	if ((u64)to_expected != to) {
-		report(false, "MSR_IA32_LASTBRANCHTOIP, expected=0x%lx, actual=0x%lx",
-			(u64)to_expected, to);
+	if ((u64) to_expected != to) {
+		report(false,
+		       "MSR_IA32_LASTBRANCHTOIP, expected=0x%lx, actual=0x%lx",
+		       (u64) to_expected, to);
 		return false;
 	}
 
@@ -2930,13 +2921,13 @@ static bool check_lbr(u64 *from_excepted, u64 *to_expected)
 static bool check_dbgctl(u64 dbgctl, u64 dbgctl_expected)
 {
 	if (dbgctl != dbgctl_expected) {
-		report(false, "Unexpected MSR_IA32_DEBUGCTLMSR value 0x%lx", dbgctl);
+		report(false, "Unexpected MSR_IA32_DEBUGCTLMSR value 0x%lx",
+		       dbgctl);
 		return false;
 	}
 	return true;
 }
 
-
 #define DO_BRANCH(branch_name) \
 	asm volatile ( \
 		# branch_name "_from:" \
@@ -2947,7 +2938,6 @@ static bool check_dbgctl(u64 dbgctl, u64 dbgctl_expected)
 		"nop\n" \
 	)
 
-
 extern u64 guest_branch0_from, guest_branch0_to;
 extern u64 guest_branch2_from, guest_branch2_to;
 
@@ -2971,13 +2961,13 @@ static void svm_lbrv_test_guest1(void)
 	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
 
 	if (dbgctl != DEBUGCTLMSR_LBR)
-		asm volatile("ud2\n");
+		asm volatile ("ud2\n");
 	if (rdmsr(MSR_IA32_DEBUGCTLMSR) != 0)
-		asm volatile("ud2\n");
-	if (rdmsr(MSR_IA32_LASTBRANCHFROMIP) != (u64)&guest_branch0_from)
-		asm volatile("ud2\n");
-	if (rdmsr(MSR_IA32_LASTBRANCHTOIP) != (u64)&guest_branch0_to)
-		asm volatile("ud2\n");
+		asm volatile ("ud2\n");
+	if (rdmsr(MSR_IA32_LASTBRANCHFROMIP) != (u64) & guest_branch0_from)
+		asm volatile ("ud2\n");
+	if (rdmsr(MSR_IA32_LASTBRANCHTOIP) != (u64) & guest_branch0_to)
+		asm volatile ("ud2\n");
 
 	asm volatile ("vmmcall\n");
 }
@@ -2993,13 +2983,12 @@ static void svm_lbrv_test_guest2(void)
 	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
 
 	if (dbgctl != 0)
-		asm volatile("ud2\n");
-
-	if (rdmsr(MSR_IA32_LASTBRANCHFROMIP) != (u64)&host_branch2_from)
-		asm volatile("ud2\n");
-	if (rdmsr(MSR_IA32_LASTBRANCHTOIP) != (u64)&host_branch2_to)
-		asm volatile("ud2\n");
+		asm volatile ("ud2\n");
 
+	if (rdmsr(MSR_IA32_LASTBRANCHFROMIP) != (u64) & host_branch2_from)
+		asm volatile ("ud2\n");
+	if (rdmsr(MSR_IA32_LASTBRANCHTOIP) != (u64) & host_branch2_to)
+		asm volatile ("ud2\n");
 
 	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
 	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
@@ -3007,11 +2996,11 @@ static void svm_lbrv_test_guest2(void)
 	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
 
 	if (dbgctl != DEBUGCTLMSR_LBR)
-		asm volatile("ud2\n");
-	if (rdmsr(MSR_IA32_LASTBRANCHFROMIP) != (u64)&guest_branch2_from)
-		asm volatile("ud2\n");
-	if (rdmsr(MSR_IA32_LASTBRANCHTOIP) != (u64)&guest_branch2_to)
-		asm volatile("ud2\n");
+		asm volatile ("ud2\n");
+	if (rdmsr(MSR_IA32_LASTBRANCHFROMIP) != (u64) & guest_branch2_from)
+		asm volatile ("ud2\n");
+	if (rdmsr(MSR_IA32_LASTBRANCHTOIP) != (u64) & guest_branch2_to)
+		asm volatile ("ud2\n");
 
 	asm volatile ("vmmcall\n");
 }
@@ -3033,9 +3022,10 @@ static void svm_lbrv_test0(void)
 
 static void svm_lbrv_test1(void)
 {
-	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(1)");
+	report(true,
+	       "Test that without LBRV enabled, guest LBR state does 'leak' to the host(1)");
 
-	vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
+	vmcb->save.rip = (ulong) svm_lbrv_test_guest1;
 	vmcb->control.virt_ext = 0;
 
 	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
@@ -3045,7 +3035,7 @@ static void svm_lbrv_test1(void)
 
 	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
-		vmcb->control.exit_code);
+		       vmcb->control.exit_code);
 		return;
 	}
 
@@ -3055,9 +3045,10 @@ static void svm_lbrv_test1(void)
 
 static void svm_lbrv_test2(void)
 {
-	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(2)");
+	report(true,
+	       "Test that without LBRV enabled, guest LBR state does 'leak' to the host(2)");
 
-	vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
+	vmcb->save.rip = (ulong) svm_lbrv_test_guest2;
 	vmcb->control.virt_ext = 0;
 
 	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
@@ -3069,7 +3060,7 @@ static void svm_lbrv_test2(void)
 
 	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
-		vmcb->control.exit_code);
+		       vmcb->control.exit_code);
 		return;
 	}
 
@@ -3084,8 +3075,9 @@ static void svm_lbrv_nested_test1(void)
 		return;
 	}
 
-	report(true, "Test that with LBRV enabled, guest LBR state doesn't leak (1)");
-	vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
+	report(true,
+	       "Test that with LBRV enabled, guest LBR state doesn't leak (1)");
+	vmcb->save.rip = (ulong) svm_lbrv_test_guest1;
 	vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
 	vmcb->save.dbgctl = DEBUGCTLMSR_LBR;
 
@@ -3097,18 +3089,21 @@ static void svm_lbrv_nested_test1(void)
 
 	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
-		vmcb->control.exit_code);
+		       vmcb->control.exit_code);
 		return;
 	}
 
 	if (vmcb->save.dbgctl != 0) {
-		report(false, "unexpected virtual guest MSR_IA32_DEBUGCTLMSR value 0x%lx", vmcb->save.dbgctl);
+		report(false,
+		       "unexpected virtual guest MSR_IA32_DEBUGCTLMSR value 0x%lx",
+		       vmcb->save.dbgctl);
 		return;
 	}
 
 	check_dbgctl(dbgctl, DEBUGCTLMSR_LBR);
 	check_lbr(&host_branch3_from, &host_branch3_to);
 }
+
 static void svm_lbrv_nested_test2(void)
 {
 	if (!lbrv_supported()) {
@@ -3116,13 +3111,14 @@ static void svm_lbrv_nested_test2(void)
 		return;
 	}
 
-	report(true, "Test that with LBRV enabled, guest LBR state doesn't leak (2)");
-	vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
+	report(true,
+	       "Test that with LBRV enabled, guest LBR state doesn't leak (2)");
+	vmcb->save.rip = (ulong) svm_lbrv_test_guest2;
 	vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
 
 	vmcb->save.dbgctl = 0;
-	vmcb->save.br_from = (u64)&host_branch2_from;
-	vmcb->save.br_to = (u64)&host_branch2_to;
+	vmcb->save.br_from = (u64) & host_branch2_from;
+	vmcb->save.br_to = (u64) & host_branch2_to;
 
 	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
 	DO_BRANCH(host_branch4);
@@ -3132,7 +3128,7 @@ static void svm_lbrv_nested_test2(void)
 
 	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
-		vmcb->control.exit_code);
+		       vmcb->control.exit_code);
 		return;
 	}
 
@@ -3140,32 +3136,30 @@ static void svm_lbrv_nested_test2(void)
 	check_lbr(&host_branch4_from, &host_branch4_to);
 }
 
-
 // test that a nested guest which does enable INTR interception
 // but doesn't enable virtual interrupt masking works
 
 static volatile int dummy_isr_recevied;
-static void dummy_isr(isr_regs_t *regs)
+static void dummy_isr(isr_regs_t * regs)
 {
 	dummy_isr_recevied++;
 	eoi();
 }
 
-
 static volatile int nmi_recevied;
 static void dummy_nmi_handler(struct ex_regs *regs)
 {
 	nmi_recevied++;
 }
 
-
-static void svm_intr_intercept_mix_run_guest(volatile int *counter, int expected_vmexit)
+static void svm_intr_intercept_mix_run_guest(volatile int *counter,
+					     int expected_vmexit)
 {
 	if (counter)
 		*counter = 0;
 
-	sti();  // host IF value should not matter
-	clgi(); // vmrun will set back GI to 1
+	sti();			// host IF value should not matter
+	clgi();			// vmrun will set back GI to 1
 
 	svm_vmrun();
 
@@ -3177,19 +3171,20 @@ static void svm_intr_intercept_mix_run_guest(volatile int *counter, int expected
 	if (counter)
 		report(*counter == 1, "Interrupt is expected");
 
-	report (vmcb->control.exit_code == expected_vmexit, "Test expected VM exit");
-	report(vmcb->save.rflags & X86_EFLAGS_IF, "Guest should have EFLAGS.IF set now");
+	report(vmcb->control.exit_code == expected_vmexit,
+	       "Test expected VM exit");
+	report(vmcb->save.rflags & X86_EFLAGS_IF,
+	       "Guest should have EFLAGS.IF set now");
 	cli();
 }
 
-
 // subtest: test that enabling EFLAGS.IF is enought to trigger an interrupt
 static void svm_intr_intercept_mix_if_guest(struct svm_test *test)
 {
-	asm volatile("nop;nop;nop;nop");
+	asm volatile ("nop;nop;nop;nop");
 	report(!dummy_isr_recevied, "No interrupt expected");
 	sti();
-	asm volatile("nop");
+	asm volatile ("nop");
 	report(0, "must not reach here");
 }
 
@@ -3203,28 +3198,28 @@ static void svm_intr_intercept_mix_if(void)
 	vmcb->save.rflags &= ~X86_EFLAGS_IF;
 
 	test_set_guest(svm_intr_intercept_mix_if_guest);
-	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0);
+	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED |
+		       0x55, 0);
 	svm_intr_intercept_mix_run_guest(&dummy_isr_recevied, SVM_EXIT_INTR);
 }
 
-
 // subtest: test that a clever guest can trigger an interrupt by setting GIF
 // if GIF is not intercepted
 static void svm_intr_intercept_mix_gif_guest(struct svm_test *test)
 {
 
-	asm volatile("nop;nop;nop;nop");
+	asm volatile ("nop;nop;nop;nop");
 	report(!dummy_isr_recevied, "No interrupt expected");
 
 	// clear GIF and enable IF
 	// that should still not cause VM exit
 	clgi();
 	sti();
-	asm volatile("nop");
+	asm volatile ("nop");
 	report(!dummy_isr_recevied, "No interrupt expected");
 
 	stgi();
-	asm volatile("nop");
+	asm volatile ("nop");
 	report(0, "must not reach here");
 }
 
@@ -3237,26 +3232,26 @@ static void svm_intr_intercept_mix_gif(void)
 	vmcb->save.rflags &= ~X86_EFLAGS_IF;
 
 	test_set_guest(svm_intr_intercept_mix_gif_guest);
-	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0);
+	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED |
+		       0x55, 0);
 	svm_intr_intercept_mix_run_guest(&dummy_isr_recevied, SVM_EXIT_INTR);
 }
 
-
-
 // subtest: test that a clever guest can trigger an interrupt by setting GIF
 // if GIF is not intercepted and interrupt comes after guest
 // started running
 static void svm_intr_intercept_mix_gif_guest2(struct svm_test *test)
 {
-	asm volatile("nop;nop;nop;nop");
+	asm volatile ("nop;nop;nop;nop");
 	report(!dummy_isr_recevied, "No interrupt expected");
 
 	clgi();
-	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0);
+	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED |
+		       0x55, 0);
 	report(!dummy_isr_recevied, "No interrupt expected");
 
 	stgi();
-	asm volatile("nop");
+	asm volatile ("nop");
 	report(0, "must not reach here");
 }
 
@@ -3272,23 +3267,22 @@ static void svm_intr_intercept_mix_gif2(void)
 	svm_intr_intercept_mix_run_guest(&dummy_isr_recevied, SVM_EXIT_INTR);
 }
 
-
 // subtest: test that pending NMI will be handled when guest enables GIF
 static void svm_intr_intercept_mix_nmi_guest(struct svm_test *test)
 {
-	asm volatile("nop;nop;nop;nop");
+	asm volatile ("nop;nop;nop;nop");
 	report(!nmi_recevied, "No NMI expected");
-	cli(); // should have no effect
+	cli();			// should have no effect
 
 	clgi();
-	asm volatile("nop");
+	asm volatile ("nop");
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_NMI, 0);
-	sti(); // should have no effect
-	asm volatile("nop");
+	sti();			// should have no effect
+	asm volatile ("nop");
 	report(!nmi_recevied, "No NMI expected");
 
 	stgi();
-	asm volatile("nop");
+	asm volatile ("nop");
 	report(0, "must not reach here");
 }
 
@@ -3309,15 +3303,15 @@ static void svm_intr_intercept_mix_nmi(void)
 // and VMexits on SMI
 static void svm_intr_intercept_mix_smi_guest(struct svm_test *test)
 {
-	asm volatile("nop;nop;nop;nop");
+	asm volatile ("nop;nop;nop;nop");
 
 	clgi();
-	asm volatile("nop");
+	asm volatile ("nop");
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_SMI, 0);
-	sti(); // should have no effect
-	asm volatile("nop");
+	sti();			// should have no effect
+	asm volatile ("nop");
 	stgi();
-	asm volatile("nop");
+	asm volatile ("nop");
 	report(0, "must not reach here");
 }
 
@@ -3331,121 +3325,121 @@ static void svm_intr_intercept_mix_smi(void)
 
 int main(int ac, char **av)
 {
-    setup_vm();
-    return run_svm_tests(ac, av);
+	setup_vm();
+	return run_svm_tests(ac, av);
 }
 
 struct svm_test svm_tests[] = {
-    { "null", default_supported, default_prepare,
-      default_prepare_gif_clear, null_test,
-      default_finished, null_check },
-    { "vmrun", default_supported, default_prepare,
-      default_prepare_gif_clear, test_vmrun,
-       default_finished, check_vmrun },
-    { "ioio", default_supported, prepare_ioio,
-       default_prepare_gif_clear, test_ioio,
-       ioio_finished, check_ioio },
-    { "vmrun intercept check", default_supported, prepare_no_vmrun_int,
-      default_prepare_gif_clear, null_test, default_finished,
-      check_no_vmrun_int },
-    { "rsm", default_supported,
-      prepare_rsm_intercept, default_prepare_gif_clear,
-      test_rsm_intercept, finished_rsm_intercept, check_rsm_intercept },
-    { "cr3 read intercept", default_supported,
-      prepare_cr3_intercept, default_prepare_gif_clear,
-      test_cr3_intercept, default_finished, check_cr3_intercept },
-    { "cr3 read nointercept", default_supported, default_prepare,
-      default_prepare_gif_clear, test_cr3_intercept, default_finished,
-      check_cr3_nointercept },
-    { "cr3 read intercept emulate", smp_supported,
-      prepare_cr3_intercept_bypass, default_prepare_gif_clear,
-      test_cr3_intercept_bypass, default_finished, check_cr3_intercept },
-    { "dr intercept check", default_supported, prepare_dr_intercept,
-      default_prepare_gif_clear, test_dr_intercept, dr_intercept_finished,
-      check_dr_intercept },
-    { "next_rip", next_rip_supported, prepare_next_rip,
-      default_prepare_gif_clear, test_next_rip,
-      default_finished, check_next_rip },
-    { "msr intercept check", default_supported, prepare_msr_intercept,
-      default_prepare_gif_clear, test_msr_intercept,
-      msr_intercept_finished, check_msr_intercept },
-    { "mode_switch", default_supported, prepare_mode_switch,
-      default_prepare_gif_clear, test_mode_switch,
-       mode_switch_finished, check_mode_switch },
-    { "asid_zero", default_supported, prepare_asid_zero,
-      default_prepare_gif_clear, test_asid_zero,
-       default_finished, check_asid_zero },
-    { "sel_cr0_bug", default_supported, sel_cr0_bug_prepare,
-      default_prepare_gif_clear, sel_cr0_bug_test,
-       sel_cr0_bug_finished, sel_cr0_bug_check },
-    { "tsc_adjust", tsc_adjust_supported, tsc_adjust_prepare,
-      default_prepare_gif_clear, tsc_adjust_test,
-      default_finished, tsc_adjust_check },
-    { "latency_run_exit", default_supported, latency_prepare,
-      default_prepare_gif_clear, latency_test,
-      latency_finished, latency_check },
-    { "latency_run_exit_clean", default_supported, latency_prepare,
-      default_prepare_gif_clear, latency_test,
-      latency_finished_clean, latency_check },
-    { "latency_svm_insn", default_supported, lat_svm_insn_prepare,
-      default_prepare_gif_clear, null_test,
-      lat_svm_insn_finished, lat_svm_insn_check },
-    { "exc_inject", default_supported, exc_inject_prepare,
-      default_prepare_gif_clear, exc_inject_test,
-      exc_inject_finished, exc_inject_check },
-    { "pending_event", default_supported, pending_event_prepare,
-      default_prepare_gif_clear,
-      pending_event_test, pending_event_finished, pending_event_check },
-    { "pending_event_cli", default_supported, pending_event_cli_prepare,
-      pending_event_cli_prepare_gif_clear,
-      pending_event_cli_test, pending_event_cli_finished,
-      pending_event_cli_check },
-    { "interrupt", default_supported, interrupt_prepare,
-      default_prepare_gif_clear, interrupt_test,
-      interrupt_finished, interrupt_check },
-    { "nmi", default_supported, nmi_prepare,
-      default_prepare_gif_clear, nmi_test,
-      nmi_finished, nmi_check },
-    { "nmi_hlt", smp_supported, nmi_prepare,
-      default_prepare_gif_clear, nmi_hlt_test,
-      nmi_hlt_finished, nmi_hlt_check },
-    { "virq_inject", default_supported, virq_inject_prepare,
-      default_prepare_gif_clear, virq_inject_test,
-      virq_inject_finished, virq_inject_check },
-    { "reg_corruption", default_supported, reg_corruption_prepare,
-      default_prepare_gif_clear, reg_corruption_test,
-      reg_corruption_finished, reg_corruption_check },
-    { "svm_init_startup_test", smp_supported, init_startup_prepare,
-      default_prepare_gif_clear, null_test,
-      init_startup_finished, init_startup_check },
-    { "svm_init_intercept_test", smp_supported, init_intercept_prepare,
-      default_prepare_gif_clear, init_intercept_test,
-      init_intercept_finished, init_intercept_check, .on_vcpu = 2 },
-    { "host_rflags", default_supported, host_rflags_prepare,
-      host_rflags_prepare_gif_clear, host_rflags_test,
-      host_rflags_finished, host_rflags_check },
-    { "vgif", vgif_supported, prepare_vgif_enabled,
-      default_prepare_gif_clear, test_vgif, vgif_finished,
-      vgif_check },
-    TEST(svm_cr4_osxsave_test),
-    TEST(svm_guest_state_test),
-    TEST(svm_vmrun_errata_test),
-    TEST(svm_vmload_vmsave),
-    TEST(svm_test_singlestep),
-    TEST(svm_nm_test),
-    TEST(svm_int3_test),
-    TEST(svm_into_test),
-    TEST(svm_lbrv_test0),
-    TEST(svm_lbrv_test1),
-    TEST(svm_lbrv_test2),
-    TEST(svm_lbrv_nested_test1),
-    TEST(svm_lbrv_nested_test2),
-    TEST(svm_intr_intercept_mix_if),
-    TEST(svm_intr_intercept_mix_gif),
-    TEST(svm_intr_intercept_mix_gif2),
-    TEST(svm_intr_intercept_mix_nmi),
-    TEST(svm_intr_intercept_mix_smi),
-    TEST(svm_tsc_scale_test),
-    TEST(pause_filter_test),
-    { NULL, NULL, NULL, NULL, NULL, NULL, NULL }
+	{ "null", default_supported, default_prepare,
+	 default_prepare_gif_clear, null_test,
+	 default_finished, null_check },
+	{ "vmrun", default_supported, default_prepare,
+	 default_prepare_gif_clear, test_vmrun,
+	 default_finished, check_vmrun },
+	{ "ioio", default_supported, prepare_ioio,
+	 default_prepare_gif_clear, test_ioio,
+	 ioio_finished, check_ioio },
+	{ "vmrun intercept check", default_supported, prepare_no_vmrun_int,
+	 default_prepare_gif_clear, null_test, default_finished,
+	 check_no_vmrun_int },
+	{ "rsm", default_supported,
+	 prepare_rsm_intercept, default_prepare_gif_clear,
+	 test_rsm_intercept, finished_rsm_intercept, check_rsm_intercept },
+	{ "cr3 read intercept", default_supported,
+	 prepare_cr3_intercept, default_prepare_gif_clear,
+	 test_cr3_intercept, default_finished, check_cr3_intercept },
+	{ "cr3 read nointercept", default_supported, default_prepare,
+	 default_prepare_gif_clear, test_cr3_intercept, default_finished,
+	 check_cr3_nointercept },
+	{ "cr3 read intercept emulate", smp_supported,
+	 prepare_cr3_intercept_bypass, default_prepare_gif_clear,
+	 test_cr3_intercept_bypass, default_finished, check_cr3_intercept },
+	{ "dr intercept check", default_supported, prepare_dr_intercept,
+	 default_prepare_gif_clear, test_dr_intercept, dr_intercept_finished,
+	 check_dr_intercept },
+	{ "next_rip", next_rip_supported, prepare_next_rip,
+	 default_prepare_gif_clear, test_next_rip,
+	 default_finished, check_next_rip },
+	{ "msr intercept check", default_supported, prepare_msr_intercept,
+	 default_prepare_gif_clear, test_msr_intercept,
+	 msr_intercept_finished, check_msr_intercept },
+	{ "mode_switch", default_supported, prepare_mode_switch,
+	 default_prepare_gif_clear, test_mode_switch,
+	 mode_switch_finished, check_mode_switch },
+	{ "asid_zero", default_supported, prepare_asid_zero,
+	 default_prepare_gif_clear, test_asid_zero,
+	 default_finished, check_asid_zero },
+	{ "sel_cr0_bug", default_supported, sel_cr0_bug_prepare,
+	 default_prepare_gif_clear, sel_cr0_bug_test,
+	 sel_cr0_bug_finished, sel_cr0_bug_check },
+	{ "tsc_adjust", tsc_adjust_supported, tsc_adjust_prepare,
+	 default_prepare_gif_clear, tsc_adjust_test,
+	 default_finished, tsc_adjust_check },
+	{ "latency_run_exit", default_supported, latency_prepare,
+	 default_prepare_gif_clear, latency_test,
+	 latency_finished, latency_check },
+	{ "latency_run_exit_clean", default_supported, latency_prepare,
+	 default_prepare_gif_clear, latency_test,
+	 latency_finished_clean, latency_check },
+	{ "latency_svm_insn", default_supported, lat_svm_insn_prepare,
+	 default_prepare_gif_clear, null_test,
+	 lat_svm_insn_finished, lat_svm_insn_check },
+	{ "exc_inject", default_supported, exc_inject_prepare,
+	 default_prepare_gif_clear, exc_inject_test,
+	 exc_inject_finished, exc_inject_check },
+	{ "pending_event", default_supported, pending_event_prepare,
+	 default_prepare_gif_clear,
+	 pending_event_test, pending_event_finished, pending_event_check },
+	{ "pending_event_cli", default_supported, pending_event_cli_prepare,
+	 pending_event_cli_prepare_gif_clear,
+	 pending_event_cli_test, pending_event_cli_finished,
+	 pending_event_cli_check },
+	{ "interrupt", default_supported, interrupt_prepare,
+	 default_prepare_gif_clear, interrupt_test,
+	 interrupt_finished, interrupt_check },
+	{ "nmi", default_supported, nmi_prepare,
+	 default_prepare_gif_clear, nmi_test,
+	 nmi_finished, nmi_check },
+	{ "nmi_hlt", smp_supported, nmi_prepare,
+	 default_prepare_gif_clear, nmi_hlt_test,
+	 nmi_hlt_finished, nmi_hlt_check },
+	{ "virq_inject", default_supported, virq_inject_prepare,
+	 default_prepare_gif_clear, virq_inject_test,
+	 virq_inject_finished, virq_inject_check },
+	{ "reg_corruption", default_supported, reg_corruption_prepare,
+	 default_prepare_gif_clear, reg_corruption_test,
+	 reg_corruption_finished, reg_corruption_check },
+	{ "svm_init_startup_test", smp_supported, init_startup_prepare,
+	 default_prepare_gif_clear, null_test,
+	 init_startup_finished, init_startup_check },
+	{ "svm_init_intercept_test", smp_supported, init_intercept_prepare,
+	 default_prepare_gif_clear, init_intercept_test,
+	 init_intercept_finished, init_intercept_check,.on_vcpu = 2 },
+	{ "host_rflags", default_supported, host_rflags_prepare,
+	 host_rflags_prepare_gif_clear, host_rflags_test,
+	 host_rflags_finished, host_rflags_check },
+	{ "vgif", vgif_supported, prepare_vgif_enabled,
+	 default_prepare_gif_clear, test_vgif, vgif_finished,
+	 vgif_check },
+	TEST(svm_cr4_osxsave_test),
+	TEST(svm_guest_state_test),
+	TEST(svm_vmrun_errata_test),
+	TEST(svm_vmload_vmsave),
+	TEST(svm_test_singlestep),
+	TEST(svm_nm_test),
+	TEST(svm_int3_test),
+	TEST(svm_into_test),
+	TEST(svm_lbrv_test0),
+	TEST(svm_lbrv_test1),
+	TEST(svm_lbrv_test2),
+	TEST(svm_lbrv_nested_test1),
+	TEST(svm_lbrv_nested_test2),
+	TEST(svm_intr_intercept_mix_if),
+	TEST(svm_intr_intercept_mix_gif),
+	TEST(svm_intr_intercept_mix_gif2),
+	TEST(svm_intr_intercept_mix_nmi),
+	TEST(svm_intr_intercept_mix_smi),
+	TEST(svm_tsc_scale_test),
+	TEST(pause_filter_test),
+	{ NULL, NULL, NULL, NULL, NULL, NULL, NULL }
 };
-- 
2.30.2



* Re: [kvm-unit-tests RESEND PATCH v3 8/8] x86: nSVM: Correct indentation for svm_tests.c part-2
  2022-04-25 11:44 ` [kvm-unit-tests RESEND PATCH v3 8/8] x86: nSVM: Correct indentation for svm_tests.c part-2 Manali Shukla
@ 2022-04-25 13:29   ` Paolo Bonzini
  2022-04-27  3:36     ` Shukla, Manali
  0 siblings, 1 reply; 13+ messages in thread
From: Paolo Bonzini @ 2022-04-25 13:29 UTC (permalink / raw)
  To: Manali Shukla, seanjc; +Cc: kvm

On 4/25/22 13:44, Manali Shukla wrote:
> +			if (r->rip == (u64) & vmrun_rip) {

This reindentation is wrong (several instances below).
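
To spell out the problem (an illustrative before/after built from the
instance quoted above): the whitespace does not change the parse -- both
forms cast the address of vmrun_rip to u64 -- but the inserted space
makes the unary address-of read like a binary AND:

	if (r->rip == (u64)&vmrun_rip) {	/* original: cast of &vmrun_rip */
	if (r->rip == (u64) & vmrun_rip) {	/* Lindent output: reads as bitwise '&' */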

Paolo



* Re: [kvm-unit-tests RESEND PATCH v3 0/8] Move npt test cases and NPT code improvements
  2022-04-25 11:44 [kvm-unit-tests RESEND PATCH v3 0/8] Move npt test cases and NPT code improvements Manali Shukla
                   ` (7 preceding siblings ...)
  2022-04-25 11:44 ` [kvm-unit-tests RESEND PATCH v3 8/8] x86: nSVM: Correct indentation for svm_tests.c part-2 Manali Shukla
@ 2022-04-25 13:31 ` Paolo Bonzini
  2022-04-25 14:48   ` Sean Christopherson
  8 siblings, 1 reply; 13+ messages in thread
From: Paolo Bonzini @ 2022-04-25 13:31 UTC (permalink / raw)
  To: Manali Shukla, seanjc; +Cc: kvm

On 4/25/22 13:44, Manali Shukla wrote:
> If __setup_vm() is changed to setup_vm(), KUT will build tests with
> PT_USER_MASK set on all PTEs. It is a better idea to move nNPT tests
> to their own file so that tests don't need to fiddle with page tables midway.

Sorry, I have already asked this but I don't understand: why is it 
problematic to have PT_USER_MASK set on all PTEs, since you have a patch 
(3) to "allow nSVM tests to run with PT_USER_MASK enabled"?

Paolo



* Re: [kvm-unit-tests RESEND PATCH v3 0/8] Move npt test cases and NPT code improvements
  2022-04-25 13:31 ` [kvm-unit-tests RESEND PATCH v3 0/8] Move npt test cases and NPT code improvements Paolo Bonzini
@ 2022-04-25 14:48   ` Sean Christopherson
  0 siblings, 0 replies; 13+ messages in thread
From: Sean Christopherson @ 2022-04-25 14:48 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Manali Shukla, kvm

On Mon, Apr 25, 2022, Paolo Bonzini wrote:
> On 4/25/22 13:44, Manali Shukla wrote:
> > If __setup_vm() is changed to setup_vm(), KUT will build tests with
> > PT_USER_MASK set on all PTEs. It is a better idea to move nNPT tests
> > to their own file so that tests don't need to fiddle with page tables midway.
> 
> Sorry, I have already asked this but I don't understand: why is it
> problematic to have PT_USER_MASK set on all PTEs, since you have a patch (3)
> to "allow nSVM tests to run with PT_USER_MASK enabled"?

svm_npt_rsvd_bits_test() intentionally sets hCR4.SMEP=1 to verify that KVM doesn't
consume the host's value for the guest.  Having USER set on PTEs causes the test
to hit SMEP violations.
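
A rough sketch of the failure mode (the helpers read_cr4()/write_cr4()
and the X86_CR4_SMEP constant are assumed from the usual lib/x86
headers, not taken from this series):

	static void smep_vs_user_ptes(void)	/* hypothetical illustration */
	{
		ulong cr4 = read_cr4();

		write_cr4(cr4 | X86_CR4_SMEP);	/* hCR4.SMEP=1, as the test does */
		/*
		 * Running at CPL0: if setup_vm() set PT_USER_MASK on every
		 * PTE, the code on this very page is user-accessible, so the
		 * next supervisor-mode instruction fetch takes a SMEP #PF,
		 * long before the NPT reserved-bit checks are exercised.
		 */
		write_cr4(cr4);			/* restore host CR4 */
	}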


* Re: [kvm-unit-tests RESEND PATCH v3 8/8] x86: nSVM: Correct indentation for svm_tests.c part-2
  2022-04-25 13:29   ` Paolo Bonzini
@ 2022-04-27  3:36     ` Shukla, Manali
  0 siblings, 0 replies; 13+ messages in thread
From: Shukla, Manali @ 2022-04-27  3:36 UTC (permalink / raw)
  To: Paolo Bonzini, Manali Shukla, seanjc; +Cc: kvm



On 4/25/2022 6:59 PM, Paolo Bonzini wrote:
> On 4/25/22 13:44, Manali Shukla wrote:
>> +            if (r->rip == (u64) & vmrun_rip) {
> 
> This reindentation is wrong (several instances below).
> 
> Paolo
> 

Thanks for the review, Paolo.
I will correct the indentation in the flagged instances and resend the series.

Manali


