* [kvm-unit-tests PATCH v2 0/4] Move npt test cases and NPT code improvements
@ 2022-03-24  5:30 Manali Shukla
  2022-03-24  5:30 ` [kvm-unit-tests PATCH v2 1/4] x86: nSVM: Move common functionality of the main() to helper run_svm_tests Manali Shukla
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Manali Shukla @ 2022-03-24  5:30 UTC (permalink / raw)
  To: pbonzini, seanjc; +Cc: kvm

If __setup_vm() is changed to setup_vm(), KUT will build tests with PT_USER_MASK set on all
PTEs. It is better to move the nNPT tests to their own file so that the other tests don't
need to fiddle with page tables midway through.

The quick approach is to turn the current main() into a small helper, minus its call
to __setup_vm().

The current implementation of the nested page table builds the page table statically
with 2048 PTEs and one pml4 entry. With the newly implemented routine, the nested page
table is built dynamically based on the RAM size of the VM, which enables us to have
separate memory ranges to test various NPT test cases.

Based on this implementation, minimal changes were required in the following existing
APIs: npt_get_pde(), npt_get_pte(), npt_get_pdpe().
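
For reference, the resulting prototypes (see patch 4/4); only npt_get_pdpe() changes shape:

	u64 *npt_get_pte(u64 address);
	u64 *npt_get_pde(u64 address);
	u64 *npt_get_pdpe(u64 address);	/* was npt_get_pdpe(void) */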

v1 -> v2:
Added a new patch to build the nested page table dynamically, and made the minimal
changes required to adapt it to the old test cases.

There are four patches in this series:
1) Turned the current main() into a helper function, minus setup_vm().
2) Moved all nNPT test cases from svm_tests.c to svm_npt.c.
3) Enabled PT_USER_MASK for all nSVM test cases other than the nNPT tests.
4) Implemented a routine to build up the nested page table dynamically.


Manali Shukla (4):
  x86: nSVM: Move common functionality of the main() to helper
    run_svm_tests
  x86: nSVM: Move all nNPT test cases from svm_tests.c to a separate
    file.
  x86: nSVM: Allow nSVM tests run with PT_USER_MASK enabled
  x86: nSVM: Build up the nested page table dynamically

 x86/Makefile.common |   2 +
 x86/Makefile.x86_64 |   2 +
 x86/svm.c           | 169 ++++++++++++-------
 x86/svm.h           |  18 ++-
 x86/svm_npt.c       | 386 ++++++++++++++++++++++++++++++++++++++++++++
 x86/svm_tests.c     | 369 +-----------------------------------------
 6 files changed, 526 insertions(+), 420 deletions(-)
 create mode 100644 x86/svm_npt.c

-- 
2.30.2



* [kvm-unit-tests PATCH v2 1/4] x86: nSVM: Move common functionality of the main() to helper run_svm_tests
  2022-03-24  5:30 [kvm-unit-tests PATCH v2 0/4] Move npt test cases and NPT code improvements Manali Shukla
@ 2022-03-24  5:30 ` Manali Shukla
  2022-04-13 20:28   ` Sean Christopherson
  2022-03-24  5:30 ` [kvm-unit-tests PATCH v2 2/4] x86: nSVM: Move all nNPT test cases from svm_tests.c to a separate file Manali Shukla
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: Manali Shukla @ 2022-03-24  5:30 UTC (permalink / raw)
  To: pbonzini, seanjc; +Cc: kvm

nSVM tests are "incompatible" with usermode due to the
__setup_vm() call in the main() function.

If __setup_vm() is replaced with setup_vm() in main(), KUT
will build the test with PT_USER_MASK set on all PTEs.

nNPT tests will be moved to their own file so that the tests
don't need to fiddle with page tables midway through.

The quick and dirty approach would be to turn the current main()
into a small helper, minus its call to __setup_vm(), and call the
helper function run_svm_tests() from main().

No functional change intended.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Manali Shukla <manali.shukla@amd.com>
---
 x86/svm.c | 14 +++++++++-----
 x86/svm.h |  1 +
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/x86/svm.c b/x86/svm.c
index 3f94b2a..e93e780 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -406,17 +406,13 @@ test_wanted(const char *name, char *filters[], int filter_count)
         }
 }
 
-int main(int ac, char **av)
+int run_svm_tests(int ac, char **av)
 {
-	/* Omit PT_USER_MASK to allow tested host.CR4.SMEP=1. */
-	pteval_t opt_mask = 0;
 	int i = 0;
 
 	ac--;
 	av++;
 
-	__setup_vm(&opt_mask);
-
 	if (!this_cpu_has(X86_FEATURE_SVM)) {
 		printf("SVM not availble\n");
 		return report_summary();
@@ -453,3 +449,11 @@ int main(int ac, char **av)
 
 	return report_summary();
 }
+
+int main(int ac, char **av)
+{
+    pteval_t opt_mask = 0;
+
+    __setup_vm(&opt_mask);
+    return run_svm_tests(ac, av);
+}
diff --git a/x86/svm.h b/x86/svm.h
index f74b13a..9ab3aa5 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -398,6 +398,7 @@ struct regs {
 
 typedef void (*test_guest_func)(struct svm_test *);
 
+int run_svm_tests(int ac, char **av);
 u64 *npt_get_pte(u64 address);
 u64 *npt_get_pde(u64 address);
 u64 *npt_get_pdpe(void);
-- 
2.30.2



* [kvm-unit-tests PATCH v2 2/4] x86: nSVM: Move all nNPT test cases from svm_tests.c to a separate file.
  2022-03-24  5:30 [kvm-unit-tests PATCH v2 0/4] Move npt test cases and NPT code improvements Manali Shukla
  2022-03-24  5:30 ` [kvm-unit-tests PATCH v2 1/4] x86: nSVM: Move common functionality of the main() to helper run_svm_tests Manali Shukla
@ 2022-03-24  5:30 ` Manali Shukla
  2022-03-24  5:30 ` [kvm-unit-tests PATCH v2 3/4] x86: nSVM: Allow nSVM tests run with PT_USER_MASK enabled Manali Shukla
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Manali Shukla @ 2022-03-24  5:30 UTC (permalink / raw)
  To: pbonzini, seanjc; +Cc: kvm

The nNPT test cases are moved to a separate file, svm_npt.c,
so that they can be run independently with PT_USER_MASK disabled.

The rest of the test cases can then be run with PT_USER_MASK enabled.

No functional change intended.
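
With the new unittests.cfg entry (see below), the nNPT tests can be run
standalone, e.g. (assuming the standard KUT runner):

	./run_tests.sh svm_npt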

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Manali Shukla <manali.shukla@amd.com>
---
 x86/Makefile.common |   2 +
 x86/Makefile.x86_64 |   2 +
 x86/svm.c           |   8 -
 x86/svm_npt.c       | 386 ++++++++++++++++++++++++++++++++++++++++++++
 x86/svm_tests.c     | 372 ++----------------------------------------
 x86/unittests.cfg   |   6 +
 6 files changed, 405 insertions(+), 371 deletions(-)
 create mode 100644 x86/svm_npt.c

diff --git a/x86/Makefile.common b/x86/Makefile.common
index b903988..5590afe 100644
--- a/x86/Makefile.common
+++ b/x86/Makefile.common
@@ -107,6 +107,8 @@ $(TEST_DIR)/access_test.$(bin): $(TEST_DIR)/access.o
 
 $(TEST_DIR)/vmx.$(bin): $(TEST_DIR)/access.o
 
+$(TEST_DIR)/svm_npt.$(bin): $(TEST_DIR)/svm.o
+
 $(TEST_DIR)/kvmclock_test.$(bin): $(TEST_DIR)/kvmclock.o
 
 $(TEST_DIR)/hyperv_synic.$(bin): $(TEST_DIR)/hyperv.o
diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64
index f18c1e2..dbe5967 100644
--- a/x86/Makefile.x86_64
+++ b/x86/Makefile.x86_64
@@ -42,6 +42,7 @@ endif
 ifneq ($(CONFIG_EFI),y)
 tests += $(TEST_DIR)/access_test.$(exe)
 tests += $(TEST_DIR)/svm.$(exe)
+tests += $(TEST_DIR)/svm_npt.$(exe)
 tests += $(TEST_DIR)/vmx.$(exe)
 endif
 
@@ -55,3 +56,4 @@ $(TEST_DIR)/hyperv_clock.$(bin): $(TEST_DIR)/hyperv_clock.o
 
 $(TEST_DIR)/vmx.$(bin): $(TEST_DIR)/vmx_tests.o
 $(TEST_DIR)/svm.$(bin): $(TEST_DIR)/svm_tests.o
+$(TEST_DIR)/svm_npt.$(bin): $(TEST_DIR)/svm_npt.o
diff --git a/x86/svm.c b/x86/svm.c
index e93e780..d0d523a 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -449,11 +449,3 @@ int run_svm_tests(int ac, char **av)
 
 	return report_summary();
 }
-
-int main(int ac, char **av)
-{
-    pteval_t opt_mask = 0;
-
-    __setup_vm(&opt_mask);
-    return run_svm_tests(ac, av);
-}
diff --git a/x86/svm_npt.c b/x86/svm_npt.c
new file mode 100644
index 0000000..4f80d9a
--- /dev/null
+++ b/x86/svm_npt.c
@@ -0,0 +1,386 @@
+#include "svm.h"
+#include "vm.h"
+#include "alloc_page.h"
+#include "vmalloc.h"
+
+static void *scratch_page;
+
+static void null_test(struct svm_test *test)
+{
+}
+
+static void npt_np_prepare(struct svm_test *test)
+{
+    u64 *pte;
+
+    scratch_page = alloc_page();
+    pte = npt_get_pte((u64)scratch_page);
+
+    *pte &= ~1ULL;
+}
+
+static void npt_np_test(struct svm_test *test)
+{
+    (void) *(volatile u64 *)scratch_page;
+}
+
+static bool npt_np_check(struct svm_test *test)
+{
+    u64 *pte = npt_get_pte((u64)scratch_page);
+
+    *pte |= 1ULL;
+
+    return (vmcb->control.exit_code == SVM_EXIT_NPF)
+           && (vmcb->control.exit_info_1 == 0x100000004ULL);
+}
+
+static void npt_nx_prepare(struct svm_test *test)
+{
+    u64 *pte;
+
+    test->scratch = rdmsr(MSR_EFER);
+    wrmsr(MSR_EFER, test->scratch | EFER_NX);
+
+    /* Clear the guest's EFER.NX, it should not affect NPT behavior. */
+    vmcb->save.efer &= ~EFER_NX;
+
+    pte = npt_get_pte((u64)null_test);
+
+    *pte |= PT64_NX_MASK;
+}
+
+static bool npt_nx_check(struct svm_test *test)
+{
+    u64 *pte = npt_get_pte((u64)null_test);
+
+    wrmsr(MSR_EFER, test->scratch);
+
+    *pte &= ~PT64_NX_MASK;
+
+    return (vmcb->control.exit_code == SVM_EXIT_NPF)
+           && (vmcb->control.exit_info_1 == 0x100000015ULL);
+}
+
+static void npt_us_prepare(struct svm_test *test)
+{
+    u64 *pte;
+
+    scratch_page = alloc_page();
+    pte = npt_get_pte((u64)scratch_page);
+
+    *pte &= ~(1ULL << 2);
+}
+
+static void npt_us_test(struct svm_test *test)
+{
+    (void) *(volatile u64 *)scratch_page;
+}
+
+static bool npt_us_check(struct svm_test *test)
+{
+    u64 *pte = npt_get_pte((u64)scratch_page);
+
+    *pte |= (1ULL << 2);
+
+    return (vmcb->control.exit_code == SVM_EXIT_NPF)
+           && (vmcb->control.exit_info_1 == 0x100000005ULL);
+}
+
+static void npt_rw_prepare(struct svm_test *test)
+{
+
+    u64 *pte;
+
+    pte = npt_get_pte(0x80000);
+
+    *pte &= ~(1ULL << 1);
+}
+
+static void npt_rw_test(struct svm_test *test)
+{
+    u64 *data = (void*)(0x80000);
+
+    *data = 0;
+}
+
+static bool npt_rw_check(struct svm_test *test)
+{
+    u64 *pte = npt_get_pte(0x80000);
+
+    *pte |= (1ULL << 1);
+
+    return (vmcb->control.exit_code == SVM_EXIT_NPF)
+        && (vmcb->control.exit_info_1 == 0x100000007ULL);
+}
+
+static void npt_rw_pfwalk_prepare(struct svm_test *test)
+{
+
+    u64 *pte;
+
+    pte = npt_get_pte(read_cr3());
+
+    *pte &= ~(1ULL << 1);
+}
+
+static bool npt_rw_pfwalk_check(struct svm_test *test)
+{
+    u64 *pte = npt_get_pte(read_cr3());
+
+    *pte |= (1ULL << 1);
+
+    return (vmcb->control.exit_code == SVM_EXIT_NPF)
+           && (vmcb->control.exit_info_1 == 0x200000007ULL)
+       && (vmcb->control.exit_info_2 == read_cr3());
+}
+
+static void npt_l1mmio_prepare(struct svm_test *test)
+{
+}
+
+u32 nested_apic_version1;
+u32 nested_apic_version2;
+
+static void npt_l1mmio_test(struct svm_test *test)
+{
+    volatile u32 *data = (volatile void*)(0xfee00030UL);
+
+    nested_apic_version1 = *data;
+    nested_apic_version2 = *data;
+}
+
+static bool npt_l1mmio_check(struct svm_test *test)
+{
+    volatile u32 *data = (volatile void*)(0xfee00030);
+    u32 lvr = *data;
+
+    return nested_apic_version1 == lvr && nested_apic_version2 == lvr;
+}
+
+static void npt_rw_l1mmio_prepare(struct svm_test *test)
+{
+
+    u64 *pte;
+
+    pte = npt_get_pte(0xfee00080);
+
+    *pte &= ~(1ULL << 1);
+}
+
+static void npt_rw_l1mmio_test(struct svm_test *test)
+{
+    volatile u32 *data = (volatile void*)(0xfee00080);
+
+    *data = *data;
+}
+
+static bool npt_rw_l1mmio_check(struct svm_test *test)
+{
+    u64 *pte = npt_get_pte(0xfee00080);
+
+    *pte |= (1ULL << 1);
+
+    return (vmcb->control.exit_code == SVM_EXIT_NPF)
+           && (vmcb->control.exit_info_1 == 0x100000007ULL);
+}
+
+static void basic_guest_main(struct svm_test *test)
+{
+}
+
+static void __svm_npt_rsvd_bits_test(u64 *pxe, u64 rsvd_bits, u64 efer,
+                     ulong cr4, u64 guest_efer, ulong guest_cr4)
+{
+    u64 pxe_orig = *pxe;
+    int exit_reason;
+    u64 pfec;
+
+    wrmsr(MSR_EFER, efer);
+    write_cr4(cr4);
+
+    vmcb->save.efer = guest_efer;
+    vmcb->save.cr4  = guest_cr4;
+
+    *pxe |= rsvd_bits;
+
+    exit_reason = svm_vmrun();
+
+    report(exit_reason == SVM_EXIT_NPF,
+           "Wanted #NPF on rsvd bits = 0x%lx, got exit = 0x%x", rsvd_bits, exit_reason);
+
+    if (pxe == npt_get_pdpe() || pxe == npt_get_pml4e()) {
+        /*
+         * The guest's page tables will blow up on a bad PDPE/PML4E,
+         * before starting the final walk of the guest page.
+         */
+        pfec = 0x20000000full;
+    } else {
+        /* RSVD #NPF on final walk of guest page. */
+        pfec = 0x10000000dULL;
+
+        /* PFEC.FETCH=1 if NX=1 *or* SMEP=1. */
+        if ((cr4 & X86_CR4_SMEP) || (efer & EFER_NX))
+            pfec |= 0x10;
+
+    }
+
+    report(vmcb->control.exit_info_1 == pfec,
+           "Wanted PFEC = 0x%lx, got PFEC = %lx, PxE = 0x%lx.  "
+           "host.NX = %u, host.SMEP = %u, guest.NX = %u, guest.SMEP = %u",
+           pfec, vmcb->control.exit_info_1, *pxe,
+           !!(efer & EFER_NX), !!(cr4 & X86_CR4_SMEP),
+           !!(guest_efer & EFER_NX), !!(guest_cr4 & X86_CR4_SMEP));
+
+    *pxe = pxe_orig;
+}
+
+static void _svm_npt_rsvd_bits_test(u64 *pxe, u64 pxe_rsvd_bits,  u64 efer,
+                    ulong cr4, u64 guest_efer, ulong guest_cr4)
+{
+    u64 rsvd_bits;
+    int i;
+
+    /*
+     * RDTSC or RDRAND can sometimes fail to generate a valid reserved bits
+     */
+    if (!pxe_rsvd_bits) {
+        report_skip("svm_npt_rsvd_bits_test: Reserved bits are not valid");
+        return;
+    }
+
+    /*
+     * Test all combinations of guest/host EFER.NX and CR4.SMEP.  If host
+     * EFER.NX=0, use NX as the reserved bit, otherwise use the passed in
+     * @pxe_rsvd_bits.
+     */
+    for (i = 0; i < 16; i++) {
+        if (i & 1) {
+            rsvd_bits = pxe_rsvd_bits;
+            efer |= EFER_NX;
+        } else {
+            rsvd_bits = PT64_NX_MASK;
+            efer &= ~EFER_NX;
+        }
+        if (i & 2)
+            cr4 |= X86_CR4_SMEP;
+        else
+            cr4 &= ~X86_CR4_SMEP;
+        if (i & 4)
+            guest_efer |= EFER_NX;
+        else
+            guest_efer &= ~EFER_NX;
+        if (i & 8)
+            guest_cr4 |= X86_CR4_SMEP;
+        else
+            guest_cr4 &= ~X86_CR4_SMEP;
+
+        __svm_npt_rsvd_bits_test(pxe, rsvd_bits, efer, cr4,
+                     guest_efer, guest_cr4);
+    }
+}
+
+static u64 get_random_bits(u64 hi, u64 low)
+{
+    unsigned retry = 5;
+    u64 rsvd_bits = 0;
+
+    if (this_cpu_has(X86_FEATURE_RDRAND)) {
+        do {
+            rsvd_bits = (rdrand() << low) & GENMASK_ULL(hi, low);
+            retry--;
+        } while (!rsvd_bits && retry);
+    }
+
+    if (!rsvd_bits) {
+        retry = 5;
+        do {
+            rsvd_bits = (rdtsc() << low) & GENMASK_ULL(hi, low);
+            retry--;
+        } while (!rsvd_bits && retry);
+    }
+
+    return rsvd_bits;
+}
+
+static void svm_npt_rsvd_bits_test(void)
+{
+    u64   saved_efer, host_efer, sg_efer, guest_efer;
+    ulong saved_cr4,  host_cr4,  sg_cr4,  guest_cr4;
+
+    if (!npt_supported()) {
+        report_skip("NPT not supported");
+        return;
+    }
+
+    saved_efer = host_efer  = rdmsr(MSR_EFER);
+    saved_cr4  = host_cr4   = read_cr4();
+    sg_efer    = guest_efer = vmcb->save.efer;
+    sg_cr4     = guest_cr4  = vmcb->save.cr4;
+
+    test_set_guest(basic_guest_main);
+
+   /*
+    * 4k PTEs don't have reserved bits if MAXPHYADDR >= 52, just skip the
+    * sub-test.  The NX test is still valid, but the extra bit of coverage
+    * isn't worth the extra complexity.
+    */
+    if (cpuid_maxphyaddr() >= 52)
+        goto skip_pte_test;
+
+    _svm_npt_rsvd_bits_test(npt_get_pte((u64)basic_guest_main),
+                get_random_bits(51, cpuid_maxphyaddr()),
+                host_efer, host_cr4, guest_efer, guest_cr4);
+
+skip_pte_test:
+    _svm_npt_rsvd_bits_test(npt_get_pde((u64)basic_guest_main),
+                get_random_bits(20, 13) | PT_PAGE_SIZE_MASK,
+                host_efer, host_cr4, guest_efer, guest_cr4);
+
+    _svm_npt_rsvd_bits_test(npt_get_pdpe(),
+                PT_PAGE_SIZE_MASK |
+                    (this_cpu_has(X86_FEATURE_GBPAGES) ? get_random_bits(29, 13) : 0),
+                host_efer, host_cr4, guest_efer, guest_cr4);
+
+    _svm_npt_rsvd_bits_test(npt_get_pml4e(), BIT_ULL(8),
+                host_efer, host_cr4, guest_efer, guest_cr4);
+
+    wrmsr(MSR_EFER, saved_efer);
+    write_cr4(saved_cr4);
+    vmcb->save.efer = sg_efer;
+    vmcb->save.cr4  = sg_cr4;
+}
+
+int main(int ac, char **av)
+{
+    pteval_t opt_mask = 0;
+
+    __setup_vm(&opt_mask);
+    return run_svm_tests(ac, av);
+}
+
+#define TEST(name) { #name, .v2 = name }
+
+struct svm_test svm_tests[] = {
+    { "npt_nx", npt_supported, npt_nx_prepare,
+      default_prepare_gif_clear, null_test,
+      default_finished, npt_nx_check },
+    { "npt_np", npt_supported, npt_np_prepare,
+      default_prepare_gif_clear, npt_np_test,
+      default_finished, npt_np_check },
+    { "npt_us", npt_supported, npt_us_prepare,
+      default_prepare_gif_clear, npt_us_test,
+      default_finished, npt_us_check },
+    { "npt_rw", npt_supported, npt_rw_prepare,
+      default_prepare_gif_clear, npt_rw_test,
+      default_finished, npt_rw_check },
+    { "npt_rw_pfwalk", npt_supported, npt_rw_pfwalk_prepare,
+      default_prepare_gif_clear, null_test,
+      default_finished, npt_rw_pfwalk_check },
+    { "npt_l1mmio", npt_supported, npt_l1mmio_prepare,
+      default_prepare_gif_clear, npt_l1mmio_test,
+      default_finished, npt_l1mmio_check },
+    { "npt_rw_l1mmio", npt_supported, npt_rw_l1mmio_prepare,
+      default_prepare_gif_clear, npt_rw_l1mmio_test,
+      default_finished, npt_rw_l1mmio_check },
+    TEST(svm_npt_rsvd_bits_test)
+};
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 0707786..41980d9 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -10,11 +10,10 @@
 #include "isr.h"
 #include "apic.h"
 #include "delay.h"
+#include "vmalloc.h"
 
 #define SVM_EXIT_MAX_DR_INTERCEPT 0x3f
 
-static void *scratch_page;
-
 #define LATENCY_RUNS 1000000
 
 extern u16 cpu_online_count;
@@ -698,181 +697,6 @@ static bool sel_cr0_bug_check(struct svm_test *test)
     return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
 }
 
-static void npt_nx_prepare(struct svm_test *test)
-{
-    u64 *pte;
-
-    test->scratch = rdmsr(MSR_EFER);
-    wrmsr(MSR_EFER, test->scratch | EFER_NX);
-
-    /* Clear the guest's EFER.NX, it should not affect NPT behavior. */
-    vmcb->save.efer &= ~EFER_NX;
-
-    pte = npt_get_pte((u64)null_test);
-
-    *pte |= PT64_NX_MASK;
-}
-
-static bool npt_nx_check(struct svm_test *test)
-{
-    u64 *pte = npt_get_pte((u64)null_test);
-
-    wrmsr(MSR_EFER, test->scratch);
-
-    *pte &= ~PT64_NX_MASK;
-
-    return (vmcb->control.exit_code == SVM_EXIT_NPF)
-           && (vmcb->control.exit_info_1 == 0x100000015ULL);
-}
-
-static void npt_np_prepare(struct svm_test *test)
-{
-    u64 *pte;
-
-    scratch_page = alloc_page();
-    pte = npt_get_pte((u64)scratch_page);
-
-    *pte &= ~1ULL;
-}
-
-static void npt_np_test(struct svm_test *test)
-{
-    (void) *(volatile u64 *)scratch_page;
-}
-
-static bool npt_np_check(struct svm_test *test)
-{
-    u64 *pte = npt_get_pte((u64)scratch_page);
-
-    *pte |= 1ULL;
-
-    return (vmcb->control.exit_code == SVM_EXIT_NPF)
-           && (vmcb->control.exit_info_1 == 0x100000004ULL);
-}
-
-static void npt_us_prepare(struct svm_test *test)
-{
-    u64 *pte;
-
-    scratch_page = alloc_page();
-    pte = npt_get_pte((u64)scratch_page);
-
-    *pte &= ~(1ULL << 2);
-}
-
-static void npt_us_test(struct svm_test *test)
-{
-    (void) *(volatile u64 *)scratch_page;
-}
-
-static bool npt_us_check(struct svm_test *test)
-{
-    u64 *pte = npt_get_pte((u64)scratch_page);
-
-    *pte |= (1ULL << 2);
-
-    return (vmcb->control.exit_code == SVM_EXIT_NPF)
-           && (vmcb->control.exit_info_1 == 0x100000005ULL);
-}
-
-static void npt_rw_prepare(struct svm_test *test)
-{
-
-    u64 *pte;
-
-    pte = npt_get_pte(0x80000);
-
-    *pte &= ~(1ULL << 1);
-}
-
-static void npt_rw_test(struct svm_test *test)
-{
-    u64 *data = (void*)(0x80000);
-
-    *data = 0;
-}
-
-static bool npt_rw_check(struct svm_test *test)
-{
-    u64 *pte = npt_get_pte(0x80000);
-
-    *pte |= (1ULL << 1);
-
-    return (vmcb->control.exit_code == SVM_EXIT_NPF)
-           && (vmcb->control.exit_info_1 == 0x100000007ULL);
-}
-
-static void npt_rw_pfwalk_prepare(struct svm_test *test)
-{
-
-    u64 *pte;
-
-    pte = npt_get_pte(read_cr3());
-
-    *pte &= ~(1ULL << 1);
-}
-
-static bool npt_rw_pfwalk_check(struct svm_test *test)
-{
-    u64 *pte = npt_get_pte(read_cr3());
-
-    *pte |= (1ULL << 1);
-
-    return (vmcb->control.exit_code == SVM_EXIT_NPF)
-           && (vmcb->control.exit_info_1 == 0x200000007ULL)
-	   && (vmcb->control.exit_info_2 == read_cr3());
-}
-
-static void npt_l1mmio_prepare(struct svm_test *test)
-{
-}
-
-u32 nested_apic_version1;
-u32 nested_apic_version2;
-
-static void npt_l1mmio_test(struct svm_test *test)
-{
-    volatile u32 *data = (volatile void*)(0xfee00030UL);
-
-    nested_apic_version1 = *data;
-    nested_apic_version2 = *data;
-}
-
-static bool npt_l1mmio_check(struct svm_test *test)
-{
-    volatile u32 *data = (volatile void*)(0xfee00030);
-    u32 lvr = *data;
-
-    return nested_apic_version1 == lvr && nested_apic_version2 == lvr;
-}
-
-static void npt_rw_l1mmio_prepare(struct svm_test *test)
-{
-
-    u64 *pte;
-
-    pte = npt_get_pte(0xfee00080);
-
-    *pte &= ~(1ULL << 1);
-}
-
-static void npt_rw_l1mmio_test(struct svm_test *test)
-{
-    volatile u32 *data = (volatile void*)(0xfee00080);
-
-    *data = *data;
-}
-
-static bool npt_rw_l1mmio_check(struct svm_test *test)
-{
-    u64 *pte = npt_get_pte(0xfee00080);
-
-    *pte |= (1ULL << 1);
-
-    return (vmcb->control.exit_code == SVM_EXIT_NPF)
-           && (vmcb->control.exit_info_1 == 0x100000007ULL);
-}
-
 #define TSC_ADJUST_VALUE    (1ll << 32)
 #define TSC_OFFSET_VALUE    (~0ull << 48)
 static bool ok;
@@ -2604,173 +2428,9 @@ static void svm_test_singlestep(void)
 		vmcb->save.rip == (u64)&guest_end, "Test EFLAGS.TF on VMRUN: guest execution completion");
 }
 
-static void __svm_npt_rsvd_bits_test(u64 *pxe, u64 rsvd_bits, u64 efer,
-				     ulong cr4, u64 guest_efer, ulong guest_cr4)
-{
-	u64 pxe_orig = *pxe;
-	int exit_reason;
-	u64 pfec;
-
-	wrmsr(MSR_EFER, efer);
-	write_cr4(cr4);
-
-	vmcb->save.efer = guest_efer;
-	vmcb->save.cr4  = guest_cr4;
-
-	*pxe |= rsvd_bits;
-
-	exit_reason = svm_vmrun();
-
-	report(exit_reason == SVM_EXIT_NPF,
-	       "Wanted #NPF on rsvd bits = 0x%lx, got exit = 0x%x", rsvd_bits, exit_reason);
-
-	if (pxe == npt_get_pdpe() || pxe == npt_get_pml4e()) {
-		/*
-		 * The guest's page tables will blow up on a bad PDPE/PML4E,
-		 * before starting the final walk of the guest page.
-		 */
-		pfec = 0x20000000full;
-	} else {
-		/* RSVD #NPF on final walk of guest page. */
-		pfec = 0x10000000dULL;
-
-		/* PFEC.FETCH=1 if NX=1 *or* SMEP=1. */
-		if ((cr4 & X86_CR4_SMEP) || (efer & EFER_NX))
-			pfec |= 0x10;
-
-	}
-
-	report(vmcb->control.exit_info_1 == pfec,
-	       "Wanted PFEC = 0x%lx, got PFEC = %lx, PxE = 0x%lx.  "
-	       "host.NX = %u, host.SMEP = %u, guest.NX = %u, guest.SMEP = %u",
-	       pfec, vmcb->control.exit_info_1, *pxe,
-	       !!(efer & EFER_NX), !!(cr4 & X86_CR4_SMEP),
-	       !!(guest_efer & EFER_NX), !!(guest_cr4 & X86_CR4_SMEP));
-
-	*pxe = pxe_orig;
-}
-
-static void _svm_npt_rsvd_bits_test(u64 *pxe, u64 pxe_rsvd_bits,  u64 efer,
-				    ulong cr4, u64 guest_efer, ulong guest_cr4)
-{
-	u64 rsvd_bits;
-	int i;
-
-	/*
-	 * RDTSC or RDRAND can sometimes fail to generate a valid reserved bits
-	 */
-	if (!pxe_rsvd_bits) {
-		report_skip("svm_npt_rsvd_bits_test: Reserved bits are not valid");
-		return;
-	}
-
-	/*
-	 * Test all combinations of guest/host EFER.NX and CR4.SMEP.  If host
-	 * EFER.NX=0, use NX as the reserved bit, otherwise use the passed in
-	 * @pxe_rsvd_bits.
-	 */
-	for (i = 0; i < 16; i++) {
-		if (i & 1) {
-			rsvd_bits = pxe_rsvd_bits;
-			efer |= EFER_NX;
-		} else {
-			rsvd_bits = PT64_NX_MASK;
-			efer &= ~EFER_NX;
-		}
-		if (i & 2)
-			cr4 |= X86_CR4_SMEP;
-		else
-			cr4 &= ~X86_CR4_SMEP;
-		if (i & 4)
-			guest_efer |= EFER_NX;
-		else
-			guest_efer &= ~EFER_NX;
-		if (i & 8)
-			guest_cr4 |= X86_CR4_SMEP;
-		else
-			guest_cr4 &= ~X86_CR4_SMEP;
-
-		__svm_npt_rsvd_bits_test(pxe, rsvd_bits, efer, cr4,
-					 guest_efer, guest_cr4);
-	}
-}
-
-static u64 get_random_bits(u64 hi, u64 low)
-{
-	unsigned retry = 5;
-	u64 rsvd_bits = 0;
-
-	if (this_cpu_has(X86_FEATURE_RDRAND)) {
-		do {
-			rsvd_bits = (rdrand() << low) & GENMASK_ULL(hi, low);
-			retry--;
-		} while (!rsvd_bits && retry);
-	}
-
-	if (!rsvd_bits) {
-		retry = 5;
-		do {
-			rsvd_bits = (rdtsc() << low) & GENMASK_ULL(hi, low);
-			retry--;
-		} while (!rsvd_bits && retry);
-	}
-
-	return rsvd_bits;
-}
-
-
-static void svm_npt_rsvd_bits_test(void)
-{
-	u64   saved_efer, host_efer, sg_efer, guest_efer;
-	ulong saved_cr4,  host_cr4,  sg_cr4,  guest_cr4;
-
-	if (!npt_supported()) {
-		report_skip("NPT not supported");
-		return;
-	}
-
-	saved_efer = host_efer  = rdmsr(MSR_EFER);
-	saved_cr4  = host_cr4   = read_cr4();
-	sg_efer    = guest_efer = vmcb->save.efer;
-	sg_cr4     = guest_cr4  = vmcb->save.cr4;
-
-	test_set_guest(basic_guest_main);
-
-	/*
-	 * 4k PTEs don't have reserved bits if MAXPHYADDR >= 52, just skip the
-	 * sub-test.  The NX test is still valid, but the extra bit of coverage
-	 * isn't worth the extra complexity.
-	 */
-	if (cpuid_maxphyaddr() >= 52)
-		goto skip_pte_test;
-
-	_svm_npt_rsvd_bits_test(npt_get_pte((u64)basic_guest_main),
-				get_random_bits(51, cpuid_maxphyaddr()),
-				host_efer, host_cr4, guest_efer, guest_cr4);
-
-skip_pte_test:
-	_svm_npt_rsvd_bits_test(npt_get_pde((u64)basic_guest_main),
-				get_random_bits(20, 13) | PT_PAGE_SIZE_MASK,
-				host_efer, host_cr4, guest_efer, guest_cr4);
-
-	_svm_npt_rsvd_bits_test(npt_get_pdpe(),
-				PT_PAGE_SIZE_MASK |
-					(this_cpu_has(X86_FEATURE_GBPAGES) ? get_random_bits(29, 13) : 0),
-				host_efer, host_cr4, guest_efer, guest_cr4);
-
-	_svm_npt_rsvd_bits_test(npt_get_pml4e(), BIT_ULL(8),
-				host_efer, host_cr4, guest_efer, guest_cr4);
-
-	wrmsr(MSR_EFER, saved_efer);
-	write_cr4(saved_cr4);
-	vmcb->save.efer = sg_efer;
-	vmcb->save.cr4  = sg_cr4;
-}
-
 static bool volatile svm_errata_reproduced = false;
 static unsigned long volatile physical = 0;
 
-
 /*
  *
  * Test the following errata:
@@ -3074,6 +2734,14 @@ static void svm_nm_test(void)
         "fnop with CR0.TS and CR0.EM unset no #NM excpetion");
 }
 
+int main(int ac, char **av)
+{
+    pteval_t opt_mask = 0;
+
+    __setup_vm(&opt_mask);
+    return run_svm_tests(ac, av);
+}
+
 struct svm_test svm_tests[] = {
     { "null", default_supported, default_prepare,
       default_prepare_gif_clear, null_test,
@@ -3117,27 +2785,6 @@ struct svm_test svm_tests[] = {
     { "sel_cr0_bug", default_supported, sel_cr0_bug_prepare,
       default_prepare_gif_clear, sel_cr0_bug_test,
        sel_cr0_bug_finished, sel_cr0_bug_check },
-    { "npt_nx", npt_supported, npt_nx_prepare,
-      default_prepare_gif_clear, null_test,
-      default_finished, npt_nx_check },
-    { "npt_np", npt_supported, npt_np_prepare,
-      default_prepare_gif_clear, npt_np_test,
-      default_finished, npt_np_check },
-    { "npt_us", npt_supported, npt_us_prepare,
-      default_prepare_gif_clear, npt_us_test,
-      default_finished, npt_us_check },
-    { "npt_rw", npt_supported, npt_rw_prepare,
-      default_prepare_gif_clear, npt_rw_test,
-      default_finished, npt_rw_check },
-    { "npt_rw_pfwalk", npt_supported, npt_rw_pfwalk_prepare,
-      default_prepare_gif_clear, null_test,
-      default_finished, npt_rw_pfwalk_check },
-    { "npt_l1mmio", npt_supported, npt_l1mmio_prepare,
-      default_prepare_gif_clear, npt_l1mmio_test,
-      default_finished, npt_l1mmio_check },
-    { "npt_rw_l1mmio", npt_supported, npt_rw_l1mmio_prepare,
-      default_prepare_gif_clear, npt_rw_l1mmio_test,
-      default_finished, npt_rw_l1mmio_check },
     { "tsc_adjust", tsc_adjust_supported, tsc_adjust_prepare,
       default_prepare_gif_clear, tsc_adjust_test,
       default_finished, tsc_adjust_check },
@@ -3189,7 +2836,6 @@ struct svm_test svm_tests[] = {
       vgif_check },
     TEST(svm_cr4_osxsave_test),
     TEST(svm_guest_state_test),
-    TEST(svm_npt_rsvd_bits_test),
     TEST(svm_vmrun_errata_test),
     TEST(svm_vmload_vmsave),
     TEST(svm_test_singlestep),
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index 9a70ba3..0d14721 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -240,6 +240,12 @@ smp = 2
 extra_params = -cpu max,+svm -m 4g
 arch = x86_64
 
+[svm_npt]
+file = svm_npt.flat
+smp = 2
+extra_params = -cpu max,+svm -m 4g
+arch = x86_64
+
 [taskswitch]
 file = taskswitch.flat
 arch = i386
-- 
2.30.2



* [kvm-unit-tests PATCH v2 3/4] x86: nSVM: Allow nSVM tests run with PT_USER_MASK enabled
  2022-03-24  5:30 [kvm-unit-tests PATCH v2 0/4] Move npt test cases and NPT code improvements Manali Shukla
  2022-03-24  5:30 ` [kvm-unit-tests PATCH v2 1/4] x86: nSVM: Move common functionality of the main() to helper run_svm_tests Manali Shukla
  2022-03-24  5:30 ` [kvm-unit-tests PATCH v2 2/4] x86: nSVM: Move all nNPT test cases from svm_tests.c to a separate file Manali Shukla
@ 2022-03-24  5:30 ` Manali Shukla
  2022-03-24  5:30 ` [kvm-unit-tests PATCH v2 4/4] x86: nSVM: Build up the nested page table dynamically Manali Shukla
  2022-03-24 15:58 ` [kvm-unit-tests PATCH v2 0/4] Move npt test cases and NPT code improvements Maxim Levitsky
  4 siblings, 0 replies; 10+ messages in thread
From: Manali Shukla @ 2022-03-24  5:30 UTC (permalink / raw)
  To: pbonzini, seanjc; +Cc: kvm

Commit 916635a813e975600335c6c47250881b7a328971
("nSVM: Add test for NPT reserved bit and #NPF error code behavior")
clears PT_USER_MASK for all SVM test cases. Any test that requires
usermode access will fail after this commit.

The above-mentioned commit changed main() in a way that made the
other nSVM tests "incompatible" with usermode.

The solution is to set PT_USER_MASK on all PTEs, so that KUT
builds the other tests with usermode access allowed.
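
Conceptually, the change boils down to the following (a simplified sketch,
assuming setup_vm() applies the library's default PTE mask, which includes
PT_USER_MASK; the exact lib/x86/vm.c behavior should be checked):

	/* Before: omit PT_USER_MASK so host.CR4.SMEP=1 can be tested. */
	pteval_t opt_mask = 0;
	__setup_vm(&opt_mask);

	/* After: library defaults, which include PT_USER_MASK. */
	setup_vm();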

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Manali Shukla <manali.shukla@amd.com>
---
 x86/svm_tests.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 41980d9..fce466e 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -10,7 +10,6 @@
 #include "isr.h"
 #include "apic.h"
 #include "delay.h"
-#include "vmalloc.h"
 
 #define SVM_EXIT_MAX_DR_INTERCEPT 0x3f
 
@@ -2736,9 +2735,7 @@ static void svm_nm_test(void)
 
 int main(int ac, char **av)
 {
-    pteval_t opt_mask = 0;
-
-    __setup_vm(&opt_mask);
+    setup_vm();
     return run_svm_tests(ac, av);
 }
 
-- 
2.30.2



* [kvm-unit-tests PATCH v2 4/4] x86: nSVM: Build up the nested page table dynamically
  2022-03-24  5:30 [kvm-unit-tests PATCH v2 0/4] Move npt test cases and NPT code improvements Manali Shukla
                   ` (2 preceding siblings ...)
  2022-03-24  5:30 ` [kvm-unit-tests PATCH v2 3/4] x86: nSVM: Allow nSVM tests run with PT_USER_MASK enabled Manali Shukla
@ 2022-03-24  5:30 ` Manali Shukla
  2022-04-13 21:33   ` Sean Christopherson
  2022-03-24 15:58 ` [kvm-unit-tests PATCH v2 0/4] Move npt test cases and NPT code improvements Maxim Levitsky
  4 siblings, 1 reply; 10+ messages in thread
From: Manali Shukla @ 2022-03-24  5:30 UTC (permalink / raw)
  To: pbonzini, seanjc; +Cc: kvm

The current implementation of the nested page table builds the
page table statically, with 2048 PTEs and one pml4 entry. That
is why the current implementation is not extensible.

The new implementation builds the page table dynamically, based
on the RAM size of the VM, which enables us to have separate
memory ranges to test various NPT test cases.
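
As a hypothetical example of what separate ranges enable (illustrative
only; test_gpa is a made-up address, not part of this series):

	/* Map one private test page read/write in the nested page table. */
	u64 test_gpa = 1ull << 32;	/* hypothetical unused GPA */

	install_npt(npt_get_pml4e(), test_gpa, test_gpa,
		    NPT_PRESENT | NPT_RW_ACCESS);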

Signed-off-by: Manali Shukla <manali.shukla@amd.com>
---
 x86/svm.c     | 163 ++++++++++++++++++++++++++++++++++----------------
 x86/svm.h     |  17 +++++-
 x86/svm_npt.c |   4 +-
 3 files changed, 130 insertions(+), 54 deletions(-)

diff --git a/x86/svm.c b/x86/svm.c
index d0d523a..67dbe31 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -8,6 +8,7 @@
 #include "desc.h"
 #include "msr.h"
 #include "vm.h"
+#include "fwcfg.h"
 #include "smp.h"
 #include "types.h"
 #include "alloc_page.h"
@@ -16,38 +17,67 @@
 #include "vmalloc.h"
 
 /* for the nested page table*/
-u64 *pte[2048];
-u64 *pde[4];
-u64 *pdpe;
 u64 *pml4e;
 
 struct vmcb *vmcb;
 
-u64 *npt_get_pte(u64 address)
+u64* get_npt_pte(u64 *pml4, u64 guest_addr, int level)
 {
-	int i1, i2;
+    int l;
+    u64 *pt = pml4, iter_pte;
+    unsigned offset;
+
+    assert(level >= 1 && level <= 4);
+
+    for(l = NPT_PAGE_LEVEL; ; --l) {
+        offset = (guest_addr >> (((l - 1) * NPT_PGDIR_WIDTH) + 12))
+                 & NPT_PGDIR_MASK;
+        iter_pte = pt[offset];
+        if (l == level)
+            break;
+        if (!(iter_pte & NPT_PRESENT))
+            return false;
+        pt = (u64*)(iter_pte & PT_ADDR_MASK);
+    }
+    offset = (guest_addr >> (((l - 1) * NPT_PGDIR_WIDTH) + 12))
+             & NPT_PGDIR_MASK;
+    return  &pt[offset];
+}
 
-	address >>= 12;
-	i1 = (address >> 9) & 0x7ff;
-	i2 = address & 0x1ff;
+void set_npt_pte(u64 *pml4, u64 guest_addr,
+        int level, u64  pte_val)
+{
+    int l;
+    unsigned long *pt = pml4;
+    unsigned offset;
+
+    for (l = NPT_PAGE_LEVEL; ; --l) {
+        offset = (guest_addr >> (((l - 1) * NPT_PGDIR_WIDTH) + 12))
+                 & NPT_PGDIR_MASK;
+        if (l == level)
+            break;
+        if (!(pt[offset] & NPT_PRESENT))
+            return;
+        pt = (u64*)(pt[offset] & PT_ADDR_MASK);
+    }
+    offset = (guest_addr >> (((l - 1) * NPT_PGDIR_WIDTH) + 12))
+              & NPT_PGDIR_MASK;
+    pt[offset] = pte_val;
+}
 
-	return &pte[i1][i2];
+u64 *npt_get_pte(u64 address)
+{
+    return get_npt_pte(npt_get_pml4e(), address, 1);
 }
 
 u64 *npt_get_pde(u64 address)
 {
-	int i1, i2;
-
-	address >>= 21;
-	i1 = (address >> 9) & 0x3;
-	i2 = address & 0x1ff;
-
-	return &pde[i1][i2];
+    return get_npt_pte(npt_get_pml4e(), address, 2);
 }
 
-u64 *npt_get_pdpe(void)
+u64 *npt_get_pdpe(u64 address)
 {
-	return pdpe;
+	return get_npt_pte(npt_get_pml4e(), address, 3);
 }
 
 u64 *npt_get_pml4e(void)
@@ -309,11 +339,72 @@ static void set_additional_vcpu_msr(void *msr_efer)
 	wrmsr(MSR_EFER, (ulong)msr_efer | EFER_SVME);
 }
 
+static void install_npt_entry(u64 *pml4,
+        int pte_level,
+        u64 guest_addr,
+        u64 pte,
+        u64 *pt_page)
+{
+    int level;
+    unsigned long *pt = pml4;
+    unsigned offset;
+
+    for (level = NPT_PAGE_LEVEL; level > pte_level; --level) {
+        offset = (guest_addr >> (((level - 1) * NPT_PGDIR_WIDTH) + 12))
+                  & NPT_PGDIR_MASK;
+        if (!(pt[offset] & PT_PRESENT_MASK)) {
+            unsigned long *new_pt = pt_page;
+            if (!new_pt) {
+                new_pt = alloc_page();
+            } else
+                pt_page = 0;
+            memset(new_pt, 0, PAGE_SIZE);
+
+            pt[offset] = virt_to_phys(new_pt) | NPT_USER_ACCESS |
+                         NPT_ACCESS_BIT | NPT_PRESENT | NPT_RW_ACCESS;
+        }
+        pt = phys_to_virt(pt[offset] & PT_ADDR_MASK);
+    }
+    offset = (guest_addr >> (((level - 1) * NPT_PGDIR_WIDTH) + 12))
+              & NPT_PGDIR_MASK;
+    pt[offset] = pte | NPT_USER_ACCESS | NPT_ACCESS_BIT | NPT_DIRTY_BIT;
+}
+
+void install_npt(u64 *pml4, u64 phys, u64 guest_addr,
+        u64 perm)
+{
+    install_npt_entry(pml4, 1, guest_addr,
+            (phys & PAGE_MASK) | perm, 0);
+}
+
+static void setup_npt_range(u64 *pml4, u64 start,
+        u64 len, u64 perm)
+{
+    u64 phys = start;
+    u64 max = (u64)len + (u64)start;
+
+    while (phys + PAGE_SIZE <= max) {
+        install_npt(pml4, phys, phys, perm);
+        phys += PAGE_SIZE;
+    }
+}
+
+void setup_npt(void) {
+    u64 end_of_memory;
+    pml4e = alloc_page();
+
+    end_of_memory = fwcfg_get_u64(FW_CFG_RAM_SIZE);
+    if (end_of_memory < (1ul << 32))
+        end_of_memory = (1ul << 32);
+
+    setup_npt_range(pml4e, 0, end_of_memory,
+            NPT_PRESENT | NPT_RW_ACCESS);
+}
+
 static void setup_svm(void)
 {
 	void *hsave = alloc_page();
-	u64 *page, address;
-	int i,j;
+    int i;
 
 	wrmsr(MSR_VM_HSAVE_PA, virt_to_phys(hsave));
 	wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
@@ -335,37 +426,7 @@ static void setup_svm(void)
 	* Build the page-table bottom-up and map everything with 4k
 	* pages to get enough granularity for the NPT unit-tests.
 	*/
-
-	address = 0;
-
-	/* PTE level */
-	for (i = 0; i < 2048; ++i) {
-		page = alloc_page();
-
-		for (j = 0; j < 512; ++j, address += 4096)
-	    		page[j] = address | 0x067ULL;
-
-		pte[i] = page;
-	}
-
-	/* PDE level */
-	for (i = 0; i < 4; ++i) {
-		page = alloc_page();
-
-	for (j = 0; j < 512; ++j)
-	    page[j] = (u64)pte[(i * 512) + j] | 0x027ULL;
-
-		pde[i] = page;
-	}
-
-	/* PDPe level */
-	pdpe   = alloc_page();
-	for (i = 0; i < 4; ++i)
-		pdpe[i] = ((u64)(pde[i])) | 0x27;
-
-	/* PML4e level */
-	pml4e    = alloc_page();
-	pml4e[0] = ((u64)pdpe) | 0x27;
+    setup_npt();
 }
 
 int matched;
diff --git a/x86/svm.h b/x86/svm.h
index 9ab3aa5..7815f56 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -147,6 +147,17 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 #define SVM_VM_CR_SVM_LOCK_MASK 0x0008ULL
 #define SVM_VM_CR_SVM_DIS_MASK  0x0010ULL
 
+#define NPT_PAGE_LEVEL      4
+#define NPT_PGDIR_WIDTH     9
+#define NPT_PGDIR_MASK      511
+
+#define NPT_PRESENT         (1ul << 0)
+#define NPT_RW_ACCESS       (1ul << 1)
+#define NPT_USER_ACCESS     (1ul << 2)
+#define NPT_ACCESS_BIT      (1ul << 5)
+#define NPT_DIRTY_BIT       (1ul << 6)
+#define NPT_NX_ACCESS       (1ul << 63)
+
 struct __attribute__ ((__packed__)) vmcb_seg {
 	u16 selector;
 	u16 attrib;
@@ -401,7 +412,7 @@ typedef void (*test_guest_func)(struct svm_test *);
 int run_svm_tests(int ac, char **av);
 u64 *npt_get_pte(u64 address);
 u64 *npt_get_pde(u64 address);
-u64 *npt_get_pdpe(void);
+u64 *npt_get_pdpe(u64 address);
 u64 *npt_get_pml4e(void);
 bool smp_supported(void);
 bool default_supported(void);
@@ -418,7 +429,11 @@ struct regs get_regs(void);
 void vmmcall(void);
 int __svm_vmrun(u64 rip);
 int svm_vmrun(void);
+void setup_npt(void);
 void test_set_guest(test_guest_func func);
+void install_npt(u64 *pml4, u64 phys, u64 guest_addr, u64 perm);
+u64* get_npt_pte(u64 *pml4, u64 guest_addr, int level);
+void set_npt_pte(u64 *pml4, u64 guest_addr, int level, u64  pte_val);
 
 extern struct vmcb *vmcb;
 extern struct svm_test svm_tests[];
diff --git a/x86/svm_npt.c b/x86/svm_npt.c
index 4f80d9a..4f95ae0 100644
--- a/x86/svm_npt.c
+++ b/x86/svm_npt.c
@@ -208,7 +208,7 @@ static void __svm_npt_rsvd_bits_test(u64 *pxe, u64 rsvd_bits, u64 efer,
     report(exit_reason == SVM_EXIT_NPF,
            "Wanted #NPF on rsvd bits = 0x%lx, got exit = 0x%x", rsvd_bits, exit_reason);
 
-    if (pxe == npt_get_pdpe() || pxe == npt_get_pml4e()) {
+    if (pxe == npt_get_pdpe((u64)basic_guest_main) || pxe == npt_get_pml4e()) {
         /*
          * The guest's page tables will blow up on a bad PDPE/PML4E,
          * before starting the final walk of the guest page.
@@ -336,7 +336,7 @@ skip_pte_test:
                 get_random_bits(20, 13) | PT_PAGE_SIZE_MASK,
                 host_efer, host_cr4, guest_efer, guest_cr4);
 
-    _svm_npt_rsvd_bits_test(npt_get_pdpe(),
+    _svm_npt_rsvd_bits_test(npt_get_pdpe((u64)basic_guest_main),
                 PT_PAGE_SIZE_MASK |
                     (this_cpu_has(X86_FEATURE_GBPAGES) ? get_random_bits(29, 13) : 0),
                 host_efer, host_cr4, guest_efer, guest_cr4);
-- 
2.30.2



* Re: [kvm-unit-tests PATCH v2 0/4] Move npt test cases and NPT code improvements
  2022-03-24  5:30 [kvm-unit-tests PATCH v2 0/4] Move npt test cases and NPT code improvements Manali Shukla
                   ` (3 preceding siblings ...)
  2022-03-24  5:30 ` [kvm-unit-tests PATCH v2 4/4] x86: nSVM: Build up the nested page table dynamically Manali Shukla
@ 2022-03-24 15:58 ` Maxim Levitsky
  4 siblings, 0 replies; 10+ messages in thread
From: Maxim Levitsky @ 2022-03-24 15:58 UTC (permalink / raw)
  To: Manali Shukla, pbonzini, seanjc; +Cc: kvm

On Thu, 2022-03-24 at 05:30 +0000, Manali Shukla wrote:
> If __setup_vm() is changed to setup_vm(), KUT will build tests with PT_USER_MASK set on all
> PTEs. It is better to move the nNPT tests to their own file so that the other tests don't
> need to fiddle with page tables midway through.
> 
> The quick approach is to turn the current main() into a small helper, minus its call
> to __setup_vm().
> 
> The current implementation of the nested page table builds the page table statically
> with 2048 PTEs and one pml4 entry. With the newly implemented routine, the nested page
> table is built dynamically based on the RAM size of the VM, which enables us to have
> separate memory ranges to test various NPT test cases.
> 
> Based on this implementation, minimal changes were required in the following existing
> APIs: npt_get_pde(), npt_get_pte(), npt_get_pdpe().
> 
> v1 -> v2:
> Added a new patch to build the nested page table dynamically, and made the minimal
> changes required to adapt it to the old test cases.
> 
> There are four patches in this series:
> 1) Turned the current main() into a helper function, minus setup_vm().
> 2) Moved all nNPT test cases from svm_tests.c to svm_npt.c.
> 3) Enabled PT_USER_MASK for all nSVM test cases other than the nNPT tests.
> 4) Implemented a routine to build up the nested page table dynamically.
> 
> Manali Shukla (4):
>   x86: nSVM: Move common functionality of the main() to helper
>     run_svm_tests
>   x86: nSVM: Move all nNPT test cases from svm_tests.c to a separate
>     file.
>   x86: nSVM: Allow nSVM tests run with PT_USER_MASK enabled
>   x86: nSVM: Build up the nested page table dynamically
> 
>  x86/Makefile.common |   2 +
>  x86/Makefile.x86_64 |   2 +
>  x86/svm.c           | 169 ++++++++++++-------
>  x86/svm.h           |  18 ++-
>  x86/svm_npt.c       | 386 ++++++++++++++++++++++++++++++++++++++++++++
>  x86/svm_tests.c     | 369 +-----------------------------------------
>  6 files changed, 526 insertions(+), 420 deletions(-)
>  create mode 100644 x86/svm_npt.c
> 

Yesterday I was prototyping something similar for my use case.
 
I would like to have a mini SVM library which can be called from any test,
and in particular from a test which is mostly not SVM, but sometimes
one of its vCPUs enters a nested guest.

(The test in question sends lots of IPIs between vCPUs, and sometimes
it likes a vCPU to be running a nested guest, to test that this works.)

I'll see if I can finish this. Meanwhile, these patches do look good to me.
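
Hypothetically, such a library might expose something along these lines
(purely illustrative; these names are invented and not existing code):

	/* Hypothetical mini-library API: */
	void svm_lib_init(void);                     /* EFER.SVME, hsave, NPT */
	int  svm_enter_guest(test_guest_func guest); /* one VMRUN round trip */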
 
Best regards,
	Maxim Levitsky
 
 
 



* Re: [kvm-unit-tests PATCH v2 1/4] x86: nSVM: Move common functionality of the main() to helper run_svm_tests
  2022-03-24  5:30 ` [kvm-unit-tests PATCH v2 1/4] x86: nSVM: Move common functionality of the main() to helper run_svm_tests Manali Shukla
@ 2022-04-13 20:28   ` Sean Christopherson
  2022-04-14 16:42     ` Shukla, Manali
  0 siblings, 1 reply; 10+ messages in thread
From: Sean Christopherson @ 2022-04-13 20:28 UTC (permalink / raw)
  To: Manali Shukla; +Cc: pbonzini, kvm

On Thu, Mar 24, 2022, Manali Shukla wrote:
> nSVM tests are "incompatible" with usermode due to the
> __setup_vm() call in the main() function.
> 
> If __setup_vm() is replaced with setup_vm() in main(), KUT
> will build the test with PT_USER_MASK set on all PTEs.
> 
> nNPT tests will be moved to their own file so that the tests
> don't need to fiddle with page tables midway through.
> 
> The quick and dirty approach would be to turn the current main()
> into a small helper, minus its call to __setup_vm(), and call the
> helper function run_svm_tests() from main().
> 
> No functional change intended.
> 
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Manali Shukla <manali.shukla@amd.com>
> ---
>  x86/svm.c | 14 +++++++++-----
>  x86/svm.h |  1 +
>  2 files changed, 10 insertions(+), 5 deletions(-)
> 
> diff --git a/x86/svm.c b/x86/svm.c
> index 3f94b2a..e93e780 100644
> --- a/x86/svm.c
> +++ b/x86/svm.c
> @@ -406,17 +406,13 @@ test_wanted(const char *name, char *filters[], int filter_count)
>          }
>  }
>  
> -int main(int ac, char **av)
> +int run_svm_tests(int ac, char **av)
>  {
> -	/* Omit PT_USER_MASK to allow tested host.CR4.SMEP=1. */
> -	pteval_t opt_mask = 0;
>  	int i = 0;
>  
>  	ac--;
>  	av++;
>  
> -	__setup_vm(&opt_mask);
> -
>  	if (!this_cpu_has(X86_FEATURE_SVM)) {
>  		printf("SVM not availble\n");
>  		return report_summary();
> @@ -453,3 +449,11 @@ int main(int ac, char **av)
>  
>  	return report_summary();
>  }
> +
> +int main(int ac, char **av)
> +{
> +    pteval_t opt_mask = 0;

Please use tabs, not spaces.  Looks like this file is an unholy mess of tabs and
spaces.  And since we're ripping this file apart, let's take the opportunity to
clean it up.  How about after moving code to svm_npt.c, go through and replace
all spaces with tabs and fix up indentation as appropriate in this file?

> +
> +    __setup_vm(&opt_mask);
> +    return run_svm_tests(ac, av);
> +}
> diff --git a/x86/svm.h b/x86/svm.h
> index f74b13a..9ab3aa5 100644
> --- a/x86/svm.h
> +++ b/x86/svm.h
> @@ -398,6 +398,7 @@ struct regs {
>  
>  typedef void (*test_guest_func)(struct svm_test *);
>  
> +int run_svm_tests(int ac, char **av);
>  u64 *npt_get_pte(u64 address);
>  u64 *npt_get_pde(u64 address);
>  u64 *npt_get_pdpe(void);
> -- 
> 2.30.2
> 


* Re: [kvm-unit-tests PATCH v2 4/4] x86: nSVM: Build up the nested page table dynamically
  2022-03-24  5:30 ` [kvm-unit-tests PATCH v2 4/4] x86: nSVM: Build up the nested page table dynamically Manali Shukla
@ 2022-04-13 21:33   ` Sean Christopherson
  2022-04-14 16:39     ` Shukla, Manali
  0 siblings, 1 reply; 10+ messages in thread
From: Sean Christopherson @ 2022-04-13 21:33 UTC (permalink / raw)
  To: Manali Shukla; +Cc: pbonzini, kvm

On Thu, Mar 24, 2022, Manali Shukla wrote:
> The current implementation of the nested page table builds the
> page table statically, with 2048 PTEs and one pml4 entry. That
> is why the current implementation is not extensible.
> 
> The new implementation builds the page table dynamically, based
> on the RAM size of the VM, which enables us to have separate
> memory ranges to test various NPT test cases.
> 
> Signed-off-by: Manali Shukla <manali.shukla@amd.com>
> ---
>  x86/svm.c     | 163 ++++++++++++++++++++++++++++++++++----------------

Ok, so I got fairly far into reviewing this (see below, but it can be ignored)
before realizing that all this new code is nearly identical to what's in lib/x86/vm.c.
E.g. find_pte_level() and install_pte() can probably be used almost verbatim.

Instead of duplicating code, can you extend vm.c as necessary?  It might not
even require any changes.  I'll happily clean up vm.c in the future, e.g. to fix
the misleading nomenclature and open coded horrors, but for your purposes I think
you should be able to get away with a bare minimum of changes.
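
For example, a minimal sketch of the reuse, assuming lib/x86/vm.c's install_pte()
keeps its current signature (an assumption; check the exact prototype in vm.h):

	/*
	 * Sketch: identity-map [start, start + len) with 4k pages by
	 * reusing lib/x86/vm.c's install_pte() instead of a hand-rolled
	 * page-table walker.
	 */
	static void setup_npt_range(pgd_t *pml4, u64 start, u64 len, u64 perm)
	{
		u64 gpa;

		for (gpa = start; gpa + PAGE_SIZE <= start + len; gpa += PAGE_SIZE)
			install_pte(pml4, 1, (void *)gpa, gpa | perm, NULL);
	}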

>  x86/svm.h     |  17 +++++-
>  x86/svm_npt.c |   4 +-
>  3 files changed, 130 insertions(+), 54 deletions(-)
> 
> diff --git a/x86/svm.c b/x86/svm.c
> index d0d523a..67dbe31 100644
> --- a/x86/svm.c
> +++ b/x86/svm.c
> @@ -8,6 +8,7 @@
>  #include "desc.h"
>  #include "msr.h"
>  #include "vm.h"
> +#include "fwcfg.h"
>  #include "smp.h"
>  #include "types.h"
>  #include "alloc_page.h"
> @@ -16,38 +17,67 @@
>  #include "vmalloc.h"
>  
>  /* for the nested page table*/
> -u64 *pte[2048];
> -u64 *pde[4];
> -u64 *pdpe;
>  u64 *pml4e;
>  
>  struct vmcb *vmcb;
>  
> -u64 *npt_get_pte(u64 address)
> +u64* get_npt_pte(u64 *pml4,

Heh, the usual way to handle wrappers is to add underscores, i.e.

u64 *npt_get_pte(u64 address)
{
    return __npt_get_pte(npt_get_pml4e(), address, 1);
}

swapping the order just results in namespacing weirdness and doesn't convey to the
reader that this is an "inner" helper.

> u64 guest_addr, int level)

Assuming guest_addr is a gpa, call it gpa to avoid ambiguity over virtual vs.
physical.

>  {
> -	int i1, i2;
> +    int l;
> +    u64 *pt = pml4, iter_pte;

Please put pointers and non-pointers on separate lines.  And just "pte" for
the tmp, it's not actually used as an iterator.  And with that, I have a slight
preference for page_table over pt so that it's not mistaken for pte.

> +    unsigned offset;

No bare unsigned please.  And "offset" is the wrong terminology, "index" or "idx"
is preferable.  An offset is usually an offset in bytes; this indexes into a u64
array.

Ugh, looks like that awful name comes from PGDIR_OFFSET in lib/x86/asm/page.h.
The offset, at least in Intel SDM terminology, is specifically the last N:0 bits
of the virtual address (or guest physical) that are the offset into the physical
page, e.g. 11:0 for a 4kb page, 20:0 for a 2mb page.

> +
> +    assert(level >= 1 && level <= 4);

The upper bound should be NPT_PAGE_LEVEL, or root_level (see below).

> +    for(l = NPT_PAGE_LEVEL; ; --l) {

Nit, need a space after "for".

Also, can you plumb in the root level?  E.g. have npt_get_pte() hardcode the
root in this case.  At some point this will hopefully support 5-level NPT, at
which point hardcoding the root will require updating more code than should be
necessary.

> +        offset = (guest_addr >> (((l - 1) * NPT_PGDIR_WIDTH) + 12))
> +                 & NPT_PGDIR_MASK;

Not your code (I think), but NPT_PGDIR_MASK is an odd name since it's common to
all levels.  The easiest thing would be to loosely follow KVM.  Actually, I think it
makes sense to grab the PT64_ stuff from KVM

#define PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
#define PT64_LEVEL_BITS 9
#define PT64_LEVEL_SHIFT(level) \
		(PAGE_SHIFT + (level - 1) * PT64_LEVEL_BITS)
#define PT64_INDEX(address, level)\
	(((address) >> PT64_LEVEL_SHIFT(level)) & ((1 << PT64_LEVEL_BITS) - 1))


and then use those instead of having dedicated NPT_* defines.  That makes it more
obvious that (a) SVM/NPT tests are 64-bit only and (b) there's nothing special
about NPT with respect to "legacy" 64-bit paging.

That will provide a nice macro, PT64_INDEX, to replace the open coded calculations.
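
For illustration, a rough sketch of the lookup helper rewritten with those macros,
with the root level plumbed in per the earlier comment (a sketch only, not tested):

	static u64 *__npt_get_pte(u64 *pml4, u64 gpa, int root_level, int level)
	{
		u64 *page_table = pml4;
		int l;

		assert(level >= 1 && level <= root_level);

		for (l = root_level; l > level; --l) {
			u64 pte = page_table[PT64_INDEX(gpa, l)];

			assert(pte & NPT_PRESENT);
			/* Tables are identity-mapped, as in the existing code. */
			page_table = (u64 *)(pte & PT64_BASE_ADDR_MASK);
		}

		return &page_table[PT64_INDEX(gpa, level)];
	}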

> +        if (l == level)
> +            break;
> +        if (!(iter_pte & NPT_PRESENT))
> +            return false;

Return "false" works, but it's all kinds of wrong.  This should either assert or
return NULL.

> +        pt = (u64*)(iter_pte & PT_ADDR_MASK);
> +    }
> +    offset = (guest_addr >> (((l - 1) * NPT_PGDIR_WIDTH) + 12))
> +             & NPT_PGDIR_MASK;

Hmm, this is unnecessary because the for-loop can't terminate on its own, it
can only exit on "l == level", and offset is already correct in that case.


* Re: [kvm-unit-tests PATCH v2 4/4] x86: nSVM: Build up the nested page table dynamically
  2022-04-13 21:33   ` Sean Christopherson
@ 2022-04-14 16:39     ` Shukla, Manali
  0 siblings, 0 replies; 10+ messages in thread
From: Shukla, Manali @ 2022-04-14 16:39 UTC (permalink / raw)
  To: Sean Christopherson, Manali Shukla; +Cc: pbonzini, kvm



On 4/14/2022 3:03 AM, Sean Christopherson wrote:
> On Thu, Mar 24, 2022, Manali Shukla wrote:
>> The current implementation of the nested page table builds the
>> page table statically, with 2048 PTEs and one pml4 entry. That
>> is why the current implementation is not extensible.
>>
>> The new implementation builds the page table dynamically, based
>> on the RAM size of the VM, which enables us to have separate
>> memory ranges to test various NPT test cases.
>>
>> Signed-off-by: Manali Shukla <manali.shukla@amd.com>
>> ---
>>  x86/svm.c     | 163 ++++++++++++++++++++++++++++++++++----------------
> 
> Ok, so I got fairly far into reviewing this (see below, but it can be ignored)
> before realizing that all this new code is nearly identical to what's in lib/x86/vm.c.
> E.g. find_pte_level() and install_pte() can probably be used almost verbatim.
> 
> Instead of duplicating code, can you extend vm.c as necessary?  It might not
> even require any changes.  I'll happily clean up vm.c in the future, e.g. to fix
> the misleading nomenclature and open coded horrors, but for your purposes I think
> you should be able to get away with a bare minimum of changes.
> 
>>  x86/svm.h     |  17 +++++-
>>  x86/svm_npt.c |   4 +-
>>  3 files changed, 130 insertions(+), 54 deletions(-)
>>
>> diff --git a/x86/svm.c b/x86/svm.c
>> index d0d523a..67dbe31 100644
>> --- a/x86/svm.c
>> +++ b/x86/svm.c
>> @@ -8,6 +8,7 @@
>>  #include "desc.h"
>>  #include "msr.h"
>>  #include "vm.h"
>> +#include "fwcfg.h"
>>  #include "smp.h"
>>  #include "types.h"
>>  #include "alloc_page.h"
>> @@ -16,38 +17,67 @@
>>  #include "vmalloc.h"
>>  
>>  /* for the nested page table*/
>> -u64 *pte[2048];
>> -u64 *pde[4];
>> -u64 *pdpe;
>>  u64 *pml4e;
>>  
>>  struct vmcb *vmcb;
>>  
>> -u64 *npt_get_pte(u64 address)
>> +u64* get_npt_pte(u64 *pml4,
> 
> Heh, the usual way to handle wrappers is to add underscores, i.e.
> 
> u64 *npt_get_pte(u64 address)
> {
>     return __npt_get_pte(npt_get_pml4e(), address, 1);
> }
> 
> swapping the order just results in namespacing weirdness and doesn't convey to the
> reader that this is an "inner" helper.
> 
>> u64 guest_addr, int level)
> 
> Assuming guest_addr is a gpa, call it gpa to avoid ambiguity over virtual vs.
> physical.
> 
>>  {
>> -	int i1, i2;
>> +    int l;
>> +    u64 *pt = pml4, iter_pte;
> 
> Please put pointers and non-pointers on separate lines.  And just "pte" for
> the tmp, it's not actually used as an iterator.  And with that, I have a slight
> preference for page_table over pt so that it's not mistaken for pte.
> 
>> +    unsigned offset;
> 
> No bare unsigned please.  And "offset" is the wrong terminology, "index" or "idx"
> is preferable.  An offset is usually an offset in bytes; this indexes into a u64
> array.
> 
> Ugh, looks like that awful name comes from PGDIR_OFFSET in lib/x86/asm/page.h.
> The offset, at least in Intel SDM terminology, is specifically the last N:0 bits
> of the virtual address (or guest physical) that are the offset into the physical
> page, e.g. 11:0 for a 4kb page, 20:0 for a 2mb page.
> 
>> +
>> +    assert(level >= 1 && level <= 4);
> 
> The upper bound should be NPT_PAGE_LEVEL, or root_level (see below).
> 
>> +    for(l = NPT_PAGE_LEVEL; ; --l) {
> 
> Nit, need a space after "for".
> 
> Also, can you plumb in the root level?  E.g. have npt_get_pte() hardcode the
> root in this case.  At some point this will hopefully support 5-level NPT, at
> which point hardcoding the root will require updating more code than should be
> necessary.
> 
>> +        offset = (guest_addr >> (((l - 1) * NPT_PGDIR_WIDTH) + 12))
>> +                 & NPT_PGDIR_MASK;
> 
> Not your code (I think), but NPT_PGDIR_MASK is an odd name since it's common to
> all levels.  The easiest thing would be to loosely follow KVM.  Actually, I think it
> makes sense to grab the PT64_ stuff from KVM
> 
> #define PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
> #define PT64_LEVEL_BITS 9
> #define PT64_LEVEL_SHIFT(level) \
> 		(PAGE_SHIFT + (level - 1) * PT64_LEVEL_BITS)
> #define PT64_INDEX(address, level)\
> 	(((address) >> PT64_LEVEL_SHIFT(level)) & ((1 << PT64_LEVEL_BITS) - 1))
> 
> 
> and then use those instead of having dedicated NPT_* defines.  That makes it more
> obvious that (a) SVM/NPT tests are 64-bit only and (b) there's nothing special
> about NPT with respect to "legacy" 64-bit paging.
> 
> That will provide a nice macro, PT64_INDEX, to replace the open coded calculations.
> 
>> +        if (l == level)
>> +            break;
>> +        if (!(iter_pte & NPT_PRESENT))
>> +            return false;
> 
> Return "false" works, but it's all kinds of wrong.  This should either assert or
> return NULL.
> 
>> +        pt = (u64*)(iter_pte & PT_ADDR_MASK);
>> +    }
>> +    offset = (guest_addr >> (((l - 1) * NPT_PGDIR_WIDTH) + 12))
>> +             & NPT_PGDIR_MASK;
> 
> Hmm, this is unnecessary because the for-loop can't terminate on its own, it
> can only exit on "l == level", and offset is already correct in that case.

Hey Sean,

Thank you so much for reviewing the code.

I will work on the comments.

- Manali


* Re: [kvm-unit-tests PATCH v2 1/4] x86: nSVM: Move common functionality of the main() to helper run_svm_tests
  2022-04-13 20:28   ` Sean Christopherson
@ 2022-04-14 16:42     ` Shukla, Manali
  0 siblings, 0 replies; 10+ messages in thread
From: Shukla, Manali @ 2022-04-14 16:42 UTC (permalink / raw)
  To: Sean Christopherson, Manali Shukla; +Cc: pbonzini, kvm



On 4/14/2022 1:58 AM, Sean Christopherson wrote:
> On Thu, Mar 24, 2022, Manali Shukla wrote:
>> nSVM tests are "incompatible" with usermode due to the
>> __setup_vm() call in the main() function.
>>
>> If __setup_vm() is replaced with setup_vm() in main(), KUT
>> will build the test with PT_USER_MASK set on all PTEs.
>>
>> nNPT tests will be moved to their own file so that the tests
>> don't need to fiddle with page tables midway through.
>>
>> The quick and dirty approach would be to turn the current main()
>> into a small helper, minus its call to __setup_vm(), and call the
>> helper function run_svm_tests() from main().
>>
>> No functional change intended.
>>
>> Suggested-by: Sean Christopherson <seanjc@google.com>
>> Signed-off-by: Manali Shukla <manali.shukla@amd.com>
>> ---
>>  x86/svm.c | 14 +++++++++-----
>>  x86/svm.h |  1 +
>>  2 files changed, 10 insertions(+), 5 deletions(-)
>>
>> diff --git a/x86/svm.c b/x86/svm.c
>> index 3f94b2a..e93e780 100644
>> --- a/x86/svm.c
>> +++ b/x86/svm.c
>> @@ -406,17 +406,13 @@ test_wanted(const char *name, char *filters[], int filter_count)
>>          }
>>  }
>>  
>> -int main(int ac, char **av)
>> +int run_svm_tests(int ac, char **av)
>>  {
>> -	/* Omit PT_USER_MASK to allow tested host.CR4.SMEP=1. */
>> -	pteval_t opt_mask = 0;
>>  	int i = 0;
>>  
>>  	ac--;
>>  	av++;
>>  
>> -	__setup_vm(&opt_mask);
>> -
>>  	if (!this_cpu_has(X86_FEATURE_SVM)) {
>>  		printf("SVM not availble\n");
>>  		return report_summary();
>> @@ -453,3 +449,11 @@ int main(int ac, char **av)
>>  
>>  	return report_summary();
>>  }
>> +
>> +int main(int ac, char **av)
>> +{
>> +    pteval_t opt_mask = 0;
> 
> Please use tabs, not spaces.  Looks like this file is an unholy mess of tabs and
> spaces.  And since we're ripping this file apart, let's take the opportunity to
> clean it up.  How about after moving code to svm_npt.c, go through and replace
> all spaces with tabs and fix up indentation as appropriate in this file?
Hey Sean,

Thank you for reviewing the code.

I will work on the comments and fixing up the indentation.

-Manali

> 
>> +
>> +    __setup_vm(&opt_mask);
>> +    return run_svm_tests(ac, av);
>> +}
>> diff --git a/x86/svm.h b/x86/svm.h
>> index f74b13a..9ab3aa5 100644
>> --- a/x86/svm.h
>> +++ b/x86/svm.h
>> @@ -398,6 +398,7 @@ struct regs {
>>  
>>  typedef void (*test_guest_func)(struct svm_test *);
>>  
>> +int run_svm_tests(int ac, char **av);
>>  u64 *npt_get_pte(u64 address);
>>  u64 *npt_get_pde(u64 address);
>>  u64 *npt_get_pdpe(void);
>> -- 
>> 2.30.2
>>


Thread overview: 10 messages
2022-03-24  5:30 [kvm-unit-tests PATCH v2 0/4] Move npt test cases and NPT code improvements Manali Shukla
2022-03-24  5:30 ` [kvm-unit-tests PATCH v2 1/4] x86: nSVM: Move common functionality of the main() to helper run_svm_tests Manali Shukla
2022-04-13 20:28   ` Sean Christopherson
2022-04-14 16:42     ` Shukla, Manali
2022-03-24  5:30 ` [kvm-unit-tests PATCH v2 2/4] x86: nSVM: Move all nNPT test cases from svm_tests.c to a separate file Manali Shukla
2022-03-24  5:30 ` [kvm-unit-tests PATCH v2 3/4] x86: nSVM: Allow nSVM tests run with PT_USER_MASK enabled Manali Shukla
2022-03-24  5:30 ` [kvm-unit-tests PATCH v2 4/4] x86: nSVM: Build up the nested page table dynamically Manali Shukla
2022-04-13 21:33   ` Sean Christopherson
2022-04-14 16:39     ` Shukla, Manali
2022-03-24 15:58 ` [kvm-unit-tests PATCH v2 0/4] Move npt test cases and NPT code improvements Maxim Levitsky
