* [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests
@ 2022-11-22 16:11 Maxim Levitsky
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 01/27] x86: replace irq_{enable|disable}() with sti()/cli() Maxim Levitsky
                   ` (27 more replies)
  0 siblings, 28 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

This is a set of fixes and new unit tests that I developed for the
KVM unit tests.

I also did some work to separate the SVM code into a minimal
support library, so that it can be used from an arbitrary test.
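
For illustration, usage from an arbitrary test could look roughly like
this (a sketch only: the names follow the patch subjects below, and the
exact signatures are defined by the later patches in this series):

	#include "x86/svm_lib.h"

	setup_svm();       /* SVM init code, moved to lib/x86/svm_lib.c */
	vmcb_ident(vmcb);  /* VMCB setup, moved to lib/x86/svm_lib.c */
	/* ... enter the guest via the rewritten vm entry macros ... */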

V2:

  - addressed review feedback, and further cleaned up the svm test code
    to use fewer global variables
    (the patches are large, but each changes one thing across all the
    tests, so they are hopefully not hard to review).

Best regards,
    Maxim Levitsky

Maxim Levitsky (27):
  x86: replace irq_{enable|disable}() with sti()/cli()
  x86: introduce sti_nop() and sti_nop_cli()
  x86: add a few helper functions for apic local timer
  svm: remove nop after stgi/clgi
  svm: make svm_intr_intercept_mix_if/gif test a bit more robust
  svm: use apic_start_timer/apic_stop_timer instead of open coding it
  x86: Add test for #SMI during interrupt window
  x86: Add a simple test for SYSENTER instruction.
  svm: add simple nested shutdown test.
  SVM: add two tests for exitintinto on exception
  lib: Add random number generator
  x86: add IPI stress test
  svm: remove get_npt_pte extern
  svm: move svm spec definitions to lib/x86/svm.h
  svm: move some svm support functions into lib/x86/svm_lib.h
  svm: move setup_svm() to svm_lib.c
  svm: correctly skip if NPT not supported
  svm: move vmcb_ident to svm_lib.c
  svm: rewrite vm entry macros
  svm: move v2 tests run into test_run
  svm: cleanup the default_prepare
  svm: introduce svm_vcpu
  svm: introduce struct svm_test_context
  svm: use svm_test_context in v2 tests
  svm: move nested vcpu to test context
  svm: move test_guest_func to test context
  x86: ipi_stress: add optional SVM support

 Makefile                  |    3 +-
 README.md                 |    1 +
 lib/prng.c                |   41 ++
 lib/prng.h                |   23 +
 lib/x86/apic.c            |   38 ++
 lib/x86/apic.h            |    6 +
 lib/x86/processor.h       |   39 +-
 lib/x86/random.c          |   33 +
 lib/x86/random.h          |   17 +
 lib/x86/smp.c             |    2 +-
 lib/x86/svm.h             |  367 +++++++++++
 lib/x86/svm_lib.c         |  179 ++++++
 lib/x86/svm_lib.h         |  151 +++++
 scripts/arch-run.bash     |    2 +-
 x86/Makefile.common       |    5 +-
 x86/Makefile.x86_64       |    5 +
 x86/apic.c                |    6 +-
 x86/asyncpf.c             |    6 +-
 x86/eventinj.c            |   22 +-
 x86/hyperv_connections.c  |    2 +-
 x86/hyperv_stimer.c       |    4 +-
 x86/hyperv_synic.c        |    6 +-
 x86/intel-iommu.c         |    2 +-
 x86/ioapic.c              |   15 +-
 x86/ipi_stress.c          |  244 +++++++
 x86/pmu.c                 |    4 +-
 x86/smm_int_window.c      |  118 ++++
 x86/svm.c                 |  341 ++--------
 x86/svm.h                 |  495 +-------------
 x86/svm_npt.c             |   81 ++-
 x86/svm_tests.c           | 1278 ++++++++++++++++++++++---------------
 x86/sysenter.c            |  203 ++++++
 x86/taskswitch2.c         |    4 +-
 x86/tscdeadline_latency.c |    4 +-
 x86/unittests.cfg         |   20 +
 x86/vmexit.c              |   18 +-
 x86/vmx_tests.c           |   47 +-
 37 files changed, 2454 insertions(+), 1378 deletions(-)
 create mode 100644 lib/prng.c
 create mode 100644 lib/prng.h
 create mode 100644 lib/x86/random.c
 create mode 100644 lib/x86/random.h
 create mode 100644 lib/x86/svm.h
 create mode 100644 lib/x86/svm_lib.c
 create mode 100644 lib/x86/svm_lib.h
 create mode 100644 x86/ipi_stress.c
 create mode 100644 x86/smm_int_window.c
 create mode 100644 x86/sysenter.c

-- 
2.34.3




* [kvm-unit-tests PATCH v3 01/27] x86: replace irq_{enable|disable}() with sti()/cli()
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-12-01 13:46   ` Emanuele Giuseppe Esposito
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 02/27] x86: introduce sti_nop() and sti_nop_cli() Maxim Levitsky
                   ` (26 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

This removes a layer of indirection which is strictly
speaking not needed, since this is x86-specific code anyway.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 lib/x86/processor.h       | 19 +++++-----------
 lib/x86/smp.c             |  2 +-
 x86/apic.c                |  2 +-
 x86/asyncpf.c             |  6 ++---
 x86/eventinj.c            | 22 +++++++++---------
 x86/hyperv_connections.c  |  2 +-
 x86/hyperv_stimer.c       |  4 ++--
 x86/hyperv_synic.c        |  6 ++---
 x86/intel-iommu.c         |  2 +-
 x86/ioapic.c              | 14 ++++++------
 x86/pmu.c                 |  4 ++--
 x86/svm.c                 |  4 ++--
 x86/svm_tests.c           | 48 +++++++++++++++++++--------------------
 x86/taskswitch2.c         |  4 ++--
 x86/tscdeadline_latency.c |  4 ++--
 x86/vmexit.c              | 18 +++++++--------
 x86/vmx_tests.c           | 42 +++++++++++++++++-----------------
 17 files changed, 98 insertions(+), 105 deletions(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 7a9e8c82..b89f6a7c 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -653,11 +653,17 @@ static inline void pause(void)
 	asm volatile ("pause");
 }
 
+/* Disable interrupts as per x86 spec */
 static inline void cli(void)
 {
 	asm volatile ("cli");
 }
 
+/*
+ * Enable interrupts.
+ * Note that the next instruction after sti will not have interrupts
+ * evaluated, due to the 'interrupt shadow'.
+ */
 static inline void sti(void)
 {
 	asm volatile ("sti");
@@ -732,19 +738,6 @@ static inline void wrtsc(u64 tsc)
 	wrmsr(MSR_IA32_TSC, tsc);
 }
 
-static inline void irq_disable(void)
-{
-	asm volatile("cli");
-}
-
-/* Note that irq_enable() does not ensure an interrupt shadow due
- * to the vagaries of compiler optimizations.  If you need the
- * shadow, use a single asm with "sti" and the instruction after it.
- */
-static inline void irq_enable(void)
-{
-	asm volatile("sti");
-}
 
 static inline void invlpg(volatile void *va)
 {
diff --git a/lib/x86/smp.c b/lib/x86/smp.c
index b9b91c77..e297016c 100644
--- a/lib/x86/smp.c
+++ b/lib/x86/smp.c
@@ -84,7 +84,7 @@ static void setup_smp_id(void *data)
 
 void ap_online(void)
 {
-	irq_enable();
+	sti();
 
 	printf("setup: CPU %" PRId32 " online\n", apic_id());
 	atomic_inc(&cpu_online_count);
diff --git a/x86/apic.c b/x86/apic.c
index 20c3a1a4..66c1c58a 100644
--- a/x86/apic.c
+++ b/x86/apic.c
@@ -958,7 +958,7 @@ int main(void)
 	setup_vm();
 
 	mask_pic_interrupts();
-	irq_enable();
+	sti();
 
 	for (i = 0; i < ARRAY_SIZE(tests); i++) {
 		tests[i]();
diff --git a/x86/asyncpf.c b/x86/asyncpf.c
index 9366c293..bc515be9 100644
--- a/x86/asyncpf.c
+++ b/x86/asyncpf.c
@@ -65,7 +65,7 @@ static void pf_isr(struct ex_regs *r)
 				    read_cr2(), virt, phys);
 			while(phys) {
 				safe_halt(); /* enables irq */
-				irq_disable();
+				cli();
 			}
 			break;
 		case KVM_PV_REASON_PAGE_READY:
@@ -97,7 +97,7 @@ int main(int ac, char **av)
 			KVM_ASYNC_PF_SEND_ALWAYS | KVM_ASYNC_PF_ENABLED);
 	printf("alloc memory\n");
 	buf = malloc(MEM);
-	irq_enable();
+	sti();
 	while(loop--) {
 		printf("start loop\n");
 		/* access a lot of memory to make host swap it out */
@@ -105,7 +105,7 @@ int main(int ac, char **av)
 			buf[i] = 1;
 		printf("end loop\n");
 	}
-	irq_disable();
+	cli();
 
 	return report_summary();
 }
diff --git a/x86/eventinj.c b/x86/eventinj.c
index 3031c040..147838ce 100644
--- a/x86/eventinj.c
+++ b/x86/eventinj.c
@@ -270,10 +270,10 @@ int main(void)
 	test_count = 0;
 	flush_idt_page();
 	printf("Sending vec 33 to self\n");
-	irq_enable();
+	sti();
 	apic_self_ipi(33);
 	io_delay();
-	irq_disable();
+	cli();
 	printf("After vec 33 to self\n");
 	report(test_count == 1, "vec 33");
 
@@ -294,9 +294,9 @@ int main(void)
 	apic_self_ipi(32);
 	apic_self_ipi(33);
 	io_delay();
-	irq_enable();
+	sti();
 	asm volatile("nop");
-	irq_disable();
+	cli();
 	printf("After vec 32 and 33 to self\n");
 	report(test_count == 2, "vec 32/33");
 
@@ -311,7 +311,7 @@ int main(void)
 	flush_stack();
 	io_delay();
 	asm volatile ("sti; int $33");
-	irq_disable();
+	cli();
 	printf("After vec 32 and int $33\n");
 	report(test_count == 2, "vec 32/int $33");
 
@@ -321,7 +321,7 @@ int main(void)
 	flush_idt_page();
 	printf("Sending vec 33 and 62 and mask one with TPR\n");
 	apic_write(APIC_TASKPRI, 0xf << 4);
-	irq_enable();
+	sti();
 	apic_self_ipi(32);
 	apic_self_ipi(62);
 	io_delay();
@@ -330,7 +330,7 @@ int main(void)
 	report(test_count == 1, "TPR");
 	apic_write(APIC_TASKPRI, 0x0);
 	while(test_count != 2); /* wait for second irq */
-	irq_disable();
+	cli();
 
 	/* test fault durint NP delivery */
 	printf("Before NP test\n");
@@ -353,9 +353,9 @@ int main(void)
 	/* this is needed on VMX without NMI window notification.
 	   Interrupt windows is used instead, so let pending NMI
 	   to be injected */
-	irq_enable();
+	sti();
 	asm volatile ("nop");
-	irq_disable();
+	cli();
 	report(test_count == 2, "NMI");
 
 	/* generate NMI that will fault on IRET */
@@ -367,9 +367,9 @@ int main(void)
 	/* this is needed on VMX without NMI window notification.
 	   Interrupt windows is used instead, so let pending NMI
 	   to be injected */
-	irq_enable();
+	sti();
 	asm volatile ("nop");
-	irq_disable();
+	cli();
 	printf("After NMI to self\n");
 	report(test_count == 2, "NMI");
 	stack_phys = (ulong)virt_to_phys(alloc_page());
diff --git a/x86/hyperv_connections.c b/x86/hyperv_connections.c
index 6e8ac32f..cf043664 100644
--- a/x86/hyperv_connections.c
+++ b/x86/hyperv_connections.c
@@ -94,7 +94,7 @@ static void setup_cpu(void *ctx)
 	struct hv_vcpu *hv;
 
 	write_cr3((ulong)ctx);
-	irq_enable();
+	sti();
 
 	vcpu = smp_id();
 	hv = &hv_vcpus[vcpu];
diff --git a/x86/hyperv_stimer.c b/x86/hyperv_stimer.c
index 7b7c985c..f7c67916 100644
--- a/x86/hyperv_stimer.c
+++ b/x86/hyperv_stimer.c
@@ -307,7 +307,7 @@ static void stimer_test(void *ctx)
     struct svcpu *svcpu = &g_synic_vcpu[vcpu];
     struct stimer *timer1, *timer2;
 
-    irq_enable();
+    sti();
 
     timer1 = &svcpu->timer[0];
     timer2 = &svcpu->timer[1];
@@ -318,7 +318,7 @@ static void stimer_test(void *ctx)
     stimer_test_auto_enable_periodic(vcpu, timer1);
     stimer_test_one_shot_busy(vcpu, timer1);
 
-    irq_disable();
+    cli();
 }
 
 static void stimer_test_cleanup(void *ctx)
diff --git a/x86/hyperv_synic.c b/x86/hyperv_synic.c
index 5ca593c0..9d61d836 100644
--- a/x86/hyperv_synic.c
+++ b/x86/hyperv_synic.c
@@ -79,7 +79,7 @@ static void synic_test_prepare(void *ctx)
     int i = 0;
 
     write_cr3((ulong)ctx);
-    irq_enable();
+    sti();
 
     rdmsr(HV_X64_MSR_SVERSION);
     rdmsr(HV_X64_MSR_SIMP);
@@ -121,7 +121,7 @@ static void synic_test(void *ctx)
 {
     int dst_vcpu = (ulong)ctx;
 
-    irq_enable();
+    sti();
     synic_sints_test(dst_vcpu);
 }
 
@@ -129,7 +129,7 @@ static void synic_test_cleanup(void *ctx)
 {
     int i;
 
-    irq_enable();
+    sti();
     for (i = 0; i < HV_SYNIC_SINT_COUNT; i++) {
         synic_sint_destroy(i);
     }
diff --git a/x86/intel-iommu.c b/x86/intel-iommu.c
index 4442fe1f..687a43ce 100644
--- a/x86/intel-iommu.c
+++ b/x86/intel-iommu.c
@@ -82,7 +82,7 @@ static void vtd_test_ir(void)
 
 	report_prefix_push("vtd_ir");
 
-	irq_enable();
+	sti();
 
 	/* This will enable INTx */
 	pci_msi_set_enable(pci_dev, false);
diff --git a/x86/ioapic.c b/x86/ioapic.c
index 4f578ce4..cce8add1 100644
--- a/x86/ioapic.c
+++ b/x86/ioapic.c
@@ -125,10 +125,10 @@ static void test_ioapic_simultaneous(void)
 	handle_irq(0x66, ioapic_isr_66);
 	ioapic_set_redir(0x0e, 0x78, TRIGGER_EDGE);
 	ioapic_set_redir(0x0f, 0x66, TRIGGER_EDGE);
-	irq_disable();
+	cli();
 	toggle_irq_line(0x0f);
 	toggle_irq_line(0x0e);
-	irq_enable();
+	sti();
 	asm volatile ("nop");
 	report(g_66 && g_78 && g_66_after_78 && g_66_rip == g_78_rip,
 	       "ioapic simultaneous edge interrupts");
@@ -173,10 +173,10 @@ static void test_ioapic_level_tmr(bool expected_tmr_before)
 
 static void toggle_irq_line_0x0e(void *data)
 {
-	irq_disable();
+	cli();
 	delay(IPI_DELAY);
 	toggle_irq_line(0x0e);
-	irq_enable();
+	sti();
 }
 
 static void test_ioapic_edge_tmr_smp(bool expected_tmr_before)
@@ -199,10 +199,10 @@ static void test_ioapic_edge_tmr_smp(bool expected_tmr_before)
 
 static void set_irq_line_0x0e(void *data)
 {
-	irq_disable();
+	cli();
 	delay(IPI_DELAY);
 	set_irq_line(0x0e, 1);
-	irq_enable();
+	sti();
 }
 
 static void test_ioapic_level_tmr_smp(bool expected_tmr_before)
@@ -485,7 +485,7 @@ int main(void)
 	else
 		printf("x2apic not detected\n");
 
-	irq_enable();
+	sti();
 
 	ioapic_reg_version();
 	ioapic_reg_id();
diff --git a/x86/pmu.c b/x86/pmu.c
index 72c2c9cf..328e3c68 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -75,10 +75,10 @@ static bool check_irq(void)
 {
 	int i;
 	irq_received = 0;
-	irq_enable();
+	sti();
 	for (i = 0; i < 100000 && !irq_received; i++)
 		asm volatile("pause");
-	irq_disable();
+	cli();
 	return irq_received;
 }
 
diff --git a/x86/svm.c b/x86/svm.c
index ba435b4a..0b2a1d69 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -240,7 +240,7 @@ static noinline void test_run(struct svm_test *test)
 {
 	u64 vmcb_phys = virt_to_phys(vmcb);
 
-	irq_disable();
+	cli();
 	vmcb_ident(vmcb);
 
 	test->prepare(test);
@@ -273,7 +273,7 @@ static noinline void test_run(struct svm_test *test)
 				"memory");
 		++test->exits;
 	} while (!test->finished(test));
-	irq_enable();
+	sti();
 
 	report(test->succeeded(test), "%s", test->name);
 
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 27ce47b4..15e781af 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -1000,9 +1000,9 @@ static bool pending_event_finished(struct svm_test *test)
 			return true;
 		}
 
-		irq_enable();
+		sti();
 		asm volatile ("nop");
-		irq_disable();
+		cli();
 
 		if (!pending_event_ipi_fired) {
 			report_fail("Pending interrupt not dispatched after IRQ enabled\n");
@@ -1056,9 +1056,9 @@ static void pending_event_cli_test(struct svm_test *test)
 	}
 
 	/* VINTR_MASKING is zero.  This should cause the IPI to fire.  */
-	irq_enable();
+	sti();
 	asm volatile ("nop");
-	irq_disable();
+	cli();
 
 	if (pending_event_ipi_fired != true) {
 		set_test_stage(test, -1);
@@ -1072,9 +1072,9 @@ static void pending_event_cli_test(struct svm_test *test)
 	 * the VINTR interception should be clear in VMCB02.  Check
 	 * that L0 did not leave a stale VINTR in the VMCB.
 	 */
-	irq_enable();
+	sti();
 	asm volatile ("nop");
-	irq_disable();
+	cli();
 }
 
 static bool pending_event_cli_finished(struct svm_test *test)
@@ -1105,9 +1105,9 @@ static bool pending_event_cli_finished(struct svm_test *test)
 			return true;
 		}
 
-		irq_enable();
+		sti();
 		asm volatile ("nop");
-		irq_disable();
+		cli();
 
 		if (pending_event_ipi_fired != true) {
 			report_fail("Interrupt not triggered by host");
@@ -1153,7 +1153,7 @@ static void interrupt_test(struct svm_test *test)
 	long long start, loops;
 
 	apic_write(APIC_LVTT, TIMER_VECTOR);
-	irq_enable();
+	sti();
 	apic_write(APIC_TMICT, 1); //Timer Initial Count Register 0x380 one-shot
 	for (loops = 0; loops < 10000000 && !timer_fired; loops++)
 		asm volatile ("nop");
@@ -1166,7 +1166,7 @@ static void interrupt_test(struct svm_test *test)
 	}
 
 	apic_write(APIC_TMICT, 0);
-	irq_disable();
+	cli();
 	vmmcall();
 
 	timer_fired = false;
@@ -1181,9 +1181,9 @@ static void interrupt_test(struct svm_test *test)
 		vmmcall();
 	}
 
-	irq_enable();
+	sti();
 	apic_write(APIC_TMICT, 0);
-	irq_disable();
+	cli();
 
 	timer_fired = false;
 	start = rdtsc();
@@ -1199,7 +1199,7 @@ static void interrupt_test(struct svm_test *test)
 	}
 
 	apic_write(APIC_TMICT, 0);
-	irq_disable();
+	cli();
 	vmmcall();
 
 	timer_fired = false;
@@ -1216,7 +1216,7 @@ static void interrupt_test(struct svm_test *test)
 	}
 
 	apic_write(APIC_TMICT, 0);
-	irq_disable();
+	cli();
 }
 
 static bool interrupt_finished(struct svm_test *test)
@@ -1243,9 +1243,9 @@ static bool interrupt_finished(struct svm_test *test)
 			return true;
 		}
 
-		irq_enable();
+		sti();
 		asm volatile ("nop");
-		irq_disable();
+		cli();
 
 		vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
 		vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
@@ -1540,9 +1540,9 @@ static void virq_inject_test(struct svm_test *test)
 		vmmcall();
 	}
 
-	irq_enable();
+	sti();
 	asm volatile ("nop");
-	irq_disable();
+	cli();
 
 	if (!virq_fired) {
 		report_fail("virtual interrupt not fired after L2 sti");
@@ -1557,9 +1557,9 @@ static void virq_inject_test(struct svm_test *test)
 		vmmcall();
 	}
 
-	irq_enable();
+	sti();
 	asm volatile ("nop");
-	irq_disable();
+	cli();
 
 	if (!virq_fired) {
 		report_fail("virtual interrupt not fired after return from VINTR intercept");
@@ -1568,9 +1568,9 @@ static void virq_inject_test(struct svm_test *test)
 
 	vmmcall();
 
-	irq_enable();
+	sti();
 	asm volatile ("nop");
-	irq_disable();
+	cli();
 
 	if (virq_fired) {
 		report_fail("virtual interrupt fired when V_IRQ_PRIO less than V_TPR");
@@ -1739,9 +1739,9 @@ static bool reg_corruption_finished(struct svm_test *test)
 
 		void* guest_rip = (void*)vmcb->save.rip;
 
-		irq_enable();
+		sti();
 		asm volatile ("nop");
-		irq_disable();
+		cli();
 
 		if (guest_rip == insb_instruction_label && io_port_var != 0xAA) {
 			report_fail("RIP corruption detected after %d timer interrupts",
diff --git a/x86/taskswitch2.c b/x86/taskswitch2.c
index db69f078..2a3b210d 100644
--- a/x86/taskswitch2.c
+++ b/x86/taskswitch2.c
@@ -139,10 +139,10 @@ static void test_kernel_mode_int(void)
 	test_count = 0;
 	printf("Trigger IRQ from APIC\n");
 	set_intr_task_gate(0xf0, irq_tss);
-	irq_enable();
+	sti();
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | APIC_INT_ASSERT | 0xf0, 0);
 	io_delay();
-	irq_disable();
+	cli();
 	printf("Return from APIC IRQ\n");
 	report(test_count == 1, "IRQ external");
 
diff --git a/x86/tscdeadline_latency.c b/x86/tscdeadline_latency.c
index a3bc4ea4..6bf56225 100644
--- a/x86/tscdeadline_latency.c
+++ b/x86/tscdeadline_latency.c
@@ -70,7 +70,7 @@ static void tsc_deadline_timer_isr(isr_regs_t *regs)
 static void start_tsc_deadline_timer(void)
 {
     handle_irq(TSC_DEADLINE_TIMER_VECTOR, tsc_deadline_timer_isr);
-    irq_enable();
+    sti();
 
     wrmsr(MSR_IA32_TSCDEADLINE, rdmsr(MSR_IA32_TSC)+delta);
     asm volatile ("nop");
@@ -115,7 +115,7 @@ int main(int argc, char **argv)
     breakmax = argc <= 3 ? 0 : atol(argv[3]);
     printf("breakmax=%d\n", breakmax);
     test_tsc_deadline_timer();
-    irq_enable();
+    sti();
 
     /* The condition might have triggered already, so check before HLT. */
     while (!hitmax && table_idx < size)
diff --git a/x86/vmexit.c b/x86/vmexit.c
index b1eed8d1..884ac63a 100644
--- a/x86/vmexit.c
+++ b/x86/vmexit.c
@@ -93,7 +93,7 @@ static void apic_self_ipi(int vec)
 static void self_ipi_sti_nop(void)
 {
 	x = 0;
-	irq_disable();
+	cli();
 	apic_self_ipi(IPI_TEST_VECTOR);
 	asm volatile("sti; nop");
 	if (x != 1) printf("%d", x);
@@ -102,7 +102,7 @@ static void self_ipi_sti_nop(void)
 static void self_ipi_sti_hlt(void)
 {
 	x = 0;
-	irq_disable();
+	cli();
 	apic_self_ipi(IPI_TEST_VECTOR);
 	safe_halt();
 	if (x != 1) printf("%d", x);
@@ -121,7 +121,7 @@ static void self_ipi_tpr(void)
 static void self_ipi_tpr_sti_nop(void)
 {
 	x = 0;
-	irq_disable();
+	cli();
 	apic_set_tpr(0x0f);
 	apic_self_ipi(IPI_TEST_VECTOR);
 	apic_set_tpr(0x00);
@@ -132,7 +132,7 @@ static void self_ipi_tpr_sti_nop(void)
 static void self_ipi_tpr_sti_hlt(void)
 {
 	x = 0;
-	irq_disable();
+	cli();
 	apic_set_tpr(0x0f);
 	apic_self_ipi(IPI_TEST_VECTOR);
 	apic_set_tpr(0x00);
@@ -147,14 +147,14 @@ static int is_x2apic(void)
 
 static void x2apic_self_ipi_sti_nop(void)
 {
-	irq_disable();
+	cli();
 	x2apic_self_ipi(IPI_TEST_VECTOR);
 	asm volatile("sti; nop");
 }
 
 static void x2apic_self_ipi_sti_hlt(void)
 {
-	irq_disable();
+	cli();
 	x2apic_self_ipi(IPI_TEST_VECTOR);
 	safe_halt();
 }
@@ -169,7 +169,7 @@ static void x2apic_self_ipi_tpr(void)
 
 static void x2apic_self_ipi_tpr_sti_nop(void)
 {
-	irq_disable();
+	cli();
 	apic_set_tpr(0x0f);
 	x2apic_self_ipi(IPI_TEST_VECTOR);
 	apic_set_tpr(0x00);
@@ -178,7 +178,7 @@ static void x2apic_self_ipi_tpr_sti_nop(void)
 
 static void x2apic_self_ipi_tpr_sti_hlt(void)
 {
-	irq_disable();
+	cli();
 	apic_set_tpr(0x0f);
 	x2apic_self_ipi(IPI_TEST_VECTOR);
 	apic_set_tpr(0x00);
@@ -605,7 +605,7 @@ int main(int ac, char **av)
 	handle_irq(IPI_TEST_VECTOR, self_ipi_isr);
 	nr_cpus = cpu_count();
 
-	irq_enable();
+	sti();
 	on_cpus(enable_nx, NULL);
 
 	ret = pci_find_dev(PCI_VENDOR_ID_REDHAT, PCI_DEVICE_ID_REDHAT_TEST);
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 7bba8165..c556af28 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -1577,7 +1577,7 @@ static void interrupt_main(void)
 	vmx_set_test_stage(0);
 
 	apic_write(APIC_LVTT, TIMER_VECTOR);
-	irq_enable();
+	sti();
 
 	apic_write(APIC_TMICT, 1);
 	for (loops = 0; loops < 10000000 && !timer_fired; loops++)
@@ -1585,7 +1585,7 @@ static void interrupt_main(void)
 	report(timer_fired, "direct interrupt while running guest");
 
 	apic_write(APIC_TMICT, 0);
-	irq_disable();
+	cli();
 	vmcall();
 	timer_fired = false;
 	apic_write(APIC_TMICT, 1);
@@ -1593,9 +1593,9 @@ static void interrupt_main(void)
 		asm volatile ("nop");
 	report(timer_fired, "intercepted interrupt while running guest");
 
-	irq_enable();
+	sti();
 	apic_write(APIC_TMICT, 0);
-	irq_disable();
+	cli();
 	vmcall();
 	timer_fired = false;
 	start = rdtsc();
@@ -1607,7 +1607,7 @@ static void interrupt_main(void)
 	       "direct interrupt + hlt");
 
 	apic_write(APIC_TMICT, 0);
-	irq_disable();
+	cli();
 	vmcall();
 	timer_fired = false;
 	start = rdtsc();
@@ -1619,13 +1619,13 @@ static void interrupt_main(void)
 	       "intercepted interrupt + hlt");
 
 	apic_write(APIC_TMICT, 0);
-	irq_disable();
+	cli();
 	vmcall();
 	timer_fired = false;
 	start = rdtsc();
 	apic_write(APIC_TMICT, 1000000);
 
-	irq_enable();
+	sti();
 	asm volatile ("nop");
 	vmcall();
 
@@ -1633,13 +1633,13 @@ static void interrupt_main(void)
 	       "direct interrupt + activity state hlt");
 
 	apic_write(APIC_TMICT, 0);
-	irq_disable();
+	cli();
 	vmcall();
 	timer_fired = false;
 	start = rdtsc();
 	apic_write(APIC_TMICT, 1000000);
 
-	irq_enable();
+	sti();
 	asm volatile ("nop");
 	vmcall();
 
@@ -1647,7 +1647,7 @@ static void interrupt_main(void)
 	       "intercepted interrupt + activity state hlt");
 
 	apic_write(APIC_TMICT, 0);
-	irq_disable();
+	cli();
 	vmx_set_test_stage(7);
 	vmcall();
 	timer_fired = false;
@@ -1658,7 +1658,7 @@ static void interrupt_main(void)
 	       "running a guest with interrupt acknowledgement set");
 
 	apic_write(APIC_TMICT, 0);
-	irq_enable();
+	sti();
 	timer_fired = false;
 	vmcall();
 	report(timer_fired, "Inject an event to a halted guest");
@@ -1709,9 +1709,9 @@ static int interrupt_exit_handler(union exit_reason exit_reason)
 			int vector = vmcs_read(EXI_INTR_INFO) & 0xff;
 			handle_external_interrupt(vector);
 		} else {
-			irq_enable();
+			sti();
 			asm volatile ("nop");
-			irq_disable();
+			cli();
 		}
 		if (vmx_get_test_stage() >= 2)
 			vmcs_write(GUEST_ACTV_STATE, ACTV_ACTIVE);
@@ -6716,9 +6716,9 @@ static void test_x2apic_wr(
 		assert_exit_reason(exit_reason_want);
 
 		/* Clear the external interrupt. */
-		irq_enable();
+		sti();
 		asm volatile ("nop");
-		irq_disable();
+		cli();
 		report(handle_x2apic_ipi_ran,
 		       "Got pending interrupt after IRQ enabled.");
 
@@ -8403,7 +8403,7 @@ static void vmx_pending_event_test_core(bool guest_hlt)
 	if (guest_hlt)
 		vmcs_write(GUEST_ACTV_STATE, ACTV_HLT);
 
-	irq_disable();
+	cli();
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL |
 				   APIC_DM_FIXED | ipi_vector,
 				   0);
@@ -8414,9 +8414,9 @@ static void vmx_pending_event_test_core(bool guest_hlt)
 	report(!vmx_pending_event_guest_run,
 	       "Guest did not run before host received IPI");
 
-	irq_enable();
+	sti();
 	asm volatile ("nop");
-	irq_disable();
+	cli();
 	report(vmx_pending_event_ipi_fired,
 	       "Got pending interrupt after IRQ enabled");
 
@@ -9340,7 +9340,7 @@ static void irq_79_handler_guest(isr_regs_t *regs)
 static void vmx_eoi_bitmap_ioapic_scan_test_guest(void)
 {
 	handle_irq(0x79, irq_79_handler_guest);
-	irq_enable();
+	sti();
 
 	/* Signal to L1 CPU to trigger ioapic scan */
 	vmx_set_test_stage(1);
@@ -9397,7 +9397,7 @@ static void vmx_hlt_with_rvi_guest(void)
 {
 	handle_irq(HLT_WITH_RVI_VECTOR, vmx_hlt_with_rvi_guest_isr);
 
-	irq_enable();
+	sti();
 	asm volatile ("nop");
 
 	vmcall();
@@ -9449,7 +9449,7 @@ static void irq_78_handler_guest(isr_regs_t *regs)
 static void vmx_apic_passthrough_guest(void)
 {
 	handle_irq(0x78, irq_78_handler_guest);
-	irq_enable();
+	sti();
 
 	/* If requested, wait for other CPU to trigger ioapic scan */
 	if (vmx_get_test_stage() < 1) {
-- 
2.34.3



* [kvm-unit-tests PATCH v3 02/27] x86: introduce sti_nop() and sti_nop_cli()
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 01/27] x86: replace irq_{enable|disable}() with sti()/cli() Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-12-01 13:46   ` Emanuele Giuseppe Esposito
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 03/27] x86: add a few helper functions for apic local timer Maxim Levitsky
                   ` (25 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

Add functions that shorten the common usage patterns of sti.
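
For example, the open-coded pattern that appears throughout the diff
below:

	sti();
	asm volatile ("nop");  /* pending interrupts are delivered here */
	cli();

becomes a single call:

	sti_nop_cli();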

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 lib/x86/processor.h | 20 +++++++++++++++++++
 x86/apic.c          |  4 ++--
 x86/eventinj.c      | 12 +++---------
 x86/ioapic.c        |  3 +--
 x86/svm_tests.c     | 48 ++++++++++++---------------------------------
 x86/vmx_tests.c     | 23 +++++++---------------
 6 files changed, 46 insertions(+), 64 deletions(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index b89f6a7c..29689d21 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -669,6 +669,26 @@ static inline void sti(void)
 	asm volatile ("sti");
 }
 
+
+/*
+ * Enable interrupts and ensure that interrupts are evaluated upon
+ * return from this function.
+ */
+static inline void sti_nop(void)
+{
+	asm volatile ("sti; nop");
+}
+
+
+/*
+ * Enable interrupts for the duration of one instruction (a nop),
+ * allowing the CPU to process all interrupts that are already pending.
+ */
+static inline void sti_nop_cli(void)
+{
+	asm volatile ("sti; nop; cli");
+}
+
 static inline unsigned long long rdrand(void)
 {
 	long long r;
diff --git a/x86/apic.c b/x86/apic.c
index 66c1c58a..dd7e7834 100644
--- a/x86/apic.c
+++ b/x86/apic.c
@@ -313,7 +313,7 @@ static void test_self_ipi_x2apic(void)
 
 volatile int nmi_counter_private, nmi_counter, nmi_hlt_counter, sti_loop_active;
 
-static void sti_nop(char *p)
+static void test_sti_nop(char *p)
 {
 	asm volatile (
 		  ".globl post_sti \n\t"
@@ -335,7 +335,7 @@ static void sti_loop(void *ignore)
 	unsigned k = 0;
 
 	while (sti_loop_active)
-		sti_nop((char *)(ulong)((k++ * 4096) % (128 * 1024 * 1024)));
+		test_sti_nop((char *)(ulong)((k++ * 4096) % (128 * 1024 * 1024)));
 }
 
 static void nmi_handler(isr_regs_t *regs)
diff --git a/x86/eventinj.c b/x86/eventinj.c
index 147838ce..6fbb2d0f 100644
--- a/x86/eventinj.c
+++ b/x86/eventinj.c
@@ -294,9 +294,7 @@ int main(void)
 	apic_self_ipi(32);
 	apic_self_ipi(33);
 	io_delay();
-	sti();
-	asm volatile("nop");
-	cli();
+	sti_nop_cli();
 	printf("After vec 32 and 33 to self\n");
 	report(test_count == 2, "vec 32/33");
 
@@ -353,9 +351,7 @@ int main(void)
 	/* this is needed on VMX without NMI window notification.
 	   Interrupt windows is used instead, so let pending NMI
 	   to be injected */
-	sti();
-	asm volatile ("nop");
-	cli();
+	sti_nop_cli();
 	report(test_count == 2, "NMI");
 
 	/* generate NMI that will fault on IRET */
@@ -367,9 +363,7 @@ int main(void)
 	/* this is needed on VMX without NMI window notification.
 	   Interrupt windows is used instead, so let pending NMI
 	   to be injected */
-	sti();
-	asm volatile ("nop");
-	cli();
+	sti_nop_cli();
 	printf("After NMI to self\n");
 	report(test_count == 2, "NMI");
 	stack_phys = (ulong)virt_to_phys(alloc_page());
diff --git a/x86/ioapic.c b/x86/ioapic.c
index cce8add1..7d3e37cc 100644
--- a/x86/ioapic.c
+++ b/x86/ioapic.c
@@ -128,8 +128,7 @@ static void test_ioapic_simultaneous(void)
 	cli();
 	toggle_irq_line(0x0f);
 	toggle_irq_line(0x0e);
-	sti();
-	asm volatile ("nop");
+	sti_nop();
 	report(g_66 && g_78 && g_66_after_78 && g_66_rip == g_78_rip,
 	       "ioapic simultaneous edge interrupts");
 }
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 15e781af..02583236 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -1000,9 +1000,7 @@ static bool pending_event_finished(struct svm_test *test)
 			return true;
 		}
 
-		sti();
-		asm volatile ("nop");
-		cli();
+		sti_nop_cli();
 
 		if (!pending_event_ipi_fired) {
 			report_fail("Pending interrupt not dispatched after IRQ enabled\n");
@@ -1056,9 +1054,7 @@ static void pending_event_cli_test(struct svm_test *test)
 	}
 
 	/* VINTR_MASKING is zero.  This should cause the IPI to fire.  */
-	sti();
-	asm volatile ("nop");
-	cli();
+	sti_nop_cli();
 
 	if (pending_event_ipi_fired != true) {
 		set_test_stage(test, -1);
@@ -1072,9 +1068,7 @@ static void pending_event_cli_test(struct svm_test *test)
 	 * the VINTR interception should be clear in VMCB02.  Check
 	 * that L0 did not leave a stale VINTR in the VMCB.
 	 */
-	sti();
-	asm volatile ("nop");
-	cli();
+	sti_nop_cli();
 }
 
 static bool pending_event_cli_finished(struct svm_test *test)
@@ -1105,9 +1099,7 @@ static bool pending_event_cli_finished(struct svm_test *test)
 			return true;
 		}
 
-		sti();
-		asm volatile ("nop");
-		cli();
+		sti_nop_cli();
 
 		if (pending_event_ipi_fired != true) {
 			report_fail("Interrupt not triggered by host");
@@ -1243,9 +1235,7 @@ static bool interrupt_finished(struct svm_test *test)
 			return true;
 		}
 
-		sti();
-		asm volatile ("nop");
-		cli();
+		sti_nop_cli();
 
 		vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
 		vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
@@ -1540,9 +1530,7 @@ static void virq_inject_test(struct svm_test *test)
 		vmmcall();
 	}
 
-	sti();
-	asm volatile ("nop");
-	cli();
+	sti_nop_cli();
 
 	if (!virq_fired) {
 		report_fail("virtual interrupt not fired after L2 sti");
@@ -1557,9 +1545,7 @@ static void virq_inject_test(struct svm_test *test)
 		vmmcall();
 	}
 
-	sti();
-	asm volatile ("nop");
-	cli();
+	sti_nop_cli();
 
 	if (!virq_fired) {
 		report_fail("virtual interrupt not fired after return from VINTR intercept");
@@ -1568,9 +1554,7 @@ static void virq_inject_test(struct svm_test *test)
 
 	vmmcall();
 
-	sti();
-	asm volatile ("nop");
-	cli();
+	sti_nop_cli();
 
 	if (virq_fired) {
 		report_fail("virtual interrupt fired when V_IRQ_PRIO less than V_TPR");
@@ -1739,9 +1723,7 @@ static bool reg_corruption_finished(struct svm_test *test)
 
 		void* guest_rip = (void*)vmcb->save.rip;
 
-		sti();
-		asm volatile ("nop");
-		cli();
+		sti_nop_cli();
 
 		if (guest_rip == insb_instruction_label && io_port_var != 0xAA) {
 			report_fail("RIP corruption detected after %d timer interrupts",
@@ -3049,8 +3031,7 @@ static void svm_intr_intercept_mix_if_guest(struct svm_test *test)
 {
 	asm volatile("nop;nop;nop;nop");
 	report(!dummy_isr_recevied, "No interrupt expected");
-	sti();
-	asm volatile("nop");
+	sti_nop();
 	report(0, "must not reach here");
 }
 
@@ -3080,8 +3061,7 @@ static void svm_intr_intercept_mix_gif_guest(struct svm_test *test)
 	// clear GIF and enable IF
 	// that should still not cause VM exit
 	clgi();
-	sti();
-	asm volatile("nop");
+	sti_nop();
 	report(!dummy_isr_recevied, "No interrupt expected");
 
 	stgi();
@@ -3142,8 +3122,7 @@ static void svm_intr_intercept_mix_nmi_guest(struct svm_test *test)
 	clgi();
 	asm volatile("nop");
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_NMI, 0);
-	sti(); // should have no effect
-	asm volatile("nop");
+	sti_nop(); // should have no effect
 	report(!nmi_recevied, "No NMI expected");
 
 	stgi();
@@ -3173,8 +3152,7 @@ static void svm_intr_intercept_mix_smi_guest(struct svm_test *test)
 	clgi();
 	asm volatile("nop");
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_SMI, 0);
-	sti(); // should have no effect
-	asm volatile("nop");
+	sti_nop(); // should have no effect
 	stgi();
 	asm volatile("nop");
 	report(0, "must not reach here");
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index c556af28..a252529a 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -1625,8 +1625,7 @@ static void interrupt_main(void)
 	start = rdtsc();
 	apic_write(APIC_TMICT, 1000000);
 
-	sti();
-	asm volatile ("nop");
+	sti_nop();
 	vmcall();
 
 	report(rdtsc() - start > 10000 && timer_fired,
@@ -1639,8 +1638,7 @@ static void interrupt_main(void)
 	start = rdtsc();
 	apic_write(APIC_TMICT, 1000000);
 
-	sti();
-	asm volatile ("nop");
+	sti_nop();
 	vmcall();
 
 	report(rdtsc() - start > 10000 && timer_fired,
@@ -1709,9 +1707,7 @@ static int interrupt_exit_handler(union exit_reason exit_reason)
 			int vector = vmcs_read(EXI_INTR_INFO) & 0xff;
 			handle_external_interrupt(vector);
 		} else {
-			sti();
-			asm volatile ("nop");
-			cli();
+			sti_nop_cli();
 		}
 		if (vmx_get_test_stage() >= 2)
 			vmcs_write(GUEST_ACTV_STATE, ACTV_ACTIVE);
@@ -6716,9 +6712,7 @@ static void test_x2apic_wr(
 		assert_exit_reason(exit_reason_want);
 
 		/* Clear the external interrupt. */
-		sti();
-		asm volatile ("nop");
-		cli();
+		sti_nop_cli();
 		report(handle_x2apic_ipi_ran,
 		       "Got pending interrupt after IRQ enabled.");
 
@@ -8414,9 +8408,7 @@ static void vmx_pending_event_test_core(bool guest_hlt)
 	report(!vmx_pending_event_guest_run,
 	       "Guest did not run before host received IPI");
 
-	sti();
-	asm volatile ("nop");
-	cli();
+	sti_nop_cli();
 	report(vmx_pending_event_ipi_fired,
 	       "Got pending interrupt after IRQ enabled");
 
@@ -9397,7 +9389,7 @@ static void vmx_hlt_with_rvi_guest(void)
 {
 	handle_irq(HLT_WITH_RVI_VECTOR, vmx_hlt_with_rvi_guest_isr);
 
-	sti();
+	sti_nop();
 	asm volatile ("nop");
 
 	vmcall();
@@ -9557,8 +9549,7 @@ static void vmx_apic_passthrough_tpr_threshold_test(void)
 	/* Clean pending self-IPI */
 	vmx_apic_passthrough_tpr_threshold_ipi_isr_fired = false;
 	handle_irq(ipi_vector, vmx_apic_passthrough_tpr_threshold_ipi_isr);
-	sti();
-	asm volatile ("nop");
+	sti_nop();
 	report(vmx_apic_passthrough_tpr_threshold_ipi_isr_fired, "self-IPI fired");
 
 	report_pass(__func__);
-- 
2.34.3



* [kvm-unit-tests PATCH v3 03/27] x86: add a few helper functions for apic local timer
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 01/27] x86: replace irq_{enable|disable}() with sti()/cli() Maxim Levitsky
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 02/27] x86: introduce sti_nop() and sti_nop_cli() Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 04/27] svm: remove nop after stgi/clgi Maxim Levitsky
                   ` (24 subsequent siblings)
  27 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

Add a few functions to apic.c to make it easier to enable and disable
the local APIC timer.
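
A later patch in this series converts the SVM tests to these helpers;
the intended usage is along these lines:

	apic_setup_timer(TIMER_VECTOR, APIC_LVT_TIMER_PERIODIC);
	apic_start_timer(1000);  /* initial count; divider set by setup */
	/* ... wait for / handle the timer interrupt ... */
	apic_stop_timer();
	apic_cleanup_timer();    /* mask LVTT, drain a pending interrupt */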

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 lib/x86/apic.c | 38 ++++++++++++++++++++++++++++++++++++++
 lib/x86/apic.h |  6 ++++++
 2 files changed, 44 insertions(+)

diff --git a/lib/x86/apic.c b/lib/x86/apic.c
index 5131525a..174a8c28 100644
--- a/lib/x86/apic.c
+++ b/lib/x86/apic.c
@@ -256,3 +256,41 @@ void init_apic_map(void)
 			id_map[j++] = i;
 	}
 }
+
+void apic_setup_timer(int vector, u32 mode)
+{
+	apic_cleanup_timer();
+
+	assert((mode & APIC_LVT_TIMER_MASK) == mode);
+
+	apic_write(APIC_TDCR, APIC_TDR_DIV_1);
+	apic_write(APIC_LVTT, vector | mode);
+}
+
+void apic_start_timer(u32 value)
+{
+	/*
+	 * The APIC timer runs at the 'core crystal clock' frequency,
+	 * divided by the value in APIC_TDCR.
+	 */
+	apic_write(APIC_TMICT, value);
+}
+
+void apic_stop_timer(void)
+{
+	apic_write(APIC_TMICT, 0);
+}
+
+void apic_cleanup_timer(void)
+{
+	u32 lvtt = apic_read(APIC_LVTT);
+
+	// stop the counter
+	apic_stop_timer();
+
+	// mask the timer interrupt
+	apic_write(APIC_LVTT, lvtt | APIC_LVT_MASKED);
+
+	// briefly enable interrupts to ensure that a pending timer interrupt is serviced
+	sti_nop_cli();
+}
diff --git a/lib/x86/apic.h b/lib/x86/apic.h
index 6d27f047..7c539071 100644
--- a/lib/x86/apic.h
+++ b/lib/x86/apic.h
@@ -58,6 +58,12 @@ void disable_apic(void);
 void reset_apic(void);
 void init_apic_map(void);
 
+void apic_cleanup_timer(void);
+void apic_setup_timer(int vector, u32 mode);
+
+void apic_start_timer(u32 counter);
+void apic_stop_timer(void);
+
 /* Converts byte-addressable APIC register offset to 4-byte offset. */
 static inline u32 apic_reg_index(u32 reg)
 {
-- 
2.34.3



* [kvm-unit-tests PATCH v3 04/27] svm: remove nop after stgi/clgi
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (2 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 03/27] x86: add a few helper functions for apic local timer Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 05/27] svm: make svm_intr_intercept_mix_if/gif test a bit more robust Maxim Levitsky
                   ` (23 subsequent siblings)
  27 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

Remove the pointless nop after stgi/clgi - these instructions don't have
an interrupt shadow.
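
For example, in the guest code below no nop is needed after stgi(),
since GIF is set the moment the instruction executes:

	stgi();
	report(0, "must not reach here");  /* the interrupt arrives first */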

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 x86/svm_tests.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 02583236..88393fcf 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -3026,7 +3026,7 @@ static void svm_intr_intercept_mix_run_guest(volatile int *counter, int expected
 }
 
 
-// subtest: test that enabling EFLAGS.IF is enought to trigger an interrupt
+// subtest: test that enabling EFLAGS.IF is enough to trigger an interrupt
 static void svm_intr_intercept_mix_if_guest(struct svm_test *test)
 {
 	asm volatile("nop;nop;nop;nop");
@@ -3065,7 +3065,6 @@ static void svm_intr_intercept_mix_gif_guest(struct svm_test *test)
 	report(!dummy_isr_recevied, "No interrupt expected");
 
 	stgi();
-	asm volatile("nop");
 	report(0, "must not reach here");
 }
 
@@ -3095,7 +3094,6 @@ static void svm_intr_intercept_mix_gif_guest2(struct svm_test *test)
 	report(!dummy_isr_recevied, "No interrupt expected");
 
 	stgi();
-	asm volatile("nop");
 	report(0, "must not reach here");
 }
 
@@ -3120,13 +3118,11 @@ static void svm_intr_intercept_mix_nmi_guest(struct svm_test *test)
 	cli(); // should have no effect
 
 	clgi();
-	asm volatile("nop");
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_NMI, 0);
 	sti_nop(); // should have no effect
 	report(!nmi_recevied, "No NMI expected");
 
 	stgi();
-	asm volatile("nop");
 	report(0, "must not reach here");
 }
 
@@ -3150,11 +3146,9 @@ static void svm_intr_intercept_mix_smi_guest(struct svm_test *test)
 	asm volatile("nop;nop;nop;nop");
 
 	clgi();
-	asm volatile("nop");
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_SMI, 0);
 	sti_nop(); // should have no effect
 	stgi();
-	asm volatile("nop");
 	report(0, "must not reach here");
 }
 
-- 
2.34.3



* [kvm-unit-tests PATCH v3 05/27] svm: make svm_intr_intercept_mix_if/gif test a bit more robust
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (3 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 04/27] svm: remove nop after stgi/clgi Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 06/27] svm: use apic_start_timer/apic_stop_timer instead of open coding it Maxim Levitsky
                   ` (22 subsequent siblings)
  27 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

When injecting a self-IPI, the test assumes that EFLAGS.IF is initially
zero, but previous tests might have set it.

Explicitly disable interrupts to avoid relying on this assumption.
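
With the fix, the test setup becomes (see the diff below):

	vmcb->save.rflags &= ~X86_EFLAGS_IF;  /* guest starts with IF=0 */
	test_set_guest(svm_intr_intercept_mix_if_guest);
	cli();  /* don't take the self-IPI on the host, before VMRUN */
	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0);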

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 x86/svm_tests.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 88393fcf..a5fabd4a 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -3045,6 +3045,7 @@ static void svm_intr_intercept_mix_if(void)
 	vmcb->save.rflags &= ~X86_EFLAGS_IF;
 
 	test_set_guest(svm_intr_intercept_mix_if_guest);
+	cli();
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0);
 	svm_intr_intercept_mix_run_guest(&dummy_isr_recevied, SVM_EXIT_INTR);
 }
@@ -3077,6 +3078,7 @@ static void svm_intr_intercept_mix_gif(void)
 	vmcb->save.rflags &= ~X86_EFLAGS_IF;
 
 	test_set_guest(svm_intr_intercept_mix_gif_guest);
+	cli();
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0);
 	svm_intr_intercept_mix_run_guest(&dummy_isr_recevied, SVM_EXIT_INTR);
 }
-- 
2.34.3



* [kvm-unit-tests PATCH v3 06/27] svm: use apic_start_timer/apic_stop_timer instead of open coding it
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (4 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 05/27] svm: make svm_intr_intercept_mix_if/gif test a bit more robust Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 07/27] x86: Add test for #SMI during interrupt window Maxim Levitsky
                   ` (21 subsequent siblings)
  27 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

This simplifies the code and ensures that after a subtest has used the
APIC timer, it won't affect the subtests which run after it.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 x86/svm_tests.c | 29 +++++++++++++----------------
 1 file changed, 13 insertions(+), 16 deletions(-)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index a5fabd4a..a7641fb8 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -1144,9 +1144,10 @@ static void interrupt_test(struct svm_test *test)
 {
 	long long start, loops;
 
-	apic_write(APIC_LVTT, TIMER_VECTOR);
+	apic_setup_timer(TIMER_VECTOR, APIC_LVT_TIMER_PERIODIC);
 	sti();
-	apic_write(APIC_TMICT, 1); //Timer Initial Count Register 0x380 one-shot
+	apic_start_timer(1);
+
 	for (loops = 0; loops < 10000000 && !timer_fired; loops++)
 		asm volatile ("nop");
 
@@ -1157,12 +1158,12 @@ static void interrupt_test(struct svm_test *test)
 		vmmcall();
 	}
 
-	apic_write(APIC_TMICT, 0);
+	apic_stop_timer();
 	cli();
 	vmmcall();
 
 	timer_fired = false;
-	apic_write(APIC_TMICT, 1);
+	apic_start_timer(1);
 	for (loops = 0; loops < 10000000 && !timer_fired; loops++)
 		asm volatile ("nop");
 
@@ -1174,12 +1175,12 @@ static void interrupt_test(struct svm_test *test)
 	}
 
 	sti();
-	apic_write(APIC_TMICT, 0);
+	apic_stop_timer();
 	cli();
 
 	timer_fired = false;
 	start = rdtsc();
-	apic_write(APIC_TMICT, 1000000);
+	apic_start_timer(1000000);
 	safe_halt();
 
 	report(rdtsc() - start > 10000 && timer_fired,
@@ -1190,13 +1191,13 @@ static void interrupt_test(struct svm_test *test)
 		vmmcall();
 	}
 
-	apic_write(APIC_TMICT, 0);
+	apic_stop_timer();
 	cli();
 	vmmcall();
 
 	timer_fired = false;
 	start = rdtsc();
-	apic_write(APIC_TMICT, 1000000);
+	apic_start_timer(1000000);
 	asm volatile ("hlt");
 
 	report(rdtsc() - start > 10000 && timer_fired,
@@ -1207,8 +1208,7 @@ static void interrupt_test(struct svm_test *test)
 		vmmcall();
 	}
 
-	apic_write(APIC_TMICT, 0);
-	cli();
+	apic_cleanup_timer();
 }
 
 static bool interrupt_finished(struct svm_test *test)
@@ -1686,10 +1686,8 @@ static void reg_corruption_prepare(struct svm_test *test)
 	handle_irq(TIMER_VECTOR, reg_corruption_isr);
 
 	/* set local APIC to inject external interrupts */
-	apic_write(APIC_TMICT, 0);
-	apic_write(APIC_TDCR, 0);
-	apic_write(APIC_LVTT, TIMER_VECTOR | APIC_LVT_TIMER_PERIODIC);
-	apic_write(APIC_TMICT, 1000);
+	apic_setup_timer(TIMER_VECTOR, APIC_LVT_TIMER_PERIODIC);
+	apic_start_timer(1000);
 }
 
 static void reg_corruption_test(struct svm_test *test)
@@ -1734,8 +1732,7 @@ static bool reg_corruption_finished(struct svm_test *test)
 	}
 	return false;
 cleanup:
-	apic_write(APIC_LVTT, APIC_LVT_TIMER_MASK);
-	apic_write(APIC_TMICT, 0);
+	apic_cleanup_timer();
 	return true;
 
 }
-- 
2.34.3



* [kvm-unit-tests PATCH v3 07/27] x86: Add test for #SMI during interrupt window
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (5 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 06/27] svm: use apic_start_timer/apic_stop_timer instead of open coding it Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 08/27] x86: Add a simple test for SYSENTER instruction Maxim Levitsky
                   ` (20 subsequent siblings)
  27 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

This test exercises a corner case in which KVM doesn't
preserve the STI interrupt shadow when a #SMI arrives during it.

Due to the apparent fact that the STI interrupt shadow blocks real
interrupts as well, and thus prevents a vCPU kick from making the CPU
enter SMM during the interrupt shadow, a workaround is used:

an instruction which triggers a VM exit anyway, but is retried by
KVM, is placed in the interrupt shadow.

While emulating such an instruction, KVM doesn't reset the interrupt
shadow (because it retries the instruction), but it can notice the
pending #SMI and enter SMM; the test verifies that the interrupt
shadow is preserved in this case.
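
Schematically, the sequence under test is (a simplified sketch of the
asm block in smm_int_window.c below; the fresh-page access presumably
exits due to an NPT/EPT violation):

	sti                    # opens an interrupt shadow
	movl $1, (fresh page)  # VM exit that KVM retries; KVM may first
	                       # notice the pending #SMI and enter SMM
	cli                    # the interrupt shadow must survive SMM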

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 x86/Makefile.common  |   3 +-
 x86/Makefile.x86_64  |   1 +
 x86/smm_int_window.c | 118 +++++++++++++++++++++++++++++++++++++++++++
 x86/unittests.cfg    |   5 ++
 4 files changed, 126 insertions(+), 1 deletion(-)
 create mode 100644 x86/smm_int_window.c

diff --git a/x86/Makefile.common b/x86/Makefile.common
index 365e199f..698a48ab 100644
--- a/x86/Makefile.common
+++ b/x86/Makefile.common
@@ -87,7 +87,8 @@ tests-common = $(TEST_DIR)/vmexit.$(exe) $(TEST_DIR)/tsc.$(exe) \
                $(TEST_DIR)/emulator.$(exe) \
                $(TEST_DIR)/eventinj.$(exe) \
                $(TEST_DIR)/smap.$(exe) \
-               $(TEST_DIR)/umip.$(exe)
+               $(TEST_DIR)/umip.$(exe) \
+               $(TEST_DIR)/smm_int_window.$(exe)
 
 # The following test cases are disabled when building EFI tests because they
 # use absolute addresses in their inline assembly code, which cannot compile
diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64
index f483dead..5d66b201 100644
--- a/x86/Makefile.x86_64
+++ b/x86/Makefile.x86_64
@@ -35,6 +35,7 @@ tests += $(TEST_DIR)/pks.$(exe)
 tests += $(TEST_DIR)/pmu_lbr.$(exe)
 tests += $(TEST_DIR)/pmu_pebs.$(exe)
 
+
 ifeq ($(CONFIG_EFI),y)
 tests += $(TEST_DIR)/amd_sev.$(exe)
 endif
diff --git a/x86/smm_int_window.c b/x86/smm_int_window.c
new file mode 100644
index 00000000..d3a2b073
--- /dev/null
+++ b/x86/smm_int_window.c
@@ -0,0 +1,118 @@
+#include "libcflat.h"
+#include "apic.h"
+#include "processor.h"
+#include "smp.h"
+#include "isr.h"
+#include "asm/barrier.h"
+#include "alloc_page.h"
+#include "asm/page.h"
+
+#define SELF_INT_VECTOR 0xBB
+#define MEM_ALLOC_ORDER 16
+
+volatile int bad_int_received;
+volatile bool test_ended;
+volatile bool send_smi;
+
+extern unsigned long shadow_label;
+
+static void dummy_ipi_isr(isr_regs_t *regs)
+{
+	/*
+	 * Test that we never get the interrupt on the instruction that
+	 * is in the interrupt shadow.
+	 */
+	if (regs->rip == (unsigned long)&shadow_label)
+		bad_int_received++;
+	eoi();
+}
+
+static void vcpu1_code(void *data)
+{
+	/*
+	 * Flood vCPU0 with #SMIs
+	 *
+	 * Note that kvm-unit-tests runs with SeaBIOS, whose #SMI handler
+	 * is only installed on vCPU0 (the BSP).
+	 * Sending a #SMI to any other CPU will crash the guest.
+	 */
+	setup_vm();
+
+	while (!test_ended) {
+		if (send_smi) {
+			apic_icr_write(APIC_INT_ASSERT | APIC_DEST_PHYSICAL | APIC_DM_SMI, 0);
+			send_smi = false;
+		}
+		cpu_relax();
+	}
+}
+
+int main(void)
+{
+	int i;
+	unsigned volatile char *mem;
+
+	setup_vm();
+	cli();
+
+	mem = alloc_pages_flags(MEM_ALLOC_ORDER, AREA_ANY | FLAG_DONTZERO);
+	assert(mem);
+
+	handle_irq(SELF_INT_VECTOR, dummy_ipi_isr);
+	on_cpu_async(1, vcpu1_code, NULL);
+
+	for  (i = 0 ; i < (1 << MEM_ALLOC_ORDER) && !bad_int_received ; i++) {
+
+		apic_icr_write(APIC_INT_ASSERT | APIC_DEST_PHYSICAL |
+			       APIC_DM_FIXED | SELF_INT_VECTOR, 0);
+
+		/* in case the sender is still sending a #SMI, wait for it */
+		while (send_smi)
+			;
+
+		/* ask the peer vCPU to send SMI to us */
+		send_smi = true;
+
+		/*
+		 * The memory access below should never get an interrupt,
+		 * because it is in the interrupt shadow from the STI.
+		 *
+		 * Note that it seems that even if a real interrupt happens,
+		 * it will still not interrupt this instruction, thus the vCPU
+		 * kick from vCPU1, when it attempts to send a #SMI to us, is
+		 * itself not enough to trigger the switch to SMM at this
+		 * point.
+		 *
+		 * Therefore an STI;NOP;CLI sequence by itself doesn't lead to
+		 * a #SMI happening in between these instructions.
+		 *
+		 * So instead of a NOP, use an instruction that accesses fresh
+		 * memory, which forces a #VMEXIT; just before resuming the
+		 * guest, KVM might notice the incoming #SMI and enter SMM
+		 * with the interrupt shadow still pending.
+		 *
+		 * Also note that an instruction that merely #VMEXITs, like
+		 * CPUID, can't be used here, because KVM itself will emulate
+		 * it and clear the interrupt shadow prior to entering SMM.
+		 *
+		 * Test that in this case the interrupt shadow is preserved,
+		 * which means that upon exit from the #SMI handler, the
+		 * instruction should still not get the pending interrupt.
+		 */
+
+		asm volatile(
+			"sti\n"
+			"shadow_label:\n"
+			"movl $1, %0\n"
+			"cli\n"
+			: "=m" (*(mem+i*PAGE_SIZE))
+			::
+		);
+	}
+
+	test_ended = true;
+	while (cpus_active() > 1)
+		cpu_relax();
+
+	report(!bad_int_received, "No interrupts during the interrupt shadow");
+	return report_summary();
+}
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index f324e32d..e803ba03 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -478,3 +478,8 @@ file = cet.flat
 arch = x86_64
 smp = 2
 extra_params = -enable-kvm -m 2048 -cpu host
+
+[smm_int_window]
+file = smm_int_window.flat
+smp = 2
+extra_params = -machine smm=on -machine kernel-irqchip=on -m 2g
-- 
2.34.3



* [kvm-unit-tests PATCH v3 08/27] x86: Add a simple test for SYSENTER instruction.
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (6 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 07/27] x86: Add test for #SMI during interrupt window Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 09/27] svm: add simple nested shutdown test Maxim Levitsky
                   ` (19 subsequent siblings)
  27 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

Run the test with Intel's vendor ID and in long mode,
to test the emulation of this instruction on AMD.
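
The unittests.cfg entry added by this patch is what selects the Intel
vendor ID; presumably something along the lines of:

	[sysenter]
	file = sysenter.flat
	extra_params = -cpu host,vendor=GenuineIntel

where "vendor=GenuineIntel" is the relevant part.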

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 x86/Makefile.x86_64 |   2 +
 x86/sysenter.c      | 203 ++++++++++++++++++++++++++++++++++++++++++++
 x86/unittests.cfg   |   5 ++
 3 files changed, 210 insertions(+)
 create mode 100644 x86/sysenter.c

diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64
index 5d66b201..f76ff18a 100644
--- a/x86/Makefile.x86_64
+++ b/x86/Makefile.x86_64
@@ -34,6 +34,7 @@ tests += $(TEST_DIR)/rdpru.$(exe)
 tests += $(TEST_DIR)/pks.$(exe)
 tests += $(TEST_DIR)/pmu_lbr.$(exe)
 tests += $(TEST_DIR)/pmu_pebs.$(exe)
+tests += $(TEST_DIR)/sysenter.$(exe)
 
 
 ifeq ($(CONFIG_EFI),y)
@@ -61,3 +62,4 @@ $(TEST_DIR)/hyperv_clock.$(bin): $(TEST_DIR)/hyperv_clock.o
 $(TEST_DIR)/vmx.$(bin): $(TEST_DIR)/vmx_tests.o
 $(TEST_DIR)/svm.$(bin): $(TEST_DIR)/svm_tests.o
 $(TEST_DIR)/svm_npt.$(bin): $(TEST_DIR)/svm_npt.o
+$(TEST_DIR)/sysenter.o: CFLAGS += -Wa,-mintel64
diff --git a/x86/sysenter.c b/x86/sysenter.c
new file mode 100644
index 00000000..1cc76219
--- /dev/null
+++ b/x86/sysenter.c
@@ -0,0 +1,203 @@
+#include "alloc.h"
+#include "libcflat.h"
+#include "processor.h"
+#include "msr.h"
+#include "desc.h"
+
+// define this to test SYSENTER/SYSEXIT in 64 bit mode
+//#define TEST_64_BIT
+
+static void test_comp32(void)
+{
+	ulong rax = 0xDEAD;
+
+	extern void sysenter_target_32(void);
+
+	wrmsr(MSR_IA32_SYSENTER_EIP, (uint64_t)sysenter_target_32);
+
+	asm volatile (
+		"# switch to comp32, mode prior to running the test\n"
+		"ljmpl *1f\n"
+		"1:\n"
+		".long 1f\n"
+		".long " xstr(KERNEL_CS32) "\n"
+		"1:\n"
+		".code32\n"
+
+		"#+++++++++++++++++++++++++++++++++++++++++++++++++++\n"
+		"# user code (comp32)\n"
+		"#+++++++++++++++++++++++++++++++++++++++++++++++++++\n"
+
+		"# use sysenter to enter 64 bit system code\n"
+		"mov %%esp, %%ecx #stash rsp value\n"
+		"mov $1, %%ebx\n"
+		"sysenter\n"
+		"ud2\n"
+
+		"#+++++++++++++++++++++++++++++++++++++++++++++++++++\n"
+		"# 64 bit cpl=0 code\n"
+		"#+++++++++++++++++++++++++++++++++++++++++++++++++++\n"
+
+		".code64\n"
+		"sysenter_target_32:\n"
+		"test %%rbx, %%rbx # check if we are here for second time\n"
+		"jne 1f\n"
+		"movq %%rcx, %%rsp # restore stack pointer manually\n"
+		"jmp test_done_32\n"
+		"1:\n"
+
+		"# test that MSR_IA32_SYSENTER_ESP is correct\n"
+		"movq $0xAAFFFFFFFF, %%rbx\n"
+		"movq $0xDEAD, %%rax\n"
+		"cmpq %%rsp, %%rbx\n"
+		"jne 1f\n"
+		"movq $0xACED, %%rax\n"
+
+		"# use sysexit to exit back\n"
+		"1:\n"
+		"leaq sysexit_target, %%rdx\n"
+		"sysexit\n"
+
+		"sysexit_target:\n"
+
+		"#+++++++++++++++++++++++++++++++++++++++++++++++++++\n"
+		"# exit back to 64 bit mode using a second sysenter\n"
+		"# since ebx == 0, the sysenter handler will jump back\n"
+		"# here without using sysexit\n"
+		"#+++++++++++++++++++++++++++++++++++++++++++++++++++\n"
+
+		".code32\n"
+		"mov $0, %%ebx\n"
+		"sysenter\n"
+
+		".code64\n"
+		"test_done_32:\n"
+		"nop\n"
+
+		: /*outputs*/
+		"=a" (rax)
+		: /* inputs*/
+		: /*clobbers*/
+		"rbx",  /* action flag for sysenter_target */
+		"rcx",  /* saved RSP */
+		"rdx",  /* used for SYSEXIT*/
+		"flags"
+	);
+
+	report(rax == 0xACED, "MSR_IA32_SYSENTER_ESP has correct value");
+}
+
+#ifdef TEST_64_BIT
+static void test_64_bit(void)
+{
+	extern void test_done_64(void);
+	extern void sysenter_target_64(void);
+
+	ulong rax = 0xDEAD;
+	u8 *sysexit_thunk = (u8 *)malloc(50);
+	u8 *tmp = sysexit_thunk;
+
+	/* Allocate a SYSEXIT thunk, whose purpose is to reside at an
+	 * address above 4GB, to test that SYSEXIT can jump to such addresses
+	 *
+	 * TODO: malloc seems to return addresses from the top of the
+	 * virtual address space, but it is better to use a dedicated API
+	 */
+	printf("SYSEXIT Thunk at 0x%lx\n", (u64)sysexit_thunk);
+
+	/* movabs $test_done_64, %rdx */
+	*tmp++ = 0x48; *tmp++ = 0xBA;
+	*(u64 *)tmp = (uint64_t)test_done_64; tmp += 8;
+	/* jmp *%rdx */
+	*tmp++ = 0xFF; *tmp++ = 0xe2;
+
+	wrmsr(MSR_IA32_SYSENTER_EIP, (uint64_t)sysenter_target_64);
+
+	asm volatile (
+		"#+++++++++++++++++++++++++++++++++++++++++++++++++++\n"
+		"# user code (64 bit)\n"
+		"#+++++++++++++++++++++++++++++++++++++++++++++++++++\n"
+
+		"# store the 64 bit thunk address to rdx\n"
+		"mov %[sysexit_thunk], %%rdx\n"
+		"# use sysenter to enter 64 bit system code\n"
+		"mov %%esp, %%ecx #stash rsp value\n"
+		"mov $1, %%ebx\n"
+		"sysenter\n"
+		"ud2\n"
+
+		"#+++++++++++++++++++++++++++++++++++++++++++++++++++\n"
+		"# 64 bit cpl=0 code\n"
+		"#+++++++++++++++++++++++++++++++++++++++++++++++++++\n"
+
+		".code64\n"
+		"sysenter_target_64:\n"
+		"# test that MSR_IA32_SYSENTER_ESP is correct\n"
+		"movq $0xAAFFFFFFFF, %%rbx\n"
+		"movq $0xDEAD, %%rax\n"
+		"cmpq %%rsp, %%rbx\n"
+		"jne 1f\n"
+		"movq $0xACED, %%rax\n"
+
+		"# use sysexit to exit back\n"
+		"1:\n"
+
+		"# this will go through thunk to test_done_64, which tests\n"
+		"# that we can sysexit to a high address\n"
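+		"# 0x48 is a REX.W prefix, turning the sysexit below into a 64 bit sysexit\n"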
+		".byte 0x48\n"
+		"sysexit\n"
+		"ud2\n"
+
+		".code64\n"
+		"test_done_64:\n"
+		"nop\n"
+
+		: /*outputs*/
+		"=a" (rax)
+		: /* inputs*/
+		[sysexit_thunk] "r" (sysexit_thunk)
+		: /*clobbers*/
+		"rbx",  /* action flag for sysenter_target */
+		"rcx",  /* saved RSP */
+		"rdx",  /* used for SYSEXIT*/
+		"flags"
+	);
+	report(rax == 0xACED, "MSR_IA32_SYSENTER_ESP has correct value");
+}
+#endif
+
+int main(int ac, char **av)
+{
+	setup_vm();
+
+	int gdt_index = 0x50 >> 3;
+
+	/* init the sysenter GDT block */
+	gdt[gdt_index+0] = gdt[KERNEL_CS >> 3];
+	gdt[gdt_index+1] = gdt[KERNEL_DS >> 3];
+	gdt[gdt_index+2] = gdt[USER_CS >> 3];
+	gdt[gdt_index+3] = gdt[USER_DS >> 3];
+
+	/* init the sysenter msrs*/
+	wrmsr(MSR_IA32_SYSENTER_CS, gdt_index << 3);
+	wrmsr(MSR_IA32_SYSENTER_ESP, 0xAAFFFFFFFF);
+	test_comp32();
+
+	/*
+	 * On AMD, the SYSENTER/SYSEXIT instructions are not supported in
+	 * either 64 bit or comp32 mode.
+	 *
+	 * However, KVM emulates them in comp32 mode to support migration,
+	 * iff the guest CPU model is Intel, but it doesn't emulate them in
+	 * 64 bit mode because there is no good reason to use these
+	 * instructions in 64 bit mode anyway.
+	 */
+
+#ifdef TEST_64_BIT
+	test_64_bit();
+#endif
+	return report_summary();
+}
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index e803ba03..df248dff 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -245,6 +245,11 @@ file = syscall.flat
 arch = x86_64
 extra_params = -cpu Opteron_G1,vendor=AuthenticAMD
 
+[sysenter]
+file = sysenter.flat
+arch = x86_64
+extra_params = -cpu host,vendor=GenuineIntel
+
 [tsc]
 file = tsc.flat
 extra_params = -cpu kvm64,+rdtscp
-- 
2.34.3


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [kvm-unit-tests PATCH v3 09/27] svm: add simple nested shutdown test.
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (7 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 08/27] x86: Add a simple test for SYSENTER instruction Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-12-01 13:46   ` Emanuele Giuseppe Esposito
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 10/27] SVM: add two tests for exitintinto on exception Maxim Levitsky
                   ` (18 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

Add a simple test that a shutdown in L2, triggered by raising #UD
with an unusable IDT so that it escalates into a triple fault, is
intercepted correctly by L1.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 x86/svm_tests.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index a7641fb8..7a67132a 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -11,6 +11,7 @@
 #include "apic.h"
 #include "delay.h"
 #include "x86/usermode.h"
+#include "vmalloc.h"
 
 #define SVM_EXIT_MAX_DR_INTERCEPT 0x3f
 
@@ -3238,6 +3239,21 @@ static void svm_exception_test(void)
 	}
 }
 
+static void shutdown_intercept_test_guest(struct svm_test *test)
+{
+	asm volatile ("ud2");
+	report_fail("should not reach here");
+}
+
+static void svm_shutdown_intercept_test(void)
+{
+	test_set_guest(shutdown_intercept_test_guest);
+	vmcb->save.idtr.base = (u64)alloc_vpage();
+	vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN);
+	svm_vmrun();
+	report(vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed");
+}
+
 struct svm_test svm_tests[] = {
 	{ "null", default_supported, default_prepare,
 	  default_prepare_gif_clear, null_test,
@@ -3349,6 +3365,7 @@ struct svm_test svm_tests[] = {
 	TEST(svm_intr_intercept_mix_smi),
 	TEST(svm_tsc_scale_test),
 	TEST(pause_filter_test),
+	TEST(svm_shutdown_intercept_test),
 	{ NULL, NULL, NULL, NULL, NULL, NULL, NULL }
 };
 
-- 
2.34.3


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [kvm-unit-tests PATCH v3 10/27] SVM: add two tests for exitintinto on exception
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (8 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 09/27] svm: add simple nested shutdown test Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 11/27] lib: Add random number generator Maxim Levitsky
                   ` (17 subsequent siblings)
  27 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

Test that EXITINTINFO is set correctly when an exception happens
during exception/interrupt delivery and that exception is intercepted.

Note that these tests currently fail due to a few bugs in KVM.

Also note that those bugs are in KVM's common x86 code, thus the
issue exists on VMX as well; unit tests that reproduce it on VMX
will be written as well.
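
For reference, both tests boil down to decoding exit_int_info after
the intercepted #GP; a minimal sketch of the checks, using the
existing SVM_EXITINTINFO_* definitions (the expected 'type' and 'vec'
values depend on the test):

    u32 info = vmcb->control.exit_int_info;

    assert(info & SVM_EXITINTINFO_VALID);               /* delivery was in progress */
    assert((info & SVM_EXITINTINFO_TYPE_MASK) == type); /* _TYPE_EXEPT or _TYPE_INTR */
    assert((info & SVM_EXITINTINFO_VEC_MASK) == vec);   /* vector of the original event */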

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 x86/svm_tests.c | 148 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 148 insertions(+)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 7a67132a..202e9271 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -3254,6 +3254,145 @@ static void svm_shutdown_intercept_test(void)
 	report(vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed");
 }
 
+/*
+ * Test that nested exceptions are delivered correctly
+ * when parent exception is intercepted
+ */
+
+static void exception_merging_prepare(struct svm_test *test)
+{
+	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
+
+	/* break the UD vector IDT entry to get #GP */
+	boot_idt[UD_VECTOR].type = 1;
+}
+
+static void exception_merging_test(struct svm_test *test)
+{
+	asm volatile ("ud2");
+}
+
+static bool exception_merging_finished(struct svm_test *test)
+{
+	u32 vec = vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
+	u32 type = vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
+
+	if (vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + GP_VECTOR) {
+		report(false, "unexpected VM exit");
+		goto out;
+	}
+
+	if (!(vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)) {
+		report(false, "EXITINTINFO not valid");
+		goto out;
+	}
+
+	if (type != SVM_EXITINTINFO_TYPE_EXEPT) {
+		report(false, "Incorrect event type in EXITINTINFO");
+		goto out;
+	}
+
+	if (vec != UD_VECTOR) {
+		report(false, "Incorrect vector in EXITINTINFO");
+		goto out;
+	}
+
+	set_test_stage(test, 1);
+out:
+	boot_idt[UD_VECTOR].type = 14;
+	return true;
+}
+
+static bool exception_merging_check(struct svm_test *test)
+{
+	return get_test_stage(test) == 1;
+}
+
+/*
+ * Test that if an exception is raised during interrupt delivery,
+ * and that exception is intercepted, the original interrupt is
+ * preserved in the EXITINTINFO of that exception
+ */
+
+static void interrupt_merging_prepare(struct svm_test *test)
+{
+	/* intercept #GP */
+	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
+
+	/* set local APIC to inject external interrupts */
+	apic_setup_timer(TIMER_VECTOR, APIC_LVT_TIMER_PERIODIC);
+	apic_start_timer(100000);
+}
+
+#define INTERRUPT_MERGING_DELAY 100000000ULL
+
+static void interrupt_merging_test(struct svm_test *test)
+{
+	handle_irq(TIMER_VECTOR, timer_isr);
+	/* break timer vector IDT entry to get #GP on interrupt delivery */
+	boot_idt[TIMER_VECTOR].type = 1;
+
+	sti();
+	delay(INTERRUPT_MERGING_DELAY);
+}
+
+static bool interrupt_merging_finished(struct svm_test *test)
+{
+	u32 vec = vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
+	u32 type = vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
+	u32 error_code = vmcb->control.exit_info_1;
+
+	/*
+	 * The intercept of external interrupts is disabled, thus the timer
+	 * interrupt should be delivered to the guest, but due to the broken
+	 * IDT entry a #GP should be raised instead
+	 */
+	if (vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + GP_VECTOR) {
+		report(false, "unexpected VM exit");
+		goto cleanup;
+	}
+
+	/* GP error code should be about an IDT entry, and due to external event */
+	if (error_code != (TIMER_VECTOR << 3 | 3)) {
+		report(false, "Incorrect error code of the GP exception");
+		goto cleanup;
+	}
+
+	/* Original interrupt should be preserved in EXITINTINFO */
+	if (!(vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)) {
+		report(false, "EXITINTINFO not valid");
+		goto cleanup;
+	}
+
+	if (type != SVM_EXITINTINFO_TYPE_INTR) {
+		report(false, "Incorrect event type in EXITINTINFO");
+		goto cleanup;
+	}
+
+	if (vec != TIMER_VECTOR) {
+		report(false, "Incorrect vector in EXITINTINFO");
+		goto cleanup;
+	}
+
+	set_test_stage(test, 1);
+
+cleanup:
+	/* restore the IDT gate */
+	boot_idt[TIMER_VECTOR].type = 14;
+	wmb();
+	/* EOI the interrupt we got the #GP for */
+	eoi();
+	apic_cleanup_timer();
+	return true;
+}
+
+static bool interrupt_merging_check(struct svm_test *test)
+{
+	return get_test_stage(test) == 1;
+}
+
+
 struct svm_test svm_tests[] = {
 	{ "null", default_supported, default_prepare,
 	  default_prepare_gif_clear, null_test,
@@ -3346,6 +3485,15 @@ struct svm_test svm_tests[] = {
 	{ "vgif", vgif_supported, prepare_vgif_enabled,
 	  default_prepare_gif_clear, test_vgif, vgif_finished,
 	  vgif_check },
+	{ "exception_merging", default_supported,
+	  exception_merging_prepare, default_prepare_gif_clear,
+	  exception_merging_test,  exception_merging_finished,
+	  exception_merging_check },
+	{ "interrupt_merging", default_supported,
+	  interrupt_merging_prepare, default_prepare_gif_clear,
+	  interrupt_merging_test,  interrupt_merging_finished,
+	  interrupt_merging_check },
 	TEST(svm_cr4_osxsave_test),
 	TEST(svm_guest_state_test),
 	TEST(svm_vmrun_errata_test),
-- 
2.34.3


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [kvm-unit-tests PATCH v3 11/27] lib: Add random number generator
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (9 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 10/27] SVM: add two tests for exitintinto on exception Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-11-23  9:28   ` Claudio Imbrenda
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 12/27] x86: add IPI stress test Maxim Levitsky
                   ` (16 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

Add a simple pseudo random number generator which can be used
in the tests to add randomness in a controlled manner.

For x86, add a wrapper which initializes the PRNG from RDRAND
(or from the TSC on 32 bit), unless the TEST_SEED environment
variable is set, in which case it is used instead.
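
For illustration, the intended usage is (a minimal sketch; the dice
roll and the 30% figure are made up):

    init_prng();                          /* once, on the boot CPU */

    struct random_state rs = get_prng();  /* per-CPU stream */
    u32 roll = random_range(&rs, 1, 6);   /* 1..6 inclusive */

    if (random_decision(&rs, 30.0))       /* true in ~30% of calls */
        printf("rolled %u\n", roll);

A fixed seed can be forced by setting e.g. TEST_SEED=1234 in the
environment when running the test.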

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 Makefile              |  3 ++-
 README.md             |  1 +
 lib/prng.c            | 41 +++++++++++++++++++++++++++++++++++++++++
 lib/prng.h            | 23 +++++++++++++++++++++++
 lib/x86/random.c      | 33 +++++++++++++++++++++++++++++++++
 lib/x86/random.h      | 17 +++++++++++++++++
 scripts/arch-run.bash |  2 +-
 x86/Makefile.common   |  1 +
 8 files changed, 119 insertions(+), 2 deletions(-)
 create mode 100644 lib/prng.c
 create mode 100644 lib/prng.h
 create mode 100644 lib/x86/random.c
 create mode 100644 lib/x86/random.h

diff --git a/Makefile b/Makefile
index 6ed5deac..384b5acf 100644
--- a/Makefile
+++ b/Makefile
@@ -29,7 +29,8 @@ cflatobjs := \
 	lib/string.o \
 	lib/abort.o \
 	lib/report.o \
-	lib/stack.o
+	lib/stack.o \
+	lib/prng.o
 
 # libfdt paths
 LIBFDT_objdir = lib/libfdt
diff --git a/README.md b/README.md
index 6e82dc22..5a677a03 100644
--- a/README.md
+++ b/README.md
@@ -91,6 +91,7 @@ the framework.  The list of reserved environment variables is below
     QEMU_ACCEL                   either kvm, hvf or tcg
     QEMU_VERSION_STRING          string of the form `qemu -h | head -1`
     KERNEL_VERSION_STRING        string of the form `uname -r`
+    TEST_SEED                    integer to force a fixed seed for the prng
 
 Additionally these self-explanatory variables are reserved
 
diff --git a/lib/prng.c b/lib/prng.c
new file mode 100644
index 00000000..d9342eb3
--- /dev/null
+++ b/lib/prng.c
@@ -0,0 +1,41 @@
+/*
+ * Random number generator that is usable from guest code. This is the
+ * Park-Miller LCG using standard constants, i.e.
+ * seed = seed * 48271 mod (2^31 - 1)
+ */
+
+#include "libcflat.h"
+#include "prng.h"
+
+struct random_state new_random_state(uint32_t seed)
+{
+	struct random_state s = {.seed = seed};
+	return s;
+}
+
+uint32_t random_u32(struct random_state *state)
+{
+	state->seed = (uint64_t)state->seed * 48271 % ((1U << 31) - 1);
+	return state->seed;
+}
+
+uint32_t random_range(struct random_state *state, uint32_t min, uint32_t max)
+{
+	uint32_t val = random_u32(state);
+
+	return val % (max - min + 1) + min;
+}
+
+/*
+ * Returns true randomly in 'percent_true' cases (e.g. if percent_true = 70.0,
+ * it will return true in 70.0% of cases)
+ */
+bool random_decision(struct random_state *state, float percent_true)
+{
+	if (percent_true == 0)
+		return false;
+	if (percent_true == 100)
+		return true;
+	return random_range(state, 1, 10000) <= (uint32_t)(percent_true * 100);
+}
diff --git a/lib/prng.h b/lib/prng.h
new file mode 100644
index 00000000..61d3a48b
--- /dev/null
+++ b/lib/prng.h
@@ -0,0 +1,23 @@
+#ifndef SRC_LIB_PRNG_H_
+#define SRC_LIB_PRNG_H_
+
+struct random_state {
+	uint32_t seed;
+};
+
+struct random_state new_random_state(uint32_t seed);
+uint32_t random_u32(struct random_state *state);
+
+/*
+ * return a random number from min to max (included)
+ */
+uint32_t random_range(struct random_state *state, uint32_t min, uint32_t max);
+
+/*
+ * Returns true randomly in 'percent_true' cases (e.g. if percent_true = 70.0,
+ * it will return true in 70.0% of cases)
+ */
+bool random_decision(struct random_state *state, float percent_true);
+
+#endif /* SRC_LIB_PRNG_H_ */
diff --git a/lib/x86/random.c b/lib/x86/random.c
new file mode 100644
index 00000000..fcdd5fe8
--- /dev/null
+++ b/lib/x86/random.c
@@ -0,0 +1,33 @@
+#include "libcflat.h"
+#include "processor.h"
+#include "prng.h"
+#include "smp.h"
+#include "asm/spinlock.h"
+#include "random.h"
+
+static u32 test_seed;
+static bool initialized;
+
+void init_prng(void)
+{
+	char *test_seed_str = getenv("TEST_SEED");
+
+	if (test_seed_str && strlen(test_seed_str))
+		test_seed = atol(test_seed_str);
+	else
+#ifdef __x86_64__
+		test_seed = (u32)rdrand();
+#else
+		test_seed = (u32)(rdtsc() << 4);
+#endif
+	initialized = true;
+
+	printf("Test seed: %u\n", (unsigned int)test_seed);
+}
+
+struct random_state get_prng(void)
+{
+	assert(initialized);
+	return new_random_state(test_seed + this_cpu_read_smp_id());
+}
diff --git a/lib/x86/random.h b/lib/x86/random.h
new file mode 100644
index 00000000..795b450b
--- /dev/null
+++ b/lib/x86/random.h
@@ -0,0 +1,17 @@
+/*
+ * random.h
+ *
+ *  Created on: Nov 9, 2022
+ *      Author: mlevitsk
+ */
+
+#ifndef SRC_LIB_X86_RANDOM_H_
+#define SRC_LIB_X86_RANDOM_H_
+
+#include "libcflat.h"
+#include "prng.h"
+
+void init_prng(void);
+struct random_state get_prng(void);
+
+#endif /* SRC_LIB_X86_RANDOM_H_ */
diff --git a/scripts/arch-run.bash b/scripts/arch-run.bash
index 51e4b97b..238d19f8 100644
--- a/scripts/arch-run.bash
+++ b/scripts/arch-run.bash
@@ -298,7 +298,7 @@ env_params ()
 	KERNEL_EXTRAVERSION=${KERNEL_EXTRAVERSION%%[!0-9]*}
 	! [[ $KERNEL_SUBLEVEL =~ ^[0-9]+$ ]] && unset $KERNEL_SUBLEVEL
 	! [[ $KERNEL_EXTRAVERSION =~ ^[0-9]+$ ]] && unset $KERNEL_EXTRAVERSION
-	env_add_params KERNEL_VERSION_STRING KERNEL_VERSION KERNEL_PATCHLEVEL KERNEL_SUBLEVEL KERNEL_EXTRAVERSION
+	env_add_params KERNEL_VERSION_STRING KERNEL_VERSION KERNEL_PATCHLEVEL KERNEL_SUBLEVEL KERNEL_EXTRAVERSION TEST_SEED
 }
 
 env_file ()
diff --git a/x86/Makefile.common b/x86/Makefile.common
index 698a48ab..fa0a50e6 100644
--- a/x86/Makefile.common
+++ b/x86/Makefile.common
@@ -23,6 +23,7 @@ cflatobjs += lib/x86/stack.o
 cflatobjs += lib/x86/fault_test.o
 cflatobjs += lib/x86/delay.o
 cflatobjs += lib/x86/pmu.o
+cflatobjs += lib/x86/random.o
 ifeq ($(CONFIG_EFI),y)
 cflatobjs += lib/x86/amd_sev.o
 cflatobjs += lib/efi.o
-- 
2.34.3


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [kvm-unit-tests PATCH v3 12/27] x86: add IPI stress test
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (10 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 11/27] lib: Add random number generator Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 13/27] svm: remove get_npt_pte extern Maxim Levitsky
                   ` (15 subsequent siblings)
  27 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

Add a test that sends IPIs between vCPUs in a ring and detects missing
IPIs: each vCPU forwards an IPI to the next one, while a periodic APIC
timer interrupt on every vCPU flags a hang when its IPI count stops
increasing.
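
The core of each vCPU's loop is roughly the following (a sketch of
vcpu_code() below; 'vector' alternates between 0x40 and 0x50 per
iteration and 'next_smp_id' wraps around to 0 on the last vCPU):

    for (i = 0; i < num_iterations; i++) {
        /* pass the "token" to the next vCPU in the ring */
        apic_icr_write(APIC_INT_ASSERT | APIC_DEST_PHYSICAL |
                       APIC_DM_FIXED | vector, next_smp_id);
        /* wait, with sti;hlt or sti;nop;cli, until it comes back */
        wait_for_ipi(state);
    }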

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 x86/Makefile.common |   3 +-
 x86/ipi_stress.c    | 167 ++++++++++++++++++++++++++++++++++++++++++++
 x86/unittests.cfg   |  10 +++
 3 files changed, 179 insertions(+), 1 deletion(-)
 create mode 100644 x86/ipi_stress.c

diff --git a/x86/Makefile.common b/x86/Makefile.common
index fa0a50e6..08cc036b 100644
--- a/x86/Makefile.common
+++ b/x86/Makefile.common
@@ -89,7 +89,8 @@ tests-common = $(TEST_DIR)/vmexit.$(exe) $(TEST_DIR)/tsc.$(exe) \
                $(TEST_DIR)/eventinj.$(exe) \
                $(TEST_DIR)/smap.$(exe) \
                $(TEST_DIR)/umip.$(exe) \
-               $(TEST_DIR)/smm_int_window.$(exe)
+               $(TEST_DIR)/smm_int_window.$(exe) \
+               $(TEST_DIR)/ipi_stress.$(exe)
 
 # The following test cases are disabled when building EFI tests because they
 # use absolute addresses in their inline assembly code, which cannot compile
diff --git a/x86/ipi_stress.c b/x86/ipi_stress.c
new file mode 100644
index 00000000..dea3e605
--- /dev/null
+++ b/x86/ipi_stress.c
@@ -0,0 +1,167 @@
+#include "libcflat.h"
+#include "smp.h"
+#include "alloc.h"
+#include "apic.h"
+#include "processor.h"
+#include "isr.h"
+#include "asm/barrier.h"
+#include "delay.h"
+#include "desc.h"
+#include "msr.h"
+#include "vm.h"
+#include "types.h"
+#include "alloc_page.h"
+#include "vmalloc.h"
+#include "random.h"
+
+u64 num_iterations = 100000;
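+/* chance, in percent, of waiting for an IPI using halt */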
+float hlt_prob = 0.1;
+volatile bool end_test;
+
+#define APIC_TIMER_PERIOD (1000*1000*1000)
+
+struct cpu_test_state {
+	volatile u64 isr_count;
+	u64 last_isr_count;
+	struct random_state random;
+	int smp_id;
+} *cpu_states;
+
+
+static void ipi_interrupt_handler(isr_regs_t *r)
+{
+	cpu_states[smp_id()].isr_count++;
+	eoi();
+}
+
+static void local_timer_interrupt(isr_regs_t *r)
+{
+	struct cpu_test_state *state = &cpu_states[smp_id()];
+
+	u64 isr_count = state->isr_count;
+	unsigned long diff = isr_count - state->last_isr_count;
+
+	if (!diff) {
+		printf("\n");
+		printf("hang detected!!\n");
+		end_test = true;
+		goto out;
+	}
+
+	printf("made %lu IPIs\n", diff * cpu_count());
+	state->last_isr_count = state->isr_count;
+out:
+	eoi();
+}
+
+static void wait_for_ipi(struct cpu_test_state *state)
+{
+	u64 old_count = state->isr_count;
+	bool use_halt = random_decision(&state->random, hlt_prob);
+
+	do {
+		if (use_halt) {
+			safe_halt();
+			cli();
+		} else
+			sti_nop_cli();
+
+	} while (old_count == state->isr_count);
+
+	assert(state->isr_count == old_count + 1);
+}
+
+
+static void vcpu_init(void *unused)
+{
+	struct cpu_test_state *state = &cpu_states[smp_id()];
+
+	memset(state, 0, sizeof(*state));
+
+	/*
+	 * Use two vectors, alternated between iterations, to make it
+	 * easier to see the iteration number in the trace
+	 */
+	handle_irq(0x40, ipi_interrupt_handler);
+	handle_irq(0x50, ipi_interrupt_handler);
+
+	state->random = get_prng();
+	state->isr_count = 0;
+	state->smp_id = smp_id();
+}
+
+static void vcpu_code(void *unused)
+{
+	struct cpu_test_state *state = &cpu_states[smp_id()];
+	int ncpus = cpu_count();
+	u64 i;
+	u8 target_smp_id;
+
+	if (state->smp_id > 0)
+		wait_for_ipi(state);
+
+	target_smp_id = state->smp_id == ncpus - 1 ? 0 : state->smp_id + 1;
+
+	for (i = 0; i < num_iterations && !end_test; i++) {
+		// send an IPI to the next vCPU in a circular fashion
+		apic_icr_write(APIC_INT_ASSERT |
+				APIC_DEST_PHYSICAL |
+				APIC_DM_FIXED |
+				(i % 2 ? 0x40 : 0x50),
+				target_smp_id);
+
+		if (i == (num_iterations - 1) && state->smp_id > 0)
+			break;
+
+		// wait for the IPI interrupt chain to come back to us
+		wait_for_ipi(state);
+	}
+}
+
+int main(int argc, char **argv)
+{
+	int cpu, ncpus = cpu_count();
+
+	handle_irq(0xF0, local_timer_interrupt);
+	apic_setup_timer(0xF0, APIC_LVT_TIMER_PERIODIC);
+
+	if (argc > 1) {
+		int hlt_param = atol(argv[1]);
+
+		if (hlt_param == 1)
+			hlt_prob = 100;
+		else if (hlt_param == 0)
+			hlt_prob = 0;
+	}
+
+	if (argc > 2)
+		num_iterations = atol(argv[2]);
+
+	setup_vm();
+	init_prng();
+
+	cpu_states = calloc(ncpus, sizeof(cpu_states[0]));
+
+	printf("found %d cpus\n", ncpus);
+	printf("running for %llu iterations\n",
+		(unsigned long long)num_iterations);
+
+	on_cpus(vcpu_init, NULL);
+
+	apic_start_timer(APIC_TIMER_PERIOD);
+
+	printf("test started, waiting to end...\n");
+
+	on_cpus(vcpu_code, NULL);
+
+	apic_stop_timer();
+	apic_cleanup_timer();
+
+	for (cpu = 0; cpu < ncpus; ++cpu) {
+		u64 result = cpu_states[cpu].isr_count;
+
+		report(result == num_iterations,
+				"Number of IPIs match (%llu)",
+				(unsigned long long)result);
+	}
+
+	free((void *)cpu_states);
+	return report_summary();
+}
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index df248dff..b0fd92fb 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -74,6 +74,16 @@ smp = 2
 file = smptest.flat
 smp = 3
 
+[ipi_stress]
+file = ipi_stress.flat
+extra_params = -cpu host,-x2apic -global kvm-pit.lost_tick_policy=discard -machine kernel-irqchip=on
+smp = 4
+
+[ipi_stress_x2apic]
+file = ipi_stress.flat
+extra_params = -cpu host,+x2apic -global kvm-pit.lost_tick_policy=discard -machine kernel-irqchip=on
+smp = 4
+
 [vmexit_cpuid]
 file = vmexit.flat
 extra_params = -append 'cpuid'
-- 
2.34.3


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [kvm-unit-tests PATCH v3 13/27] svm: remove get_npt_pte extern
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (11 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 12/27] x86: add IPI stress test Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-12-01 13:46   ` Emanuele Giuseppe Esposito
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 14/27] svm: move svm spec definitions to lib/x86/svm.h Maxim Levitsky
                   ` (14 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

get_npt_pte() is unused, so remove its declaration.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 x86/svm.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/x86/svm.h b/x86/svm.h
index 766ff7e3..1ad85ba4 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -429,7 +429,6 @@ int __svm_vmrun(u64 rip);
 void __svm_bare_vmrun(void);
 int svm_vmrun(void);
 void test_set_guest(test_guest_func func);
-u64* get_npt_pte(u64 *pml4, u64 guest_addr, int level);
 
 extern struct vmcb *vmcb;
 
-- 
2.34.3


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [kvm-unit-tests PATCH v3 14/27] svm: move svm spec definitions to lib/x86/svm.h
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (12 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 13/27] svm: remove get_npt_pte extern Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-12-01 13:54   ` Emanuele Giuseppe Esposito
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 15/27] svm: move some svm support functions into lib/x86/svm_lib.h Maxim Levitsky
                   ` (13 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

This is the first step of separating the SVM code into a library.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 lib/x86/svm.h | 365 ++++++++++++++++++++++++++++++++++++++++++++++++++
 x86/svm.h     | 359 +------------------------------------------------
 2 files changed, 366 insertions(+), 358 deletions(-)
 create mode 100644 lib/x86/svm.h

diff --git a/lib/x86/svm.h b/lib/x86/svm.h
new file mode 100644
index 00000000..8b836c13
--- /dev/null
+++ b/lib/x86/svm.h
@@ -0,0 +1,365 @@
+
+#ifndef SRC_LIB_X86_SVM_H_
+#define SRC_LIB_X86_SVM_H_
+
+enum {
+	INTERCEPT_INTR,
+	INTERCEPT_NMI,
+	INTERCEPT_SMI,
+	INTERCEPT_INIT,
+	INTERCEPT_VINTR,
+	INTERCEPT_SELECTIVE_CR0,
+	INTERCEPT_STORE_IDTR,
+	INTERCEPT_STORE_GDTR,
+	INTERCEPT_STORE_LDTR,
+	INTERCEPT_STORE_TR,
+	INTERCEPT_LOAD_IDTR,
+	INTERCEPT_LOAD_GDTR,
+	INTERCEPT_LOAD_LDTR,
+	INTERCEPT_LOAD_TR,
+	INTERCEPT_RDTSC,
+	INTERCEPT_RDPMC,
+	INTERCEPT_PUSHF,
+	INTERCEPT_POPF,
+	INTERCEPT_CPUID,
+	INTERCEPT_RSM,
+	INTERCEPT_IRET,
+	INTERCEPT_INTn,
+	INTERCEPT_INVD,
+	INTERCEPT_PAUSE,
+	INTERCEPT_HLT,
+	INTERCEPT_INVLPG,
+	INTERCEPT_INVLPGA,
+	INTERCEPT_IOIO_PROT,
+	INTERCEPT_MSR_PROT,
+	INTERCEPT_TASK_SWITCH,
+	INTERCEPT_FERR_FREEZE,
+	INTERCEPT_SHUTDOWN,
+	INTERCEPT_VMRUN,
+	INTERCEPT_VMMCALL,
+	INTERCEPT_VMLOAD,
+	INTERCEPT_VMSAVE,
+	INTERCEPT_STGI,
+	INTERCEPT_CLGI,
+	INTERCEPT_SKINIT,
+	INTERCEPT_RDTSCP,
+	INTERCEPT_ICEBP,
+	INTERCEPT_WBINVD,
+	INTERCEPT_MONITOR,
+	INTERCEPT_MWAIT,
+	INTERCEPT_MWAIT_COND,
+};
+
+enum {
+	VMCB_CLEAN_INTERCEPTS = 1, /* Intercept vectors, TSC offset, pause filter count */
+	VMCB_CLEAN_PERM_MAP = 2,   /* IOPM Base and MSRPM Base */
+	VMCB_CLEAN_ASID = 4,	   /* ASID */
+	VMCB_CLEAN_INTR = 8,	   /* int_ctl, int_vector */
+	VMCB_CLEAN_NPT = 16,	   /* npt_en, nCR3, gPAT */
+	VMCB_CLEAN_CR = 32,	   /* CR0, CR3, CR4, EFER */
+	VMCB_CLEAN_DR = 64,	   /* DR6, DR7 */
+	VMCB_CLEAN_DT = 128,	   /* GDT, IDT */
+	VMCB_CLEAN_SEG = 256,	   /* CS, DS, SS, ES, CPL */
+	VMCB_CLEAN_CR2 = 512,	   /* CR2 only */
+	VMCB_CLEAN_LBR = 1024,	   /* DBGCTL, BR_FROM, BR_TO, LAST_EX_FROM, LAST_EX_TO */
+	VMCB_CLEAN_AVIC = 2048,	   /* APIC_BAR, APIC_BACKING_PAGE,
+				    * PHYSICAL_TABLE pointer, LOGICAL_TABLE pointer
+				    */
+	VMCB_CLEAN_ALL = 4095,
+};
+
+struct __attribute__ ((__packed__)) vmcb_control_area {
+	u16 intercept_cr_read;
+	u16 intercept_cr_write;
+	u16 intercept_dr_read;
+	u16 intercept_dr_write;
+	u32 intercept_exceptions;
+	u64 intercept;
+	u8 reserved_1[40];
+	u16 pause_filter_thresh;
+	u16 pause_filter_count;
+	u64 iopm_base_pa;
+	u64 msrpm_base_pa;
+	u64 tsc_offset;
+	u32 asid;
+	u8 tlb_ctl;
+	u8 reserved_2[3];
+	u32 int_ctl;
+	u32 int_vector;
+	u32 int_state;
+	u8 reserved_3[4];
+	u32 exit_code;
+	u32 exit_code_hi;
+	u64 exit_info_1;
+	u64 exit_info_2;
+	u32 exit_int_info;
+	u32 exit_int_info_err;
+	u64 nested_ctl;
+	u8 reserved_4[16];
+	u32 event_inj;
+	u32 event_inj_err;
+	u64 nested_cr3;
+	u64 virt_ext;
+	u32 clean;
+	u32 reserved_5;
+	u64 next_rip;
+	u8 insn_len;
+	u8 insn_bytes[15];
+	u8 reserved_6[800];
+};
+
+#define TLB_CONTROL_DO_NOTHING 0
+#define TLB_CONTROL_FLUSH_ALL_ASID 1
+
+#define V_TPR_MASK 0x0f
+
+#define V_IRQ_SHIFT 8
+#define V_IRQ_MASK (1 << V_IRQ_SHIFT)
+
+#define V_GIF_ENABLED_SHIFT 25
+#define V_GIF_ENABLED_MASK (1 << V_GIF_ENABLED_SHIFT)
+
+#define V_GIF_SHIFT 9
+#define V_GIF_MASK (1 << V_GIF_SHIFT)
+
+#define V_INTR_PRIO_SHIFT 16
+#define V_INTR_PRIO_MASK (0x0f << V_INTR_PRIO_SHIFT)
+
+#define V_IGN_TPR_SHIFT 20
+#define V_IGN_TPR_MASK (1 << V_IGN_TPR_SHIFT)
+
+#define V_INTR_MASKING_SHIFT 24
+#define V_INTR_MASKING_MASK (1 << V_INTR_MASKING_SHIFT)
+
+#define SVM_INTERRUPT_SHADOW_MASK 1
+
+#define SVM_IOIO_STR_SHIFT 2
+#define SVM_IOIO_REP_SHIFT 3
+#define SVM_IOIO_SIZE_SHIFT 4
+#define SVM_IOIO_ASIZE_SHIFT 7
+
+#define SVM_IOIO_TYPE_MASK 1
+#define SVM_IOIO_STR_MASK (1 << SVM_IOIO_STR_SHIFT)
+#define SVM_IOIO_REP_MASK (1 << SVM_IOIO_REP_SHIFT)
+#define SVM_IOIO_SIZE_MASK (7 << SVM_IOIO_SIZE_SHIFT)
+#define SVM_IOIO_ASIZE_MASK (7 << SVM_IOIO_ASIZE_SHIFT)
+
+#define SVM_VM_CR_VALID_MASK	0x001fULL
+#define SVM_VM_CR_SVM_LOCK_MASK 0x0008ULL
+#define SVM_VM_CR_SVM_DIS_MASK  0x0010ULL
+
+#define TSC_RATIO_DEFAULT   0x0100000000ULL
+
+struct __attribute__ ((__packed__)) vmcb_seg {
+	u16 selector;
+	u16 attrib;
+	u32 limit;
+	u64 base;
+};
+
+struct __attribute__ ((__packed__)) vmcb_save_area {
+	struct vmcb_seg es;
+	struct vmcb_seg cs;
+	struct vmcb_seg ss;
+	struct vmcb_seg ds;
+	struct vmcb_seg fs;
+	struct vmcb_seg gs;
+	struct vmcb_seg gdtr;
+	struct vmcb_seg ldtr;
+	struct vmcb_seg idtr;
+	struct vmcb_seg tr;
+	u8 reserved_1[43];
+	u8 cpl;
+	u8 reserved_2[4];
+	u64 efer;
+	u8 reserved_3[112];
+	u64 cr4;
+	u64 cr3;
+	u64 cr0;
+	u64 dr7;
+	u64 dr6;
+	u64 rflags;
+	u64 rip;
+	u8 reserved_4[88];
+	u64 rsp;
+	u8 reserved_5[24];
+	u64 rax;
+	u64 star;
+	u64 lstar;
+	u64 cstar;
+	u64 sfmask;
+	u64 kernel_gs_base;
+	u64 sysenter_cs;
+	u64 sysenter_esp;
+	u64 sysenter_eip;
+	u64 cr2;
+	u8 reserved_6[32];
+	u64 g_pat;
+	u64 dbgctl;
+	u64 br_from;
+	u64 br_to;
+	u64 last_excp_from;
+	u64 last_excp_to;
+};
+
+struct __attribute__ ((__packed__)) vmcb {
+	struct vmcb_control_area control;
+	struct vmcb_save_area save;
+};
+
+#define SVM_CPUID_FEATURE_SHIFT 2
+#define SVM_CPUID_FUNC 0x8000000a
+
+#define SVM_VM_CR_SVM_DISABLE 4
+
+#define SVM_SELECTOR_S_SHIFT 4
+#define SVM_SELECTOR_DPL_SHIFT 5
+#define SVM_SELECTOR_P_SHIFT 7
+#define SVM_SELECTOR_AVL_SHIFT 8
+#define SVM_SELECTOR_L_SHIFT 9
+#define SVM_SELECTOR_DB_SHIFT 10
+#define SVM_SELECTOR_G_SHIFT 11
+
+#define SVM_SELECTOR_TYPE_MASK (0xf)
+#define SVM_SELECTOR_S_MASK (1 << SVM_SELECTOR_S_SHIFT)
+#define SVM_SELECTOR_DPL_MASK (3 << SVM_SELECTOR_DPL_SHIFT)
+#define SVM_SELECTOR_P_MASK (1 << SVM_SELECTOR_P_SHIFT)
+#define SVM_SELECTOR_AVL_MASK (1 << SVM_SELECTOR_AVL_SHIFT)
+#define SVM_SELECTOR_L_MASK (1 << SVM_SELECTOR_L_SHIFT)
+#define SVM_SELECTOR_DB_MASK (1 << SVM_SELECTOR_DB_SHIFT)
+#define SVM_SELECTOR_G_MASK (1 << SVM_SELECTOR_G_SHIFT)
+
+#define SVM_SELECTOR_WRITE_MASK (1 << 1)
+#define SVM_SELECTOR_READ_MASK SVM_SELECTOR_WRITE_MASK
+#define SVM_SELECTOR_CODE_MASK (1 << 3)
+
+#define INTERCEPT_CR0_MASK 1
+#define INTERCEPT_CR3_MASK (1 << 3)
+#define INTERCEPT_CR4_MASK (1 << 4)
+#define INTERCEPT_CR8_MASK (1 << 8)
+
+#define INTERCEPT_DR0_MASK 1
+#define INTERCEPT_DR1_MASK (1 << 1)
+#define INTERCEPT_DR2_MASK (1 << 2)
+#define INTERCEPT_DR3_MASK (1 << 3)
+#define INTERCEPT_DR4_MASK (1 << 4)
+#define INTERCEPT_DR5_MASK (1 << 5)
+#define INTERCEPT_DR6_MASK (1 << 6)
+#define INTERCEPT_DR7_MASK (1 << 7)
+
+#define SVM_EVTINJ_VEC_MASK 0xff
+
+#define SVM_EVTINJ_TYPE_SHIFT 8
+#define SVM_EVTINJ_TYPE_MASK (7 << SVM_EVTINJ_TYPE_SHIFT)
+
+#define SVM_EVTINJ_TYPE_INTR (0 << SVM_EVTINJ_TYPE_SHIFT)
+#define SVM_EVTINJ_TYPE_NMI (2 << SVM_EVTINJ_TYPE_SHIFT)
+#define SVM_EVTINJ_TYPE_EXEPT (3 << SVM_EVTINJ_TYPE_SHIFT)
+#define SVM_EVTINJ_TYPE_SOFT (4 << SVM_EVTINJ_TYPE_SHIFT)
+
+#define SVM_EVTINJ_VALID (1 << 31)
+#define SVM_EVTINJ_VALID_ERR (1 << 11)
+
+#define SVM_EXITINTINFO_VEC_MASK SVM_EVTINJ_VEC_MASK
+#define SVM_EXITINTINFO_TYPE_MASK SVM_EVTINJ_TYPE_MASK
+
+#define SVM_EXITINTINFO_TYPE_INTR SVM_EVTINJ_TYPE_INTR
+#define SVM_EXITINTINFO_TYPE_NMI SVM_EVTINJ_TYPE_NMI
+#define SVM_EXITINTINFO_TYPE_EXEPT SVM_EVTINJ_TYPE_EXEPT
+#define SVM_EXITINTINFO_TYPE_SOFT SVM_EVTINJ_TYPE_SOFT
+
+#define SVM_EXITINTINFO_VALID SVM_EVTINJ_VALID
+#define SVM_EXITINTINFO_VALID_ERR SVM_EVTINJ_VALID_ERR
+
+#define SVM_EXITINFOSHIFT_TS_REASON_IRET 36
+#define SVM_EXITINFOSHIFT_TS_REASON_JMP 38
+#define SVM_EXITINFOSHIFT_TS_HAS_ERROR_CODE 44
+
+#define SVM_EXIT_READ_CR0   0x000
+#define SVM_EXIT_READ_CR3   0x003
+#define SVM_EXIT_READ_CR4   0x004
+#define SVM_EXIT_READ_CR8   0x008
+#define SVM_EXIT_WRITE_CR0  0x010
+#define SVM_EXIT_WRITE_CR3  0x013
+#define SVM_EXIT_WRITE_CR4  0x014
+#define SVM_EXIT_WRITE_CR8  0x018
+#define SVM_EXIT_READ_DR0   0x020
+#define SVM_EXIT_READ_DR1   0x021
+#define SVM_EXIT_READ_DR2   0x022
+#define SVM_EXIT_READ_DR3   0x023
+#define SVM_EXIT_READ_DR4   0x024
+#define SVM_EXIT_READ_DR5   0x025
+#define SVM_EXIT_READ_DR6   0x026
+#define SVM_EXIT_READ_DR7   0x027
+#define SVM_EXIT_WRITE_DR0  0x030
+#define SVM_EXIT_WRITE_DR1  0x031
+#define SVM_EXIT_WRITE_DR2  0x032
+#define SVM_EXIT_WRITE_DR3  0x033
+#define SVM_EXIT_WRITE_DR4  0x034
+#define SVM_EXIT_WRITE_DR5  0x035
+#define SVM_EXIT_WRITE_DR6  0x036
+#define SVM_EXIT_WRITE_DR7  0x037
+#define SVM_EXIT_EXCP_BASE	  0x040
+#define SVM_EXIT_INTR	   0x060
+#define SVM_EXIT_NMI		0x061
+#define SVM_EXIT_SMI		0x062
+#define SVM_EXIT_INIT	   0x063
+#define SVM_EXIT_VINTR	  0x064
+#define SVM_EXIT_CR0_SEL_WRITE  0x065
+#define SVM_EXIT_IDTR_READ  0x066
+#define SVM_EXIT_GDTR_READ  0x067
+#define SVM_EXIT_LDTR_READ  0x068
+#define SVM_EXIT_TR_READ	0x069
+#define SVM_EXIT_IDTR_WRITE 0x06a
+#define SVM_EXIT_GDTR_WRITE 0x06b
+#define SVM_EXIT_LDTR_WRITE 0x06c
+#define SVM_EXIT_TR_WRITE   0x06d
+#define SVM_EXIT_RDTSC	  0x06e
+#define SVM_EXIT_RDPMC	  0x06f
+#define SVM_EXIT_PUSHF	  0x070
+#define SVM_EXIT_POPF	   0x071
+#define SVM_EXIT_CPUID	  0x072
+#define SVM_EXIT_RSM		0x073
+#define SVM_EXIT_IRET	   0x074
+#define SVM_EXIT_SWINT	  0x075
+#define SVM_EXIT_INVD	   0x076
+#define SVM_EXIT_PAUSE	  0x077
+#define SVM_EXIT_HLT		0x078
+#define SVM_EXIT_INVLPG	 0x079
+#define SVM_EXIT_INVLPGA	0x07a
+#define SVM_EXIT_IOIO	   0x07b
+#define SVM_EXIT_MSR		0x07c
+#define SVM_EXIT_TASK_SWITCH	0x07d
+#define SVM_EXIT_FERR_FREEZE	0x07e
+#define SVM_EXIT_SHUTDOWN   0x07f
+#define SVM_EXIT_VMRUN	  0x080
+#define SVM_EXIT_VMMCALL	0x081
+#define SVM_EXIT_VMLOAD	 0x082
+#define SVM_EXIT_VMSAVE	 0x083
+#define SVM_EXIT_STGI	   0x084
+#define SVM_EXIT_CLGI	   0x085
+#define SVM_EXIT_SKINIT	 0x086
+#define SVM_EXIT_RDTSCP	 0x087
+#define SVM_EXIT_ICEBP	  0x088
+#define SVM_EXIT_WBINVD	 0x089
+#define SVM_EXIT_MONITOR	0x08a
+#define SVM_EXIT_MWAIT	  0x08b
+#define SVM_EXIT_MWAIT_COND 0x08c
+#define SVM_EXIT_NPF		0x400
+
+#define SVM_EXIT_ERR		-1
+
+#define SVM_CR0_SELECTIVE_MASK (X86_CR0_TS | X86_CR0_MP)
+
+#define SVM_CR0_RESERVED_MASK			0xffffffff00000000U
+#define SVM_CR3_LONG_MBZ_MASK			0xfff0000000000000U
+#define SVM_CR3_LONG_RESERVED_MASK		0x0000000000000fe7U
+#define SVM_CR3_PAE_LEGACY_RESERVED_MASK	0x0000000000000007U
+#define SVM_CR4_LEGACY_RESERVED_MASK		0xff08e000U
+#define SVM_CR4_RESERVED_MASK			0xffffffffff08e000U
+#define SVM_DR6_RESERVED_MASK			0xffffffffffff1ff0U
+#define SVM_DR7_RESERVED_MASK			0xffffffff0000cc00U
+#define SVM_EFER_RESERVED_MASK			0xffffffffffff0200U
+
+
+#endif /* SRC_LIB_X86_SVM_H_ */
diff --git a/x86/svm.h b/x86/svm.h
index 1ad85ba4..3cd7ce8b 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -2,367 +2,10 @@
 #define X86_SVM_H
 
 #include "libcflat.h"
+#include <x86/svm.h>
 
-enum {
-	INTERCEPT_INTR,
-	INTERCEPT_NMI,
-	INTERCEPT_SMI,
-	INTERCEPT_INIT,
-	INTERCEPT_VINTR,
-	INTERCEPT_SELECTIVE_CR0,
-	INTERCEPT_STORE_IDTR,
-	INTERCEPT_STORE_GDTR,
-	INTERCEPT_STORE_LDTR,
-	INTERCEPT_STORE_TR,
-	INTERCEPT_LOAD_IDTR,
-	INTERCEPT_LOAD_GDTR,
-	INTERCEPT_LOAD_LDTR,
-	INTERCEPT_LOAD_TR,
-	INTERCEPT_RDTSC,
-	INTERCEPT_RDPMC,
-	INTERCEPT_PUSHF,
-	INTERCEPT_POPF,
-	INTERCEPT_CPUID,
-	INTERCEPT_RSM,
-	INTERCEPT_IRET,
-	INTERCEPT_INTn,
-	INTERCEPT_INVD,
-	INTERCEPT_PAUSE,
-	INTERCEPT_HLT,
-	INTERCEPT_INVLPG,
-	INTERCEPT_INVLPGA,
-	INTERCEPT_IOIO_PROT,
-	INTERCEPT_MSR_PROT,
-	INTERCEPT_TASK_SWITCH,
-	INTERCEPT_FERR_FREEZE,
-	INTERCEPT_SHUTDOWN,
-	INTERCEPT_VMRUN,
-	INTERCEPT_VMMCALL,
-	INTERCEPT_VMLOAD,
-	INTERCEPT_VMSAVE,
-	INTERCEPT_STGI,
-	INTERCEPT_CLGI,
-	INTERCEPT_SKINIT,
-	INTERCEPT_RDTSCP,
-	INTERCEPT_ICEBP,
-	INTERCEPT_WBINVD,
-	INTERCEPT_MONITOR,
-	INTERCEPT_MWAIT,
-	INTERCEPT_MWAIT_COND,
-};
-
-enum {
-        VMCB_CLEAN_INTERCEPTS = 1, /* Intercept vectors, TSC offset, pause filter count */
-        VMCB_CLEAN_PERM_MAP = 2,   /* IOPM Base and MSRPM Base */
-        VMCB_CLEAN_ASID = 4,       /* ASID */
-        VMCB_CLEAN_INTR = 8,       /* int_ctl, int_vector */
-        VMCB_CLEAN_NPT = 16,       /* npt_en, nCR3, gPAT */
-        VMCB_CLEAN_CR = 32,        /* CR0, CR3, CR4, EFER */
-        VMCB_CLEAN_DR = 64,        /* DR6, DR7 */
-        VMCB_CLEAN_DT = 128,       /* GDT, IDT */
-        VMCB_CLEAN_SEG = 256,      /* CS, DS, SS, ES, CPL */
-        VMCB_CLEAN_CR2 = 512,      /* CR2 only */
-        VMCB_CLEAN_LBR = 1024,     /* DBGCTL, BR_FROM, BR_TO, LAST_EX_FROM, LAST_EX_TO */
-        VMCB_CLEAN_AVIC = 2048,    /* APIC_BAR, APIC_BACKING_PAGE,
-				      PHYSICAL_TABLE pointer, LOGICAL_TABLE pointer */
-        VMCB_CLEAN_ALL = 4095,
-};
-
-struct __attribute__ ((__packed__)) vmcb_control_area {
-	u16 intercept_cr_read;
-	u16 intercept_cr_write;
-	u16 intercept_dr_read;
-	u16 intercept_dr_write;
-	u32 intercept_exceptions;
-	u64 intercept;
-	u8 reserved_1[40];
-	u16 pause_filter_thresh;
-	u16 pause_filter_count;
-	u64 iopm_base_pa;
-	u64 msrpm_base_pa;
-	u64 tsc_offset;
-	u32 asid;
-	u8 tlb_ctl;
-	u8 reserved_2[3];
-	u32 int_ctl;
-	u32 int_vector;
-	u32 int_state;
-	u8 reserved_3[4];
-	u32 exit_code;
-	u32 exit_code_hi;
-	u64 exit_info_1;
-	u64 exit_info_2;
-	u32 exit_int_info;
-	u32 exit_int_info_err;
-	u64 nested_ctl;
-	u8 reserved_4[16];
-	u32 event_inj;
-	u32 event_inj_err;
-	u64 nested_cr3;
-	u64 virt_ext;
-	u32 clean;
-	u32 reserved_5;
-	u64 next_rip;
-	u8 insn_len;
-	u8 insn_bytes[15];
-	u8 reserved_6[800];
-};
-
-#define TLB_CONTROL_DO_NOTHING 0
-#define TLB_CONTROL_FLUSH_ALL_ASID 1
-
-#define V_TPR_MASK 0x0f
-
-#define V_IRQ_SHIFT 8
-#define V_IRQ_MASK (1 << V_IRQ_SHIFT)
-
-#define V_GIF_ENABLED_SHIFT 25
-#define V_GIF_ENABLED_MASK (1 << V_GIF_ENABLED_SHIFT)
-
-#define V_GIF_SHIFT 9
-#define V_GIF_MASK (1 << V_GIF_SHIFT)
-
-#define V_INTR_PRIO_SHIFT 16
-#define V_INTR_PRIO_MASK (0x0f << V_INTR_PRIO_SHIFT)
-
-#define V_IGN_TPR_SHIFT 20
-#define V_IGN_TPR_MASK (1 << V_IGN_TPR_SHIFT)
-
-#define V_INTR_MASKING_SHIFT 24
-#define V_INTR_MASKING_MASK (1 << V_INTR_MASKING_SHIFT)
-
-#define SVM_INTERRUPT_SHADOW_MASK 1
-
-#define SVM_IOIO_STR_SHIFT 2
-#define SVM_IOIO_REP_SHIFT 3
-#define SVM_IOIO_SIZE_SHIFT 4
-#define SVM_IOIO_ASIZE_SHIFT 7
-
-#define SVM_IOIO_TYPE_MASK 1
-#define SVM_IOIO_STR_MASK (1 << SVM_IOIO_STR_SHIFT)
-#define SVM_IOIO_REP_MASK (1 << SVM_IOIO_REP_SHIFT)
-#define SVM_IOIO_SIZE_MASK (7 << SVM_IOIO_SIZE_SHIFT)
-#define SVM_IOIO_ASIZE_MASK (7 << SVM_IOIO_ASIZE_SHIFT)
-
-#define SVM_VM_CR_VALID_MASK	0x001fULL
-#define SVM_VM_CR_SVM_LOCK_MASK 0x0008ULL
-#define SVM_VM_CR_SVM_DIS_MASK  0x0010ULL
-
-#define TSC_RATIO_DEFAULT   0x0100000000ULL
-
-struct __attribute__ ((__packed__)) vmcb_seg {
-	u16 selector;
-	u16 attrib;
-	u32 limit;
-	u64 base;
-};
-
-struct __attribute__ ((__packed__)) vmcb_save_area {
-	struct vmcb_seg es;
-	struct vmcb_seg cs;
-	struct vmcb_seg ss;
-	struct vmcb_seg ds;
-	struct vmcb_seg fs;
-	struct vmcb_seg gs;
-	struct vmcb_seg gdtr;
-	struct vmcb_seg ldtr;
-	struct vmcb_seg idtr;
-	struct vmcb_seg tr;
-	u8 reserved_1[43];
-	u8 cpl;
-	u8 reserved_2[4];
-	u64 efer;
-	u8 reserved_3[112];
-	u64 cr4;
-	u64 cr3;
-	u64 cr0;
-	u64 dr7;
-	u64 dr6;
-	u64 rflags;
-	u64 rip;
-	u8 reserved_4[88];
-	u64 rsp;
-	u8 reserved_5[24];
-	u64 rax;
-	u64 star;
-	u64 lstar;
-	u64 cstar;
-	u64 sfmask;
-	u64 kernel_gs_base;
-	u64 sysenter_cs;
-	u64 sysenter_esp;
-	u64 sysenter_eip;
-	u64 cr2;
-	u8 reserved_6[32];
-	u64 g_pat;
-	u64 dbgctl;
-	u64 br_from;
-	u64 br_to;
-	u64 last_excp_from;
-	u64 last_excp_to;
-};
-
-struct __attribute__ ((__packed__)) vmcb {
-	struct vmcb_control_area control;
-	struct vmcb_save_area save;
-};
-
-#define SVM_CPUID_FEATURE_SHIFT 2
-#define SVM_CPUID_FUNC 0x8000000a
-
-#define SVM_VM_CR_SVM_DISABLE 4
-
-#define SVM_SELECTOR_S_SHIFT 4
-#define SVM_SELECTOR_DPL_SHIFT 5
-#define SVM_SELECTOR_P_SHIFT 7
-#define SVM_SELECTOR_AVL_SHIFT 8
-#define SVM_SELECTOR_L_SHIFT 9
-#define SVM_SELECTOR_DB_SHIFT 10
-#define SVM_SELECTOR_G_SHIFT 11
-
-#define SVM_SELECTOR_TYPE_MASK (0xf)
-#define SVM_SELECTOR_S_MASK (1 << SVM_SELECTOR_S_SHIFT)
-#define SVM_SELECTOR_DPL_MASK (3 << SVM_SELECTOR_DPL_SHIFT)
-#define SVM_SELECTOR_P_MASK (1 << SVM_SELECTOR_P_SHIFT)
-#define SVM_SELECTOR_AVL_MASK (1 << SVM_SELECTOR_AVL_SHIFT)
-#define SVM_SELECTOR_L_MASK (1 << SVM_SELECTOR_L_SHIFT)
-#define SVM_SELECTOR_DB_MASK (1 << SVM_SELECTOR_DB_SHIFT)
-#define SVM_SELECTOR_G_MASK (1 << SVM_SELECTOR_G_SHIFT)
-
-#define SVM_SELECTOR_WRITE_MASK (1 << 1)
-#define SVM_SELECTOR_READ_MASK SVM_SELECTOR_WRITE_MASK
-#define SVM_SELECTOR_CODE_MASK (1 << 3)
-
-#define INTERCEPT_CR0_MASK 1
-#define INTERCEPT_CR3_MASK (1 << 3)
-#define INTERCEPT_CR4_MASK (1 << 4)
-#define INTERCEPT_CR8_MASK (1 << 8)
-
-#define INTERCEPT_DR0_MASK 1
-#define INTERCEPT_DR1_MASK (1 << 1)
-#define INTERCEPT_DR2_MASK (1 << 2)
-#define INTERCEPT_DR3_MASK (1 << 3)
-#define INTERCEPT_DR4_MASK (1 << 4)
-#define INTERCEPT_DR5_MASK (1 << 5)
-#define INTERCEPT_DR6_MASK (1 << 6)
-#define INTERCEPT_DR7_MASK (1 << 7)
-
-#define SVM_EVTINJ_VEC_MASK 0xff
-
-#define SVM_EVTINJ_TYPE_SHIFT 8
-#define SVM_EVTINJ_TYPE_MASK (7 << SVM_EVTINJ_TYPE_SHIFT)
-
-#define SVM_EVTINJ_TYPE_INTR (0 << SVM_EVTINJ_TYPE_SHIFT)
-#define SVM_EVTINJ_TYPE_NMI (2 << SVM_EVTINJ_TYPE_SHIFT)
-#define SVM_EVTINJ_TYPE_EXEPT (3 << SVM_EVTINJ_TYPE_SHIFT)
-#define SVM_EVTINJ_TYPE_SOFT (4 << SVM_EVTINJ_TYPE_SHIFT)
-
-#define SVM_EVTINJ_VALID (1 << 31)
-#define SVM_EVTINJ_VALID_ERR (1 << 11)
-
-#define SVM_EXITINTINFO_VEC_MASK SVM_EVTINJ_VEC_MASK
-#define SVM_EXITINTINFO_TYPE_MASK SVM_EVTINJ_TYPE_MASK
-
-#define	SVM_EXITINTINFO_TYPE_INTR SVM_EVTINJ_TYPE_INTR
-#define	SVM_EXITINTINFO_TYPE_NMI SVM_EVTINJ_TYPE_NMI
-#define	SVM_EXITINTINFO_TYPE_EXEPT SVM_EVTINJ_TYPE_EXEPT
-#define	SVM_EXITINTINFO_TYPE_SOFT SVM_EVTINJ_TYPE_SOFT
-
-#define SVM_EXITINTINFO_VALID SVM_EVTINJ_VALID
-#define SVM_EXITINTINFO_VALID_ERR SVM_EVTINJ_VALID_ERR
-
-#define SVM_EXITINFOSHIFT_TS_REASON_IRET 36
-#define SVM_EXITINFOSHIFT_TS_REASON_JMP 38
-#define SVM_EXITINFOSHIFT_TS_HAS_ERROR_CODE 44
-
-#define	SVM_EXIT_READ_CR0 	0x000
-#define	SVM_EXIT_READ_CR3 	0x003
-#define	SVM_EXIT_READ_CR4 	0x004
-#define	SVM_EXIT_READ_CR8 	0x008
-#define	SVM_EXIT_WRITE_CR0 	0x010
-#define	SVM_EXIT_WRITE_CR3 	0x013
-#define	SVM_EXIT_WRITE_CR4 	0x014
-#define	SVM_EXIT_WRITE_CR8 	0x018
-#define	SVM_EXIT_READ_DR0 	0x020
-#define	SVM_EXIT_READ_DR1 	0x021
-#define	SVM_EXIT_READ_DR2 	0x022
-#define	SVM_EXIT_READ_DR3 	0x023
-#define	SVM_EXIT_READ_DR4 	0x024
-#define	SVM_EXIT_READ_DR5 	0x025
-#define	SVM_EXIT_READ_DR6 	0x026
-#define	SVM_EXIT_READ_DR7 	0x027
-#define	SVM_EXIT_WRITE_DR0 	0x030
-#define	SVM_EXIT_WRITE_DR1 	0x031
-#define	SVM_EXIT_WRITE_DR2 	0x032
-#define	SVM_EXIT_WRITE_DR3 	0x033
-#define	SVM_EXIT_WRITE_DR4 	0x034
-#define	SVM_EXIT_WRITE_DR5 	0x035
-#define	SVM_EXIT_WRITE_DR6 	0x036
-#define	SVM_EXIT_WRITE_DR7 	0x037
-#define SVM_EXIT_EXCP_BASE      0x040
-#define SVM_EXIT_INTR		0x060
-#define SVM_EXIT_NMI		0x061
-#define SVM_EXIT_SMI		0x062
-#define SVM_EXIT_INIT		0x063
-#define SVM_EXIT_VINTR		0x064
-#define SVM_EXIT_CR0_SEL_WRITE	0x065
-#define SVM_EXIT_IDTR_READ	0x066
-#define SVM_EXIT_GDTR_READ	0x067
-#define SVM_EXIT_LDTR_READ	0x068
-#define SVM_EXIT_TR_READ	0x069
-#define SVM_EXIT_IDTR_WRITE	0x06a
-#define SVM_EXIT_GDTR_WRITE	0x06b
-#define SVM_EXIT_LDTR_WRITE	0x06c
-#define SVM_EXIT_TR_WRITE	0x06d
-#define SVM_EXIT_RDTSC		0x06e
-#define SVM_EXIT_RDPMC		0x06f
-#define SVM_EXIT_PUSHF		0x070
-#define SVM_EXIT_POPF		0x071
-#define SVM_EXIT_CPUID		0x072
-#define SVM_EXIT_RSM		0x073
-#define SVM_EXIT_IRET		0x074
-#define SVM_EXIT_SWINT		0x075
-#define SVM_EXIT_INVD		0x076
-#define SVM_EXIT_PAUSE		0x077
-#define SVM_EXIT_HLT		0x078
-#define SVM_EXIT_INVLPG		0x079
-#define SVM_EXIT_INVLPGA	0x07a
-#define SVM_EXIT_IOIO		0x07b
-#define SVM_EXIT_MSR		0x07c
-#define SVM_EXIT_TASK_SWITCH	0x07d
-#define SVM_EXIT_FERR_FREEZE	0x07e
-#define SVM_EXIT_SHUTDOWN	0x07f
-#define SVM_EXIT_VMRUN		0x080
-#define SVM_EXIT_VMMCALL	0x081
-#define SVM_EXIT_VMLOAD		0x082
-#define SVM_EXIT_VMSAVE		0x083
-#define SVM_EXIT_STGI		0x084
-#define SVM_EXIT_CLGI		0x085
-#define SVM_EXIT_SKINIT		0x086
-#define SVM_EXIT_RDTSCP		0x087
-#define SVM_EXIT_ICEBP		0x088
-#define SVM_EXIT_WBINVD		0x089
-#define SVM_EXIT_MONITOR	0x08a
-#define SVM_EXIT_MWAIT		0x08b
-#define SVM_EXIT_MWAIT_COND	0x08c
-#define SVM_EXIT_NPF  		0x400
-
-#define SVM_EXIT_ERR		-1
-
-#define SVM_CR0_SELECTIVE_MASK (X86_CR0_TS | X86_CR0_MP)
-
-#define	SVM_CR0_RESERVED_MASK			0xffffffff00000000U
-#define	SVM_CR3_LONG_MBZ_MASK			0xfff0000000000000U
-#define	SVM_CR3_LONG_RESERVED_MASK		0x0000000000000fe7U
-#define SVM_CR3_PAE_LEGACY_RESERVED_MASK	0x0000000000000007U
-#define	SVM_CR4_LEGACY_RESERVED_MASK		0xff08e000U
-#define	SVM_CR4_RESERVED_MASK			0xffffffffff08e000U
-#define	SVM_DR6_RESERVED_MASK			0xffffffffffff1ff0U
-#define	SVM_DR7_RESERVED_MASK			0xffffffff0000cc00U
-#define	SVM_EFER_RESERVED_MASK			0xffffffffffff0200U
 
 #define MSR_BITMAP_SIZE 8192
-
 #define LBR_CTL_ENABLE_MASK BIT_ULL(0)
 
 struct svm_test {
-- 
2.34.3


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [kvm-unit-tests PATCH v3 15/27] svm: move some svm support functions into lib/x86/svm_lib.h
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (13 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 14/27] svm: move svm spec definitions to lib/x86/svm.h Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-12-01 13:59   ` Emanuele Giuseppe Esposito
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 16/27] svm: move setup_svm() to svm_lib.c Maxim Levitsky
                   ` (12 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 lib/x86/svm_lib.h | 53 +++++++++++++++++++++++++++++++++++++++++++++++
 x86/svm.c         | 36 +-------------------------------
 x86/svm.h         | 18 ----------------
 x86/svm_npt.c     |  1 +
 x86/svm_tests.c   |  1 +
 5 files changed, 56 insertions(+), 53 deletions(-)
 create mode 100644 lib/x86/svm_lib.h

diff --git a/lib/x86/svm_lib.h b/lib/x86/svm_lib.h
new file mode 100644
index 00000000..04910281
--- /dev/null
+++ b/lib/x86/svm_lib.h
@@ -0,0 +1,53 @@
+#ifndef SRC_LIB_X86_SVM_LIB_H_
+#define SRC_LIB_X86_SVM_LIB_H_
+
+#include <x86/svm.h>
+#include "processor.h"
+
+static inline bool npt_supported(void)
+{
+	return this_cpu_has(X86_FEATURE_NPT);
+}
+
+static inline bool vgif_supported(void)
+{
+	return this_cpu_has(X86_FEATURE_VGIF);
+}
+
+static inline bool lbrv_supported(void)
+{
+	return this_cpu_has(X86_FEATURE_LBRV);
+}
+
+static inline bool tsc_scale_supported(void)
+{
+	return this_cpu_has(X86_FEATURE_TSCRATEMSR);
+}
+
+static inline bool pause_filter_supported(void)
+{
+	return this_cpu_has(X86_FEATURE_PAUSEFILTER);
+}
+
+static inline bool pause_threshold_supported(void)
+{
+	return this_cpu_has(X86_FEATURE_PFTHRESHOLD);
+}
+
+static inline void vmmcall(void)
+{
+	asm volatile ("vmmcall" : : : "memory");
+}
+
+static inline void stgi(void)
+{
+	asm volatile ("stgi");
+}
+
+static inline void clgi(void)
+{
+	asm volatile ("clgi");
+}
+
+
+#endif /* SRC_LIB_X86_SVM_LIB_H_ */
diff --git a/x86/svm.c b/x86/svm.c
index 0b2a1d69..8d90a242 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -14,6 +14,7 @@
 #include "alloc_page.h"
 #include "isr.h"
 #include "apic.h"
+#include "svm_lib.h"
 
 /* for the nested page table*/
 u64 *pml4e;
@@ -54,32 +55,6 @@ bool default_supported(void)
 	return true;
 }
 
-bool vgif_supported(void)
-{
-	return this_cpu_has(X86_FEATURE_VGIF);
-}
-
-bool lbrv_supported(void)
-{
-	return this_cpu_has(X86_FEATURE_LBRV);
-}
-
-bool tsc_scale_supported(void)
-{
-	return this_cpu_has(X86_FEATURE_TSCRATEMSR);
-}
-
-bool pause_filter_supported(void)
-{
-	return this_cpu_has(X86_FEATURE_PAUSEFILTER);
-}
-
-bool pause_threshold_supported(void)
-{
-	return this_cpu_has(X86_FEATURE_PFTHRESHOLD);
-}
-
-
 void default_prepare(struct svm_test *test)
 {
 	vmcb_ident(vmcb);
@@ -94,10 +69,6 @@ bool default_finished(struct svm_test *test)
 	return true; /* one vmexit */
 }
 
-bool npt_supported(void)
-{
-	return this_cpu_has(X86_FEATURE_NPT);
-}
 
 int get_test_stage(struct svm_test *test)
 {
@@ -128,11 +99,6 @@ static void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
 	seg->base = base;
 }
 
-inline void vmmcall(void)
-{
-	asm volatile ("vmmcall" : : : "memory");
-}
-
 static test_guest_func guest_main;
 
 void test_set_guest(test_guest_func func)
diff --git a/x86/svm.h b/x86/svm.h
index 3cd7ce8b..7cb1b898 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -53,21 +53,14 @@ u64 *npt_get_pdpe(u64 address);
 u64 *npt_get_pml4e(void);
 bool smp_supported(void);
 bool default_supported(void);
-bool vgif_supported(void);
-bool lbrv_supported(void);
-bool tsc_scale_supported(void);
-bool pause_filter_supported(void);
-bool pause_threshold_supported(void);
 void default_prepare(struct svm_test *test);
 void default_prepare_gif_clear(struct svm_test *test);
 bool default_finished(struct svm_test *test);
-bool npt_supported(void);
 int get_test_stage(struct svm_test *test);
 void set_test_stage(struct svm_test *test, int s);
 void inc_test_stage(struct svm_test *test);
 void vmcb_ident(struct vmcb *vmcb);
 struct regs get_regs(void);
-void vmmcall(void);
 int __svm_vmrun(u64 rip);
 void __svm_bare_vmrun(void);
 int svm_vmrun(void);
@@ -75,17 +68,6 @@ void test_set_guest(test_guest_func func);
 
 extern struct vmcb *vmcb;
 
-static inline void stgi(void)
-{
-    asm volatile ("stgi");
-}
-
-static inline void clgi(void)
-{
-    asm volatile ("clgi");
-}
-
-
 
 #define SAVE_GPR_C                              \
         "xchg %%rbx, regs+0x8\n\t"              \
diff --git a/x86/svm_npt.c b/x86/svm_npt.c
index b791f1ac..8aac0bb6 100644
--- a/x86/svm_npt.c
+++ b/x86/svm_npt.c
@@ -2,6 +2,7 @@
 #include "vm.h"
 #include "alloc_page.h"
 #include "vmalloc.h"
+#include "svm_lib.h"
 
 static void *scratch_page;
 
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 202e9271..f86c2fa4 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -12,6 +12,7 @@
 #include "delay.h"
 #include "x86/usermode.h"
 #include "vmalloc.h"
+#include "svm_lib.h"
 
 #define SVM_EXIT_MAX_DR_INTERCEPT 0x3f
 
-- 
2.34.3


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [kvm-unit-tests PATCH v3 16/27] svm: move setup_svm() to svm_lib.c
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (14 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 15/27] svm: move some svm support functions into lib/x86/svm_lib.h Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-12-01 16:14   ` Emanuele Giuseppe Esposito
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 17/27] svm: correctly skip if NPT not supported Maxim Levitsky
                   ` (11 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 lib/x86/svm.h       |   2 +
 lib/x86/svm_lib.c   | 107 ++++++++++++++++++++++++++++++++++++++++++++
 lib/x86/svm_lib.h   |  12 +++++
 x86/Makefile.x86_64 |   2 +
 x86/svm.c           |  90 ++-----------------------------------
 x86/svm.h           |   6 +--
 x86/svm_tests.c     |  18 +++++---
 7 files changed, 138 insertions(+), 99 deletions(-)
 create mode 100644 lib/x86/svm_lib.c

diff --git a/lib/x86/svm.h b/lib/x86/svm.h
index 8b836c13..d714dac9 100644
--- a/lib/x86/svm.h
+++ b/lib/x86/svm.h
@@ -2,6 +2,8 @@
 #ifndef SRC_LIB_X86_SVM_H_
 #define SRC_LIB_X86_SVM_H_
 
+#include "libcflat.h"
+
 enum {
 	INTERCEPT_INTR,
 	INTERCEPT_NMI,
diff --git a/lib/x86/svm_lib.c b/lib/x86/svm_lib.c
new file mode 100644
index 00000000..cb80f08f
--- /dev/null
+++ b/lib/x86/svm_lib.c
@@ -0,0 +1,107 @@
+#include "svm_lib.h"
+#include "libcflat.h"
+#include "processor.h"
+#include "desc.h"
+#include "msr.h"
+#include "vm.h"
+#include "smp.h"
+#include "alloc_page.h"
+#include "fwcfg.h"
+
+/* for the nested page table */
+static u64 *pml4e;
+
+static u8 *io_bitmap;
+static u8 io_bitmap_area[16384];
+
+static u8 *msr_bitmap;
+static u8 msr_bitmap_area[MSR_BITMAP_SIZE + PAGE_SIZE];
+
+
+u64 *npt_get_pte(u64 address)
+{
+	return get_pte(npt_get_pml4e(), (void *)address);
+}
+
+u64 *npt_get_pde(u64 address)
+{
+	struct pte_search search;
+
+	search = find_pte_level(npt_get_pml4e(), (void *)address, 2);
+	return search.pte;
+}
+
+u64 *npt_get_pdpe(u64 address)
+{
+	struct pte_search search;
+
+	search = find_pte_level(npt_get_pml4e(), (void *)address, 3);
+	return search.pte;
+}
+
+u64 *npt_get_pml4e(void)
+{
+	return pml4e;
+}
+
+u8 *svm_get_msr_bitmap(void)
+{
+	return msr_bitmap;
+}
+
+u8 *svm_get_io_bitmap(void)
+{
+	return io_bitmap;
+}
+
+static void set_additional_vcpu_msr(void *msr_efer)
+{
+	void *hsave = alloc_page();
+
+	wrmsr(MSR_VM_HSAVE_PA, virt_to_phys(hsave));
+	wrmsr(MSR_EFER, (ulong)msr_efer | EFER_SVME);
+}
+
+static void setup_npt(void)
+{
+	u64 size = fwcfg_get_u64(FW_CFG_RAM_SIZE);
+
+	/* Ensure all <4gb is mapped, e.g. if there's no RAM above 4gb. */
+	if (size < BIT_ULL(32))
+		size = BIT_ULL(32);
+
+	pml4e = alloc_page();
+
+	/* NPT accesses are treated as "user" accesses. */
+	__setup_mmu_range(pml4e, 0, size, X86_MMU_MAP_USER);
+}
+
+void setup_svm(void)
+{
+	void *hsave = alloc_page();
+	int i;
+
+	wrmsr(MSR_VM_HSAVE_PA, virt_to_phys(hsave));
+	wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
+
+	io_bitmap = (void *) ALIGN((ulong)io_bitmap_area, PAGE_SIZE);
+
+	msr_bitmap = (void *) ALIGN((ulong)msr_bitmap_area, PAGE_SIZE);
+
+	if (!npt_supported())
+		return;
+
+	for (i = 1; i < cpu_count(); i++)
+		on_cpu(i, (void *)set_additional_vcpu_msr, (void *)rdmsr(MSR_EFER));
+
+	printf("NPT detected - running all tests with NPT enabled\n");
+
+	/*
+	 * Nested paging supported - Build a nested page table
+	 * Build the page-table bottom-up and map everything with 4k
+	 * pages to get enough granularity for the NPT unit-tests.
+	 */
+
+	setup_npt();
+}
diff --git a/lib/x86/svm_lib.h b/lib/x86/svm_lib.h
index 04910281..b491eee6 100644
--- a/lib/x86/svm_lib.h
+++ b/lib/x86/svm_lib.h
@@ -49,5 +49,17 @@ static inline void clgi(void)
 	asm volatile ("clgi");
 }
 
+void setup_svm(void);
+
+u64 *npt_get_pte(u64 address);
+u64 *npt_get_pde(u64 address);
+u64 *npt_get_pdpe(u64 address);
+u64 *npt_get_pml4e(void);
+
+u8 *svm_get_msr_bitmap(void);
+u8 *svm_get_io_bitmap(void);
+
+#define MSR_BITMAP_SIZE 8192
+
 
 #endif /* SRC_LIB_X86_SVM_LIB_H_ */
diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64
index f76ff18a..5e4c4cc0 100644
--- a/x86/Makefile.x86_64
+++ b/x86/Makefile.x86_64
@@ -19,6 +19,8 @@ COMMON_CFLAGS += -mno-red-zone -mno-sse -mno-sse2 $(fcf_protection_full)
 cflatobjs += lib/x86/setjmp64.o
 cflatobjs += lib/x86/intel-iommu.o
 cflatobjs += lib/x86/usermode.o
+cflatobjs += lib/x86/svm_lib.o
+
 
 tests = $(TEST_DIR)/apic.$(exe) \
 	  $(TEST_DIR)/idt_test.$(exe) \
diff --git a/x86/svm.c b/x86/svm.c
index 8d90a242..9edf5500 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -16,35 +16,8 @@
 #include "apic.h"
 #include "svm_lib.h"
 
-/* for the nested page table*/
-u64 *pml4e;
-
 struct vmcb *vmcb;
 
-u64 *npt_get_pte(u64 address)
-{
-	return get_pte(npt_get_pml4e(), (void*)address);
-}
-
-u64 *npt_get_pde(u64 address)
-{
-	struct pte_search search;
-	search = find_pte_level(npt_get_pml4e(), (void*)address, 2);
-	return search.pte;
-}
-
-u64 *npt_get_pdpe(u64 address)
-{
-	struct pte_search search;
-	search = find_pte_level(npt_get_pml4e(), (void*)address, 3);
-	return search.pte;
-}
-
-u64 *npt_get_pml4e(void)
-{
-	return pml4e;
-}
-
 bool smp_supported(void)
 {
 	return cpu_count() > 1;
@@ -112,12 +85,6 @@ static void test_thunk(struct svm_test *test)
 	vmmcall();
 }
 
-u8 *io_bitmap;
-u8 io_bitmap_area[16384];
-
-u8 *msr_bitmap;
-u8 msr_bitmap_area[MSR_BITMAP_SIZE + PAGE_SIZE];
-
 void vmcb_ident(struct vmcb *vmcb)
 {
 	u64 vmcb_phys = virt_to_phys(vmcb);
@@ -153,12 +120,12 @@ void vmcb_ident(struct vmcb *vmcb)
 	ctrl->intercept = (1ULL << INTERCEPT_VMRUN) |
 		(1ULL << INTERCEPT_VMMCALL) |
 		(1ULL << INTERCEPT_SHUTDOWN);
-	ctrl->iopm_base_pa = virt_to_phys(io_bitmap);
-	ctrl->msrpm_base_pa = virt_to_phys(msr_bitmap);
+	ctrl->iopm_base_pa = virt_to_phys(svm_get_io_bitmap());
+	ctrl->msrpm_base_pa = virt_to_phys(svm_get_msr_bitmap());
 
 	if (npt_supported()) {
 		ctrl->nested_ctl = 1;
-		ctrl->nested_cr3 = (u64)pml4e;
+		ctrl->nested_cr3 = (u64)npt_get_pml4e();
 		ctrl->tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
 	}
 }
@@ -247,57 +214,6 @@ static noinline void test_run(struct svm_test *test)
 		test->on_vcpu_done = true;
 }
 
-static void set_additional_vcpu_msr(void *msr_efer)
-{
-	void *hsave = alloc_page();
-
-	wrmsr(MSR_VM_HSAVE_PA, virt_to_phys(hsave));
-	wrmsr(MSR_EFER, (ulong)msr_efer | EFER_SVME);
-}
-
-static void setup_npt(void)
-{
-	u64 size = fwcfg_get_u64(FW_CFG_RAM_SIZE);
-
-	/* Ensure all <4gb is mapped, e.g. if there's no RAM above 4gb. */
-	if (size < BIT_ULL(32))
-		size = BIT_ULL(32);
-
-	pml4e = alloc_page();
-
-	/* NPT accesses are treated as "user" accesses. */
-	__setup_mmu_range(pml4e, 0, size, X86_MMU_MAP_USER);
-}
-
-static void setup_svm(void)
-{
-	void *hsave = alloc_page();
-	int i;
-
-	wrmsr(MSR_VM_HSAVE_PA, virt_to_phys(hsave));
-	wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
-
-	io_bitmap = (void *) ALIGN((ulong)io_bitmap_area, PAGE_SIZE);
-
-	msr_bitmap = (void *) ALIGN((ulong)msr_bitmap_area, PAGE_SIZE);
-
-	if (!npt_supported())
-		return;
-
-	for (i = 1; i < cpu_count(); i++)
-		on_cpu(i, (void *)set_additional_vcpu_msr, (void *)rdmsr(MSR_EFER));
-
-	printf("NPT detected - running all tests with NPT enabled\n");
-
-	/*
-	 * Nested paging supported - Build a nested page table
-	 * Build the page-table bottom-up and map everything with 4k
-	 * pages to get enough granularity for the NPT unit-tests.
-	 */
-
-	setup_npt();
-}
-
 int matched;
 
 static bool
diff --git a/x86/svm.h b/x86/svm.h
index 7cb1b898..67f3205d 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -5,7 +5,6 @@
 #include <x86/svm.h>
 
 
-#define MSR_BITMAP_SIZE 8192
 #define LBR_CTL_ENABLE_MASK BIT_ULL(0)
 
 struct svm_test {
@@ -47,10 +46,7 @@ struct regs {
 typedef void (*test_guest_func)(struct svm_test *);
 
 int run_svm_tests(int ac, char **av, struct svm_test *svm_tests);
-u64 *npt_get_pte(u64 address);
-u64 *npt_get_pde(u64 address);
-u64 *npt_get_pdpe(u64 address);
-u64 *npt_get_pml4e(void);
+
 bool smp_supported(void);
 bool default_supported(void);
 void default_prepare(struct svm_test *test);
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index f86c2fa4..712d24e2 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -307,14 +307,13 @@ static bool check_next_rip(struct svm_test *test)
 	return address == vmcb->control.next_rip;
 }
 
-extern u8 *msr_bitmap;
 
 static void prepare_msr_intercept(struct svm_test *test)
 {
 	default_prepare(test);
 	vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT);
 	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
-	memset(msr_bitmap, 0xff, MSR_BITMAP_SIZE);
+	memset(svm_get_msr_bitmap(), 0xff, MSR_BITMAP_SIZE);
 }
 
 static void test_msr_intercept(struct svm_test *test)
@@ -425,7 +424,7 @@ static bool msr_intercept_finished(struct svm_test *test)
 
 static bool check_msr_intercept(struct svm_test *test)
 {
-	memset(msr_bitmap, 0, MSR_BITMAP_SIZE);
+	memset(svm_get_msr_bitmap(), 0, MSR_BITMAP_SIZE);
 	return (test->scratch == -2);
 }
 
@@ -537,10 +536,10 @@ static bool check_mode_switch(struct svm_test *test)
 	return test->scratch == 2;
 }
 
-extern u8 *io_bitmap;
-
 static void prepare_ioio(struct svm_test *test)
 {
+	u8 *io_bitmap = svm_get_io_bitmap();
+
 	vmcb->control.intercept |= (1ULL << INTERCEPT_IOIO_PROT);
 	test->scratch = 0;
 	memset(io_bitmap, 0, 8192);
@@ -549,6 +548,8 @@ static void prepare_ioio(struct svm_test *test)
 
 static void test_ioio(struct svm_test *test)
 {
+	u8 *io_bitmap = svm_get_io_bitmap();
+
 	// stage 0, test IO pass
 	inb(0x5000);
 	outb(0x0, 0x5000);
@@ -612,7 +613,6 @@ static void test_ioio(struct svm_test *test)
 		goto fail;
 
 	return;
-
 fail:
 	report_fail("stage %d", get_test_stage(test));
 	test->scratch = -1;
@@ -621,6 +621,7 @@ fail:
 static bool ioio_finished(struct svm_test *test)
 {
 	unsigned port, size;
+	u8 *io_bitmap = svm_get_io_bitmap();
 
 	/* Only expect IOIO intercepts */
 	if (vmcb->control.exit_code == SVM_EXIT_VMMCALL)
@@ -645,6 +646,8 @@ static bool ioio_finished(struct svm_test *test)
 
 static bool check_ioio(struct svm_test *test)
 {
+	u8 *io_bitmap = svm_get_io_bitmap();
+
 	memset(io_bitmap, 0, 8193);
 	return test->scratch != -1;
 }
@@ -2316,7 +2319,8 @@ static void test_msrpm_iopm_bitmap_addrs(void)
 {
 	u64 saved_intercept = vmcb->control.intercept;
 	u64 addr_beyond_limit = 1ull << cpuid_maxphyaddr();
-	u64 addr = virt_to_phys(msr_bitmap) & (~((1ull << 12) - 1));
+	u64 addr = virt_to_phys(svm_get_msr_bitmap()) & (~((1ull << 12) - 1));
+	u8 *io_bitmap = svm_get_io_bitmap();
 
 	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT,
 			 addr_beyond_limit - 2 * PAGE_SIZE, SVM_EXIT_ERR,
-- 
2.34.3


* [kvm-unit-tests PATCH v3 17/27] svm: correctly skip if NPT not supported
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (15 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 16/27] svm: move setup_svm() to svm_lib.c Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 18/27] svm: move vmcb_ident to svm_lib.c Maxim Levitsky
                   ` (10 subsequent siblings)
  27 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

Fail SVM setup when NPT is not supported, so that the tests are skipped
cleanly instead of running without NPT.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
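The intended caller pattern, sketched here for clarity (it mirrors the
run_svm_tests() hunk below):

	if (!setup_svm()) {
		/* NPT missing: nothing was initialized, skip all tests */
		return 0;
	}
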
 lib/x86/svm_lib.c | 16 ++++++++++------
 lib/x86/svm_lib.h |  2 +-
 x86/svm.c         |  3 ++-
 3 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/lib/x86/svm_lib.c b/lib/x86/svm_lib.c
index cb80f08f..c7194909 100644
--- a/lib/x86/svm_lib.c
+++ b/lib/x86/svm_lib.c
@@ -77,11 +77,18 @@ static void setup_npt(void)
 	__setup_mmu_range(pml4e, 0, size, X86_MMU_MAP_USER);
 }
 
-void setup_svm(void)
+bool setup_svm(void)
 {
-	void *hsave = alloc_page();
+	void *hsave;
 	int i;
 
+	if (!npt_supported()) {
+		printf("NPT not detected - skipping SVM initialization\n");
+		return false;
+	}
+
+	hsave = alloc_page();
+
 	wrmsr(MSR_VM_HSAVE_PA, virt_to_phys(hsave));
 	wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
 
@@ -89,14 +96,10 @@ void setup_svm(void)
 
 	msr_bitmap = (void *) ALIGN((ulong)msr_bitmap_area, PAGE_SIZE);
 
-	if (!npt_supported())
-		return;
 
 	for (i = 1; i < cpu_count(); i++)
 		on_cpu(i, (void *)set_additional_vcpu_msr, (void *)rdmsr(MSR_EFER));
 
-	printf("NPT detected - running all tests with NPT enabled\n");
-
 	/*
 	 * Nested paging supported - Build a nested page table
 	 * Build the page-table bottom-up and map everything with 4k
@@ -104,4 +107,5 @@ void setup_svm(void)
 	 */
 
 	setup_npt();
+	return true;
 }
diff --git a/lib/x86/svm_lib.h b/lib/x86/svm_lib.h
index b491eee6..f603ff93 100644
--- a/lib/x86/svm_lib.h
+++ b/lib/x86/svm_lib.h
@@ -49,7 +49,7 @@ static inline void clgi(void)
 	asm volatile ("clgi");
 }
 
-void setup_svm(void);
+bool setup_svm(void);
 
 u64 *npt_get_pte(u64 address);
 u64 *npt_get_pde(u64 address);
diff --git a/x86/svm.c b/x86/svm.c
index 9edf5500..cf246c37 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -264,7 +264,8 @@ int run_svm_tests(int ac, char **av, struct svm_test *svm_tests)
 		return report_summary();
 	}
 
-	setup_svm();
+	if (!setup_svm())
+		return 0;
 
 	vmcb = alloc_page();
 
-- 
2.34.3


* [kvm-unit-tests PATCH v3 18/27] svm: move vmcb_ident to svm_lib.c
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (16 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 17/27] svm: correctly skip if NPT not supported Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-12-01 16:18   ` Emanuele Giuseppe Esposito
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 19/27] svm: rewrite vm entry macros Maxim Levitsky
                   ` (9 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

Extract vmcb_ident to svm_lib.c

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
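A sketch of standalone use of the extracted helpers; guest_entry is a
hypothetical guest function, not something this patch adds:

	struct vmcb *vmcb = alloc_page();

	vmcb_ident(vmcb);			/* clone host state into the VMCB */
	vmcb->save.rip = (ulong)guest_entry;	/* hypothetical guest entry */
	/* vmcb_set_seg() is exported too, e.g. to hand the guest a flat DS */
	vmcb_set_seg(&vmcb->save.ds, read_ds(), 0, -1U,
		     3 | SVM_SELECTOR_S_MASK | SVM_SELECTOR_P_MASK |
		     SVM_SELECTOR_DB_MASK | SVM_SELECTOR_G_MASK);
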
 lib/x86/svm_lib.c | 54 +++++++++++++++++++++++++++++++++++++++++++++++
 lib/x86/svm_lib.h |  4 ++++
 x86/svm.c         | 54 -----------------------------------------------
 x86/svm.h         |  1 -
 4 files changed, 58 insertions(+), 55 deletions(-)

diff --git a/lib/x86/svm_lib.c b/lib/x86/svm_lib.c
index c7194909..aed757a1 100644
--- a/lib/x86/svm_lib.c
+++ b/lib/x86/svm_lib.c
@@ -109,3 +109,57 @@ bool setup_svm(void)
 	setup_npt();
 	return true;
 }
+
+void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
+			 u64 base, u32 limit, u32 attr)
+{
+	seg->selector = selector;
+	seg->attrib = attr;
+	seg->limit = limit;
+	seg->base = base;
+}
+
+void vmcb_ident(struct vmcb *vmcb)
+{
+	u64 vmcb_phys = virt_to_phys(vmcb);
+	struct vmcb_save_area *save = &vmcb->save;
+	struct vmcb_control_area *ctrl = &vmcb->control;
+	u32 data_seg_attr = 3 | SVM_SELECTOR_S_MASK | SVM_SELECTOR_P_MASK
+		| SVM_SELECTOR_DB_MASK | SVM_SELECTOR_G_MASK;
+	u32 code_seg_attr = 9 | SVM_SELECTOR_S_MASK | SVM_SELECTOR_P_MASK
+		| SVM_SELECTOR_L_MASK | SVM_SELECTOR_G_MASK;
+	struct descriptor_table_ptr desc_table_ptr;
+
+	memset(vmcb, 0, sizeof(*vmcb));
+	asm volatile ("vmsave %0" : : "a"(vmcb_phys) : "memory");
+	vmcb_set_seg(&save->es, read_es(), 0, -1U, data_seg_attr);
+	vmcb_set_seg(&save->cs, read_cs(), 0, -1U, code_seg_attr);
+	vmcb_set_seg(&save->ss, read_ss(), 0, -1U, data_seg_attr);
+	vmcb_set_seg(&save->ds, read_ds(), 0, -1U, data_seg_attr);
+	sgdt(&desc_table_ptr);
+	vmcb_set_seg(&save->gdtr, 0, desc_table_ptr.base, desc_table_ptr.limit, 0);
+	sidt(&desc_table_ptr);
+	vmcb_set_seg(&save->idtr, 0, desc_table_ptr.base, desc_table_ptr.limit, 0);
+	ctrl->asid = 1;
+	save->cpl = 0;
+	save->efer = rdmsr(MSR_EFER);
+	save->cr4 = read_cr4();
+	save->cr3 = read_cr3();
+	save->cr0 = read_cr0();
+	save->dr7 = read_dr7();
+	save->dr6 = read_dr6();
+	save->cr2 = read_cr2();
+	save->g_pat = rdmsr(MSR_IA32_CR_PAT);
+	save->dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+	ctrl->intercept = (1ULL << INTERCEPT_VMRUN) |
+		(1ULL << INTERCEPT_VMMCALL) |
+		(1ULL << INTERCEPT_SHUTDOWN);
+	ctrl->iopm_base_pa = virt_to_phys(io_bitmap);
+	ctrl->msrpm_base_pa = virt_to_phys(msr_bitmap);
+
+	if (npt_supported()) {
+		ctrl->nested_ctl = 1;
+		ctrl->nested_cr3 = (u64)pml4e;
+		ctrl->tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
+	}
+}
diff --git a/lib/x86/svm_lib.h b/lib/x86/svm_lib.h
index f603ff93..3bb098dc 100644
--- a/lib/x86/svm_lib.h
+++ b/lib/x86/svm_lib.h
@@ -49,7 +49,11 @@ static inline void clgi(void)
 	asm volatile ("clgi");
 }
 
+void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
+				  u64 base, u32 limit, u32 attr);
+
 bool setup_svm(void);
+void vmcb_ident(struct vmcb *vmcb);
 
 u64 *npt_get_pte(u64 address);
 u64 *npt_get_pde(u64 address);
diff --git a/x86/svm.c b/x86/svm.c
index cf246c37..5e2c3a83 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -63,15 +63,6 @@ void inc_test_stage(struct svm_test *test)
 	barrier();
 }
 
-static void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
-			 u64 base, u32 limit, u32 attr)
-{
-	seg->selector = selector;
-	seg->attrib = attr;
-	seg->limit = limit;
-	seg->base = base;
-}
-
 static test_guest_func guest_main;
 
 void test_set_guest(test_guest_func func)
@@ -85,51 +76,6 @@ static void test_thunk(struct svm_test *test)
 	vmmcall();
 }
 
-void vmcb_ident(struct vmcb *vmcb)
-{
-	u64 vmcb_phys = virt_to_phys(vmcb);
-	struct vmcb_save_area *save = &vmcb->save;
-	struct vmcb_control_area *ctrl = &vmcb->control;
-	u32 data_seg_attr = 3 | SVM_SELECTOR_S_MASK | SVM_SELECTOR_P_MASK
-		| SVM_SELECTOR_DB_MASK | SVM_SELECTOR_G_MASK;
-	u32 code_seg_attr = 9 | SVM_SELECTOR_S_MASK | SVM_SELECTOR_P_MASK
-		| SVM_SELECTOR_L_MASK | SVM_SELECTOR_G_MASK;
-	struct descriptor_table_ptr desc_table_ptr;
-
-	memset(vmcb, 0, sizeof(*vmcb));
-	asm volatile ("vmsave %0" : : "a"(vmcb_phys) : "memory");
-	vmcb_set_seg(&save->es, read_es(), 0, -1U, data_seg_attr);
-	vmcb_set_seg(&save->cs, read_cs(), 0, -1U, code_seg_attr);
-	vmcb_set_seg(&save->ss, read_ss(), 0, -1U, data_seg_attr);
-	vmcb_set_seg(&save->ds, read_ds(), 0, -1U, data_seg_attr);
-	sgdt(&desc_table_ptr);
-	vmcb_set_seg(&save->gdtr, 0, desc_table_ptr.base, desc_table_ptr.limit, 0);
-	sidt(&desc_table_ptr);
-	vmcb_set_seg(&save->idtr, 0, desc_table_ptr.base, desc_table_ptr.limit, 0);
-	ctrl->asid = 1;
-	save->cpl = 0;
-	save->efer = rdmsr(MSR_EFER);
-	save->cr4 = read_cr4();
-	save->cr3 = read_cr3();
-	save->cr0 = read_cr0();
-	save->dr7 = read_dr7();
-	save->dr6 = read_dr6();
-	save->cr2 = read_cr2();
-	save->g_pat = rdmsr(MSR_IA32_CR_PAT);
-	save->dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
-	ctrl->intercept = (1ULL << INTERCEPT_VMRUN) |
-		(1ULL << INTERCEPT_VMMCALL) |
-		(1ULL << INTERCEPT_SHUTDOWN);
-	ctrl->iopm_base_pa = virt_to_phys(svm_get_io_bitmap());
-	ctrl->msrpm_base_pa = virt_to_phys(svm_get_msr_bitmap());
-
-	if (npt_supported()) {
-		ctrl->nested_ctl = 1;
-		ctrl->nested_cr3 = (u64)npt_get_pml4e();
-		ctrl->tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
-	}
-}
-
 struct regs regs;
 
 struct regs get_regs(void)
diff --git a/x86/svm.h b/x86/svm.h
index 67f3205d..a4aabeb2 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -55,7 +55,6 @@ bool default_finished(struct svm_test *test);
 int get_test_stage(struct svm_test *test);
 void set_test_stage(struct svm_test *test, int s);
 void inc_test_stage(struct svm_test *test);
-void vmcb_ident(struct vmcb *vmcb);
 struct regs get_regs(void);
 int __svm_vmrun(u64 rip);
 void __svm_bare_vmrun(void);
-- 
2.34.3


* [kvm-unit-tests PATCH v3 19/27] svm: rewrite vm entry macros
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (17 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 18/27] svm: move vmcb_ident to svm_lib.c Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-12-02 10:14   ` Emanuele Giuseppe Esposito
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 20/27] svm: move v2 tests run into test_run Maxim Levitsky
                   ` (8 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

Make the SVM VM entry macros not use the hardcoded 'regs' label,
and simplify them as much as possible.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
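The SWAP_GPRS offsets follow the layout of struct svm_gprs (rbx at 0x08
up to r15 at 0x70; rax and rsp travel through vmcb->save instead). A
sketch of the new calling convention, together with the kind of
compile-time guard one could add; the asserts are an illustration that
assumes offsetof from stddef.h, they are not part of this patch:

	_Static_assert(offsetof(struct svm_gprs, rbx) == 0x08, "SWAP_GPRS");
	_Static_assert(offsetof(struct svm_gprs, r15) == 0x70, "SWAP_GPRS");

	struct svm_gprs regs = {};

	regs.rsp = (ulong)(guest_stack + ARRAY_SIZE(guest_stack));
	vmcb->save.rip = (ulong)test_thunk;
	SVM_VMRUN(vmcb, &regs);		/* rax/rsp are synced via vmcb->save */
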
 lib/x86/svm_lib.h | 71 +++++++++++++++++++++++++++++++++++++++++++++++
 x86/svm.c         | 58 ++++++++++++--------------------------
 x86/svm.h         | 70 ++--------------------------------------------
 x86/svm_tests.c   | 24 ++++++++++------
 4 files changed, 106 insertions(+), 117 deletions(-)

diff --git a/lib/x86/svm_lib.h b/lib/x86/svm_lib.h
index 3bb098dc..f9c2b352 100644
--- a/lib/x86/svm_lib.h
+++ b/lib/x86/svm_lib.h
@@ -66,4 +66,75 @@ u8 *svm_get_io_bitmap(void);
 #define MSR_BITMAP_SIZE 8192
 
 
+struct svm_gprs {
+	u64 rax;
+	u64 rbx;
+	u64 rcx;
+	u64 rdx;
+	u64 rbp;
+	u64 rsi;
+	u64 rdi;
+	u64 r8;
+	u64 r9;
+	u64 r10;
+	u64 r11;
+	u64 r12;
+	u64 r13;
+	u64 r14;
+	u64 r15;
+	u64 rsp;
+};
+
+#define SWAP_GPRS \
+	"xchg %%rbx, 0x08(%%rax)\n"           \
+	"xchg %%rcx, 0x10(%%rax)\n"           \
+	"xchg %%rdx, 0x18(%%rax)\n"           \
+	"xchg %%rbp, 0x20(%%rax)\n"           \
+	"xchg %%rsi, 0x28(%%rax)\n"           \
+	"xchg %%rdi, 0x30(%%rax)\n"           \
+	"xchg %%r8,  0x38(%%rax)\n"           \
+	"xchg %%r9,  0x40(%%rax)\n"           \
+	"xchg %%r10, 0x48(%%rax)\n"           \
+	"xchg %%r11, 0x50(%%rax)\n"           \
+	"xchg %%r12, 0x58(%%rax)\n"           \
+	"xchg %%r13, 0x60(%%rax)\n"           \
+	"xchg %%r14, 0x68(%%rax)\n"           \
+	"xchg %%r15, 0x70(%%rax)\n"           \
+	\
+
+
+#define __SVM_VMRUN(vmcb, regs, label)        \
+{                                             \
+	u32 dummy;                            \
+\
+	(vmcb)->save.rax = (regs)->rax;       \
+	(vmcb)->save.rsp = (regs)->rsp;       \
+\
+	asm volatile (                        \
+		"vmload %%rax\n"              \
+		"push %%rbx\n"                \
+		"push %%rax\n"                \
+		"mov %%rbx, %%rax\n"          \
+		SWAP_GPRS                     \
+		"pop %%rax\n"                 \
+		".global " label "\n"         \
+		label ": vmrun %%rax\n"       \
+		"vmsave %%rax\n"              \
+		"pop %%rax\n"                 \
+		SWAP_GPRS                     \
+		: "=a"(dummy),                \
+		  "=b"(dummy)                 \
+		: "a" (virt_to_phys(vmcb)),   \
+		  "b"(regs)                   \
+		/* clobbers*/                 \
+		: "memory"                    \
+	);                                    \
+\
+	(regs)->rax = (vmcb)->save.rax;       \
+	(regs)->rsp = (vmcb)->save.rsp;       \
+}
+
+#define SVM_VMRUN(vmcb, regs) \
+	__SVM_VMRUN(vmcb, regs, "vmrun_dummy_label_%=")
+
 #endif /* SRC_LIB_X86_SVM_LIB_H_ */
diff --git a/x86/svm.c b/x86/svm.c
index 5e2c3a83..220bce66 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -76,16 +76,13 @@ static void test_thunk(struct svm_test *test)
 	vmmcall();
 }
 
-struct regs regs;
+static struct svm_gprs regs;
 
-struct regs get_regs(void)
+struct svm_gprs *get_regs(void)
 {
-	return regs;
+	return &regs;
 }
 
-// rax handled specially below
-
-
 struct svm_test *v2_test;
 
 
@@ -94,16 +91,10 @@ u64 guest_stack[10000];
 int __svm_vmrun(u64 rip)
 {
 	vmcb->save.rip = (ulong)rip;
-	vmcb->save.rsp = (ulong)(guest_stack + ARRAY_SIZE(guest_stack));
+	regs.rsp = (ulong)(guest_stack + ARRAY_SIZE(guest_stack));
 	regs.rdi = (ulong)v2_test;
 
-	asm volatile (
-		      ASM_PRE_VMRUN_CMD
-		      "vmrun %%rax\n\t"               \
-		      ASM_POST_VMRUN_CMD
-		      :
-		      : "a" (virt_to_phys(vmcb))
-		      : "memory", "r15");
+	SVM_VMRUN(vmcb, &regs);
 
 	return (vmcb->control.exit_code);
 }
@@ -113,43 +104,28 @@ int svm_vmrun(void)
 	return __svm_vmrun((u64)test_thunk);
 }
 
-extern u8 vmrun_rip;
-
 static noinline void test_run(struct svm_test *test)
 {
-	u64 vmcb_phys = virt_to_phys(vmcb);
-
 	cli();
 	vmcb_ident(vmcb);
 
 	test->prepare(test);
 	guest_main = test->guest_func;
 	vmcb->save.rip = (ulong)test_thunk;
-	vmcb->save.rsp = (ulong)(guest_stack + ARRAY_SIZE(guest_stack));
+	regs.rsp = (ulong)(guest_stack + ARRAY_SIZE(guest_stack));
 	regs.rdi = (ulong)test;
 	do {
-		struct svm_test *the_test = test;
-		u64 the_vmcb = vmcb_phys;
-		asm volatile (
-			      "clgi;\n\t" // semi-colon needed for LLVM compatibility
-			      "sti \n\t"
-			      "call *%c[PREPARE_GIF_CLEAR](%[test]) \n \t"
-			      "mov %[vmcb_phys], %%rax \n\t"
-			      ASM_PRE_VMRUN_CMD
-			      ".global vmrun_rip\n\t"		\
-			      "vmrun_rip: vmrun %%rax\n\t"    \
-			      ASM_POST_VMRUN_CMD
-			      "cli \n\t"
-			      "stgi"
-			      : // inputs clobbered by the guest:
-				"=D" (the_test),            // first argument register
-				"=b" (the_vmcb)             // callee save register!
-			      : [test] "0" (the_test),
-				[vmcb_phys] "1"(the_vmcb),
-				[PREPARE_GIF_CLEAR] "i" (offsetof(struct svm_test, prepare_gif_clear))
-			      : "rax", "rcx", "rdx", "rsi",
-				"r8", "r9", "r10", "r11" , "r12", "r13", "r14", "r15",
-				"memory");
+
+		clgi();
+		sti();
+
+		test->prepare_gif_clear(test);
+
+		__SVM_VMRUN(vmcb, &regs, "vmrun_rip");
+
+		cli();
+		stgi();
+
 		++test->exits;
 	} while (!test->finished(test));
 	sti();
diff --git a/x86/svm.h b/x86/svm.h
index a4aabeb2..6f809ce3 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -23,26 +23,6 @@ struct svm_test {
 	bool on_vcpu_done;
 };
 
-struct regs {
-	u64 rax;
-	u64 rbx;
-	u64 rcx;
-	u64 rdx;
-	u64 cr2;
-	u64 rbp;
-	u64 rsi;
-	u64 rdi;
-	u64 r8;
-	u64 r9;
-	u64 r10;
-	u64 r11;
-	u64 r12;
-	u64 r13;
-	u64 r14;
-	u64 r15;
-	u64 rflags;
-};
-
 typedef void (*test_guest_func)(struct svm_test *);
 
 int run_svm_tests(int ac, char **av, struct svm_test *svm_tests);
@@ -55,58 +35,12 @@ bool default_finished(struct svm_test *test);
 int get_test_stage(struct svm_test *test);
 void set_test_stage(struct svm_test *test, int s);
 void inc_test_stage(struct svm_test *test);
-struct regs get_regs(void);
+struct svm_gprs *get_regs(void);
 int __svm_vmrun(u64 rip);
 void __svm_bare_vmrun(void);
 int svm_vmrun(void);
 void test_set_guest(test_guest_func func);
 
 extern struct vmcb *vmcb;
-
-
-#define SAVE_GPR_C                              \
-        "xchg %%rbx, regs+0x8\n\t"              \
-        "xchg %%rcx, regs+0x10\n\t"             \
-        "xchg %%rdx, regs+0x18\n\t"             \
-        "xchg %%rbp, regs+0x28\n\t"             \
-        "xchg %%rsi, regs+0x30\n\t"             \
-        "xchg %%rdi, regs+0x38\n\t"             \
-        "xchg %%r8, regs+0x40\n\t"              \
-        "xchg %%r9, regs+0x48\n\t"              \
-        "xchg %%r10, regs+0x50\n\t"             \
-        "xchg %%r11, regs+0x58\n\t"             \
-        "xchg %%r12, regs+0x60\n\t"             \
-        "xchg %%r13, regs+0x68\n\t"             \
-        "xchg %%r14, regs+0x70\n\t"             \
-        "xchg %%r15, regs+0x78\n\t"
-
-#define LOAD_GPR_C      SAVE_GPR_C
-
-#define ASM_PRE_VMRUN_CMD                       \
-                "vmload %%rax\n\t"              \
-                "mov regs+0x80, %%r15\n\t"      \
-                "mov %%r15, 0x170(%%rax)\n\t"   \
-                "mov regs, %%r15\n\t"           \
-                "mov %%r15, 0x1f8(%%rax)\n\t"   \
-                LOAD_GPR_C                      \
-
-#define ASM_POST_VMRUN_CMD                      \
-                SAVE_GPR_C                      \
-                "mov 0x170(%%rax), %%r15\n\t"   \
-                "mov %%r15, regs+0x80\n\t"      \
-                "mov 0x1f8(%%rax), %%r15\n\t"   \
-                "mov %%r15, regs\n\t"           \
-                "vmsave %%rax\n\t"              \
-
-
-
-#define SVM_BARE_VMRUN \
-	asm volatile ( \
-		ASM_PRE_VMRUN_CMD \
-                "vmrun %%rax\n\t"               \
-		ASM_POST_VMRUN_CMD \
-		: \
-		: "a" (virt_to_phys(vmcb)) \
-		: "memory", "r15") \
-
+extern struct svm_test svm_tests[];
 #endif
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 712d24e2..70e41300 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -399,7 +399,7 @@ static bool msr_intercept_finished(struct svm_test *test)
 		 * RCX holds the MSR index.
 		 */
 		printf("%s 0x%lx #GP exception\n",
-		       exit_info_1 ? "WRMSR" : "RDMSR", get_regs().rcx);
+		       exit_info_1 ? "WRMSR" : "RDMSR", get_regs()->rcx);
 	}
 
 	/* Jump over RDMSR/WRMSR instruction */
@@ -415,9 +415,9 @@ static bool msr_intercept_finished(struct svm_test *test)
 	 */
 	if (exit_info_1)
 		test->scratch =
-			((get_regs().rdx << 32) | (vmcb->save.rax & 0xffffffff));
+			((get_regs()->rdx << 32) | (get_regs()->rax & 0xffffffff));
 	else
-		test->scratch = get_regs().rcx;
+		test->scratch = get_regs()->rcx;
 
 	return false;
 }
@@ -1842,7 +1842,7 @@ static volatile bool host_rflags_set_tf = false;
 static volatile bool host_rflags_set_rf = false;
 static u64 rip_detected;
 
-extern u64 *vmrun_rip;
+extern u64 vmrun_rip;
 
 static void host_rflags_db_handler(struct ex_regs *r)
 {
@@ -2878,6 +2878,8 @@ static void svm_lbrv_test0(void)
 
 static void svm_lbrv_test1(void)
 {
+	struct svm_gprs *regs = get_regs();
+
 	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(1)");
 
 	vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
@@ -2885,7 +2887,7 @@ static void svm_lbrv_test1(void)
 
 	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
 	DO_BRANCH(host_branch1);
-	SVM_BARE_VMRUN;
+	SVM_VMRUN(vmcb, regs);
 	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
 
 	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
@@ -2900,6 +2902,8 @@ static void svm_lbrv_test1(void)
 
 static void svm_lbrv_test2(void)
 {
+	struct svm_gprs *regs = get_regs();
+
 	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(2)");
 
 	vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
@@ -2908,7 +2912,7 @@ static void svm_lbrv_test2(void)
 	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
 	DO_BRANCH(host_branch2);
 	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
-	SVM_BARE_VMRUN;
+	SVM_VMRUN(vmcb, regs);
 	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
 	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
 
@@ -2924,6 +2928,8 @@ static void svm_lbrv_test2(void)
 
 static void svm_lbrv_nested_test1(void)
 {
+	struct svm_gprs *regs = get_regs();
+
 	if (!lbrv_supported()) {
 		report_skip("LBRV not supported in the guest");
 		return;
@@ -2936,7 +2942,7 @@ static void svm_lbrv_nested_test1(void)
 
 	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
 	DO_BRANCH(host_branch3);
-	SVM_BARE_VMRUN;
+	SVM_VMRUN(vmcb, regs);
 	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
 	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
 
@@ -2957,6 +2963,8 @@ static void svm_lbrv_nested_test1(void)
 
 static void svm_lbrv_nested_test2(void)
 {
+	struct svm_gprs *regs = get_regs();
+
 	if (!lbrv_supported()) {
 		report_skip("LBRV not supported in the guest");
 		return;
@@ -2972,7 +2980,7 @@ static void svm_lbrv_nested_test2(void)
 
 	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
 	DO_BRANCH(host_branch4);
-	SVM_BARE_VMRUN;
+	SVM_VMRUN(vmcb, regs);
 	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
 	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
 
-- 
2.34.3


* [kvm-unit-tests PATCH v3 20/27] svm: move v2 tests run into test_run
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (18 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 19/27] svm: rewrite vm entry macros Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-12-02  9:53   ` Emanuele Giuseppe Esposito
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 21/27] svm: cleanup the default_prepare Maxim Levitsky
                   ` (7 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

Move the running of v2 tests into test_run(), which keeps the code that
runs a test in one place and allows v2 tests to run on a non-zero vCPU
if needed.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
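With the dispatch folded in, a v2 test is just a function plus an
svm_tests[] entry. A sketch with illustrative names (my_guest and
my_v2_test are not part of this patch):

	static void my_guest(struct svm_test *test)
	{
		/* test_thunk() issues the vmmcall after we return */
	}

	static void my_v2_test(void)
	{
		test_set_guest(my_guest);
		report(svm_vmrun() == SVM_EXIT_VMMCALL, "guest ran to vmmcall");
	}

	/* in svm_tests[], now routed through the same test_run() path: */
	{ .name = "my_v2_test", .supported = default_supported,
	  .v2 = my_v2_test },
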
 x86/svm.c | 33 +++++++++++++++++++--------------
 1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/x86/svm.c b/x86/svm.c
index 220bce66..2ab553a5 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -106,6 +106,13 @@ int svm_vmrun(void)
 
 static noinline void test_run(struct svm_test *test)
 {
+	if (test->v2) {
+		vmcb_ident(vmcb);
+		v2_test = test;
+		test->v2();
+		return;
+	}
+
 	cli();
 	vmcb_ident(vmcb);
 
@@ -196,21 +203,19 @@ int run_svm_tests(int ac, char **av, struct svm_test *svm_tests)
 			continue;
 		if (svm_tests[i].supported && !svm_tests[i].supported())
 			continue;
-		if (svm_tests[i].v2 == NULL) {
-			if (svm_tests[i].on_vcpu) {
-				if (cpu_count() <= svm_tests[i].on_vcpu)
-					continue;
-				on_cpu_async(svm_tests[i].on_vcpu, (void *)test_run, &svm_tests[i]);
-				while (!svm_tests[i].on_vcpu_done)
-					cpu_relax();
-			}
-			else
-				test_run(&svm_tests[i]);
-		} else {
-			vmcb_ident(vmcb);
-			v2_test = &(svm_tests[i]);
-			svm_tests[i].v2();
+
+		if (!svm_tests[i].on_vcpu) {
+			test_run(&svm_tests[i]);
+			continue;
 		}
+
+		if (cpu_count() <= svm_tests[i].on_vcpu)
+			continue;
+
+		on_cpu_async(svm_tests[i].on_vcpu, (void *)test_run, &svm_tests[i]);
+
+		while (!svm_tests[i].on_vcpu_done)
+			cpu_relax();
 	}
 
 	if (!matched)
-- 
2.34.3


* [kvm-unit-tests PATCH v3 21/27] svm: cleanup the default_prepare
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (19 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 20/27] svm: move v2 tests run into test_run Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-12-02  9:45   ` Emanuele Giuseppe Esposito
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 22/27] svm: introduce svm_vcpu Maxim Levitsky
                   ` (6 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

default_prepare() only calls vmcb_ident(), which is called before each
test anyway.

Also, don't call this now-empty function from the other .prepare
functions.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
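After this change a typical .prepare callback only sets up what the
test actually needs, since test_run() has already re-identified the
VMCB. For instance, prepare_cr3_intercept() from the hunks below
reduces to:

	static void prepare_cr3_intercept(struct svm_test *test)
	{
		/* no default_prepare() call needed anymore */
		vmcb->control.intercept_cr_read |= 1 << 3;
	}
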
 x86/svm.c       |  1 -
 x86/svm_tests.c | 18 ------------------
 2 files changed, 19 deletions(-)

diff --git a/x86/svm.c b/x86/svm.c
index 2ab553a5..5667402b 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -30,7 +30,6 @@ bool default_supported(void)
 
 void default_prepare(struct svm_test *test)
 {
-	vmcb_ident(vmcb);
 }
 
 void default_prepare_gif_clear(struct svm_test *test)
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 70e41300..3b68718e 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -69,7 +69,6 @@ static bool check_vmrun(struct svm_test *test)
 
 static void prepare_rsm_intercept(struct svm_test *test)
 {
-	default_prepare(test);
 	vmcb->control.intercept |= 1 << INTERCEPT_RSM;
 	vmcb->control.intercept_exceptions |= (1ULL << UD_VECTOR);
 }
@@ -115,7 +114,6 @@ static bool finished_rsm_intercept(struct svm_test *test)
 
 static void prepare_cr3_intercept(struct svm_test *test)
 {
-	default_prepare(test);
 	vmcb->control.intercept_cr_read |= 1 << 3;
 }
 
@@ -149,7 +147,6 @@ static void corrupt_cr3_intercept_bypass(void *_test)
 
 static void prepare_cr3_intercept_bypass(struct svm_test *test)
 {
-	default_prepare(test);
 	vmcb->control.intercept_cr_read |= 1 << 3;
 	on_cpu_async(1, corrupt_cr3_intercept_bypass, test);
 }
@@ -169,7 +166,6 @@ static void test_cr3_intercept_bypass(struct svm_test *test)
 
 static void prepare_dr_intercept(struct svm_test *test)
 {
-	default_prepare(test);
 	vmcb->control.intercept_dr_read = 0xff;
 	vmcb->control.intercept_dr_write = 0xff;
 }
@@ -310,7 +306,6 @@ static bool check_next_rip(struct svm_test *test)
 
 static void prepare_msr_intercept(struct svm_test *test)
 {
-	default_prepare(test);
 	vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT);
 	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
 	memset(svm_get_msr_bitmap(), 0xff, MSR_BITMAP_SIZE);
@@ -711,7 +706,6 @@ static bool tsc_adjust_supported(void)
 
 static void tsc_adjust_prepare(struct svm_test *test)
 {
-	default_prepare(test);
 	vmcb->control.tsc_offset = TSC_OFFSET_VALUE;
 
 	wrmsr(MSR_IA32_TSC_ADJUST, -TSC_ADJUST_VALUE);
@@ -811,7 +805,6 @@ static void svm_tsc_scale_test(void)
 
 static void latency_prepare(struct svm_test *test)
 {
-	default_prepare(test);
 	runs = LATENCY_RUNS;
 	latvmrun_min = latvmexit_min = -1ULL;
 	latvmrun_max = latvmexit_max = 0;
@@ -884,7 +877,6 @@ static bool latency_check(struct svm_test *test)
 
 static void lat_svm_insn_prepare(struct svm_test *test)
 {
-	default_prepare(test);
 	runs = LATENCY_RUNS;
 	latvmload_min = latvmsave_min = latstgi_min = latclgi_min = -1ULL;
 	latvmload_max = latvmsave_max = latstgi_max = latclgi_max = 0;
@@ -965,7 +957,6 @@ static void pending_event_prepare(struct svm_test *test)
 {
 	int ipi_vector = 0xf1;
 
-	default_prepare(test);
 
 	pending_event_ipi_fired = false;
 
@@ -1033,8 +1024,6 @@ static bool pending_event_check(struct svm_test *test)
 
 static void pending_event_cli_prepare(struct svm_test *test)
 {
-	default_prepare(test);
-
 	pending_event_ipi_fired = false;
 
 	handle_irq(0xf1, pending_event_ipi_isr);
@@ -1139,7 +1128,6 @@ static void timer_isr(isr_regs_t *regs)
 
 static void interrupt_prepare(struct svm_test *test)
 {
-	default_prepare(test);
 	handle_irq(TIMER_VECTOR, timer_isr);
 	timer_fired = false;
 	set_test_stage(test, 0);
@@ -1272,7 +1260,6 @@ static void nmi_handler(struct ex_regs *regs)
 
 static void nmi_prepare(struct svm_test *test)
 {
-	default_prepare(test);
 	nmi_fired = false;
 	handle_exception(NMI_VECTOR, nmi_handler);
 	set_test_stage(test, 0);
@@ -1450,7 +1437,6 @@ static void my_isr(struct ex_regs *r)
 
 static void exc_inject_prepare(struct svm_test *test)
 {
-	default_prepare(test);
 	handle_exception(DE_VECTOR, my_isr);
 	handle_exception(NMI_VECTOR, my_isr);
 }
@@ -1519,7 +1505,6 @@ static void virq_isr(isr_regs_t *regs)
 static void virq_inject_prepare(struct svm_test *test)
 {
 	handle_irq(0xf1, virq_isr);
-	default_prepare(test);
 	vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
 		(0x0f << V_INTR_PRIO_SHIFT); // Set to the highest priority
 	vmcb->control.int_vector = 0xf1;
@@ -1682,7 +1667,6 @@ static void reg_corruption_isr(isr_regs_t *regs)
 
 static void reg_corruption_prepare(struct svm_test *test)
 {
-	default_prepare(test);
 	set_test_stage(test, 0);
 
 	vmcb->control.int_ctl = V_INTR_MASKING_MASK;
@@ -1877,7 +1861,6 @@ static void host_rflags_db_handler(struct ex_regs *r)
 
 static void host_rflags_prepare(struct svm_test *test)
 {
-	default_prepare(test);
 	handle_exception(DB_VECTOR, host_rflags_db_handler);
 	set_test_stage(test, 0);
 }
@@ -2610,7 +2593,6 @@ static void svm_vmload_vmsave(void)
 
 static void prepare_vgif_enabled(struct svm_test *test)
 {
-	default_prepare(test);
 }
 
 static void test_vgif(struct svm_test *test)
-- 
2.34.3


* [kvm-unit-tests PATCH v3 22/27] svm: introduce svm_vcpu
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (20 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 21/27] svm: cleanup the default_prepare Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 23/27] svm: introduce struct svm_test_context Maxim Levitsky
                   ` (5 subsequent siblings)
  27 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

This adds the minimum amount of code needed to support tests that
run SVM on more than one vCPU.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
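A sketch of the per-vCPU flow this introduces; my_guest is an
illustrative guest entry point, not part of the patch:

	struct svm_vcpu vcpu;

	svm_vcpu_init(&vcpu);		/* VMCB + 16-page stack, identity state */
	vcpu.vmcb->save.rip = (ulong)my_guest;
	vcpu.regs.rsp = (ulong)vcpu.stack;
	SVM_VMRUN(&vcpu);
	report(vcpu.vmcb->control.exit_code == SVM_EXIT_VMMCALL,
	       "guest exited with vmmcall");
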
 lib/x86/svm_lib.c |  16 +-
 lib/x86/svm_lib.h |  31 ++-
 x86/svm.c         |  36 +--
 x86/svm.h         |   5 +-
 x86/svm_npt.c     |  44 ++--
 x86/svm_tests.c   | 649 +++++++++++++++++++++++-----------------------
 6 files changed, 401 insertions(+), 380 deletions(-)

diff --git a/lib/x86/svm_lib.c b/lib/x86/svm_lib.c
index aed757a1..f705f0ae 100644
--- a/lib/x86/svm_lib.c
+++ b/lib/x86/svm_lib.c
@@ -119,7 +119,7 @@ void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
 	seg->base = base;
 }
 
-void vmcb_ident(struct vmcb *vmcb)
+static void vmcb_ident(struct vmcb *vmcb)
 {
 	u64 vmcb_phys = virt_to_phys(vmcb);
 	struct vmcb_save_area *save = &vmcb->save;
@@ -163,3 +163,17 @@ void vmcb_ident(struct vmcb *vmcb)
 		ctrl->tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
 	}
 }
+
+void svm_vcpu_ident(struct svm_vcpu *vcpu)
+{
+	vmcb_ident(vcpu->vmcb);
+	memset(&vcpu->regs, 0, sizeof(vcpu->regs));
+	vcpu->vmcb->save.rsp = (ulong)(vcpu->stack);
+}
+
+void svm_vcpu_init(struct svm_vcpu *vcpu)
+{
+	vcpu->vmcb = alloc_page();
+	vcpu->stack = alloc_pages(4) + (PAGE_SIZE << 4);
+	svm_vcpu_ident(vcpu);
+}
diff --git a/lib/x86/svm_lib.h b/lib/x86/svm_lib.h
index f9c2b352..fba34eae 100644
--- a/lib/x86/svm_lib.h
+++ b/lib/x86/svm_lib.h
@@ -53,7 +53,6 @@ void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
 				  u64 base, u32 limit, u32 attr);
 
 bool setup_svm(void);
-void vmcb_ident(struct vmcb *vmcb);
 
 u64 *npt_get_pte(u64 address);
 u64 *npt_get_pde(u64 address);
@@ -85,6 +84,16 @@ struct svm_gprs {
 	u64 rsp;
 };
 
+struct svm_vcpu {
+	struct vmcb *vmcb;
+	struct svm_gprs regs;
+	void *stack;
+};
+
+void svm_vcpu_init(struct svm_vcpu *vcpu);
+void svm_vcpu_ident(struct svm_vcpu *vcpu);
+
+
 #define SWAP_GPRS \
 	"xchg %%rbx, 0x08(%%rax)\n"           \
 	"xchg %%rcx, 0x10(%%rax)\n"           \
@@ -103,12 +112,14 @@ struct svm_gprs {
 	\
 
 
-#define __SVM_VMRUN(vmcb, regs, label)        \
-{                                             \
-	u32 dummy;                            \
+#define __SVM_VMRUN(vcpu, label)                 \
+{                                                \
+	u32 dummy;                               \
+	struct vmcb *vmcb = (vcpu)->vmcb;        \
+	struct svm_gprs *regs = &((vcpu)->regs); \
 \
-	(vmcb)->save.rax = (regs)->rax;       \
-	(vmcb)->save.rsp = (regs)->rsp;       \
+	vmcb->save.rax = regs->rax;              \
+	vmcb->save.rsp = regs->rsp;              \
 \
 	asm volatile (                        \
 		"vmload %%rax\n"              \
@@ -130,11 +141,11 @@ struct svm_gprs {
 		: "memory"                    \
 	);                                    \
 \
-	(regs)->rax = (vmcb)->save.rax;       \
-	(regs)->rsp = (vmcb)->save.rsp;       \
+	regs->rax = vmcb->save.rax;           \
+	regs->rsp = vmcb->save.rsp;           \
 }
 
-#define SVM_VMRUN(vmcb, regs) \
-	__SVM_VMRUN(vmcb, regs, "vmrun_dummy_label_%=")
+#define SVM_VMRUN(vcpu) \
+	__SVM_VMRUN(vcpu, "vmrun_dummy_label_%=")
 
 #endif /* SRC_LIB_X86_SVM_LIB_H_ */
diff --git a/x86/svm.c b/x86/svm.c
index 5667402b..51ed4d06 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -16,7 +16,7 @@
 #include "apic.h"
 #include "svm_lib.h"
 
-struct vmcb *vmcb;
+struct svm_vcpu vcpu0;
 
 bool smp_supported(void)
 {
@@ -75,27 +75,17 @@ static void test_thunk(struct svm_test *test)
 	vmmcall();
 }
 
-static struct svm_gprs regs;
-
-struct svm_gprs *get_regs(void)
-{
-	return &regs;
-}
 
 struct svm_test *v2_test;
 
 
-u64 guest_stack[10000];
-
 int __svm_vmrun(u64 rip)
 {
-	vmcb->save.rip = (ulong)rip;
-	regs.rsp = (ulong)(guest_stack + ARRAY_SIZE(guest_stack));
-	regs.rdi = (ulong)v2_test;
-
-	SVM_VMRUN(vmcb, &regs);
-
-	return (vmcb->control.exit_code);
+	vcpu0.vmcb->save.rip = (ulong)rip;
+	vcpu0.regs.rdi = (ulong)v2_test;
+	vcpu0.regs.rsp = (ulong)(vcpu0.stack);
+	SVM_VMRUN(&vcpu0);
+	return vcpu0.vmcb->control.exit_code;
 }
 
 int svm_vmrun(void)
@@ -105,21 +95,21 @@ int svm_vmrun(void)
 
 static noinline void test_run(struct svm_test *test)
 {
+	svm_vcpu_ident(&vcpu0);
+
 	if (test->v2) {
-		vmcb_ident(vmcb);
 		v2_test = test;
 		test->v2();
 		return;
 	}
 
 	cli();
-	vmcb_ident(vmcb);
 
 	test->prepare(test);
 	guest_main = test->guest_func;
-	vmcb->save.rip = (ulong)test_thunk;
-	regs.rsp = (ulong)(guest_stack + ARRAY_SIZE(guest_stack));
-	regs.rdi = (ulong)test;
+	vcpu0.vmcb->save.rip = (ulong)test_thunk;
+	vcpu0.regs.rsp = (ulong)(vcpu0.stack);
+	vcpu0.regs.rdi = (ulong)test;
 	do {
 
 		clgi();
@@ -127,7 +117,7 @@ static noinline void test_run(struct svm_test *test)
 
 		test->prepare_gif_clear(test);
 
-		__SVM_VMRUN(vmcb, &regs, "vmrun_rip");
+		__SVM_VMRUN(&vcpu0, "vmrun_rip");
 
 		cli();
 		stgi();
@@ -195,7 +185,7 @@ int run_svm_tests(int ac, char **av, struct svm_test *svm_tests)
 	if (!setup_svm())
 		return 0;
 
-	vmcb = alloc_page();
+	svm_vcpu_init(&vcpu0);
 
 	for (; svm_tests[i].name != NULL; i++) {
 		if (!test_wanted(svm_tests[i].name, av, ac))
diff --git a/x86/svm.h b/x86/svm.h
index 6f809ce3..61fd2387 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -35,12 +35,13 @@ bool default_finished(struct svm_test *test);
 int get_test_stage(struct svm_test *test);
 void set_test_stage(struct svm_test *test, int s);
 void inc_test_stage(struct svm_test *test);
-struct svm_gprs *get_regs(void);
 int __svm_vmrun(u64 rip);
 void __svm_bare_vmrun(void);
 int svm_vmrun(void);
 void test_set_guest(test_guest_func func);
 
-extern struct vmcb *vmcb;
+
 extern struct svm_test svm_tests[];
+extern struct svm_vcpu vcpu0;
+
 #endif
diff --git a/x86/svm_npt.c b/x86/svm_npt.c
index 8aac0bb6..53a82793 100644
--- a/x86/svm_npt.c
+++ b/x86/svm_npt.c
@@ -31,8 +31,8 @@ static bool npt_np_check(struct svm_test *test)
 
 	*pte |= 1ULL;
 
-	return (vmcb->control.exit_code == SVM_EXIT_NPF)
-	    && (vmcb->control.exit_info_1 == 0x100000004ULL);
+	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
+	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000004ULL);
 }
 
 static void npt_nx_prepare(struct svm_test *test)
@@ -43,7 +43,7 @@ static void npt_nx_prepare(struct svm_test *test)
 	wrmsr(MSR_EFER, test->scratch | EFER_NX);
 
 	/* Clear the guest's EFER.NX, it should not affect NPT behavior. */
-	vmcb->save.efer &= ~EFER_NX;
+	vcpu0.vmcb->save.efer &= ~EFER_NX;
 
 	pte = npt_get_pte((u64) null_test);
 
@@ -58,8 +58,8 @@ static bool npt_nx_check(struct svm_test *test)
 
 	*pte &= ~PT64_NX_MASK;
 
-	return (vmcb->control.exit_code == SVM_EXIT_NPF)
-	    && (vmcb->control.exit_info_1 == 0x100000015ULL);
+	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
+	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000015ULL);
 }
 
 static void npt_us_prepare(struct svm_test *test)
@@ -83,8 +83,8 @@ static bool npt_us_check(struct svm_test *test)
 
 	*pte |= (1ULL << 2);
 
-	return (vmcb->control.exit_code == SVM_EXIT_NPF)
-	    && (vmcb->control.exit_info_1 == 0x100000005ULL);
+	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
+	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000005ULL);
 }
 
 static void npt_rw_prepare(struct svm_test *test)
@@ -110,8 +110,8 @@ static bool npt_rw_check(struct svm_test *test)
 
 	*pte |= (1ULL << 1);
 
-	return (vmcb->control.exit_code == SVM_EXIT_NPF)
-	    && (vmcb->control.exit_info_1 == 0x100000007ULL);
+	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
+	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000007ULL);
 }
 
 static void npt_rw_pfwalk_prepare(struct svm_test *test)
@@ -130,9 +130,9 @@ static bool npt_rw_pfwalk_check(struct svm_test *test)
 
 	*pte |= (1ULL << 1);
 
-	return (vmcb->control.exit_code == SVM_EXIT_NPF)
-	    && (vmcb->control.exit_info_1 == 0x200000007ULL)
-	    && (vmcb->control.exit_info_2 == read_cr3());
+	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
+	    && (vcpu0.vmcb->control.exit_info_1 == 0x200000007ULL)
+	    && (vcpu0.vmcb->control.exit_info_2 == read_cr3());
 }
 
 static void npt_l1mmio_prepare(struct svm_test *test)
@@ -181,8 +181,8 @@ static bool npt_rw_l1mmio_check(struct svm_test *test)
 
 	*pte |= (1ULL << 1);
 
-	return (vmcb->control.exit_code == SVM_EXIT_NPF)
-	    && (vmcb->control.exit_info_1 == 0x100000007ULL);
+	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
+	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000007ULL);
 }
 
 static void basic_guest_main(struct svm_test *test)
@@ -199,8 +199,8 @@ static void __svm_npt_rsvd_bits_test(u64 * pxe, u64 rsvd_bits, u64 efer,
 	wrmsr(MSR_EFER, efer);
 	write_cr4(cr4);
 
-	vmcb->save.efer = guest_efer;
-	vmcb->save.cr4 = guest_cr4;
+	vcpu0.vmcb->save.efer = guest_efer;
+	vcpu0.vmcb->save.cr4 = guest_cr4;
 
 	*pxe |= rsvd_bits;
 
@@ -226,10 +226,10 @@ static void __svm_npt_rsvd_bits_test(u64 * pxe, u64 rsvd_bits, u64 efer,
 
 	}
 
-	report(vmcb->control.exit_info_1 == pfec,
+	report(vcpu0.vmcb->control.exit_info_1 == pfec,
 	       "Wanted PFEC = 0x%lx, got PFEC = %lx, PxE = 0x%lx.  "
 	       "host.NX = %u, host.SMEP = %u, guest.NX = %u, guest.SMEP = %u",
-	       pfec, vmcb->control.exit_info_1, *pxe,
+	       pfec, vcpu0.vmcb->control.exit_info_1, *pxe,
 	       !!(efer & EFER_NX), !!(cr4 & X86_CR4_SMEP),
 	       !!(guest_efer & EFER_NX), !!(guest_cr4 & X86_CR4_SMEP));
 
@@ -317,8 +317,8 @@ static void svm_npt_rsvd_bits_test(void)
 
 	saved_efer = host_efer = rdmsr(MSR_EFER);
 	saved_cr4 = host_cr4 = read_cr4();
-	sg_efer = guest_efer = vmcb->save.efer;
-	sg_cr4 = guest_cr4 = vmcb->save.cr4;
+	sg_efer = guest_efer = vcpu0.vmcb->save.efer;
+	sg_cr4 = guest_cr4 = vcpu0.vmcb->save.cr4;
 
 	test_set_guest(basic_guest_main);
 
@@ -350,8 +350,8 @@ skip_pte_test:
 
 	wrmsr(MSR_EFER, saved_efer);
 	write_cr4(saved_cr4);
-	vmcb->save.efer = sg_efer;
-	vmcb->save.cr4 = sg_cr4;
+	vcpu0.vmcb->save.efer = sg_efer;
+	vcpu0.vmcb->save.cr4 = sg_cr4;
 }
 
 #define NPT_V1_TEST(name, prepare, guest_code, check)				\
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 3b68718e..0312df33 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -44,33 +44,33 @@ static void null_test(struct svm_test *test)
 
 static bool null_check(struct svm_test *test)
 {
-	return vmcb->control.exit_code == SVM_EXIT_VMMCALL;
+	return vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL;
 }
 
 static void prepare_no_vmrun_int(struct svm_test *test)
 {
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMRUN);
+	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMRUN);
 }
 
 static bool check_no_vmrun_int(struct svm_test *test)
 {
-	return vmcb->control.exit_code == SVM_EXIT_ERR;
+	return vcpu0.vmcb->control.exit_code == SVM_EXIT_ERR;
 }
 
 static void test_vmrun(struct svm_test *test)
 {
-	asm volatile ("vmrun %0" : : "a"(virt_to_phys(vmcb)));
+	asm volatile ("vmrun %0" : : "a"(virt_to_phys(vcpu0.vmcb)));
 }
 
 static bool check_vmrun(struct svm_test *test)
 {
-	return vmcb->control.exit_code == SVM_EXIT_VMRUN;
+	return vcpu0.vmcb->control.exit_code == SVM_EXIT_VMRUN;
 }
 
 static void prepare_rsm_intercept(struct svm_test *test)
 {
-	vmcb->control.intercept |= 1 << INTERCEPT_RSM;
-	vmcb->control.intercept_exceptions |= (1ULL << UD_VECTOR);
+	vcpu0.vmcb->control.intercept |= 1 << INTERCEPT_RSM;
+	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << UD_VECTOR);
 }
 
 static void test_rsm_intercept(struct svm_test *test)
@@ -87,22 +87,22 @@ static bool finished_rsm_intercept(struct svm_test *test)
 {
 	switch (get_test_stage(test)) {
 	case 0:
-		if (vmcb->control.exit_code != SVM_EXIT_RSM) {
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_RSM) {
 			report_fail("VMEXIT not due to rsm. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
-		vmcb->control.intercept &= ~(1 << INTERCEPT_RSM);
+		vcpu0.vmcb->control.intercept &= ~(1 << INTERCEPT_RSM);
 		inc_test_stage(test);
 		break;
 
 	case 1:
-		if (vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + UD_VECTOR) {
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + UD_VECTOR) {
 			report_fail("VMEXIT not due to #UD. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
-		vmcb->save.rip += 2;
+		vcpu0.vmcb->save.rip += 2;
 		inc_test_stage(test);
 		break;
 
@@ -114,7 +114,7 @@ static bool finished_rsm_intercept(struct svm_test *test)
 
 static void prepare_cr3_intercept(struct svm_test *test)
 {
-	vmcb->control.intercept_cr_read |= 1 << 3;
+	vcpu0.vmcb->control.intercept_cr_read |= 1 << 3;
 }
 
 static void test_cr3_intercept(struct svm_test *test)
@@ -124,7 +124,7 @@ static void test_cr3_intercept(struct svm_test *test)
 
 static bool check_cr3_intercept(struct svm_test *test)
 {
-	return vmcb->control.exit_code == SVM_EXIT_READ_CR3;
+	return vcpu0.vmcb->control.exit_code == SVM_EXIT_READ_CR3;
 }
 
 static bool check_cr3_nointercept(struct svm_test *test)
@@ -147,7 +147,7 @@ static void corrupt_cr3_intercept_bypass(void *_test)
 
 static void prepare_cr3_intercept_bypass(struct svm_test *test)
 {
-	vmcb->control.intercept_cr_read |= 1 << 3;
+	vcpu0.vmcb->control.intercept_cr_read |= 1 << 3;
 	on_cpu_async(1, corrupt_cr3_intercept_bypass, test);
 }
 
@@ -166,8 +166,8 @@ static void test_cr3_intercept_bypass(struct svm_test *test)
 
 static void prepare_dr_intercept(struct svm_test *test)
 {
-	vmcb->control.intercept_dr_read = 0xff;
-	vmcb->control.intercept_dr_write = 0xff;
+	vcpu0.vmcb->control.intercept_dr_read = 0xff;
+	vcpu0.vmcb->control.intercept_dr_write = 0xff;
 }
 
 static void test_dr_intercept(struct svm_test *test)
@@ -251,7 +251,7 @@ static void test_dr_intercept(struct svm_test *test)
 
 static bool dr_intercept_finished(struct svm_test *test)
 {
-	ulong n = (vmcb->control.exit_code - SVM_EXIT_READ_DR0);
+	ulong n = (vcpu0.vmcb->control.exit_code - SVM_EXIT_READ_DR0);
 
 	/* Only expect DR intercepts */
 	if (n > (SVM_EXIT_MAX_DR_INTERCEPT - SVM_EXIT_READ_DR0))
@@ -267,7 +267,7 @@ static bool dr_intercept_finished(struct svm_test *test)
 	test->scratch = (n % 16);
 
 	/* Jump over MOV instruction */
-	vmcb->save.rip += 3;
+	vcpu0.vmcb->save.rip += 3;
 
 	return false;
 }
@@ -284,7 +284,7 @@ static bool next_rip_supported(void)
 
 static void prepare_next_rip(struct svm_test *test)
 {
-	vmcb->control.intercept |= (1ULL << INTERCEPT_RDTSC);
+	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_RDTSC);
 }
 
 
@@ -300,14 +300,14 @@ static bool check_next_rip(struct svm_test *test)
 	extern char exp_next_rip;
 	unsigned long address = (unsigned long)&exp_next_rip;
 
-	return address == vmcb->control.next_rip;
+	return address == vcpu0.vmcb->control.next_rip;
 }
 
 
 static void prepare_msr_intercept(struct svm_test *test)
 {
-	vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT);
-	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
+	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT);
+	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
 	memset(svm_get_msr_bitmap(), 0xff, MSR_BITMAP_SIZE);
 }
 
@@ -359,12 +359,12 @@ static void test_msr_intercept(struct svm_test *test)
 
 static bool msr_intercept_finished(struct svm_test *test)
 {
-	u32 exit_code = vmcb->control.exit_code;
+	u32 exit_code = vcpu0.vmcb->control.exit_code;
 	u64 exit_info_1;
 	u8 *opcode;
 
 	if (exit_code == SVM_EXIT_MSR) {
-		exit_info_1 = vmcb->control.exit_info_1;
+		exit_info_1 = vcpu0.vmcb->control.exit_info_1;
 	} else {
 		/*
 		 * If #GP exception occurs instead, check that it was
@@ -374,7 +374,7 @@ static bool msr_intercept_finished(struct svm_test *test)
 		if (exit_code != (SVM_EXIT_EXCP_BASE + GP_VECTOR))
 			return true;
 
-		opcode = (u8 *)vmcb->save.rip;
+		opcode = (u8 *)vcpu0.vmcb->save.rip;
 		if (opcode[0] != 0x0f)
 			return true;
 
@@ -394,11 +394,11 @@ static bool msr_intercept_finished(struct svm_test *test)
 		 * RCX holds the MSR index.
 		 */
 		printf("%s 0x%lx #GP exception\n",
-		       exit_info_1 ? "WRMSR" : "RDMSR", get_regs()->rcx);
+		       exit_info_1 ? "WRMSR" : "RDMSR", vcpu0.regs.rcx);
 	}
 
 	/* Jump over RDMSR/WRMSR instruction */
-	vmcb->save.rip += 2;
+	vcpu0.vmcb->save.rip += 2;
 
 	/*
 	 * Test whether the intercept was for RDMSR/WRMSR.
@@ -410,9 +410,9 @@ static bool msr_intercept_finished(struct svm_test *test)
 	 */
 	if (exit_info_1)
 		test->scratch =
-			((get_regs()->rdx << 32) | (get_regs()->rax & 0xffffffff));
+			((vcpu0.regs.rdx << 32) | (vcpu0.regs.rax & 0xffffffff));
 	else
-		test->scratch = get_regs()->rcx;
+		test->scratch = vcpu0.regs.rcx;
 
 	return false;
 }
@@ -425,7 +425,7 @@ static bool check_msr_intercept(struct svm_test *test)
 
 static void prepare_mode_switch(struct svm_test *test)
 {
-	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR)
+	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR)
 		|  (1ULL << UD_VECTOR)
 		|  (1ULL << DF_VECTOR)
 		|  (1ULL << PF_VECTOR);
@@ -491,16 +491,16 @@ static bool mode_switch_finished(struct svm_test *test)
 {
 	u64 cr0, cr4, efer;
 
-	cr0  = vmcb->save.cr0;
-	cr4  = vmcb->save.cr4;
-	efer = vmcb->save.efer;
+	cr0  = vcpu0.vmcb->save.cr0;
+	cr4  = vcpu0.vmcb->save.cr4;
+	efer = vcpu0.vmcb->save.efer;
 
 	/* Only expect VMMCALL intercepts */
-	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL)
+	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL)
 		return true;
 
 	/* Jump over VMMCALL instruction */
-	vmcb->save.rip += 3;
+	vcpu0.vmcb->save.rip += 3;
 
 	/* Do sanity checks */
 	switch (test->scratch) {
@@ -535,7 +535,7 @@ static void prepare_ioio(struct svm_test *test)
 {
 	u8 *io_bitmap = svm_get_io_bitmap();
 
-	vmcb->control.intercept |= (1ULL << INTERCEPT_IOIO_PROT);
+	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_IOIO_PROT);
 	test->scratch = 0;
 	memset(io_bitmap, 0, 8192);
 	io_bitmap[8192] = 0xFF;
@@ -619,17 +619,17 @@ static bool ioio_finished(struct svm_test *test)
 	u8 *io_bitmap = svm_get_io_bitmap();
 
 	/* Only expect IOIO intercepts */
-	if (vmcb->control.exit_code == SVM_EXIT_VMMCALL)
+	if (vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL)
 		return true;
 
-	if (vmcb->control.exit_code != SVM_EXIT_IOIO)
+	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_IOIO)
 		return true;
 
 	/* one step forward */
 	test->scratch += 1;
 
-	port = vmcb->control.exit_info_1 >> 16;
-	size = (vmcb->control.exit_info_1 >> SVM_IOIO_SIZE_SHIFT) & 7;
+	port = vcpu0.vmcb->control.exit_info_1 >> 16;
+	size = (vcpu0.vmcb->control.exit_info_1 >> SVM_IOIO_SIZE_SHIFT) & 7;
 
 	while (size--) {
 		io_bitmap[port / 8] &= ~(1 << (port & 7));
@@ -649,7 +649,7 @@ static bool check_ioio(struct svm_test *test)
 
 static void prepare_asid_zero(struct svm_test *test)
 {
-	vmcb->control.asid = 0;
+	vcpu0.vmcb->control.asid = 0;
 }
 
 static void test_asid_zero(struct svm_test *test)
@@ -659,12 +659,12 @@ static void test_asid_zero(struct svm_test *test)
 
 static bool check_asid_zero(struct svm_test *test)
 {
-	return vmcb->control.exit_code == SVM_EXIT_ERR;
+	return vcpu0.vmcb->control.exit_code == SVM_EXIT_ERR;
 }
 
 static void sel_cr0_bug_prepare(struct svm_test *test)
 {
-	vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
+	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
 }
 
 static bool sel_cr0_bug_finished(struct svm_test *test)
@@ -692,7 +692,7 @@ static void sel_cr0_bug_test(struct svm_test *test)
 
 static bool sel_cr0_bug_check(struct svm_test *test)
 {
-	return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
+	return vcpu0.vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
 }
 
 #define TSC_ADJUST_VALUE    (1ll << 32)
@@ -706,7 +706,7 @@ static bool tsc_adjust_supported(void)
 
 static void tsc_adjust_prepare(struct svm_test *test)
 {
-	vmcb->control.tsc_offset = TSC_OFFSET_VALUE;
+	vcpu0.vmcb->control.tsc_offset = TSC_OFFSET_VALUE;
 
 	wrmsr(MSR_IA32_TSC_ADJUST, -TSC_ADJUST_VALUE);
 	int64_t adjust = rdmsr(MSR_IA32_TSC_ADJUST);
@@ -761,13 +761,13 @@ static void svm_tsc_scale_run_testcase(u64 duration,
 	guest_tsc_delay_value = (duration << TSC_SHIFT) * tsc_scale;
 
 	test_set_guest(svm_tsc_scale_guest);
-	vmcb->control.tsc_offset = tsc_offset;
+	vcpu0.vmcb->control.tsc_offset = tsc_offset;
 	wrmsr(MSR_AMD64_TSC_RATIO, (u64)(tsc_scale * (1ULL << 32)));
 
 	start_tsc = rdtsc();
 
 	if (svm_vmrun() != SVM_EXIT_VMMCALL)
-		report_fail("unexpected vm exit code 0x%x", vmcb->control.exit_code);
+		report_fail("unexpected vm exit code 0x%x", vcpu0.vmcb->control.exit_code);
 
 	actual_duration = (rdtsc() - start_tsc) >> TSC_SHIFT;
 
@@ -851,7 +851,7 @@ static bool latency_finished(struct svm_test *test)
 
 	vmexit_sum += cycles;
 
-	vmcb->save.rip += 3;
+	vcpu0.vmcb->save.rip += 3;
 
 	runs -= 1;
 
@@ -862,7 +862,7 @@ static bool latency_finished(struct svm_test *test)
 
 static bool latency_finished_clean(struct svm_test *test)
 {
-	vmcb->control.clean = VMCB_CLEAN_ALL;
+	vcpu0.vmcb->control.clean = VMCB_CLEAN_ALL;
 	return latency_finished(test);
 }
 
@@ -885,7 +885,7 @@ static void lat_svm_insn_prepare(struct svm_test *test)
 
 static bool lat_svm_insn_finished(struct svm_test *test)
 {
-	u64 vmcb_phys = virt_to_phys(vmcb);
+	u64 vmcb_phys = virt_to_phys(vcpu0.vmcb);
 	u64 cycles;
 
 	for ( ; runs != 0; runs--) {
@@ -964,8 +964,8 @@ static void pending_event_prepare(struct svm_test *test)
 
 	pending_event_guest_run = false;
 
-	vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
-	vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
+	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
+	vcpu0.vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
 
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL |
 		       APIC_DM_FIXED | ipi_vector, 0);
@@ -982,14 +982,14 @@ static bool pending_event_finished(struct svm_test *test)
 {
 	switch (get_test_stage(test)) {
 	case 0:
-		if (vmcb->control.exit_code != SVM_EXIT_INTR) {
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_INTR) {
 			report_fail("VMEXIT not due to pending interrupt. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
 
-		vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
-		vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+		vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
+		vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 
 		if (pending_event_guest_run) {
 			report_fail("Guest ran before host received IPI\n");
@@ -1067,19 +1067,19 @@ static void pending_event_cli_test(struct svm_test *test)
 
 static bool pending_event_cli_finished(struct svm_test *test)
 {
-	if ( vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 		report_fail("VM_EXIT return to host is not EXIT_VMMCALL exit reason 0x%x",
-			    vmcb->control.exit_code);
+			    vcpu0.vmcb->control.exit_code);
 		return true;
 	}
 
 	switch (get_test_stage(test)) {
 	case 0:
-		vmcb->save.rip += 3;
+		vcpu0.vmcb->save.rip += 3;
 
 		pending_event_ipi_fired = false;
 
-		vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
+		vcpu0.vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
 
 		/* Now entering again with VINTR_MASKING=1.  */
 		apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL |
@@ -1209,29 +1209,29 @@ static bool interrupt_finished(struct svm_test *test)
 	switch (get_test_stage(test)) {
 	case 0:
 	case 2:
-		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
-		vmcb->save.rip += 3;
+		vcpu0.vmcb->save.rip += 3;
 
-		vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
-		vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
+		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
+		vcpu0.vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
 		break;
 
 	case 1:
 	case 3:
-		if (vmcb->control.exit_code != SVM_EXIT_INTR) {
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_INTR) {
 			report_fail("VMEXIT not due to intr intercept. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
 
 		sti_nop_cli();
 
-		vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
-		vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+		vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
+		vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 		break;
 
 	case 4:
@@ -1291,20 +1291,20 @@ static bool nmi_finished(struct svm_test *test)
 {
 	switch (get_test_stage(test)) {
 	case 0:
-		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
-		vmcb->save.rip += 3;
+		vcpu0.vmcb->save.rip += 3;
 
-		vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
+		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
 		break;
 
 	case 1:
-		if (vmcb->control.exit_code != SVM_EXIT_NMI) {
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_NMI) {
 			report_fail("VMEXIT not due to NMI intercept. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
 
@@ -1393,20 +1393,20 @@ static bool nmi_hlt_finished(struct svm_test *test)
 {
 	switch (get_test_stage(test)) {
 	case 1:
-		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
-		vmcb->save.rip += 3;
+		vcpu0.vmcb->save.rip += 3;
 
-		vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
+		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
 		break;
 
 	case 2:
-		if (vmcb->control.exit_code != SVM_EXIT_NMI) {
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_NMI) {
 			report_fail("VMEXIT not due to NMI intercept. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
 
@@ -1451,34 +1451,39 @@ static bool exc_inject_finished(struct svm_test *test)
 {
 	switch (get_test_stage(test)) {
 	case 0:
-		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
-		vmcb->save.rip += 3;
-		vmcb->control.event_inj = NMI_VECTOR | SVM_EVTINJ_TYPE_EXEPT | SVM_EVTINJ_VALID;
+		vcpu0.vmcb->save.rip += 3;
+		vcpu0.vmcb->control.event_inj = NMI_VECTOR |
+						SVM_EVTINJ_TYPE_EXEPT |
+						SVM_EVTINJ_VALID;
 		break;
 
 	case 1:
-		if (vmcb->control.exit_code != SVM_EXIT_ERR) {
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_ERR) {
 			report_fail("VMEXIT not due to error. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
 		report(count_exc == 0, "exception with vector 2 not injected");
-		vmcb->control.event_inj = DE_VECTOR | SVM_EVTINJ_TYPE_EXEPT | SVM_EVTINJ_VALID;
+		vcpu0.vmcb->control.event_inj = DE_VECTOR |
+						SVM_EVTINJ_TYPE_EXEPT |
+						SVM_EVTINJ_VALID;
 		break;
 
 	case 2:
-		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
-		vmcb->save.rip += 3;
+		vcpu0.vmcb->save.rip += 3;
 		report(count_exc == 1, "divide overflow exception injected");
-		report(!(vmcb->control.event_inj & SVM_EVTINJ_VALID), "eventinj.VALID cleared");
+		report(!(vcpu0.vmcb->control.event_inj & SVM_EVTINJ_VALID),
+		       "eventinj.VALID cleared");
 		break;
 
 	default:
@@ -1505,9 +1510,10 @@ static void virq_isr(isr_regs_t *regs)
 static void virq_inject_prepare(struct svm_test *test)
 {
 	handle_irq(0xf1, virq_isr);
-	vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
+
+	vcpu0.vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
 		(0x0f << V_INTR_PRIO_SHIFT); // Set to the highest priority
-	vmcb->control.int_vector = 0xf1;
+	vcpu0.vmcb->control.int_vector = 0xf1;
 	virq_fired = false;
 	set_test_stage(test, 0);
 }
@@ -1557,66 +1563,66 @@ static void virq_inject_test(struct svm_test *test)
 
 static bool virq_inject_finished(struct svm_test *test)
 {
-	vmcb->save.rip += 3;
+	vcpu0.vmcb->save.rip += 3;
 
 	switch (get_test_stage(test)) {
 	case 0:
-		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
-		if (vmcb->control.int_ctl & V_IRQ_MASK) {
+		if (vcpu0.vmcb->control.int_ctl & V_IRQ_MASK) {
 			report_fail("V_IRQ not cleared on VMEXIT after firing");
 			return true;
 		}
 		virq_fired = false;
-		vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
-		vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
+		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
+		vcpu0.vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
 			(0x0f << V_INTR_PRIO_SHIFT);
 		break;
 
 	case 1:
-		if (vmcb->control.exit_code != SVM_EXIT_VINTR) {
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VINTR) {
 			report_fail("VMEXIT not due to vintr. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
 		if (virq_fired) {
 			report_fail("V_IRQ fired before SVM_EXIT_VINTR");
 			return true;
 		}
-		vmcb->control.intercept &= ~(1ULL << INTERCEPT_VINTR);
+		vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VINTR);
 		break;
 
 	case 2:
-		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
 		virq_fired = false;
 		// Set irq to lower priority
-		vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
+		vcpu0.vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
 			(0x08 << V_INTR_PRIO_SHIFT);
 		// Raise guest TPR
-		vmcb->control.int_ctl |= 0x0a & V_TPR_MASK;
+		vcpu0.vmcb->control.int_ctl |= 0x0a & V_TPR_MASK;
 		break;
 
 	case 3:
-		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
-		vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
+		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
 		break;
 
 	case 4:
 		// INTERCEPT_VINTR should be ignored because V_INTR_PRIO < V_TPR
-		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
 		break;
@@ -1669,8 +1675,8 @@ static void reg_corruption_prepare(struct svm_test *test)
 {
 	set_test_stage(test, 0);
 
-	vmcb->control.int_ctl = V_INTR_MASKING_MASK;
-	vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
+	vcpu0.vmcb->control.int_ctl = V_INTR_MASKING_MASK;
+	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
 
 	handle_irq(TIMER_VECTOR, reg_corruption_isr);
 
@@ -1706,9 +1712,9 @@ static bool reg_corruption_finished(struct svm_test *test)
 		goto cleanup;
 	}
 
-	if (vmcb->control.exit_code == SVM_EXIT_INTR) {
+	if (vcpu0.vmcb->control.exit_code == SVM_EXIT_INTR) {
 
-		void* guest_rip = (void*)vmcb->save.rip;
+		void *guest_rip = (void *)vcpu0.vmcb->save.rip;
 
 		sti_nop_cli();
 
@@ -1777,7 +1783,7 @@ static volatile bool init_intercept;
 static void init_intercept_prepare(struct svm_test *test)
 {
 	init_intercept = false;
-	vmcb->control.intercept |= (1ULL << INTERCEPT_INIT);
+	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_INIT);
 }
 
 static void init_intercept_test(struct svm_test *test)
@@ -1787,11 +1793,11 @@ static void init_intercept_test(struct svm_test *test)
 
 static bool init_intercept_finished(struct svm_test *test)
 {
-	vmcb->save.rip += 3;
+	vcpu0.vmcb->save.rip += 3;
 
-	if (vmcb->control.exit_code != SVM_EXIT_INIT) {
+	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_INIT) {
 		report_fail("VMEXIT not due to init intercept. Exit reason 0x%x",
-			    vmcb->control.exit_code);
+			    vcpu0.vmcb->control.exit_code);
 
 		return true;
 	}
@@ -1890,12 +1896,12 @@ static bool host_rflags_finished(struct svm_test *test)
 {
 	switch (get_test_stage(test)) {
 	case 0:
-		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("Unexpected VMEXIT. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
-		vmcb->save.rip += 3;
+		vcpu0.vmcb->save.rip += 3;
 		/*
 		 * Setting host EFLAGS.TF not immediately before VMRUN, causes
 		 * #DB trap before first guest instruction is executed
@@ -1903,14 +1909,14 @@ static bool host_rflags_finished(struct svm_test *test)
 		host_rflags_set_tf = true;
 		break;
 	case 1:
-		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
 		    host_rflags_guest_main_flag != 1) {
 			report_fail("Unexpected VMEXIT or #DB handler"
 				    " invoked before guest main. Exit reason 0x%x",
-				    vmcb->control.exit_code);
+				    vcpu0.vmcb->control.exit_code);
 			return true;
 		}
-		vmcb->save.rip += 3;
+		vcpu0.vmcb->save.rip += 3;
 		/*
 		 * Setting host EFLAGS.TF immediately before VMRUN, causes #DB
 		 * trap after VMRUN completes on the host side (i.e., after
@@ -1919,21 +1925,21 @@ static bool host_rflags_finished(struct svm_test *test)
 		host_rflags_ss_on_vmrun = true;
 		break;
 	case 2:
-		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
 		    rip_detected != (u64)&vmrun_rip + 3) {
 			report_fail("Unexpected VMEXIT or RIP mismatch."
 				    " Exit reason 0x%x, RIP actual: %lx, RIP expected: "
-				    "%lx", vmcb->control.exit_code,
+				    "%lx", vcpu0.vmcb->control.exit_code,
 				    (u64)&vmrun_rip + 3, rip_detected);
 			return true;
 		}
 		host_rflags_set_rf = true;
 		host_rflags_guest_main_flag = 0;
 		host_rflags_vmrun_reached = false;
-		vmcb->save.rip += 3;
+		vcpu0.vmcb->save.rip += 3;
 		break;
 	case 3:
-		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
+		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
 		    rip_detected != (u64)&vmrun_rip ||
 		    host_rflags_guest_main_flag != 1 ||
 		    host_rflags_db_handler_flag > 1 ||
@@ -1941,13 +1947,13 @@ static bool host_rflags_finished(struct svm_test *test)
 			report_fail("Unexpected VMEXIT or RIP mismatch or "
 				    "EFLAGS.RF not cleared."
 				    " Exit reason 0x%x, RIP actual: %lx, RIP expected: "
-				    "%lx", vmcb->control.exit_code,
+				    "%lx", vcpu0.vmcb->control.exit_code,
 				    (u64)&vmrun_rip, rip_detected);
 			return true;
 		}
 		host_rflags_set_tf = false;
 		host_rflags_set_rf = false;
-		vmcb->save.rip += 3;
+		vcpu0.vmcb->save.rip += 3;
 		break;
 	default:
 		return true;
@@ -1989,7 +1995,7 @@ static void svm_cr4_osxsave_test(void)
 		unsigned long cr4 = read_cr4() | X86_CR4_OSXSAVE;
 
 		write_cr4(cr4);
-		vmcb->save.cr4 = cr4;
+		vcpu0.vmcb->save.cr4 = cr4;
 	}
 
 	report(this_cpu_has(X86_FEATURE_OSXSAVE), "CPUID.01H:ECX.XSAVE set before VMRUN");
@@ -2037,13 +2043,13 @@ static void basic_guest_main(struct svm_test *test)
 		tmp = val | mask;					\
 		switch (cr) {						\
 		case 0:							\
-			vmcb->save.cr0 = tmp;				\
+			vcpu0.vmcb->save.cr0 = tmp;				\
 			break;						\
 		case 3:							\
-			vmcb->save.cr3 = tmp;				\
+			vcpu0.vmcb->save.cr3 = tmp;				\
 			break;						\
 		case 4:							\
-			vmcb->save.cr4 = tmp;				\
+			vcpu0.vmcb->save.cr4 = tmp;				\
 		}							\
 		r = svm_vmrun();					\
 		report(r == exit_code, "Test CR%d %s%d:%d: %lx, wanted exit 0x%x, got 0x%x", \
@@ -2056,39 +2062,39 @@ static void test_efer(void)
 	/*
 	 * Un-setting EFER.SVME is illegal
 	 */
-	u64 efer_saved = vmcb->save.efer;
+	u64 efer_saved = vcpu0.vmcb->save.efer;
 	u64 efer = efer_saved;
 
 	report (svm_vmrun() == SVM_EXIT_VMMCALL, "EFER.SVME: %lx", efer);
 	efer &= ~EFER_SVME;
-	vmcb->save.efer = efer;
+	vcpu0.vmcb->save.efer = efer;
 	report (svm_vmrun() == SVM_EXIT_ERR, "EFER.SVME: %lx", efer);
-	vmcb->save.efer = efer_saved;
+	vcpu0.vmcb->save.efer = efer_saved;
 
 	/*
 	 * EFER MBZ bits: 63:16, 9
 	 */
-	efer_saved = vmcb->save.efer;
+	efer_saved = vcpu0.vmcb->save.efer;
 
-	SVM_TEST_REG_RESERVED_BITS(8, 9, 1, "EFER", vmcb->save.efer,
+	SVM_TEST_REG_RESERVED_BITS(8, 9, 1, "EFER", vcpu0.vmcb->save.efer,
 				   efer_saved, SVM_EFER_RESERVED_MASK);
-	SVM_TEST_REG_RESERVED_BITS(16, 63, 4, "EFER", vmcb->save.efer,
+	SVM_TEST_REG_RESERVED_BITS(16, 63, 4, "EFER", vcpu0.vmcb->save.efer,
 				   efer_saved, SVM_EFER_RESERVED_MASK);
 
 	/*
 	 * EFER.LME and CR0.PG are both set and CR4.PAE is zero.
 	 */
-	u64 cr0_saved = vmcb->save.cr0;
+	u64 cr0_saved = vcpu0.vmcb->save.cr0;
 	u64 cr0;
-	u64 cr4_saved = vmcb->save.cr4;
+	u64 cr4_saved = vcpu0.vmcb->save.cr4;
 	u64 cr4;
 
 	efer = efer_saved | EFER_LME;
-	vmcb->save.efer = efer;
+	vcpu0.vmcb->save.efer = efer;
 	cr0 = cr0_saved | X86_CR0_PG | X86_CR0_PE;
-	vmcb->save.cr0 = cr0;
+	vcpu0.vmcb->save.cr0 = cr0;
 	cr4 = cr4_saved & ~X86_CR4_PAE;
-	vmcb->save.cr4 = cr4;
+	vcpu0.vmcb->save.cr4 = cr4;
 	report(svm_vmrun() == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
 	       "CR0.PG=1 (%lx) and CR4.PAE=0 (%lx)", efer, cr0, cr4);
 
@@ -2099,31 +2105,31 @@ static void test_efer(void)
 	 * SVM_EXIT_ERR.
 	 */
 	cr4 = cr4_saved | X86_CR4_PAE;
-	vmcb->save.cr4 = cr4;
+	vcpu0.vmcb->save.cr4 = cr4;
 	cr0 &= ~X86_CR0_PE;
-	vmcb->save.cr0 = cr0;
+	vcpu0.vmcb->save.cr0 = cr0;
 	report(svm_vmrun() == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
 	       "CR0.PG=1 and CR0.PE=0 (%lx)", efer, cr0);
 
 	/*
 	 * EFER.LME, CR0.PG, CR4.PAE, CS.L, and CS.D are all non-zero.
 	 */
-	u32 cs_attrib_saved = vmcb->save.cs.attrib;
+	u32 cs_attrib_saved = vcpu0.vmcb->save.cs.attrib;
 	u32 cs_attrib;
 
 	cr0 |= X86_CR0_PE;
-	vmcb->save.cr0 = cr0;
+	vcpu0.vmcb->save.cr0 = cr0;
 	cs_attrib = cs_attrib_saved | SVM_SELECTOR_L_MASK |
 		SVM_SELECTOR_DB_MASK;
-	vmcb->save.cs.attrib = cs_attrib;
+	vcpu0.vmcb->save.cs.attrib = cs_attrib;
 	report(svm_vmrun() == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
 	       "CR0.PG=1 (%lx), CR4.PAE=1 (%lx), CS.L=1 and CS.D=1 (%x)",
 	       efer, cr0, cr4, cs_attrib);
 
-	vmcb->save.cr0 = cr0_saved;
-	vmcb->save.cr4 = cr4_saved;
-	vmcb->save.efer = efer_saved;
-	vmcb->save.cs.attrib = cs_attrib_saved;
+	vcpu0.vmcb->save.cr0 = cr0_saved;
+	vcpu0.vmcb->save.cr4 = cr4_saved;
+	vcpu0.vmcb->save.efer = efer_saved;
+	vcpu0.vmcb->save.cs.attrib = cs_attrib_saved;
 }
 
 static void test_cr0(void)
@@ -2131,37 +2137,37 @@ static void test_cr0(void)
 	/*
 	 * Un-setting CR0.CD and setting CR0.NW is illegal combination
 	 */
-	u64 cr0_saved = vmcb->save.cr0;
+	u64 cr0_saved = vcpu0.vmcb->save.cr0;
 	u64 cr0 = cr0_saved;
 
 	cr0 |= X86_CR0_CD;
 	cr0 &= ~X86_CR0_NW;
-	vmcb->save.cr0 = cr0;
+	vcpu0.vmcb->save.cr0 = cr0;
 	report (svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=0: %lx",
 		cr0);
 	cr0 |= X86_CR0_NW;
-	vmcb->save.cr0 = cr0;
+	vcpu0.vmcb->save.cr0 = cr0;
 	report (svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=1: %lx",
 		cr0);
 	cr0 &= ~X86_CR0_NW;
 	cr0 &= ~X86_CR0_CD;
-	vmcb->save.cr0 = cr0;
+	vcpu0.vmcb->save.cr0 = cr0;
 	report (svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR0 CD=0,NW=0: %lx",
 		cr0);
 	cr0 |= X86_CR0_NW;
-	vmcb->save.cr0 = cr0;
+	vcpu0.vmcb->save.cr0 = cr0;
 	report (svm_vmrun() == SVM_EXIT_ERR, "Test CR0 CD=0,NW=1: %lx",
 		cr0);
-	vmcb->save.cr0 = cr0_saved;
+	vcpu0.vmcb->save.cr0 = cr0_saved;
 
 	/*
 	 * CR0[63:32] are not zero
 	 */
 	cr0 = cr0_saved;
 
-	SVM_TEST_REG_RESERVED_BITS(32, 63, 4, "CR0", vmcb->save.cr0, cr0_saved,
+	SVM_TEST_REG_RESERVED_BITS(32, 63, 4, "CR0", vcpu0.vmcb->save.cr0, cr0_saved,
 				   SVM_CR0_RESERVED_MASK);
-	vmcb->save.cr0 = cr0_saved;
+	vcpu0.vmcb->save.cr0 = cr0_saved;
 }
 
 static void test_cr3(void)
@@ -2170,37 +2176,37 @@ static void test_cr3(void)
 	 * CR3 MBZ bits based on different modes:
 	 *   [63:52] - long mode
 	 */
-	u64 cr3_saved = vmcb->save.cr3;
+	u64 cr3_saved = vcpu0.vmcb->save.cr3;
 
 	SVM_TEST_CR_RESERVED_BITS(0, 63, 1, 3, cr3_saved,
 				  SVM_CR3_LONG_MBZ_MASK, SVM_EXIT_ERR, "");
 
-	vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_MBZ_MASK;
+	vcpu0.vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_MBZ_MASK;
 	report(svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx",
-	       vmcb->save.cr3);
+	       vcpu0.vmcb->save.cr3);
 
 	/*
 	 * CR3 non-MBZ reserved bits based on different modes:
 	 *   [11:5] [2:0] - long mode (PCIDE=0)
 	 *          [2:0] - PAE legacy mode
 	 */
-	u64 cr4_saved = vmcb->save.cr4;
+	u64 cr4_saved = vcpu0.vmcb->save.cr4;
 	u64 *pdpe = npt_get_pml4e();
 
 	/*
 	 * Long mode
 	 */
 	if (this_cpu_has(X86_FEATURE_PCID)) {
-		vmcb->save.cr4 = cr4_saved | X86_CR4_PCIDE;
+		vcpu0.vmcb->save.cr4 = cr4_saved | X86_CR4_PCIDE;
 		SVM_TEST_CR_RESERVED_BITS(0, 11, 1, 3, cr3_saved,
 					  SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_VMMCALL, "(PCIDE=1) ");
 
-		vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_RESERVED_MASK;
+		vcpu0.vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_RESERVED_MASK;
 		report(svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx",
-		       vmcb->save.cr3);
+		       vcpu0.vmcb->save.cr3);
 	}
 
-	vmcb->save.cr4 = cr4_saved & ~X86_CR4_PCIDE;
+	vcpu0.vmcb->save.cr4 = cr4_saved & ~X86_CR4_PCIDE;
 
 	if (!npt_supported())
 		goto skip_npt_only;
@@ -2212,44 +2218,44 @@ static void test_cr3(void)
 				  SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_NPF, "(PCIDE=0) ");
 
 	pdpe[0] |= 1ULL;
-	vmcb->save.cr3 = cr3_saved;
+	vcpu0.vmcb->save.cr3 = cr3_saved;
 
 	/*
 	 * PAE legacy
 	 */
 	pdpe[0] &= ~1ULL;
-	vmcb->save.cr4 = cr4_saved | X86_CR4_PAE;
+	vcpu0.vmcb->save.cr4 = cr4_saved | X86_CR4_PAE;
 	SVM_TEST_CR_RESERVED_BITS(0, 2, 1, 3, cr3_saved,
 				  SVM_CR3_PAE_LEGACY_RESERVED_MASK, SVM_EXIT_NPF, "(PAE) ");
 
 	pdpe[0] |= 1ULL;
 
 skip_npt_only:
-	vmcb->save.cr3 = cr3_saved;
-	vmcb->save.cr4 = cr4_saved;
+	vcpu0.vmcb->save.cr3 = cr3_saved;
+	vcpu0.vmcb->save.cr4 = cr4_saved;
 }
 
 /* Test CR4 MBZ bits based on legacy or long modes */
 static void test_cr4(void)
 {
-	u64 cr4_saved = vmcb->save.cr4;
-	u64 efer_saved = vmcb->save.efer;
+	u64 cr4_saved = vcpu0.vmcb->save.cr4;
+	u64 efer_saved = vcpu0.vmcb->save.efer;
 	u64 efer = efer_saved;
 
 	efer &= ~EFER_LME;
-	vmcb->save.efer = efer;
+	vcpu0.vmcb->save.efer = efer;
 	SVM_TEST_CR_RESERVED_BITS(12, 31, 1, 4, cr4_saved,
 				  SVM_CR4_LEGACY_RESERVED_MASK, SVM_EXIT_ERR, "");
 
 	efer |= EFER_LME;
-	vmcb->save.efer = efer;
+	vcpu0.vmcb->save.efer = efer;
 	SVM_TEST_CR_RESERVED_BITS(12, 31, 1, 4, cr4_saved,
 				  SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, "");
 	SVM_TEST_CR_RESERVED_BITS(32, 63, 4, 4, cr4_saved,
 				  SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, "");
 
-	vmcb->save.cr4 = cr4_saved;
-	vmcb->save.efer = efer_saved;
+	vcpu0.vmcb->save.cr4 = cr4_saved;
+	vcpu0.vmcb->save.efer = efer_saved;
 }
 
 static void test_dr(void)
@@ -2257,27 +2263,27 @@ static void test_dr(void)
 	/*
 	 * DR6[63:32] and DR7[63:32] are MBZ
 	 */
-	u64 dr_saved = vmcb->save.dr6;
+	u64 dr_saved = vcpu0.vmcb->save.dr6;
 
-	SVM_TEST_REG_RESERVED_BITS(32, 63, 4, "DR6", vmcb->save.dr6, dr_saved,
+	SVM_TEST_REG_RESERVED_BITS(32, 63, 4, "DR6", vcpu0.vmcb->save.dr6, dr_saved,
 				   SVM_DR6_RESERVED_MASK);
-	vmcb->save.dr6 = dr_saved;
+	vcpu0.vmcb->save.dr6 = dr_saved;
 
-	dr_saved = vmcb->save.dr7;
-	SVM_TEST_REG_RESERVED_BITS(32, 63, 4, "DR7", vmcb->save.dr7, dr_saved,
+	dr_saved = vcpu0.vmcb->save.dr7;
+	SVM_TEST_REG_RESERVED_BITS(32, 63, 4, "DR7", vcpu0.vmcb->save.dr7, dr_saved,
 				   SVM_DR7_RESERVED_MASK);
 
-	vmcb->save.dr7 = dr_saved;
+	vcpu0.vmcb->save.dr7 = dr_saved;
 }
 
 /* TODO: verify if high 32-bits are sign- or zero-extended on bare metal */
 #define	TEST_BITMAP_ADDR(save_intercept, type, addr, exit_code,		\
 			 msg) {						\
-		vmcb->control.intercept = saved_intercept | 1ULL << type; \
+		vcpu0.vmcb->control.intercept = saved_intercept | 1ULL << type; \
 		if (type == INTERCEPT_MSR_PROT)				\
-			vmcb->control.msrpm_base_pa = addr;		\
+			vcpu0.vmcb->control.msrpm_base_pa = addr;		\
 		else							\
-			vmcb->control.iopm_base_pa = addr;		\
+			vcpu0.vmcb->control.iopm_base_pa = addr;		\
 		report(svm_vmrun() == exit_code,			\
 		       "Test %s address: %lx", msg, addr);		\
 	}
@@ -2300,7 +2306,7 @@ static void test_dr(void)
  */
 static void test_msrpm_iopm_bitmap_addrs(void)
 {
-	u64 saved_intercept = vmcb->control.intercept;
+	u64 saved_intercept = vcpu0.vmcb->control.intercept;
 	u64 addr_beyond_limit = 1ull << cpuid_maxphyaddr();
 	u64 addr = virt_to_phys(svm_get_msr_bitmap()) & (~((1ull << 12) - 1));
 	u8 *io_bitmap = svm_get_io_bitmap();
@@ -2342,7 +2348,7 @@ static void test_msrpm_iopm_bitmap_addrs(void)
 	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT, addr,
 			 SVM_EXIT_VMMCALL, "IOPM");
 
-	vmcb->control.intercept = saved_intercept;
+	vcpu0.vmcb->control.intercept = saved_intercept;
 }
 
 /*
@@ -2372,22 +2378,22 @@ static void test_canonicalization(void)
 	u64 saved_addr;
 	u64 return_value;
 	u64 addr_limit;
-	u64 vmcb_phys = virt_to_phys(vmcb);
+	u64 vmcb_phys = virt_to_phys(vcpu0.vmcb);
 
 	addr_limit = (this_cpu_has(X86_FEATURE_LA57)) ? 57 : 48;
 	u64 noncanonical_mask = NONCANONICAL & ~((1ul << addr_limit) - 1);
 
-	TEST_CANONICAL_VMLOAD(vmcb->save.fs.base, "FS");
-	TEST_CANONICAL_VMLOAD(vmcb->save.gs.base, "GS");
-	TEST_CANONICAL_VMLOAD(vmcb->save.ldtr.base, "LDTR");
-	TEST_CANONICAL_VMLOAD(vmcb->save.tr.base, "TR");
-	TEST_CANONICAL_VMLOAD(vmcb->save.kernel_gs_base, "KERNEL GS");
-	TEST_CANONICAL_VMRUN(vmcb->save.es.base, "ES");
-	TEST_CANONICAL_VMRUN(vmcb->save.cs.base, "CS");
-	TEST_CANONICAL_VMRUN(vmcb->save.ss.base, "SS");
-	TEST_CANONICAL_VMRUN(vmcb->save.ds.base, "DS");
-	TEST_CANONICAL_VMRUN(vmcb->save.gdtr.base, "GDTR");
-	TEST_CANONICAL_VMRUN(vmcb->save.idtr.base, "IDTR");
+	TEST_CANONICAL_VMLOAD(vcpu0.vmcb->save.fs.base, "FS");
+	TEST_CANONICAL_VMLOAD(vcpu0.vmcb->save.gs.base, "GS");
+	TEST_CANONICAL_VMLOAD(vcpu0.vmcb->save.ldtr.base, "LDTR");
+	TEST_CANONICAL_VMLOAD(vcpu0.vmcb->save.tr.base, "TR");
+	TEST_CANONICAL_VMLOAD(vcpu0.vmcb->save.kernel_gs_base, "KERNEL GS");
+	TEST_CANONICAL_VMRUN(vcpu0.vmcb->save.es.base, "ES");
+	TEST_CANONICAL_VMRUN(vcpu0.vmcb->save.cs.base, "CS");
+	TEST_CANONICAL_VMRUN(vcpu0.vmcb->save.ss.base, "SS");
+	TEST_CANONICAL_VMRUN(vcpu0.vmcb->save.ds.base, "DS");
+	TEST_CANONICAL_VMRUN(vcpu0.vmcb->save.gdtr.base, "GDTR");
+	TEST_CANONICAL_VMRUN(vcpu0.vmcb->save.idtr.base, "IDTR");
 }
 
 /*
@@ -2441,7 +2447,7 @@ static void svm_test_singlestep(void)
 	/*
 	 * Trap expected after completion of first guest instruction
 	 */
-	vmcb->save.rflags |= X86_EFLAGS_TF;
+	vcpu0.vmcb->save.rflags |= X86_EFLAGS_TF;
 	report (__svm_vmrun((u64)guest_rflags_test_guest) == SVM_EXIT_VMMCALL &&
 		guest_rflags_test_trap_rip == (u64)&insn2,
 		"Test EFLAGS.TF on VMRUN: trap expected  after completion of first guest instruction");
@@ -2449,17 +2455,19 @@ static void svm_test_singlestep(void)
 	 * No trap expected
 	 */
 	guest_rflags_test_trap_rip = 0;
-	vmcb->save.rip += 3;
-	vmcb->save.rflags |= X86_EFLAGS_TF;
-	report (__svm_vmrun(vmcb->save.rip) == SVM_EXIT_VMMCALL &&
-		guest_rflags_test_trap_rip == 0, "Test EFLAGS.TF on VMRUN: trap not expected");
+	vcpu0.vmcb->save.rip += 3;
+	vcpu0.vmcb->save.rflags |= X86_EFLAGS_TF;
+	report(__svm_vmrun(vcpu0.vmcb->save.rip) == SVM_EXIT_VMMCALL &&
+		guest_rflags_test_trap_rip == 0,
+		"Test EFLAGS.TF on VMRUN: trap not expected");
 
 	/*
 	 * Let guest finish execution
 	 */
-	vmcb->save.rip += 3;
-	report (__svm_vmrun(vmcb->save.rip) == SVM_EXIT_VMMCALL &&
-		vmcb->save.rip == (u64)&guest_end, "Test EFLAGS.TF on VMRUN: guest execution completion");
+	vcpu0.vmcb->save.rip += 3;
+	report(__svm_vmrun(vcpu0.vmcb->save.rip) == SVM_EXIT_VMMCALL &&
+		vcpu0.vmcb->save.rip == (u64)&guest_end,
+		"Test EFLAGS.TF on VMRUN: guest execution completion");
 }
 
 static bool volatile svm_errata_reproduced = false;
@@ -2530,7 +2538,7 @@ static void svm_vmrun_errata_test(void)
 
 static void vmload_vmsave_guest_main(struct svm_test *test)
 {
-	u64 vmcb_phys = virt_to_phys(vmcb);
+	u64 vmcb_phys = virt_to_phys(vcpu0.vmcb);
 
 	asm volatile ("vmload %0" : : "a"(vmcb_phys));
 	asm volatile ("vmsave %0" : : "a"(vmcb_phys));
@@ -2538,7 +2546,7 @@ static void vmload_vmsave_guest_main(struct svm_test *test)
 
 static void svm_vmload_vmsave(void)
 {
-	u32 intercept_saved = vmcb->control.intercept;
+	u32 intercept_saved = vcpu0.vmcb->control.intercept;
 
 	test_set_guest(vmload_vmsave_guest_main);
 
@@ -2546,49 +2554,49 @@ static void svm_vmload_vmsave(void)
 	 * Disabling intercept for VMLOAD and VMSAVE doesn't cause
 	 * respective #VMEXIT to host
 	 */
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
+	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
+	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
 	svm_vmrun();
-	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
+	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
 	/*
 	 * Enabling intercept for VMLOAD and VMSAVE causes respective
 	 * #VMEXIT to host
 	 */
-	vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
+	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
 	svm_vmrun();
-	report(vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
+	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
-	vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
+	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
+	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
 	svm_vmrun();
-	report(vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
+	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
+	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
 	svm_vmrun();
-	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
+	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
-	vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
+	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
 	svm_vmrun();
-	report(vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
+	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
+	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
 	svm_vmrun();
-	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
+	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
-	vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
+	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
 	svm_vmrun();
-	report(vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
+	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
+	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
 	svm_vmrun();
-	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
+	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
-	vmcb->control.intercept = intercept_saved;
+	vcpu0.vmcb->control.intercept = intercept_saved;
 }
 
 static void prepare_vgif_enabled(struct svm_test *test)
@@ -2605,42 +2613,42 @@ static bool vgif_finished(struct svm_test *test)
 	switch (get_test_stage(test))
 		{
 		case 0:
-			if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 				report_fail("VMEXIT not due to vmmcall.");
 				return true;
 			}
-			vmcb->control.int_ctl |= V_GIF_ENABLED_MASK;
-			vmcb->save.rip += 3;
+			vcpu0.vmcb->control.int_ctl |= V_GIF_ENABLED_MASK;
+			vcpu0.vmcb->save.rip += 3;
 			inc_test_stage(test);
 			break;
 		case 1:
-			if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 				report_fail("VMEXIT not due to vmmcall.");
 				return true;
 			}
-			if (!(vmcb->control.int_ctl & V_GIF_MASK)) {
+			if (!(vcpu0.vmcb->control.int_ctl & V_GIF_MASK)) {
 				report_fail("Failed to set VGIF when executing STGI.");
-				vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
+				vcpu0.vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
 				return true;
 			}
 			report_pass("STGI set VGIF bit.");
-			vmcb->save.rip += 3;
+			vcpu0.vmcb->save.rip += 3;
 			inc_test_stage(test);
 			break;
 		case 2:
-			if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 				report_fail("VMEXIT not due to vmmcall.");
 				return true;
 			}
-			if (vmcb->control.int_ctl & V_GIF_MASK) {
+			if (vcpu0.vmcb->control.int_ctl & V_GIF_MASK) {
 				report_fail("Failed to clear VGIF when executing CLGI.");
-				vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
+				vcpu0.vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
 				return true;
 			}
 			report_pass("CLGI cleared VGIF bit.");
-			vmcb->save.rip += 3;
+			vcpu0.vmcb->save.rip += 3;
 			inc_test_stage(test);
-			vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
+			vcpu0.vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
 			break;
 		default:
 			return true;
@@ -2683,14 +2691,16 @@ static void pause_filter_run_test(int pause_iterations, int filter_value, int wa
 	pause_test_counter = pause_iterations;
 	wait_counter = wait_iterations;
 
-	vmcb->control.pause_filter_count = filter_value;
-	vmcb->control.pause_filter_thresh = threshold;
+	vcpu0.vmcb->control.pause_filter_count = filter_value;
+	vcpu0.vmcb->control.pause_filter_thresh = threshold;
 	svm_vmrun();
 
 	if (filter_value <= pause_iterations || wait_iterations < threshold)
-		report(vmcb->control.exit_code == SVM_EXIT_PAUSE, "expected PAUSE vmexit");
+		report(vcpu0.vmcb->control.exit_code == SVM_EXIT_PAUSE,
+		       "expected PAUSE vmexit");
 	else
-		report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "no expected PAUSE vmexit");
+		report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL,
+		       "no expected PAUSE vmexit");
 }
 
 static void pause_filter_test(void)
@@ -2700,7 +2710,7 @@ static void pause_filter_test(void)
 		return;
 	}
 
-	vmcb->control.intercept |= (1 << INTERCEPT_PAUSE);
+	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_PAUSE);
 
 	// filter count more that pause count - no VMexit
 	pause_filter_run_test(10, 9, 0, 0);
@@ -2729,7 +2739,7 @@ static void svm_no_nm_test(void)
 	write_cr0(read_cr0() & ~X86_CR0_TS);
 	test_set_guest((test_guest_func)fnop);
 
-	vmcb->save.cr0 = vmcb->save.cr0 & ~(X86_CR0_TS | X86_CR0_EM);
+	vcpu0.vmcb->save.cr0 = vcpu0.vmcb->save.cr0 & ~(X86_CR0_TS | X86_CR0_EM);
 	report(svm_vmrun() == SVM_EXIT_VMMCALL,
 	       "fnop with CR0.TS and CR0.EM unset no #NM excpetion");
 }
@@ -2860,21 +2870,20 @@ static void svm_lbrv_test0(void)
 
 static void svm_lbrv_test1(void)
 {
-	struct svm_gprs *regs = get_regs();
 
 	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(1)");
 
-	vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
-	vmcb->control.virt_ext = 0;
+	vcpu0.vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
+	vcpu0.vmcb->control.virt_ext = 0;
 
 	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
 	DO_BRANCH(host_branch1);
-	SVM_VMRUN(vmcb, regs);
+	SVM_VMRUN(&vcpu0);
 	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
 
-	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
-		       vmcb->control.exit_code);
+		       vcpu0.vmcb->control.exit_code);
 		return;
 	}
 
@@ -2884,23 +2893,21 @@ static void svm_lbrv_test1(void)
 
 static void svm_lbrv_test2(void)
 {
-	struct svm_gprs *regs = get_regs();
-
 	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(2)");
 
-	vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
-	vmcb->control.virt_ext = 0;
+	vcpu0.vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
+	vcpu0.vmcb->control.virt_ext = 0;
 
 	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
 	DO_BRANCH(host_branch2);
 	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
-	SVM_VMRUN(vmcb, regs);
+	SVM_VMRUN(&vcpu0);
 	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
 	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
 
-	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
-		       vmcb->control.exit_code);
+		       vcpu0.vmcb->control.exit_code);
 		return;
 	}
 
@@ -2910,32 +2917,32 @@ static void svm_lbrv_test2(void)
 
 static void svm_lbrv_nested_test1(void)
 {
-	struct svm_gprs *regs = get_regs();
-
 	if (!lbrv_supported()) {
 		report_skip("LBRV not supported in the guest");
 		return;
 	}
 
 	report(true, "Test that with LBRV enabled, guest LBR state doesn't leak (1)");
-	vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
-	vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
-	vmcb->save.dbgctl = DEBUGCTLMSR_LBR;
+	vcpu0.vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
+	vcpu0.vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
+	vcpu0.vmcb->save.dbgctl = DEBUGCTLMSR_LBR;
 
 	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
 	DO_BRANCH(host_branch3);
-	SVM_VMRUN(vmcb, regs);
+	SVM_VMRUN(&vcpu0);
 	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
 	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
 
-	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
-		       vmcb->control.exit_code);
+		       vcpu0.vmcb->control.exit_code);
 		return;
 	}
 
-	if (vmcb->save.dbgctl != 0) {
-		report(false, "unexpected virtual guest MSR_IA32_DEBUGCTLMSR value 0x%lx", vmcb->save.dbgctl);
+	if (vcpu0.vmcb->save.dbgctl != 0) {
+		report(false,
+		       "unexpected virtual guest MSR_IA32_DEBUGCTLMSR value 0x%lx",
+		       vcpu0.vmcb->save.dbgctl);
 		return;
 	}
 
@@ -2945,30 +2952,28 @@ static void svm_lbrv_nested_test1(void)
 
 static void svm_lbrv_nested_test2(void)
 {
-	struct svm_gprs *regs = get_regs();
-
 	if (!lbrv_supported()) {
 		report_skip("LBRV not supported in the guest");
 		return;
 	}
 
 	report(true, "Test that with LBRV enabled, guest LBR state doesn't leak (2)");
-	vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
-	vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
+	vcpu0.vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
+	vcpu0.vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
 
-	vmcb->save.dbgctl = 0;
-	vmcb->save.br_from = (u64)&host_branch2_from;
-	vmcb->save.br_to = (u64)&host_branch2_to;
+	vcpu0.vmcb->save.dbgctl = 0;
+	vcpu0.vmcb->save.br_from = (u64)&host_branch2_from;
+	vcpu0.vmcb->save.br_to = (u64)&host_branch2_to;
 
 	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
 	DO_BRANCH(host_branch4);
-	SVM_VMRUN(vmcb, regs);
+	SVM_VMRUN(&vcpu0);
 	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
 	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
 
-	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
-		       vmcb->control.exit_code);
+		       vcpu0.vmcb->control.exit_code);
 		return;
 	}
 
@@ -3013,8 +3018,8 @@ static void svm_intr_intercept_mix_run_guest(volatile int *counter, int expected
 	if (counter)
 		report(*counter == 1, "Interrupt is expected");
 
-	report (vmcb->control.exit_code == expected_vmexit, "Test expected VM exit");
-	report(vmcb->save.rflags & X86_EFLAGS_IF, "Guest should have EFLAGS.IF set now");
+	report(vcpu0.vmcb->control.exit_code == expected_vmexit, "Test expected VM exit");
+	report(vcpu0.vmcb->save.rflags & X86_EFLAGS_IF, "Guest should have EFLAGS.IF set now");
 	cli();
 }
 
@@ -3033,9 +3038,9 @@ static void svm_intr_intercept_mix_if(void)
 	// make a physical interrupt to be pending
 	handle_irq(0x55, dummy_isr);
 
-	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
-	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
-	vmcb->save.rflags &= ~X86_EFLAGS_IF;
+	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_INTR);
+	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+	vcpu0.vmcb->save.rflags &= ~X86_EFLAGS_IF;
 
 	test_set_guest(svm_intr_intercept_mix_if_guest);
 	cli();
@@ -3066,9 +3071,9 @@ static void svm_intr_intercept_mix_gif(void)
 {
 	handle_irq(0x55, dummy_isr);
 
-	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
-	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
-	vmcb->save.rflags &= ~X86_EFLAGS_IF;
+	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_INTR);
+	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+	vcpu0.vmcb->save.rflags &= ~X86_EFLAGS_IF;
 
 	test_set_guest(svm_intr_intercept_mix_gif_guest);
 	cli();
@@ -3096,9 +3101,9 @@ static void svm_intr_intercept_mix_gif2(void)
 {
 	handle_irq(0x55, dummy_isr);
 
-	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
-	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
-	vmcb->save.rflags |= X86_EFLAGS_IF;
+	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_INTR);
+	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+	vcpu0.vmcb->save.rflags |= X86_EFLAGS_IF;
 
 	test_set_guest(svm_intr_intercept_mix_gif_guest2);
 	svm_intr_intercept_mix_run_guest(&dummy_isr_recevied, SVM_EXIT_INTR);
@@ -3125,9 +3130,9 @@ static void svm_intr_intercept_mix_nmi(void)
 {
 	handle_exception(2, dummy_nmi_handler);
 
-	vmcb->control.intercept |= (1 << INTERCEPT_NMI);
-	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
-	vmcb->save.rflags |= X86_EFLAGS_IF;
+	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_NMI);
+	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+	vcpu0.vmcb->save.rflags |= X86_EFLAGS_IF;
 
 	test_set_guest(svm_intr_intercept_mix_nmi_guest);
 	svm_intr_intercept_mix_run_guest(&nmi_recevied, SVM_EXIT_NMI);
@@ -3149,8 +3154,8 @@ static void svm_intr_intercept_mix_smi_guest(struct svm_test *test)
 
 static void svm_intr_intercept_mix_smi(void)
 {
-	vmcb->control.intercept |= (1 << INTERCEPT_SMI);
-	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_SMI);
+	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 	test_set_guest(svm_intr_intercept_mix_smi_guest);
 	svm_intr_intercept_mix_run_guest(NULL, SVM_EXIT_SMI);
 }
@@ -3207,14 +3212,14 @@ static void handle_exception_in_l2(u8 vector)
 
 static void handle_exception_in_l1(u32 vector)
 {
-	u32 old_ie = vmcb->control.intercept_exceptions;
+	u32 old_ie = vcpu0.vmcb->control.intercept_exceptions;
 
-	vmcb->control.intercept_exceptions |= (1ULL << vector);
+	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << vector);
 
 	report(svm_vmrun() == (SVM_EXIT_EXCP_BASE + vector),
 		"%s handled by L1",  exception_mnemonic(vector));
 
-	vmcb->control.intercept_exceptions = old_ie;
+	vcpu0.vmcb->control.intercept_exceptions = old_ie;
 }
 
 static void svm_exception_test(void)
@@ -3227,10 +3232,10 @@ static void svm_exception_test(void)
 		test_set_guest((test_guest_func)t->guest_code);
 
 		handle_exception_in_l2(t->vector);
-		vmcb_ident(vmcb);
+		svm_vcpu_ident(&vcpu0);
 
 		handle_exception_in_l1(t->vector);
-		vmcb_ident(vmcb);
+		svm_vcpu_ident(&vcpu0);
 	}
 }
 
@@ -3243,10 +3248,10 @@ static void shutdown_intercept_test_guest(struct svm_test *test)
 static void svm_shutdown_intercept_test(void)
 {
 	test_set_guest(shutdown_intercept_test_guest);
-	vmcb->save.idtr.base = (u64)alloc_vpage();
-	vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN);
+	vcpu0.vmcb->save.idtr.base = (u64)alloc_vpage();
+	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN);
 	svm_vmrun();
-	report(vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed");
+	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed");
 }
 
 /*
@@ -3256,7 +3261,7 @@ static void svm_shutdown_intercept_test(void)
 
 static void exception_merging_prepare(struct svm_test *test)
 {
-	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
+	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
 
 	/* break UD vector idt entry to get #GP*/
 	boot_idt[UD_VECTOR].type = 1;
@@ -3269,15 +3274,15 @@ static void exception_merging_test(struct svm_test *test)
 
 static bool exception_merging_finished(struct svm_test *test)
 {
-	u32 vec = vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
-	u32 type = vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
+	u32 vec = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
+	u32 type = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
 
-	if (vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + GP_VECTOR) {
+	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + GP_VECTOR) {
 		report(false, "unexpected VM exit");
 		goto out;
 	}
 
-	if (!(vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)) {
+	if (!(vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)) {
 		report(false, "EXITINTINFO not valid");
 		goto out;
 	}
@@ -3313,7 +3318,7 @@ static bool exception_merging_check(struct svm_test *test)
 static void interrupt_merging_prepare(struct svm_test *test)
 {
 	/* intercept #GP */
-	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
+	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
 
 	/* set local APIC to inject external interrupts */
 	apic_setup_timer(TIMER_VECTOR, APIC_LVT_TIMER_PERIODIC);
@@ -3335,15 +3340,15 @@ static void interrupt_merging_test(struct svm_test *test)
 static bool interrupt_merging_finished(struct svm_test *test)
 {
 
-	u32 vec = vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
-	u32 type = vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
-	u32 error_code = vmcb->control.exit_info_1;
+	u32 vec = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
+	u32 type = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
+	u32 error_code = vcpu0.vmcb->control.exit_info_1;
 
 	/* exit on external interrupts is disabled, thus timer interrupt
 	 * should be attempted to be delivered, but due to incorrect IDT entry
 	 * an #GP should be raised
 	 */
-	if (vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + GP_VECTOR) {
+	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + GP_VECTOR) {
 		report(false, "unexpected VM exit");
 		goto cleanup;
 	}
@@ -3355,7 +3360,7 @@ static bool interrupt_merging_finished(struct svm_test *test)
 	}
 
 	/* Original interrupt should be preserved in EXITINTINFO */
-	if (!(vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)) {
+	if (!(vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)) {
 		report(false, "EXITINTINFO not valid");
 		goto cleanup;
 	}
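
All hunks in this patch apply one mechanical substitution: accesses to the
global 'vmcb' and to the get_regs() accessors are rewritten to go through
the vcpu0 object. As a reading aid only (not part of the patch, and with an
illustrative function name), a converted ->finished() callback now has
roughly this shape:

	/*
	 * Sketch of the post-conversion idiom: exit state and guest GPRs
	 * are reached through the global vcpu0 instead of vmcb/get_regs().
	 */
	static bool example_vmmcall_finished(struct svm_test *test)
	{
		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
				    vcpu0.vmcb->control.exit_code);
			return true;	/* unexpected exit: stop iterating */
		}

		vcpu0.vmcb->save.rip += 3;	/* skip the 3-byte VMMCALL */
		test->scratch = vcpu0.regs.rcx;	/* GPRs now live in vcpu0.regs */
		return false;			/* re-enter the guest */
	}

The patch is therefore large, but each hunk only changes the access path,
not the logic of the test it touches.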
-- 
2.34.3



* [kvm-unit-tests PATCH v3 23/27] svm: introduce struct svm_test_context
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (21 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 22/27] svm: introduce svm_vcpu Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 24/27] svm: use svm_test_context in v2 tests Maxim Levitsky
                   ` (4 subsequent siblings)
  27 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

Introduce struct svm_test_context, which will contain all of the current
test state, instead of abusing the array of test templates for this purpose.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
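
Note, not part of the commit message: after this change the mutable per-run
state (exits, scratch, on_vcpu_done) lives in a struct svm_test_context
that run_svm_tests() keeps on its stack, while the read-only template is
reached through ctx->test. A converted callback looks roughly as in the
sketch below; example_prepare is an illustrative name, not a function added
by the patch.

	/* Sketch only: shape of a test callback after the conversion. */
	static void example_prepare(struct svm_test_context *ctx)
	{
		/* Mutable per-run state moved out of struct svm_test ... */
		set_test_stage(ctx, 0);
		ctx->scratch = rdmsr(MSR_EFER);

		/* ... while the immutable template stays in ctx->test. */
		printf("preparing %s\n", ctx->test->name);
	}

Because the context is zeroed for every test (see the memset added to the
run_svm_tests() loop below), each test now starts from clean per-run state
instead of whatever an earlier run left behind in the template array.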
 x86/svm.c       |  62 +++---
 x86/svm.h       |  35 ++--
 x86/svm_npt.c   |  48 ++---
 x86/svm_tests.c | 533 ++++++++++++++++++++++++------------------------
 4 files changed, 343 insertions(+), 335 deletions(-)

diff --git a/x86/svm.c b/x86/svm.c
index 51ed4d06..6381dee9 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -28,37 +28,37 @@ bool default_supported(void)
 	return true;
 }
 
-void default_prepare(struct svm_test *test)
+void default_prepare(struct svm_test_context *ctx)
 {
 }
 
-void default_prepare_gif_clear(struct svm_test *test)
+void default_prepare_gif_clear(struct svm_test_context *ctx)
 {
 }
 
-bool default_finished(struct svm_test *test)
+bool default_finished(struct svm_test_context *ctx)
 {
 	return true; /* one vmexit */
 }
 
 
-int get_test_stage(struct svm_test *test)
+int get_test_stage(struct svm_test_context *ctx)
 {
 	barrier();
-	return test->scratch;
+	return ctx->scratch;
 }
 
-void set_test_stage(struct svm_test *test, int s)
+void set_test_stage(struct svm_test_context *ctx, int s)
 {
 	barrier();
-	test->scratch = s;
+	ctx->scratch = s;
 	barrier();
 }
 
-void inc_test_stage(struct svm_test *test)
+void inc_test_stage(struct svm_test_context *ctx)
 {
 	barrier();
-	test->scratch++;
+	ctx->scratch++;
 	barrier();
 }
 
@@ -69,20 +69,20 @@ void test_set_guest(test_guest_func func)
 	guest_main = func;
 }
 
-static void test_thunk(struct svm_test *test)
+static void test_thunk(struct svm_test_context *ctx)
 {
-	guest_main(test);
+	guest_main(ctx);
 	vmmcall();
 }
 
 
-struct svm_test *v2_test;
+struct svm_test_context *v2_ctx;
 
 
 int __svm_vmrun(u64 rip)
 {
 	vcpu0.vmcb->save.rip = (ulong)rip;
-	vcpu0.regs.rdi = (ulong)v2_test;
+	vcpu0.regs.rdi = (ulong)v2_ctx;
 	vcpu0.regs.rsp = (ulong)(vcpu0.stack);
 	SVM_VMRUN(&vcpu0);
 	return vcpu0.vmcb->control.exit_code;
@@ -93,43 +93,43 @@ int svm_vmrun(void)
 	return __svm_vmrun((u64)test_thunk);
 }
 
-static noinline void test_run(struct svm_test *test)
+static noinline void test_run(struct svm_test_context *ctx)
 {
 	svm_vcpu_ident(&vcpu0);
 
-	if (test->v2) {
-		v2_test = test;
-		test->v2();
+	if (ctx->test->v2) {
+		v2_ctx = ctx;
+		ctx->test->v2();
 		return;
 	}
 
 	cli();
 
-	test->prepare(test);
-	guest_main = test->guest_func;
+	ctx->test->prepare(ctx);
+	guest_main = ctx->test->guest_func;
 	vcpu0.vmcb->save.rip = (ulong)test_thunk;
 	vcpu0.regs.rsp = (ulong)(vcpu0.stack);
-	vcpu0.regs.rdi = (ulong)test;
+	vcpu0.regs.rdi = (ulong)ctx;
 	do {
 
 		clgi();
 		sti();
 
-		test->prepare_gif_clear(test);
+		ctx->test->prepare_gif_clear(ctx);
 
 		__SVM_VMRUN(&vcpu0, "vmrun_rip");
 
 		cli();
 		stgi();
 
-		++test->exits;
-	} while (!test->finished(test));
+		++ctx->exits;
+	} while (!ctx->test->finished(ctx));
 	sti();
 
-	report(test->succeeded(test), "%s", test->name);
+	report(ctx->test->succeeded(ctx), "%s", ctx->test->name);
 
-	if (test->on_vcpu)
-		test->on_vcpu_done = true;
+	if (ctx->test->on_vcpu)
+		ctx->on_vcpu_done = true;
 }
 
 int matched;
@@ -185,16 +185,22 @@ int run_svm_tests(int ac, char **av, struct svm_test *svm_tests)
 	if (!setup_svm())
 		return 0;
 
+	struct svm_test_context ctx;
+
 	svm_vcpu_init(&vcpu0);
 
 	for (; svm_tests[i].name != NULL; i++) {
+
+		memset(&ctx, 0, sizeof(ctx));
+		ctx.test = &svm_tests[i];
+
 		if (!test_wanted(svm_tests[i].name, av, ac))
 			continue;
 		if (svm_tests[i].supported && !svm_tests[i].supported())
 			continue;
 
 		if (!svm_tests[i].on_vcpu) {
-			test_run(&svm_tests[i]);
+			test_run(&ctx);
 			continue;
 		}
 
@@ -203,7 +209,7 @@ int run_svm_tests(int ac, char **av, struct svm_test *svm_tests)
 
 		on_cpu_async(svm_tests[i].on_vcpu, (void *)test_run, &svm_tests[i]);
 
-		while (!svm_tests[i].on_vcpu_done)
+		while (!ctx.on_vcpu_done)
 			cpu_relax();
 	}
 
diff --git a/x86/svm.h b/x86/svm.h
index 61fd2387..01d07a54 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -7,41 +7,44 @@
 
 #define LBR_CTL_ENABLE_MASK BIT_ULL(0)
 
+struct svm_test_context {
+	int exits;
+	ulong scratch;
+	bool on_vcpu_done;
+	struct svm_test *test;
+};
+
 struct svm_test {
 	const char *name;
 	bool (*supported)(void);
-	void (*prepare)(struct svm_test *test);
-	void (*prepare_gif_clear)(struct svm_test *test);
-	void (*guest_func)(struct svm_test *test);
-	bool (*finished)(struct svm_test *test);
-	bool (*succeeded)(struct svm_test *test);
-	int exits;
-	ulong scratch;
+	void (*prepare)(struct svm_test_context *ctx);
+	void (*prepare_gif_clear)(struct svm_test_context *ctx);
+	void (*guest_func)(struct svm_test_context *ctx);
+	bool (*finished)(struct svm_test_context *ctx);
+	bool (*succeeded)(struct svm_test_context *ctx);
 	/* Alternative test interface. */
 	void (*v2)(void);
 	int on_vcpu;
-	bool on_vcpu_done;
 };
 
-typedef void (*test_guest_func)(struct svm_test *);
+typedef void (*test_guest_func)(struct svm_test_context *ctx);
 
 int run_svm_tests(int ac, char **av, struct svm_test *svm_tests);
 
 bool smp_supported(void);
 bool default_supported(void);
-void default_prepare(struct svm_test *test);
-void default_prepare_gif_clear(struct svm_test *test);
-bool default_finished(struct svm_test *test);
-int get_test_stage(struct svm_test *test);
-void set_test_stage(struct svm_test *test, int s);
-void inc_test_stage(struct svm_test *test);
+void default_prepare(struct svm_test_context *ctx);
+void default_prepare_gif_clear(struct svm_test_context *ctx);
+bool default_finished(struct svm_test_context *ctx);
+int get_test_stage(struct svm_test_context *ctx);
+void set_test_stage(struct svm_test_context *ctx, int s);
+void inc_test_stage(struct svm_test_context *ctx);
 int __svm_vmrun(u64 rip);
 void __svm_bare_vmrun(void);
 int svm_vmrun(void);
 void test_set_guest(test_guest_func func);
 
 
-extern struct svm_test svm_tests[];
 extern struct svm_vcpu vcpu0;
 
 #endif
diff --git a/x86/svm_npt.c b/x86/svm_npt.c
index 53a82793..fe6cbb29 100644
--- a/x86/svm_npt.c
+++ b/x86/svm_npt.c
@@ -6,11 +6,11 @@
 
 static void *scratch_page;
 
-static void null_test(struct svm_test *test)
+static void null_test(struct svm_test_context *ctx)
 {
 }
 
-static void npt_np_prepare(struct svm_test *test)
+static void npt_np_prepare(struct svm_test_context *ctx)
 {
 	u64 *pte;
 
@@ -20,12 +20,12 @@ static void npt_np_prepare(struct svm_test *test)
 	*pte &= ~1ULL;
 }
 
-static void npt_np_test(struct svm_test *test)
+static void npt_np_test(struct svm_test_context *ctx)
 {
 	(void)*(volatile u64 *)scratch_page;
 }
 
-static bool npt_np_check(struct svm_test *test)
+static bool npt_np_check(struct svm_test_context *ctx)
 {
 	u64 *pte = npt_get_pte((u64) scratch_page);
 
@@ -35,12 +35,12 @@ static bool npt_np_check(struct svm_test *test)
 	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000004ULL);
 }
 
-static void npt_nx_prepare(struct svm_test *test)
+static void npt_nx_prepare(struct svm_test_context *ctx)
 {
 	u64 *pte;
 
-	test->scratch = rdmsr(MSR_EFER);
-	wrmsr(MSR_EFER, test->scratch | EFER_NX);
+	ctx->scratch = rdmsr(MSR_EFER);
+	wrmsr(MSR_EFER, ctx->scratch | EFER_NX);
 
 	/* Clear the guest's EFER.NX, it should not affect NPT behavior. */
 	vcpu0.vmcb->save.efer &= ~EFER_NX;
@@ -50,11 +50,11 @@ static void npt_nx_prepare(struct svm_test *test)
 	*pte |= PT64_NX_MASK;
 }
 
-static bool npt_nx_check(struct svm_test *test)
+static bool npt_nx_check(struct svm_test_context *ctx)
 {
 	u64 *pte = npt_get_pte((u64) null_test);
 
-	wrmsr(MSR_EFER, test->scratch);
+	wrmsr(MSR_EFER, ctx->scratch);
 
 	*pte &= ~PT64_NX_MASK;
 
@@ -62,7 +62,7 @@ static bool npt_nx_check(struct svm_test *test)
 	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000015ULL);
 }
 
-static void npt_us_prepare(struct svm_test *test)
+static void npt_us_prepare(struct svm_test_context *ctx)
 {
 	u64 *pte;
 
@@ -72,12 +72,12 @@ static void npt_us_prepare(struct svm_test *test)
 	*pte &= ~(1ULL << 2);
 }
 
-static void npt_us_test(struct svm_test *test)
+static void npt_us_test(struct svm_test_context *ctx)
 {
 	(void)*(volatile u64 *)scratch_page;
 }
 
-static bool npt_us_check(struct svm_test *test)
+static bool npt_us_check(struct svm_test_context *ctx)
 {
 	u64 *pte = npt_get_pte((u64) scratch_page);
 
@@ -87,7 +87,7 @@ static bool npt_us_check(struct svm_test *test)
 	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000005ULL);
 }
 
-static void npt_rw_prepare(struct svm_test *test)
+static void npt_rw_prepare(struct svm_test_context *ctx)
 {
 
 	u64 *pte;
@@ -97,14 +97,14 @@ static void npt_rw_prepare(struct svm_test *test)
 	*pte &= ~(1ULL << 1);
 }
 
-static void npt_rw_test(struct svm_test *test)
+static void npt_rw_test(struct svm_test_context *ctx)
 {
 	u64 *data = (void *)(0x80000);
 
 	*data = 0;
 }
 
-static bool npt_rw_check(struct svm_test *test)
+static bool npt_rw_check(struct svm_test_context *ctx)
 {
 	u64 *pte = npt_get_pte(0x80000);
 
@@ -114,7 +114,7 @@ static bool npt_rw_check(struct svm_test *test)
 	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000007ULL);
 }
 
-static void npt_rw_pfwalk_prepare(struct svm_test *test)
+static void npt_rw_pfwalk_prepare(struct svm_test_context *ctx)
 {
 
 	u64 *pte;
@@ -124,7 +124,7 @@ static void npt_rw_pfwalk_prepare(struct svm_test *test)
 	*pte &= ~(1ULL << 1);
 }
 
-static bool npt_rw_pfwalk_check(struct svm_test *test)
+static bool npt_rw_pfwalk_check(struct svm_test_context *ctx)
 {
 	u64 *pte = npt_get_pte(read_cr3());
 
@@ -135,14 +135,14 @@ static bool npt_rw_pfwalk_check(struct svm_test *test)
 	    && (vcpu0.vmcb->control.exit_info_2 == read_cr3());
 }
 
-static void npt_l1mmio_prepare(struct svm_test *test)
+static void npt_l1mmio_prepare(struct svm_test_context *ctx)
 {
 }
 
 u32 nested_apic_version1;
 u32 nested_apic_version2;
 
-static void npt_l1mmio_test(struct svm_test *test)
+static void npt_l1mmio_test(struct svm_test_context *ctx)
 {
 	volatile u32 *data = (volatile void *)(0xfee00030UL);
 
@@ -150,7 +150,7 @@ static void npt_l1mmio_test(struct svm_test *test)
 	nested_apic_version2 = *data;
 }
 
-static bool npt_l1mmio_check(struct svm_test *test)
+static bool npt_l1mmio_check(struct svm_test_context *ctx)
 {
 	volatile u32 *data = (volatile void *)(0xfee00030);
 	u32 lvr = *data;
@@ -158,7 +158,7 @@ static bool npt_l1mmio_check(struct svm_test *test)
 	return nested_apic_version1 == lvr && nested_apic_version2 == lvr;
 }
 
-static void npt_rw_l1mmio_prepare(struct svm_test *test)
+static void npt_rw_l1mmio_prepare(struct svm_test_context *ctx)
 {
 
 	u64 *pte;
@@ -168,14 +168,14 @@ static void npt_rw_l1mmio_prepare(struct svm_test *test)
 	*pte &= ~(1ULL << 1);
 }
 
-static void npt_rw_l1mmio_test(struct svm_test *test)
+static void npt_rw_l1mmio_test(struct svm_test_context *ctx)
 {
 	volatile u32 *data = (volatile void *)(0xfee00080);
 
 	*data = *data;
 }
 
-static bool npt_rw_l1mmio_check(struct svm_test *test)
+static bool npt_rw_l1mmio_check(struct svm_test_context *ctx)
 {
 	u64 *pte = npt_get_pte(0xfee00080);
 
@@ -185,7 +185,7 @@ static bool npt_rw_l1mmio_check(struct svm_test *test)
 	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000007ULL);
 }
 
-static void basic_guest_main(struct svm_test *test)
+static void basic_guest_main(struct svm_test_context *ctx)
 {
 }
 
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 0312df33..c29e9a5d 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -38,54 +38,54 @@ u64 latclgi_max;
 u64 latclgi_min;
 u64 runs;
 
-static void null_test(struct svm_test *test)
+static void null_test(struct svm_test_context *ctx)
 {
 }
 
-static bool null_check(struct svm_test *test)
+static bool null_check(struct svm_test_context *ctx)
 {
 	return vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL;
 }
 
-static void prepare_no_vmrun_int(struct svm_test *test)
+static void prepare_no_vmrun_int(struct svm_test_context *ctx)
 {
 	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMRUN);
 }
 
-static bool check_no_vmrun_int(struct svm_test *test)
+static bool check_no_vmrun_int(struct svm_test_context *ctx)
 {
 	return vcpu0.vmcb->control.exit_code == SVM_EXIT_ERR;
 }
 
-static void test_vmrun(struct svm_test *test)
+static void test_vmrun(struct svm_test_context *ctx)
 {
 	asm volatile ("vmrun %0" : : "a"(virt_to_phys(vcpu0.vmcb)));
 }
 
-static bool check_vmrun(struct svm_test *test)
+static bool check_vmrun(struct svm_test_context *ctx)
 {
 	return vcpu0.vmcb->control.exit_code == SVM_EXIT_VMRUN;
 }
 
-static void prepare_rsm_intercept(struct svm_test *test)
+static void prepare_rsm_intercept(struct svm_test_context *ctx)
 {
 	vcpu0.vmcb->control.intercept |= 1 << INTERCEPT_RSM;
 	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << UD_VECTOR);
 }
 
-static void test_rsm_intercept(struct svm_test *test)
+static void test_rsm_intercept(struct svm_test_context *ctx)
 {
 	asm volatile ("rsm" : : : "memory");
 }
 
-static bool check_rsm_intercept(struct svm_test *test)
+static bool check_rsm_intercept(struct svm_test_context *ctx)
 {
-	return get_test_stage(test) == 2;
+	return get_test_stage(ctx) == 2;
 }
 
-static bool finished_rsm_intercept(struct svm_test *test)
+static bool finished_rsm_intercept(struct svm_test_context *ctx)
 {
-	switch (get_test_stage(test)) {
+	switch (get_test_stage(ctx)) {
 	case 0:
 		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_RSM) {
 			report_fail("VMEXIT not due to rsm. Exit reason 0x%x",
@@ -93,7 +93,7 @@ static bool finished_rsm_intercept(struct svm_test *test)
 			return true;
 		}
 		vcpu0.vmcb->control.intercept &= ~(1 << INTERCEPT_RSM);
-		inc_test_stage(test);
+		inc_test_stage(ctx);
 		break;
 
 	case 1:
@@ -103,41 +103,41 @@ static bool finished_rsm_intercept(struct svm_test *test)
 			return true;
 		}
 		vcpu0.vmcb->save.rip += 2;
-		inc_test_stage(test);
+		inc_test_stage(ctx);
 		break;
 
 	default:
 		return true;
 	}
-	return get_test_stage(test) == 2;
+	return get_test_stage(ctx) == 2;
 }
 
-static void prepare_cr3_intercept(struct svm_test *test)
+static void prepare_cr3_intercept(struct svm_test_context *ctx)
 {
 	vcpu0.vmcb->control.intercept_cr_read |= 1 << 3;
 }
 
-static void test_cr3_intercept(struct svm_test *test)
+static void test_cr3_intercept(struct svm_test_context *ctx)
 {
-	asm volatile ("mov %%cr3, %0" : "=r"(test->scratch) : : "memory");
+	asm volatile ("mov %%cr3, %0" : "=r"(ctx->scratch) : : "memory");
 }
 
-static bool check_cr3_intercept(struct svm_test *test)
+static bool check_cr3_intercept(struct svm_test_context *ctx)
 {
 	return vcpu0.vmcb->control.exit_code == SVM_EXIT_READ_CR3;
 }
 
-static bool check_cr3_nointercept(struct svm_test *test)
+static bool check_cr3_nointercept(struct svm_test_context *ctx)
 {
-	return null_check(test) && test->scratch == read_cr3();
+	return null_check(ctx) && ctx->scratch == read_cr3();
 }
 
-static void corrupt_cr3_intercept_bypass(void *_test)
+static void corrupt_cr3_intercept_bypass(void *_ctx)
 {
-	struct svm_test *test = _test;
+	struct svm_test_context *ctx = _ctx;
 	extern volatile u32 mmio_insn;
 
-	while (!__sync_bool_compare_and_swap(&test->scratch, 1, 2))
+	while (!__sync_bool_compare_and_swap(&ctx->scratch, 1, 2))
 		pause();
 	pause();
 	pause();
@@ -145,32 +145,32 @@ static void corrupt_cr3_intercept_bypass(void *_test)
 	mmio_insn = 0x90d8200f;  // mov %cr3, %rax; nop
 }
 
-static void prepare_cr3_intercept_bypass(struct svm_test *test)
+static void prepare_cr3_intercept_bypass(struct svm_test_context *ctx)
 {
 	vcpu0.vmcb->control.intercept_cr_read |= 1 << 3;
-	on_cpu_async(1, corrupt_cr3_intercept_bypass, test);
+	on_cpu_async(1, corrupt_cr3_intercept_bypass, ctx);
 }
 
-static void test_cr3_intercept_bypass(struct svm_test *test)
+static void test_cr3_intercept_bypass(struct svm_test_context *ctx)
 {
 	ulong a = 0xa0000;
 
-	test->scratch = 1;
-	while (test->scratch != 2)
+	ctx->scratch = 1;
+	while (ctx->scratch != 2)
 		barrier();
 
 	asm volatile ("mmio_insn: mov %0, (%0); nop"
 		      : "+a"(a) : : "memory");
-	test->scratch = a;
+	ctx->scratch = a;
 }
 
-static void prepare_dr_intercept(struct svm_test *test)
+static void prepare_dr_intercept(struct svm_test_context *ctx)
 {
 	vcpu0.vmcb->control.intercept_dr_read = 0xff;
 	vcpu0.vmcb->control.intercept_dr_write = 0xff;
 }
 
-static void test_dr_intercept(struct svm_test *test)
+static void test_dr_intercept(struct svm_test_context *ctx)
 {
 	unsigned int i, failcnt = 0;
 
@@ -179,32 +179,32 @@ static void test_dr_intercept(struct svm_test *test)
 
 		switch (i) {
 		case 0:
-			asm volatile ("mov %%dr0, %0" : "=r"(test->scratch) : : "memory");
+			asm volatile ("mov %%dr0, %0" : "=r"(ctx->scratch) : : "memory");
 			break;
 		case 1:
-			asm volatile ("mov %%dr1, %0" : "=r"(test->scratch) : : "memory");
+			asm volatile ("mov %%dr1, %0" : "=r"(ctx->scratch) : : "memory");
 			break;
 		case 2:
-			asm volatile ("mov %%dr2, %0" : "=r"(test->scratch) : : "memory");
+			asm volatile ("mov %%dr2, %0" : "=r"(ctx->scratch) : : "memory");
 			break;
 		case 3:
-			asm volatile ("mov %%dr3, %0" : "=r"(test->scratch) : : "memory");
+			asm volatile ("mov %%dr3, %0" : "=r"(ctx->scratch) : : "memory");
 			break;
 		case 4:
-			asm volatile ("mov %%dr4, %0" : "=r"(test->scratch) : : "memory");
+			asm volatile ("mov %%dr4, %0" : "=r"(ctx->scratch) : : "memory");
 			break;
 		case 5:
-			asm volatile ("mov %%dr5, %0" : "=r"(test->scratch) : : "memory");
+			asm volatile ("mov %%dr5, %0" : "=r"(ctx->scratch) : : "memory");
 			break;
 		case 6:
-			asm volatile ("mov %%dr6, %0" : "=r"(test->scratch) : : "memory");
+			asm volatile ("mov %%dr6, %0" : "=r"(ctx->scratch) : : "memory");
 			break;
 		case 7:
-			asm volatile ("mov %%dr7, %0" : "=r"(test->scratch) : : "memory");
+			asm volatile ("mov %%dr7, %0" : "=r"(ctx->scratch) : : "memory");
 			break;
 		}
 
-		if (test->scratch != i) {
+		if (ctx->scratch != i) {
 			report_fail("dr%u read intercept", i);
 			failcnt++;
 		}
@@ -215,41 +215,41 @@ static void test_dr_intercept(struct svm_test *test)
 
 		switch (i) {
 		case 0:
-			asm volatile ("mov %0, %%dr0" : : "r"(test->scratch) : "memory");
+			asm volatile ("mov %0, %%dr0" : : "r"(ctx->scratch) : "memory");
 			break;
 		case 1:
-			asm volatile ("mov %0, %%dr1" : : "r"(test->scratch) : "memory");
+			asm volatile ("mov %0, %%dr1" : : "r"(ctx->scratch) : "memory");
 			break;
 		case 2:
-			asm volatile ("mov %0, %%dr2" : : "r"(test->scratch) : "memory");
+			asm volatile ("mov %0, %%dr2" : : "r"(ctx->scratch) : "memory");
 			break;
 		case 3:
-			asm volatile ("mov %0, %%dr3" : : "r"(test->scratch) : "memory");
+			asm volatile ("mov %0, %%dr3" : : "r"(ctx->scratch) : "memory");
 			break;
 		case 4:
-			asm volatile ("mov %0, %%dr4" : : "r"(test->scratch) : "memory");
+			asm volatile ("mov %0, %%dr4" : : "r"(ctx->scratch) : "memory");
 			break;
 		case 5:
-			asm volatile ("mov %0, %%dr5" : : "r"(test->scratch) : "memory");
+			asm volatile ("mov %0, %%dr5" : : "r"(ctx->scratch) : "memory");
 			break;
 		case 6:
-			asm volatile ("mov %0, %%dr6" : : "r"(test->scratch) : "memory");
+			asm volatile ("mov %0, %%dr6" : : "r"(ctx->scratch) : "memory");
 			break;
 		case 7:
-			asm volatile ("mov %0, %%dr7" : : "r"(test->scratch) : "memory");
+			asm volatile ("mov %0, %%dr7" : : "r"(ctx->scratch) : "memory");
 			break;
 		}
 
-		if (test->scratch != i) {
+		if (ctx->scratch != i) {
 			report_fail("dr%u write intercept", i);
 			failcnt++;
 		}
 	}
 
-	test->scratch = failcnt;
+	ctx->scratch = failcnt;
 }
 
-static bool dr_intercept_finished(struct svm_test *test)
+static bool dr_intercept_finished(struct svm_test_context *ctx)
 {
 	ulong n = (vcpu0.vmcb->control.exit_code - SVM_EXIT_READ_DR0);
 
@@ -264,7 +264,7 @@ static bool dr_intercept_finished(struct svm_test *test)
 	 * http://support.amd.com/TechDocs/24593.pdf
 	 * there are 16 VMEXIT codes each for DR read and write.
 	 */
-	test->scratch = (n % 16);
+	ctx->scratch = (n % 16);
 
 	/* Jump over MOV instruction */
 	vcpu0.vmcb->save.rip += 3;
@@ -272,9 +272,9 @@ static bool dr_intercept_finished(struct svm_test *test)
 	return false;
 }
 
-static bool check_dr_intercept(struct svm_test *test)
+static bool check_dr_intercept(struct svm_test_context *ctx)
 {
-	return !test->scratch;
+	return !ctx->scratch;
 }
 
 static bool next_rip_supported(void)
@@ -282,20 +282,20 @@ static bool next_rip_supported(void)
 	return this_cpu_has(X86_FEATURE_NRIPS);
 }
 
-static void prepare_next_rip(struct svm_test *test)
+static void prepare_next_rip(struct svm_test_context *ctx)
 {
 	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_RDTSC);
 }
 
 
-static void test_next_rip(struct svm_test *test)
+static void test_next_rip(struct svm_test_context *ctx)
 {
 	asm volatile ("rdtsc\n\t"
 		      ".globl exp_next_rip\n\t"
 		      "exp_next_rip:\n\t" ::: "eax", "edx");
 }
 
-static bool check_next_rip(struct svm_test *test)
+static bool check_next_rip(struct svm_test_context *ctx)
 {
 	extern char exp_next_rip;
 	unsigned long address = (unsigned long)&exp_next_rip;
@@ -304,14 +304,14 @@ static bool check_next_rip(struct svm_test *test)
 }
 
 
-static void prepare_msr_intercept(struct svm_test *test)
+static void prepare_msr_intercept(struct svm_test_context *ctx)
 {
 	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT);
 	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
 	memset(svm_get_msr_bitmap(), 0xff, MSR_BITMAP_SIZE);
 }
 
-static void test_msr_intercept(struct svm_test *test)
+static void test_msr_intercept(struct svm_test_context *ctx)
 {
 	unsigned long msr_value = 0xef8056791234abcd; /* Arbitrary value */
 	unsigned long msr_index;
@@ -333,12 +333,12 @@ static void test_msr_intercept(struct svm_test *test)
 		else if (msr_index == 0xc0002000)
 			msr_index = 0xc0010000;
 
-		test->scratch = -1;
+		ctx->scratch = -1;
 
 		rdmsr(msr_index);
 
 		/* Check that a read intercept occurred for MSR at msr_index */
-		if (test->scratch != msr_index)
+		if (ctx->scratch != msr_index)
 			report_fail("MSR 0x%lx read intercept", msr_index);
 
 		/*
@@ -350,14 +350,14 @@ static void test_msr_intercept(struct svm_test *test)
 		wrmsr(msr_index, msr_value);
 
 		/* Check that a write intercept occurred for MSR with msr_value */
-		if (test->scratch != msr_value)
+		if (ctx->scratch != msr_value)
 			report_fail("MSR 0x%lx write intercept", msr_index);
 	}
 
-	test->scratch = -2;
+	ctx->scratch = -2;
 }
 
-static bool msr_intercept_finished(struct svm_test *test)
+static bool msr_intercept_finished(struct svm_test_context *ctx)
 {
 	u32 exit_code = vcpu0.vmcb->control.exit_code;
 	u64 exit_info_1;
@@ -402,37 +402,37 @@ static bool msr_intercept_finished(struct svm_test *test)
 
 	/*
 	 * Test whether the intercept was for RDMSR/WRMSR.
-	 * For RDMSR, test->scratch is set to the MSR index;
+	 * For RDMSR, ctx->scratch is set to the MSR index;
 	 *      RCX holds the MSR index.
-	 * For WRMSR, test->scratch is set to the MSR value;
+	 * For WRMSR, ctx->scratch is set to the MSR value;
 	 *      RDX holds the upper 32 bits of the MSR value,
 	 *      while RAX hold its lower 32 bits.
 	 */
 	if (exit_info_1)
-		test->scratch =
+		ctx->scratch =
 			((vcpu0.regs.rdx << 32) | (vcpu0.regs.rax & 0xffffffff));
 	else
-		test->scratch = vcpu0.regs.rcx;
+		ctx->scratch = vcpu0.regs.rcx;
 
 	return false;
 }
 
-static bool check_msr_intercept(struct svm_test *test)
+static bool check_msr_intercept(struct svm_test_context *ctx)
 {
 	memset(svm_get_msr_bitmap(), 0, MSR_BITMAP_SIZE);
-	return (test->scratch == -2);
+	return (ctx->scratch == -2);
 }
 
-static void prepare_mode_switch(struct svm_test *test)
+static void prepare_mode_switch(struct svm_test_context *ctx)
 {
 	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR)
 		|  (1ULL << UD_VECTOR)
 		|  (1ULL << DF_VECTOR)
 		|  (1ULL << PF_VECTOR);
-	test->scratch = 0;
+	ctx->scratch = 0;
 }
 
-static void test_mode_switch(struct svm_test *test)
+static void test_mode_switch(struct svm_test_context *ctx)
 {
 	asm volatile("	cli\n"
 		     "	ljmp *1f\n" /* jump to 32-bit code segment */
@@ -487,7 +487,7 @@ static void test_mode_switch(struct svm_test *test)
 		     : "rax", "rbx", "rcx", "rdx", "memory");
 }
 
-static bool mode_switch_finished(struct svm_test *test)
+static bool mode_switch_finished(struct svm_test_context *ctx)
 {
 	u64 cr0, cr4, efer;
 
@@ -503,7 +503,7 @@ static bool mode_switch_finished(struct svm_test *test)
 	vcpu0.vmcb->save.rip += 3;
 
 	/* Do sanity checks */
-	switch (test->scratch) {
+	switch (ctx->scratch) {
 	case 0:
 		/* Test should be in real mode now - check for this */
 		if ((cr0  & 0x80000001) || /* CR0.PG, CR0.PE */
@@ -521,99 +521,99 @@ static bool mode_switch_finished(struct svm_test *test)
 	}
 
 	/* one step forward */
-	test->scratch += 1;
+	ctx->scratch += 1;
 
-	return test->scratch == 2;
+	return ctx->scratch == 2;
 }
 
-static bool check_mode_switch(struct svm_test *test)
+static bool check_mode_switch(struct svm_test_context *ctx)
 {
-	return test->scratch == 2;
+	return ctx->scratch == 2;
 }
 
-static void prepare_ioio(struct svm_test *test)
+static void prepare_ioio(struct svm_test_context *ctx)
 {
 	u8 *io_bitmap = svm_get_io_bitmap();
 
 	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_IOIO_PROT);
-	test->scratch = 0;
+	ctx->scratch = 0;
 	memset(io_bitmap, 0, 8192);
 	io_bitmap[8192] = 0xFF;
 }
 
-static void test_ioio(struct svm_test *test)
+static void test_ioio(struct svm_test_context *ctx)
 {
 	u8 *io_bitmap = svm_get_io_bitmap();
 
 	// stage 0, test IO pass
 	inb(0x5000);
 	outb(0x0, 0x5000);
-	if (get_test_stage(test) != 0)
+	if (get_test_stage(ctx) != 0)
 		goto fail;
 
 	// test IO width, in/out
 	io_bitmap[0] = 0xFF;
-	inc_test_stage(test);
+	inc_test_stage(ctx);
 	inb(0x0);
-	if (get_test_stage(test) != 2)
+	if (get_test_stage(ctx) != 2)
 		goto fail;
 
 	outw(0x0, 0x0);
-	if (get_test_stage(test) != 3)
+	if (get_test_stage(ctx) != 3)
 		goto fail;
 
 	inl(0x0);
-	if (get_test_stage(test) != 4)
+	if (get_test_stage(ctx) != 4)
 		goto fail;
 
 	// test low/high IO port
 	io_bitmap[0x5000 / 8] = (1 << (0x5000 % 8));
 	inb(0x5000);
-	if (get_test_stage(test) != 5)
+	if (get_test_stage(ctx) != 5)
 		goto fail;
 
 	io_bitmap[0x9000 / 8] = (1 << (0x9000 % 8));
 	inw(0x9000);
-	if (get_test_stage(test) != 6)
+	if (get_test_stage(ctx) != 6)
 		goto fail;
 
 	// test partial pass
 	io_bitmap[0x5000 / 8] = (1 << (0x5000 % 8));
 	inl(0x4FFF);
-	if (get_test_stage(test) != 7)
+	if (get_test_stage(ctx) != 7)
 		goto fail;
 
 	// test across pages
-	inc_test_stage(test);
+	inc_test_stage(ctx);
 	inl(0x7FFF);
-	if (get_test_stage(test) != 8)
+	if (get_test_stage(ctx) != 8)
 		goto fail;
 
-	inc_test_stage(test);
+	inc_test_stage(ctx);
 	io_bitmap[0x8000 / 8] = 1 << (0x8000 % 8);
 	inl(0x7FFF);
-	if (get_test_stage(test) != 10)
+	if (get_test_stage(ctx) != 10)
 		goto fail;
 
 	io_bitmap[0] = 0;
 	inl(0xFFFF);
-	if (get_test_stage(test) != 11)
+	if (get_test_stage(ctx) != 11)
 		goto fail;
 
 	io_bitmap[0] = 0xFF;
 	io_bitmap[8192] = 0;
 	inl(0xFFFF);
-	inc_test_stage(test);
-	if (get_test_stage(test) != 12)
+	inc_test_stage(ctx);
+	if (get_test_stage(ctx) != 12)
 		goto fail;
 
 	return;
 fail:
-	report_fail("stage %d", get_test_stage(test));
-	test->scratch = -1;
+	report_fail("stage %d", get_test_stage(ctx));
+	ctx->scratch = -1;
 }
 
-static bool ioio_finished(struct svm_test *test)
+static bool ioio_finished(struct svm_test_context *ctx)
 {
 	unsigned port, size;
 	u8 *io_bitmap = svm_get_io_bitmap();
@@ -626,7 +626,7 @@ static bool ioio_finished(struct svm_test *test)
 		return true;
 
 	/* one step forward */
-	test->scratch += 1;
+	ctx->scratch += 1;
 
 	port = vcpu0.vmcb->control.exit_info_1 >> 16;
 	size = (vcpu0.vmcb->control.exit_info_1 >> SVM_IOIO_SIZE_SHIFT) & 7;
@@ -639,40 +639,40 @@ static bool ioio_finished(struct svm_test *test)
 	return false;
 }
 
-static bool check_ioio(struct svm_test *test)
+static bool check_ioio(struct svm_test_context *ctx)
 {
 	u8 *io_bitmap = svm_get_io_bitmap();
 
 	memset(io_bitmap, 0, 8193);
-	return test->scratch != -1;
+	return ctx->scratch != -1;
 }
 
-static void prepare_asid_zero(struct svm_test *test)
+static void prepare_asid_zero(struct svm_test_context *ctx)
 {
 	vcpu0.vmcb->control.asid = 0;
 }
 
-static void test_asid_zero(struct svm_test *test)
+static void test_asid_zero(struct svm_test_context *ctx)
 {
 	asm volatile ("vmmcall\n\t");
 }
 
-static bool check_asid_zero(struct svm_test *test)
+static bool check_asid_zero(struct svm_test_context *ctx)
 {
 	return vcpu0.vmcb->control.exit_code == SVM_EXIT_ERR;
 }
 
-static void sel_cr0_bug_prepare(struct svm_test *test)
+static void sel_cr0_bug_prepare(struct svm_test_context *ctx)
 {
 	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
 }
 
-static bool sel_cr0_bug_finished(struct svm_test *test)
+static bool sel_cr0_bug_finished(struct svm_test_context *ctx)
 {
 	return true;
 }
 
-static void sel_cr0_bug_test(struct svm_test *test)
+static void sel_cr0_bug_test(struct svm_test_context *ctx)
 {
 	unsigned long cr0;
 
@@ -690,7 +690,7 @@ static void sel_cr0_bug_test(struct svm_test *test)
 	exit(report_summary());
 }
 
-static bool sel_cr0_bug_check(struct svm_test *test)
+static bool sel_cr0_bug_check(struct svm_test_context *ctx)
 {
 	return vcpu0.vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
 }
@@ -704,7 +704,7 @@ static bool tsc_adjust_supported(void)
 	return this_cpu_has(X86_FEATURE_TSC_ADJUST);
 }
 
-static void tsc_adjust_prepare(struct svm_test *test)
+static void tsc_adjust_prepare(struct svm_test_context *ctx)
 {
 	vcpu0.vmcb->control.tsc_offset = TSC_OFFSET_VALUE;
 
@@ -713,7 +713,7 @@ static void tsc_adjust_prepare(struct svm_test *test)
 	ok = adjust == -TSC_ADJUST_VALUE;
 }
 
-static void tsc_adjust_test(struct svm_test *test)
+static void tsc_adjust_test(struct svm_test_context *ctx)
 {
 	int64_t adjust = rdmsr(MSR_IA32_TSC_ADJUST);
 	ok &= adjust == -TSC_ADJUST_VALUE;
@@ -731,7 +731,7 @@ static void tsc_adjust_test(struct svm_test *test)
 	ok &= (l1_tsc_msr + TSC_ADJUST_VALUE - l1_tsc) < TSC_ADJUST_VALUE;
 }
 
-static bool tsc_adjust_check(struct svm_test *test)
+static bool tsc_adjust_check(struct svm_test_context *ctx)
 {
 	int64_t adjust = rdmsr(MSR_IA32_TSC_ADJUST);
 
@@ -745,7 +745,7 @@ static u64 guest_tsc_delay_value;
 #define TSC_SHIFT 24
 #define TSC_SCALE_ITERATIONS 10
 
-static void svm_tsc_scale_guest(struct svm_test *test)
+static void svm_tsc_scale_guest(struct svm_test_context *ctx)
 {
 	u64 start_tsc = rdtsc();
 
@@ -803,7 +803,7 @@ static void svm_tsc_scale_test(void)
 	svm_tsc_scale_run_testcase(50, 0.0001, rdrand());
 }
 
-static void latency_prepare(struct svm_test *test)
+static void latency_prepare(struct svm_test_context *ctx)
 {
 	runs = LATENCY_RUNS;
 	latvmrun_min = latvmexit_min = -1ULL;
@@ -812,7 +812,7 @@ static void latency_prepare(struct svm_test *test)
 	tsc_start = rdtsc();
 }
 
-static void latency_test(struct svm_test *test)
+static void latency_test(struct svm_test_context *ctx)
 {
 	u64 cycles;
 
@@ -835,7 +835,7 @@ start:
 	goto start;
 }
 
-static bool latency_finished(struct svm_test *test)
+static bool latency_finished(struct svm_test_context *ctx)
 {
 	u64 cycles;
 
@@ -860,13 +860,13 @@ static bool latency_finished(struct svm_test *test)
 	return runs == 0;
 }
 
-static bool latency_finished_clean(struct svm_test *test)
+static bool latency_finished_clean(struct svm_test_context *ctx)
 {
 	vcpu0.vmcb->control.clean = VMCB_CLEAN_ALL;
-	return latency_finished(test);
+	return latency_finished(ctx);
 }
 
-static bool latency_check(struct svm_test *test)
+static bool latency_check(struct svm_test_context *ctx)
 {
 	printf("    Latency VMRUN : max: %ld min: %ld avg: %ld\n", latvmrun_max,
 	       latvmrun_min, vmrun_sum / LATENCY_RUNS);
@@ -875,7 +875,7 @@ static bool latency_check(struct svm_test *test)
 	return true;
 }
 
-static void lat_svm_insn_prepare(struct svm_test *test)
+static void lat_svm_insn_prepare(struct svm_test_context *ctx)
 {
 	runs = LATENCY_RUNS;
 	latvmload_min = latvmsave_min = latstgi_min = latclgi_min = -1ULL;
@@ -883,7 +883,7 @@ static void lat_svm_insn_prepare(struct svm_test *test)
 	vmload_sum = vmsave_sum = stgi_sum = clgi_sum;
 }
 
-static bool lat_svm_insn_finished(struct svm_test *test)
+static bool lat_svm_insn_finished(struct svm_test_context *ctx)
 {
 	u64 vmcb_phys = virt_to_phys(vcpu0.vmcb);
 	u64 cycles;
@@ -931,7 +931,7 @@ static bool lat_svm_insn_finished(struct svm_test *test)
 	return true;
 }
 
-static bool lat_svm_insn_check(struct svm_test *test)
+static bool lat_svm_insn_check(struct svm_test_context *ctx)
 {
 	printf("    Latency VMLOAD: max: %ld min: %ld avg: %ld\n", latvmload_max,
 	       latvmload_min, vmload_sum / LATENCY_RUNS);
@@ -953,11 +953,10 @@ static void pending_event_ipi_isr(isr_regs_t *regs)
 	eoi();
 }
 
-static void pending_event_prepare(struct svm_test *test)
+static void pending_event_prepare(struct svm_test_context *ctx)
 {
 	int ipi_vector = 0xf1;
 
-
 	pending_event_ipi_fired = false;
 
 	handle_irq(ipi_vector, pending_event_ipi_isr);
@@ -970,17 +969,17 @@ static void pending_event_prepare(struct svm_test *test)
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL |
 		       APIC_DM_FIXED | ipi_vector, 0);
 
-	set_test_stage(test, 0);
+	set_test_stage(ctx, 0);
 }
 
-static void pending_event_test(struct svm_test *test)
+static void pending_event_test(struct svm_test_context *ctx)
 {
 	pending_event_guest_run = true;
 }
 
-static bool pending_event_finished(struct svm_test *test)
+static bool pending_event_finished(struct svm_test_context *ctx)
 {
-	switch (get_test_stage(test)) {
+	switch (get_test_stage(ctx)) {
 	case 0:
 		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_INTR) {
 			report_fail("VMEXIT not due to pending interrupt. Exit reason 0x%x",
@@ -1012,17 +1011,17 @@ static bool pending_event_finished(struct svm_test *test)
 		break;
 	}
 
-	inc_test_stage(test);
+	inc_test_stage(ctx);
 
-	return get_test_stage(test) == 2;
+	return get_test_stage(ctx) == 2;
 }
 
-static bool pending_event_check(struct svm_test *test)
+static bool pending_event_check(struct svm_test_context *ctx)
 {
-	return get_test_stage(test) == 2;
+	return get_test_stage(ctx) == 2;
 }
 
-static void pending_event_cli_prepare(struct svm_test *test)
+static void pending_event_cli_prepare(struct svm_test_context *ctx)
 {
 	pending_event_ipi_fired = false;
 
@@ -1031,18 +1030,18 @@ static void pending_event_cli_prepare(struct svm_test *test)
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL |
 		       APIC_DM_FIXED | 0xf1, 0);
 
-	set_test_stage(test, 0);
+	set_test_stage(ctx, 0);
 }
 
-static void pending_event_cli_prepare_gif_clear(struct svm_test *test)
+static void pending_event_cli_prepare_gif_clear(struct svm_test_context *ctx)
 {
 	asm("cli");
 }
 
-static void pending_event_cli_test(struct svm_test *test)
+static void pending_event_cli_test(struct svm_test_context *ctx)
 {
 	if (pending_event_ipi_fired == true) {
-		set_test_stage(test, -1);
+		set_test_stage(ctx, -1);
 		report_fail("Interrupt preceeded guest");
 		vmmcall();
 	}
@@ -1051,7 +1050,7 @@ static void pending_event_cli_test(struct svm_test *test)
 	sti_nop_cli();
 
 	if (pending_event_ipi_fired != true) {
-		set_test_stage(test, -1);
+		set_test_stage(ctx, -1);
 		report_fail("Interrupt not triggered by guest");
 	}
 
@@ -1065,7 +1064,7 @@ static void pending_event_cli_test(struct svm_test *test)
 	sti_nop_cli();
 }
 
-static bool pending_event_cli_finished(struct svm_test *test)
+static bool pending_event_cli_finished(struct svm_test_context *ctx)
 {
 	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 		report_fail("VM_EXIT return to host is not EXIT_VMMCALL exit reason 0x%x",
@@ -1073,7 +1072,7 @@ static bool pending_event_cli_finished(struct svm_test *test)
 		return true;
 	}
 
-	switch (get_test_stage(test)) {
+	switch (get_test_stage(ctx)) {
 	case 0:
 		vcpu0.vmcb->save.rip += 3;
 
@@ -1106,14 +1105,14 @@ static bool pending_event_cli_finished(struct svm_test *test)
 		return true;
 	}
 
-	inc_test_stage(test);
+	inc_test_stage(ctx);
 
-	return get_test_stage(test) == 2;
+	return get_test_stage(ctx) == 2;
 }
 
-static bool pending_event_cli_check(struct svm_test *test)
+static bool pending_event_cli_check(struct svm_test_context *ctx)
 {
-	return get_test_stage(test) == 2;
+	return get_test_stage(ctx) == 2;
 }
 
 #define TIMER_VECTOR    222
@@ -1126,14 +1125,14 @@ static void timer_isr(isr_regs_t *regs)
 	apic_write(APIC_EOI, 0);
 }
 
-static void interrupt_prepare(struct svm_test *test)
+static void interrupt_prepare(struct svm_test_context *ctx)
 {
 	handle_irq(TIMER_VECTOR, timer_isr);
 	timer_fired = false;
-	set_test_stage(test, 0);
+	set_test_stage(ctx, 0);
 }
 
-static void interrupt_test(struct svm_test *test)
+static void interrupt_test(struct svm_test_context *ctx)
 {
 	long long start, loops;
 
@@ -1147,7 +1146,7 @@ static void interrupt_test(struct svm_test *test)
 	report(timer_fired, "direct interrupt while running guest");
 
 	if (!timer_fired) {
-		set_test_stage(test, -1);
+		set_test_stage(ctx, -1);
 		vmmcall();
 	}
 
@@ -1163,7 +1162,7 @@ static void interrupt_test(struct svm_test *test)
 	report(timer_fired, "intercepted interrupt while running guest");
 
 	if (!timer_fired) {
-		set_test_stage(test, -1);
+		set_test_stage(ctx, -1);
 		vmmcall();
 	}
 
@@ -1180,7 +1179,7 @@ static void interrupt_test(struct svm_test *test)
 	       "direct interrupt + hlt");
 
 	if (!timer_fired) {
-		set_test_stage(test, -1);
+		set_test_stage(ctx, -1);
 		vmmcall();
 	}
 
@@ -1197,16 +1196,16 @@ static void interrupt_test(struct svm_test *test)
 	       "intercepted interrupt + hlt");
 
 	if (!timer_fired) {
-		set_test_stage(test, -1);
+		set_test_stage(ctx, -1);
 		vmmcall();
 	}
 
 	apic_cleanup_timer();
 }
 
-static bool interrupt_finished(struct svm_test *test)
+static bool interrupt_finished(struct svm_test_context *ctx)
 {
-	switch (get_test_stage(test)) {
+	switch (get_test_stage(ctx)) {
 	case 0:
 	case 2:
 		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
@@ -1241,14 +1240,14 @@ static bool interrupt_finished(struct svm_test *test)
 		return true;
 	}
 
-	inc_test_stage(test);
+	inc_test_stage(ctx);
 
-	return get_test_stage(test) == 5;
+	return get_test_stage(ctx) == 5;
 }
 
-static bool interrupt_check(struct svm_test *test)
+static bool interrupt_check(struct svm_test_context *ctx)
 {
-	return get_test_stage(test) == 5;
+	return get_test_stage(ctx) == 5;
 }
 
 static volatile bool nmi_fired;
@@ -1258,21 +1257,21 @@ static void nmi_handler(struct ex_regs *regs)
 	nmi_fired = true;
 }
 
-static void nmi_prepare(struct svm_test *test)
+static void nmi_prepare(struct svm_test_context *ctx)
 {
 	nmi_fired = false;
 	handle_exception(NMI_VECTOR, nmi_handler);
-	set_test_stage(test, 0);
+	set_test_stage(ctx, 0);
 }
 
-static void nmi_test(struct svm_test *test)
+static void nmi_test(struct svm_test_context *ctx)
 {
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_NMI | APIC_INT_ASSERT, 0);
 
 	report(nmi_fired, "direct NMI while running guest");
 
 	if (!nmi_fired)
-		set_test_stage(test, -1);
+		set_test_stage(ctx, -1);
 
 	vmmcall();
 
@@ -1282,14 +1281,14 @@ static void nmi_test(struct svm_test *test)
 
 	if (!nmi_fired) {
 		report(nmi_fired, "intercepted pending NMI not dispatched");
-		set_test_stage(test, -1);
+		set_test_stage(ctx, -1);
 	}
 
 }
 
-static bool nmi_finished(struct svm_test *test)
+static bool nmi_finished(struct svm_test_context *ctx)
 {
-	switch (get_test_stage(test)) {
+	switch (get_test_stage(ctx)) {
 	case 0:
 		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
@@ -1318,30 +1317,30 @@ static bool nmi_finished(struct svm_test *test)
 		return true;
 	}
 
-	inc_test_stage(test);
+	inc_test_stage(ctx);
 
-	return get_test_stage(test) == 3;
+	return get_test_stage(ctx) == 3;
 }
 
-static bool nmi_check(struct svm_test *test)
+static bool nmi_check(struct svm_test_context *ctx)
 {
-	return get_test_stage(test) == 3;
+	return get_test_stage(ctx) == 3;
 }
 
 #define NMI_DELAY 100000000ULL
 
-static void nmi_message_thread(void *_test)
+static void nmi_message_thread(void *_ctx)
 {
-	struct svm_test *test = _test;
+	struct svm_test_context *ctx = _ctx;
 
-	while (get_test_stage(test) != 1)
+	while (get_test_stage(ctx) != 1)
 		pause();
 
 	delay(NMI_DELAY);
 
 	apic_icr_write(APIC_DEST_PHYSICAL | APIC_DM_NMI | APIC_INT_ASSERT, id_map[0]);
 
-	while (get_test_stage(test) != 2)
+	while (get_test_stage(ctx) != 2)
 		pause();
 
 	delay(NMI_DELAY);
@@ -1349,15 +1348,15 @@ static void nmi_message_thread(void *_test)
 	apic_icr_write(APIC_DEST_PHYSICAL | APIC_DM_NMI | APIC_INT_ASSERT, id_map[0]);
 }
 
-static void nmi_hlt_test(struct svm_test *test)
+static void nmi_hlt_test(struct svm_test_context *ctx)
 {
 	long long start;
 
-	on_cpu_async(1, nmi_message_thread, test);
+	on_cpu_async(1, nmi_message_thread, ctx);
 
 	start = rdtsc();
 
-	set_test_stage(test, 1);
+	set_test_stage(ctx, 1);
 
 	asm volatile ("hlt");
 
@@ -1365,7 +1364,7 @@ static void nmi_hlt_test(struct svm_test *test)
 	       "direct NMI + hlt");
 
 	if (!nmi_fired)
-		set_test_stage(test, -1);
+		set_test_stage(ctx, -1);
 
 	nmi_fired = false;
 
@@ -1373,7 +1372,7 @@ static void nmi_hlt_test(struct svm_test *test)
 
 	start = rdtsc();
 
-	set_test_stage(test, 2);
+	set_test_stage(ctx, 2);
 
 	asm volatile ("hlt");
 
@@ -1382,16 +1381,16 @@ static void nmi_hlt_test(struct svm_test *test)
 
 	if (!nmi_fired) {
 		report(nmi_fired, "intercepted pending NMI not dispatched");
-		set_test_stage(test, -1);
+		set_test_stage(ctx, -1);
 		vmmcall();
 	}
 
-	set_test_stage(test, 3);
+	set_test_stage(ctx, 3);
 }
 
-static bool nmi_hlt_finished(struct svm_test *test)
+static bool nmi_hlt_finished(struct svm_test_context *ctx)
 {
-	switch (get_test_stage(test)) {
+	switch (get_test_stage(ctx)) {
 	case 1:
 		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
@@ -1420,12 +1419,12 @@ static bool nmi_hlt_finished(struct svm_test *test)
 		return true;
 	}
 
-	return get_test_stage(test) == 3;
+	return get_test_stage(ctx) == 3;
 }
 
-static bool nmi_hlt_check(struct svm_test *test)
+static bool nmi_hlt_check(struct svm_test_context *ctx)
 {
-	return get_test_stage(test) == 3;
+	return get_test_stage(ctx) == 3;
 }
 
 static volatile int count_exc = 0;
@@ -1435,21 +1434,21 @@ static void my_isr(struct ex_regs *r)
 	count_exc++;
 }
 
-static void exc_inject_prepare(struct svm_test *test)
+static void exc_inject_prepare(struct svm_test_context *ctx)
 {
 	handle_exception(DE_VECTOR, my_isr);
 	handle_exception(NMI_VECTOR, my_isr);
 }
 
 
-static void exc_inject_test(struct svm_test *test)
+static void exc_inject_test(struct svm_test_context *ctx)
 {
 	asm volatile ("vmmcall\n\tvmmcall\n\t");
 }
 
-static bool exc_inject_finished(struct svm_test *test)
+static bool exc_inject_finished(struct svm_test_context *ctx)
 {
-	switch (get_test_stage(test)) {
+	switch (get_test_stage(ctx)) {
 	case 0:
 		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
@@ -1490,14 +1489,14 @@ static bool exc_inject_finished(struct svm_test *test)
 		return true;
 	}
 
-	inc_test_stage(test);
+	inc_test_stage(ctx);
 
-	return get_test_stage(test) == 3;
+	return get_test_stage(ctx) == 3;
 }
 
-static bool exc_inject_check(struct svm_test *test)
+static bool exc_inject_check(struct svm_test_context *ctx)
 {
-	return count_exc == 1 && get_test_stage(test) == 3;
+	return count_exc == 1 && get_test_stage(ctx) == 3;
 }
 
 static volatile bool virq_fired;
@@ -1507,7 +1506,7 @@ static void virq_isr(isr_regs_t *regs)
 	virq_fired = true;
 }
 
-static void virq_inject_prepare(struct svm_test *test)
+static void virq_inject_prepare(struct svm_test_context *ctx)
 {
 	handle_irq(0xf1, virq_isr);
 
@@ -1515,14 +1514,14 @@ static void virq_inject_prepare(struct svm_test *test)
 		(0x0f << V_INTR_PRIO_SHIFT); // Set to the highest priority
 	vcpu0.vmcb->control.int_vector = 0xf1;
 	virq_fired = false;
-	set_test_stage(test, 0);
+	set_test_stage(ctx, 0);
 }
 
-static void virq_inject_test(struct svm_test *test)
+static void virq_inject_test(struct svm_test_context *ctx)
 {
 	if (virq_fired) {
 		report_fail("virtual interrupt fired before L2 sti");
-		set_test_stage(test, -1);
+		set_test_stage(ctx, -1);
 		vmmcall();
 	}
 
@@ -1530,14 +1529,14 @@ static void virq_inject_test(struct svm_test *test)
 
 	if (!virq_fired) {
 		report_fail("virtual interrupt not fired after L2 sti");
-		set_test_stage(test, -1);
+		set_test_stage(ctx, -1);
 	}
 
 	vmmcall();
 
 	if (virq_fired) {
 		report_fail("virtual interrupt fired before L2 sti after VINTR intercept");
-		set_test_stage(test, -1);
+		set_test_stage(ctx, -1);
 		vmmcall();
 	}
 
@@ -1545,7 +1544,7 @@ static void virq_inject_test(struct svm_test *test)
 
 	if (!virq_fired) {
 		report_fail("virtual interrupt not fired after return from VINTR intercept");
-		set_test_stage(test, -1);
+		set_test_stage(ctx, -1);
 	}
 
 	vmmcall();
@@ -1554,18 +1553,18 @@ static void virq_inject_test(struct svm_test *test)
 
 	if (virq_fired) {
 		report_fail("virtual interrupt fired when V_IRQ_PRIO less than V_TPR");
-		set_test_stage(test, -1);
+		set_test_stage(ctx, -1);
 	}
 
 	vmmcall();
 	vmmcall();
 }
 
-static bool virq_inject_finished(struct svm_test *test)
+static bool virq_inject_finished(struct svm_test_context *ctx)
 {
 	vcpu0.vmcb->save.rip += 3;
 
-	switch (get_test_stage(test)) {
+	switch (get_test_stage(ctx)) {
 	case 0:
 		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
@@ -1631,14 +1630,14 @@ static bool virq_inject_finished(struct svm_test *test)
 		return true;
 	}
 
-	inc_test_stage(test);
+	inc_test_stage(ctx);
 
-	return get_test_stage(test) == 5;
+	return get_test_stage(ctx) == 5;
 }
 
-static bool virq_inject_check(struct svm_test *test)
+static bool virq_inject_check(struct svm_test_context *ctx)
 {
-	return get_test_stage(test) == 5;
+	return get_test_stage(ctx) == 5;
 }
 
 /*
@@ -1671,9 +1670,9 @@ static void reg_corruption_isr(isr_regs_t *regs)
 	apic_write(APIC_EOI, 0);
 }
 
-static void reg_corruption_prepare(struct svm_test *test)
+static void reg_corruption_prepare(struct svm_test_context *ctx)
 {
-	set_test_stage(test, 0);
+	set_test_stage(ctx, 0);
 
 	vcpu0.vmcb->control.int_ctl = V_INTR_MASKING_MASK;
 	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
@@ -1685,7 +1684,7 @@ static void reg_corruption_prepare(struct svm_test *test)
 	apic_start_timer(1000);
 }
 
-static void reg_corruption_test(struct svm_test *test)
+static void reg_corruption_test(struct svm_test_context *ctx)
 {
 	/* this is an endless loop, which is interrupted by the timer interrupt */
 	asm volatile (
@@ -1703,12 +1702,12 @@ static void reg_corruption_test(struct svm_test *test)
 		      );
 }
 
-static bool reg_corruption_finished(struct svm_test *test)
+static bool reg_corruption_finished(struct svm_test_context *ctx)
 {
 	if (isr_cnt == 10000) {
 		report_pass("No RIP corruption detected after %d timer interrupts",
 			    isr_cnt);
-		set_test_stage(test, 1);
+		set_test_stage(ctx, 1);
 		goto cleanup;
 	}
 
@@ -1732,9 +1731,9 @@ cleanup:
 
 }
 
-static bool reg_corruption_check(struct svm_test *test)
+static bool reg_corruption_check(struct svm_test_context *ctx)
 {
-	return get_test_stage(test) == 1;
+	return get_test_stage(ctx) == 1;
 }
 
 static void get_tss_entry(void *data)
@@ -1744,7 +1743,7 @@ static void get_tss_entry(void *data)
 
 static int orig_cpu_count;
 
-static void init_startup_prepare(struct svm_test *test)
+static void init_startup_prepare(struct svm_test_context *ctx)
 {
 	gdt_entry_t *tss_entry;
 	int i;
@@ -1768,30 +1767,30 @@ static void init_startup_prepare(struct svm_test *test)
 		delay(100000000ULL);
 }
 
-static bool init_startup_finished(struct svm_test *test)
+static bool init_startup_finished(struct svm_test_context *ctx)
 {
 	return true;
 }
 
-static bool init_startup_check(struct svm_test *test)
+static bool init_startup_check(struct svm_test_context *ctx)
 {
 	return atomic_read(&cpu_online_count) == orig_cpu_count;
 }
 
 static volatile bool init_intercept;
 
-static void init_intercept_prepare(struct svm_test *test)
+static void init_intercept_prepare(struct svm_test_context *ctx)
 {
 	init_intercept = false;
 	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_INIT);
 }
 
-static void init_intercept_test(struct svm_test *test)
+static void init_intercept_test(struct svm_test_context *ctx)
 {
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_INIT | APIC_INT_ASSERT, 0);
 }
 
-static bool init_intercept_finished(struct svm_test *test)
+static bool init_intercept_finished(struct svm_test_context *ctx)
 {
 	vcpu0.vmcb->save.rip += 3;
 
@@ -1809,7 +1808,7 @@ static bool init_intercept_finished(struct svm_test *test)
 	return true;
 }
 
-static bool init_intercept_check(struct svm_test *test)
+static bool init_intercept_check(struct svm_test_context *ctx)
 {
 	return init_intercept;
 }
@@ -1865,36 +1864,36 @@ static void host_rflags_db_handler(struct ex_regs *r)
 	}
 }
 
-static void host_rflags_prepare(struct svm_test *test)
+static void host_rflags_prepare(struct svm_test_context *ctx)
 {
 	handle_exception(DB_VECTOR, host_rflags_db_handler);
-	set_test_stage(test, 0);
+	set_test_stage(ctx, 0);
 }
 
-static void host_rflags_prepare_gif_clear(struct svm_test *test)
+static void host_rflags_prepare_gif_clear(struct svm_test_context *ctx)
 {
 	if (host_rflags_set_tf)
 		write_rflags(read_rflags() | X86_EFLAGS_TF);
 }
 
-static void host_rflags_test(struct svm_test *test)
+static void host_rflags_test(struct svm_test_context *ctx)
 {
 	while (1) {
-		if (get_test_stage(test) > 0) {
+		if (get_test_stage(ctx) > 0) {
 			if ((host_rflags_set_tf && !host_rflags_ss_on_vmrun && !host_rflags_db_handler_flag) ||
 			    (host_rflags_set_rf && host_rflags_db_handler_flag == 1))
 				host_rflags_guest_main_flag = 1;
 		}
 
-		if (get_test_stage(test) == 4)
+		if (get_test_stage(ctx) == 4)
 			break;
 		vmmcall();
 	}
 }
 
-static bool host_rflags_finished(struct svm_test *test)
+static bool host_rflags_finished(struct svm_test_context *ctx)
 {
-	switch (get_test_stage(test)) {
+	switch (get_test_stage(ctx)) {
 	case 0:
 		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("Unexpected VMEXIT. Exit reason 0x%x",
@@ -1958,13 +1957,13 @@ static bool host_rflags_finished(struct svm_test *test)
 	default:
 		return true;
 	}
-	inc_test_stage(test);
-	return get_test_stage(test) == 5;
+	inc_test_stage(ctx);
+	return get_test_stage(ctx) == 5;
 }
 
-static bool host_rflags_check(struct svm_test *test)
+static bool host_rflags_check(struct svm_test_context *ctx)
 {
-	return get_test_stage(test) == 4;
+	return get_test_stage(ctx) == 4;
 }
 
 #define TEST(name) { #name, .v2 = name }
@@ -1979,7 +1978,7 @@ static bool host_rflags_check(struct svm_test *test)
  * value than in L1.
  */
 
-static void svm_cr4_osxsave_test_guest(struct svm_test *test)
+static void svm_cr4_osxsave_test_guest(struct svm_test_context *ctx)
 {
 	write_cr4(read_cr4() & ~X86_CR4_OSXSAVE);
 }
@@ -2007,7 +2006,7 @@ static void svm_cr4_osxsave_test(void)
 	report(this_cpu_has(X86_FEATURE_OSXSAVE), "CPUID.01H:ECX.XSAVE set after VMRUN");
 }
 
-static void basic_guest_main(struct svm_test *test)
+static void basic_guest_main(struct svm_test_context *ctx)
 {
 }
 
@@ -2423,7 +2422,7 @@ static void svm_guest_state_test(void)
 	test_canonicalization();
 }
 
-extern void guest_rflags_test_guest(struct svm_test *test);
+extern void guest_rflags_test_guest(struct svm_test_context *ctx);
 extern u64 *insn2;
 extern u64 *guest_end;
 
@@ -2536,7 +2535,7 @@ static void svm_vmrun_errata_test(void)
 	}
 }
 
-static void vmload_vmsave_guest_main(struct svm_test *test)
+static void vmload_vmsave_guest_main(struct svm_test_context *ctx)
 {
 	u64 vmcb_phys = virt_to_phys(vcpu0.vmcb);
 
@@ -2599,18 +2598,18 @@ static void svm_vmload_vmsave(void)
 	vcpu0.vmcb->control.intercept = intercept_saved;
 }
 
-static void prepare_vgif_enabled(struct svm_test *test)
+static void prepare_vgif_enabled(struct svm_test_context *ctx)
 {
 }
 
-static void test_vgif(struct svm_test *test)
+static void test_vgif(struct svm_test_context *ctx)
 {
 	asm volatile ("vmmcall\n\tstgi\n\tvmmcall\n\tclgi\n\tvmmcall\n\t");
 }
 
-static bool vgif_finished(struct svm_test *test)
+static bool vgif_finished(struct svm_test_context *ctx)
 {
-	switch (get_test_stage(test))
+	switch (get_test_stage(ctx))
 		{
 		case 0:
 			if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
@@ -2619,7 +2618,7 @@ static bool vgif_finished(struct svm_test *test)
 			}
 			vcpu0.vmcb->control.int_ctl |= V_GIF_ENABLED_MASK;
 			vcpu0.vmcb->save.rip += 3;
-			inc_test_stage(test);
+			inc_test_stage(ctx);
 			break;
 		case 1:
 			if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
@@ -2633,7 +2632,7 @@ static bool vgif_finished(struct svm_test *test)
 			}
 			report_pass("STGI set VGIF bit.");
 			vcpu0.vmcb->save.rip += 3;
-			inc_test_stage(test);
+			inc_test_stage(ctx);
 			break;
 		case 2:
 			if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
@@ -2647,7 +2646,7 @@ static bool vgif_finished(struct svm_test *test)
 			}
 			report_pass("CLGI cleared VGIF bit.");
 			vcpu0.vmcb->save.rip += 3;
-			inc_test_stage(test);
+			inc_test_stage(ctx);
 			vcpu0.vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
 			break;
 		default:
@@ -2655,19 +2654,19 @@ static bool vgif_finished(struct svm_test *test)
 			break;
 		}
 
-	return get_test_stage(test) == 3;
+	return get_test_stage(ctx) == 3;
 }
 
-static bool vgif_check(struct svm_test *test)
+static bool vgif_check(struct svm_test_context *ctx)
 {
-	return get_test_stage(test) == 3;
+	return get_test_stage(ctx) == 3;
 }
 
 
 static int pause_test_counter;
 static int wait_counter;
 
-static void pause_filter_test_guest_main(struct svm_test *test)
+static void pause_filter_test_guest_main(struct svm_test_context *ctx)
 {
 	int i;
 	for (i = 0 ; i < pause_test_counter ; i++)
@@ -3025,7 +3024,7 @@ static void svm_intr_intercept_mix_run_guest(volatile int *counter, int expected
 
 
 // subtest: test that enabling EFLAGS.IF is enough to trigger an interrupt
-static void svm_intr_intercept_mix_if_guest(struct svm_test *test)
+static void svm_intr_intercept_mix_if_guest(struct svm_test_context *ctx)
 {
 	asm volatile("nop;nop;nop;nop");
 	report(!dummy_isr_recevied, "No interrupt expected");
@@ -3051,7 +3050,7 @@ static void svm_intr_intercept_mix_if(void)
 
 // subtest: test that a clever guest can trigger an interrupt by setting GIF
 // if GIF is not intercepted
-static void svm_intr_intercept_mix_gif_guest(struct svm_test *test)
+static void svm_intr_intercept_mix_gif_guest(struct svm_test_context *ctx)
 {
 
 	asm volatile("nop;nop;nop;nop");
@@ -3084,7 +3083,7 @@ static void svm_intr_intercept_mix_gif(void)
 // subtest: test that a clever guest can trigger an interrupt by setting GIF
 // if GIF is not intercepted and interrupt comes after guest
 // started running
-static void svm_intr_intercept_mix_gif_guest2(struct svm_test *test)
+static void svm_intr_intercept_mix_gif_guest2(struct svm_test_context *ctx)
 {
 	asm volatile("nop;nop;nop;nop");
 	report(!dummy_isr_recevied, "No interrupt expected");
@@ -3111,7 +3110,7 @@ static void svm_intr_intercept_mix_gif2(void)
 
 
 // subtest: test that pending NMI will be handled when guest enables GIF
-static void svm_intr_intercept_mix_nmi_guest(struct svm_test *test)
+static void svm_intr_intercept_mix_nmi_guest(struct svm_test_context *ctx)
 {
 	asm volatile("nop;nop;nop;nop");
 	report(!nmi_recevied, "No NMI expected");
@@ -3141,7 +3140,7 @@ static void svm_intr_intercept_mix_nmi(void)
 // test that pending SMI will be handled when guest enables GIF
 // TODO: can't really count #SMIs so just test that guest doesn't hang
 // and VMexits on SMI
-static void svm_intr_intercept_mix_smi_guest(struct svm_test *test)
+static void svm_intr_intercept_mix_smi_guest(struct svm_test_context *ctx)
 {
 	asm volatile("nop;nop;nop;nop");
 
@@ -3239,7 +3238,7 @@ static void svm_exception_test(void)
 	}
 }
 
-static void shutdown_intercept_test_guest(struct svm_test *test)
+static void shutdown_intercept_test_guest(struct svm_test_context *ctx)
 {
 	asm volatile ("ud2");
 	report_fail("should not reach here\n");
@@ -3259,7 +3258,7 @@ static void svm_shutdown_intercept_test(void)
  * when parent exception is intercepted
  */
 
-static void exception_merging_prepare(struct svm_test *test)
+static void exception_merging_prepare(struct svm_test_context *ctx)
 {
 	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
 
@@ -3267,12 +3266,12 @@ static void exception_merging_prepare(struct svm_test *test)
 	boot_idt[UD_VECTOR].type = 1;
 }
 
-static void exception_merging_test(struct svm_test *test)
+static void exception_merging_test(struct svm_test_context *ctx)
 {
 	asm volatile ("ud2");
 }
 
-static bool exception_merging_finished(struct svm_test *test)
+static bool exception_merging_finished(struct svm_test_context *ctx)
 {
 	u32 vec = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
 	u32 type = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
@@ -3297,15 +3296,15 @@ static bool exception_merging_finished(struct svm_test *test)
 		goto out;
 	}
 
-	set_test_stage(test, 1);
+	set_test_stage(ctx, 1);
 out:
 	boot_idt[UD_VECTOR].type = 14;
 	return true;
 }
 
-static bool exception_merging_check(struct svm_test *test)
+static bool exception_merging_check(struct svm_test_context *ctx)
 {
-	return get_test_stage(test) == 1;
+	return get_test_stage(ctx) == 1;
 }
 
 
@@ -3315,7 +3314,7 @@ static bool exception_merging_check(struct svm_test *test)
  * in EXITINTINFO of the exception
  */
 
-static void interrupt_merging_prepare(struct svm_test *test)
+static void interrupt_merging_prepare(struct svm_test_context *ctx)
 {
 	/* intercept #GP */
 	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
@@ -3327,7 +3326,7 @@ static void interrupt_merging_prepare(struct svm_test *test)
 
 #define INTERRUPT_MERGING_DELAY 100000000ULL
 
-static void interrupt_merging_test(struct svm_test *test)
+static void interrupt_merging_test(struct svm_test_context *ctx)
 {
 	handle_irq(TIMER_VECTOR, timer_isr);
 	/* break timer vector IDT entry to get #GP on interrupt delivery */
@@ -3337,7 +3336,7 @@ static void interrupt_merging_test(struct svm_test *test)
 	delay(INTERRUPT_MERGING_DELAY);
 }
 
-static bool interrupt_merging_finished(struct svm_test *test)
+static bool interrupt_merging_finished(struct svm_test_context *ctx)
 {
 
 	u32 vec = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
@@ -3375,7 +3374,7 @@ static bool interrupt_merging_finished(struct svm_test *test)
 		goto cleanup;
 	}
 
-	set_test_stage(test, 1);
+	set_test_stage(ctx, 1);
 
 cleanup:
 	// restore the IDT gate
@@ -3387,9 +3386,9 @@ cleanup:
 	return true;
 }
 
-static bool interrupt_merging_check(struct svm_test *test)
+static bool interrupt_merging_check(struct svm_test_context *ctx)
 {
-	return get_test_stage(test) == 1;
+	return get_test_stage(ctx) == 1;
 }
 
 
-- 
2.34.3



* [kvm-unit-tests PATCH v3 24/27] svm: use svm_test_context in v2 tests
@ 2022-11-22 16:11 Maxim Levitsky
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

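Pass the test context to the v2 tests as an explicit parameter, and to
svm_vmrun()/__svm_vmrun() as well, instead of smuggling it through the
global 'v2_ctx' variable.
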
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 x86/svm.c       |  14 +--
 x86/svm.h       |   7 +-
 x86/svm_npt.c   |  20 ++--
 x86/svm_tests.c | 262 ++++++++++++++++++++++++------------------------
 4 files changed, 152 insertions(+), 151 deletions(-)

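A minimal sketch of a v2 test after this change, for review purposes only
(the test and guest function names below are hypothetical; the signatures
follow this patch):

	/* Guest body: exit back to the host with VMMCALL. */
	static void example_guest_main(struct svm_test_context *ctx)
	{
		vmmcall();
	}

	/*
	 * v2 entry point: the context now arrives as a parameter
	 * instead of through the removed v2_ctx global.
	 */
	static void svm_example_test(struct svm_test_context *ctx)
	{
		test_set_guest(example_guest_main);
		report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL,
		       "guest finished with VMMCALL");
	}
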
diff --git a/x86/svm.c b/x86/svm.c
index 6381dee9..06d34ac4 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -76,21 +76,18 @@ static void test_thunk(struct svm_test_context *ctx)
 }
 
 
-struct svm_test_context *v2_ctx;
-
-
-int __svm_vmrun(u64 rip)
+int __svm_vmrun(struct svm_test_context *ctx, u64 rip)
 {
 	vcpu0.vmcb->save.rip = (ulong)rip;
-	vcpu0.regs.rdi = (ulong)v2_ctx;
+	vcpu0.regs.rdi = (ulong)ctx;
 	vcpu0.regs.rsp = (ulong)(vcpu0.stack);
 	SVM_VMRUN(&vcpu0);
 	return vcpu0.vmcb->control.exit_code;
 }
 
-int svm_vmrun(void)
+int svm_vmrun(struct svm_test_context *ctx)
 {
-	return __svm_vmrun((u64)test_thunk);
+	return __svm_vmrun(ctx, (u64)test_thunk);
 }
 
 static noinline void test_run(struct svm_test_context *ctx)
@@ -98,8 +95,7 @@ static noinline void test_run(struct svm_test_context *ctx)
 	svm_vcpu_ident(&vcpu0);
 
 	if (ctx->test->v2) {
-		v2_ctx = ctx;
-		ctx->test->v2();
+		ctx->test->v2(ctx);
 		return;
 	}
 
diff --git a/x86/svm.h b/x86/svm.h
index 01d07a54..961c4de3 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -23,7 +23,7 @@ struct svm_test {
 	bool (*finished)(struct svm_test_context *ctx);
 	bool (*succeeded)(struct svm_test_context *ctx);
 	/* Alternative test interface. */
-	void (*v2)(void);
+	void (*v2)(struct svm_test_context *ctx);
 	int on_vcpu;
 };
 
@@ -39,9 +39,8 @@ bool default_finished(struct svm_test_context *ctx);
 int get_test_stage(struct svm_test_context *ctx);
 void set_test_stage(struct svm_test_context *ctx, int s);
 void inc_test_stage(struct svm_test_context *ctx);
-int __svm_vmrun(u64 rip);
-void __svm_bare_vmrun(void);
-int svm_vmrun(void);
+int __svm_vmrun(struct svm_test_context *ctx, u64 rip);
+int svm_vmrun(struct svm_test_context *ctx);
 void test_set_guest(test_guest_func func);
 
 
diff --git a/x86/svm_npt.c b/x86/svm_npt.c
index fe6cbb29..fc16b4be 100644
--- a/x86/svm_npt.c
+++ b/x86/svm_npt.c
@@ -189,7 +189,8 @@ static void basic_guest_main(struct svm_test_context *ctx)
 {
 }
 
-static void __svm_npt_rsvd_bits_test(u64 * pxe, u64 rsvd_bits, u64 efer,
+static void __svm_npt_rsvd_bits_test(struct svm_test_context *ctx,
+				     u64 * pxe, u64 rsvd_bits, u64 efer,
 				     ulong cr4, u64 guest_efer, ulong guest_cr4)
 {
 	u64 pxe_orig = *pxe;
@@ -204,7 +205,7 @@ static void __svm_npt_rsvd_bits_test(u64 * pxe, u64 rsvd_bits, u64 efer,
 
 	*pxe |= rsvd_bits;
 
-	exit_reason = svm_vmrun();
+	exit_reason = svm_vmrun(ctx);
 
 	report(exit_reason == SVM_EXIT_NPF,
 	       "Wanted #NPF on rsvd bits = 0x%lx, got exit = 0x%x", rsvd_bits,
@@ -236,7 +237,8 @@ static void __svm_npt_rsvd_bits_test(u64 * pxe, u64 rsvd_bits, u64 efer,
 	*pxe = pxe_orig;
 }
 
-static void _svm_npt_rsvd_bits_test(u64 * pxe, u64 pxe_rsvd_bits, u64 efer,
+static void _svm_npt_rsvd_bits_test(struct svm_test_context *ctx,
+				    u64 * pxe, u64 pxe_rsvd_bits, u64 efer,
 				    ulong cr4, u64 guest_efer, ulong guest_cr4)
 {
 	u64 rsvd_bits;
@@ -277,7 +279,7 @@ static void _svm_npt_rsvd_bits_test(u64 * pxe, u64 pxe_rsvd_bits, u64 efer,
 		else
 			guest_cr4 &= ~X86_CR4_SMEP;
 
-		__svm_npt_rsvd_bits_test(pxe, rsvd_bits, efer, cr4,
+		__svm_npt_rsvd_bits_test(ctx, pxe, rsvd_bits, efer, cr4,
 					 guest_efer, guest_cr4);
 	}
 }
@@ -305,7 +307,7 @@ static u64 get_random_bits(u64 hi, u64 low)
 	return rsvd_bits;
 }
 
-static void svm_npt_rsvd_bits_test(void)
+static void svm_npt_rsvd_bits_test(struct svm_test_context *ctx)
 {
 	u64 saved_efer, host_efer, sg_efer, guest_efer;
 	ulong saved_cr4, host_cr4, sg_cr4, guest_cr4;
@@ -330,22 +332,22 @@ static void svm_npt_rsvd_bits_test(void)
 	if (cpuid_maxphyaddr() >= 52)
 		goto skip_pte_test;
 
-	_svm_npt_rsvd_bits_test(npt_get_pte((u64) basic_guest_main),
+	_svm_npt_rsvd_bits_test(ctx, npt_get_pte((u64) basic_guest_main),
 				get_random_bits(51, cpuid_maxphyaddr()),
 				host_efer, host_cr4, guest_efer, guest_cr4);
 
 skip_pte_test:
-	_svm_npt_rsvd_bits_test(npt_get_pde((u64) basic_guest_main),
+	_svm_npt_rsvd_bits_test(ctx, npt_get_pde((u64) basic_guest_main),
 				get_random_bits(20, 13) | PT_PAGE_SIZE_MASK,
 				host_efer, host_cr4, guest_efer, guest_cr4);
 
-	_svm_npt_rsvd_bits_test(npt_get_pdpe((u64) basic_guest_main),
+	_svm_npt_rsvd_bits_test(ctx, npt_get_pdpe((u64) basic_guest_main),
 				PT_PAGE_SIZE_MASK |
 				(this_cpu_has(X86_FEATURE_GBPAGES) ?
 				 get_random_bits(29, 13) : 0), host_efer,
 				host_cr4, guest_efer, guest_cr4);
 
-	_svm_npt_rsvd_bits_test(npt_get_pml4e(), BIT_ULL(8),
+	_svm_npt_rsvd_bits_test(ctx, npt_get_pml4e(), BIT_ULL(8),
 				host_efer, host_cr4, guest_efer, guest_cr4);
 
 	wrmsr(MSR_EFER, saved_efer);
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index c29e9a5d..6041ac24 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -753,7 +753,8 @@ static void svm_tsc_scale_guest(struct svm_test_context *ctx)
 		cpu_relax();
 }
 
-static void svm_tsc_scale_run_testcase(u64 duration,
+static void svm_tsc_scale_run_testcase(struct svm_test_context *ctx,
+				       u64 duration,
 				       double tsc_scale, u64 tsc_offset)
 {
 	u64 start_tsc, actual_duration;
@@ -766,7 +767,7 @@ static void svm_tsc_scale_run_testcase(u64 duration,
 
 	start_tsc = rdtsc();
 
-	if (svm_vmrun() != SVM_EXIT_VMMCALL)
+	if (svm_vmrun(ctx) != SVM_EXIT_VMMCALL)
 		report_fail("unexpected vm exit code 0x%x", vcpu0.vmcb->control.exit_code);
 
 	actual_duration = (rdtsc() - start_tsc) >> TSC_SHIFT;
@@ -775,7 +776,7 @@ static void svm_tsc_scale_run_testcase(u64 duration,
 	       duration, actual_duration);
 }
 
-static void svm_tsc_scale_test(void)
+static void svm_tsc_scale_test(struct svm_test_context *ctx)
 {
 	int i;
 
@@ -796,11 +797,11 @@ static void svm_tsc_scale_test(void)
 		report_info("duration=%d, tsc_scale=%d, tsc_offset=%ld",
 			    duration, (int)(tsc_scale * 100), tsc_offset);
 
-		svm_tsc_scale_run_testcase(duration, tsc_scale, tsc_offset);
+		svm_tsc_scale_run_testcase(ctx, duration, tsc_scale, tsc_offset);
 	}
 
-	svm_tsc_scale_run_testcase(50, 255, rdrand());
-	svm_tsc_scale_run_testcase(50, 0.0001, rdrand());
+	svm_tsc_scale_run_testcase(ctx, 50, 255, rdrand());
+	svm_tsc_scale_run_testcase(ctx, 50, 0.0001, rdrand());
 }
 
 static void latency_prepare(struct svm_test_context *ctx)
@@ -1983,7 +1984,7 @@ static void svm_cr4_osxsave_test_guest(struct svm_test_context *ctx)
 	write_cr4(read_cr4() & ~X86_CR4_OSXSAVE);
 }
 
-static void svm_cr4_osxsave_test(void)
+static void svm_cr4_osxsave_test(struct svm_test_context *ctx)
 {
 	if (!this_cpu_has(X86_FEATURE_XSAVE)) {
 		report_skip("XSAVE not detected");
@@ -2000,7 +2001,7 @@ static void svm_cr4_osxsave_test(void)
 	report(this_cpu_has(X86_FEATURE_OSXSAVE), "CPUID.01H:ECX.XSAVE set before VMRUN");
 
 	test_set_guest(svm_cr4_osxsave_test_guest);
-	report(svm_vmrun() == SVM_EXIT_VMMCALL,
+	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL,
 	       "svm_cr4_osxsave_test_guest finished with VMMCALL");
 
 	report(this_cpu_has(X86_FEATURE_OSXSAVE), "CPUID.01H:ECX.XSAVE set after VMRUN");
@@ -2011,7 +2012,7 @@ static void basic_guest_main(struct svm_test_context *ctx)
 }
 
 
-#define SVM_TEST_REG_RESERVED_BITS(start, end, inc, str_name, reg, val,	\
+#define SVM_TEST_REG_RESERVED_BITS(ctx, start, end, inc, str_name, reg, val,	\
 				   resv_mask)				\
 {									\
 	u64 tmp, mask;							\
@@ -2023,12 +2024,12 @@ static void basic_guest_main(struct svm_test_context *ctx)
 			continue;					\
 		tmp = val | mask;					\
 		reg = tmp;						\
-		report(svm_vmrun() == SVM_EXIT_ERR, "Test %s %d:%d: %lx", \
+		report(svm_vmrun(ctx) == SVM_EXIT_ERR, "Test %s %d:%d: %lx", \
 		       str_name, end, start, tmp);			\
 	}								\
 }
 
-#define SVM_TEST_CR_RESERVED_BITS(start, end, inc, cr, val, resv_mask,	\
+#define SVM_TEST_CR_RESERVED_BITS(ctx, start, end, inc, cr, val, resv_mask,	\
 				  exit_code, test_name)			\
 {									\
 	u64 tmp, mask;							\
@@ -2050,13 +2051,13 @@ static void basic_guest_main(struct svm_test_context *ctx)
 		case 4:							\
 			vcpu0.vmcb->save.cr4 = tmp;				\
 		}							\
-		r = svm_vmrun();					\
+		r = svm_vmrun(ctx);					\
 		report(r == exit_code, "Test CR%d %s%d:%d: %lx, wanted exit 0x%x, got 0x%x", \
 		       cr, test_name, end, start, tmp, exit_code, r);	\
 	}								\
 }
 
-static void test_efer(void)
+static void test_efer(struct svm_test_context *ctx)
 {
 	/*
 	 * Un-setting EFER.SVME is illegal
@@ -2064,10 +2065,10 @@ static void test_efer(void)
 	u64 efer_saved = vcpu0.vmcb->save.efer;
 	u64 efer = efer_saved;
 
-	report (svm_vmrun() == SVM_EXIT_VMMCALL, "EFER.SVME: %lx", efer);
+	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "EFER.SVME: %lx", efer);
 	efer &= ~EFER_SVME;
 	vcpu0.vmcb->save.efer = efer;
-	report (svm_vmrun() == SVM_EXIT_ERR, "EFER.SVME: %lx", efer);
+	report (svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.SVME: %lx", efer);
 	vcpu0.vmcb->save.efer = efer_saved;
 
 	/*
@@ -2075,9 +2076,9 @@ static void test_efer(void)
 	 */
 	efer_saved = vcpu0.vmcb->save.efer;
 
-	SVM_TEST_REG_RESERVED_BITS(8, 9, 1, "EFER", vcpu0.vmcb->save.efer,
+	SVM_TEST_REG_RESERVED_BITS(ctx, 8, 9, 1, "EFER", vcpu0.vmcb->save.efer,
 				   efer_saved, SVM_EFER_RESERVED_MASK);
-	SVM_TEST_REG_RESERVED_BITS(16, 63, 4, "EFER", vcpu0.vmcb->save.efer,
+	SVM_TEST_REG_RESERVED_BITS(ctx, 16, 63, 4, "EFER", vcpu0.vmcb->save.efer,
 				   efer_saved, SVM_EFER_RESERVED_MASK);
 
 	/*
@@ -2094,7 +2095,7 @@ static void test_efer(void)
 	vcpu0.vmcb->save.cr0 = cr0;
 	cr4 = cr4_saved & ~X86_CR4_PAE;
 	vcpu0.vmcb->save.cr4 = cr4;
-	report(svm_vmrun() == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
+	report(svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
 	       "CR0.PG=1 (%lx) and CR4.PAE=0 (%lx)", efer, cr0, cr4);
 
 	/*
@@ -2107,7 +2108,7 @@ static void test_efer(void)
 	vcpu0.vmcb->save.cr4 = cr4;
 	cr0 &= ~X86_CR0_PE;
 	vcpu0.vmcb->save.cr0 = cr0;
-	report(svm_vmrun() == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
+	report(svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
 	       "CR0.PG=1 and CR0.PE=0 (%lx)", efer, cr0);
 
 	/*
@@ -2121,7 +2122,7 @@ static void test_efer(void)
 	cs_attrib = cs_attrib_saved | SVM_SELECTOR_L_MASK |
 		SVM_SELECTOR_DB_MASK;
 	vcpu0.vmcb->save.cs.attrib = cs_attrib;
-	report(svm_vmrun() == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
+	report(svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
 	       "CR0.PG=1 (%lx), CR4.PAE=1 (%lx), CS.L=1 and CS.D=1 (%x)",
 	       efer, cr0, cr4, cs_attrib);
 
@@ -2131,7 +2132,7 @@ static void test_efer(void)
 	vcpu0.vmcb->save.cs.attrib = cs_attrib_saved;
 }
 
-static void test_cr0(void)
+static void test_cr0(struct svm_test_context *ctx)
 {
 	/*
 	 * Un-setting CR0.CD and setting CR0.NW is illegal combination
@@ -2142,20 +2143,20 @@ static void test_cr0(void)
 	cr0 |= X86_CR0_CD;
 	cr0 &= ~X86_CR0_NW;
 	vcpu0.vmcb->save.cr0 = cr0;
-	report (svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=0: %lx",
+	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=0: %lx",
 		cr0);
 	cr0 |= X86_CR0_NW;
 	vcpu0.vmcb->save.cr0 = cr0;
-	report (svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=1: %lx",
+	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=1: %lx",
 		cr0);
 	cr0 &= ~X86_CR0_NW;
 	cr0 &= ~X86_CR0_CD;
 	vcpu0.vmcb->save.cr0 = cr0;
-	report (svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR0 CD=0,NW=0: %lx",
+	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR0 CD=0,NW=0: %lx",
 		cr0);
 	cr0 |= X86_CR0_NW;
 	vcpu0.vmcb->save.cr0 = cr0;
-	report (svm_vmrun() == SVM_EXIT_ERR, "Test CR0 CD=0,NW=1: %lx",
+	report (svm_vmrun(ctx) == SVM_EXIT_ERR, "Test CR0 CD=0,NW=1: %lx",
 		cr0);
 	vcpu0.vmcb->save.cr0 = cr0_saved;
 
@@ -2164,12 +2165,12 @@ static void test_cr0(void)
 	 */
 	cr0 = cr0_saved;
 
-	SVM_TEST_REG_RESERVED_BITS(32, 63, 4, "CR0", vcpu0.vmcb->save.cr0, cr0_saved,
+	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "CR0", vcpu0.vmcb->save.cr0, cr0_saved,
 				   SVM_CR0_RESERVED_MASK);
 	vcpu0.vmcb->save.cr0 = cr0_saved;
 }
 
-static void test_cr3(void)
+static void test_cr3(struct svm_test_context *ctx)
 {
 	/*
 	 * CR3 MBZ bits based on different modes:
@@ -2177,11 +2178,11 @@ static void test_cr3(void)
 	 */
 	u64 cr3_saved = vcpu0.vmcb->save.cr3;
 
-	SVM_TEST_CR_RESERVED_BITS(0, 63, 1, 3, cr3_saved,
+	SVM_TEST_CR_RESERVED_BITS(ctx, 0, 63, 1, 3, cr3_saved,
 				  SVM_CR3_LONG_MBZ_MASK, SVM_EXIT_ERR, "");
 
 	vcpu0.vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_MBZ_MASK;
-	report(svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx",
+	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx",
 	       vcpu0.vmcb->save.cr3);
 
 	/*
@@ -2197,11 +2198,11 @@ static void test_cr3(void)
 	 */
 	if (this_cpu_has(X86_FEATURE_PCID)) {
 		vcpu0.vmcb->save.cr4 = cr4_saved | X86_CR4_PCIDE;
-		SVM_TEST_CR_RESERVED_BITS(0, 11, 1, 3, cr3_saved,
+		SVM_TEST_CR_RESERVED_BITS(ctx, 0, 11, 1, 3, cr3_saved,
 					  SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_VMMCALL, "(PCIDE=1) ");
 
 		vcpu0.vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_RESERVED_MASK;
-		report(svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx",
+		report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx",
 		       vcpu0.vmcb->save.cr3);
 	}
 
@@ -2213,7 +2214,7 @@ static void test_cr3(void)
 	/* Clear P (Present) bit in NPT in order to trigger #NPF */
 	pdpe[0] &= ~1ULL;
 
-	SVM_TEST_CR_RESERVED_BITS(0, 11, 1, 3, cr3_saved,
+	SVM_TEST_CR_RESERVED_BITS(ctx, 0, 11, 1, 3, cr3_saved,
 				  SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_NPF, "(PCIDE=0) ");
 
 	pdpe[0] |= 1ULL;
@@ -2224,7 +2225,7 @@ static void test_cr3(void)
 	 */
 	pdpe[0] &= ~1ULL;
 	vcpu0.vmcb->save.cr4 = cr4_saved | X86_CR4_PAE;
-	SVM_TEST_CR_RESERVED_BITS(0, 2, 1, 3, cr3_saved,
+	SVM_TEST_CR_RESERVED_BITS(ctx, 0, 2, 1, 3, cr3_saved,
 				  SVM_CR3_PAE_LEGACY_RESERVED_MASK, SVM_EXIT_NPF, "(PAE) ");
 
 	pdpe[0] |= 1ULL;
@@ -2235,7 +2236,7 @@ skip_npt_only:
 }
 
 /* Test CR4 MBZ bits based on legacy or long modes */
-static void test_cr4(void)
+static void test_cr4(struct svm_test_context *ctx)
 {
 	u64 cr4_saved = vcpu0.vmcb->save.cr4;
 	u64 efer_saved = vcpu0.vmcb->save.efer;
@@ -2243,47 +2244,47 @@ static void test_cr4(void)
 
 	efer &= ~EFER_LME;
 	vcpu0.vmcb->save.efer = efer;
-	SVM_TEST_CR_RESERVED_BITS(12, 31, 1, 4, cr4_saved,
+	SVM_TEST_CR_RESERVED_BITS(ctx, 12, 31, 1, 4, cr4_saved,
 				  SVM_CR4_LEGACY_RESERVED_MASK, SVM_EXIT_ERR, "");
 
 	efer |= EFER_LME;
 	vcpu0.vmcb->save.efer = efer;
-	SVM_TEST_CR_RESERVED_BITS(12, 31, 1, 4, cr4_saved,
+	SVM_TEST_CR_RESERVED_BITS(ctx, 12, 31, 1, 4, cr4_saved,
 				  SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, "");
-	SVM_TEST_CR_RESERVED_BITS(32, 63, 4, 4, cr4_saved,
+	SVM_TEST_CR_RESERVED_BITS(ctx, 32, 63, 4, 4, cr4_saved,
 				  SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, "");
 
 	vcpu0.vmcb->save.cr4 = cr4_saved;
 	vcpu0.vmcb->save.efer = efer_saved;
 }
 
-static void test_dr(void)
+static void test_dr(struct svm_test_context *ctx)
 {
 	/*
 	 * DR6[63:32] and DR7[63:32] are MBZ
 	 */
 	u64 dr_saved = vcpu0.vmcb->save.dr6;
 
-	SVM_TEST_REG_RESERVED_BITS(32, 63, 4, "DR6", vcpu0.vmcb->save.dr6, dr_saved,
+	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "DR6", vcpu0.vmcb->save.dr6, dr_saved,
 				   SVM_DR6_RESERVED_MASK);
 	vcpu0.vmcb->save.dr6 = dr_saved;
 
 	dr_saved = vcpu0.vmcb->save.dr7;
-	SVM_TEST_REG_RESERVED_BITS(32, 63, 4, "DR7", vcpu0.vmcb->save.dr7, dr_saved,
+	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "DR7", vcpu0.vmcb->save.dr7, dr_saved,
 				   SVM_DR7_RESERVED_MASK);
 
 	vcpu0.vmcb->save.dr7 = dr_saved;
 }
 
 /* TODO: verify if high 32-bits are sign- or zero-extended on bare metal */
-#define	TEST_BITMAP_ADDR(save_intercept, type, addr, exit_code,		\
+#define	TEST_BITMAP_ADDR(ctx, save_intercept, type, addr, exit_code,		\
 			 msg) {						\
 		vcpu0.vmcb->control.intercept = saved_intercept | 1ULL << type; \
 		if (type == INTERCEPT_MSR_PROT)				\
 			vcpu0.vmcb->control.msrpm_base_pa = addr;		\
 		else							\
 			vcpu0.vmcb->control.iopm_base_pa = addr;		\
-		report(svm_vmrun() == exit_code,			\
+		report(svm_vmrun(ctx) == exit_code,			\
 		       "Test %s address: %lx", msg, addr);		\
 	}
 
@@ -2303,48 +2304,48 @@ static void test_dr(void)
  * Note: Unallocated MSRPM addresses conforming to consistency checks, generate
  * #NPF.
  */
-static void test_msrpm_iopm_bitmap_addrs(void)
+static void test_msrpm_iopm_bitmap_addrs(struct svm_test_context *ctx)
 {
 	u64 saved_intercept = vcpu0.vmcb->control.intercept;
 	u64 addr_beyond_limit = 1ull << cpuid_maxphyaddr();
 	u64 addr = virt_to_phys(svm_get_msr_bitmap()) & (~((1ull << 12) - 1));
 	u8 *io_bitmap = svm_get_io_bitmap();
 
-	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT,
+	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_MSR_PROT,
 			 addr_beyond_limit - 2 * PAGE_SIZE, SVM_EXIT_ERR,
 			 "MSRPM");
-	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT,
+	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_MSR_PROT,
 			 addr_beyond_limit - 2 * PAGE_SIZE + 1, SVM_EXIT_ERR,
 			 "MSRPM");
-	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT,
+	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_MSR_PROT,
 			 addr_beyond_limit - PAGE_SIZE, SVM_EXIT_ERR,
 			 "MSRPM");
-	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT, addr,
+	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_MSR_PROT, addr,
 			 SVM_EXIT_VMMCALL, "MSRPM");
 	addr |= (1ull << 12) - 1;
-	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT, addr,
+	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_MSR_PROT, addr,
 			 SVM_EXIT_VMMCALL, "MSRPM");
 
-	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT,
+	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_IOIO_PROT,
 			 addr_beyond_limit - 4 * PAGE_SIZE, SVM_EXIT_VMMCALL,
 			 "IOPM");
-	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT,
+	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_IOIO_PROT,
 			 addr_beyond_limit - 3 * PAGE_SIZE, SVM_EXIT_VMMCALL,
 			 "IOPM");
-	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT,
+	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_IOIO_PROT,
 			 addr_beyond_limit - 2 * PAGE_SIZE - 2, SVM_EXIT_VMMCALL,
 			 "IOPM");
-	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT,
+	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_IOIO_PROT,
 			 addr_beyond_limit - 2 * PAGE_SIZE, SVM_EXIT_ERR,
 			 "IOPM");
-	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT,
+	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_IOIO_PROT,
 			 addr_beyond_limit - PAGE_SIZE, SVM_EXIT_ERR,
 			 "IOPM");
 	addr = virt_to_phys(io_bitmap) & (~((1ull << 11) - 1));
-	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT, addr,
+	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_IOIO_PROT, addr,
 			 SVM_EXIT_VMMCALL, "IOPM");
 	addr |= (1ull << 12) - 1;
-	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT, addr,
+	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_IOIO_PROT, addr,
 			 SVM_EXIT_VMMCALL, "IOPM");
 
 	vcpu0.vmcb->control.intercept = saved_intercept;
@@ -2354,16 +2355,16 @@ static void test_msrpm_iopm_bitmap_addrs(void)
  * Unlike VMSAVE, VMRUN seems not to update the value of noncanonical
  * segment bases in the VMCB.  However, VMENTRY succeeds as documented.
  */
-#define TEST_CANONICAL_VMRUN(seg_base, msg)				\
+#define TEST_CANONICAL_VMRUN(ctx, seg_base, msg)				\
 	saved_addr = seg_base;						\
 	seg_base = (seg_base & ((1ul << addr_limit) - 1)) | noncanonical_mask; \
-	return_value = svm_vmrun();					\
+	return_value = svm_vmrun(ctx);					\
 	report(return_value == SVM_EXIT_VMMCALL,			\
 	       "Successful VMRUN with noncanonical %s.base", msg);	\
 	seg_base = saved_addr;
 
 
-#define TEST_CANONICAL_VMLOAD(seg_base, msg)				\
+#define TEST_CANONICAL_VMLOAD(ctx, seg_base, msg)				\
 	saved_addr = seg_base;						\
 	seg_base = (seg_base & ((1ul << addr_limit) - 1)) | noncanonical_mask; \
 	asm volatile ("vmload %0" : : "a"(vmcb_phys) : "memory");	\
@@ -2372,7 +2373,7 @@ static void test_msrpm_iopm_bitmap_addrs(void)
 	       "Test %s.base for canonical form: %lx", msg, seg_base);	\
 	seg_base = saved_addr;
 
-static void test_canonicalization(void)
+static void test_canonicalization(struct svm_test_context *ctx)
 {
 	u64 saved_addr;
 	u64 return_value;
@@ -2382,17 +2383,17 @@ static void test_canonicalization(void)
 	addr_limit = (this_cpu_has(X86_FEATURE_LA57)) ? 57 : 48;
 	u64 noncanonical_mask = NONCANONICAL & ~((1ul << addr_limit) - 1);
 
-	TEST_CANONICAL_VMLOAD(vcpu0.vmcb->save.fs.base, "FS");
-	TEST_CANONICAL_VMLOAD(vcpu0.vmcb->save.gs.base, "GS");
-	TEST_CANONICAL_VMLOAD(vcpu0.vmcb->save.ldtr.base, "LDTR");
-	TEST_CANONICAL_VMLOAD(vcpu0.vmcb->save.tr.base, "TR");
-	TEST_CANONICAL_VMLOAD(vcpu0.vmcb->save.kernel_gs_base, "KERNEL GS");
-	TEST_CANONICAL_VMRUN(vcpu0.vmcb->save.es.base, "ES");
-	TEST_CANONICAL_VMRUN(vcpu0.vmcb->save.cs.base, "CS");
-	TEST_CANONICAL_VMRUN(vcpu0.vmcb->save.ss.base, "SS");
-	TEST_CANONICAL_VMRUN(vcpu0.vmcb->save.ds.base, "DS");
-	TEST_CANONICAL_VMRUN(vcpu0.vmcb->save.gdtr.base, "GDTR");
-	TEST_CANONICAL_VMRUN(vcpu0.vmcb->save.idtr.base, "IDTR");
+	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.fs.base, "FS");
+	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.gs.base, "GS");
+	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.ldtr.base, "LDTR");
+	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.tr.base, "TR");
+	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.kernel_gs_base, "KERNEL GS");
+	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.es.base, "ES");
+	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.cs.base, "CS");
+	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.ss.base, "SS");
+	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.ds.base, "DS");
+	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.gdtr.base, "GDTR");
+	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.idtr.base, "IDTR");
 }
 
 /*
@@ -2410,16 +2411,16 @@ static void guest_rflags_test_db_handler(struct ex_regs *r)
 	r->rflags &= ~X86_EFLAGS_TF;
 }
 
-static void svm_guest_state_test(void)
+static void svm_guest_state_test(struct svm_test_context *ctx)
 {
 	test_set_guest(basic_guest_main);
-	test_efer();
-	test_cr0();
-	test_cr3();
-	test_cr4();
-	test_dr();
-	test_msrpm_iopm_bitmap_addrs();
-	test_canonicalization();
+	test_efer(ctx);
+	test_cr0(ctx);
+	test_cr3(ctx);
+	test_cr4(ctx);
+	test_dr(ctx);
+	test_msrpm_iopm_bitmap_addrs(ctx);
+	test_canonicalization(ctx);
 }
 
 extern void guest_rflags_test_guest(struct svm_test_context *ctx);
@@ -2439,7 +2440,7 @@ asm("guest_rflags_test_guest:\n\t"
     "pop %rbp\n\t"
     "ret");
 
-static void svm_test_singlestep(void)
+static void svm_test_singlestep(struct svm_test_context *ctx)
 {
 	handle_exception(DB_VECTOR, guest_rflags_test_db_handler);
 
@@ -2447,7 +2448,7 @@ static void svm_test_singlestep(void)
 	 * Trap expected after completion of first guest instruction
 	 */
 	vcpu0.vmcb->save.rflags |= X86_EFLAGS_TF;
-	report (__svm_vmrun((u64)guest_rflags_test_guest) == SVM_EXIT_VMMCALL &&
+	report (__svm_vmrun(ctx, (u64)guest_rflags_test_guest) == SVM_EXIT_VMMCALL &&
 		guest_rflags_test_trap_rip == (u64)&insn2,
 		"Test EFLAGS.TF on VMRUN: trap expected  after completion of first guest instruction");
 	/*
@@ -2456,7 +2457,7 @@ static void svm_test_singlestep(void)
 	guest_rflags_test_trap_rip = 0;
 	vcpu0.vmcb->save.rip += 3;
 	vcpu0.vmcb->save.rflags |= X86_EFLAGS_TF;
-	report(__svm_vmrun(vcpu0.vmcb->save.rip) == SVM_EXIT_VMMCALL &&
+	report(__svm_vmrun(ctx, vcpu0.vmcb->save.rip) == SVM_EXIT_VMMCALL &&
 		guest_rflags_test_trap_rip == 0,
 		"Test EFLAGS.TF on VMRUN: trap not expected");
 
@@ -2464,7 +2465,7 @@ static void svm_test_singlestep(void)
 	 * Let guest finish execution
 	 */
 	vcpu0.vmcb->save.rip += 3;
-	report(__svm_vmrun(vcpu0.vmcb->save.rip) == SVM_EXIT_VMMCALL &&
+	report(__svm_vmrun(ctx, vcpu0.vmcb->save.rip) == SVM_EXIT_VMMCALL &&
 		vcpu0.vmcb->save.rip == (u64)&guest_end,
 		"Test EFLAGS.TF on VMRUN: guest execution completion");
 }
@@ -2492,7 +2493,7 @@ static void gp_isr(struct ex_regs *r)
 	r->rip += 3;
 }
 
-static void svm_vmrun_errata_test(void)
+static void svm_vmrun_errata_test(struct svm_test_context *ctx)
 {
 	unsigned long *last_page = NULL;
 
@@ -2543,7 +2544,7 @@ static void vmload_vmsave_guest_main(struct svm_test_context *ctx)
 	asm volatile ("vmsave %0" : : "a"(vmcb_phys));
 }
 
-static void svm_vmload_vmsave(void)
+static void svm_vmload_vmsave(struct svm_test_context *ctx)
 {
 	u32 intercept_saved = vcpu0.vmcb->control.intercept;
 
@@ -2555,7 +2556,7 @@ static void svm_vmload_vmsave(void)
 	 */
 	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
 	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
-	svm_vmrun();
+	svm_vmrun(ctx);
 	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
@@ -2564,34 +2565,34 @@ static void svm_vmload_vmsave(void)
 	 * #VMEXIT to host
 	 */
 	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
-	svm_vmrun();
+	svm_vmrun(ctx);
 	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
 	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
 	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
-	svm_vmrun();
+	svm_vmrun(ctx);
 	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
 	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
-	svm_vmrun();
+	svm_vmrun(ctx);
 	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
 	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
-	svm_vmrun();
+	svm_vmrun(ctx);
 	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
 	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
-	svm_vmrun();
+	svm_vmrun(ctx);
 	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
 	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
-	svm_vmrun();
+	svm_vmrun(ctx);
 	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
 	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
-	svm_vmrun();
+	svm_vmrun(ctx);
 	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
@@ -2683,7 +2684,9 @@ static void pause_filter_test_guest_main(struct svm_test_context *ctx)
 
 }
 
-static void pause_filter_run_test(int pause_iterations, int filter_value, int wait_iterations, int threshold)
+static void pause_filter_run_test(struct svm_test_context *ctx,
+				  int pause_iterations, int filter_value,
+				  int wait_iterations, int threshold)
 {
 	test_set_guest(pause_filter_test_guest_main);
 
@@ -2692,7 +2695,7 @@ static void pause_filter_run_test(int pause_iterations, int filter_value, int wa
 
 	vcpu0.vmcb->control.pause_filter_count = filter_value;
 	vcpu0.vmcb->control.pause_filter_thresh = threshold;
-	svm_vmrun();
+	svm_vmrun(ctx);
 
 	if (filter_value <= pause_iterations || wait_iterations < threshold)
 		report(vcpu0.vmcb->control.exit_code == SVM_EXIT_PAUSE,
@@ -2702,7 +2705,7 @@ static void pause_filter_run_test(int pause_iterations, int filter_value, int wa
 		       "no expected PAUSE vmexit");
 }
 
-static void pause_filter_test(void)
+static void pause_filter_test(struct svm_test_context *ctx)
 {
 	if (!pause_filter_supported()) {
 		report_skip("PAUSE filter not supported in the guest");
@@ -2712,20 +2715,20 @@ static void pause_filter_test(void)
 	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_PAUSE);
 
 	// filter count more that pause count - no VMexit
-	pause_filter_run_test(10, 9, 0, 0);
+	pause_filter_run_test(ctx, 10, 9, 0, 0);
 
 	// filter count smaller pause count - no VMexit
-	pause_filter_run_test(20, 21, 0, 0);
+	pause_filter_run_test(ctx, 20, 21, 0, 0);
 
 
 	if (pause_threshold_supported()) {
 		// filter count smaller pause count - no VMexit +  large enough threshold
 		// so that filter counter resets
-		pause_filter_run_test(20, 21, 1000, 10);
+		pause_filter_run_test(ctx, 20, 21, 1000, 10);
 
 		// filter count smaller pause count - no VMexit +  small threshold
 		// so that filter doesn't reset
-		pause_filter_run_test(20, 21, 10, 1000);
+		pause_filter_run_test(ctx, 20, 21, 10, 1000);
 	} else {
 		report_skip("PAUSE threshold not supported in the guest");
 		return;
@@ -2733,13 +2736,13 @@ static void pause_filter_test(void)
 }
 
 /* If CR0.TS and CR0.EM are cleared in L2, no #NM is generated. */
-static void svm_no_nm_test(void)
+static void svm_no_nm_test(struct svm_test_context *ctx)
 {
 	write_cr0(read_cr0() & ~X86_CR0_TS);
 	test_set_guest((test_guest_func)fnop);
 
 	vcpu0.vmcb->save.cr0 = vcpu0.vmcb->save.cr0 & ~(X86_CR0_TS | X86_CR0_EM);
-	report(svm_vmrun() == SVM_EXIT_VMMCALL,
+	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL,
 	       "fnop with CR0.TS and CR0.EM unset no #NM excpetion");
 }
 
@@ -2794,7 +2797,7 @@ extern u64 host_branch4_from, host_branch4_to;
 
 u64 dbgctl;
 
-static void svm_lbrv_test_guest1(void)
+static void svm_lbrv_test_guest1(struct svm_test_context *ctx)
 {
 	/*
 	 * This guest expects the LBR to be already enabled when it starts,
@@ -2818,7 +2821,7 @@ static void svm_lbrv_test_guest1(void)
 	asm volatile ("vmmcall\n");
 }
 
-static void svm_lbrv_test_guest2(void)
+static void svm_lbrv_test_guest2(struct svm_test_context *ctx)
 {
 	/*
 	 * This guest expects the LBR to be disabled when it starts,
@@ -2852,7 +2855,7 @@ static void svm_lbrv_test_guest2(void)
 	asm volatile ("vmmcall\n");
 }
 
-static void svm_lbrv_test0(void)
+static void svm_lbrv_test0(struct svm_test_context *ctx)
 {
 	report(true, "Basic LBR test");
 	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
@@ -2867,7 +2870,7 @@ static void svm_lbrv_test0(void)
 	check_lbr(&host_branch0_from, &host_branch0_to);
 }
 
-static void svm_lbrv_test1(void)
+static void svm_lbrv_test1(struct svm_test_context *ctx)
 {
 
 	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(1)");
@@ -2890,7 +2893,7 @@ static void svm_lbrv_test1(void)
 	check_lbr(&guest_branch0_from, &guest_branch0_to);
 }
 
-static void svm_lbrv_test2(void)
+static void svm_lbrv_test2(struct svm_test_context *ctx)
 {
 	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(2)");
 
@@ -2914,7 +2917,7 @@ static void svm_lbrv_test2(void)
 	check_lbr(&guest_branch2_from, &guest_branch2_to);
 }
 
-static void svm_lbrv_nested_test1(void)
+static void svm_lbrv_nested_test1(struct svm_test_context *ctx)
 {
 	if (!lbrv_supported()) {
 		report_skip("LBRV not supported in the guest");
@@ -2949,7 +2952,7 @@ static void svm_lbrv_nested_test1(void)
 	check_lbr(&host_branch3_from, &host_branch3_to);
 }
 
-static void svm_lbrv_nested_test2(void)
+static void svm_lbrv_nested_test2(struct svm_test_context *ctx)
 {
 	if (!lbrv_supported()) {
 		report_skip("LBRV not supported in the guest");
@@ -2999,7 +3002,8 @@ static void dummy_nmi_handler(struct ex_regs *regs)
 }
 
 
-static void svm_intr_intercept_mix_run_guest(volatile int *counter, int expected_vmexit)
+static void svm_intr_intercept_mix_run_guest(struct svm_test_context *ctx,
+					     volatile int *counter, int expected_vmexit)
 {
 	if (counter)
 		*counter = 0;
@@ -3007,7 +3011,7 @@ static void svm_intr_intercept_mix_run_guest(volatile int *counter, int expected
 	sti();  // host IF value should not matter
 	clgi(); // vmrun will set back GI to 1
 
-	svm_vmrun();
+	svm_vmrun(ctx);
 
 	if (counter)
 		report(!*counter, "No interrupt expected");
@@ -3032,7 +3036,7 @@ static void svm_intr_intercept_mix_if_guest(struct svm_test_context *ctx)
 	report(0, "must not reach here");
 }
 
-static void svm_intr_intercept_mix_if(void)
+static void svm_intr_intercept_mix_if(struct svm_test_context *ctx)
 {
 	// make a physical interrupt to be pending
 	handle_irq(0x55, dummy_isr);
@@ -3044,7 +3048,7 @@ static void svm_intr_intercept_mix_if(void)
 	test_set_guest(svm_intr_intercept_mix_if_guest);
 	cli();
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0);
-	svm_intr_intercept_mix_run_guest(&dummy_isr_recevied, SVM_EXIT_INTR);
+	svm_intr_intercept_mix_run_guest(ctx, &dummy_isr_recevied, SVM_EXIT_INTR);
 }
 
 
@@ -3066,7 +3070,7 @@ static void svm_intr_intercept_mix_gif_guest(struct svm_test_context *ctx)
 	report(0, "must not reach here");
 }
 
-static void svm_intr_intercept_mix_gif(void)
+static void svm_intr_intercept_mix_gif(struct svm_test_context *ctx)
 {
 	handle_irq(0x55, dummy_isr);
 
@@ -3077,7 +3081,7 @@ static void svm_intr_intercept_mix_gif(void)
 	test_set_guest(svm_intr_intercept_mix_gif_guest);
 	cli();
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0);
-	svm_intr_intercept_mix_run_guest(&dummy_isr_recevied, SVM_EXIT_INTR);
+	svm_intr_intercept_mix_run_guest(ctx, &dummy_isr_recevied, SVM_EXIT_INTR);
 }
 
 // subtest: test that a clever guest can trigger an interrupt by setting GIF
@@ -3096,7 +3100,7 @@ static void svm_intr_intercept_mix_gif_guest2(struct svm_test_context *ctx)
 	report(0, "must not reach here");
 }
 
-static void svm_intr_intercept_mix_gif2(void)
+static void svm_intr_intercept_mix_gif2(struct svm_test_context *ctx)
 {
 	handle_irq(0x55, dummy_isr);
 
@@ -3105,7 +3109,7 @@ static void svm_intr_intercept_mix_gif2(void)
 	vcpu0.vmcb->save.rflags |= X86_EFLAGS_IF;
 
 	test_set_guest(svm_intr_intercept_mix_gif_guest2);
-	svm_intr_intercept_mix_run_guest(&dummy_isr_recevied, SVM_EXIT_INTR);
+	svm_intr_intercept_mix_run_guest(ctx, &dummy_isr_recevied, SVM_EXIT_INTR);
 }
 
 
@@ -3125,7 +3129,7 @@ static void svm_intr_intercept_mix_nmi_guest(struct svm_test_context *ctx)
 	report(0, "must not reach here");
 }
 
-static void svm_intr_intercept_mix_nmi(void)
+static void svm_intr_intercept_mix_nmi(struct svm_test_context *ctx)
 {
 	handle_exception(2, dummy_nmi_handler);
 
@@ -3134,7 +3138,7 @@ static void svm_intr_intercept_mix_nmi(void)
 	vcpu0.vmcb->save.rflags |= X86_EFLAGS_IF;
 
 	test_set_guest(svm_intr_intercept_mix_nmi_guest);
-	svm_intr_intercept_mix_run_guest(&nmi_recevied, SVM_EXIT_NMI);
+	svm_intr_intercept_mix_run_guest(ctx, &nmi_recevied, SVM_EXIT_NMI);
 }
 
 // test that pending SMI will be handled when guest enables GIF
@@ -3151,12 +3155,12 @@ static void svm_intr_intercept_mix_smi_guest(struct svm_test_context *ctx)
 	report(0, "must not reach here");
 }
 
-static void svm_intr_intercept_mix_smi(void)
+static void svm_intr_intercept_mix_smi(struct svm_test_context *ctx)
 {
 	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_SMI);
 	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 	test_set_guest(svm_intr_intercept_mix_smi_guest);
-	svm_intr_intercept_mix_run_guest(NULL, SVM_EXIT_SMI);
+	svm_intr_intercept_mix_run_guest(ctx, NULL, SVM_EXIT_SMI);
 }
 
 static void svm_l2_ac_test(void)
@@ -3198,30 +3202,30 @@ static void svm_exception_handler(struct ex_regs *regs)
 	vmmcall();
 }
 
-static void handle_exception_in_l2(u8 vector)
+static void handle_exception_in_l2(struct svm_test_context *ctx, u8 vector)
 {
 	handler old_handler = handle_exception(vector, svm_exception_handler);
 	svm_exception_test_vector = vector;
 
-	report(svm_vmrun() == SVM_EXIT_VMMCALL,
+	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL,
 		"%s handled by L2", exception_mnemonic(vector));
 
 	handle_exception(vector, old_handler);
 }
 
-static void handle_exception_in_l1(u32 vector)
+static void handle_exception_in_l1(struct svm_test_context *ctx, u32 vector)
 {
 	u32 old_ie = vcpu0.vmcb->control.intercept_exceptions;
 
 	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << vector);
 
-	report(svm_vmrun() == (SVM_EXIT_EXCP_BASE + vector),
+	report(svm_vmrun(ctx) == (SVM_EXIT_EXCP_BASE + vector),
 		"%s handled by L1",  exception_mnemonic(vector));
 
 	vcpu0.vmcb->control.intercept_exceptions = old_ie;
 }
 
-static void svm_exception_test(void)
+static void svm_exception_test(struct svm_test_context *ctx)
 {
 	struct svm_exception_test *t;
 	int i;
@@ -3230,10 +3234,10 @@ static void svm_exception_test(void)
 		t = &svm_exception_tests[i];
 		test_set_guest((test_guest_func)t->guest_code);
 
-		handle_exception_in_l2(t->vector);
+		handle_exception_in_l2(ctx, t->vector);
 		svm_vcpu_ident(&vcpu0);
 
-		handle_exception_in_l1(t->vector);
+		handle_exception_in_l1(ctx, t->vector);
 		svm_vcpu_ident(&vcpu0);
 	}
 }
@@ -3244,12 +3248,12 @@ static void shutdown_intercept_test_guest(struct svm_test_context *ctx)
 	report_fail("should not reach here\n");
 
 }
-static void svm_shutdown_intercept_test(void)
+static void svm_shutdown_intercept_test(struct svm_test_context *ctx)
 {
 	test_set_guest(shutdown_intercept_test_guest);
 	vcpu0.vmcb->save.idtr.base = (u64)alloc_vpage();
 	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN);
-	svm_vmrun();
+	svm_vmrun(ctx);
 	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed");
 }
 
-- 
2.34.3


* [kvm-unit-tests PATCH v3 25/27] svm: move nested vcpu to test context
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (23 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 24/27] svm: use svm_test_context in v2 tests Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-12-02 10:22   ` Emanuele Giuseppe Esposito
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 26/27] svm: move test_guest_func " Maxim Levitsky
                   ` (2 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

This moves vcpu0 into struct svm_test_context and renames it to 'vcpu'
to make clear that it is the vCPU used by the current test.
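
As a minimal illustration (not part of the patch itself), a test
callback now reaches the vmcb through the context instead of the old
vcpu0 global; e.g. the null_check case after this change:

	static bool null_check(struct svm_test_context *ctx)
	{
		/* the per-test vCPU comes from the context, not a global */
		return ctx->vcpu->vmcb->control.exit_code == SVM_EXIT_VMMCALL;
	}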

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 x86/svm.c       |  26 +-
 x86/svm.h       |   5 +-
 x86/svm_npt.c   |  54 ++--
 x86/svm_tests.c | 753 ++++++++++++++++++++++++++++--------------------
 4 files changed, 486 insertions(+), 352 deletions(-)

diff --git a/x86/svm.c b/x86/svm.c
index 06d34ac4..a3279545 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -16,8 +16,6 @@
 #include "apic.h"
 #include "svm_lib.h"
 
-struct svm_vcpu vcpu0;
-
 bool smp_supported(void)
 {
 	return cpu_count() > 1;
@@ -78,11 +76,11 @@ static void test_thunk(struct svm_test_context *ctx)
 
 int __svm_vmrun(struct svm_test_context *ctx, u64 rip)
 {
-	vcpu0.vmcb->save.rip = (ulong)rip;
-	vcpu0.regs.rdi = (ulong)ctx;
-	vcpu0.regs.rsp = (ulong)(vcpu0.stack);
-	SVM_VMRUN(&vcpu0);
-	return vcpu0.vmcb->control.exit_code;
+	ctx->vcpu->vmcb->save.rip = (ulong)rip;
+	ctx->vcpu->regs.rdi = (ulong)ctx;
+	ctx->vcpu->regs.rsp = (ulong)(ctx->vcpu->stack);
+	SVM_VMRUN(ctx->vcpu);
+	return ctx->vcpu->vmcb->control.exit_code;
 }
 
 int svm_vmrun(struct svm_test_context *ctx)
@@ -92,7 +90,7 @@ int svm_vmrun(struct svm_test_context *ctx)
 
 static noinline void test_run(struct svm_test_context *ctx)
 {
-	svm_vcpu_ident(&vcpu0);
+	svm_vcpu_ident(ctx->vcpu);
 
 	if (ctx->test->v2) {
 		ctx->test->v2(ctx);
@@ -103,9 +101,9 @@ static noinline void test_run(struct svm_test_context *ctx)
 
 	ctx->test->prepare(ctx);
 	guest_main = ctx->test->guest_func;
-	vcpu0.vmcb->save.rip = (ulong)test_thunk;
-	vcpu0.regs.rsp = (ulong)(vcpu0.stack);
-	vcpu0.regs.rdi = (ulong)ctx;
+	ctx->vcpu->vmcb->save.rip = (ulong)test_thunk;
+	ctx->vcpu->regs.rsp = (ulong)(ctx->vcpu->stack);
+	ctx->vcpu->regs.rdi = (ulong)ctx;
 	do {
 
 		clgi();
@@ -113,7 +111,7 @@ static noinline void test_run(struct svm_test_context *ctx)
 
 		ctx->test->prepare_gif_clear(ctx);
 
-		__SVM_VMRUN(&vcpu0, "vmrun_rip");
+		__SVM_VMRUN(ctx->vcpu, "vmrun_rip");
 
 		cli();
 		stgi();
@@ -182,13 +180,15 @@ int run_svm_tests(int ac, char **av, struct svm_test *svm_tests)
 		return 0;
 
 	struct svm_test_context ctx;
+	struct svm_vcpu vcpu;
 
-	svm_vcpu_init(&vcpu0);
+	svm_vcpu_init(&vcpu);
 
 	for (; svm_tests[i].name != NULL; i++) {
 
 		memset(&ctx, 0, sizeof(ctx));
 		ctx.test = &svm_tests[i];
+		ctx.vcpu = &vcpu;
 
 		if (!test_wanted(svm_tests[i].name, av, ac))
 			continue;
diff --git a/x86/svm.h b/x86/svm.h
index 961c4de3..ec181715 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -12,6 +12,9 @@ struct svm_test_context {
 	ulong scratch;
 	bool on_vcpu_done;
 	struct svm_test *test;
+
+	/* TODO: test cases currently are single threaded */
+	struct svm_vcpu *vcpu;
 };
 
 struct svm_test {
@@ -44,6 +47,4 @@ int svm_vmrun(struct svm_test_context *ctx);
 void test_set_guest(test_guest_func func);
 
 
-extern struct svm_vcpu vcpu0;
-
 #endif
diff --git a/x86/svm_npt.c b/x86/svm_npt.c
index fc16b4be..39fd7198 100644
--- a/x86/svm_npt.c
+++ b/x86/svm_npt.c
@@ -27,23 +27,26 @@ static void npt_np_test(struct svm_test_context *ctx)
 
 static bool npt_np_check(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	u64 *pte = npt_get_pte((u64) scratch_page);
 
 	*pte |= 1ULL;
 
-	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
-	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000004ULL);
+	return (vmcb->control.exit_code == SVM_EXIT_NPF)
+	    && (vmcb->control.exit_info_1 == 0x100000004ULL);
 }
 
 static void npt_nx_prepare(struct svm_test_context *ctx)
 {
 	u64 *pte;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
 	ctx->scratch = rdmsr(MSR_EFER);
 	wrmsr(MSR_EFER, ctx->scratch | EFER_NX);
 
 	/* Clear the guest's EFER.NX, it should not affect NPT behavior. */
-	vcpu0.vmcb->save.efer &= ~EFER_NX;
+	vmcb->save.efer &= ~EFER_NX;
 
 	pte = npt_get_pte((u64) null_test);
 
@@ -53,13 +56,14 @@ static void npt_nx_prepare(struct svm_test_context *ctx)
 static bool npt_nx_check(struct svm_test_context *ctx)
 {
 	u64 *pte = npt_get_pte((u64) null_test);
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
 	wrmsr(MSR_EFER, ctx->scratch);
 
 	*pte &= ~PT64_NX_MASK;
 
-	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
-	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000015ULL);
+	return (vmcb->control.exit_code == SVM_EXIT_NPF)
+	    && (vmcb->control.exit_info_1 == 0x100000015ULL);
 }
 
 static void npt_us_prepare(struct svm_test_context *ctx)
@@ -80,11 +84,12 @@ static void npt_us_test(struct svm_test_context *ctx)
 static bool npt_us_check(struct svm_test_context *ctx)
 {
 	u64 *pte = npt_get_pte((u64) scratch_page);
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
 	*pte |= (1ULL << 2);
 
-	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
-	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000005ULL);
+	return (vmcb->control.exit_code == SVM_EXIT_NPF)
+	    && (vmcb->control.exit_info_1 == 0x100000005ULL);
 }
 
 static void npt_rw_prepare(struct svm_test_context *ctx)
@@ -107,11 +112,12 @@ static void npt_rw_test(struct svm_test_context *ctx)
 static bool npt_rw_check(struct svm_test_context *ctx)
 {
 	u64 *pte = npt_get_pte(0x80000);
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
 	*pte |= (1ULL << 1);
 
-	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
-	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000007ULL);
+	return (vmcb->control.exit_code == SVM_EXIT_NPF)
+	    && (vmcb->control.exit_info_1 == 0x100000007ULL);
 }
 
 static void npt_rw_pfwalk_prepare(struct svm_test_context *ctx)
@@ -127,12 +133,13 @@ static void npt_rw_pfwalk_prepare(struct svm_test_context *ctx)
 static bool npt_rw_pfwalk_check(struct svm_test_context *ctx)
 {
 	u64 *pte = npt_get_pte(read_cr3());
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
 	*pte |= (1ULL << 1);
 
-	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
-	    && (vcpu0.vmcb->control.exit_info_1 == 0x200000007ULL)
-	    && (vcpu0.vmcb->control.exit_info_2 == read_cr3());
+	return (vmcb->control.exit_code == SVM_EXIT_NPF)
+	    && (vmcb->control.exit_info_1 == 0x200000007ULL)
+	    && (vmcb->control.exit_info_2 == read_cr3());
 }
 
 static void npt_l1mmio_prepare(struct svm_test_context *ctx)
@@ -178,11 +185,12 @@ static void npt_rw_l1mmio_test(struct svm_test_context *ctx)
 static bool npt_rw_l1mmio_check(struct svm_test_context *ctx)
 {
 	u64 *pte = npt_get_pte(0xfee00080);
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
 	*pte |= (1ULL << 1);
 
-	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
-	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000007ULL);
+	return (vmcb->control.exit_code == SVM_EXIT_NPF)
+	    && (vmcb->control.exit_info_1 == 0x100000007ULL);
 }
 
 static void basic_guest_main(struct svm_test_context *ctx)
@@ -193,6 +201,7 @@ static void __svm_npt_rsvd_bits_test(struct svm_test_context *ctx,
 				     u64 * pxe, u64 rsvd_bits, u64 efer,
 				     ulong cr4, u64 guest_efer, ulong guest_cr4)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 	u64 pxe_orig = *pxe;
 	int exit_reason;
 	u64 pfec;
@@ -200,8 +209,8 @@ static void __svm_npt_rsvd_bits_test(struct svm_test_context *ctx,
 	wrmsr(MSR_EFER, efer);
 	write_cr4(cr4);
 
-	vcpu0.vmcb->save.efer = guest_efer;
-	vcpu0.vmcb->save.cr4 = guest_cr4;
+	vmcb->save.efer = guest_efer;
+	vmcb->save.cr4 = guest_cr4;
 
 	*pxe |= rsvd_bits;
 
@@ -227,10 +236,10 @@ static void __svm_npt_rsvd_bits_test(struct svm_test_context *ctx,
 
 	}
 
-	report(vcpu0.vmcb->control.exit_info_1 == pfec,
+	report(vmcb->control.exit_info_1 == pfec,
 	       "Wanted PFEC = 0x%lx, got PFEC = %lx, PxE = 0x%lx.  "
 	       "host.NX = %u, host.SMEP = %u, guest.NX = %u, guest.SMEP = %u",
-	       pfec, vcpu0.vmcb->control.exit_info_1, *pxe,
+	       pfec, vmcb->control.exit_info_1, *pxe,
 	       !!(efer & EFER_NX), !!(cr4 & X86_CR4_SMEP),
 	       !!(guest_efer & EFER_NX), !!(guest_cr4 & X86_CR4_SMEP));
 
@@ -311,6 +320,7 @@ static void svm_npt_rsvd_bits_test(struct svm_test_context *ctx)
 {
 	u64 saved_efer, host_efer, sg_efer, guest_efer;
 	ulong saved_cr4, host_cr4, sg_cr4, guest_cr4;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
 	if (!npt_supported()) {
 		report_skip("NPT not supported");
@@ -319,8 +329,8 @@ static void svm_npt_rsvd_bits_test(struct svm_test_context *ctx)
 
 	saved_efer = host_efer = rdmsr(MSR_EFER);
 	saved_cr4 = host_cr4 = read_cr4();
-	sg_efer = guest_efer = vcpu0.vmcb->save.efer;
-	sg_cr4 = guest_cr4 = vcpu0.vmcb->save.cr4;
+	sg_efer = guest_efer = vmcb->save.efer;
+	sg_cr4 = guest_cr4 = vmcb->save.cr4;
 
 	test_set_guest(basic_guest_main);
 
@@ -352,8 +362,8 @@ skip_pte_test:
 
 	wrmsr(MSR_EFER, saved_efer);
 	write_cr4(saved_cr4);
-	vcpu0.vmcb->save.efer = sg_efer;
-	vcpu0.vmcb->save.cr4 = sg_cr4;
+	vmcb->save.efer = sg_efer;
+	vmcb->save.cr4 = sg_cr4;
 }
 
 #define NPT_V1_TEST(name, prepare, guest_code, check)				\
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 6041ac24..bd92fcee 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -44,33 +44,36 @@ static void null_test(struct svm_test_context *ctx)
 
 static bool null_check(struct svm_test_context *ctx)
 {
-	return vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL;
+	return ctx->vcpu->vmcb->control.exit_code == SVM_EXIT_VMMCALL;
 }
 
 static void prepare_no_vmrun_int(struct svm_test_context *ctx)
 {
-	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMRUN);
+	ctx->vcpu->vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMRUN);
 }
 
 static bool check_no_vmrun_int(struct svm_test_context *ctx)
 {
-	return vcpu0.vmcb->control.exit_code == SVM_EXIT_ERR;
+	return ctx->vcpu->vmcb->control.exit_code == SVM_EXIT_ERR;
 }
 
 static void test_vmrun(struct svm_test_context *ctx)
 {
-	asm volatile ("vmrun %0" : : "a"(virt_to_phys(vcpu0.vmcb)));
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	asm volatile ("vmrun %0" : : "a"(virt_to_phys(vmcb)));
 }
 
 static bool check_vmrun(struct svm_test_context *ctx)
 {
-	return vcpu0.vmcb->control.exit_code == SVM_EXIT_VMRUN;
+	return ctx->vcpu->vmcb->control.exit_code == SVM_EXIT_VMRUN;
 }
 
 static void prepare_rsm_intercept(struct svm_test_context *ctx)
 {
-	vcpu0.vmcb->control.intercept |= 1 << INTERCEPT_RSM;
-	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << UD_VECTOR);
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+	vmcb->control.intercept |= 1 << INTERCEPT_RSM;
+	vmcb->control.intercept_exceptions |= (1ULL << UD_VECTOR);
 }
 
 static void test_rsm_intercept(struct svm_test_context *ctx)
@@ -85,24 +88,25 @@ static bool check_rsm_intercept(struct svm_test_context *ctx)
 
 static bool finished_rsm_intercept(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 	switch (get_test_stage(ctx)) {
 	case 0:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_RSM) {
+		if (vmcb->control.exit_code != SVM_EXIT_RSM) {
 			report_fail("VMEXIT not due to rsm. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
-		vcpu0.vmcb->control.intercept &= ~(1 << INTERCEPT_RSM);
+		vmcb->control.intercept &= ~(1 << INTERCEPT_RSM);
 		inc_test_stage(ctx);
 		break;
 
 	case 1:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + UD_VECTOR) {
+		if (vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + UD_VECTOR) {
 			report_fail("VMEXIT not due to #UD. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
-		vcpu0.vmcb->save.rip += 2;
+		vmcb->save.rip += 2;
 		inc_test_stage(ctx);
 		break;
 
@@ -114,7 +118,9 @@ static bool finished_rsm_intercept(struct svm_test_context *ctx)
 
 static void prepare_cr3_intercept(struct svm_test_context *ctx)
 {
-	vcpu0.vmcb->control.intercept_cr_read |= 1 << 3;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	vmcb->control.intercept_cr_read |= 1 << 3;
 }
 
 static void test_cr3_intercept(struct svm_test_context *ctx)
@@ -124,7 +130,8 @@ static void test_cr3_intercept(struct svm_test_context *ctx)
 
 static bool check_cr3_intercept(struct svm_test_context *ctx)
 {
-	return vcpu0.vmcb->control.exit_code == SVM_EXIT_READ_CR3;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+	return vmcb->control.exit_code == SVM_EXIT_READ_CR3;
 }
 
 static bool check_cr3_nointercept(struct svm_test_context *ctx)
@@ -147,7 +154,9 @@ static void corrupt_cr3_intercept_bypass(void *_ctx)
 
 static void prepare_cr3_intercept_bypass(struct svm_test_context *ctx)
 {
-	vcpu0.vmcb->control.intercept_cr_read |= 1 << 3;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	vmcb->control.intercept_cr_read |= 1 << 3;
 	on_cpu_async(1, corrupt_cr3_intercept_bypass, ctx);
 }
 
@@ -166,8 +175,10 @@ static void test_cr3_intercept_bypass(struct svm_test_context *ctx)
 
 static void prepare_dr_intercept(struct svm_test_context *ctx)
 {
-	vcpu0.vmcb->control.intercept_dr_read = 0xff;
-	vcpu0.vmcb->control.intercept_dr_write = 0xff;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	vmcb->control.intercept_dr_read = 0xff;
+	vmcb->control.intercept_dr_write = 0xff;
 }
 
 static void test_dr_intercept(struct svm_test_context *ctx)
@@ -251,7 +262,8 @@ static void test_dr_intercept(struct svm_test_context *ctx)
 
 static bool dr_intercept_finished(struct svm_test_context *ctx)
 {
-	ulong n = (vcpu0.vmcb->control.exit_code - SVM_EXIT_READ_DR0);
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+	ulong n = (vmcb->control.exit_code - SVM_EXIT_READ_DR0);
 
 	/* Only expect DR intercepts */
 	if (n > (SVM_EXIT_MAX_DR_INTERCEPT - SVM_EXIT_READ_DR0))
@@ -267,7 +279,7 @@ static bool dr_intercept_finished(struct svm_test_context *ctx)
 	ctx->scratch = (n % 16);
 
 	/* Jump over MOV instruction */
-	vcpu0.vmcb->save.rip += 3;
+	vmcb->save.rip += 3;
 
 	return false;
 }
@@ -284,7 +296,8 @@ static bool next_rip_supported(void)
 
 static void prepare_next_rip(struct svm_test_context *ctx)
 {
-	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_RDTSC);
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+	vmcb->control.intercept |= (1ULL << INTERCEPT_RDTSC);
 }
 
 
@@ -299,15 +312,17 @@ static bool check_next_rip(struct svm_test_context *ctx)
 {
 	extern char exp_next_rip;
 	unsigned long address = (unsigned long)&exp_next_rip;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
-	return address == vcpu0.vmcb->control.next_rip;
+	return address == vmcb->control.next_rip;
 }
 
 
 static void prepare_msr_intercept(struct svm_test_context *ctx)
 {
-	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT);
-	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+	vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT);
+	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
 	memset(svm_get_msr_bitmap(), 0xff, MSR_BITMAP_SIZE);
 }
 
@@ -359,12 +374,13 @@ static void test_msr_intercept(struct svm_test_context *ctx)
 
 static bool msr_intercept_finished(struct svm_test_context *ctx)
 {
-	u32 exit_code = vcpu0.vmcb->control.exit_code;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+	u32 exit_code = vmcb->control.exit_code;
 	u64 exit_info_1;
 	u8 *opcode;
 
 	if (exit_code == SVM_EXIT_MSR) {
-		exit_info_1 = vcpu0.vmcb->control.exit_info_1;
+		exit_info_1 = vmcb->control.exit_info_1;
 	} else {
 		/*
 		 * If #GP exception occurs instead, check that it was
@@ -374,7 +390,7 @@ static bool msr_intercept_finished(struct svm_test_context *ctx)
 		if (exit_code != (SVM_EXIT_EXCP_BASE + GP_VECTOR))
 			return true;
 
-		opcode = (u8 *)vcpu0.vmcb->save.rip;
+		opcode = (u8 *)vmcb->save.rip;
 		if (opcode[0] != 0x0f)
 			return true;
 
@@ -394,11 +410,11 @@ static bool msr_intercept_finished(struct svm_test_context *ctx)
 		 * RCX holds the MSR index.
 		 */
 		printf("%s 0x%lx #GP exception\n",
-		       exit_info_1 ? "WRMSR" : "RDMSR", vcpu0.regs.rcx);
+		       exit_info_1 ? "WRMSR" : "RDMSR", ctx->vcpu->regs.rcx);
 	}
 
 	/* Jump over RDMSR/WRMSR instruction */
-	vcpu0.vmcb->save.rip += 2;
+	vmcb->save.rip += 2;
 
 	/*
 	 * Test whether the intercept was for RDMSR/WRMSR.
@@ -410,9 +426,9 @@ static bool msr_intercept_finished(struct svm_test_context *ctx)
 	 */
 	if (exit_info_1)
 		ctx->scratch =
-			((vcpu0.regs.rdx << 32) | (vcpu0.regs.rax & 0xffffffff));
+			((ctx->vcpu->regs.rdx << 32) | (ctx->vcpu->regs.rax & 0xffffffff));
 	else
-		ctx->scratch = vcpu0.regs.rcx;
+		ctx->scratch = ctx->vcpu->regs.rcx;
 
 	return false;
 }
@@ -425,7 +441,9 @@ static bool check_msr_intercept(struct svm_test_context *ctx)
 
 static void prepare_mode_switch(struct svm_test_context *ctx)
 {
-	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR)
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR)
 		|  (1ULL << UD_VECTOR)
 		|  (1ULL << DF_VECTOR)
 		|  (1ULL << PF_VECTOR);
@@ -490,17 +508,18 @@ static void test_mode_switch(struct svm_test_context *ctx)
 static bool mode_switch_finished(struct svm_test_context *ctx)
 {
 	u64 cr0, cr4, efer;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
-	cr0  = vcpu0.vmcb->save.cr0;
-	cr4  = vcpu0.vmcb->save.cr4;
-	efer = vcpu0.vmcb->save.efer;
+	cr0  = vmcb->save.cr0;
+	cr4  = vmcb->save.cr4;
+	efer = vmcb->save.efer;
 
 	/* Only expect VMMCALL intercepts */
-	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL)
+	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL)
 		return true;
 
 	/* Jump over VMMCALL instruction */
-	vcpu0.vmcb->save.rip += 3;
+	vmcb->save.rip += 3;
 
 	/* Do sanity checks */
 	switch (ctx->scratch) {
@@ -534,8 +553,9 @@ static bool check_mode_switch(struct svm_test_context *ctx)
 static void prepare_ioio(struct svm_test_context *ctx)
 {
 	u8 *io_bitmap = svm_get_io_bitmap();
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
-	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_IOIO_PROT);
+	vmcb->control.intercept |= (1ULL << INTERCEPT_IOIO_PROT);
 	ctx->scratch = 0;
 	memset(io_bitmap, 0, 8192);
 	io_bitmap[8192] = 0xFF;
@@ -617,19 +637,20 @@ static bool ioio_finished(struct svm_test_context *ctx)
 {
 	unsigned port, size;
 	u8 *io_bitmap = svm_get_io_bitmap();
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
 	/* Only expect IOIO intercepts */
-	if (vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL)
+	if (vmcb->control.exit_code == SVM_EXIT_VMMCALL)
 		return true;
 
-	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_IOIO)
+	if (vmcb->control.exit_code != SVM_EXIT_IOIO)
 		return true;
 
 	/* one step forward */
 	ctx->scratch += 1;
 
-	port = vcpu0.vmcb->control.exit_info_1 >> 16;
-	size = (vcpu0.vmcb->control.exit_info_1 >> SVM_IOIO_SIZE_SHIFT) & 7;
+	port = vmcb->control.exit_info_1 >> 16;
+	size = (vmcb->control.exit_info_1 >> SVM_IOIO_SIZE_SHIFT) & 7;
 
 	while (size--) {
 		io_bitmap[port / 8] &= ~(1 << (port & 7));
@@ -649,7 +670,9 @@ static bool check_ioio(struct svm_test_context *ctx)
 
 static void prepare_asid_zero(struct svm_test_context *ctx)
 {
-	vcpu0.vmcb->control.asid = 0;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	vmcb->control.asid = 0;
 }
 
 static void test_asid_zero(struct svm_test_context *ctx)
@@ -659,12 +682,16 @@ static void test_asid_zero(struct svm_test_context *ctx)
 
 static bool check_asid_zero(struct svm_test_context *ctx)
 {
-	return vcpu0.vmcb->control.exit_code == SVM_EXIT_ERR;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	return vmcb->control.exit_code == SVM_EXIT_ERR;
 }
 
 static void sel_cr0_bug_prepare(struct svm_test_context *ctx)
 {
-	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
 }
 
 static bool sel_cr0_bug_finished(struct svm_test_context *ctx)
@@ -692,7 +719,9 @@ static void sel_cr0_bug_test(struct svm_test_context *ctx)
 
 static bool sel_cr0_bug_check(struct svm_test_context *ctx)
 {
-	return vcpu0.vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
 }
 
 #define TSC_ADJUST_VALUE    (1ll << 32)
@@ -706,7 +735,9 @@ static bool tsc_adjust_supported(void)
 
 static void tsc_adjust_prepare(struct svm_test_context *ctx)
 {
-	vcpu0.vmcb->control.tsc_offset = TSC_OFFSET_VALUE;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	vmcb->control.tsc_offset = TSC_OFFSET_VALUE;
 
 	wrmsr(MSR_IA32_TSC_ADJUST, -TSC_ADJUST_VALUE);
 	int64_t adjust = rdmsr(MSR_IA32_TSC_ADJUST);
@@ -758,17 +789,18 @@ static void svm_tsc_scale_run_testcase(struct svm_test_context *ctx,
 				       double tsc_scale, u64 tsc_offset)
 {
 	u64 start_tsc, actual_duration;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
 	guest_tsc_delay_value = (duration << TSC_SHIFT) * tsc_scale;
 
 	test_set_guest(svm_tsc_scale_guest);
-	vcpu0.vmcb->control.tsc_offset = tsc_offset;
+	vmcb->control.tsc_offset = tsc_offset;
 	wrmsr(MSR_AMD64_TSC_RATIO, (u64)(tsc_scale * (1ULL << 32)));
 
 	start_tsc = rdtsc();
 
 	if (svm_vmrun(ctx) != SVM_EXIT_VMMCALL)
-		report_fail("unexpected vm exit code 0x%x", vcpu0.vmcb->control.exit_code);
+		report_fail("unexpected vm exit code 0x%x", vmcb->control.exit_code);
 
 	actual_duration = (rdtsc() - start_tsc) >> TSC_SHIFT;
 
@@ -839,6 +871,7 @@ start:
 static bool latency_finished(struct svm_test_context *ctx)
 {
 	u64 cycles;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
 	tsc_end = rdtsc();
 
@@ -852,7 +885,7 @@ static bool latency_finished(struct svm_test_context *ctx)
 
 	vmexit_sum += cycles;
 
-	vcpu0.vmcb->save.rip += 3;
+	vmcb->save.rip += 3;
 
 	runs -= 1;
 
@@ -863,7 +896,10 @@ static bool latency_finished(struct svm_test_context *ctx)
 
 static bool latency_finished_clean(struct svm_test_context *ctx)
 {
-	vcpu0.vmcb->control.clean = VMCB_CLEAN_ALL;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	vmcb->control.clean = VMCB_CLEAN_ALL;
+
 	return latency_finished(ctx);
 }
 
@@ -886,7 +922,9 @@ static void lat_svm_insn_prepare(struct svm_test_context *ctx)
 
 static bool lat_svm_insn_finished(struct svm_test_context *ctx)
 {
-	u64 vmcb_phys = virt_to_phys(vcpu0.vmcb);
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	u64 vmcb_phys = virt_to_phys(vmcb);
 	u64 cycles;
 
 	for ( ; runs != 0; runs--) {
@@ -957,6 +995,7 @@ static void pending_event_ipi_isr(isr_regs_t *regs)
 static void pending_event_prepare(struct svm_test_context *ctx)
 {
 	int ipi_vector = 0xf1;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
 	pending_event_ipi_fired = false;
 
@@ -964,8 +1003,8 @@ static void pending_event_prepare(struct svm_test_context *ctx)
 
 	pending_event_guest_run = false;
 
-	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
-	vcpu0.vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
+	vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
+	vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
 
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL |
 		       APIC_DM_FIXED | ipi_vector, 0);
@@ -980,16 +1019,18 @@ static void pending_event_test(struct svm_test_context *ctx)
 
 static bool pending_event_finished(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	switch (get_test_stage(ctx)) {
 	case 0:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_INTR) {
+		if (vmcb->control.exit_code != SVM_EXIT_INTR) {
 			report_fail("VMEXIT not due to pending interrupt. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
 
-		vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
-		vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+		vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
+		vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 
 		if (pending_event_guest_run) {
 			report_fail("Guest ran before host received IPI\n");
@@ -1067,19 +1108,21 @@ static void pending_event_cli_test(struct svm_test_context *ctx)
 
 static bool pending_event_cli_finished(struct svm_test_context *ctx)
 {
-	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 		report_fail("VM_EXIT return to host is not EXIT_VMMCALL exit reason 0x%x",
-			    vcpu0.vmcb->control.exit_code);
+			    vmcb->control.exit_code);
 		return true;
 	}
 
 	switch (get_test_stage(ctx)) {
 	case 0:
-		vcpu0.vmcb->save.rip += 3;
+		vmcb->save.rip += 3;
 
 		pending_event_ipi_fired = false;
 
-		vcpu0.vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
+		vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
 
 		/* Now entering again with VINTR_MASKING=1.  */
 		apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL |
@@ -1206,32 +1249,34 @@ static void interrupt_test(struct svm_test_context *ctx)
 
 static bool interrupt_finished(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	switch (get_test_stage(ctx)) {
 	case 0:
 	case 2:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
-		vcpu0.vmcb->save.rip += 3;
+		vmcb->save.rip += 3;
 
-		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
-		vcpu0.vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
+		vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
+		vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
 		break;
 
 	case 1:
 	case 3:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_INTR) {
+		if (vmcb->control.exit_code != SVM_EXIT_INTR) {
 			report_fail("VMEXIT not due to intr intercept. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
 
 		sti_nop_cli();
 
-		vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
-		vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+		vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
+		vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 		break;
 
 	case 4:
@@ -1289,22 +1334,24 @@ static void nmi_test(struct svm_test_context *ctx)
 
 static bool nmi_finished(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	switch (get_test_stage(ctx)) {
 	case 0:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
-		vcpu0.vmcb->save.rip += 3;
+		vmcb->save.rip += 3;
 
-		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
+		vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
 		break;
 
 	case 1:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_NMI) {
+		if (vmcb->control.exit_code != SVM_EXIT_NMI) {
 			report_fail("VMEXIT not due to NMI intercept. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
 
@@ -1391,22 +1438,24 @@ static void nmi_hlt_test(struct svm_test_context *ctx)
 
 static bool nmi_hlt_finished(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	switch (get_test_stage(ctx)) {
 	case 1:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
-		vcpu0.vmcb->save.rip += 3;
+		vmcb->save.rip += 3;
 
-		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
+		vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
 		break;
 
 	case 2:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_NMI) {
+		if (vmcb->control.exit_code != SVM_EXIT_NMI) {
 			report_fail("VMEXIT not due to NMI intercept. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
 
@@ -1449,40 +1498,42 @@ static void exc_inject_test(struct svm_test_context *ctx)
 
 static bool exc_inject_finished(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	switch (get_test_stage(ctx)) {
 	case 0:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
-		vcpu0.vmcb->save.rip += 3;
-		vcpu0.vmcb->control.event_inj = NMI_VECTOR |
+		vmcb->save.rip += 3;
+		vmcb->control.event_inj = NMI_VECTOR |
 						SVM_EVTINJ_TYPE_EXEPT |
 						SVM_EVTINJ_VALID;
 		break;
 
 	case 1:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_ERR) {
+		if (vmcb->control.exit_code != SVM_EXIT_ERR) {
 			report_fail("VMEXIT not due to error. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
 		report(count_exc == 0, "exception with vector 2 not injected");
-		vcpu0.vmcb->control.event_inj = DE_VECTOR |
+		vmcb->control.event_inj = DE_VECTOR |
 						SVM_EVTINJ_TYPE_EXEPT |
 						SVM_EVTINJ_VALID;
 		break;
 
 	case 2:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
-		vcpu0.vmcb->save.rip += 3;
+		vmcb->save.rip += 3;
 		report(count_exc == 1, "divide overflow exception injected");
-		report(!(vcpu0.vmcb->control.event_inj & SVM_EVTINJ_VALID),
+		report(!(vmcb->control.event_inj & SVM_EVTINJ_VALID),
 		       "eventinj.VALID cleared");
 		break;
 
@@ -1509,11 +1560,13 @@ static void virq_isr(isr_regs_t *regs)
 
 static void virq_inject_prepare(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	handle_irq(0xf1, virq_isr);
 
-	vcpu0.vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
+	vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
 		(0x0f << V_INTR_PRIO_SHIFT); // Set to the highest priority
-	vcpu0.vmcb->control.int_vector = 0xf1;
+	vmcb->control.int_vector = 0xf1;
 	virq_fired = false;
 	set_test_stage(ctx, 0);
 }
@@ -1563,66 +1616,68 @@ static void virq_inject_test(struct svm_test_context *ctx)
 
 static bool virq_inject_finished(struct svm_test_context *ctx)
 {
-	vcpu0.vmcb->save.rip += 3;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	vmcb->save.rip += 3;
 
 	switch (get_test_stage(ctx)) {
 	case 0:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
-		if (vcpu0.vmcb->control.int_ctl & V_IRQ_MASK) {
+		if (vmcb->control.int_ctl & V_IRQ_MASK) {
 			report_fail("V_IRQ not cleared on VMEXIT after firing");
 			return true;
 		}
 		virq_fired = false;
-		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
-		vcpu0.vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
+		vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
+		vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
 			(0x0f << V_INTR_PRIO_SHIFT);
 		break;
 
 	case 1:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VINTR) {
+		if (vmcb->control.exit_code != SVM_EXIT_VINTR) {
 			report_fail("VMEXIT not due to vintr. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
 		if (virq_fired) {
 			report_fail("V_IRQ fired before SVM_EXIT_VINTR");
 			return true;
 		}
-		vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VINTR);
+		vmcb->control.intercept &= ~(1ULL << INTERCEPT_VINTR);
 		break;
 
 	case 2:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
 		virq_fired = false;
 		// Set irq to lower priority
-		vcpu0.vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
+		vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
 			(0x08 << V_INTR_PRIO_SHIFT);
 		// Raise guest TPR
-		vcpu0.vmcb->control.int_ctl |= 0x0a & V_TPR_MASK;
+		vmcb->control.int_ctl |= 0x0a & V_TPR_MASK;
 		break;
 
 	case 3:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
-		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
+		vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
 		break;
 
 	case 4:
 		// INTERCEPT_VINTR should be ignored because V_INTR_PRIO < V_TPR
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
 		break;
@@ -1673,10 +1728,12 @@ static void reg_corruption_isr(isr_regs_t *regs)
 
 static void reg_corruption_prepare(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	set_test_stage(ctx, 0);
 
-	vcpu0.vmcb->control.int_ctl = V_INTR_MASKING_MASK;
-	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
+	vmcb->control.int_ctl = V_INTR_MASKING_MASK;
+	vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
 
 	handle_irq(TIMER_VECTOR, reg_corruption_isr);
 
@@ -1705,6 +1762,8 @@ static void reg_corruption_test(struct svm_test_context *ctx)
 
 static bool reg_corruption_finished(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	if (isr_cnt == 10000) {
 		report_pass("No RIP corruption detected after %d timer interrupts",
 			    isr_cnt);
@@ -1712,9 +1771,9 @@ static bool reg_corruption_finished(struct svm_test_context *ctx)
 		goto cleanup;
 	}
 
-	if (vcpu0.vmcb->control.exit_code == SVM_EXIT_INTR) {
+	if (vmcb->control.exit_code == SVM_EXIT_INTR) {
 
-		void *guest_rip = (void *)vcpu0.vmcb->save.rip;
+		void *guest_rip = (void *)vmcb->save.rip;
 
 		sti_nop_cli();
 
@@ -1782,8 +1841,10 @@ static volatile bool init_intercept;
 
 static void init_intercept_prepare(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	init_intercept = false;
-	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_INIT);
+	vmcb->control.intercept |= (1ULL << INTERCEPT_INIT);
 }
 
 static void init_intercept_test(struct svm_test_context *ctx)
@@ -1793,11 +1854,13 @@ static void init_intercept_test(struct svm_test_context *ctx)
 
 static bool init_intercept_finished(struct svm_test_context *ctx)
 {
-	vcpu0.vmcb->save.rip += 3;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	vmcb->save.rip += 3;
 
-	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_INIT) {
+	if (vmcb->control.exit_code != SVM_EXIT_INIT) {
 		report_fail("VMEXIT not due to init intercept. Exit reason 0x%x",
-			    vcpu0.vmcb->control.exit_code);
+			    vmcb->control.exit_code);
 
 		return true;
 	}
@@ -1894,14 +1957,16 @@ static void host_rflags_test(struct svm_test_context *ctx)
 
 static bool host_rflags_finished(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	switch (get_test_stage(ctx)) {
 	case 0:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 			report_fail("Unexpected VMEXIT. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
-		vcpu0.vmcb->save.rip += 3;
+		vmcb->save.rip += 3;
 		/*
 		 * Setting host EFLAGS.TF not immediately before VMRUN, causes
 		 * #DB trap before first guest instruction is executed
@@ -1909,14 +1974,14 @@ static bool host_rflags_finished(struct svm_test_context *ctx)
 		host_rflags_set_tf = true;
 		break;
 	case 1:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
 		    host_rflags_guest_main_flag != 1) {
 			report_fail("Unexpected VMEXIT or #DB handler"
 				    " invoked before guest main. Exit reason 0x%x",
-				    vcpu0.vmcb->control.exit_code);
+				    vmcb->control.exit_code);
 			return true;
 		}
-		vcpu0.vmcb->save.rip += 3;
+		vmcb->save.rip += 3;
 		/*
 		 * Setting host EFLAGS.TF immediately before VMRUN, causes #DB
 		 * trap after VMRUN completes on the host side (i.e., after
@@ -1925,21 +1990,21 @@ static bool host_rflags_finished(struct svm_test_context *ctx)
 		host_rflags_ss_on_vmrun = true;
 		break;
 	case 2:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
 		    rip_detected != (u64)&vmrun_rip + 3) {
 			report_fail("Unexpected VMEXIT or RIP mismatch."
 				    " Exit reason 0x%x, RIP actual: %lx, RIP expected: "
-				    "%lx", vcpu0.vmcb->control.exit_code,
+				    "%lx", vmcb->control.exit_code,
 				    (u64)&vmrun_rip + 3, rip_detected);
 			return true;
 		}
 		host_rflags_set_rf = true;
 		host_rflags_guest_main_flag = 0;
 		host_rflags_vmrun_reached = false;
-		vcpu0.vmcb->save.rip += 3;
+		vmcb->save.rip += 3;
 		break;
 	case 3:
-		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
 		    rip_detected != (u64)&vmrun_rip ||
 		    host_rflags_guest_main_flag != 1 ||
 		    host_rflags_db_handler_flag > 1 ||
@@ -1947,13 +2012,13 @@ static bool host_rflags_finished(struct svm_test_context *ctx)
 			report_fail("Unexpected VMEXIT or RIP mismatch or "
 				    "EFLAGS.RF not cleared."
 				    " Exit reason 0x%x, RIP actual: %lx, RIP expected: "
-				    "%lx", vcpu0.vmcb->control.exit_code,
+				    "%lx", vmcb->control.exit_code,
 				    (u64)&vmrun_rip, rip_detected);
 			return true;
 		}
 		host_rflags_set_tf = false;
 		host_rflags_set_rf = false;
-		vcpu0.vmcb->save.rip += 3;
+		vmcb->save.rip += 3;
 		break;
 	default:
 		return true;
@@ -1986,6 +2051,8 @@ static void svm_cr4_osxsave_test_guest(struct svm_test_context *ctx)
 
 static void svm_cr4_osxsave_test(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	if (!this_cpu_has(X86_FEATURE_XSAVE)) {
 		report_skip("XSAVE not detected");
 		return;
@@ -1995,7 +2062,7 @@ static void svm_cr4_osxsave_test(struct svm_test_context *ctx)
 		unsigned long cr4 = read_cr4() | X86_CR4_OSXSAVE;
 
 		write_cr4(cr4);
-		vcpu0.vmcb->save.cr4 = cr4;
+		vmcb->save.cr4 = cr4;
 	}
 
 	report(this_cpu_has(X86_FEATURE_OSXSAVE), "CPUID.01H:ECX.XSAVE set before VMRUN");
@@ -2035,6 +2102,7 @@ static void basic_guest_main(struct svm_test_context *ctx)
 	u64 tmp, mask;							\
 	u32 r;								\
 	int i;								\
+	struct vmcb *vmcb = ctx->vcpu->vmcb;				\
 									\
 	for (i = start; i <= end; i = i + inc) {			\
 		mask = 1ull << i;					\
@@ -2043,13 +2111,13 @@ static void basic_guest_main(struct svm_test_context *ctx)
 		tmp = val | mask;					\
 		switch (cr) {						\
 		case 0:							\
-			vcpu0.vmcb->save.cr0 = tmp;				\
+			vmcb->save.cr0 = tmp;				\
 			break;						\
 		case 3:							\
-			vcpu0.vmcb->save.cr3 = tmp;				\
+			vmcb->save.cr3 = tmp;				\
 			break;						\
 		case 4:							\
-			vcpu0.vmcb->save.cr4 = tmp;				\
+			vmcb->save.cr4 = tmp;				\
 		}							\
 		r = svm_vmrun(ctx);					\
 		report(r == exit_code, "Test CR%d %s%d:%d: %lx, wanted exit 0x%x, got 0x%x", \
@@ -2062,39 +2130,40 @@ static void test_efer(struct svm_test_context *ctx)
 	/*
 	 * Un-setting EFER.SVME is illegal
 	 */
-	u64 efer_saved = vcpu0.vmcb->save.efer;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+	u64 efer_saved = vmcb->save.efer;
 	u64 efer = efer_saved;
 
 	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "EFER.SVME: %lx", efer);
 	efer &= ~EFER_SVME;
-	vcpu0.vmcb->save.efer = efer;
+	vmcb->save.efer = efer;
 	report (svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.SVME: %lx", efer);
-	vcpu0.vmcb->save.efer = efer_saved;
+	vmcb->save.efer = efer_saved;
 
 	/*
 	 * EFER MBZ bits: 63:16, 9
 	 */
-	efer_saved = vcpu0.vmcb->save.efer;
+	efer_saved = vmcb->save.efer;
 
-	SVM_TEST_REG_RESERVED_BITS(ctx, 8, 9, 1, "EFER", vcpu0.vmcb->save.efer,
+	SVM_TEST_REG_RESERVED_BITS(ctx, 8, 9, 1, "EFER", vmcb->save.efer,
 				   efer_saved, SVM_EFER_RESERVED_MASK);
-	SVM_TEST_REG_RESERVED_BITS(ctx, 16, 63, 4, "EFER", vcpu0.vmcb->save.efer,
+	SVM_TEST_REG_RESERVED_BITS(ctx, 16, 63, 4, "EFER", vmcb->save.efer,
 				   efer_saved, SVM_EFER_RESERVED_MASK);
 
 	/*
 	 * EFER.LME and CR0.PG are both set and CR4.PAE is zero.
 	 */
-	u64 cr0_saved = vcpu0.vmcb->save.cr0;
+	u64 cr0_saved = vmcb->save.cr0;
 	u64 cr0;
-	u64 cr4_saved = vcpu0.vmcb->save.cr4;
+	u64 cr4_saved = vmcb->save.cr4;
 	u64 cr4;
 
 	efer = efer_saved | EFER_LME;
-	vcpu0.vmcb->save.efer = efer;
+	vmcb->save.efer = efer;
 	cr0 = cr0_saved | X86_CR0_PG | X86_CR0_PE;
-	vcpu0.vmcb->save.cr0 = cr0;
+	vmcb->save.cr0 = cr0;
 	cr4 = cr4_saved & ~X86_CR4_PAE;
-	vcpu0.vmcb->save.cr4 = cr4;
+	vmcb->save.cr4 = cr4;
 	report(svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
 	       "CR0.PG=1 (%lx) and CR4.PAE=0 (%lx)", efer, cr0, cr4);
 
@@ -2105,31 +2174,31 @@ static void test_efer(struct svm_test_context *ctx)
 	 * SVM_EXIT_ERR.
 	 */
 	cr4 = cr4_saved | X86_CR4_PAE;
-	vcpu0.vmcb->save.cr4 = cr4;
+	vmcb->save.cr4 = cr4;
 	cr0 &= ~X86_CR0_PE;
-	vcpu0.vmcb->save.cr0 = cr0;
+	vmcb->save.cr0 = cr0;
 	report(svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
 	       "CR0.PG=1 and CR0.PE=0 (%lx)", efer, cr0);
 
 	/*
 	 * EFER.LME, CR0.PG, CR4.PAE, CS.L, and CS.D are all non-zero.
 	 */
-	u32 cs_attrib_saved = vcpu0.vmcb->save.cs.attrib;
+	u32 cs_attrib_saved = vmcb->save.cs.attrib;
 	u32 cs_attrib;
 
 	cr0 |= X86_CR0_PE;
-	vcpu0.vmcb->save.cr0 = cr0;
+	vmcb->save.cr0 = cr0;
 	cs_attrib = cs_attrib_saved | SVM_SELECTOR_L_MASK |
 		SVM_SELECTOR_DB_MASK;
-	vcpu0.vmcb->save.cs.attrib = cs_attrib;
+	vmcb->save.cs.attrib = cs_attrib;
 	report(svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
 	       "CR0.PG=1 (%lx), CR4.PAE=1 (%lx), CS.L=1 and CS.D=1 (%x)",
 	       efer, cr0, cr4, cs_attrib);
 
-	vcpu0.vmcb->save.cr0 = cr0_saved;
-	vcpu0.vmcb->save.cr4 = cr4_saved;
-	vcpu0.vmcb->save.efer = efer_saved;
-	vcpu0.vmcb->save.cs.attrib = cs_attrib_saved;
+	vmcb->save.cr0 = cr0_saved;
+	vmcb->save.cr4 = cr4_saved;
+	vmcb->save.efer = efer_saved;
+	vmcb->save.cs.attrib = cs_attrib_saved;
 }
 
 static void test_cr0(struct svm_test_context *ctx)
@@ -2137,37 +2206,39 @@ static void test_cr0(struct svm_test_context *ctx)
 	/*
 	 * Un-setting CR0.CD and setting CR0.NW is illegal combination
 	 */
-	u64 cr0_saved = vcpu0.vmcb->save.cr0;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	u64 cr0_saved = vmcb->save.cr0;
 	u64 cr0 = cr0_saved;
 
 	cr0 |= X86_CR0_CD;
 	cr0 &= ~X86_CR0_NW;
-	vcpu0.vmcb->save.cr0 = cr0;
+	vmcb->save.cr0 = cr0;
 	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=0: %lx",
 		cr0);
 	cr0 |= X86_CR0_NW;
-	vcpu0.vmcb->save.cr0 = cr0;
+	vmcb->save.cr0 = cr0;
 	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=1: %lx",
 		cr0);
 	cr0 &= ~X86_CR0_NW;
 	cr0 &= ~X86_CR0_CD;
-	vcpu0.vmcb->save.cr0 = cr0;
+	vmcb->save.cr0 = cr0;
 	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR0 CD=0,NW=0: %lx",
 		cr0);
 	cr0 |= X86_CR0_NW;
-	vcpu0.vmcb->save.cr0 = cr0;
+	vmcb->save.cr0 = cr0;
 	report (svm_vmrun(ctx) == SVM_EXIT_ERR, "Test CR0 CD=0,NW=1: %lx",
 		cr0);
-	vcpu0.vmcb->save.cr0 = cr0_saved;
+	vmcb->save.cr0 = cr0_saved;
 
 	/*
 	 * CR0[63:32] are not zero
 	 */
 	cr0 = cr0_saved;
 
-	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "CR0", vcpu0.vmcb->save.cr0, cr0_saved,
+	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "CR0", vmcb->save.cr0, cr0_saved,
 				   SVM_CR0_RESERVED_MASK);
-	vcpu0.vmcb->save.cr0 = cr0_saved;
+	vmcb->save.cr0 = cr0_saved;
 }
 
 static void test_cr3(struct svm_test_context *ctx)
@@ -2176,37 +2247,39 @@ static void test_cr3(struct svm_test_context *ctx)
 	 * CR3 MBZ bits based on different modes:
 	 *   [63:52] - long mode
 	 */
-	u64 cr3_saved = vcpu0.vmcb->save.cr3;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	u64 cr3_saved = vmcb->save.cr3;
 
 	SVM_TEST_CR_RESERVED_BITS(ctx, 0, 63, 1, 3, cr3_saved,
 				  SVM_CR3_LONG_MBZ_MASK, SVM_EXIT_ERR, "");
 
-	vcpu0.vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_MBZ_MASK;
+	vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_MBZ_MASK;
 	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx",
-	       vcpu0.vmcb->save.cr3);
+	       vmcb->save.cr3);
 
 	/*
 	 * CR3 non-MBZ reserved bits based on different modes:
 	 *   [11:5] [2:0] - long mode (PCIDE=0)
 	 *          [2:0] - PAE legacy mode
 	 */
-	u64 cr4_saved = vcpu0.vmcb->save.cr4;
+	u64 cr4_saved = vmcb->save.cr4;
 	u64 *pdpe = npt_get_pml4e();
 
 	/*
 	 * Long mode
 	 */
 	if (this_cpu_has(X86_FEATURE_PCID)) {
-		vcpu0.vmcb->save.cr4 = cr4_saved | X86_CR4_PCIDE;
+		vmcb->save.cr4 = cr4_saved | X86_CR4_PCIDE;
 		SVM_TEST_CR_RESERVED_BITS(ctx, 0, 11, 1, 3, cr3_saved,
 					  SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_VMMCALL, "(PCIDE=1) ");
 
-		vcpu0.vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_RESERVED_MASK;
+		vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_RESERVED_MASK;
 		report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx",
-		       vcpu0.vmcb->save.cr3);
+		       vmcb->save.cr3);
 	}
 
-	vcpu0.vmcb->save.cr4 = cr4_saved & ~X86_CR4_PCIDE;
+	vmcb->save.cr4 = cr4_saved & ~X86_CR4_PCIDE;
 
 	if (!npt_supported())
 		goto skip_npt_only;
@@ -2218,44 +2291,46 @@ static void test_cr3(struct svm_test_context *ctx)
 				  SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_NPF, "(PCIDE=0) ");
 
 	pdpe[0] |= 1ULL;
-	vcpu0.vmcb->save.cr3 = cr3_saved;
+	vmcb->save.cr3 = cr3_saved;
 
 	/*
 	 * PAE legacy
 	 */
 	pdpe[0] &= ~1ULL;
-	vcpu0.vmcb->save.cr4 = cr4_saved | X86_CR4_PAE;
+	vmcb->save.cr4 = cr4_saved | X86_CR4_PAE;
 	SVM_TEST_CR_RESERVED_BITS(ctx, 0, 2, 1, 3, cr3_saved,
 				  SVM_CR3_PAE_LEGACY_RESERVED_MASK, SVM_EXIT_NPF, "(PAE) ");
 
 	pdpe[0] |= 1ULL;
 
 skip_npt_only:
-	vcpu0.vmcb->save.cr3 = cr3_saved;
-	vcpu0.vmcb->save.cr4 = cr4_saved;
+	vmcb->save.cr3 = cr3_saved;
+	vmcb->save.cr4 = cr4_saved;
 }
 
 /* Test CR4 MBZ bits based on legacy or long modes */
 static void test_cr4(struct svm_test_context *ctx)
 {
-	u64 cr4_saved = vcpu0.vmcb->save.cr4;
-	u64 efer_saved = vcpu0.vmcb->save.efer;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	u64 cr4_saved = vmcb->save.cr4;
+	u64 efer_saved = vmcb->save.efer;
 	u64 efer = efer_saved;
 
 	efer &= ~EFER_LME;
-	vcpu0.vmcb->save.efer = efer;
+	vmcb->save.efer = efer;
 	SVM_TEST_CR_RESERVED_BITS(ctx, 12, 31, 1, 4, cr4_saved,
 				  SVM_CR4_LEGACY_RESERVED_MASK, SVM_EXIT_ERR, "");
 
 	efer |= EFER_LME;
-	vcpu0.vmcb->save.efer = efer;
+	vmcb->save.efer = efer;
 	SVM_TEST_CR_RESERVED_BITS(ctx, 12, 31, 1, 4, cr4_saved,
 				  SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, "");
 	SVM_TEST_CR_RESERVED_BITS(ctx, 32, 63, 4, 4, cr4_saved,
 				  SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, "");
 
-	vcpu0.vmcb->save.cr4 = cr4_saved;
-	vcpu0.vmcb->save.efer = efer_saved;
+	vmcb->save.cr4 = cr4_saved;
+	vmcb->save.efer = efer_saved;
 }
 
 static void test_dr(struct svm_test_context *ctx)
@@ -2263,27 +2338,29 @@ static void test_dr(struct svm_test_context *ctx)
 	/*
 	 * DR6[63:32] and DR7[63:32] are MBZ
 	 */
-	u64 dr_saved = vcpu0.vmcb->save.dr6;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
-	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "DR6", vcpu0.vmcb->save.dr6, dr_saved,
+	u64 dr_saved = vmcb->save.dr6;
+
+	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "DR6", vmcb->save.dr6, dr_saved,
 				   SVM_DR6_RESERVED_MASK);
-	vcpu0.vmcb->save.dr6 = dr_saved;
+	vmcb->save.dr6 = dr_saved;
 
-	dr_saved = vcpu0.vmcb->save.dr7;
-	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "DR7", vcpu0.vmcb->save.dr7, dr_saved,
+	dr_saved = vmcb->save.dr7;
+	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "DR7", vmcb->save.dr7, dr_saved,
 				   SVM_DR7_RESERVED_MASK);
 
-	vcpu0.vmcb->save.dr7 = dr_saved;
+	vmcb->save.dr7 = dr_saved;
 }
 
 /* TODO: verify if high 32-bits are sign- or zero-extended on bare metal */
-#define	TEST_BITMAP_ADDR(ctx, save_intercept, type, addr, exit_code,		\
+#define	TEST_BITMAP_ADDR(ctx, save_intercept, type, addr, exit_code,	\
 			 msg) {						\
-		vcpu0.vmcb->control.intercept = saved_intercept | 1ULL << type; \
+		ctx->vcpu->vmcb->control.intercept = saved_intercept | 1ULL << type; \
 		if (type == INTERCEPT_MSR_PROT)				\
-			vcpu0.vmcb->control.msrpm_base_pa = addr;		\
+			ctx->vcpu->vmcb->control.msrpm_base_pa = addr;	\
 		else							\
-			vcpu0.vmcb->control.iopm_base_pa = addr;		\
+			ctx->vcpu->vmcb->control.iopm_base_pa = addr;	\
 		report(svm_vmrun(ctx) == exit_code,			\
 		       "Test %s address: %lx", msg, addr);		\
 	}
@@ -2306,7 +2383,9 @@ static void test_dr(struct svm_test_context *ctx)
  */
 static void test_msrpm_iopm_bitmap_addrs(struct svm_test_context *ctx)
 {
-	u64 saved_intercept = vcpu0.vmcb->control.intercept;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	u64 saved_intercept = vmcb->control.intercept;
 	u64 addr_beyond_limit = 1ull << cpuid_maxphyaddr();
 	u64 addr = virt_to_phys(svm_get_msr_bitmap()) & (~((1ull << 12) - 1));
 	u8 *io_bitmap = svm_get_io_bitmap();
@@ -2348,7 +2427,7 @@ static void test_msrpm_iopm_bitmap_addrs(struct svm_test_context *ctx)
 	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_IOIO_PROT, addr,
 			 SVM_EXIT_VMMCALL, "IOPM");
 
-	vcpu0.vmcb->control.intercept = saved_intercept;
+	vmcb->control.intercept = saved_intercept;
 }
 
 /*
@@ -2378,22 +2457,24 @@ static void test_canonicalization(struct svm_test_context *ctx)
 	u64 saved_addr;
 	u64 return_value;
 	u64 addr_limit;
-	u64 vmcb_phys = virt_to_phys(vcpu0.vmcb);
+
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+	u64 vmcb_phys = virt_to_phys(vmcb);
 
 	addr_limit = (this_cpu_has(X86_FEATURE_LA57)) ? 57 : 48;
 	u64 noncanonical_mask = NONCANONICAL & ~((1ul << addr_limit) - 1);
 
-	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.fs.base, "FS");
-	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.gs.base, "GS");
-	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.ldtr.base, "LDTR");
-	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.tr.base, "TR");
-	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.kernel_gs_base, "KERNEL GS");
-	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.es.base, "ES");
-	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.cs.base, "CS");
-	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.ss.base, "SS");
-	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.ds.base, "DS");
-	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.gdtr.base, "GDTR");
-	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.idtr.base, "IDTR");
+	TEST_CANONICAL_VMLOAD(ctx, vmcb->save.fs.base, "FS");
+	TEST_CANONICAL_VMLOAD(ctx, vmcb->save.gs.base, "GS");
+	TEST_CANONICAL_VMLOAD(ctx, vmcb->save.ldtr.base, "LDTR");
+	TEST_CANONICAL_VMLOAD(ctx, vmcb->save.tr.base, "TR");
+	TEST_CANONICAL_VMLOAD(ctx, vmcb->save.kernel_gs_base, "KERNEL GS");
+	TEST_CANONICAL_VMRUN(ctx, vmcb->save.es.base, "ES");
+	TEST_CANONICAL_VMRUN(ctx, vmcb->save.cs.base, "CS");
+	TEST_CANONICAL_VMRUN(ctx, vmcb->save.ss.base, "SS");
+	TEST_CANONICAL_VMRUN(ctx, vmcb->save.ds.base, "DS");
+	TEST_CANONICAL_VMRUN(ctx, vmcb->save.gdtr.base, "GDTR");
+	TEST_CANONICAL_VMRUN(ctx, vmcb->save.idtr.base, "IDTR");
 }
 
 /*
@@ -2442,12 +2523,14 @@ asm("guest_rflags_test_guest:\n\t"
 
 static void svm_test_singlestep(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	handle_exception(DB_VECTOR, guest_rflags_test_db_handler);
 
 	/*
 	 * Trap expected after completion of first guest instruction
 	 */
-	vcpu0.vmcb->save.rflags |= X86_EFLAGS_TF;
+	vmcb->save.rflags |= X86_EFLAGS_TF;
 	report (__svm_vmrun(ctx, (u64)guest_rflags_test_guest) == SVM_EXIT_VMMCALL &&
 		guest_rflags_test_trap_rip == (u64)&insn2,
 		"Test EFLAGS.TF on VMRUN: trap expected  after completion of first guest instruction");
@@ -2455,18 +2538,18 @@ static void svm_test_singlestep(struct svm_test_context *ctx)
 	 * No trap expected
 	 */
 	guest_rflags_test_trap_rip = 0;
-	vcpu0.vmcb->save.rip += 3;
-	vcpu0.vmcb->save.rflags |= X86_EFLAGS_TF;
-	report(__svm_vmrun(ctx, vcpu0.vmcb->save.rip) == SVM_EXIT_VMMCALL &&
+	vmcb->save.rip += 3;
+	vmcb->save.rflags |= X86_EFLAGS_TF;
+	report(__svm_vmrun(ctx, vmcb->save.rip) == SVM_EXIT_VMMCALL &&
 		guest_rflags_test_trap_rip == 0,
 		"Test EFLAGS.TF on VMRUN: trap not expected");
 
 	/*
 	 * Let guest finish execution
 	 */
-	vcpu0.vmcb->save.rip += 3;
-	report(__svm_vmrun(ctx, vcpu0.vmcb->save.rip) == SVM_EXIT_VMMCALL &&
-		vcpu0.vmcb->save.rip == (u64)&guest_end,
+	vmcb->save.rip += 3;
+	report(__svm_vmrun(ctx, vmcb->save.rip) == SVM_EXIT_VMMCALL &&
+		vmcb->save.rip == (u64)&guest_end,
 		"Test EFLAGS.TF on VMRUN: guest execution completion");
 }
 
@@ -2538,7 +2621,8 @@ static void svm_vmrun_errata_test(struct svm_test_context *ctx)
 
 static void vmload_vmsave_guest_main(struct svm_test_context *ctx)
 {
-	u64 vmcb_phys = virt_to_phys(vcpu0.vmcb);
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+	u64 vmcb_phys = virt_to_phys(vmcb);
 
 	asm volatile ("vmload %0" : : "a"(vmcb_phys));
 	asm volatile ("vmsave %0" : : "a"(vmcb_phys));
@@ -2546,7 +2630,8 @@ static void vmload_vmsave_guest_main(struct svm_test_context *ctx)
 
 static void svm_vmload_vmsave(struct svm_test_context *ctx)
 {
-	u32 intercept_saved = vcpu0.vmcb->control.intercept;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+	u32 intercept_saved = vmcb->control.intercept;
 
 	test_set_guest(vmload_vmsave_guest_main);
 
@@ -2554,49 +2639,49 @@ static void svm_vmload_vmsave(struct svm_test_context *ctx)
 	 * Disabling intercept for VMLOAD and VMSAVE doesn't cause
 	 * respective #VMEXIT to host
 	 */
-	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
-	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
+	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
+	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
 	svm_vmrun(ctx);
-	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
+	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
 	/*
 	 * Enabling intercept for VMLOAD and VMSAVE causes respective
 	 * #VMEXIT to host
 	 */
-	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
+	vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
 	svm_vmrun(ctx);
-	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
+	report(vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
-	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
-	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
+	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
+	vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
 	svm_vmrun(ctx);
-	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
+	report(vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
-	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
+	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
 	svm_vmrun(ctx);
-	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
+	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
-	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
+	vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
 	svm_vmrun(ctx);
-	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
+	report(vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
-	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
+	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
 	svm_vmrun(ctx);
-	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
+	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
-	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
+	vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
 	svm_vmrun(ctx);
-	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
+	report(vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
-	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
+	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
 	svm_vmrun(ctx);
-	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
+	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
-	vcpu0.vmcb->control.intercept = intercept_saved;
+	vmcb->control.intercept = intercept_saved;
 }
 
 static void prepare_vgif_enabled(struct svm_test_context *ctx)
@@ -2610,45 +2695,47 @@ static void test_vgif(struct svm_test_context *ctx)
 
 static bool vgif_finished(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	switch (get_test_stage(ctx))
 		{
 		case 0:
-			if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 				report_fail("VMEXIT not due to vmmcall.");
 				return true;
 			}
-			vcpu0.vmcb->control.int_ctl |= V_GIF_ENABLED_MASK;
-			vcpu0.vmcb->save.rip += 3;
+			vmcb->control.int_ctl |= V_GIF_ENABLED_MASK;
+			vmcb->save.rip += 3;
 			inc_test_stage(ctx);
 			break;
 		case 1:
-			if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 				report_fail("VMEXIT not due to vmmcall.");
 				return true;
 			}
-			if (!(vcpu0.vmcb->control.int_ctl & V_GIF_MASK)) {
+			if (!(vmcb->control.int_ctl & V_GIF_MASK)) {
 				report_fail("Failed to set VGIF when executing STGI.");
-				vcpu0.vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
+				vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
 				return true;
 			}
 			report_pass("STGI set VGIF bit.");
-			vcpu0.vmcb->save.rip += 3;
+			vmcb->save.rip += 3;
 			inc_test_stage(ctx);
 			break;
 		case 2:
-			if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 				report_fail("VMEXIT not due to vmmcall.");
 				return true;
 			}
-			if (vcpu0.vmcb->control.int_ctl & V_GIF_MASK) {
+			if (vmcb->control.int_ctl & V_GIF_MASK) {
 				report_fail("Failed to clear VGIF when executing CLGI.");
-				vcpu0.vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
+				vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
 				return true;
 			}
 			report_pass("CLGI cleared VGIF bit.");
-			vcpu0.vmcb->save.rip += 3;
+			vmcb->save.rip += 3;
 			inc_test_stage(ctx);
-			vcpu0.vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
+			vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
 			break;
 		default:
 			return true;
@@ -2688,31 +2775,35 @@ static void pause_filter_run_test(struct svm_test_context *ctx,
 				  int pause_iterations, int filter_value,
 				  int wait_iterations, int threshold)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	test_set_guest(pause_filter_test_guest_main);
 
 	pause_test_counter = pause_iterations;
 	wait_counter = wait_iterations;
 
-	vcpu0.vmcb->control.pause_filter_count = filter_value;
-	vcpu0.vmcb->control.pause_filter_thresh = threshold;
+	vmcb->control.pause_filter_count = filter_value;
+	vmcb->control.pause_filter_thresh = threshold;
 	svm_vmrun(ctx);
 
 	if (filter_value <= pause_iterations || wait_iterations < threshold)
-		report(vcpu0.vmcb->control.exit_code == SVM_EXIT_PAUSE,
+		report(vmcb->control.exit_code == SVM_EXIT_PAUSE,
 		       "expected PAUSE vmexit");
 	else
-		report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL,
+		report(vmcb->control.exit_code == SVM_EXIT_VMMCALL,
 		       "no expected PAUSE vmexit");
 }
 
 static void pause_filter_test(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	if (!pause_filter_supported()) {
 		report_skip("PAUSE filter not supported in the guest");
 		return;
 	}
 
-	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_PAUSE);
+	vmcb->control.intercept |= (1 << INTERCEPT_PAUSE);
 
 	// filter count more that pause count - no VMexit
 	pause_filter_run_test(ctx, 10, 9, 0, 0);
@@ -2738,10 +2829,12 @@ static void pause_filter_test(struct svm_test_context *ctx)
 /* If CR0.TS and CR0.EM are cleared in L2, no #NM is generated. */
 static void svm_no_nm_test(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	write_cr0(read_cr0() & ~X86_CR0_TS);
 	test_set_guest((test_guest_func)fnop);
 
-	vcpu0.vmcb->save.cr0 = vcpu0.vmcb->save.cr0 & ~(X86_CR0_TS | X86_CR0_EM);
+	vmcb->save.cr0 = vmcb->save.cr0 & ~(X86_CR0_TS | X86_CR0_EM);
 	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL,
 	       "fnop with CR0.TS and CR0.EM unset no #NM excpetion");
 }
@@ -2872,20 +2965,21 @@ static void svm_lbrv_test0(struct svm_test_context *ctx)
 
 static void svm_lbrv_test1(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
 	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(1)");
 
-	vcpu0.vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
-	vcpu0.vmcb->control.virt_ext = 0;
+	vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
+	vmcb->control.virt_ext = 0;
 
 	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
 	DO_BRANCH(host_branch1);
-	SVM_VMRUN(&vcpu0);
+	SVM_VMRUN(ctx->vcpu);
 	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
 
-	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
-		       vcpu0.vmcb->control.exit_code);
+		       vmcb->control.exit_code);
 		return;
 	}
 
@@ -2895,21 +2989,23 @@ static void svm_lbrv_test1(struct svm_test_context *ctx)
 
 static void svm_lbrv_test2(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(2)");
 
-	vcpu0.vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
-	vcpu0.vmcb->control.virt_ext = 0;
+	vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
+	vmcb->control.virt_ext = 0;
 
 	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
 	DO_BRANCH(host_branch2);
 	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
-	SVM_VMRUN(&vcpu0);
+	SVM_VMRUN(ctx->vcpu);
 	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
 	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
 
-	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
-		       vcpu0.vmcb->control.exit_code);
+		       vmcb->control.exit_code);
 		return;
 	}
 
@@ -2919,32 +3015,34 @@ static void svm_lbrv_test2(struct svm_test_context *ctx)
 
 static void svm_lbrv_nested_test1(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	if (!lbrv_supported()) {
 		report_skip("LBRV not supported in the guest");
 		return;
 	}
 
 	report(true, "Test that with LBRV enabled, guest LBR state doesn't leak (1)");
-	vcpu0.vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
-	vcpu0.vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
-	vcpu0.vmcb->save.dbgctl = DEBUGCTLMSR_LBR;
+	vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
+	vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
+	vmcb->save.dbgctl = DEBUGCTLMSR_LBR;
 
 	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
 	DO_BRANCH(host_branch3);
-	SVM_VMRUN(&vcpu0);
+	SVM_VMRUN(ctx->vcpu);
 	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
 	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
 
-	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
-		       vcpu0.vmcb->control.exit_code);
+		       vmcb->control.exit_code);
 		return;
 	}
 
-	if (vcpu0.vmcb->save.dbgctl != 0) {
+	if (vmcb->save.dbgctl != 0) {
 		report(false,
 		       "unexpected virtual guest MSR_IA32_DEBUGCTLMSR value 0x%lx",
-		       vcpu0.vmcb->save.dbgctl);
+		       vmcb->save.dbgctl);
 		return;
 	}
 
@@ -2954,28 +3052,30 @@ static void svm_lbrv_nested_test1(struct svm_test_context *ctx)
 
 static void svm_lbrv_nested_test2(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	if (!lbrv_supported()) {
 		report_skip("LBRV not supported in the guest");
 		return;
 	}
 
 	report(true, "Test that with LBRV enabled, guest LBR state doesn't leak (2)");
-	vcpu0.vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
-	vcpu0.vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
+	vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
+	vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
 
-	vcpu0.vmcb->save.dbgctl = 0;
-	vcpu0.vmcb->save.br_from = (u64)&host_branch2_from;
-	vcpu0.vmcb->save.br_to = (u64)&host_branch2_to;
+	vmcb->save.dbgctl = 0;
+	vmcb->save.br_from = (u64)&host_branch2_from;
+	vmcb->save.br_to = (u64)&host_branch2_to;
 
 	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
 	DO_BRANCH(host_branch4);
-	SVM_VMRUN(&vcpu0);
+	SVM_VMRUN(ctx->vcpu);
 	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
 	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
 
-	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
 		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
-		       vcpu0.vmcb->control.exit_code);
+		       vmcb->control.exit_code);
 		return;
 	}
 
@@ -3005,6 +3105,8 @@ static void dummy_nmi_handler(struct ex_regs *regs)
 static void svm_intr_intercept_mix_run_guest(struct svm_test_context *ctx,
 					     volatile int *counter, int expected_vmexit)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	if (counter)
 		*counter = 0;
 
@@ -3021,8 +3123,8 @@ static void svm_intr_intercept_mix_run_guest(struct svm_test_context *ctx,
 	if (counter)
 		report(*counter == 1, "Interrupt is expected");
 
-	report(vcpu0.vmcb->control.exit_code == expected_vmexit, "Test expected VM exit");
-	report(vcpu0.vmcb->save.rflags & X86_EFLAGS_IF, "Guest should have EFLAGS.IF set now");
+	report(vmcb->control.exit_code == expected_vmexit, "Test expected VM exit");
+	report(vmcb->save.rflags & X86_EFLAGS_IF, "Guest should have EFLAGS.IF set now");
 	cli();
 }
 
@@ -3038,12 +3140,14 @@ static void svm_intr_intercept_mix_if_guest(struct svm_test_context *ctx)
 
 static void svm_intr_intercept_mix_if(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	// make a physical interrupt to be pending
 	handle_irq(0x55, dummy_isr);
 
-	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_INTR);
-	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
-	vcpu0.vmcb->save.rflags &= ~X86_EFLAGS_IF;
+	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
+	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+	vmcb->save.rflags &= ~X86_EFLAGS_IF;
 
 	test_set_guest(svm_intr_intercept_mix_if_guest);
 	cli();
@@ -3072,11 +3176,13 @@ static void svm_intr_intercept_mix_gif_guest(struct svm_test_context *ctx)
 
 static void svm_intr_intercept_mix_gif(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	handle_irq(0x55, dummy_isr);
 
-	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_INTR);
-	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
-	vcpu0.vmcb->save.rflags &= ~X86_EFLAGS_IF;
+	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
+	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+	vmcb->save.rflags &= ~X86_EFLAGS_IF;
 
 	test_set_guest(svm_intr_intercept_mix_gif_guest);
 	cli();
@@ -3102,11 +3208,13 @@ static void svm_intr_intercept_mix_gif_guest2(struct svm_test_context *ctx)
 
 static void svm_intr_intercept_mix_gif2(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	handle_irq(0x55, dummy_isr);
 
-	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_INTR);
-	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
-	vcpu0.vmcb->save.rflags |= X86_EFLAGS_IF;
+	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
+	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+	vmcb->save.rflags |= X86_EFLAGS_IF;
 
 	test_set_guest(svm_intr_intercept_mix_gif_guest2);
 	svm_intr_intercept_mix_run_guest(ctx, &dummy_isr_recevied, SVM_EXIT_INTR);
@@ -3131,11 +3239,13 @@ static void svm_intr_intercept_mix_nmi_guest(struct svm_test_context *ctx)
 
 static void svm_intr_intercept_mix_nmi(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	handle_exception(2, dummy_nmi_handler);
 
-	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_NMI);
-	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
-	vcpu0.vmcb->save.rflags |= X86_EFLAGS_IF;
+	vmcb->control.intercept |= (1 << INTERCEPT_NMI);
+	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+	vmcb->save.rflags |= X86_EFLAGS_IF;
 
 	test_set_guest(svm_intr_intercept_mix_nmi_guest);
 	svm_intr_intercept_mix_run_guest(ctx, &nmi_recevied, SVM_EXIT_NMI);
@@ -3157,8 +3267,10 @@ static void svm_intr_intercept_mix_smi_guest(struct svm_test_context *ctx)
 
 static void svm_intr_intercept_mix_smi(struct svm_test_context *ctx)
 {
-	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_SMI);
-	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	vmcb->control.intercept |= (1 << INTERCEPT_SMI);
+	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 	test_set_guest(svm_intr_intercept_mix_smi_guest);
 	svm_intr_intercept_mix_run_guest(ctx, NULL, SVM_EXIT_SMI);
 }
@@ -3215,14 +3327,16 @@ static void handle_exception_in_l2(struct svm_test_context *ctx, u8 vector)
 
 static void handle_exception_in_l1(struct svm_test_context *ctx, u32 vector)
 {
-	u32 old_ie = vcpu0.vmcb->control.intercept_exceptions;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	u32 old_ie = vmcb->control.intercept_exceptions;
 
-	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << vector);
+	vmcb->control.intercept_exceptions |= (1ULL << vector);
 
 	report(svm_vmrun(ctx) == (SVM_EXIT_EXCP_BASE + vector),
 		"%s handled by L1",  exception_mnemonic(vector));
 
-	vcpu0.vmcb->control.intercept_exceptions = old_ie;
+	vmcb->control.intercept_exceptions = old_ie;
 }
 
 static void svm_exception_test(struct svm_test_context *ctx)
@@ -3235,10 +3349,10 @@ static void svm_exception_test(struct svm_test_context *ctx)
 		test_set_guest((test_guest_func)t->guest_code);
 
 		handle_exception_in_l2(ctx, t->vector);
-		svm_vcpu_ident(&vcpu0);
+		svm_vcpu_ident(ctx->vcpu);
 
 		handle_exception_in_l1(ctx, t->vector);
-		svm_vcpu_ident(&vcpu0);
+		svm_vcpu_ident(ctx->vcpu);
 	}
 }
 
@@ -3250,11 +3364,13 @@ static void shutdown_intercept_test_guest(struct svm_test_context *ctx)
 }
 static void svm_shutdown_intercept_test(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	test_set_guest(shutdown_intercept_test_guest);
-	vcpu0.vmcb->save.idtr.base = (u64)alloc_vpage();
-	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN);
+	vmcb->save.idtr.base = (u64)alloc_vpage();
+	vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN);
 	svm_vmrun(ctx);
-	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed");
+	report(vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed");
 }
 
 /*
@@ -3264,7 +3380,9 @@ static void svm_shutdown_intercept_test(struct svm_test_context *ctx)
 
 static void exception_merging_prepare(struct svm_test_context *ctx)
 {
-	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
 
 	/* break UD vector idt entry to get #GP*/
 	boot_idt[UD_VECTOR].type = 1;
@@ -3277,15 +3395,17 @@ static void exception_merging_test(struct svm_test_context *ctx)
 
 static bool exception_merging_finished(struct svm_test_context *ctx)
 {
-	u32 vec = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
-	u32 type = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
+	u32 vec = vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
+	u32 type = vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
 
-	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + GP_VECTOR) {
+	if (vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + GP_VECTOR) {
 		report(false, "unexpected VM exit");
 		goto out;
 	}
 
-	if (!(vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)) {
+	if (!(vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)) {
 		report(false, "EXITINTINFO not valid");
 		goto out;
 	}
@@ -3320,8 +3440,10 @@ static bool exception_merging_check(struct svm_test_context *ctx)
 
 static void interrupt_merging_prepare(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
+
 	/* intercept #GP */
-	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
+	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
 
 	/* set local APIC to inject external interrupts */
 	apic_setup_timer(TIMER_VECTOR, APIC_LVT_TIMER_PERIODIC);
@@ -3342,16 +3464,17 @@ static void interrupt_merging_test(struct svm_test_context *ctx)
 
 static bool interrupt_merging_finished(struct svm_test_context *ctx)
 {
+	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
-	u32 vec = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
-	u32 type = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
-	u32 error_code = vcpu0.vmcb->control.exit_info_1;
+	u32 vec = vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
+	u32 type = vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
+	u32 error_code = vmcb->control.exit_info_1;
 
 	/* exit on external interrupts is disabled, thus timer interrupt
 	 * should be attempted to be delivered, but due to incorrect IDT entry
 	 * an #GP should be raised
 	 */
-	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + GP_VECTOR) {
+	if (vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + GP_VECTOR) {
 		report(false, "unexpected VM exit");
 		goto cleanup;
 	}
@@ -3363,7 +3486,7 @@ static bool interrupt_merging_finished(struct svm_test_context *ctx)
 	}
 
 	/* Original interrupt should be preserved in EXITINTINFO */
-	if (!(vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)) {
+	if (!(vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)) {
 		report(false, "EXITINTINFO not valid");
 		goto cleanup;
 	}
-- 
2.34.3



* [kvm-unit-tests PATCH v3 26/27] svm: move test_guest_func to test context
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (24 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 25/27] svm: move nested vcpu to test context Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2022-12-02 10:28   ` Emanuele Giuseppe Esposito
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 27/27] x86: ipi_stress: add optional SVM support Maxim Levitsky
  2023-06-07 23:25 ` [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Sean Christopherson
  27 siblings, 1 reply; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

Make the test context hold a pointer to the guest function.
For V1 tests it is initialized from the test template;
for V2 tests, the test function sets it.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
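Note (illustrative only, not part of the change): after this patch a V2
test selects its guest code by assigning ctx->guest_func directly. The
sketch below uses made-up names my_guest_main/my_v2_test to show the
intended usage; svm_test_context, svm_vmrun() and the test_thunk()
behavior are the ones touched by this series:

	/* Hypothetical example -- my_guest_main/my_v2_test do not exist
	 * in the tree, they only demonstrate the new interface.
	 */
	static void my_guest_main(struct svm_test_context *ctx)
	{
		/* guest work goes here; test_thunk() issues the
		 * terminating vmmcall once this function returns
		 */
	}

	static void my_v2_test(struct svm_test_context *ctx)
	{
		ctx->guest_func = my_guest_main; /* was test_set_guest() */
		report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL,
		       "guest finished with VMMCALL");
	}
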
 x86/svm.c       | 12 ++++--------
 x86/svm.h       |  4 ++--
 x86/svm_npt.c   |  2 +-
 x86/svm_tests.c | 26 +++++++++++++-------------
 4 files changed, 20 insertions(+), 24 deletions(-)

diff --git a/x86/svm.c b/x86/svm.c
index a3279545..244555d4 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -60,16 +60,11 @@ void inc_test_stage(struct svm_test_context *ctx)
 	barrier();
 }
 
-static test_guest_func guest_main;
-
-void test_set_guest(test_guest_func func)
-{
-	guest_main = func;
-}
 
 static void test_thunk(struct svm_test_context *ctx)
 {
-	guest_main(ctx);
+	if (ctx->guest_func)
+		ctx->guest_func(ctx);
 	vmmcall();
 }
 
@@ -93,6 +88,7 @@ static noinline void test_run(struct svm_test_context *ctx)
 	svm_vcpu_ident(ctx->vcpu);
 
 	if (ctx->test->v2) {
+		ctx->guest_func = NULL;
 		ctx->test->v2(ctx);
 		return;
 	}
@@ -100,7 +96,7 @@ static noinline void test_run(struct svm_test_context *ctx)
 	cli();
 
 	ctx->test->prepare(ctx);
-	guest_main = ctx->test->guest_func;
+	ctx->guest_func = ctx->test->guest_func;
 	ctx->vcpu->vmcb->save.rip = (ulong)test_thunk;
 	ctx->vcpu->regs.rsp = (ulong)(ctx->vcpu->stack);
 	ctx->vcpu->regs.rdi = (ulong)ctx;
diff --git a/x86/svm.h b/x86/svm.h
index ec181715..149b76c4 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -15,6 +15,8 @@ struct svm_test_context {
 
 	/* TODO: test cases currently are single threaded */
 	struct svm_vcpu *vcpu;
+
+	void (*guest_func)(struct svm_test_context *ctx);
 };
 
 struct svm_test {
@@ -44,7 +46,5 @@ void set_test_stage(struct svm_test_context *ctx, int s);
 void inc_test_stage(struct svm_test_context *ctx);
 int __svm_vmrun(struct svm_test_context *ctx, u64 rip);
 int svm_vmrun(struct svm_test_context *ctx);
-void test_set_guest(test_guest_func func);
-
 
 #endif
diff --git a/x86/svm_npt.c b/x86/svm_npt.c
index 39fd7198..1e27f9ef 100644
--- a/x86/svm_npt.c
+++ b/x86/svm_npt.c
@@ -332,7 +332,7 @@ static void svm_npt_rsvd_bits_test(struct svm_test_context *ctx)
 	sg_efer = guest_efer = vmcb->save.efer;
 	sg_cr4 = guest_cr4 = vmcb->save.cr4;
 
-	test_set_guest(basic_guest_main);
+	ctx->guest_func = basic_guest_main;
 
 	/*
 	 * 4k PTEs don't have reserved bits if MAXPHYADDR >= 52, just skip the
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index bd92fcee..6d6dfa0e 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -793,7 +793,7 @@ static void svm_tsc_scale_run_testcase(struct svm_test_context *ctx,
 
 	guest_tsc_delay_value = (duration << TSC_SHIFT) * tsc_scale;
 
-	test_set_guest(svm_tsc_scale_guest);
+	ctx->guest_func = svm_tsc_scale_guest;
 	vmcb->control.tsc_offset = tsc_offset;
 	wrmsr(MSR_AMD64_TSC_RATIO, (u64)(tsc_scale * (1ULL << 32)));
 
@@ -2067,7 +2067,7 @@ static void svm_cr4_osxsave_test(struct svm_test_context *ctx)
 
 	report(this_cpu_has(X86_FEATURE_OSXSAVE), "CPUID.01H:ECX.XSAVE set before VMRUN");
 
-	test_set_guest(svm_cr4_osxsave_test_guest);
+	ctx->guest_func = svm_cr4_osxsave_test_guest;
 	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL,
 	       "svm_cr4_osxsave_test_guest finished with VMMCALL");
 
@@ -2494,7 +2494,7 @@ static void guest_rflags_test_db_handler(struct ex_regs *r)
 
 static void svm_guest_state_test(struct svm_test_context *ctx)
 {
-	test_set_guest(basic_guest_main);
+	ctx->guest_func = basic_guest_main;
 	test_efer(ctx);
 	test_cr0(ctx);
 	test_cr3(ctx);
@@ -2633,7 +2633,7 @@ static void svm_vmload_vmsave(struct svm_test_context *ctx)
 	struct vmcb *vmcb = ctx->vcpu->vmcb;
 	u32 intercept_saved = vmcb->control.intercept;
 
-	test_set_guest(vmload_vmsave_guest_main);
+	ctx->guest_func = vmload_vmsave_guest_main;
 
 	/*
 	 * Disabling intercept for VMLOAD and VMSAVE doesn't cause
@@ -2777,7 +2777,7 @@ static void pause_filter_run_test(struct svm_test_context *ctx,
 {
 	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
-	test_set_guest(pause_filter_test_guest_main);
+	ctx->guest_func = pause_filter_test_guest_main;
 
 	pause_test_counter = pause_iterations;
 	wait_counter = wait_iterations;
@@ -2832,7 +2832,7 @@ static void svm_no_nm_test(struct svm_test_context *ctx)
 	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
 	write_cr0(read_cr0() & ~X86_CR0_TS);
-	test_set_guest((test_guest_func)fnop);
+	ctx->guest_func = (test_guest_func)fnop;
 
 	vmcb->save.cr0 = vmcb->save.cr0 & ~(X86_CR0_TS | X86_CR0_EM);
 	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL,
@@ -3149,7 +3149,7 @@ static void svm_intr_intercept_mix_if(struct svm_test_context *ctx)
 	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 	vmcb->save.rflags &= ~X86_EFLAGS_IF;
 
-	test_set_guest(svm_intr_intercept_mix_if_guest);
+	ctx->guest_func = svm_intr_intercept_mix_if_guest;
 	cli();
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0);
 	svm_intr_intercept_mix_run_guest(ctx, &dummy_isr_recevied, SVM_EXIT_INTR);
@@ -3184,7 +3184,7 @@ static void svm_intr_intercept_mix_gif(struct svm_test_context *ctx)
 	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 	vmcb->save.rflags &= ~X86_EFLAGS_IF;
 
-	test_set_guest(svm_intr_intercept_mix_gif_guest);
+	ctx->guest_func = svm_intr_intercept_mix_gif_guest;
 	cli();
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0);
 	svm_intr_intercept_mix_run_guest(ctx, &dummy_isr_recevied, SVM_EXIT_INTR);
@@ -3216,7 +3216,7 @@ static void svm_intr_intercept_mix_gif2(struct svm_test_context *ctx)
 	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 	vmcb->save.rflags |= X86_EFLAGS_IF;
 
-	test_set_guest(svm_intr_intercept_mix_gif_guest2);
+	ctx->guest_func = svm_intr_intercept_mix_gif_guest2;
 	svm_intr_intercept_mix_run_guest(ctx, &dummy_isr_recevied, SVM_EXIT_INTR);
 }
 
@@ -3247,7 +3247,7 @@ static void svm_intr_intercept_mix_nmi(struct svm_test_context *ctx)
 	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 	vmcb->save.rflags |= X86_EFLAGS_IF;
 
-	test_set_guest(svm_intr_intercept_mix_nmi_guest);
+	ctx->guest_func = svm_intr_intercept_mix_nmi_guest;
 	svm_intr_intercept_mix_run_guest(ctx, &nmi_recevied, SVM_EXIT_NMI);
 }
 
@@ -3271,7 +3271,7 @@ static void svm_intr_intercept_mix_smi(struct svm_test_context *ctx)
 
 	vmcb->control.intercept |= (1 << INTERCEPT_SMI);
 	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
-	test_set_guest(svm_intr_intercept_mix_smi_guest);
+	ctx->guest_func = svm_intr_intercept_mix_smi_guest;
 	svm_intr_intercept_mix_run_guest(ctx, NULL, SVM_EXIT_SMI);
 }
 
@@ -3346,7 +3346,7 @@ static void svm_exception_test(struct svm_test_context *ctx)
 
 	for (i = 0; i < ARRAY_SIZE(svm_exception_tests); i++) {
 		t = &svm_exception_tests[i];
-		test_set_guest((test_guest_func)t->guest_code);
+		ctx->guest_func = (test_guest_func)t->guest_code;
 
 		handle_exception_in_l2(ctx, t->vector);
 		svm_vcpu_ident(ctx->vcpu);
@@ -3366,7 +3366,7 @@ static void svm_shutdown_intercept_test(struct svm_test_context *ctx)
 {
 	struct vmcb *vmcb = ctx->vcpu->vmcb;
 
-	test_set_guest(shutdown_intercept_test_guest);
+	ctx->guest_func = shutdown_intercept_test_guest;
 	vmcb->save.idtr.base = (u64)alloc_vpage();
 	vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN);
 	svm_vmrun(ctx);
-- 
2.34.3


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [kvm-unit-tests PATCH v3 27/27] x86: ipi_stress: add optional SVM support
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (25 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 26/27] svm: move test_guest_func " Maxim Levitsky
@ 2022-11-22 16:11 ` Maxim Levitsky
  2023-06-07 23:25 ` [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Sean Christopherson
  27 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-11-22 16:11 UTC (permalink / raw)
  To: kvm
  Cc: Andrew Jones, Alexandru Elisei, Maxim Levitsky, Paolo Bonzini,
	Claudio Imbrenda, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

Allow some vCPUs to be in SVM nested mode while waiting for
an interrupt.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 x86/ipi_stress.c | 79 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 78 insertions(+), 1 deletion(-)

diff --git a/x86/ipi_stress.c b/x86/ipi_stress.c
index dea3e605..1a4c5510 100644
--- a/x86/ipi_stress.c
+++ b/x86/ipi_stress.c
@@ -12,10 +12,12 @@
 #include "types.h"
 #include "alloc_page.h"
 #include "vmalloc.h"
+#include "svm_lib.h"
 #include "random.h"
 
 u64 num_iterations = 100000;
 float hlt_prob = 0.1;
+bool use_svm;
 volatile bool end_test;
 
 #define APIC_TIMER_PERIOD (1000*1000*1000)
@@ -25,6 +27,7 @@ struct cpu_test_state {
 	u64 last_isr_count;
 	struct random_state random;
 	int smp_id;
+	struct svm_vcpu vcpu;
 } *cpu_states;
 
 
@@ -71,6 +74,62 @@ static void wait_for_ipi(struct cpu_test_state *state)
 	assert(state->isr_count == old_count + 1);
 }
 
+#ifdef __x86_64__
+static void l2_guest_wait_for_ipi(struct cpu_test_state *state)
+{
+	wait_for_ipi(state);
+	asm volatile("vmmcall");
+}
+
+static void l2_guest_dummy(void)
+{
+	while (true)
+		asm volatile("vmmcall");
+}
+
+static void wait_for_ipi_in_l2(struct cpu_test_state *state)
+{
+	u64 old_count = state->isr_count;
+	struct svm_vcpu *vcpu = &state->vcpu;
+	bool poll_in_the_guest;
+
+	/*
+	 * If poll_in_the_guest is true, the guest runs with
+	 * interrupts disabled and enables them for one instruction
+	 * (sometimes together with halting) until it receives an interrupt.
+	 *
+	 * If poll_in_the_guest is false, the guest always has
+	 * interrupts enabled and will usually receive the interrupt
+	 * right away; in case it doesn't, we run the guest again
+	 * until it does.
+	 *
+	 */
+	poll_in_the_guest = random_decision(&state->random, 50);
+
+	vcpu->regs.rdi = (u64)state;
+	vcpu->regs.rsp = (ulong)vcpu->stack;
+
+	vcpu->vmcb->save.rip = poll_in_the_guest ?
+			(ulong)l2_guest_wait_for_ipi :
+			(ulong)l2_guest_dummy;
+
+	if (!poll_in_the_guest)
+		vcpu->vmcb->save.rflags |= X86_EFLAGS_IF;
+	else
+		vcpu->vmcb->save.rflags &= ~X86_EFLAGS_IF;
+
+	do {
+		asm volatile("clgi;sti");
+		SVM_VMRUN(vcpu);
+		asm volatile("cli;stgi");
+		assert(vcpu->vmcb->control.exit_code == SVM_EXIT_VMMCALL);
+
+		if (poll_in_the_guest)
+			assert(old_count < state->isr_count);
+
+	} while (old_count == state->isr_count);
+}
+#endif
 
 static void vcpu_init(void *)
 {
@@ -85,6 +144,11 @@ static void vcpu_init(void *)
 	state->random = get_prng();
 	state->isr_count = 0;
 	state->smp_id = smp_id();
+
+#ifdef __x86_64__
+	if (use_svm)
+		svm_vcpu_init(&state->vcpu);
+#endif
 }
 
 static void vcpu_code(void *)
@@ -111,7 +175,12 @@ static void vcpu_code(void *)
 			break;
 
 		// wait for the IPI interrupt chain to come back to us
-		wait_for_ipi(state);
+#if __x86_64__
+		if (use_svm && random_decision(&state->random, 20))
+			wait_for_ipi_in_l2(state);
+		else
+#endif
+			wait_for_ipi(state);
 	}
 }
 
@@ -137,6 +206,14 @@ int main(int argc, void **argv)
 	setup_vm();
 	init_prng();
 
+#ifdef __x86_64__
+	if (this_cpu_has(X86_FEATURE_SVM)) {
+		use_svm = true;
+		if (!setup_svm())
+			use_svm = false;
+	}
+#endif
+
 	cpu_states = calloc(ncpus, sizeof(cpu_states[0]));
 
 	printf("found %d cpus\n", ncpus);
-- 
2.34.3


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* Re: [kvm-unit-tests PATCH v3 11/27] lib: Add random number generator
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 11/27] lib: Add random number generator Maxim Levitsky
@ 2022-11-23  9:28   ` Claudio Imbrenda
  2022-11-23 12:54     ` Andrew Jones
  2022-12-06 14:07     ` Maxim Levitsky
  0 siblings, 2 replies; 56+ messages in thread
From: Claudio Imbrenda @ 2022-11-23  9:28 UTC (permalink / raw)
  To: Maxim Levitsky
  Cc: kvm, Andrew Jones, Alexandru Elisei, Paolo Bonzini, Thomas Huth,
	Alex Bennée, Nico Boehr, Cathy Avery, Janosch Frank

On Tue, 22 Nov 2022 18:11:36 +0200
Maxim Levitsky <mlevitsk@redhat.com> wrote:

> Add a simple pseudo random number generator which can be used
> in the tests to add randomness in a controlled manner.

ahh, yes I have wanted something like this in the library for quite some
time! thanks!

I have some comments regarding the interfaces (see below), and also a
request: could you split the x86 part into a separate patch, so we get
a "pure" lib patch followed by an x86-only patch that uses the new
interface?

> 
> For x86 add a wrapper which initializes the PRNG with RDRAND,
> unless RANDOM_SEED env variable is set, in which case it is used
> instead.
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  Makefile              |  3 ++-
>  README.md             |  1 +
>  lib/prng.c            | 41 +++++++++++++++++++++++++++++++++++++++++
>  lib/prng.h            | 23 +++++++++++++++++++++++
>  lib/x86/random.c      | 33 +++++++++++++++++++++++++++++++++
>  lib/x86/random.h      | 17 +++++++++++++++++
>  scripts/arch-run.bash |  2 +-
>  x86/Makefile.common   |  1 +
>  8 files changed, 119 insertions(+), 2 deletions(-)
>  create mode 100644 lib/prng.c
>  create mode 100644 lib/prng.h
>  create mode 100644 lib/x86/random.c
>  create mode 100644 lib/x86/random.h
> 
> diff --git a/Makefile b/Makefile
> index 6ed5deac..384b5acf 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -29,7 +29,8 @@ cflatobjs := \
>  	lib/string.o \
>  	lib/abort.o \
>  	lib/report.o \
> -	lib/stack.o
> +	lib/stack.o \
> +	lib/prng.o
>  
>  # libfdt paths
>  LIBFDT_objdir = lib/libfdt
> diff --git a/README.md b/README.md
> index 6e82dc22..5a677a03 100644
> --- a/README.md
> +++ b/README.md
> @@ -91,6 +91,7 @@ the framework.  The list of reserved environment variables is below
>      QEMU_ACCEL                   either kvm, hvf or tcg
>      QEMU_VERSION_STRING          string of the form `qemu -h | head -1`
>      KERNEL_VERSION_STRING        string of the form `uname -r`
> +    TEST_SEED                    integer to force a fixed seed for the prng
>  
>  Additionally these self-explanatory variables are reserved
>  
> diff --git a/lib/prng.c b/lib/prng.c
> new file mode 100644
> index 00000000..d9342eb3
> --- /dev/null
> +++ b/lib/prng.c
> @@ -0,0 +1,41 @@
> +
> +/*
> + * Random number generator that is usable from guest code. This is the
> + * Park-Miller LCG using standard constants.
> + */
> +
> +#include "libcflat.h"
> +#include "prng.h"
> +
> +struct random_state new_random_state(uint32_t seed)
> +{
> +	struct random_state s = {.seed = seed};
> +	return s;
> +}
> +
> +uint32_t random_u32(struct random_state *state)
> +{
> +	state->seed = (uint64_t)state->seed * 48271 % ((uint32_t)(1 << 31) - 1);

why not:

state->seed = state->seed * 48271ULL % (BIT_ULL(31) - 1);

I think it's more readable, and it also avoids (1 << 31), which shifts
into the sign bit of a plain int before the cast.

> +	return state->seed;
> +}
> +
> +
> +uint32_t random_range(struct random_state *state, uint32_t min, uint32_t max)
> +{
> +	uint32_t val = random_u32(state);
> +
> +	return val % (max - min + 1) + min;

what happens if max == UINT_MAX and min == 0? Then max - min + 1 wraps
to 0 and the modulo divides by zero.

maybe:

if (max - min == UINT_MAX)
	return val;

> +}
> +
> +/*
> + * Returns true randomly in 'percent_true' cases (e.g. if percent_true = 70.0,
> + * it will return true in 70.0% of cases)
> + */
> +bool random_decision(struct random_state *state, float percent_true)

I'm not a fan of floats in the lib...

> +{
> +	if (percent_true == 0)
> +		return 0;
> +	if (percent_true == 100)
> +		return 1;
> +	return random_range(state, 1, 10000) < (uint32_t)(percent_true * 100);

...especially when you are only using 2 decimal places anyway

can you rewrite it to take an unsigned int? 
e.g. if percent_true = 7123, it will return true in 71.23% of the cases

then you can rewrite the last line like this:

return random_range(state, 1, 10000) < percent_true;
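
Folded in, the integer version might look like the sketch below (note
the inclusive comparison, so that percent_true = 10000 comes out as
exactly 100%):

bool random_decision(struct random_state *state, unsigned int percent_true)
{
	if (percent_true == 0)
		return false;
	if (percent_true >= 10000)
		return true;
	/* X is uniform on {1..10000}, so P(X <= p) is exactly p/10000 */
	return random_range(state, 1, 10000) <= percent_true;
}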

> +}
> diff --git a/lib/prng.h b/lib/prng.h
> new file mode 100644
> index 00000000..61d3a48b
> --- /dev/null
> +++ b/lib/prng.h
> @@ -0,0 +1,23 @@
> +
> +#ifndef SRC_LIB_PRNG_H_
> +#define SRC_LIB_PRNG_H_
> +
> +struct random_state {
> +	uint32_t seed;
> +};
> +
> +struct random_state new_random_state(uint32_t seed);
> +uint32_t random_u32(struct random_state *state);
> +
> +/*
> + * return a random number from min to max (included)
> + */
> +uint32_t random_range(struct random_state *state, uint32_t min, uint32_t max);
> +
> +/*
> + * Returns true randomly in 'percent_true' cases (e.g. if percent_true = 70.0,
> + * it will return true in 70.0% of cases)
> + */
> +bool random_decision(struct random_state *state, float percent_true);
> +
> +#endif /* SRC_LIB_PRNG_H_ */


and then put the rest below in a new patch

> diff --git a/lib/x86/random.c b/lib/x86/random.c
> new file mode 100644
> index 00000000..fcdd5fe8
> --- /dev/null
> +++ b/lib/x86/random.c
> @@ -0,0 +1,33 @@
> +
> +#include "libcflat.h"
> +#include "processor.h"
> +#include "prng.h"
> +#include "smp.h"
> +#include "asm/spinlock.h"
> +#include "random.h"
> +
> +static u32 test_seed;
> +static bool initialized;
> +
> +void init_prng(void)
> +{
> +	char *test_seed_str = getenv("TEST_SEED");
> +
> +	if (test_seed_str && strlen(test_seed_str))
> +		test_seed = atol(test_seed_str);
> +	else
> +#ifdef __x86_64__
> +		test_seed =  (u32)rdrand();
> +#else
> +		test_seed = (u32)(rdtsc() << 4);
> +#endif
> +	initialized = true;
> +
> +	printf("Test seed: %u\n", (unsigned int)test_seed);
> +}
> +
> +struct random_state get_prng(void)
> +{
> +	assert(initialized);
> +	return new_random_state(test_seed + this_cpu_read_smp_id());
> +}
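
(For context, a minimal hypothetical caller, just to show how the two
entry points compose -- TEST_SEED handling and the per-CPU stream both
come for free:)

#include "libcflat.h"
#include "random.h"

/* hypothetical demo test, not part of this series */
int main(int argc, char **argv)
{
	struct random_state rng;
	int i, hits = 0;

	init_prng();		/* honors TEST_SEED, else seeds from RDRAND/TSC */
	rng = get_prng();	/* per-CPU stream: test_seed + smp id */

	for (i = 0; i < 1000; i++)
		if (random_decision(&rng, 10.0))	/* true in ~10% of cases */
			hits++;

	report(hits > 0, "prng produced some hits (%d/1000)", hits);
	return report_summary();
}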
> diff --git a/lib/x86/random.h b/lib/x86/random.h
> new file mode 100644
> index 00000000..795b450b
> --- /dev/null
> +++ b/lib/x86/random.h
> @@ -0,0 +1,17 @@
> +/*
> + * prng.h
> + *
> + *  Created on: Nov 9, 2022
> + *      Author: mlevitsk
> + */
> +
> +#ifndef SRC_LIB_X86_RANDOM_H_
> +#define SRC_LIB_X86_RANDOM_H_
> +
> +#include "libcflat.h"
> +#include "prng.h"
> +
> +void init_prng(void);
> +struct random_state get_prng(void);
> +
> +#endif /* SRC_LIB_X86_RANDOM_H_ */
> diff --git a/scripts/arch-run.bash b/scripts/arch-run.bash
> index 51e4b97b..238d19f8 100644
> --- a/scripts/arch-run.bash
> +++ b/scripts/arch-run.bash
> @@ -298,7 +298,7 @@ env_params ()
>  	KERNEL_EXTRAVERSION=${KERNEL_EXTRAVERSION%%[!0-9]*}
>  	! [[ $KERNEL_SUBLEVEL =~ ^[0-9]+$ ]] && unset $KERNEL_SUBLEVEL
>  	! [[ $KERNEL_EXTRAVERSION =~ ^[0-9]+$ ]] && unset $KERNEL_EXTRAVERSION
> -	env_add_params KERNEL_VERSION_STRING KERNEL_VERSION KERNEL_PATCHLEVEL KERNEL_SUBLEVEL KERNEL_EXTRAVERSION
> +	env_add_params KERNEL_VERSION_STRING KERNEL_VERSION KERNEL_PATCHLEVEL KERNEL_SUBLEVEL KERNEL_EXTRAVERSION TEST_SEED
>  }
>  
>  env_file ()
> diff --git a/x86/Makefile.common b/x86/Makefile.common
> index 698a48ab..fa0a50e6 100644
> --- a/x86/Makefile.common
> +++ b/x86/Makefile.common
> @@ -23,6 +23,7 @@ cflatobjs += lib/x86/stack.o
>  cflatobjs += lib/x86/fault_test.o
>  cflatobjs += lib/x86/delay.o
>  cflatobjs += lib/x86/pmu.o
> +cflatobjs += lib/x86/random.o
>  ifeq ($(CONFIG_EFI),y)
>  cflatobjs += lib/x86/amd_sev.o
>  cflatobjs += lib/efi.o


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [kvm-unit-tests PATCH v3 11/27] lib: Add random number generator
  2022-11-23  9:28   ` Claudio Imbrenda
@ 2022-11-23 12:54     ` Andrew Jones
  2022-12-06 13:57       ` Maxim Levitsky
  2022-12-06 14:07     ` Maxim Levitsky
  1 sibling, 1 reply; 56+ messages in thread
From: Andrew Jones @ 2022-11-23 12:54 UTC (permalink / raw)
  To: Claudio Imbrenda
  Cc: Maxim Levitsky, kvm, Andrew Jones, Alexandru Elisei,
	Paolo Bonzini, Thomas Huth, Alex Bennée, Nico Boehr,
	Cathy Avery, Janosch Frank

On Wed, Nov 23, 2022 at 10:28:50AM +0100, Claudio Imbrenda wrote:
> On Tue, 22 Nov 2022 18:11:36 +0200
> Maxim Levitsky <mlevitsk@redhat.com> wrote:
> 
> > Add a simple pseudo random number generator which can be used
> > in the tests to add randomness in a controlled manner.
> 
> ahh, yes I have wanted something like this in the library for quite some
> time! thanks!

Here's another approach that we unfortunately never managed to get merged.
https://lore.kernel.org/all/20211202115352.951548-5-alex.bennee@linaro.org/

Thanks,
drew

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [kvm-unit-tests PATCH v3 01/27] x86: replace irq_{enable|disable}() with sti()/cli()
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 01/27] x86: replace irq_{enable|disable}() with sti()/cli() Maxim Levitsky
@ 2022-12-01 13:46   ` Emanuele Giuseppe Esposito
  2022-12-06 13:55     ` Maxim Levitsky
  0 siblings, 1 reply; 56+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-12-01 13:46 UTC (permalink / raw)
  To: Maxim Levitsky, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank



Am 22/11/2022 um 17:11 schrieb Maxim Levitsky:
> This removes a layer of indirection which is strictly
> speaking not needed since it's x86 code anyway.
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  lib/x86/processor.h       | 19 +++++-----------
>  lib/x86/smp.c             |  2 +-
>  x86/apic.c                |  2 +-
>  x86/asyncpf.c             |  6 ++---
>  x86/eventinj.c            | 22 +++++++++---------
>  x86/hyperv_connections.c  |  2 +-
>  x86/hyperv_stimer.c       |  4 ++--
>  x86/hyperv_synic.c        |  6 ++---
>  x86/intel-iommu.c         |  2 +-
>  x86/ioapic.c              | 14 ++++++------
>  x86/pmu.c                 |  4 ++--
>  x86/svm.c                 |  4 ++--
>  x86/svm_tests.c           | 48 +++++++++++++++++++--------------------
>  x86/taskswitch2.c         |  4 ++--
>  x86/tscdeadline_latency.c |  4 ++--
>  x86/vmexit.c              | 18 +++++++--------
>  x86/vmx_tests.c           | 42 +++++++++++++++++-----------------
>  17 files changed, 98 insertions(+), 105 deletions(-)
> 
> diff --git a/lib/x86/processor.h b/lib/x86/processor.h
> index 7a9e8c82..b89f6a7c 100644
> --- a/lib/x86/processor.h
> +++ b/lib/x86/processor.h
> @@ -653,11 +653,17 @@ static inline void pause(void)
>  	asm volatile ("pause");
>  }
>  
> +/* Disable interrupts as per x86 spec */
>  static inline void cli(void)
>  {
>  	asm volatile ("cli");
>  }
>  
> +/*
> + * Enable interrupts.
> + * Note that next instruction after sti will not have interrupts
> + * evaluated due to concept of 'interrupt shadow'
> + */
>  static inline void sti(void)
>  {
>  	asm volatile ("sti");
> @@ -732,19 +738,6 @@ static inline void wrtsc(u64 tsc)
>  	wrmsr(MSR_IA32_TSC, tsc);
>  }
>  
> -static inline void irq_disable(void)
> -{
> -	asm volatile("cli");
> -}
> -
> -/* Note that irq_enable() does not ensure an interrupt shadow due
> - * to the vagaries of compiler optimizations.  If you need the
> - * shadow, use a single asm with "sti" and the instruction after it.
Minor nitpick: instead of adding a new doc comment, why not reuse this
one from above? Looks clearer to me.
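
(Presumably the shadow-preserving helper added in patch 2 boils down to
something like the sketch below -- a single asm statement, so the
compiler cannot separate the nop from the sti:)

/*
 * Enable interrupts and execute exactly one instruction inside the
 * interrupt shadow, so that a pending interrupt is delivered right
 * after the nop.  Sketch of what patch 2's sti_nop() likely does.
 */
static inline void sti_nop(void)
{
	asm volatile ("sti; nop");
}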

Regardless,
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [kvm-unit-tests PATCH v3 02/27] x86: introduce sti_nop() and sti_nop_cli()
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 02/27] x86: introduce sti_nop() and sti_nop_cli() Maxim Levitsky
@ 2022-12-01 13:46   ` Emanuele Giuseppe Esposito
  0 siblings, 0 replies; 56+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-12-01 13:46 UTC (permalink / raw)
  To: Maxim Levitsky, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank



Am 22/11/2022 um 17:11 schrieb Maxim Levitsky:
> Add functions that shorten the common usage patterns of sti.
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> 

Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [kvm-unit-tests PATCH v3 09/27] svm: add simple nested shutdown test.
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 09/27] svm: add simple nested shutdown test Maxim Levitsky
@ 2022-12-01 13:46   ` Emanuele Giuseppe Esposito
  2022-12-06 13:56     ` Maxim Levitsky
  0 siblings, 1 reply; 56+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-12-01 13:46 UTC (permalink / raw)
  To: Maxim Levitsky, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank



Am 22/11/2022 um 17:11 schrieb Maxim Levitsky:
> Add a simple test verifying that a shutdown in L2 is intercepted
> correctly by L1.
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  x86/svm_tests.c | 17 +++++++++++++++++
>  1 file changed, 17 insertions(+)
> 
> diff --git a/x86/svm_tests.c b/x86/svm_tests.c
> index a7641fb8..7a67132a 100644
> --- a/x86/svm_tests.c
> +++ b/x86/svm_tests.c
> @@ -11,6 +11,7 @@
>  #include "apic.h"
>  #include "delay.h"
>  #include "x86/usermode.h"
> +#include "vmalloc.h"
>  
>  #define SVM_EXIT_MAX_DR_INTERCEPT 0x3f
>  
> @@ -3238,6 +3239,21 @@ static void svm_exception_test(void)
>  	}
>  }
>  
> +static void shutdown_intercept_test_guest(struct svm_test *test)
> +{
> +	asm volatile ("ud2");
> +	report_fail("should not reach here\n");
> +
Remove empty line here
> +}
Add empty line here
> +static void svm_shutdown_intercept_test(void)
> +{
> +	test_set_guest(shutdown_intercept_test_guest);
> +	vmcb->save.idtr.base = (u64)alloc_vpage();
> +	vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN);
> +	svm_vmrun();
> +	report(vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed");
> +}
> +


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [kvm-unit-tests PATCH v3 13/27] svm: remove get_npt_pte extern
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 13/27] svm: remove get_npt_pte extern Maxim Levitsky
@ 2022-12-01 13:46   ` Emanuele Giuseppe Esposito
  0 siblings, 0 replies; 56+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-12-01 13:46 UTC (permalink / raw)
  To: Maxim Levitsky, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank



Am 22/11/2022 um 17:11 schrieb Maxim Levitsky:
> get_npt_pte is unused
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  x86/svm.h | 1 -
>  1 file changed, 1 deletion(-)
> 
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [kvm-unit-tests PATCH v3 14/27] svm: move svm spec definitions to lib/x86/svm.h
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 14/27] svm: move svm spec definitions to lib/x86/svm.h Maxim Levitsky
@ 2022-12-01 13:54   ` Emanuele Giuseppe Esposito
  0 siblings, 0 replies; 56+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-12-01 13:54 UTC (permalink / raw)
  To: Maxim Levitsky, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank



Am 22/11/2022 um 17:11 schrieb Maxim Levitsky:
> This is the first step of separating the SVM code into a library
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  lib/x86/svm.h | 365 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  x86/svm.h     | 359 +------------------------------------------------
>  2 files changed, 366 insertions(+), 358 deletions(-)
>  create mode 100644 lib/x86/svm.h
> 
> diff --git a/lib/x86/svm.h b/lib/x86/svm.h
> new file mode 100644
> index 00000000..8b836c13
> --- /dev/null
> +++ b/lib/x86/svm.h
> @@ -0,0 +1,365 @@
> +
> +#ifndef SRC_LIB_X86_SVM_H_
> +#define SRC_LIB_X86_SVM_H_
> +
> +enum {
> +	INTERCEPT_INTR,
> +	INTERCEPT_NMI,
> +	INTERCEPT_SMI,
> +	INTERCEPT_INIT,
> +	INTERCEPT_VINTR,
> +	INTERCEPT_SELECTIVE_CR0,
> +	INTERCEPT_STORE_IDTR,
> +	INTERCEPT_STORE_GDTR,
> +	INTERCEPT_STORE_LDTR,
> +	INTERCEPT_STORE_TR,
> +	INTERCEPT_LOAD_IDTR,
> +	INTERCEPT_LOAD_GDTR,
> +	INTERCEPT_LOAD_LDTR,
> +	INTERCEPT_LOAD_TR,
> +	INTERCEPT_RDTSC,
> +	INTERCEPT_RDPMC,
> +	INTERCEPT_PUSHF,
> +	INTERCEPT_POPF,
> +	INTERCEPT_CPUID,
> +	INTERCEPT_RSM,
> +	INTERCEPT_IRET,
> +	INTERCEPT_INTn,
> +	INTERCEPT_INVD,
> +	INTERCEPT_PAUSE,
> +	INTERCEPT_HLT,
> +	INTERCEPT_INVLPG,
> +	INTERCEPT_INVLPGA,
> +	INTERCEPT_IOIO_PROT,
> +	INTERCEPT_MSR_PROT,
> +	INTERCEPT_TASK_SWITCH,
> +	INTERCEPT_FERR_FREEZE,
> +	INTERCEPT_SHUTDOWN,
> +	INTERCEPT_VMRUN,
> +	INTERCEPT_VMMCALL,
> +	INTERCEPT_VMLOAD,
> +	INTERCEPT_VMSAVE,
> +	INTERCEPT_STGI,
> +	INTERCEPT_CLGI,
> +	INTERCEPT_SKINIT,
> +	INTERCEPT_RDTSCP,
> +	INTERCEPT_ICEBP,
> +	INTERCEPT_WBINVD,
> +	INTERCEPT_MONITOR,
> +	INTERCEPT_MWAIT,
> +	INTERCEPT_MWAIT_COND,
> +};
> +
> +enum {
> +	VMCB_CLEAN_INTERCEPTS = 1, /* Intercept vectors, TSC offset, pause filter count */
> +	VMCB_CLEAN_PERM_MAP = 2,   /* IOPM Base and MSRPM Base */
> +	VMCB_CLEAN_ASID = 4,	   /* ASID */
> +	VMCB_CLEAN_INTR = 8,	   /* int_ctl, int_vector */
> +	VMCB_CLEAN_NPT = 16,	   /* npt_en, nCR3, gPAT */
> +	VMCB_CLEAN_CR = 32,	   /* CR0, CR3, CR4, EFER */
> +	VMCB_CLEAN_DR = 64,	   /* DR6, DR7 */
> +	VMCB_CLEAN_DT = 128,	   /* GDT, IDT */
> +	VMCB_CLEAN_SEG = 256,	   /* CS, DS, SS, ES, CPL */
> +	VMCB_CLEAN_CR2 = 512,	   /* CR2 only */
> +	VMCB_CLEAN_LBR = 1024,	   /* DBGCTL, BR_FROM, BR_TO, LAST_EX_FROM, LAST_EX_TO */
> +	VMCB_CLEAN_AVIC = 2048,	   /* APIC_BAR, APIC_BACKING_PAGE,
> +				    * PHYSICAL_TABLE pointer, LOGICAL_TABLE pointer
> +				    */
> +	VMCB_CLEAN_ALL = 4095,
> +};
> +
> +struct __attribute__ ((__packed__)) vmcb_control_area {
> +	u16 intercept_cr_read;
> +	u16 intercept_cr_write;
> +	u16 intercept_dr_read;
> +	u16 intercept_dr_write;
> +	u32 intercept_exceptions;
> +	u64 intercept;
> +	u8 reserved_1[40];
> +	u16 pause_filter_thresh;
> +	u16 pause_filter_count;
> +	u64 iopm_base_pa;
> +	u64 msrpm_base_pa;
> +	u64 tsc_offset;
> +	u32 asid;
> +	u8 tlb_ctl;
> +	u8 reserved_2[3];
> +	u32 int_ctl;
> +	u32 int_vector;
> +	u32 int_state;
> +	u8 reserved_3[4];
> +	u32 exit_code;
> +	u32 exit_code_hi;
> +	u64 exit_info_1;
> +	u64 exit_info_2;
> +	u32 exit_int_info;
> +	u32 exit_int_info_err;
> +	u64 nested_ctl;
> +	u8 reserved_4[16];
> +	u32 event_inj;
> +	u32 event_inj_err;
> +	u64 nested_cr3;
> +	u64 virt_ext;
> +	u32 clean;
> +	u32 reserved_5;
> +	u64 next_rip;
> +	u8 insn_len;
> +	u8 insn_bytes[15];
> +	u8 reserved_6[800];
> +};
> +
> +#define TLB_CONTROL_DO_NOTHING 0
> +#define TLB_CONTROL_FLUSH_ALL_ASID 1
> +
> +#define V_TPR_MASK 0x0f
> +
> +#define V_IRQ_SHIFT 8
> +#define V_IRQ_MASK (1 << V_IRQ_SHIFT)
> +
> +#define V_GIF_ENABLED_SHIFT 25
> +#define V_GIF_ENABLED_MASK (1 << V_GIF_ENABLED_SHIFT)
> +
> +#define V_GIF_SHIFT 9
> +#define V_GIF_MASK (1 << V_GIF_SHIFT)
> +
> +#define V_INTR_PRIO_SHIFT 16
> +#define V_INTR_PRIO_MASK (0x0f << V_INTR_PRIO_SHIFT)
> +
> +#define V_IGN_TPR_SHIFT 20
> +#define V_IGN_TPR_MASK (1 << V_IGN_TPR_SHIFT)
> +
> +#define V_INTR_MASKING_SHIFT 24
> +#define V_INTR_MASKING_MASK (1 << V_INTR_MASKING_SHIFT)
> +
> +#define SVM_INTERRUPT_SHADOW_MASK 1
> +
> +#define SVM_IOIO_STR_SHIFT 2
> +#define SVM_IOIO_REP_SHIFT 3
> +#define SVM_IOIO_SIZE_SHIFT 4
> +#define SVM_IOIO_ASIZE_SHIFT 7
> +
> +#define SVM_IOIO_TYPE_MASK 1
> +#define SVM_IOIO_STR_MASK (1 << SVM_IOIO_STR_SHIFT)
> +#define SVM_IOIO_REP_MASK (1 << SVM_IOIO_REP_SHIFT)
> +#define SVM_IOIO_SIZE_MASK (7 << SVM_IOIO_SIZE_SHIFT)
> +#define SVM_IOIO_ASIZE_MASK (7 << SVM_IOIO_ASIZE_SHIFT)
> +
> +#define SVM_VM_CR_VALID_MASK	0x001fULL
> +#define SVM_VM_CR_SVM_LOCK_MASK 0x0008ULL
> +#define SVM_VM_CR_SVM_DIS_MASK  0x0010ULL
> +
> +#define TSC_RATIO_DEFAULT   0x0100000000ULL
> +
> +struct __attribute__ ((__packed__)) vmcb_seg {
> +	u16 selector;
> +	u16 attrib;
> +	u32 limit;
> +	u64 base;
> +};
> +
> +struct __attribute__ ((__packed__)) vmcb_save_area {
> +	struct vmcb_seg es;
> +	struct vmcb_seg cs;
> +	struct vmcb_seg ss;
> +	struct vmcb_seg ds;
> +	struct vmcb_seg fs;
> +	struct vmcb_seg gs;
> +	struct vmcb_seg gdtr;
> +	struct vmcb_seg ldtr;
> +	struct vmcb_seg idtr;
> +	struct vmcb_seg tr;
> +	u8 reserved_1[43];
> +	u8 cpl;
> +	u8 reserved_2[4];
> +	u64 efer;
> +	u8 reserved_3[112];
> +	u64 cr4;
> +	u64 cr3;
> +	u64 cr0;
> +	u64 dr7;
> +	u64 dr6;
> +	u64 rflags;
> +	u64 rip;
> +	u8 reserved_4[88];
> +	u64 rsp;
> +	u8 reserved_5[24];
> +	u64 rax;
> +	u64 star;
> +	u64 lstar;
> +	u64 cstar;
> +	u64 sfmask;
> +	u64 kernel_gs_base;
> +	u64 sysenter_cs;
> +	u64 sysenter_esp;
> +	u64 sysenter_eip;
> +	u64 cr2;
> +	u8 reserved_6[32];
> +	u64 g_pat;
> +	u64 dbgctl;
> +	u64 br_from;
> +	u64 br_to;
> +	u64 last_excp_from;
> +	u64 last_excp_to;
> +};
> +
> +struct __attribute__ ((__packed__)) vmcb {
> +	struct vmcb_control_area control;
> +	struct vmcb_save_area save;
> +};
> +
> +#define SVM_CPUID_FEATURE_SHIFT 2
> +#define SVM_CPUID_FUNC 0x8000000a
> +
> +#define SVM_VM_CR_SVM_DISABLE 4
> +
> +#define SVM_SELECTOR_S_SHIFT 4
> +#define SVM_SELECTOR_DPL_SHIFT 5
> +#define SVM_SELECTOR_P_SHIFT 7
> +#define SVM_SELECTOR_AVL_SHIFT 8
> +#define SVM_SELECTOR_L_SHIFT 9
> +#define SVM_SELECTOR_DB_SHIFT 10
> +#define SVM_SELECTOR_G_SHIFT 11
> +
> +#define SVM_SELECTOR_TYPE_MASK (0xf)
> +#define SVM_SELECTOR_S_MASK (1 << SVM_SELECTOR_S_SHIFT)
> +#define SVM_SELECTOR_DPL_MASK (3 << SVM_SELECTOR_DPL_SHIFT)
> +#define SVM_SELECTOR_P_MASK (1 << SVM_SELECTOR_P_SHIFT)
> +#define SVM_SELECTOR_AVL_MASK (1 << SVM_SELECTOR_AVL_SHIFT)
> +#define SVM_SELECTOR_L_MASK (1 << SVM_SELECTOR_L_SHIFT)
> +#define SVM_SELECTOR_DB_MASK (1 << SVM_SELECTOR_DB_SHIFT)
> +#define SVM_SELECTOR_G_MASK (1 << SVM_SELECTOR_G_SHIFT)
> +
> +#define SVM_SELECTOR_WRITE_MASK (1 << 1)
> +#define SVM_SELECTOR_READ_MASK SVM_SELECTOR_WRITE_MASK
> +#define SVM_SELECTOR_CODE_MASK (1 << 3)
> +
> +#define INTERCEPT_CR0_MASK 1
> +#define INTERCEPT_CR3_MASK (1 << 3)
> +#define INTERCEPT_CR4_MASK (1 << 4)
> +#define INTERCEPT_CR8_MASK (1 << 8)
> +
> +#define INTERCEPT_DR0_MASK 1
> +#define INTERCEPT_DR1_MASK (1 << 1)
> +#define INTERCEPT_DR2_MASK (1 << 2)
> +#define INTERCEPT_DR3_MASK (1 << 3)
> +#define INTERCEPT_DR4_MASK (1 << 4)
> +#define INTERCEPT_DR5_MASK (1 << 5)
> +#define INTERCEPT_DR6_MASK (1 << 6)
> +#define INTERCEPT_DR7_MASK (1 << 7)
> +
> +#define SVM_EVTINJ_VEC_MASK 0xff
> +
> +#define SVM_EVTINJ_TYPE_SHIFT 8
> +#define SVM_EVTINJ_TYPE_MASK (7 << SVM_EVTINJ_TYPE_SHIFT)
> +
> +#define SVM_EVTINJ_TYPE_INTR (0 << SVM_EVTINJ_TYPE_SHIFT)
> +#define SVM_EVTINJ_TYPE_NMI (2 << SVM_EVTINJ_TYPE_SHIFT)
> +#define SVM_EVTINJ_TYPE_EXEPT (3 << SVM_EVTINJ_TYPE_SHIFT)
> +#define SVM_EVTINJ_TYPE_SOFT (4 << SVM_EVTINJ_TYPE_SHIFT)
> +
> +#define SVM_EVTINJ_VALID (1 << 31)
> +#define SVM_EVTINJ_VALID_ERR (1 << 11)
> +
> +#define SVM_EXITINTINFO_VEC_MASK SVM_EVTINJ_VEC_MASK
> +#define SVM_EXITINTINFO_TYPE_MASK SVM_EVTINJ_TYPE_MASK
> +
> +#define SVM_EXITINTINFO_TYPE_INTR SVM_EVTINJ_TYPE_INTR
> +#define SVM_EXITINTINFO_TYPE_NMI SVM_EVTINJ_TYPE_NMI
> +#define SVM_EXITINTINFO_TYPE_EXEPT SVM_EVTINJ_TYPE_EXEPT
> +#define SVM_EXITINTINFO_TYPE_SOFT SVM_EVTINJ_TYPE_SOFT
> +
> +#define SVM_EXITINTINFO_VALID SVM_EVTINJ_VALID
> +#define SVM_EXITINTINFO_VALID_ERR SVM_EVTINJ_VALID_ERR
> +
> +#define SVM_EXITINFOSHIFT_TS_REASON_IRET 36
> +#define SVM_EXITINFOSHIFT_TS_REASON_JMP 38
> +#define SVM_EXITINFOSHIFT_TS_HAS_ERROR_CODE 44
> +
> +#define SVM_EXIT_READ_CR0   0x000
> +#define SVM_EXIT_READ_CR3   0x003
> +#define SVM_EXIT_READ_CR4   0x004
> +#define SVM_EXIT_READ_CR8   0x008
> +#define SVM_EXIT_WRITE_CR0  0x010
> +#define SVM_EXIT_WRITE_CR3  0x013
> +#define SVM_EXIT_WRITE_CR4  0x014
> +#define SVM_EXIT_WRITE_CR8  0x018
> +#define SVM_EXIT_READ_DR0   0x020
> +#define SVM_EXIT_READ_DR1   0x021
> +#define SVM_EXIT_READ_DR2   0x022
> +#define SVM_EXIT_READ_DR3   0x023
> +#define SVM_EXIT_READ_DR4   0x024
> +#define SVM_EXIT_READ_DR5   0x025
> +#define SVM_EXIT_READ_DR6   0x026
> +#define SVM_EXIT_READ_DR7   0x027
> +#define SVM_EXIT_WRITE_DR0  0x030
> +#define SVM_EXIT_WRITE_DR1  0x031
> +#define SVM_EXIT_WRITE_DR2  0x032
> +#define SVM_EXIT_WRITE_DR3  0x033
> +#define SVM_EXIT_WRITE_DR4  0x034
> +#define SVM_EXIT_WRITE_DR5  0x035
> +#define SVM_EXIT_WRITE_DR6  0x036
> +#define SVM_EXIT_WRITE_DR7  0x037
> +#define SVM_EXIT_EXCP_BASE	  0x040

Apart from the indent here and below, I did a quick diff and everything
seems fine. So with that fixed:

Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>

> +#define SVM_EXIT_INTR	   0x060
> +#define SVM_EXIT_NMI		0x061
> +#define SVM_EXIT_SMI		0x062
> +#define SVM_EXIT_INIT	   0x063
> +#define SVM_EXIT_VINTR	  0x064
> +#define SVM_EXIT_CR0_SEL_WRITE  0x065
> +#define SVM_EXIT_IDTR_READ  0x066
> +#define SVM_EXIT_GDTR_READ  0x067
> +#define SVM_EXIT_LDTR_READ  0x068
> +#define SVM_EXIT_TR_READ	0x069
> +#define SVM_EXIT_IDTR_WRITE 0x06a
> +#define SVM_EXIT_GDTR_WRITE 0x06b
> +#define SVM_EXIT_LDTR_WRITE 0x06c
> +#define SVM_EXIT_TR_WRITE   0x06d
> +#define SVM_EXIT_RDTSC	  0x06e
> +#define SVM_EXIT_RDPMC	  0x06f
> +#define SVM_EXIT_PUSHF	  0x070
> +#define SVM_EXIT_POPF	   0x071
> +#define SVM_EXIT_CPUID	  0x072
> +#define SVM_EXIT_RSM		0x073
> +#define SVM_EXIT_IRET	   0x074
> +#define SVM_EXIT_SWINT	  0x075
> +#define SVM_EXIT_INVD	   0x076
> +#define SVM_EXIT_PAUSE	  0x077
> +#define SVM_EXIT_HLT		0x078
> +#define SVM_EXIT_INVLPG	 0x079
> +#define SVM_EXIT_INVLPGA	0x07a
> +#define SVM_EXIT_IOIO	   0x07b
> +#define SVM_EXIT_MSR		0x07c
> +#define SVM_EXIT_TASK_SWITCH	0x07d
> +#define SVM_EXIT_FERR_FREEZE	0x07e
> +#define SVM_EXIT_SHUTDOWN   0x07f
> +#define SVM_EXIT_VMRUN	  0x080
> +#define SVM_EXIT_VMMCALL	0x081
> +#define SVM_EXIT_VMLOAD	 0x082
> +#define SVM_EXIT_VMSAVE	 0x083
> +#define SVM_EXIT_STGI	   0x084
> +#define SVM_EXIT_CLGI	   0x085
> +#define SVM_EXIT_SKINIT	 0x086
> +#define SVM_EXIT_RDTSCP	 0x087
> +#define SVM_EXIT_ICEBP	  0x088
> +#define SVM_EXIT_WBINVD	 0x089
> +#define SVM_EXIT_MONITOR	0x08a
> +#define SVM_EXIT_MWAIT	  0x08b
> +#define SVM_EXIT_MWAIT_COND 0x08c
> +#define SVM_EXIT_NPF		0x400
> +
> +#define SVM_EXIT_ERR		-1
> +
> +#define SVM_CR0_SELECTIVE_MASK (X86_CR0_TS | X86_CR0_MP)
> +
> +#define SVM_CR0_RESERVED_MASK			0xffffffff00000000U
> +#define SVM_CR3_LONG_MBZ_MASK			0xfff0000000000000U
> +#define SVM_CR3_LONG_RESERVED_MASK		0x0000000000000fe7U
> +#define SVM_CR3_PAE_LEGACY_RESERVED_MASK	0x0000000000000007U
> +#define SVM_CR4_LEGACY_RESERVED_MASK		0xff08e000U
> +#define SVM_CR4_RESERVED_MASK			0xffffffffff08e000U
> +#define SVM_DR6_RESERVED_MASK			0xffffffffffff1ff0U
> +#define SVM_DR7_RESERVED_MASK			0xffffffff0000cc00U
> +#define SVM_EFER_RESERVED_MASK			0xffffffffffff0200U
> +
> +
> +#endif /* SRC_LIB_X86_SVM_H_ */
> diff --git a/x86/svm.h b/x86/svm.h
> index 1ad85ba4..3cd7ce8b 100644
> --- a/x86/svm.h
> +++ b/x86/svm.h
> @@ -2,367 +2,10 @@
>  #define X86_SVM_H
>  
>  #include "libcflat.h"
> +#include <x86/svm.h>
>  
> -enum {
> -	INTERCEPT_INTR,
> -	INTERCEPT_NMI,
> -	INTERCEPT_SMI,
> -	INTERCEPT_INIT,
> -	INTERCEPT_VINTR,
> -	INTERCEPT_SELECTIVE_CR0,
> -	INTERCEPT_STORE_IDTR,
> -	INTERCEPT_STORE_GDTR,
> -	INTERCEPT_STORE_LDTR,
> -	INTERCEPT_STORE_TR,
> -	INTERCEPT_LOAD_IDTR,
> -	INTERCEPT_LOAD_GDTR,
> -	INTERCEPT_LOAD_LDTR,
> -	INTERCEPT_LOAD_TR,
> -	INTERCEPT_RDTSC,
> -	INTERCEPT_RDPMC,
> -	INTERCEPT_PUSHF,
> -	INTERCEPT_POPF,
> -	INTERCEPT_CPUID,
> -	INTERCEPT_RSM,
> -	INTERCEPT_IRET,
> -	INTERCEPT_INTn,
> -	INTERCEPT_INVD,
> -	INTERCEPT_PAUSE,
> -	INTERCEPT_HLT,
> -	INTERCEPT_INVLPG,
> -	INTERCEPT_INVLPGA,
> -	INTERCEPT_IOIO_PROT,
> -	INTERCEPT_MSR_PROT,
> -	INTERCEPT_TASK_SWITCH,
> -	INTERCEPT_FERR_FREEZE,
> -	INTERCEPT_SHUTDOWN,
> -	INTERCEPT_VMRUN,
> -	INTERCEPT_VMMCALL,
> -	INTERCEPT_VMLOAD,
> -	INTERCEPT_VMSAVE,
> -	INTERCEPT_STGI,
> -	INTERCEPT_CLGI,
> -	INTERCEPT_SKINIT,
> -	INTERCEPT_RDTSCP,
> -	INTERCEPT_ICEBP,
> -	INTERCEPT_WBINVD,
> -	INTERCEPT_MONITOR,
> -	INTERCEPT_MWAIT,
> -	INTERCEPT_MWAIT_COND,
> -};
> -
> -enum {
> -        VMCB_CLEAN_INTERCEPTS = 1, /* Intercept vectors, TSC offset, pause filter count */
> -        VMCB_CLEAN_PERM_MAP = 2,   /* IOPM Base and MSRPM Base */
> -        VMCB_CLEAN_ASID = 4,       /* ASID */
> -        VMCB_CLEAN_INTR = 8,       /* int_ctl, int_vector */
> -        VMCB_CLEAN_NPT = 16,       /* npt_en, nCR3, gPAT */
> -        VMCB_CLEAN_CR = 32,        /* CR0, CR3, CR4, EFER */
> -        VMCB_CLEAN_DR = 64,        /* DR6, DR7 */
> -        VMCB_CLEAN_DT = 128,       /* GDT, IDT */
> -        VMCB_CLEAN_SEG = 256,      /* CS, DS, SS, ES, CPL */
> -        VMCB_CLEAN_CR2 = 512,      /* CR2 only */
> -        VMCB_CLEAN_LBR = 1024,     /* DBGCTL, BR_FROM, BR_TO, LAST_EX_FROM, LAST_EX_TO */
> -        VMCB_CLEAN_AVIC = 2048,    /* APIC_BAR, APIC_BACKING_PAGE,
> -				      PHYSICAL_TABLE pointer, LOGICAL_TABLE pointer */
> -        VMCB_CLEAN_ALL = 4095,
> -};
> -
> -struct __attribute__ ((__packed__)) vmcb_control_area {
> -	u16 intercept_cr_read;
> -	u16 intercept_cr_write;
> -	u16 intercept_dr_read;
> -	u16 intercept_dr_write;
> -	u32 intercept_exceptions;
> -	u64 intercept;
> -	u8 reserved_1[40];
> -	u16 pause_filter_thresh;
> -	u16 pause_filter_count;
> -	u64 iopm_base_pa;
> -	u64 msrpm_base_pa;
> -	u64 tsc_offset;
> -	u32 asid;
> -	u8 tlb_ctl;
> -	u8 reserved_2[3];
> -	u32 int_ctl;
> -	u32 int_vector;
> -	u32 int_state;
> -	u8 reserved_3[4];
> -	u32 exit_code;
> -	u32 exit_code_hi;
> -	u64 exit_info_1;
> -	u64 exit_info_2;
> -	u32 exit_int_info;
> -	u32 exit_int_info_err;
> -	u64 nested_ctl;
> -	u8 reserved_4[16];
> -	u32 event_inj;
> -	u32 event_inj_err;
> -	u64 nested_cr3;
> -	u64 virt_ext;
> -	u32 clean;
> -	u32 reserved_5;
> -	u64 next_rip;
> -	u8 insn_len;
> -	u8 insn_bytes[15];
> -	u8 reserved_6[800];
> -};
> -
> -#define TLB_CONTROL_DO_NOTHING 0
> -#define TLB_CONTROL_FLUSH_ALL_ASID 1
> -
> -#define V_TPR_MASK 0x0f
> -
> -#define V_IRQ_SHIFT 8
> -#define V_IRQ_MASK (1 << V_IRQ_SHIFT)
> -
> -#define V_GIF_ENABLED_SHIFT 25
> -#define V_GIF_ENABLED_MASK (1 << V_GIF_ENABLED_SHIFT)
> -
> -#define V_GIF_SHIFT 9
> -#define V_GIF_MASK (1 << V_GIF_SHIFT)
> -
> -#define V_INTR_PRIO_SHIFT 16
> -#define V_INTR_PRIO_MASK (0x0f << V_INTR_PRIO_SHIFT)
> -
> -#define V_IGN_TPR_SHIFT 20
> -#define V_IGN_TPR_MASK (1 << V_IGN_TPR_SHIFT)
> -
> -#define V_INTR_MASKING_SHIFT 24
> -#define V_INTR_MASKING_MASK (1 << V_INTR_MASKING_SHIFT)
> -
> -#define SVM_INTERRUPT_SHADOW_MASK 1
> -
> -#define SVM_IOIO_STR_SHIFT 2
> -#define SVM_IOIO_REP_SHIFT 3
> -#define SVM_IOIO_SIZE_SHIFT 4
> -#define SVM_IOIO_ASIZE_SHIFT 7
> -
> -#define SVM_IOIO_TYPE_MASK 1
> -#define SVM_IOIO_STR_MASK (1 << SVM_IOIO_STR_SHIFT)
> -#define SVM_IOIO_REP_MASK (1 << SVM_IOIO_REP_SHIFT)
> -#define SVM_IOIO_SIZE_MASK (7 << SVM_IOIO_SIZE_SHIFT)
> -#define SVM_IOIO_ASIZE_MASK (7 << SVM_IOIO_ASIZE_SHIFT)
> -
> -#define SVM_VM_CR_VALID_MASK	0x001fULL
> -#define SVM_VM_CR_SVM_LOCK_MASK 0x0008ULL
> -#define SVM_VM_CR_SVM_DIS_MASK  0x0010ULL
> -
> -#define TSC_RATIO_DEFAULT   0x0100000000ULL
> -
> -struct __attribute__ ((__packed__)) vmcb_seg {
> -	u16 selector;
> -	u16 attrib;
> -	u32 limit;
> -	u64 base;
> -};
> -
> -struct __attribute__ ((__packed__)) vmcb_save_area {
> -	struct vmcb_seg es;
> -	struct vmcb_seg cs;
> -	struct vmcb_seg ss;
> -	struct vmcb_seg ds;
> -	struct vmcb_seg fs;
> -	struct vmcb_seg gs;
> -	struct vmcb_seg gdtr;
> -	struct vmcb_seg ldtr;
> -	struct vmcb_seg idtr;
> -	struct vmcb_seg tr;
> -	u8 reserved_1[43];
> -	u8 cpl;
> -	u8 reserved_2[4];
> -	u64 efer;
> -	u8 reserved_3[112];
> -	u64 cr4;
> -	u64 cr3;
> -	u64 cr0;
> -	u64 dr7;
> -	u64 dr6;
> -	u64 rflags;
> -	u64 rip;
> -	u8 reserved_4[88];
> -	u64 rsp;
> -	u8 reserved_5[24];
> -	u64 rax;
> -	u64 star;
> -	u64 lstar;
> -	u64 cstar;
> -	u64 sfmask;
> -	u64 kernel_gs_base;
> -	u64 sysenter_cs;
> -	u64 sysenter_esp;
> -	u64 sysenter_eip;
> -	u64 cr2;
> -	u8 reserved_6[32];
> -	u64 g_pat;
> -	u64 dbgctl;
> -	u64 br_from;
> -	u64 br_to;
> -	u64 last_excp_from;
> -	u64 last_excp_to;
> -};
> -
> -struct __attribute__ ((__packed__)) vmcb {
> -	struct vmcb_control_area control;
> -	struct vmcb_save_area save;
> -};
> -
> -#define SVM_CPUID_FEATURE_SHIFT 2
> -#define SVM_CPUID_FUNC 0x8000000a
> -
> -#define SVM_VM_CR_SVM_DISABLE 4
> -
> -#define SVM_SELECTOR_S_SHIFT 4
> -#define SVM_SELECTOR_DPL_SHIFT 5
> -#define SVM_SELECTOR_P_SHIFT 7
> -#define SVM_SELECTOR_AVL_SHIFT 8
> -#define SVM_SELECTOR_L_SHIFT 9
> -#define SVM_SELECTOR_DB_SHIFT 10
> -#define SVM_SELECTOR_G_SHIFT 11
> -
> -#define SVM_SELECTOR_TYPE_MASK (0xf)
> -#define SVM_SELECTOR_S_MASK (1 << SVM_SELECTOR_S_SHIFT)
> -#define SVM_SELECTOR_DPL_MASK (3 << SVM_SELECTOR_DPL_SHIFT)
> -#define SVM_SELECTOR_P_MASK (1 << SVM_SELECTOR_P_SHIFT)
> -#define SVM_SELECTOR_AVL_MASK (1 << SVM_SELECTOR_AVL_SHIFT)
> -#define SVM_SELECTOR_L_MASK (1 << SVM_SELECTOR_L_SHIFT)
> -#define SVM_SELECTOR_DB_MASK (1 << SVM_SELECTOR_DB_SHIFT)
> -#define SVM_SELECTOR_G_MASK (1 << SVM_SELECTOR_G_SHIFT)
> -
> -#define SVM_SELECTOR_WRITE_MASK (1 << 1)
> -#define SVM_SELECTOR_READ_MASK SVM_SELECTOR_WRITE_MASK
> -#define SVM_SELECTOR_CODE_MASK (1 << 3)
> -
> -#define INTERCEPT_CR0_MASK 1
> -#define INTERCEPT_CR3_MASK (1 << 3)
> -#define INTERCEPT_CR4_MASK (1 << 4)
> -#define INTERCEPT_CR8_MASK (1 << 8)
> -
> -#define INTERCEPT_DR0_MASK 1
> -#define INTERCEPT_DR1_MASK (1 << 1)
> -#define INTERCEPT_DR2_MASK (1 << 2)
> -#define INTERCEPT_DR3_MASK (1 << 3)
> -#define INTERCEPT_DR4_MASK (1 << 4)
> -#define INTERCEPT_DR5_MASK (1 << 5)
> -#define INTERCEPT_DR6_MASK (1 << 6)
> -#define INTERCEPT_DR7_MASK (1 << 7)
> -
> -#define SVM_EVTINJ_VEC_MASK 0xff
> -
> -#define SVM_EVTINJ_TYPE_SHIFT 8
> -#define SVM_EVTINJ_TYPE_MASK (7 << SVM_EVTINJ_TYPE_SHIFT)
> -
> -#define SVM_EVTINJ_TYPE_INTR (0 << SVM_EVTINJ_TYPE_SHIFT)
> -#define SVM_EVTINJ_TYPE_NMI (2 << SVM_EVTINJ_TYPE_SHIFT)
> -#define SVM_EVTINJ_TYPE_EXEPT (3 << SVM_EVTINJ_TYPE_SHIFT)
> -#define SVM_EVTINJ_TYPE_SOFT (4 << SVM_EVTINJ_TYPE_SHIFT)
> -
> -#define SVM_EVTINJ_VALID (1 << 31)
> -#define SVM_EVTINJ_VALID_ERR (1 << 11)
> -
> -#define SVM_EXITINTINFO_VEC_MASK SVM_EVTINJ_VEC_MASK
> -#define SVM_EXITINTINFO_TYPE_MASK SVM_EVTINJ_TYPE_MASK
> -
> -#define	SVM_EXITINTINFO_TYPE_INTR SVM_EVTINJ_TYPE_INTR
> -#define	SVM_EXITINTINFO_TYPE_NMI SVM_EVTINJ_TYPE_NMI
> -#define	SVM_EXITINTINFO_TYPE_EXEPT SVM_EVTINJ_TYPE_EXEPT
> -#define	SVM_EXITINTINFO_TYPE_SOFT SVM_EVTINJ_TYPE_SOFT
> -
> -#define SVM_EXITINTINFO_VALID SVM_EVTINJ_VALID
> -#define SVM_EXITINTINFO_VALID_ERR SVM_EVTINJ_VALID_ERR
> -
> -#define SVM_EXITINFOSHIFT_TS_REASON_IRET 36
> -#define SVM_EXITINFOSHIFT_TS_REASON_JMP 38
> -#define SVM_EXITINFOSHIFT_TS_HAS_ERROR_CODE 44
> -
> -#define	SVM_EXIT_READ_CR0 	0x000
> -#define	SVM_EXIT_READ_CR3 	0x003
> -#define	SVM_EXIT_READ_CR4 	0x004
> -#define	SVM_EXIT_READ_CR8 	0x008
> -#define	SVM_EXIT_WRITE_CR0 	0x010
> -#define	SVM_EXIT_WRITE_CR3 	0x013
> -#define	SVM_EXIT_WRITE_CR4 	0x014
> -#define	SVM_EXIT_WRITE_CR8 	0x018
> -#define	SVM_EXIT_READ_DR0 	0x020
> -#define	SVM_EXIT_READ_DR1 	0x021
> -#define	SVM_EXIT_READ_DR2 	0x022
> -#define	SVM_EXIT_READ_DR3 	0x023
> -#define	SVM_EXIT_READ_DR4 	0x024
> -#define	SVM_EXIT_READ_DR5 	0x025
> -#define	SVM_EXIT_READ_DR6 	0x026
> -#define	SVM_EXIT_READ_DR7 	0x027
> -#define	SVM_EXIT_WRITE_DR0 	0x030
> -#define	SVM_EXIT_WRITE_DR1 	0x031
> -#define	SVM_EXIT_WRITE_DR2 	0x032
> -#define	SVM_EXIT_WRITE_DR3 	0x033
> -#define	SVM_EXIT_WRITE_DR4 	0x034
> -#define	SVM_EXIT_WRITE_DR5 	0x035
> -#define	SVM_EXIT_WRITE_DR6 	0x036
> -#define	SVM_EXIT_WRITE_DR7 	0x037
> -#define SVM_EXIT_EXCP_BASE      0x040
> -#define SVM_EXIT_INTR		0x060
> -#define SVM_EXIT_NMI		0x061
> -#define SVM_EXIT_SMI		0x062
> -#define SVM_EXIT_INIT		0x063
> -#define SVM_EXIT_VINTR		0x064
> -#define SVM_EXIT_CR0_SEL_WRITE	0x065
> -#define SVM_EXIT_IDTR_READ	0x066
> -#define SVM_EXIT_GDTR_READ	0x067
> -#define SVM_EXIT_LDTR_READ	0x068
> -#define SVM_EXIT_TR_READ	0x069
> -#define SVM_EXIT_IDTR_WRITE	0x06a
> -#define SVM_EXIT_GDTR_WRITE	0x06b
> -#define SVM_EXIT_LDTR_WRITE	0x06c
> -#define SVM_EXIT_TR_WRITE	0x06d
> -#define SVM_EXIT_RDTSC		0x06e
> -#define SVM_EXIT_RDPMC		0x06f
> -#define SVM_EXIT_PUSHF		0x070
> -#define SVM_EXIT_POPF		0x071
> -#define SVM_EXIT_CPUID		0x072
> -#define SVM_EXIT_RSM		0x073
> -#define SVM_EXIT_IRET		0x074
> -#define SVM_EXIT_SWINT		0x075
> -#define SVM_EXIT_INVD		0x076
> -#define SVM_EXIT_PAUSE		0x077
> -#define SVM_EXIT_HLT		0x078
> -#define SVM_EXIT_INVLPG		0x079
> -#define SVM_EXIT_INVLPGA	0x07a
> -#define SVM_EXIT_IOIO		0x07b
> -#define SVM_EXIT_MSR		0x07c
> -#define SVM_EXIT_TASK_SWITCH	0x07d
> -#define SVM_EXIT_FERR_FREEZE	0x07e
> -#define SVM_EXIT_SHUTDOWN	0x07f
> -#define SVM_EXIT_VMRUN		0x080
> -#define SVM_EXIT_VMMCALL	0x081
> -#define SVM_EXIT_VMLOAD		0x082
> -#define SVM_EXIT_VMSAVE		0x083
> -#define SVM_EXIT_STGI		0x084
> -#define SVM_EXIT_CLGI		0x085
> -#define SVM_EXIT_SKINIT		0x086
> -#define SVM_EXIT_RDTSCP		0x087
> -#define SVM_EXIT_ICEBP		0x088
> -#define SVM_EXIT_WBINVD		0x089
> -#define SVM_EXIT_MONITOR	0x08a
> -#define SVM_EXIT_MWAIT		0x08b
> -#define SVM_EXIT_MWAIT_COND	0x08c
> -#define SVM_EXIT_NPF  		0x400
> -
> -#define SVM_EXIT_ERR		-1
> -
> -#define SVM_CR0_SELECTIVE_MASK (X86_CR0_TS | X86_CR0_MP)
> -
> -#define	SVM_CR0_RESERVED_MASK			0xffffffff00000000U
> -#define	SVM_CR3_LONG_MBZ_MASK			0xfff0000000000000U
> -#define	SVM_CR3_LONG_RESERVED_MASK		0x0000000000000fe7U
> -#define SVM_CR3_PAE_LEGACY_RESERVED_MASK	0x0000000000000007U
> -#define	SVM_CR4_LEGACY_RESERVED_MASK		0xff08e000U
> -#define	SVM_CR4_RESERVED_MASK			0xffffffffff08e000U
> -#define	SVM_DR6_RESERVED_MASK			0xffffffffffff1ff0U
> -#define	SVM_DR7_RESERVED_MASK			0xffffffff0000cc00U
> -#define	SVM_EFER_RESERVED_MASK			0xffffffffffff0200U
>  
>  #define MSR_BITMAP_SIZE 8192
> -
>  #define LBR_CTL_ENABLE_MASK BIT_ULL(0)
>  
>  struct svm_test {
> 


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [kvm-unit-tests PATCH v3 15/27] svm: move some svm support functions into lib/x86/svm_lib.h
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 15/27] svm: move some svm support functions into lib/x86/svm_lib.h Maxim Levitsky
@ 2022-12-01 13:59   ` Emanuele Giuseppe Esposito
  2022-12-06 14:10     ` Maxim Levitsky
  0 siblings, 1 reply; 56+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-12-01 13:59 UTC (permalink / raw)
  To: Maxim Levitsky, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank



Am 22/11/2022 um 17:11 schrieb Maxim Levitsky:
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  lib/x86/svm_lib.h | 53 +++++++++++++++++++++++++++++++++++++++++++++++
>  x86/svm.c         | 36 +-------------------------------
>  x86/svm.h         | 18 ----------------
>  x86/svm_npt.c     |  1 +
>  x86/svm_tests.c   |  1 +
>  5 files changed, 56 insertions(+), 53 deletions(-)
>  create mode 100644 lib/x86/svm_lib.h
> 
> diff --git a/lib/x86/svm_lib.h b/lib/x86/svm_lib.h
> new file mode 100644
> index 00000000..04910281
> --- /dev/null
> +++ b/lib/x86/svm_lib.h
> @@ -0,0 +1,53 @@
> +#ifndef SRC_LIB_X86_SVM_LIB_H_
> +#define SRC_LIB_X86_SVM_LIB_H_
> +
> +#include <x86/svm.h>
> +#include "processor.h"
> +
> +static inline bool npt_supported(void)
> +{
> +	return this_cpu_has(X86_FEATURE_NPT);
> +}
> +
> +static inline bool vgif_supported(void)
> +{
> +	return this_cpu_has(X86_FEATURE_VGIF);
> +}
> +
> +static inline bool lbrv_supported(void)
> +{
> +	return this_cpu_has(X86_FEATURE_LBRV);
> +}
> +
> +static inline bool tsc_scale_supported(void)
> +{
> +	return this_cpu_has(X86_FEATURE_TSCRATEMSR);
> +}
> +
> +static inline bool pause_filter_supported(void)
> +{
> +	return this_cpu_has(X86_FEATURE_PAUSEFILTER);
> +}
> +
> +static inline bool pause_threshold_supported(void)
> +{
> +	return this_cpu_has(X86_FEATURE_PFTHRESHOLD);
> +}
> +
> +static inline void vmmcall(void)
> +{
> +	asm volatile ("vmmcall" : : : "memory");
> +}
> +
> +static inline void stgi(void)
> +{
> +	asm volatile ("stgi");
> +}
> +
> +static inline void clgi(void)
> +{
> +	asm volatile ("clgi");
> +}
> +
Not an expert at all on this, but sti() and cli() from patch 1 live in
processor.h, while stgi() (g stands for global?) and clgi() end up in a
different header. What about moving them together?

> +
> +#endif /* SRC_LIB_X86_SVM_LIB_H_ */
> diff --git a/x86/svm.c b/x86/svm.c
> index 0b2a1d69..8d90a242 100644
> --- a/x86/svm.c
> +++ b/x86/svm.c
> @@ -14,6 +14,7 @@
>  #include "alloc_page.h"
>  #include "isr.h"
>  #include "apic.h"
> +#include "svm_lib.h"
>  
>  /* for the nested page table*/
>  u64 *pml4e;
> @@ -54,32 +55,6 @@ bool default_supported(void)
>  	return true;
>  }
>  
> -bool vgif_supported(void)
> -{
> -	return this_cpu_has(X86_FEATURE_VGIF);
> -}
> -
> -bool lbrv_supported(void)
> -{
> -	return this_cpu_has(X86_FEATURE_LBRV);
> -}
> -
> -bool tsc_scale_supported(void)
> -{
> -	return this_cpu_has(X86_FEATURE_TSCRATEMSR);
> -}
> -
> -bool pause_filter_supported(void)
> -{
> -	return this_cpu_has(X86_FEATURE_PAUSEFILTER);
> -}
> -
> -bool pause_threshold_supported(void)
> -{
> -	return this_cpu_has(X86_FEATURE_PFTHRESHOLD);
> -}
> -
> -
>  void default_prepare(struct svm_test *test)
>  {
>  	vmcb_ident(vmcb);
> @@ -94,10 +69,6 @@ bool default_finished(struct svm_test *test)
>  	return true; /* one vmexit */
>  }
>  
> -bool npt_supported(void)
> -{
> -	return this_cpu_has(X86_FEATURE_NPT);
> -}
>  
>  int get_test_stage(struct svm_test *test)
>  {
> @@ -128,11 +99,6 @@ static void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
>  	seg->base = base;
>  }
>  
> -inline void vmmcall(void)
> -{
> -	asm volatile ("vmmcall" : : : "memory");
> -}
> -
>  static test_guest_func guest_main;
>  
>  void test_set_guest(test_guest_func func)
> diff --git a/x86/svm.h b/x86/svm.h
> index 3cd7ce8b..7cb1b898 100644
> --- a/x86/svm.h
> +++ b/x86/svm.h
> @@ -53,21 +53,14 @@ u64 *npt_get_pdpe(u64 address);
>  u64 *npt_get_pml4e(void);
>  bool smp_supported(void);
>  bool default_supported(void);
> -bool vgif_supported(void);
> -bool lbrv_supported(void);
> -bool tsc_scale_supported(void);
> -bool pause_filter_supported(void);
> -bool pause_threshold_supported(void);
>  void default_prepare(struct svm_test *test);
>  void default_prepare_gif_clear(struct svm_test *test);
>  bool default_finished(struct svm_test *test);
> -bool npt_supported(void);
>  int get_test_stage(struct svm_test *test);
>  void set_test_stage(struct svm_test *test, int s);
>  void inc_test_stage(struct svm_test *test);
>  void vmcb_ident(struct vmcb *vmcb);
>  struct regs get_regs(void);
> -void vmmcall(void);
>  int __svm_vmrun(u64 rip);
>  void __svm_bare_vmrun(void);
>  int svm_vmrun(void);
> @@ -75,17 +68,6 @@ void test_set_guest(test_guest_func func);
>  
>  extern struct vmcb *vmcb;
>  
> -static inline void stgi(void)
> -{
> -    asm volatile ("stgi");
> -}
> -
> -static inline void clgi(void)
> -{
> -    asm volatile ("clgi");
> -}
> -
> -
>  
>  #define SAVE_GPR_C                              \
>          "xchg %%rbx, regs+0x8\n\t"              \
> diff --git a/x86/svm_npt.c b/x86/svm_npt.c
> index b791f1ac..8aac0bb6 100644
> --- a/x86/svm_npt.c
> +++ b/x86/svm_npt.c
> @@ -2,6 +2,7 @@
>  #include "vm.h"
>  #include "alloc_page.h"
>  #include "vmalloc.h"
> +#include "svm_lib.h"
>  
>  static void *scratch_page;
>  
> diff --git a/x86/svm_tests.c b/x86/svm_tests.c
> index 202e9271..f86c2fa4 100644
> --- a/x86/svm_tests.c
> +++ b/x86/svm_tests.c
> @@ -12,6 +12,7 @@
>  #include "delay.h"
>  #include "x86/usermode.h"
>  #include "vmalloc.h"
> +#include "svm_lib.h"
>  
>  #define SVM_EXIT_MAX_DR_INTERCEPT 0x3f
>  
> 


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [kvm-unit-tests PATCH v3 16/27] svm: move setup_svm() to svm_lib.c
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 16/27] svm: move setup_svm() to svm_lib.c Maxim Levitsky
@ 2022-12-01 16:14   ` Emanuele Giuseppe Esposito
  0 siblings, 0 replies; 56+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-12-01 16:14 UTC (permalink / raw)
  To: Maxim Levitsky, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank



Am 22/11/2022 um 17:11 schrieb Maxim Levitsky:
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---

Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>

>  lib/x86/svm.h       |   2 +
>  lib/x86/svm_lib.c   | 107 ++++++++++++++++++++++++++++++++++++++++++++
>  lib/x86/svm_lib.h   |  12 +++++
>  x86/Makefile.x86_64 |   2 +
>  x86/svm.c           |  90 ++-----------------------------------
>  x86/svm.h           |   6 +--
>  x86/svm_tests.c     |  18 +++++---
>  7 files changed, 138 insertions(+), 99 deletions(-)
>  create mode 100644 lib/x86/svm_lib.c
> 
> diff --git a/lib/x86/svm.h b/lib/x86/svm.h
> index 8b836c13..d714dac9 100644
> --- a/lib/x86/svm.h
> +++ b/lib/x86/svm.h
> @@ -2,6 +2,8 @@
>  #ifndef SRC_LIB_X86_SVM_H_
>  #define SRC_LIB_X86_SVM_H_
>  
> +#include "libcflat.h"
> +
>  enum {
>  	INTERCEPT_INTR,
>  	INTERCEPT_NMI,
> diff --git a/lib/x86/svm_lib.c b/lib/x86/svm_lib.c
> new file mode 100644
> index 00000000..cb80f08f
> --- /dev/null
> +++ b/lib/x86/svm_lib.c
> @@ -0,0 +1,107 @@
> +
> +#include "svm_lib.h"
> +#include "libcflat.h"
> +#include "processor.h"
> +#include "desc.h"
> +#include "msr.h"
> +#include "vm.h"
> +#include "smp.h"
> +#include "alloc_page.h"
> +#include "fwcfg.h"
> +
> +/* for the nested page table*/
> +static u64 *pml4e;
> +
> +static u8 *io_bitmap;
> +static u8 io_bitmap_area[16384];
> +
> +static u8 *msr_bitmap;
> +static u8 msr_bitmap_area[MSR_BITMAP_SIZE + PAGE_SIZE];
> +
> +
> +u64 *npt_get_pte(u64 address)
> +{
> +	return get_pte(npt_get_pml4e(), (void *)address);
> +}
> +
> +u64 *npt_get_pde(u64 address)
> +{
> +	struct pte_search search;
> +
> +	search = find_pte_level(npt_get_pml4e(), (void *)address, 2);
> +	return search.pte;
> +}
> +
> +u64 *npt_get_pdpe(u64 address)
> +{
> +	struct pte_search search;
> +
> +	search = find_pte_level(npt_get_pml4e(), (void *)address, 3);
> +	return search.pte;
> +}
> +
> +u64 *npt_get_pml4e(void)
> +{
> +	return pml4e;
> +}
> +
> +u8 *svm_get_msr_bitmap(void)
> +{
> +	return msr_bitmap;
> +}
> +
> +u8 *svm_get_io_bitmap(void)
> +{
> +	return io_bitmap;
> +}
> +
> +static void set_additional_vcpu_msr(void *msr_efer)
> +{
> +	void *hsave = alloc_page();
> +
> +	wrmsr(MSR_VM_HSAVE_PA, virt_to_phys(hsave));
> +	wrmsr(MSR_EFER, (ulong)msr_efer | EFER_SVME);
> +}
> +
> +static void setup_npt(void)
> +{
> +	u64 size = fwcfg_get_u64(FW_CFG_RAM_SIZE);
> +
> +	/* Ensure all <4gb is mapped, e.g. if there's no RAM above 4gb. */
> +	if (size < BIT_ULL(32))
> +		size = BIT_ULL(32);
> +
> +	pml4e = alloc_page();
> +
> +	/* NPT accesses are treated as "user" accesses. */
> +	__setup_mmu_range(pml4e, 0, size, X86_MMU_MAP_USER);
> +}
> +
> +void setup_svm(void)
> +{
> +	void *hsave = alloc_page();
> +	int i;
> +
> +	wrmsr(MSR_VM_HSAVE_PA, virt_to_phys(hsave));
> +	wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
> +
> +	io_bitmap = (void *) ALIGN((ulong)io_bitmap_area, PAGE_SIZE);
> +
> +	msr_bitmap = (void *) ALIGN((ulong)msr_bitmap_area, PAGE_SIZE);
> +
> +	if (!npt_supported())
> +		return;
> +
> +	for (i = 1; i < cpu_count(); i++)
> +		on_cpu(i, (void *)set_additional_vcpu_msr, (void *)rdmsr(MSR_EFER));
> +
> +	printf("NPT detected - running all tests with NPT enabled\n");
> +
> +	/*
> +	 * Nested paging supported - Build a nested page table
> +	 * Build the page-table bottom-up and map everything with 4k
> +	 * pages to get enough granularity for the NPT unit-tests.
> +	 */
> +
> +	setup_npt();
> +}
> diff --git a/lib/x86/svm_lib.h b/lib/x86/svm_lib.h
> index 04910281..b491eee6 100644
> --- a/lib/x86/svm_lib.h
> +++ b/lib/x86/svm_lib.h
> @@ -49,5 +49,17 @@ static inline void clgi(void)
>  	asm volatile ("clgi");
>  }
>  
> +void setup_svm(void);
> +
> +u64 *npt_get_pte(u64 address);
> +u64 *npt_get_pde(u64 address);
> +u64 *npt_get_pdpe(u64 address);
> +u64 *npt_get_pml4e(void);
> +
> +u8 *svm_get_msr_bitmap(void);
> +u8 *svm_get_io_bitmap(void);
> +
> +#define MSR_BITMAP_SIZE 8192
> +
>  
>  #endif /* SRC_LIB_X86_SVM_LIB_H_ */
> diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64
> index f76ff18a..5e4c4cc0 100644
> --- a/x86/Makefile.x86_64
> +++ b/x86/Makefile.x86_64
> @@ -19,6 +19,8 @@ COMMON_CFLAGS += -mno-red-zone -mno-sse -mno-sse2 $(fcf_protection_full)
>  cflatobjs += lib/x86/setjmp64.o
>  cflatobjs += lib/x86/intel-iommu.o
>  cflatobjs += lib/x86/usermode.o
> +cflatobjs += lib/x86/svm_lib.o
> +
>  
>  tests = $(TEST_DIR)/apic.$(exe) \
>  	  $(TEST_DIR)/idt_test.$(exe) \
> diff --git a/x86/svm.c b/x86/svm.c
> index 8d90a242..9edf5500 100644
> --- a/x86/svm.c
> +++ b/x86/svm.c
> @@ -16,35 +16,8 @@
>  #include "apic.h"
>  #include "svm_lib.h"
>  
> -/* for the nested page table*/
> -u64 *pml4e;
> -
>  struct vmcb *vmcb;
>  
> -u64 *npt_get_pte(u64 address)
> -{
> -	return get_pte(npt_get_pml4e(), (void*)address);
> -}
> -
> -u64 *npt_get_pde(u64 address)
> -{
> -	struct pte_search search;
> -	search = find_pte_level(npt_get_pml4e(), (void*)address, 2);
> -	return search.pte;
> -}
> -
> -u64 *npt_get_pdpe(u64 address)
> -{
> -	struct pte_search search;
> -	search = find_pte_level(npt_get_pml4e(), (void*)address, 3);
> -	return search.pte;
> -}
> -
> -u64 *npt_get_pml4e(void)
> -{
> -	return pml4e;
> -}
> -
>  bool smp_supported(void)
>  {
>  	return cpu_count() > 1;
> @@ -112,12 +85,6 @@ static void test_thunk(struct svm_test *test)
>  	vmmcall();
>  }
>  
> -u8 *io_bitmap;
> -u8 io_bitmap_area[16384];
> -
> -u8 *msr_bitmap;
> -u8 msr_bitmap_area[MSR_BITMAP_SIZE + PAGE_SIZE];
> -
>  void vmcb_ident(struct vmcb *vmcb)
>  {
>  	u64 vmcb_phys = virt_to_phys(vmcb);
> @@ -153,12 +120,12 @@ void vmcb_ident(struct vmcb *vmcb)
>  	ctrl->intercept = (1ULL << INTERCEPT_VMRUN) |
>  		(1ULL << INTERCEPT_VMMCALL) |
>  		(1ULL << INTERCEPT_SHUTDOWN);
> -	ctrl->iopm_base_pa = virt_to_phys(io_bitmap);
> -	ctrl->msrpm_base_pa = virt_to_phys(msr_bitmap);
> +	ctrl->iopm_base_pa = virt_to_phys(svm_get_io_bitmap());
> +	ctrl->msrpm_base_pa = virt_to_phys(svm_get_msr_bitmap());
>  
>  	if (npt_supported()) {
>  		ctrl->nested_ctl = 1;
> -		ctrl->nested_cr3 = (u64)pml4e;
> +		ctrl->nested_cr3 = (u64)npt_get_pml4e();
>  		ctrl->tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
>  	}
>  }
> @@ -247,57 +214,6 @@ static noinline void test_run(struct svm_test *test)
>  		test->on_vcpu_done = true;
>  }
>  
> -static void set_additional_vcpu_msr(void *msr_efer)
> -{
> -	void *hsave = alloc_page();
> -
> -	wrmsr(MSR_VM_HSAVE_PA, virt_to_phys(hsave));
> -	wrmsr(MSR_EFER, (ulong)msr_efer | EFER_SVME);
> -}
> -
> -static void setup_npt(void)
> -{
> -	u64 size = fwcfg_get_u64(FW_CFG_RAM_SIZE);
> -
> -	/* Ensure all <4gb is mapped, e.g. if there's no RAM above 4gb. */
> -	if (size < BIT_ULL(32))
> -		size = BIT_ULL(32);
> -
> -	pml4e = alloc_page();
> -
> -	/* NPT accesses are treated as "user" accesses. */
> -	__setup_mmu_range(pml4e, 0, size, X86_MMU_MAP_USER);
> -}
> -
> -static void setup_svm(void)
> -{
> -	void *hsave = alloc_page();
> -	int i;
> -
> -	wrmsr(MSR_VM_HSAVE_PA, virt_to_phys(hsave));
> -	wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
> -
> -	io_bitmap = (void *) ALIGN((ulong)io_bitmap_area, PAGE_SIZE);
> -
> -	msr_bitmap = (void *) ALIGN((ulong)msr_bitmap_area, PAGE_SIZE);
> -
> -	if (!npt_supported())
> -		return;
> -
> -	for (i = 1; i < cpu_count(); i++)
> -		on_cpu(i, (void *)set_additional_vcpu_msr, (void *)rdmsr(MSR_EFER));
> -
> -	printf("NPT detected - running all tests with NPT enabled\n");
> -
> -	/*
> -	 * Nested paging supported - Build a nested page table
> -	 * Build the page-table bottom-up and map everything with 4k
> -	 * pages to get enough granularity for the NPT unit-tests.
> -	 */
> -
> -	setup_npt();
> -}
> -
>  int matched;
>  
>  static bool
> diff --git a/x86/svm.h b/x86/svm.h
> index 7cb1b898..67f3205d 100644
> --- a/x86/svm.h
> +++ b/x86/svm.h
> @@ -5,7 +5,6 @@
>  #include <x86/svm.h>
>  
>  
> -#define MSR_BITMAP_SIZE 8192
>  #define LBR_CTL_ENABLE_MASK BIT_ULL(0)
>  
>  struct svm_test {
> @@ -47,10 +46,7 @@ struct regs {
>  typedef void (*test_guest_func)(struct svm_test *);
>  
>  int run_svm_tests(int ac, char **av, struct svm_test *svm_tests);
> -u64 *npt_get_pte(u64 address);
> -u64 *npt_get_pde(u64 address);
> -u64 *npt_get_pdpe(u64 address);
> -u64 *npt_get_pml4e(void);
> +
>  bool smp_supported(void);
>  bool default_supported(void);
>  void default_prepare(struct svm_test *test);
> diff --git a/x86/svm_tests.c b/x86/svm_tests.c
> index f86c2fa4..712d24e2 100644
> --- a/x86/svm_tests.c
> +++ b/x86/svm_tests.c
> @@ -307,14 +307,13 @@ static bool check_next_rip(struct svm_test *test)
>  	return address == vmcb->control.next_rip;
>  }
>  
> -extern u8 *msr_bitmap;
>  
>  static void prepare_msr_intercept(struct svm_test *test)
>  {
>  	default_prepare(test);
>  	vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT);
>  	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
> -	memset(msr_bitmap, 0xff, MSR_BITMAP_SIZE);
> +	memset(svm_get_msr_bitmap(), 0xff, MSR_BITMAP_SIZE);
>  }
>  
>  static void test_msr_intercept(struct svm_test *test)
> @@ -425,7 +424,7 @@ static bool msr_intercept_finished(struct svm_test *test)
>  
>  static bool check_msr_intercept(struct svm_test *test)
>  {
> -	memset(msr_bitmap, 0, MSR_BITMAP_SIZE);
> +	memset(svm_get_msr_bitmap(), 0, MSR_BITMAP_SIZE);
>  	return (test->scratch == -2);
>  }
>  
> @@ -537,10 +536,10 @@ static bool check_mode_switch(struct svm_test *test)
>  	return test->scratch == 2;
>  }
>  
> -extern u8 *io_bitmap;
> -
>  static void prepare_ioio(struct svm_test *test)
>  {
> +	u8 *io_bitmap = svm_get_io_bitmap();
> +
>  	vmcb->control.intercept |= (1ULL << INTERCEPT_IOIO_PROT);
>  	test->scratch = 0;
>  	memset(io_bitmap, 0, 8192);
> @@ -549,6 +548,8 @@ static void prepare_ioio(struct svm_test *test)
>  
>  static void test_ioio(struct svm_test *test)
>  {
> +	u8 *io_bitmap = svm_get_io_bitmap();
> +
>  	// stage 0, test IO pass
>  	inb(0x5000);
>  	outb(0x0, 0x5000);
> @@ -612,7 +613,6 @@ static void test_ioio(struct svm_test *test)
>  		goto fail;
>  
>  	return;
> -
>  fail:
>  	report_fail("stage %d", get_test_stage(test));
>  	test->scratch = -1;
> @@ -621,6 +621,7 @@ fail:
>  static bool ioio_finished(struct svm_test *test)
>  {
>  	unsigned port, size;
> +	u8 *io_bitmap = svm_get_io_bitmap();
>  
>  	/* Only expect IOIO intercepts */
>  	if (vmcb->control.exit_code == SVM_EXIT_VMMCALL)
> @@ -645,6 +646,8 @@ static bool ioio_finished(struct svm_test *test)
>  
>  static bool check_ioio(struct svm_test *test)
>  {
> +	u8 *io_bitmap = svm_get_io_bitmap();
> +
>  	memset(io_bitmap, 0, 8193);
>  	return test->scratch != -1;
>  }
> @@ -2316,7 +2319,8 @@ static void test_msrpm_iopm_bitmap_addrs(void)
>  {
>  	u64 saved_intercept = vmcb->control.intercept;
>  	u64 addr_beyond_limit = 1ull << cpuid_maxphyaddr();
> -	u64 addr = virt_to_phys(msr_bitmap) & (~((1ull << 12) - 1));
> +	u64 addr = virt_to_phys(svm_get_msr_bitmap()) & (~((1ull << 12) - 1));
> +	u8 *io_bitmap = svm_get_io_bitmap();
>  
>  	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT,
>  			 addr_beyond_limit - 2 * PAGE_SIZE, SVM_EXIT_ERR,
> 
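As a quick illustration of the new interface, a minimal sketch of how a
test could use the accessors instead of the removed globals (the
function below is hypothetical; the accessors and MSR_BITMAP_SIZE are
from the hunks above):

	/* hypothetical test setup: go through the accessors instead of
	 * touching the former msr_bitmap/io_bitmap globals directly */
	static void example_prepare(struct svm_test *test)
	{
		u8 *msrpm = svm_get_msr_bitmap();
		u8 *iopm = svm_get_io_bitmap();

		/* intercept every MSR and every I/O port */
		memset(msrpm, 0xff, MSR_BITMAP_SIZE);
		memset(iopm, 0xff, 8192);
	}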



* Re: [kvm-unit-tests PATCH v3 18/27] svm: move vmcb_ident to svm_lib.c
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 18/27] svm: move vmcb_ident to svm_lib.c Maxim Levitsky
@ 2022-12-01 16:18   ` Emanuele Giuseppe Esposito
  2022-12-06 14:11     ` Maxim Levitsky
  0 siblings, 1 reply; 56+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-12-01 16:18 UTC (permalink / raw)
  To: Maxim Levitsky, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank



On 22/11/2022 17:11, Maxim Levitsky wrote:
> Extract vmcb_ident to svm_lib.c
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>

Not sure if it applies to kvm-unit-tests, but the indentation of the
vmcb_set_seg parameters seems a little bit off (see the sketch below).

If that's fine:
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
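
For reference, a sketch of the continuation-line alignment the comment
refers to (parameters aligned to the opening parenthesis, kernel style):

	void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
			  u64 base, u32 limit, u32 attr);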

> ---
>  lib/x86/svm_lib.c | 54 +++++++++++++++++++++++++++++++++++++++++++++++
>  lib/x86/svm_lib.h |  4 ++++
>  x86/svm.c         | 54 -----------------------------------------------
>  x86/svm.h         |  1 -
>  4 files changed, 58 insertions(+), 55 deletions(-)
> 
> diff --git a/lib/x86/svm_lib.c b/lib/x86/svm_lib.c
> index c7194909..aed757a1 100644
> --- a/lib/x86/svm_lib.c
> +++ b/lib/x86/svm_lib.c
> @@ -109,3 +109,57 @@ bool setup_svm(void)
>  	setup_npt();
>  	return true;
>  }
> +
> +void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
> +			 u64 base, u32 limit, u32 attr)
> +{
> +	seg->selector = selector;
> +	seg->attrib = attr;
> +	seg->limit = limit;
> +	seg->base = base;
> +}
> +
> +void vmcb_ident(struct vmcb *vmcb)
> +{
> +	u64 vmcb_phys = virt_to_phys(vmcb);
> +	struct vmcb_save_area *save = &vmcb->save;
> +	struct vmcb_control_area *ctrl = &vmcb->control;
> +	u32 data_seg_attr = 3 | SVM_SELECTOR_S_MASK | SVM_SELECTOR_P_MASK
> +		| SVM_SELECTOR_DB_MASK | SVM_SELECTOR_G_MASK;
> +	u32 code_seg_attr = 9 | SVM_SELECTOR_S_MASK | SVM_SELECTOR_P_MASK
> +		| SVM_SELECTOR_L_MASK | SVM_SELECTOR_G_MASK;
> +	struct descriptor_table_ptr desc_table_ptr;
> +
> +	memset(vmcb, 0, sizeof(*vmcb));
> +	asm volatile ("vmsave %0" : : "a"(vmcb_phys) : "memory");
> +	vmcb_set_seg(&save->es, read_es(), 0, -1U, data_seg_attr);
> +	vmcb_set_seg(&save->cs, read_cs(), 0, -1U, code_seg_attr);
> +	vmcb_set_seg(&save->ss, read_ss(), 0, -1U, data_seg_attr);
> +	vmcb_set_seg(&save->ds, read_ds(), 0, -1U, data_seg_attr);
> +	sgdt(&desc_table_ptr);
> +	vmcb_set_seg(&save->gdtr, 0, desc_table_ptr.base, desc_table_ptr.limit, 0);
> +	sidt(&desc_table_ptr);
> +	vmcb_set_seg(&save->idtr, 0, desc_table_ptr.base, desc_table_ptr.limit, 0);
> +	ctrl->asid = 1;
> +	save->cpl = 0;
> +	save->efer = rdmsr(MSR_EFER);
> +	save->cr4 = read_cr4();
> +	save->cr3 = read_cr3();
> +	save->cr0 = read_cr0();
> +	save->dr7 = read_dr7();
> +	save->dr6 = read_dr6();
> +	save->cr2 = read_cr2();
> +	save->g_pat = rdmsr(MSR_IA32_CR_PAT);
> +	save->dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> +	ctrl->intercept = (1ULL << INTERCEPT_VMRUN) |
> +		(1ULL << INTERCEPT_VMMCALL) |
> +		(1ULL << INTERCEPT_SHUTDOWN);
> +	ctrl->iopm_base_pa = virt_to_phys(io_bitmap);
> +	ctrl->msrpm_base_pa = virt_to_phys(msr_bitmap);
> +
> +	if (npt_supported()) {
> +		ctrl->nested_ctl = 1;
> +		ctrl->nested_cr3 = (u64)pml4e;
> +		ctrl->tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
> +	}
> +}
> diff --git a/lib/x86/svm_lib.h b/lib/x86/svm_lib.h
> index f603ff93..3bb098dc 100644
> --- a/lib/x86/svm_lib.h
> +++ b/lib/x86/svm_lib.h
> @@ -49,7 +49,11 @@ static inline void clgi(void)
>  	asm volatile ("clgi");
>  }
>  
> +void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
> +				  u64 base, u32 limit, u32 attr);
> +
>  bool setup_svm(void);
> +void vmcb_ident(struct vmcb *vmcb);
>  
>  u64 *npt_get_pte(u64 address);
>  u64 *npt_get_pde(u64 address);
> diff --git a/x86/svm.c b/x86/svm.c
> index cf246c37..5e2c3a83 100644
> --- a/x86/svm.c
> +++ b/x86/svm.c
> @@ -63,15 +63,6 @@ void inc_test_stage(struct svm_test *test)
>  	barrier();
>  }
>  
> -static void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
> -			 u64 base, u32 limit, u32 attr)
> -{
> -	seg->selector = selector;
> -	seg->attrib = attr;
> -	seg->limit = limit;
> -	seg->base = base;
> -}
> -
>  static test_guest_func guest_main;
>  
>  void test_set_guest(test_guest_func func)
> @@ -85,51 +76,6 @@ static void test_thunk(struct svm_test *test)
>  	vmmcall();
>  }
>  
> -void vmcb_ident(struct vmcb *vmcb)
> -{
> -	u64 vmcb_phys = virt_to_phys(vmcb);
> -	struct vmcb_save_area *save = &vmcb->save;
> -	struct vmcb_control_area *ctrl = &vmcb->control;
> -	u32 data_seg_attr = 3 | SVM_SELECTOR_S_MASK | SVM_SELECTOR_P_MASK
> -		| SVM_SELECTOR_DB_MASK | SVM_SELECTOR_G_MASK;
> -	u32 code_seg_attr = 9 | SVM_SELECTOR_S_MASK | SVM_SELECTOR_P_MASK
> -		| SVM_SELECTOR_L_MASK | SVM_SELECTOR_G_MASK;
> -	struct descriptor_table_ptr desc_table_ptr;
> -
> -	memset(vmcb, 0, sizeof(*vmcb));
> -	asm volatile ("vmsave %0" : : "a"(vmcb_phys) : "memory");
> -	vmcb_set_seg(&save->es, read_es(), 0, -1U, data_seg_attr);
> -	vmcb_set_seg(&save->cs, read_cs(), 0, -1U, code_seg_attr);
> -	vmcb_set_seg(&save->ss, read_ss(), 0, -1U, data_seg_attr);
> -	vmcb_set_seg(&save->ds, read_ds(), 0, -1U, data_seg_attr);
> -	sgdt(&desc_table_ptr);
> -	vmcb_set_seg(&save->gdtr, 0, desc_table_ptr.base, desc_table_ptr.limit, 0);
> -	sidt(&desc_table_ptr);
> -	vmcb_set_seg(&save->idtr, 0, desc_table_ptr.base, desc_table_ptr.limit, 0);
> -	ctrl->asid = 1;
> -	save->cpl = 0;
> -	save->efer = rdmsr(MSR_EFER);
> -	save->cr4 = read_cr4();
> -	save->cr3 = read_cr3();
> -	save->cr0 = read_cr0();
> -	save->dr7 = read_dr7();
> -	save->dr6 = read_dr6();
> -	save->cr2 = read_cr2();
> -	save->g_pat = rdmsr(MSR_IA32_CR_PAT);
> -	save->dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> -	ctrl->intercept = (1ULL << INTERCEPT_VMRUN) |
> -		(1ULL << INTERCEPT_VMMCALL) |
> -		(1ULL << INTERCEPT_SHUTDOWN);
> -	ctrl->iopm_base_pa = virt_to_phys(svm_get_io_bitmap());
> -	ctrl->msrpm_base_pa = virt_to_phys(svm_get_msr_bitmap());
> -
> -	if (npt_supported()) {
> -		ctrl->nested_ctl = 1;
> -		ctrl->nested_cr3 = (u64)npt_get_pml4e();
> -		ctrl->tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
> -	}
> -}
> -
>  struct regs regs;
>  
>  struct regs get_regs(void)
> diff --git a/x86/svm.h b/x86/svm.h
> index 67f3205d..a4aabeb2 100644
> --- a/x86/svm.h
> +++ b/x86/svm.h
> @@ -55,7 +55,6 @@ bool default_finished(struct svm_test *test);
>  int get_test_stage(struct svm_test *test);
>  void set_test_stage(struct svm_test *test, int s);
>  void inc_test_stage(struct svm_test *test);
> -void vmcb_ident(struct vmcb *vmcb);
>  struct regs get_regs(void);
>  int __svm_vmrun(u64 rip);
>  void __svm_bare_vmrun(void);
> 



* Re: [kvm-unit-tests PATCH v3 21/27] svm: cleanup the default_prepare
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 21/27] svm: cleanup the default_prepare Maxim Levitsky
@ 2022-12-02  9:45   ` Emanuele Giuseppe Esposito
  2022-12-06 13:56     ` Maxim Levitsky
  0 siblings, 1 reply; 56+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-12-02  9:45 UTC (permalink / raw)
  To: Maxim Levitsky, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank



On 22/11/2022 17:11, Maxim Levitsky wrote:
> default_prepare only calls vmcb_ident, which is called before
> each test anyway.
> 
> Also don't call this now-empty function from other
> .prepare functions.
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  x86/svm.c       |  1 -
>  x86/svm_tests.c | 18 ------------------
>  2 files changed, 19 deletions(-)
> 
> diff --git a/x86/svm.c b/x86/svm.c
> index 2ab553a5..5667402b 100644
> --- a/x86/svm.c
> +++ b/x86/svm.c
> @@ -30,7 +30,6 @@ bool default_supported(void)
>  
>  void default_prepare(struct svm_test *test)
>  {
> -	vmcb_ident(vmcb);
>  }

Removing it makes sense, but maybe remove the function altogether, since
it is not used anymore, and then change test_run() to handle ->prepare ==
NULL? (a sketch of that follows below)
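
A minimal sketch of that suggestion, assuming default_prepare is dropped
and tests simply leave .prepare unset:

	/* sketch: tolerate tests without a .prepare callback */
	if (test->prepare)
		test->prepare(test);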

>  
>  void default_prepare_gif_clear(struct svm_test *test)
> diff --git a/x86/svm_tests.c b/x86/svm_tests.c
> index 70e41300..3b68718e 100644
> --- a/x86/svm_tests.c
> +++ b/x86/svm_tests.c
> @@ -69,7 +69,6 @@ static bool check_vmrun(struct svm_test *test)
>  
>  static void prepare_rsm_intercept(struct svm_test *test)
>  {
> -	default_prepare(test);
>  	vmcb->control.intercept |= 1 << INTERCEPT_RSM;
>  	vmcb->control.intercept_exceptions |= (1ULL << UD_VECTOR);
>  }
> @@ -115,7 +114,6 @@ static bool finished_rsm_intercept(struct svm_test *test)
>  
>  static void prepare_cr3_intercept(struct svm_test *test)
>  {
> -	default_prepare(test);
>  	vmcb->control.intercept_cr_read |= 1 << 3;
>  }
>  
> @@ -149,7 +147,6 @@ static void corrupt_cr3_intercept_bypass(void *_test)
>  
>  static void prepare_cr3_intercept_bypass(struct svm_test *test)
>  {
> -	default_prepare(test);
>  	vmcb->control.intercept_cr_read |= 1 << 3;
>  	on_cpu_async(1, corrupt_cr3_intercept_bypass, test);
>  }
> @@ -169,7 +166,6 @@ static void test_cr3_intercept_bypass(struct svm_test *test)
>  
>  static void prepare_dr_intercept(struct svm_test *test)
>  {
> -	default_prepare(test);
>  	vmcb->control.intercept_dr_read = 0xff;
>  	vmcb->control.intercept_dr_write = 0xff;
>  }
> @@ -310,7 +306,6 @@ static bool check_next_rip(struct svm_test *test)
>  
>  static void prepare_msr_intercept(struct svm_test *test)
>  {
> -	default_prepare(test);
>  	vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT);
>  	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
>  	memset(svm_get_msr_bitmap(), 0xff, MSR_BITMAP_SIZE);
> @@ -711,7 +706,6 @@ static bool tsc_adjust_supported(void)
>  
>  static void tsc_adjust_prepare(struct svm_test *test)
>  {
> -	default_prepare(test);
>  	vmcb->control.tsc_offset = TSC_OFFSET_VALUE;
>  
>  	wrmsr(MSR_IA32_TSC_ADJUST, -TSC_ADJUST_VALUE);
> @@ -811,7 +805,6 @@ static void svm_tsc_scale_test(void)
>  
>  static void latency_prepare(struct svm_test *test)
>  {
> -	default_prepare(test);
>  	runs = LATENCY_RUNS;
>  	latvmrun_min = latvmexit_min = -1ULL;
>  	latvmrun_max = latvmexit_max = 0;
> @@ -884,7 +877,6 @@ static bool latency_check(struct svm_test *test)
>  
>  static void lat_svm_insn_prepare(struct svm_test *test)
>  {
> -	default_prepare(test);
>  	runs = LATENCY_RUNS;
>  	latvmload_min = latvmsave_min = latstgi_min = latclgi_min = -1ULL;
>  	latvmload_max = latvmsave_max = latstgi_max = latclgi_max = 0;
> @@ -965,7 +957,6 @@ static void pending_event_prepare(struct svm_test *test)
>  {
>  	int ipi_vector = 0xf1;
>  
> -	default_prepare(test);
>  
>  	pending_event_ipi_fired = false;
>  
> @@ -1033,8 +1024,6 @@ static bool pending_event_check(struct svm_test *test)
>  
>  static void pending_event_cli_prepare(struct svm_test *test)
>  {
> -	default_prepare(test);
> -
>  	pending_event_ipi_fired = false;
>  
>  	handle_irq(0xf1, pending_event_ipi_isr);
> @@ -1139,7 +1128,6 @@ static void timer_isr(isr_regs_t *regs)
>  
>  static void interrupt_prepare(struct svm_test *test)
>  {
> -	default_prepare(test);
>  	handle_irq(TIMER_VECTOR, timer_isr);
>  	timer_fired = false;
>  	set_test_stage(test, 0);
> @@ -1272,7 +1260,6 @@ static void nmi_handler(struct ex_regs *regs)
>  
>  static void nmi_prepare(struct svm_test *test)
>  {
> -	default_prepare(test);
>  	nmi_fired = false;
>  	handle_exception(NMI_VECTOR, nmi_handler);
>  	set_test_stage(test, 0);
> @@ -1450,7 +1437,6 @@ static void my_isr(struct ex_regs *r)
>  
>  static void exc_inject_prepare(struct svm_test *test)
>  {
> -	default_prepare(test);
>  	handle_exception(DE_VECTOR, my_isr);
>  	handle_exception(NMI_VECTOR, my_isr);
>  }
> @@ -1519,7 +1505,6 @@ static void virq_isr(isr_regs_t *regs)
>  static void virq_inject_prepare(struct svm_test *test)
>  {
>  	handle_irq(0xf1, virq_isr);
> -	default_prepare(test);
>  	vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
>  		(0x0f << V_INTR_PRIO_SHIFT); // Set to the highest priority
>  	vmcb->control.int_vector = 0xf1;
> @@ -1682,7 +1667,6 @@ static void reg_corruption_isr(isr_regs_t *regs)
>  
>  static void reg_corruption_prepare(struct svm_test *test)
>  {
> -	default_prepare(test);
>  	set_test_stage(test, 0);
>  
>  	vmcb->control.int_ctl = V_INTR_MASKING_MASK;
> @@ -1877,7 +1861,6 @@ static void host_rflags_db_handler(struct ex_regs *r)
>  
>  static void host_rflags_prepare(struct svm_test *test)
>  {
> -	default_prepare(test);
>  	handle_exception(DB_VECTOR, host_rflags_db_handler);
>  	set_test_stage(test, 0);
>  }
> @@ -2610,7 +2593,6 @@ static void svm_vmload_vmsave(void)
>  
>  static void prepare_vgif_enabled(struct svm_test *test)
>  {
> -	default_prepare(test);
>  }
>  
>  static void test_vgif(struct svm_test *test)
> 



* Re: [kvm-unit-tests PATCH v3 20/27] svm: move v2 tests run into test_run
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 20/27] svm: move v2 tests run into test_run Maxim Levitsky
@ 2022-12-02  9:53   ` Emanuele Giuseppe Esposito
  0 siblings, 0 replies; 56+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-12-02  9:53 UTC (permalink / raw)
  To: Maxim Levitsky, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank



On 22/11/2022 17:11, Maxim Levitsky wrote:
> Move the running of v2 tests into test_run, so that the code that runs
> a test lives in one place and v2 tests can be run on a non-zero vCPU
> if needed.
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---

Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
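
For context, a hedged sketch of what a v2-style entry in the svm_tests
table could look like after this patch (the test name and function are
hypothetical; .name, .v2 and .on_vcpu are existing struct svm_test
fields):

	/* hypothetical entry: test_run() now invokes ->v2() itself,
	 * after vmcb_ident(), optionally on a non-zero vCPU */
	{ .name = "example_v2", .v2 = svm_example_v2_test, .on_vcpu = 1 },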

>  x86/svm.c | 33 +++++++++++++++++++--------------
>  1 file changed, 19 insertions(+), 14 deletions(-)
> 
> diff --git a/x86/svm.c b/x86/svm.c
> index 220bce66..2ab553a5 100644
> --- a/x86/svm.c
> +++ b/x86/svm.c
> @@ -106,6 +106,13 @@ int svm_vmrun(void)
>  
>  static noinline void test_run(struct svm_test *test)
>  {
> +	if (test->v2) {
> +		vmcb_ident(vmcb);
> +		v2_test = test;
> +		test->v2();
> +		return;
> +	}
> +
>  	cli();
>  	vmcb_ident(vmcb);
>  
> @@ -196,21 +203,19 @@ int run_svm_tests(int ac, char **av, struct svm_test *svm_tests)
>  			continue;
>  		if (svm_tests[i].supported && !svm_tests[i].supported())
>  			continue;
> -		if (svm_tests[i].v2 == NULL) {
> -			if (svm_tests[i].on_vcpu) {
> -				if (cpu_count() <= svm_tests[i].on_vcpu)
> -					continue;
> -				on_cpu_async(svm_tests[i].on_vcpu, (void *)test_run, &svm_tests[i]);
> -				while (!svm_tests[i].on_vcpu_done)
> -					cpu_relax();
> -			}
> -			else
> -				test_run(&svm_tests[i]);
> -		} else {
> -			vmcb_ident(vmcb);
> -			v2_test = &(svm_tests[i]);
> -			svm_tests[i].v2();
> +
> +		if (!svm_tests[i].on_vcpu) {
> +			test_run(&svm_tests[i]);
> +			continue;
>  		}
> +
> +		if (cpu_count() <= svm_tests[i].on_vcpu)
> +			continue;
> +
> +		on_cpu_async(svm_tests[i].on_vcpu, (void *)test_run, &svm_tests[i]);
> +
> +		while (!svm_tests[i].on_vcpu_done)
> +			cpu_relax();
>  	}
>  
>  	if (!matched)
> 



* Re: [kvm-unit-tests PATCH v3 19/27] svm: rewrite vm entry macros
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 19/27] svm: rewrite vm entry macros Maxim Levitsky
@ 2022-12-02 10:14   ` Emanuele Giuseppe Esposito
  2022-12-06 13:56     ` Maxim Levitsky
  0 siblings, 1 reply; 56+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-12-02 10:14 UTC (permalink / raw)
  To: Maxim Levitsky, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank



On 22/11/2022 17:11, Maxim Levitsky wrote:
> Make the SVM VM entry macros not use the hardcoded regs label,
> and also simplify them as much as possible.
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  lib/x86/svm_lib.h | 71 +++++++++++++++++++++++++++++++++++++++++++++++
>  x86/svm.c         | 58 ++++++++++++--------------------------
>  x86/svm.h         | 70 ++--------------------------------------------
>  x86/svm_tests.c   | 24 ++++++++++------
>  4 files changed, 106 insertions(+), 117 deletions(-)
> 
> diff --git a/lib/x86/svm_lib.h b/lib/x86/svm_lib.h
> index 3bb098dc..f9c2b352 100644
> --- a/lib/x86/svm_lib.h
> +++ b/lib/x86/svm_lib.h
> @@ -66,4 +66,75 @@ u8 *svm_get_io_bitmap(void);
>  #define MSR_BITMAP_SIZE 8192
>  
>  
> +struct svm_gprs {
> +	u64 rax;
> +	u64 rbx;
> +	u64 rcx;
> +	u64 rdx;
> +	u64 rbp;
> +	u64 rsi;
> +	u64 rdi;
> +	u64 r8;
> +	u64 r9;
> +	u64 r10;
> +	u64 r11;
> +	u64 r12;
> +	u64 r13;
> +	u64 r14;
> +	u64 r15;
> +	u64 rsp;
> +};
> +
> +#define SWAP_GPRS \
> +	"xchg %%rbx, 0x08(%%rax)\n"           \
> +	"xchg %%rcx, 0x10(%%rax)\n"           \
> +	"xchg %%rdx, 0x18(%%rax)\n"           \
> +	"xchg %%rbp, 0x20(%%rax)\n"           \
> +	"xchg %%rsi, 0x28(%%rax)\n"           \
> +	"xchg %%rdi, 0x30(%%rax)\n"           \
> +	"xchg %%r8,  0x38(%%rax)\n"           \
> +	"xchg %%r9,  0x40(%%rax)\n"           \
> +	"xchg %%r10, 0x48(%%rax)\n"           \
> +	"xchg %%r11, 0x50(%%rax)\n"           \
> +	"xchg %%r12, 0x58(%%rax)\n"           \
> +	"xchg %%r13, 0x60(%%rax)\n"           \
> +	"xchg %%r14, 0x68(%%rax)\n"           \
> +	"xchg %%r15, 0x70(%%rax)\n"           \
> +	\
> +
> +
> +#define __SVM_VMRUN(vmcb, regs, label)        \
> +{                                             \
> +	u32 dummy;                            \
> +\
> +	(vmcb)->save.rax = (regs)->rax;       \
> +	(vmcb)->save.rsp = (regs)->rsp;       \
> +\
> +	asm volatile (                        \
> +		"vmload %%rax\n"              \
> +		"push %%rbx\n"                \
> +		"push %%rax\n"                \
> +		"mov %%rbx, %%rax\n"          \
> +		SWAP_GPRS                     \
> +		"pop %%rax\n"                 \
> +		".global " label "\n"         \
> +		label ": vmrun %%rax\n"       \
> +		"vmsave %%rax\n"              \
> +		"pop %%rax\n"                 \
> +		SWAP_GPRS                     \
> +		: "=a"(dummy),                \
> +		  "=b"(dummy)                 \
> +		: "a" (virt_to_phys(vmcb)),   \
> +		  "b"(regs)                   \
> +		/* clobbers*/                 \
> +		: "memory"                    \
> +	);                                    \
> +\
> +	(regs)->rax = (vmcb)->save.rax;       \
> +	(regs)->rsp = (vmcb)->save.rsp;       \
> +}
> +
> +#define SVM_VMRUN(vmcb, regs) \
> +	__SVM_VMRUN(vmcb, regs, "vmrun_dummy_label_%=")
> +
>  #endif /* SRC_LIB_X86_SVM_LIB_H_ */
> diff --git a/x86/svm.c b/x86/svm.c
> index 5e2c3a83..220bce66 100644
> --- a/x86/svm.c
> +++ b/x86/svm.c
> @@ -76,16 +76,13 @@ static void test_thunk(struct svm_test *test)
>  	vmmcall();
>  }
>  
> -struct regs regs;
> +static struct svm_gprs regs;
>  
> -struct regs get_regs(void)
> +struct svm_gprs *get_regs(void)
>  {
> -	return regs;
> +	return &regs;
>  }
>  
> -// rax handled specially below
> -
> -
>  struct svm_test *v2_test;
>  
>  
> @@ -94,16 +91,10 @@ u64 guest_stack[10000];
>  int __svm_vmrun(u64 rip)
>  {
>  	vmcb->save.rip = (ulong)rip;
> -	vmcb->save.rsp = (ulong)(guest_stack + ARRAY_SIZE(guest_stack));
> +	regs.rsp = (ulong)(guest_stack + ARRAY_SIZE(guest_stack));
>  	regs.rdi = (ulong)v2_test;
>  
> -	asm volatile (
> -		      ASM_PRE_VMRUN_CMD
> -		      "vmrun %%rax\n\t"               \
> -		      ASM_POST_VMRUN_CMD
> -		      :
> -		      : "a" (virt_to_phys(vmcb))
> -		      : "memory", "r15");
> +	SVM_VMRUN(vmcb, &regs);
>  
>  	return (vmcb->control.exit_code);
>  }
> @@ -113,43 +104,28 @@ int svm_vmrun(void)
>  	return __svm_vmrun((u64)test_thunk);
>  }
>  
> -extern u8 vmrun_rip;
> -
>  static noinline void test_run(struct svm_test *test)
>  {
> -	u64 vmcb_phys = virt_to_phys(vmcb);
> -
>  	cli();
>  	vmcb_ident(vmcb);
>  
>  	test->prepare(test);
>  	guest_main = test->guest_func;
>  	vmcb->save.rip = (ulong)test_thunk;
> -	vmcb->save.rsp = (ulong)(guest_stack + ARRAY_SIZE(guest_stack));
> +	regs.rsp = (ulong)(guest_stack + ARRAY_SIZE(guest_stack));
>  	regs.rdi = (ulong)test;
>  	do {
> -		struct svm_test *the_test = test;
> -		u64 the_vmcb = vmcb_phys;
> -		asm volatile (
> -			      "clgi;\n\t" // semi-colon needed for LLVM compatibility
> -			      "sti \n\t"
> -			      "call *%c[PREPARE_GIF_CLEAR](%[test]) \n \t"
> -			      "mov %[vmcb_phys], %%rax \n\t"
> -			      ASM_PRE_VMRUN_CMD
> -			      ".global vmrun_rip\n\t"		\
> -			      "vmrun_rip: vmrun %%rax\n\t"    \
> -			      ASM_POST_VMRUN_CMD
> -			      "cli \n\t"
> -			      "stgi"
> -			      : // inputs clobbered by the guest:
> -				"=D" (the_test),            // first argument register
> -				"=b" (the_vmcb)             // callee save register!
> -			      : [test] "0" (the_test),
> -				[vmcb_phys] "1"(the_vmcb),
> -				[PREPARE_GIF_CLEAR] "i" (offsetof(struct svm_test, prepare_gif_clear))
> -			      : "rax", "rcx", "rdx", "rsi",
> -				"r8", "r9", "r10", "r11" , "r12", "r13", "r14", "r15",
> -				"memory");
> +
> +		clgi();
> +		sti();
> +
> +		test->prepare_gif_clear(test);
> +
> +		__SVM_VMRUN(vmcb, &regs, "vmrun_rip");
> +
> +		cli();
> +		stgi();
> +
>  		++test->exits;
>  	} while (!test->finished(test));
>  	sti();
> diff --git a/x86/svm.h b/x86/svm.h
> index a4aabeb2..6f809ce3 100644
> --- a/x86/svm.h
> +++ b/x86/svm.h
> @@ -23,26 +23,6 @@ struct svm_test {
>  	bool on_vcpu_done;
>  };
>  
> -struct regs {
> -	u64 rax;
> -	u64 rbx;
> -	u64 rcx;
> -	u64 rdx;
> -	u64 cr2;
> -	u64 rbp;
> -	u64 rsi;
> -	u64 rdi;
> -	u64 r8;
> -	u64 r9;
> -	u64 r10;
> -	u64 r11;
> -	u64 r12;
> -	u64 r13;
> -	u64 r14;
> -	u64 r15;
> -	u64 rflags;
> -};
> -
>  typedef void (*test_guest_func)(struct svm_test *);
>  
>  int run_svm_tests(int ac, char **av, struct svm_test *svm_tests);
> @@ -55,58 +35,12 @@ bool default_finished(struct svm_test *test);
>  int get_test_stage(struct svm_test *test);
>  void set_test_stage(struct svm_test *test, int s);
>  void inc_test_stage(struct svm_test *test);
> -struct regs get_regs(void);
> +struct svm_gprs *get_regs(void);
>  int __svm_vmrun(u64 rip);
>  void __svm_bare_vmrun(void);
>  int svm_vmrun(void);
>  void test_set_guest(test_guest_func func);
>  
>  extern struct vmcb *vmcb;
> -
> -
> -#define SAVE_GPR_C                              \
> -        "xchg %%rbx, regs+0x8\n\t"              \
> -        "xchg %%rcx, regs+0x10\n\t"             \
> -        "xchg %%rdx, regs+0x18\n\t"             \
> -        "xchg %%rbp, regs+0x28\n\t"             \
> -        "xchg %%rsi, regs+0x30\n\t"             \
> -        "xchg %%rdi, regs+0x38\n\t"             \
> -        "xchg %%r8, regs+0x40\n\t"              \
> -        "xchg %%r9, regs+0x48\n\t"              \
> -        "xchg %%r10, regs+0x50\n\t"             \
> -        "xchg %%r11, regs+0x58\n\t"             \
> -        "xchg %%r12, regs+0x60\n\t"             \
> -        "xchg %%r13, regs+0x68\n\t"             \
> -        "xchg %%r14, regs+0x70\n\t"             \
> -        "xchg %%r15, regs+0x78\n\t"
> -
> -#define LOAD_GPR_C      SAVE_GPR_C
> -
> -#define ASM_PRE_VMRUN_CMD                       \
> -                "vmload %%rax\n\t"              \
> -                "mov regs+0x80, %%r15\n\t"      \
> -                "mov %%r15, 0x170(%%rax)\n\t"   \
> -                "mov regs, %%r15\n\t"           \
> -                "mov %%r15, 0x1f8(%%rax)\n\t"   \
> -                LOAD_GPR_C                      \
> -
> -#define ASM_POST_VMRUN_CMD                      \
> -                SAVE_GPR_C                      \
> -                "mov 0x170(%%rax), %%r15\n\t"   \
> -                "mov %%r15, regs+0x80\n\t"      \
> -                "mov 0x1f8(%%rax), %%r15\n\t"   \
> -                "mov %%r15, regs\n\t"           \
> -                "vmsave %%rax\n\t"              \
> -
> -
> -
> -#define SVM_BARE_VMRUN \
> -	asm volatile ( \
> -		ASM_PRE_VMRUN_CMD \
> -                "vmrun %%rax\n\t"               \
> -		ASM_POST_VMRUN_CMD \
> -		: \
> -		: "a" (virt_to_phys(vmcb)) \
> -		: "memory", "r15") \
> -
> +extern struct svm_test svm_tests[];

The svm_tests[] extern has nothing to do with this patch, and it's
probably also useless? I see it being deleted in patch 23...

>  #endif
> diff --git a/x86/svm_tests.c b/x86/svm_tests.c
> index 712d24e2..70e41300 100644
> --- a/x86/svm_tests.c
> +++ b/x86/svm_tests.c
> @@ -399,7 +399,7 @@ static bool msr_intercept_finished(struct svm_test *test)
>  		 * RCX holds the MSR index.
>  		 */
>  		printf("%s 0x%lx #GP exception\n",
> -		       exit_info_1 ? "WRMSR" : "RDMSR", get_regs().rcx);
> +		       exit_info_1 ? "WRMSR" : "RDMSR", get_regs()->rcx);
>  	}
>  
>  	/* Jump over RDMSR/WRMSR instruction */
> @@ -415,9 +415,9 @@ static bool msr_intercept_finished(struct svm_test *test)
>  	 */
>  	if (exit_info_1)
>  		test->scratch =
> -			((get_regs().rdx << 32) | (vmcb->save.rax & 0xffffffff));
> +			((get_regs()->rdx << 32) | (get_regs()->rax & 0xffffffff));
>  	else
> -		test->scratch = get_regs().rcx;
> +		test->scratch = get_regs()->rcx;
>  
>  	return false;
>  }
> @@ -1842,7 +1842,7 @@ static volatile bool host_rflags_set_tf = false;
>  static volatile bool host_rflags_set_rf = false;
>  static u64 rip_detected;
>  
> -extern u64 *vmrun_rip;
> +extern u64 vmrun_rip;
>  
>  static void host_rflags_db_handler(struct ex_regs *r)
>  {
> @@ -2878,6 +2878,8 @@ static void svm_lbrv_test0(void)
>  
>  static void svm_lbrv_test1(void)
>  {
> +	struct svm_gprs *regs = get_regs();
> +
>  	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(1)");
>  
>  	vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
> @@ -2885,7 +2887,7 @@ static void svm_lbrv_test1(void)
>  
>  	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
>  	DO_BRANCH(host_branch1);
> -	SVM_BARE_VMRUN;
> +	SVM_VMRUN(vmcb, regs);
>  	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
>  
>  	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> @@ -2900,6 +2902,8 @@ static void svm_lbrv_test1(void)
>  
>  static void svm_lbrv_test2(void)
>  {
> +	struct svm_gprs *regs = get_regs();
> +
>  	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(2)");
>  
>  	vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
> @@ -2908,7 +2912,7 @@ static void svm_lbrv_test2(void)
>  	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
>  	DO_BRANCH(host_branch2);
>  	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
> -	SVM_BARE_VMRUN;
> +	SVM_VMRUN(vmcb, regs);
>  	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
>  	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
>  
> @@ -2924,6 +2928,8 @@ static void svm_lbrv_test2(void)
>  
>  static void svm_lbrv_nested_test1(void)
>  {
> +	struct svm_gprs *regs = get_regs();
> +
>  	if (!lbrv_supported()) {
>  		report_skip("LBRV not supported in the guest");
>  		return;
> @@ -2936,7 +2942,7 @@ static void svm_lbrv_nested_test1(void)
>  
>  	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
>  	DO_BRANCH(host_branch3);
> -	SVM_BARE_VMRUN;
> +	SVM_VMRUN(vmcb, regs);
>  	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
>  	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
>  
> @@ -2957,6 +2963,8 @@ static void svm_lbrv_nested_test1(void)
>  
>  static void svm_lbrv_nested_test2(void)
>  {
> +	struct svm_gprs *regs = get_regs();
> +
>  	if (!lbrv_supported()) {
>  		report_skip("LBRV not supported in the guest");
>  		return;
> @@ -2972,7 +2980,7 @@ static void svm_lbrv_nested_test2(void)
>  
>  	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
>  	DO_BRANCH(host_branch4);
> -	SVM_BARE_VMRUN;
> +	SVM_VMRUN(vmcb, regs);
>  	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
>  	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
>  
> 
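To summarize the new calling convention, a minimal sketch of a caller
(the wrapper is hypothetical; guest_stack, struct svm_gprs and
SVM_VMRUN are from the hunks above):

	/* hypothetical wrapper: guest state now travels through an
	 * explicit struct svm_gprs instead of the old global "regs" */
	static struct svm_gprs my_regs;

	static int run_guest_at(struct vmcb *vmcb, u64 guest_rip)
	{
		vmcb->save.rip = guest_rip;
		my_regs.rsp = (ulong)(guest_stack + ARRAY_SIZE(guest_stack));
		SVM_VMRUN(vmcb, &my_regs);
		return vmcb->control.exit_code;
	}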



* Re: [kvm-unit-tests PATCH v3 25/27] svm: move nested vcpu to test context
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 25/27] svm: move nested vcpu to test context Maxim Levitsky
@ 2022-12-02 10:22   ` Emanuele Giuseppe Esposito
  2022-12-06 14:29     ` Maxim Levitsky
  0 siblings, 1 reply; 56+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-12-02 10:22 UTC (permalink / raw)
  To: Maxim Levitsky, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank



On 22/11/2022 17:11, Maxim Levitsky wrote:
> This moves vcpu0 into svm_test_context and renames it to vcpu,
> to show that it is the current test vCPU.
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>

These are exactly the same changes you made in patch 22. Maybe squash
them together, reordering the patches of course?
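
For reference, a minimal sketch (with a hypothetical check function) of
the access pattern both patches converge on:

	/* hypothetical check: the vmcb is reached through the test
	 * context instead of the former vcpu0 global */
	static bool example_check(struct svm_test_context *ctx)
	{
		struct vmcb *vmcb = ctx->vcpu->vmcb;

		return vmcb->control.exit_code == SVM_EXIT_VMMCALL;
	}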

> ---
>  x86/svm.c       |  26 +-
>  x86/svm.h       |   5 +-
>  x86/svm_npt.c   |  54 ++--
>  x86/svm_tests.c | 753 ++++++++++++++++++++++++++++--------------------
>  4 files changed, 486 insertions(+), 352 deletions(-)
> 
> diff --git a/x86/svm.c b/x86/svm.c
> index 06d34ac4..a3279545 100644
> --- a/x86/svm.c
> +++ b/x86/svm.c
> @@ -16,8 +16,6 @@
>  #include "apic.h"
>  #include "svm_lib.h"
>  
> -struct svm_vcpu vcpu0;
> -
>  bool smp_supported(void)
>  {
>  	return cpu_count() > 1;
> @@ -78,11 +76,11 @@ static void test_thunk(struct svm_test_context *ctx)
>  
>  int __svm_vmrun(struct svm_test_context *ctx, u64 rip)
>  {
> -	vcpu0.vmcb->save.rip = (ulong)rip;
> -	vcpu0.regs.rdi = (ulong)ctx;
> -	vcpu0.regs.rsp = (ulong)(vcpu0.stack);
> -	SVM_VMRUN(&vcpu0);
> -	return vcpu0.vmcb->control.exit_code;
> +	ctx->vcpu->vmcb->save.rip = (ulong)rip;
> +	ctx->vcpu->regs.rdi = (ulong)ctx;
> +	ctx->vcpu->regs.rsp = (ulong)(ctx->vcpu->stack);
> +	SVM_VMRUN(ctx->vcpu);
> +	return ctx->vcpu->vmcb->control.exit_code;
>  }
>  
>  int svm_vmrun(struct svm_test_context *ctx)
> @@ -92,7 +90,7 @@ int svm_vmrun(struct svm_test_context *ctx)
>  
>  static noinline void test_run(struct svm_test_context *ctx)
>  {
> -	svm_vcpu_ident(&vcpu0);
> +	svm_vcpu_ident(ctx->vcpu);
>  
>  	if (ctx->test->v2) {
>  		ctx->test->v2(ctx);
> @@ -103,9 +101,9 @@ static noinline void test_run(struct svm_test_context *ctx)
>  
>  	ctx->test->prepare(ctx);
>  	guest_main = ctx->test->guest_func;
> -	vcpu0.vmcb->save.rip = (ulong)test_thunk;
> -	vcpu0.regs.rsp = (ulong)(vcpu0.stack);
> -	vcpu0.regs.rdi = (ulong)ctx;
> +	ctx->vcpu->vmcb->save.rip = (ulong)test_thunk;
> +	ctx->vcpu->regs.rsp = (ulong)(ctx->vcpu->stack);
> +	ctx->vcpu->regs.rdi = (ulong)ctx;
>  	do {
>  
>  		clgi();
> @@ -113,7 +111,7 @@ static noinline void test_run(struct svm_test_context *ctx)
>  
>  		ctx->test->prepare_gif_clear(ctx);
>  
> -		__SVM_VMRUN(&vcpu0, "vmrun_rip");
> +		__SVM_VMRUN(ctx->vcpu, "vmrun_rip");
>  
>  		cli();
>  		stgi();
> @@ -182,13 +180,15 @@ int run_svm_tests(int ac, char **av, struct svm_test *svm_tests)
>  		return 0;
>  
>  	struct svm_test_context ctx;
> +	struct svm_vcpu vcpu;
>  
> -	svm_vcpu_init(&vcpu0);
> +	svm_vcpu_init(&vcpu);
>  
>  	for (; svm_tests[i].name != NULL; i++) {
>  
>  		memset(&ctx, 0, sizeof(ctx));
>  		ctx.test = &svm_tests[i];
> +		ctx.vcpu = &vcpu;
>  
>  		if (!test_wanted(svm_tests[i].name, av, ac))
>  			continue;
> diff --git a/x86/svm.h b/x86/svm.h
> index 961c4de3..ec181715 100644
> --- a/x86/svm.h
> +++ b/x86/svm.h
> @@ -12,6 +12,9 @@ struct svm_test_context {
>  	ulong scratch;
>  	bool on_vcpu_done;
>  	struct svm_test *test;
> +
> +	/* TODO: test cases currently are single threaded */
> +	struct svm_vcpu *vcpu;
>  };
>  
>  struct svm_test {
> @@ -44,6 +47,4 @@ int svm_vmrun(struct svm_test_context *ctx);
>  void test_set_guest(test_guest_func func);
>  
>  
> -extern struct svm_vcpu vcpu0;
> -
>  #endif
> diff --git a/x86/svm_npt.c b/x86/svm_npt.c
> index fc16b4be..39fd7198 100644
> --- a/x86/svm_npt.c
> +++ b/x86/svm_npt.c
> @@ -27,23 +27,26 @@ static void npt_np_test(struct svm_test_context *ctx)
>  
>  static bool npt_np_check(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	u64 *pte = npt_get_pte((u64) scratch_page);
>  
>  	*pte |= 1ULL;
>  
> -	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
> -	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000004ULL);
> +	return (vmcb->control.exit_code == SVM_EXIT_NPF)
> +	    && (vmcb->control.exit_info_1 == 0x100000004ULL);
>  }
>  
>  static void npt_nx_prepare(struct svm_test_context *ctx)
>  {
>  	u64 *pte;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
>  	ctx->scratch = rdmsr(MSR_EFER);
>  	wrmsr(MSR_EFER, ctx->scratch | EFER_NX);
>  
>  	/* Clear the guest's EFER.NX, it should not affect NPT behavior. */
> -	vcpu0.vmcb->save.efer &= ~EFER_NX;
> +	vmcb->save.efer &= ~EFER_NX;
>  
>  	pte = npt_get_pte((u64) null_test);
>  
> @@ -53,13 +56,14 @@ static void npt_nx_prepare(struct svm_test_context *ctx)
>  static bool npt_nx_check(struct svm_test_context *ctx)
>  {
>  	u64 *pte = npt_get_pte((u64) null_test);
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
>  	wrmsr(MSR_EFER, ctx->scratch);
>  
>  	*pte &= ~PT64_NX_MASK;
>  
> -	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
> -	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000015ULL);
> +	return (vmcb->control.exit_code == SVM_EXIT_NPF)
> +	    && (vmcb->control.exit_info_1 == 0x100000015ULL);
>  }
>  
>  static void npt_us_prepare(struct svm_test_context *ctx)
> @@ -80,11 +84,12 @@ static void npt_us_test(struct svm_test_context *ctx)
>  static bool npt_us_check(struct svm_test_context *ctx)
>  {
>  	u64 *pte = npt_get_pte((u64) scratch_page);
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
>  	*pte |= (1ULL << 2);
>  
> -	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
> -	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000005ULL);
> +	return (vmcb->control.exit_code == SVM_EXIT_NPF)
> +	    && (vmcb->control.exit_info_1 == 0x100000005ULL);
>  }
>  
>  static void npt_rw_prepare(struct svm_test_context *ctx)
> @@ -107,11 +112,12 @@ static void npt_rw_test(struct svm_test_context *ctx)
>  static bool npt_rw_check(struct svm_test_context *ctx)
>  {
>  	u64 *pte = npt_get_pte(0x80000);
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
>  	*pte |= (1ULL << 1);
>  
> -	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
> -	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000007ULL);
> +	return (vmcb->control.exit_code == SVM_EXIT_NPF)
> +	    && (vmcb->control.exit_info_1 == 0x100000007ULL);
>  }
>  
>  static void npt_rw_pfwalk_prepare(struct svm_test_context *ctx)
> @@ -127,12 +133,13 @@ static void npt_rw_pfwalk_prepare(struct svm_test_context *ctx)
>  static bool npt_rw_pfwalk_check(struct svm_test_context *ctx)
>  {
>  	u64 *pte = npt_get_pte(read_cr3());
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
>  	*pte |= (1ULL << 1);
>  
> -	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
> -	    && (vcpu0.vmcb->control.exit_info_1 == 0x200000007ULL)
> -	    && (vcpu0.vmcb->control.exit_info_2 == read_cr3());
> +	return (vmcb->control.exit_code == SVM_EXIT_NPF)
> +	    && (vmcb->control.exit_info_1 == 0x200000007ULL)
> +	    && (vmcb->control.exit_info_2 == read_cr3());
>  }
>  
>  static void npt_l1mmio_prepare(struct svm_test_context *ctx)
> @@ -178,11 +185,12 @@ static void npt_rw_l1mmio_test(struct svm_test_context *ctx)
>  static bool npt_rw_l1mmio_check(struct svm_test_context *ctx)
>  {
>  	u64 *pte = npt_get_pte(0xfee00080);
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
>  	*pte |= (1ULL << 1);
>  
> -	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
> -	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000007ULL);
> +	return (vmcb->control.exit_code == SVM_EXIT_NPF)
> +	    && (vmcb->control.exit_info_1 == 0x100000007ULL);
>  }
>  
>  static void basic_guest_main(struct svm_test_context *ctx)
> @@ -193,6 +201,7 @@ static void __svm_npt_rsvd_bits_test(struct svm_test_context *ctx,
>  				     u64 * pxe, u64 rsvd_bits, u64 efer,
>  				     ulong cr4, u64 guest_efer, ulong guest_cr4)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  	u64 pxe_orig = *pxe;
>  	int exit_reason;
>  	u64 pfec;
> @@ -200,8 +209,8 @@ static void __svm_npt_rsvd_bits_test(struct svm_test_context *ctx,
>  	wrmsr(MSR_EFER, efer);
>  	write_cr4(cr4);
>  
> -	vcpu0.vmcb->save.efer = guest_efer;
> -	vcpu0.vmcb->save.cr4 = guest_cr4;
> +	vmcb->save.efer = guest_efer;
> +	vmcb->save.cr4 = guest_cr4;
>  
>  	*pxe |= rsvd_bits;
>  
> @@ -227,10 +236,10 @@ static void __svm_npt_rsvd_bits_test(struct svm_test_context *ctx,
>  
>  	}
>  
> -	report(vcpu0.vmcb->control.exit_info_1 == pfec,
> +	report(vmcb->control.exit_info_1 == pfec,
>  	       "Wanted PFEC = 0x%lx, got PFEC = %lx, PxE = 0x%lx.  "
>  	       "host.NX = %u, host.SMEP = %u, guest.NX = %u, guest.SMEP = %u",
> -	       pfec, vcpu0.vmcb->control.exit_info_1, *pxe,
> +	       pfec, vmcb->control.exit_info_1, *pxe,
>  	       !!(efer & EFER_NX), !!(cr4 & X86_CR4_SMEP),
>  	       !!(guest_efer & EFER_NX), !!(guest_cr4 & X86_CR4_SMEP));
>  
> @@ -311,6 +320,7 @@ static void svm_npt_rsvd_bits_test(struct svm_test_context *ctx)
>  {
>  	u64 saved_efer, host_efer, sg_efer, guest_efer;
>  	ulong saved_cr4, host_cr4, sg_cr4, guest_cr4;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
>  	if (!npt_supported()) {
>  		report_skip("NPT not supported");
> @@ -319,8 +329,8 @@ static void svm_npt_rsvd_bits_test(struct svm_test_context *ctx)
>  
>  	saved_efer = host_efer = rdmsr(MSR_EFER);
>  	saved_cr4 = host_cr4 = read_cr4();
> -	sg_efer = guest_efer = vcpu0.vmcb->save.efer;
> -	sg_cr4 = guest_cr4 = vcpu0.vmcb->save.cr4;
> +	sg_efer = guest_efer = vmcb->save.efer;
> +	sg_cr4 = guest_cr4 = vmcb->save.cr4;
>  
>  	test_set_guest(basic_guest_main);
>  
> @@ -352,8 +362,8 @@ skip_pte_test:
>  
>  	wrmsr(MSR_EFER, saved_efer);
>  	write_cr4(saved_cr4);
> -	vcpu0.vmcb->save.efer = sg_efer;
> -	vcpu0.vmcb->save.cr4 = sg_cr4;
> +	vmcb->save.efer = sg_efer;
> +	vmcb->save.cr4 = sg_cr4;
>  }
>  
>  #define NPT_V1_TEST(name, prepare, guest_code, check)				\
> diff --git a/x86/svm_tests.c b/x86/svm_tests.c
> index 6041ac24..bd92fcee 100644
> --- a/x86/svm_tests.c
> +++ b/x86/svm_tests.c
> @@ -44,33 +44,36 @@ static void null_test(struct svm_test_context *ctx)
>  
>  static bool null_check(struct svm_test_context *ctx)
>  {
> -	return vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL;
> +	return ctx->vcpu->vmcb->control.exit_code == SVM_EXIT_VMMCALL;
>  }
>  
>  static void prepare_no_vmrun_int(struct svm_test_context *ctx)
>  {
> -	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMRUN);
> +	ctx->vcpu->vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMRUN);
>  }
>  
>  static bool check_no_vmrun_int(struct svm_test_context *ctx)
>  {
> -	return vcpu0.vmcb->control.exit_code == SVM_EXIT_ERR;
> +	return ctx->vcpu->vmcb->control.exit_code == SVM_EXIT_ERR;
>  }
>  
>  static void test_vmrun(struct svm_test_context *ctx)
>  {
> -	asm volatile ("vmrun %0" : : "a"(virt_to_phys(vcpu0.vmcb)));
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	asm volatile ("vmrun %0" : : "a"(virt_to_phys(vmcb)));
>  }
>  
>  static bool check_vmrun(struct svm_test_context *ctx)
>  {
> -	return vcpu0.vmcb->control.exit_code == SVM_EXIT_VMRUN;
> +	return ctx->vcpu->vmcb->control.exit_code == SVM_EXIT_VMRUN;
>  }
>  
>  static void prepare_rsm_intercept(struct svm_test_context *ctx)
>  {
> -	vcpu0.vmcb->control.intercept |= 1 << INTERCEPT_RSM;
> -	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << UD_VECTOR);
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +	vmcb->control.intercept |= 1 << INTERCEPT_RSM;
> +	vmcb->control.intercept_exceptions |= (1ULL << UD_VECTOR);
>  }
>  
>  static void test_rsm_intercept(struct svm_test_context *ctx)
> @@ -85,24 +88,25 @@ static bool check_rsm_intercept(struct svm_test_context *ctx)
>  
>  static bool finished_rsm_intercept(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  	switch (get_test_stage(ctx)) {
>  	case 0:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_RSM) {
> +		if (vmcb->control.exit_code != SVM_EXIT_RSM) {
>  			report_fail("VMEXIT not due to rsm. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
> -		vcpu0.vmcb->control.intercept &= ~(1 << INTERCEPT_RSM);
> +		vmcb->control.intercept &= ~(1 << INTERCEPT_RSM);
>  		inc_test_stage(ctx);
>  		break;
>  
>  	case 1:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + UD_VECTOR) {
> +		if (vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + UD_VECTOR) {
>  			report_fail("VMEXIT not due to #UD. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
> -		vcpu0.vmcb->save.rip += 2;
> +		vmcb->save.rip += 2;
>  		inc_test_stage(ctx);
>  		break;
>  
> @@ -114,7 +118,9 @@ static bool finished_rsm_intercept(struct svm_test_context *ctx)
>  
>  static void prepare_cr3_intercept(struct svm_test_context *ctx)
>  {
> -	vcpu0.vmcb->control.intercept_cr_read |= 1 << 3;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	vmcb->control.intercept_cr_read |= 1 << 3;
>  }
>  
>  static void test_cr3_intercept(struct svm_test_context *ctx)
> @@ -124,7 +130,8 @@ static void test_cr3_intercept(struct svm_test_context *ctx)
>  
>  static bool check_cr3_intercept(struct svm_test_context *ctx)
>  {
> -	return vcpu0.vmcb->control.exit_code == SVM_EXIT_READ_CR3;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +	return vmcb->control.exit_code == SVM_EXIT_READ_CR3;
>  }
>  
>  static bool check_cr3_nointercept(struct svm_test_context *ctx)
> @@ -147,7 +154,9 @@ static void corrupt_cr3_intercept_bypass(void *_ctx)
>  
>  static void prepare_cr3_intercept_bypass(struct svm_test_context *ctx)
>  {
> -	vcpu0.vmcb->control.intercept_cr_read |= 1 << 3;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	vmcb->control.intercept_cr_read |= 1 << 3;
>  	on_cpu_async(1, corrupt_cr3_intercept_bypass, ctx);
>  }
>  
> @@ -166,8 +175,10 @@ static void test_cr3_intercept_bypass(struct svm_test_context *ctx)
>  
>  static void prepare_dr_intercept(struct svm_test_context *ctx)
>  {
> -	vcpu0.vmcb->control.intercept_dr_read = 0xff;
> -	vcpu0.vmcb->control.intercept_dr_write = 0xff;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	vmcb->control.intercept_dr_read = 0xff;
> +	vmcb->control.intercept_dr_write = 0xff;
>  }
>  
>  static void test_dr_intercept(struct svm_test_context *ctx)
> @@ -251,7 +262,8 @@ static void test_dr_intercept(struct svm_test_context *ctx)
>  
>  static bool dr_intercept_finished(struct svm_test_context *ctx)
>  {
> -	ulong n = (vcpu0.vmcb->control.exit_code - SVM_EXIT_READ_DR0);
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +	ulong n = (vmcb->control.exit_code - SVM_EXIT_READ_DR0);
>  
>  	/* Only expect DR intercepts */
>  	if (n > (SVM_EXIT_MAX_DR_INTERCEPT - SVM_EXIT_READ_DR0))
> @@ -267,7 +279,7 @@ static bool dr_intercept_finished(struct svm_test_context *ctx)
>  	ctx->scratch = (n % 16);
>  
>  	/* Jump over MOV instruction */
> -	vcpu0.vmcb->save.rip += 3;
> +	vmcb->save.rip += 3;
>  
>  	return false;
>  }
> @@ -284,7 +296,8 @@ static bool next_rip_supported(void)
>  
>  static void prepare_next_rip(struct svm_test_context *ctx)
>  {
> -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_RDTSC);
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +	vmcb->control.intercept |= (1ULL << INTERCEPT_RDTSC);
>  }
>  
>  
> @@ -299,15 +312,17 @@ static bool check_next_rip(struct svm_test_context *ctx)
>  {
>  	extern char exp_next_rip;
>  	unsigned long address = (unsigned long)&exp_next_rip;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
> -	return address == vcpu0.vmcb->control.next_rip;
> +	return address == vmcb->control.next_rip;
>  }
>  
>  
>  static void prepare_msr_intercept(struct svm_test_context *ctx)
>  {
> -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT);
> -	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +	vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT);
> +	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
>  	memset(svm_get_msr_bitmap(), 0xff, MSR_BITMAP_SIZE);
>  }
>  
> @@ -359,12 +374,13 @@ static void test_msr_intercept(struct svm_test_context *ctx)
>  
>  static bool msr_intercept_finished(struct svm_test_context *ctx)
>  {
> -	u32 exit_code = vcpu0.vmcb->control.exit_code;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +	u32 exit_code = vmcb->control.exit_code;
>  	u64 exit_info_1;
>  	u8 *opcode;
>  
>  	if (exit_code == SVM_EXIT_MSR) {
> -		exit_info_1 = vcpu0.vmcb->control.exit_info_1;
> +		exit_info_1 = vmcb->control.exit_info_1;
>  	} else {
>  		/*
>  		 * If #GP exception occurs instead, check that it was
> @@ -374,7 +390,7 @@ static bool msr_intercept_finished(struct svm_test_context *ctx)
>  		if (exit_code != (SVM_EXIT_EXCP_BASE + GP_VECTOR))
>  			return true;
>  
> -		opcode = (u8 *)vcpu0.vmcb->save.rip;
> +		opcode = (u8 *)vmcb->save.rip;
>  		if (opcode[0] != 0x0f)
>  			return true;
>  
> @@ -394,11 +410,11 @@ static bool msr_intercept_finished(struct svm_test_context *ctx)
>  		 * RCX holds the MSR index.
>  		 */
>  		printf("%s 0x%lx #GP exception\n",
> -		       exit_info_1 ? "WRMSR" : "RDMSR", vcpu0.regs.rcx);
> +		       exit_info_1 ? "WRMSR" : "RDMSR", ctx->vcpu->regs.rcx);
>  	}
>  
>  	/* Jump over RDMSR/WRMSR instruction */
> -	vcpu0.vmcb->save.rip += 2;
> +	vmcb->save.rip += 2;
>  
>  	/*
>  	 * Test whether the intercept was for RDMSR/WRMSR.
> @@ -410,9 +426,9 @@ static bool msr_intercept_finished(struct svm_test_context *ctx)
>  	 */
>  	if (exit_info_1)
>  		ctx->scratch =
> -			((vcpu0.regs.rdx << 32) | (vcpu0.regs.rax & 0xffffffff));
> +			((ctx->vcpu->regs.rdx << 32) | (ctx->vcpu->regs.rax & 0xffffffff));
>  	else
> -		ctx->scratch = vcpu0.regs.rcx;
> +		ctx->scratch = ctx->vcpu->regs.rcx;
>  
>  	return false;
>  }
> @@ -425,7 +441,9 @@ static bool check_msr_intercept(struct svm_test_context *ctx)
>  
>  static void prepare_mode_switch(struct svm_test_context *ctx)
>  {
> -	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR)
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR)
>  		|  (1ULL << UD_VECTOR)
>  		|  (1ULL << DF_VECTOR)
>  		|  (1ULL << PF_VECTOR);
> @@ -490,17 +508,18 @@ static void test_mode_switch(struct svm_test_context *ctx)
>  static bool mode_switch_finished(struct svm_test_context *ctx)
>  {
>  	u64 cr0, cr4, efer;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
> -	cr0  = vcpu0.vmcb->save.cr0;
> -	cr4  = vcpu0.vmcb->save.cr4;
> -	efer = vcpu0.vmcb->save.efer;
> +	cr0  = vmcb->save.cr0;
> +	cr4  = vmcb->save.cr4;
> +	efer = vmcb->save.efer;
>  
>  	/* Only expect VMMCALL intercepts */
> -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL)
> +	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL)
>  		return true;
>  
>  	/* Jump over VMMCALL instruction */
> -	vcpu0.vmcb->save.rip += 3;
> +	vmcb->save.rip += 3;
>  
>  	/* Do sanity checks */
>  	switch (ctx->scratch) {
> @@ -534,8 +553,9 @@ static bool check_mode_switch(struct svm_test_context *ctx)
>  static void prepare_ioio(struct svm_test_context *ctx)
>  {
>  	u8 *io_bitmap = svm_get_io_bitmap();
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
> -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_IOIO_PROT);
> +	vmcb->control.intercept |= (1ULL << INTERCEPT_IOIO_PROT);
>  	ctx->scratch = 0;
>  	memset(io_bitmap, 0, 8192);
>  	io_bitmap[8192] = 0xFF;
> @@ -617,19 +637,20 @@ static bool ioio_finished(struct svm_test_context *ctx)
>  {
>  	unsigned port, size;
>  	u8 *io_bitmap = svm_get_io_bitmap();
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
>  	/* Only expect IOIO intercepts */
> -	if (vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL)
> +	if (vmcb->control.exit_code == SVM_EXIT_VMMCALL)
>  		return true;
>  
> -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_IOIO)
> +	if (vmcb->control.exit_code != SVM_EXIT_IOIO)
>  		return true;
>  
>  	/* one step forward */
>  	ctx->scratch += 1;
>  
> -	port = vcpu0.vmcb->control.exit_info_1 >> 16;
> -	size = (vcpu0.vmcb->control.exit_info_1 >> SVM_IOIO_SIZE_SHIFT) & 7;
> +	port = vmcb->control.exit_info_1 >> 16;
> +	size = (vmcb->control.exit_info_1 >> SVM_IOIO_SIZE_SHIFT) & 7;
>  
>  	while (size--) {
>  		io_bitmap[port / 8] &= ~(1 << (port & 7));
> @@ -649,7 +670,9 @@ static bool check_ioio(struct svm_test_context *ctx)
>  
>  static void prepare_asid_zero(struct svm_test_context *ctx)
>  {
> -	vcpu0.vmcb->control.asid = 0;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	vmcb->control.asid = 0;
>  }
>  
>  static void test_asid_zero(struct svm_test_context *ctx)
> @@ -659,12 +682,16 @@ static void test_asid_zero(struct svm_test_context *ctx)
>  
>  static bool check_asid_zero(struct svm_test_context *ctx)
>  {
> -	return vcpu0.vmcb->control.exit_code == SVM_EXIT_ERR;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	return vmcb->control.exit_code == SVM_EXIT_ERR;
>  }
>  
>  static void sel_cr0_bug_prepare(struct svm_test_context *ctx)
>  {
> -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
>  }
>  
>  static bool sel_cr0_bug_finished(struct svm_test_context *ctx)
> @@ -692,7 +719,9 @@ static void sel_cr0_bug_test(struct svm_test_context *ctx)
>  
>  static bool sel_cr0_bug_check(struct svm_test_context *ctx)
>  {
> -	return vcpu0.vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
>  }
>  
>  #define TSC_ADJUST_VALUE    (1ll << 32)
> @@ -706,7 +735,9 @@ static bool tsc_adjust_supported(void)
>  
>  static void tsc_adjust_prepare(struct svm_test_context *ctx)
>  {
> -	vcpu0.vmcb->control.tsc_offset = TSC_OFFSET_VALUE;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	vmcb->control.tsc_offset = TSC_OFFSET_VALUE;
>  
>  	wrmsr(MSR_IA32_TSC_ADJUST, -TSC_ADJUST_VALUE);
>  	int64_t adjust = rdmsr(MSR_IA32_TSC_ADJUST);
> @@ -758,17 +789,18 @@ static void svm_tsc_scale_run_testcase(struct svm_test_context *ctx,
>  				       double tsc_scale, u64 tsc_offset)
>  {
>  	u64 start_tsc, actual_duration;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
>  	guest_tsc_delay_value = (duration << TSC_SHIFT) * tsc_scale;
>  
>  	test_set_guest(svm_tsc_scale_guest);
> -	vcpu0.vmcb->control.tsc_offset = tsc_offset;
> +	vmcb->control.tsc_offset = tsc_offset;
>  	wrmsr(MSR_AMD64_TSC_RATIO, (u64)(tsc_scale * (1ULL << 32)));
>  
>  	start_tsc = rdtsc();
>  
>  	if (svm_vmrun(ctx) != SVM_EXIT_VMMCALL)
> -		report_fail("unexpected vm exit code 0x%x", vcpu0.vmcb->control.exit_code);
> +		report_fail("unexpected vm exit code 0x%x", vmcb->control.exit_code);
>  
>  	actual_duration = (rdtsc() - start_tsc) >> TSC_SHIFT;
>  
> @@ -839,6 +871,7 @@ start:
>  static bool latency_finished(struct svm_test_context *ctx)
>  {
>  	u64 cycles;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
>  	tsc_end = rdtsc();
>  
> @@ -852,7 +885,7 @@ static bool latency_finished(struct svm_test_context *ctx)
>  
>  	vmexit_sum += cycles;
>  
> -	vcpu0.vmcb->save.rip += 3;
> +	vmcb->save.rip += 3;
>  
>  	runs -= 1;
>  
> @@ -863,7 +896,10 @@ static bool latency_finished(struct svm_test_context *ctx)
>  
>  static bool latency_finished_clean(struct svm_test_context *ctx)
>  {
> -	vcpu0.vmcb->control.clean = VMCB_CLEAN_ALL;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	vmcb->control.clean = VMCB_CLEAN_ALL;
> +
>  	return latency_finished(ctx);
>  }
>  
> @@ -886,7 +922,9 @@ static void lat_svm_insn_prepare(struct svm_test_context *ctx)
>  
>  static bool lat_svm_insn_finished(struct svm_test_context *ctx)
>  {
> -	u64 vmcb_phys = virt_to_phys(vcpu0.vmcb);
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	u64 vmcb_phys = virt_to_phys(vmcb);
>  	u64 cycles;
>  
>  	for ( ; runs != 0; runs--) {
> @@ -957,6 +995,7 @@ static void pending_event_ipi_isr(isr_regs_t *regs)
>  static void pending_event_prepare(struct svm_test_context *ctx)
>  {
>  	int ipi_vector = 0xf1;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
>  	pending_event_ipi_fired = false;
>  
> @@ -964,8 +1003,8 @@ static void pending_event_prepare(struct svm_test_context *ctx)
>  
>  	pending_event_guest_run = false;
>  
> -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
> -	vcpu0.vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
> +	vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
> +	vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
>  
>  	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL |
>  		       APIC_DM_FIXED | ipi_vector, 0);
> @@ -980,16 +1019,18 @@ static void pending_event_test(struct svm_test_context *ctx)
>  
>  static bool pending_event_finished(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	switch (get_test_stage(ctx)) {
>  	case 0:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_INTR) {
> +		if (vmcb->control.exit_code != SVM_EXIT_INTR) {
>  			report_fail("VMEXIT not due to pending interrupt. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
>  
> -		vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
> -		vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> +		vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
> +		vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
>  
>  		if (pending_event_guest_run) {
>  			report_fail("Guest ran before host received IPI\n");
> @@ -1067,19 +1108,21 @@ static void pending_event_cli_test(struct svm_test_context *ctx)
>  
>  static bool pending_event_cli_finished(struct svm_test_context *ctx)
>  {
> -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
>  		report_fail("VM_EXIT return to host is not EXIT_VMMCALL exit reason 0x%x",
> -			    vcpu0.vmcb->control.exit_code);
> +			    vmcb->control.exit_code);
>  		return true;
>  	}
>  
>  	switch (get_test_stage(ctx)) {
>  	case 0:
> -		vcpu0.vmcb->save.rip += 3;
> +		vmcb->save.rip += 3;
>  
>  		pending_event_ipi_fired = false;
>  
> -		vcpu0.vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
> +		vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
>  
>  		/* Now entering again with VINTR_MASKING=1.  */
>  		apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL |
> @@ -1206,32 +1249,34 @@ static void interrupt_test(struct svm_test_context *ctx)
>  
>  static bool interrupt_finished(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	switch (get_test_stage(ctx)) {
>  	case 0:
>  	case 2:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
>  			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
> -		vcpu0.vmcb->save.rip += 3;
> +		vmcb->save.rip += 3;
>  
> -		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
> -		vcpu0.vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
> +		vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
> +		vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
>  		break;
>  
>  	case 1:
>  	case 3:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_INTR) {
> +		if (vmcb->control.exit_code != SVM_EXIT_INTR) {
>  			report_fail("VMEXIT not due to intr intercept. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
>  
>  		sti_nop_cli();
>  
> -		vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
> -		vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> +		vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
> +		vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
>  		break;
>  
>  	case 4:
> @@ -1289,22 +1334,24 @@ static void nmi_test(struct svm_test_context *ctx)
>  
>  static bool nmi_finished(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	switch (get_test_stage(ctx)) {
>  	case 0:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
>  			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
> -		vcpu0.vmcb->save.rip += 3;
> +		vmcb->save.rip += 3;
>  
> -		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
> +		vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
>  		break;
>  
>  	case 1:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_NMI) {
> +		if (vmcb->control.exit_code != SVM_EXIT_NMI) {
>  			report_fail("VMEXIT not due to NMI intercept. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
>  
> @@ -1391,22 +1438,24 @@ static void nmi_hlt_test(struct svm_test_context *ctx)
>  
>  static bool nmi_hlt_finished(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	switch (get_test_stage(ctx)) {
>  	case 1:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
>  			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
> -		vcpu0.vmcb->save.rip += 3;
> +		vmcb->save.rip += 3;
>  
> -		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
> +		vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
>  		break;
>  
>  	case 2:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_NMI) {
> +		if (vmcb->control.exit_code != SVM_EXIT_NMI) {
>  			report_fail("VMEXIT not due to NMI intercept. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
>  
> @@ -1449,40 +1498,42 @@ static void exc_inject_test(struct svm_test_context *ctx)
>  
>  static bool exc_inject_finished(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	switch (get_test_stage(ctx)) {
>  	case 0:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
>  			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
> -		vcpu0.vmcb->save.rip += 3;
> -		vcpu0.vmcb->control.event_inj = NMI_VECTOR |
> +		vmcb->save.rip += 3;
> +		vmcb->control.event_inj = NMI_VECTOR |
>  						SVM_EVTINJ_TYPE_EXEPT |
>  						SVM_EVTINJ_VALID;
>  		break;
>  
>  	case 1:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_ERR) {
> +		if (vmcb->control.exit_code != SVM_EXIT_ERR) {
>  			report_fail("VMEXIT not due to error. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
>  		report(count_exc == 0, "exception with vector 2 not injected");
> -		vcpu0.vmcb->control.event_inj = DE_VECTOR |
> +		vmcb->control.event_inj = DE_VECTOR |
>  						SVM_EVTINJ_TYPE_EXEPT |
>  						SVM_EVTINJ_VALID;
>  		break;
>  
>  	case 2:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
>  			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
> -		vcpu0.vmcb->save.rip += 3;
> +		vmcb->save.rip += 3;
>  		report(count_exc == 1, "divide overflow exception injected");
> -		report(!(vcpu0.vmcb->control.event_inj & SVM_EVTINJ_VALID),
> +		report(!(vmcb->control.event_inj & SVM_EVTINJ_VALID),
>  		       "eventinj.VALID cleared");
>  		break;
>  
> @@ -1509,11 +1560,13 @@ static void virq_isr(isr_regs_t *regs)
>  
>  static void virq_inject_prepare(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	handle_irq(0xf1, virq_isr);
>  
> -	vcpu0.vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
> +	vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
>  		(0x0f << V_INTR_PRIO_SHIFT); // Set to the highest priority
> -	vcpu0.vmcb->control.int_vector = 0xf1;
> +	vmcb->control.int_vector = 0xf1;
>  	virq_fired = false;
>  	set_test_stage(ctx, 0);
>  }
> @@ -1563,66 +1616,68 @@ static void virq_inject_test(struct svm_test_context *ctx)
>  
>  static bool virq_inject_finished(struct svm_test_context *ctx)
>  {
> -	vcpu0.vmcb->save.rip += 3;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	vmcb->save.rip += 3;
>  
>  	switch (get_test_stage(ctx)) {
>  	case 0:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
>  			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
> -		if (vcpu0.vmcb->control.int_ctl & V_IRQ_MASK) {
> +		if (vmcb->control.int_ctl & V_IRQ_MASK) {
>  			report_fail("V_IRQ not cleared on VMEXIT after firing");
>  			return true;
>  		}
>  		virq_fired = false;
> -		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
> -		vcpu0.vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
> +		vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
> +		vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
>  			(0x0f << V_INTR_PRIO_SHIFT);
>  		break;
>  
>  	case 1:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VINTR) {
> +		if (vmcb->control.exit_code != SVM_EXIT_VINTR) {
>  			report_fail("VMEXIT not due to vintr. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
>  		if (virq_fired) {
>  			report_fail("V_IRQ fired before SVM_EXIT_VINTR");
>  			return true;
>  		}
> -		vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VINTR);
> +		vmcb->control.intercept &= ~(1ULL << INTERCEPT_VINTR);
>  		break;
>  
>  	case 2:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
>  			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
>  		virq_fired = false;
>  		// Set irq to lower priority
> -		vcpu0.vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
> +		vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
>  			(0x08 << V_INTR_PRIO_SHIFT);
>  		// Raise guest TPR
> -		vcpu0.vmcb->control.int_ctl |= 0x0a & V_TPR_MASK;
> +		vmcb->control.int_ctl |= 0x0a & V_TPR_MASK;
>  		break;
>  
>  	case 3:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
>  			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
> -		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
> +		vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
>  		break;
>  
>  	case 4:
>  		// INTERCEPT_VINTR should be ignored because V_INTR_PRIO < V_TPR
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
>  			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
>  		break;
> @@ -1673,10 +1728,12 @@ static void reg_corruption_isr(isr_regs_t *regs)
>  
>  static void reg_corruption_prepare(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	set_test_stage(ctx, 0);
>  
> -	vcpu0.vmcb->control.int_ctl = V_INTR_MASKING_MASK;
> -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
> +	vmcb->control.int_ctl = V_INTR_MASKING_MASK;
> +	vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
>  
>  	handle_irq(TIMER_VECTOR, reg_corruption_isr);
>  
> @@ -1705,6 +1762,8 @@ static void reg_corruption_test(struct svm_test_context *ctx)
>  
>  static bool reg_corruption_finished(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	if (isr_cnt == 10000) {
>  		report_pass("No RIP corruption detected after %d timer interrupts",
>  			    isr_cnt);
> @@ -1712,9 +1771,9 @@ static bool reg_corruption_finished(struct svm_test_context *ctx)
>  		goto cleanup;
>  	}
>  
> -	if (vcpu0.vmcb->control.exit_code == SVM_EXIT_INTR) {
> +	if (vmcb->control.exit_code == SVM_EXIT_INTR) {
>  
> -		void *guest_rip = (void *)vcpu0.vmcb->save.rip;
> +		void *guest_rip = (void *)vmcb->save.rip;
>  
>  		sti_nop_cli();
>  
> @@ -1782,8 +1841,10 @@ static volatile bool init_intercept;
>  
>  static void init_intercept_prepare(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	init_intercept = false;
> -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_INIT);
> +	vmcb->control.intercept |= (1ULL << INTERCEPT_INIT);
>  }
>  
>  static void init_intercept_test(struct svm_test_context *ctx)
> @@ -1793,11 +1854,13 @@ static void init_intercept_test(struct svm_test_context *ctx)
>  
>  static bool init_intercept_finished(struct svm_test_context *ctx)
>  {
> -	vcpu0.vmcb->save.rip += 3;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	vmcb->save.rip += 3;
>  
> -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_INIT) {
> +	if (vmcb->control.exit_code != SVM_EXIT_INIT) {
>  		report_fail("VMEXIT not due to init intercept. Exit reason 0x%x",
> -			    vcpu0.vmcb->control.exit_code);
> +			    vmcb->control.exit_code);
>  
>  		return true;
>  	}
> @@ -1894,14 +1957,16 @@ static void host_rflags_test(struct svm_test_context *ctx)
>  
>  static bool host_rflags_finished(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	switch (get_test_stage(ctx)) {
>  	case 0:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
>  			report_fail("Unexpected VMEXIT. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
> -		vcpu0.vmcb->save.rip += 3;
> +		vmcb->save.rip += 3;
>  		/*
>  		 * Setting host EFLAGS.TF not immediately before VMRUN causes a
>  		 * #DB trap before the first guest instruction is executed
> @@ -1909,14 +1974,14 @@ static bool host_rflags_finished(struct svm_test_context *ctx)
>  		host_rflags_set_tf = true;
>  		break;
>  	case 1:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
> +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
>  		    host_rflags_guest_main_flag != 1) {
>  			report_fail("Unexpected VMEXIT or #DB handler"
>  				    " invoked before guest main. Exit reason 0x%x",
> -				    vcpu0.vmcb->control.exit_code);
> +				    vmcb->control.exit_code);
>  			return true;
>  		}
> -		vcpu0.vmcb->save.rip += 3;
> +		vmcb->save.rip += 3;
>  		/*
>  		 * Setting host EFLAGS.TF immediately before VMRUN causes a #DB
>  		 * trap after VMRUN completes on the host side (i.e., after
> @@ -1925,21 +1990,21 @@ static bool host_rflags_finished(struct svm_test_context *ctx)
>  		host_rflags_ss_on_vmrun = true;
>  		break;
>  	case 2:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
> +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
>  		    rip_detected != (u64)&vmrun_rip + 3) {
>  			report_fail("Unexpected VMEXIT or RIP mismatch."
>  				    " Exit reason 0x%x, RIP actual: %lx, RIP expected: "
> -				    "%lx", vcpu0.vmcb->control.exit_code,
> +				    "%lx", vmcb->control.exit_code,
>  				    (u64)&vmrun_rip + 3, rip_detected);
>  			return true;
>  		}
>  		host_rflags_set_rf = true;
>  		host_rflags_guest_main_flag = 0;
>  		host_rflags_vmrun_reached = false;
> -		vcpu0.vmcb->save.rip += 3;
> +		vmcb->save.rip += 3;
>  		break;
>  	case 3:
> -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
> +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
>  		    rip_detected != (u64)&vmrun_rip ||
>  		    host_rflags_guest_main_flag != 1 ||
>  		    host_rflags_db_handler_flag > 1 ||
> @@ -1947,13 +2012,13 @@ static bool host_rflags_finished(struct svm_test_context *ctx)
>  			report_fail("Unexpected VMEXIT or RIP mismatch or "
>  				    "EFLAGS.RF not cleared."
>  				    " Exit reason 0x%x, RIP actual: %lx, RIP expected: "
> -				    "%lx", vcpu0.vmcb->control.exit_code,
> +				    "%lx", vmcb->control.exit_code,
>  				    (u64)&vmrun_rip, rip_detected);
>  			return true;
>  		}
>  		host_rflags_set_tf = false;
>  		host_rflags_set_rf = false;
> -		vcpu0.vmcb->save.rip += 3;
> +		vmcb->save.rip += 3;
>  		break;
>  	default:
>  		return true;
> @@ -1986,6 +2051,8 @@ static void svm_cr4_osxsave_test_guest(struct svm_test_context *ctx)
>  
>  static void svm_cr4_osxsave_test(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	if (!this_cpu_has(X86_FEATURE_XSAVE)) {
>  		report_skip("XSAVE not detected");
>  		return;
> @@ -1995,7 +2062,7 @@ static void svm_cr4_osxsave_test(struct svm_test_context *ctx)
>  		unsigned long cr4 = read_cr4() | X86_CR4_OSXSAVE;
>  
>  		write_cr4(cr4);
> -		vcpu0.vmcb->save.cr4 = cr4;
> +		vmcb->save.cr4 = cr4;
>  	}
>  
>  	report(this_cpu_has(X86_FEATURE_OSXSAVE), "CPUID.01H:ECX.XSAVE set before VMRUN");
> @@ -2035,6 +2102,7 @@ static void basic_guest_main(struct svm_test_context *ctx)
>  	u64 tmp, mask;							\
>  	u32 r;								\
>  	int i;								\
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;				\
>  									\
>  	for (i = start; i <= end; i = i + inc) {			\
>  		mask = 1ull << i;					\
> @@ -2043,13 +2111,13 @@ static void basic_guest_main(struct svm_test_context *ctx)
>  		tmp = val | mask;					\
>  		switch (cr) {						\
>  		case 0:							\
> -			vcpu0.vmcb->save.cr0 = tmp;				\
> +			vmcb->save.cr0 = tmp;				\
>  			break;						\
>  		case 3:							\
> -			vcpu0.vmcb->save.cr3 = tmp;				\
> +			vmcb->save.cr3 = tmp;				\
>  			break;						\
>  		case 4:							\
> -			vcpu0.vmcb->save.cr4 = tmp;				\
> +			vmcb->save.cr4 = tmp;				\
>  		}							\
>  		r = svm_vmrun(ctx);					\
>  		report(r == exit_code, "Test CR%d %s%d:%d: %lx, wanted exit 0x%x, got 0x%x", \
> @@ -2062,39 +2130,40 @@ static void test_efer(struct svm_test_context *ctx)
>  	/*
>  	 * Un-setting EFER.SVME is illegal
>  	 */
> -	u64 efer_saved = vcpu0.vmcb->save.efer;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +	u64 efer_saved = vmcb->save.efer;
>  	u64 efer = efer_saved;
>  
>  	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "EFER.SVME: %lx", efer);
>  	efer &= ~EFER_SVME;
> -	vcpu0.vmcb->save.efer = efer;
> +	vmcb->save.efer = efer;
>  	report (svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.SVME: %lx", efer);
> -	vcpu0.vmcb->save.efer = efer_saved;
> +	vmcb->save.efer = efer_saved;
>  
>  	/*
>  	 * EFER MBZ bits: 63:16, 9
>  	 */
> -	efer_saved = vcpu0.vmcb->save.efer;
> +	efer_saved = vmcb->save.efer;
>  
> -	SVM_TEST_REG_RESERVED_BITS(ctx, 8, 9, 1, "EFER", vcpu0.vmcb->save.efer,
> +	SVM_TEST_REG_RESERVED_BITS(ctx, 8, 9, 1, "EFER", vmcb->save.efer,
>  				   efer_saved, SVM_EFER_RESERVED_MASK);
> -	SVM_TEST_REG_RESERVED_BITS(ctx, 16, 63, 4, "EFER", vcpu0.vmcb->save.efer,
> +	SVM_TEST_REG_RESERVED_BITS(ctx, 16, 63, 4, "EFER", vmcb->save.efer,
>  				   efer_saved, SVM_EFER_RESERVED_MASK);
>  
>  	/*
>  	 * EFER.LME and CR0.PG are both set and CR4.PAE is zero.
>  	 */
> -	u64 cr0_saved = vcpu0.vmcb->save.cr0;
> +	u64 cr0_saved = vmcb->save.cr0;
>  	u64 cr0;
> -	u64 cr4_saved = vcpu0.vmcb->save.cr4;
> +	u64 cr4_saved = vmcb->save.cr4;
>  	u64 cr4;
>  
>  	efer = efer_saved | EFER_LME;
> -	vcpu0.vmcb->save.efer = efer;
> +	vmcb->save.efer = efer;
>  	cr0 = cr0_saved | X86_CR0_PG | X86_CR0_PE;
> -	vcpu0.vmcb->save.cr0 = cr0;
> +	vmcb->save.cr0 = cr0;
>  	cr4 = cr4_saved & ~X86_CR4_PAE;
> -	vcpu0.vmcb->save.cr4 = cr4;
> +	vmcb->save.cr4 = cr4;
>  	report(svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
>  	       "CR0.PG=1 (%lx) and CR4.PAE=0 (%lx)", efer, cr0, cr4);
>  
> @@ -2105,31 +2174,31 @@ static void test_efer(struct svm_test_context *ctx)
>  	 * SVM_EXIT_ERR.
>  	 */
>  	cr4 = cr4_saved | X86_CR4_PAE;
> -	vcpu0.vmcb->save.cr4 = cr4;
> +	vmcb->save.cr4 = cr4;
>  	cr0 &= ~X86_CR0_PE;
> -	vcpu0.vmcb->save.cr0 = cr0;
> +	vmcb->save.cr0 = cr0;
>  	report(svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
>  	       "CR0.PG=1 and CR0.PE=0 (%lx)", efer, cr0);
>  
>  	/*
>  	 * EFER.LME, CR0.PG, CR4.PAE, CS.L, and CS.D are all non-zero.
>  	 */
> -	u32 cs_attrib_saved = vcpu0.vmcb->save.cs.attrib;
> +	u32 cs_attrib_saved = vmcb->save.cs.attrib;
>  	u32 cs_attrib;
>  
>  	cr0 |= X86_CR0_PE;
> -	vcpu0.vmcb->save.cr0 = cr0;
> +	vmcb->save.cr0 = cr0;
>  	cs_attrib = cs_attrib_saved | SVM_SELECTOR_L_MASK |
>  		SVM_SELECTOR_DB_MASK;
> -	vcpu0.vmcb->save.cs.attrib = cs_attrib;
> +	vmcb->save.cs.attrib = cs_attrib;
>  	report(svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
>  	       "CR0.PG=1 (%lx), CR4.PAE=1 (%lx), CS.L=1 and CS.D=1 (%x)",
>  	       efer, cr0, cr4, cs_attrib);
>  
> -	vcpu0.vmcb->save.cr0 = cr0_saved;
> -	vcpu0.vmcb->save.cr4 = cr4_saved;
> -	vcpu0.vmcb->save.efer = efer_saved;
> -	vcpu0.vmcb->save.cs.attrib = cs_attrib_saved;
> +	vmcb->save.cr0 = cr0_saved;
> +	vmcb->save.cr4 = cr4_saved;
> +	vmcb->save.efer = efer_saved;
> +	vmcb->save.cs.attrib = cs_attrib_saved;
>  }
>  
>  static void test_cr0(struct svm_test_context *ctx)
> @@ -2137,37 +2206,39 @@ static void test_cr0(struct svm_test_context *ctx)
>  	/*
>  	 * Un-setting CR0.CD and setting CR0.NW is an illegal combination
>  	 */
> -	u64 cr0_saved = vcpu0.vmcb->save.cr0;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	u64 cr0_saved = vmcb->save.cr0;
>  	u64 cr0 = cr0_saved;
>  
>  	cr0 |= X86_CR0_CD;
>  	cr0 &= ~X86_CR0_NW;
> -	vcpu0.vmcb->save.cr0 = cr0;
> +	vmcb->save.cr0 = cr0;
>  	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=0: %lx",
>  		cr0);
>  	cr0 |= X86_CR0_NW;
> -	vcpu0.vmcb->save.cr0 = cr0;
> +	vmcb->save.cr0 = cr0;
>  	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=1: %lx",
>  		cr0);
>  	cr0 &= ~X86_CR0_NW;
>  	cr0 &= ~X86_CR0_CD;
> -	vcpu0.vmcb->save.cr0 = cr0;
> +	vmcb->save.cr0 = cr0;
>  	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR0 CD=0,NW=0: %lx",
>  		cr0);
>  	cr0 |= X86_CR0_NW;
> -	vcpu0.vmcb->save.cr0 = cr0;
> +	vmcb->save.cr0 = cr0;
>  	report (svm_vmrun(ctx) == SVM_EXIT_ERR, "Test CR0 CD=0,NW=1: %lx",
>  		cr0);
> -	vcpu0.vmcb->save.cr0 = cr0_saved;
> +	vmcb->save.cr0 = cr0_saved;
>  
>  	/*
>  	 * CR0[63:32] are not zero
>  	 */
>  	cr0 = cr0_saved;
>  
> -	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "CR0", vcpu0.vmcb->save.cr0, cr0_saved,
> +	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "CR0", vmcb->save.cr0, cr0_saved,
>  				   SVM_CR0_RESERVED_MASK);
> -	vcpu0.vmcb->save.cr0 = cr0_saved;
> +	vmcb->save.cr0 = cr0_saved;
>  }
>  
>  static void test_cr3(struct svm_test_context *ctx)
> @@ -2176,37 +2247,39 @@ static void test_cr3(struct svm_test_context *ctx)
>  	 * CR3 MBZ bits based on different modes:
>  	 *   [63:52] - long mode
>  	 */
> -	u64 cr3_saved = vcpu0.vmcb->save.cr3;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	u64 cr3_saved = vmcb->save.cr3;
>  
>  	SVM_TEST_CR_RESERVED_BITS(ctx, 0, 63, 1, 3, cr3_saved,
>  				  SVM_CR3_LONG_MBZ_MASK, SVM_EXIT_ERR, "");
>  
> -	vcpu0.vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_MBZ_MASK;
> +	vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_MBZ_MASK;
>  	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx",
> -	       vcpu0.vmcb->save.cr3);
> +	       vmcb->save.cr3);
>  
>  	/*
>  	 * CR3 non-MBZ reserved bits based on different modes:
>  	 *   [11:5] [2:0] - long mode (PCIDE=0)
>  	 *          [2:0] - PAE legacy mode
>  	 */
> -	u64 cr4_saved = vcpu0.vmcb->save.cr4;
> +	u64 cr4_saved = vmcb->save.cr4;
>  	u64 *pdpe = npt_get_pml4e();
>  
>  	/*
>  	 * Long mode
>  	 */
>  	if (this_cpu_has(X86_FEATURE_PCID)) {
> -		vcpu0.vmcb->save.cr4 = cr4_saved | X86_CR4_PCIDE;
> +		vmcb->save.cr4 = cr4_saved | X86_CR4_PCIDE;
>  		SVM_TEST_CR_RESERVED_BITS(ctx, 0, 11, 1, 3, cr3_saved,
>  					  SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_VMMCALL, "(PCIDE=1) ");
>  
> -		vcpu0.vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_RESERVED_MASK;
> +		vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_RESERVED_MASK;
>  		report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx",
> -		       vcpu0.vmcb->save.cr3);
> +		       vmcb->save.cr3);
>  	}
>  
> -	vcpu0.vmcb->save.cr4 = cr4_saved & ~X86_CR4_PCIDE;
> +	vmcb->save.cr4 = cr4_saved & ~X86_CR4_PCIDE;
>  
>  	if (!npt_supported())
>  		goto skip_npt_only;
> @@ -2218,44 +2291,46 @@ static void test_cr3(struct svm_test_context *ctx)
>  				  SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_NPF, "(PCIDE=0) ");
>  
>  	pdpe[0] |= 1ULL;
> -	vcpu0.vmcb->save.cr3 = cr3_saved;
> +	vmcb->save.cr3 = cr3_saved;
>  
>  	/*
>  	 * PAE legacy
>  	 */
>  	pdpe[0] &= ~1ULL;
> -	vcpu0.vmcb->save.cr4 = cr4_saved | X86_CR4_PAE;
> +	vmcb->save.cr4 = cr4_saved | X86_CR4_PAE;
>  	SVM_TEST_CR_RESERVED_BITS(ctx, 0, 2, 1, 3, cr3_saved,
>  				  SVM_CR3_PAE_LEGACY_RESERVED_MASK, SVM_EXIT_NPF, "(PAE) ");
>  
>  	pdpe[0] |= 1ULL;
>  
>  skip_npt_only:
> -	vcpu0.vmcb->save.cr3 = cr3_saved;
> -	vcpu0.vmcb->save.cr4 = cr4_saved;
> +	vmcb->save.cr3 = cr3_saved;
> +	vmcb->save.cr4 = cr4_saved;
>  }
>  
>  /* Test CR4 MBZ bits based on legacy or long modes */
>  static void test_cr4(struct svm_test_context *ctx)
>  {
> -	u64 cr4_saved = vcpu0.vmcb->save.cr4;
> -	u64 efer_saved = vcpu0.vmcb->save.efer;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	u64 cr4_saved = vmcb->save.cr4;
> +	u64 efer_saved = vmcb->save.efer;
>  	u64 efer = efer_saved;
>  
>  	efer &= ~EFER_LME;
> -	vcpu0.vmcb->save.efer = efer;
> +	vmcb->save.efer = efer;
>  	SVM_TEST_CR_RESERVED_BITS(ctx, 12, 31, 1, 4, cr4_saved,
>  				  SVM_CR4_LEGACY_RESERVED_MASK, SVM_EXIT_ERR, "");
>  
>  	efer |= EFER_LME;
> -	vcpu0.vmcb->save.efer = efer;
> +	vmcb->save.efer = efer;
>  	SVM_TEST_CR_RESERVED_BITS(ctx, 12, 31, 1, 4, cr4_saved,
>  				  SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, "");
>  	SVM_TEST_CR_RESERVED_BITS(ctx, 32, 63, 4, 4, cr4_saved,
>  				  SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, "");
>  
> -	vcpu0.vmcb->save.cr4 = cr4_saved;
> -	vcpu0.vmcb->save.efer = efer_saved;
> +	vmcb->save.cr4 = cr4_saved;
> +	vmcb->save.efer = efer_saved;
>  }
>  
>  static void test_dr(struct svm_test_context *ctx)
> @@ -2263,27 +2338,29 @@ static void test_dr(struct svm_test_context *ctx)
>  	/*
>  	 * DR6[63:32] and DR7[63:32] are MBZ
>  	 */
> -	u64 dr_saved = vcpu0.vmcb->save.dr6;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
> -	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "DR6", vcpu0.vmcb->save.dr6, dr_saved,
> +	u64 dr_saved = vmcb->save.dr6;
> +
> +	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "DR6", vmcb->save.dr6, dr_saved,
>  				   SVM_DR6_RESERVED_MASK);
> -	vcpu0.vmcb->save.dr6 = dr_saved;
> +	vmcb->save.dr6 = dr_saved;
>  
> -	dr_saved = vcpu0.vmcb->save.dr7;
> -	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "DR7", vcpu0.vmcb->save.dr7, dr_saved,
> +	dr_saved = vmcb->save.dr7;
> +	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "DR7", vmcb->save.dr7, dr_saved,
>  				   SVM_DR7_RESERVED_MASK);
>  
> -	vcpu0.vmcb->save.dr7 = dr_saved;
> +	vmcb->save.dr7 = dr_saved;
>  }
>  
>  /* TODO: verify if high 32-bits are sign- or zero-extended on bare metal */
> -#define	TEST_BITMAP_ADDR(ctx, save_intercept, type, addr, exit_code,		\
> +#define	TEST_BITMAP_ADDR(ctx, save_intercept, type, addr, exit_code,	\
>  			 msg) {						\
> -		vcpu0.vmcb->control.intercept = saved_intercept | 1ULL << type; \
> +		ctx->vcpu->vmcb->control.intercept = saved_intercept | 1ULL << type; \
>  		if (type == INTERCEPT_MSR_PROT)				\
> -			vcpu0.vmcb->control.msrpm_base_pa = addr;		\
> +			ctx->vcpu->vmcb->control.msrpm_base_pa = addr;	\
>  		else							\
> -			vcpu0.vmcb->control.iopm_base_pa = addr;		\
> +			ctx->vcpu->vmcb->control.iopm_base_pa = addr;	\
>  		report(svm_vmrun(ctx) == exit_code,			\
>  		       "Test %s address: %lx", msg, addr);		\
>  	}
> @@ -2306,7 +2383,9 @@ static void test_dr(struct svm_test_context *ctx)
>   */
>  static void test_msrpm_iopm_bitmap_addrs(struct svm_test_context *ctx)
>  {
> -	u64 saved_intercept = vcpu0.vmcb->control.intercept;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	u64 saved_intercept = vmcb->control.intercept;
>  	u64 addr_beyond_limit = 1ull << cpuid_maxphyaddr();
>  	u64 addr = virt_to_phys(svm_get_msr_bitmap()) & (~((1ull << 12) - 1));
>  	u8 *io_bitmap = svm_get_io_bitmap();
> @@ -2348,7 +2427,7 @@ static void test_msrpm_iopm_bitmap_addrs(struct svm_test_context *ctx)
>  	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_IOIO_PROT, addr,
>  			 SVM_EXIT_VMMCALL, "IOPM");
>  
> -	vcpu0.vmcb->control.intercept = saved_intercept;
> +	vmcb->control.intercept = saved_intercept;
>  }
>  
>  /*
> @@ -2378,22 +2457,24 @@ static void test_canonicalization(struct svm_test_context *ctx)
>  	u64 saved_addr;
>  	u64 return_value;
>  	u64 addr_limit;
> -	u64 vmcb_phys = virt_to_phys(vcpu0.vmcb);
> +
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +	u64 vmcb_phys = virt_to_phys(vmcb);
>  
>  	addr_limit = (this_cpu_has(X86_FEATURE_LA57)) ? 57 : 48;
>  	u64 noncanonical_mask = NONCANONICAL & ~((1ul << addr_limit) - 1);
>  
> -	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.fs.base, "FS");
> -	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.gs.base, "GS");
> -	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.ldtr.base, "LDTR");
> -	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.tr.base, "TR");
> -	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.kernel_gs_base, "KERNEL GS");
> -	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.es.base, "ES");
> -	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.cs.base, "CS");
> -	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.ss.base, "SS");
> -	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.ds.base, "DS");
> -	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.gdtr.base, "GDTR");
> -	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.idtr.base, "IDTR");
> +	TEST_CANONICAL_VMLOAD(ctx, vmcb->save.fs.base, "FS");
> +	TEST_CANONICAL_VMLOAD(ctx, vmcb->save.gs.base, "GS");
> +	TEST_CANONICAL_VMLOAD(ctx, vmcb->save.ldtr.base, "LDTR");
> +	TEST_CANONICAL_VMLOAD(ctx, vmcb->save.tr.base, "TR");
> +	TEST_CANONICAL_VMLOAD(ctx, vmcb->save.kernel_gs_base, "KERNEL GS");
> +	TEST_CANONICAL_VMRUN(ctx, vmcb->save.es.base, "ES");
> +	TEST_CANONICAL_VMRUN(ctx, vmcb->save.cs.base, "CS");
> +	TEST_CANONICAL_VMRUN(ctx, vmcb->save.ss.base, "SS");
> +	TEST_CANONICAL_VMRUN(ctx, vmcb->save.ds.base, "DS");
> +	TEST_CANONICAL_VMRUN(ctx, vmcb->save.gdtr.base, "GDTR");
> +	TEST_CANONICAL_VMRUN(ctx, vmcb->save.idtr.base, "IDTR");
>  }
>  
>  /*
> @@ -2442,12 +2523,14 @@ asm("guest_rflags_test_guest:\n\t"
>  
>  static void svm_test_singlestep(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	handle_exception(DB_VECTOR, guest_rflags_test_db_handler);
>  
>  	/*
>  	 * Trap expected after completion of first guest instruction
>  	 */
> -	vcpu0.vmcb->save.rflags |= X86_EFLAGS_TF;
> +	vmcb->save.rflags |= X86_EFLAGS_TF;
>  	report (__svm_vmrun(ctx, (u64)guest_rflags_test_guest) == SVM_EXIT_VMMCALL &&
>  		guest_rflags_test_trap_rip == (u64)&insn2,
>  		"Test EFLAGS.TF on VMRUN: trap expected  after completion of first guest instruction");
> @@ -2455,18 +2538,18 @@ static void svm_test_singlestep(struct svm_test_context *ctx)
>  	 * No trap expected
>  	 */
>  	guest_rflags_test_trap_rip = 0;
> -	vcpu0.vmcb->save.rip += 3;
> -	vcpu0.vmcb->save.rflags |= X86_EFLAGS_TF;
> -	report(__svm_vmrun(ctx, vcpu0.vmcb->save.rip) == SVM_EXIT_VMMCALL &&
> +	vmcb->save.rip += 3;
> +	vmcb->save.rflags |= X86_EFLAGS_TF;
> +	report(__svm_vmrun(ctx, vmcb->save.rip) == SVM_EXIT_VMMCALL &&
>  		guest_rflags_test_trap_rip == 0,
>  		"Test EFLAGS.TF on VMRUN: trap not expected");
>  
>  	/*
>  	 * Let guest finish execution
>  	 */
> -	vcpu0.vmcb->save.rip += 3;
> -	report(__svm_vmrun(ctx, vcpu0.vmcb->save.rip) == SVM_EXIT_VMMCALL &&
> -		vcpu0.vmcb->save.rip == (u64)&guest_end,
> +	vmcb->save.rip += 3;
> +	report(__svm_vmrun(ctx, vmcb->save.rip) == SVM_EXIT_VMMCALL &&
> +		vmcb->save.rip == (u64)&guest_end,
>  		"Test EFLAGS.TF on VMRUN: guest execution completion");
>  }
>  
> @@ -2538,7 +2621,8 @@ static void svm_vmrun_errata_test(struct svm_test_context *ctx)
>  
>  static void vmload_vmsave_guest_main(struct svm_test_context *ctx)
>  {
> -	u64 vmcb_phys = virt_to_phys(vcpu0.vmcb);
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +	u64 vmcb_phys = virt_to_phys(vmcb);
>  
>  	asm volatile ("vmload %0" : : "a"(vmcb_phys));
>  	asm volatile ("vmsave %0" : : "a"(vmcb_phys));
> @@ -2546,7 +2630,8 @@ static void vmload_vmsave_guest_main(struct svm_test_context *ctx)
>  
>  static void svm_vmload_vmsave(struct svm_test_context *ctx)
>  {
> -	u32 intercept_saved = vcpu0.vmcb->control.intercept;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +	u32 intercept_saved = vmcb->control.intercept;
>  
>  	test_set_guest(vmload_vmsave_guest_main);
>  
> @@ -2554,49 +2639,49 @@ static void svm_vmload_vmsave(struct svm_test_context *ctx)
>  	 * Disabling the intercepts for VMLOAD and VMSAVE doesn't cause the
>  	 * respective #VMEXITs to the host
>  	 */
> -	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
> -	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
> +	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
> +	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
>  	svm_vmrun(ctx);
> -	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
> +	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
>  
>  	/*
>  	 * Enabling the intercept for VMLOAD or VMSAVE causes the respective
>  	 * #VMEXIT to the host
>  	 */
> -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
> +	vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
>  	svm_vmrun(ctx);
> -	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
> +	report(vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
> -	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
> -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
> +	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
> +	vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
>  	svm_vmrun(ctx);
> -	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
> +	report(vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
> -	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
> +	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
>  	svm_vmrun(ctx);
> -	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
> +	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
>  
> -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
> +	vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
>  	svm_vmrun(ctx);
> -	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
> +	report(vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
> -	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
> +	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
>  	svm_vmrun(ctx);
> -	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
> +	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
>  
> -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
> +	vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
>  	svm_vmrun(ctx);
> -	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
> +	report(vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
> -	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
> +	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
>  	svm_vmrun(ctx);
> -	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
> +	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
>  
> -	vcpu0.vmcb->control.intercept = intercept_saved;
> +	vmcb->control.intercept = intercept_saved;
>  }
>  
>  static void prepare_vgif_enabled(struct svm_test_context *ctx)
> @@ -2610,45 +2695,47 @@ static void test_vgif(struct svm_test_context *ctx)
>  
>  static bool vgif_finished(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	switch (get_test_stage(ctx))
>  		{
>  		case 0:
> -			if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> +			if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
>  				report_fail("VMEXIT not due to vmmcall.");
>  				return true;
>  			}
> -			vcpu0.vmcb->control.int_ctl |= V_GIF_ENABLED_MASK;
> -			vcpu0.vmcb->save.rip += 3;
> +			vmcb->control.int_ctl |= V_GIF_ENABLED_MASK;
> +			vmcb->save.rip += 3;
>  			inc_test_stage(ctx);
>  			break;
>  		case 1:
> -			if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> +			if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
>  				report_fail("VMEXIT not due to vmmcall.");
>  				return true;
>  			}
> -			if (!(vcpu0.vmcb->control.int_ctl & V_GIF_MASK)) {
> +			if (!(vmcb->control.int_ctl & V_GIF_MASK)) {
>  				report_fail("Failed to set VGIF when executing STGI.");
> -				vcpu0.vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
> +				vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
>  				return true;
>  			}
>  			report_pass("STGI set VGIF bit.");
> -			vcpu0.vmcb->save.rip += 3;
> +			vmcb->save.rip += 3;
>  			inc_test_stage(ctx);
>  			break;
>  		case 2:
> -			if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> +			if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
>  				report_fail("VMEXIT not due to vmmcall.");
>  				return true;
>  			}
> -			if (vcpu0.vmcb->control.int_ctl & V_GIF_MASK) {
> +			if (vmcb->control.int_ctl & V_GIF_MASK) {
>  				report_fail("Failed to clear VGIF when executing CLGI.");
> -				vcpu0.vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
> +				vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
>  				return true;
>  			}
>  			report_pass("CLGI cleared VGIF bit.");
> -			vcpu0.vmcb->save.rip += 3;
> +			vmcb->save.rip += 3;
>  			inc_test_stage(ctx);
> -			vcpu0.vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
> +			vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
>  			break;
>  		default:
>  			return true;
> @@ -2688,31 +2775,35 @@ static void pause_filter_run_test(struct svm_test_context *ctx,
>  				  int pause_iterations, int filter_value,
>  				  int wait_iterations, int threshold)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	test_set_guest(pause_filter_test_guest_main);
>  
>  	pause_test_counter = pause_iterations;
>  	wait_counter = wait_iterations;
>  
> -	vcpu0.vmcb->control.pause_filter_count = filter_value;
> -	vcpu0.vmcb->control.pause_filter_thresh = threshold;
> +	vmcb->control.pause_filter_count = filter_value;
> +	vmcb->control.pause_filter_thresh = threshold;
>  	svm_vmrun(ctx);
>  
>  	if (filter_value <= pause_iterations || wait_iterations < threshold)
> -		report(vcpu0.vmcb->control.exit_code == SVM_EXIT_PAUSE,
> +		report(vmcb->control.exit_code == SVM_EXIT_PAUSE,
>  		       "expected PAUSE vmexit");
>  	else
> -		report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL,
> +		report(vmcb->control.exit_code == SVM_EXIT_VMMCALL,
>  		       "no expected PAUSE vmexit");
>  }
>  
>  static void pause_filter_test(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	if (!pause_filter_supported()) {
>  		report_skip("PAUSE filter not supported in the guest");
>  		return;
>  	}
>  
> -	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_PAUSE);
> +	vmcb->control.intercept |= (1 << INTERCEPT_PAUSE);
>  
>  	// filter count more than pause count - no VMexit
>  	pause_filter_run_test(ctx, 10, 9, 0, 0);
> @@ -2738,10 +2829,12 @@ static void pause_filter_test(struct svm_test_context *ctx)
>  /* If CR0.TS and CR0.EM are cleared in L2, no #NM is generated. */
>  static void svm_no_nm_test(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	write_cr0(read_cr0() & ~X86_CR0_TS);
>  	test_set_guest((test_guest_func)fnop);
>  
> -	vcpu0.vmcb->save.cr0 = vcpu0.vmcb->save.cr0 & ~(X86_CR0_TS | X86_CR0_EM);
> +	vmcb->save.cr0 = vmcb->save.cr0 & ~(X86_CR0_TS | X86_CR0_EM);
>  	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL,
>  	       "fnop with CR0.TS and CR0.EM unset no #NM excpetion");
>  }
> @@ -2872,20 +2965,21 @@ static void svm_lbrv_test0(struct svm_test_context *ctx)
>  
>  static void svm_lbrv_test1(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
>  	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(1)");
>  
> -	vcpu0.vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
> -	vcpu0.vmcb->control.virt_ext = 0;
> +	vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
> +	vmcb->control.virt_ext = 0;
>  
>  	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
>  	DO_BRANCH(host_branch1);
> -	SVM_VMRUN(&vcpu0);
> +	SVM_VMRUN(ctx->vcpu);
>  	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
>  
> -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> +	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
>  		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
> -		       vcpu0.vmcb->control.exit_code);
> +		       vmcb->control.exit_code);
>  		return;
>  	}
>  
> @@ -2895,21 +2989,23 @@ static void svm_lbrv_test1(struct svm_test_context *ctx)
>  
>  static void svm_lbrv_test2(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(2)");
>  
> -	vcpu0.vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
> -	vcpu0.vmcb->control.virt_ext = 0;
> +	vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
> +	vmcb->control.virt_ext = 0;
>  
>  	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
>  	DO_BRANCH(host_branch2);
>  	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
> -	SVM_VMRUN(&vcpu0);
> +	SVM_VMRUN(ctx->vcpu);
>  	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
>  	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
>  
> -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> +	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
>  		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
> -		       vcpu0.vmcb->control.exit_code);
> +		       vmcb->control.exit_code);
>  		return;
>  	}
>  
> @@ -2919,32 +3015,34 @@ static void svm_lbrv_test2(struct svm_test_context *ctx)
>  
>  static void svm_lbrv_nested_test1(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	if (!lbrv_supported()) {
>  		report_skip("LBRV not supported in the guest");
>  		return;
>  	}
>  
>  	report(true, "Test that with LBRV enabled, guest LBR state doesn't leak (1)");
> -	vcpu0.vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
> -	vcpu0.vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
> -	vcpu0.vmcb->save.dbgctl = DEBUGCTLMSR_LBR;
> +	vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
> +	vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
> +	vmcb->save.dbgctl = DEBUGCTLMSR_LBR;
>  
>  	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
>  	DO_BRANCH(host_branch3);
> -	SVM_VMRUN(&vcpu0);
> +	SVM_VMRUN(ctx->vcpu);
>  	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
>  	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
>  
> -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> +	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
>  		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
> -		       vcpu0.vmcb->control.exit_code);
> +		       vmcb->control.exit_code);
>  		return;
>  	}
>  
> -	if (vcpu0.vmcb->save.dbgctl != 0) {
> +	if (vmcb->save.dbgctl != 0) {
>  		report(false,
>  		       "unexpected virtual guest MSR_IA32_DEBUGCTLMSR value 0x%lx",
> -		       vcpu0.vmcb->save.dbgctl);
> +		       vmcb->save.dbgctl);
>  		return;
>  	}
>  
> @@ -2954,28 +3052,30 @@ static void svm_lbrv_nested_test1(struct svm_test_context *ctx)
>  
>  static void svm_lbrv_nested_test2(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	if (!lbrv_supported()) {
>  		report_skip("LBRV not supported in the guest");
>  		return;
>  	}
>  
>  	report(true, "Test that with LBRV enabled, guest LBR state doesn't leak (2)");
> -	vcpu0.vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
> -	vcpu0.vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
> +	vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
> +	vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
>  
> -	vcpu0.vmcb->save.dbgctl = 0;
> -	vcpu0.vmcb->save.br_from = (u64)&host_branch2_from;
> -	vcpu0.vmcb->save.br_to = (u64)&host_branch2_to;
> +	vmcb->save.dbgctl = 0;
> +	vmcb->save.br_from = (u64)&host_branch2_from;
> +	vmcb->save.br_to = (u64)&host_branch2_to;
>  
>  	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
>  	DO_BRANCH(host_branch4);
> -	SVM_VMRUN(&vcpu0);
> +	SVM_VMRUN(ctx->vcpu);
>  	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
>  	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
>  
> -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> +	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
>  		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
> -		       vcpu0.vmcb->control.exit_code);
> +		       vmcb->control.exit_code);
>  		return;
>  	}
>  
> @@ -3005,6 +3105,8 @@ static void dummy_nmi_handler(struct ex_regs *regs)
>  static void svm_intr_intercept_mix_run_guest(struct svm_test_context *ctx,
>  					     volatile int *counter, int expected_vmexit)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	if (counter)
>  		*counter = 0;
>  
> @@ -3021,8 +3123,8 @@ static void svm_intr_intercept_mix_run_guest(struct svm_test_context *ctx,
>  	if (counter)
>  		report(*counter == 1, "Interrupt is expected");
>  
> -	report(vcpu0.vmcb->control.exit_code == expected_vmexit, "Test expected VM exit");
> -	report(vcpu0.vmcb->save.rflags & X86_EFLAGS_IF, "Guest should have EFLAGS.IF set now");
> +	report(vmcb->control.exit_code == expected_vmexit, "Test expected VM exit");
> +	report(vmcb->save.rflags & X86_EFLAGS_IF, "Guest should have EFLAGS.IF set now");
>  	cli();
>  }
>  
> @@ -3038,12 +3140,14 @@ static void svm_intr_intercept_mix_if_guest(struct svm_test_context *ctx)
>  
>  static void svm_intr_intercept_mix_if(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	// make a physical interrupt pending
>  	handle_irq(0x55, dummy_isr);
>  
> -	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_INTR);
> -	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> -	vcpu0.vmcb->save.rflags &= ~X86_EFLAGS_IF;
> +	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
> +	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> +	vmcb->save.rflags &= ~X86_EFLAGS_IF;
>  
>  	test_set_guest(svm_intr_intercept_mix_if_guest);
>  	cli();
> @@ -3072,11 +3176,13 @@ static void svm_intr_intercept_mix_gif_guest(struct svm_test_context *ctx)
>  
>  static void svm_intr_intercept_mix_gif(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	handle_irq(0x55, dummy_isr);
>  
> -	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_INTR);
> -	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> -	vcpu0.vmcb->save.rflags &= ~X86_EFLAGS_IF;
> +	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
> +	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> +	vmcb->save.rflags &= ~X86_EFLAGS_IF;
>  
>  	test_set_guest(svm_intr_intercept_mix_gif_guest);
>  	cli();
> @@ -3102,11 +3208,13 @@ static void svm_intr_intercept_mix_gif_guest2(struct svm_test_context *ctx)
>  
>  static void svm_intr_intercept_mix_gif2(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	handle_irq(0x55, dummy_isr);
>  
> -	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_INTR);
> -	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> -	vcpu0.vmcb->save.rflags |= X86_EFLAGS_IF;
> +	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
> +	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> +	vmcb->save.rflags |= X86_EFLAGS_IF;
>  
>  	test_set_guest(svm_intr_intercept_mix_gif_guest2);
>  	svm_intr_intercept_mix_run_guest(ctx, &dummy_isr_recevied, SVM_EXIT_INTR);
> @@ -3131,11 +3239,13 @@ static void svm_intr_intercept_mix_nmi_guest(struct svm_test_context *ctx)
>  
>  static void svm_intr_intercept_mix_nmi(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	handle_exception(2, dummy_nmi_handler);
>  
> -	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_NMI);
> -	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> -	vcpu0.vmcb->save.rflags |= X86_EFLAGS_IF;
> +	vmcb->control.intercept |= (1 << INTERCEPT_NMI);
> +	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> +	vmcb->save.rflags |= X86_EFLAGS_IF;
>  
>  	test_set_guest(svm_intr_intercept_mix_nmi_guest);
>  	svm_intr_intercept_mix_run_guest(ctx, &nmi_recevied, SVM_EXIT_NMI);
> @@ -3157,8 +3267,10 @@ static void svm_intr_intercept_mix_smi_guest(struct svm_test_context *ctx)
>  
>  static void svm_intr_intercept_mix_smi(struct svm_test_context *ctx)
>  {
> -	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_SMI);
> -	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	vmcb->control.intercept |= (1 << INTERCEPT_SMI);
> +	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
>  	test_set_guest(svm_intr_intercept_mix_smi_guest);
>  	svm_intr_intercept_mix_run_guest(ctx, NULL, SVM_EXIT_SMI);
>  }
> @@ -3215,14 +3327,16 @@ static void handle_exception_in_l2(struct svm_test_context *ctx, u8 vector)
>  
>  static void handle_exception_in_l1(struct svm_test_context *ctx, u32 vector)
>  {
> -	u32 old_ie = vcpu0.vmcb->control.intercept_exceptions;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	u32 old_ie = vmcb->control.intercept_exceptions;
>  
> -	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << vector);
> +	vmcb->control.intercept_exceptions |= (1ULL << vector);
>  
>  	report(svm_vmrun(ctx) == (SVM_EXIT_EXCP_BASE + vector),
>  		"%s handled by L1",  exception_mnemonic(vector));
>  
> -	vcpu0.vmcb->control.intercept_exceptions = old_ie;
> +	vmcb->control.intercept_exceptions = old_ie;
>  }
>  
>  static void svm_exception_test(struct svm_test_context *ctx)
> @@ -3235,10 +3349,10 @@ static void svm_exception_test(struct svm_test_context *ctx)
>  		test_set_guest((test_guest_func)t->guest_code);
>  
>  		handle_exception_in_l2(ctx, t->vector);
> -		svm_vcpu_ident(&vcpu0);
> +		svm_vcpu_ident(ctx->vcpu);
>  
>  		handle_exception_in_l1(ctx, t->vector);
> -		svm_vcpu_ident(&vcpu0);
> +		svm_vcpu_ident(ctx->vcpu);
>  	}
>  }
>  
> @@ -3250,11 +3364,13 @@ static void shutdown_intercept_test_guest(struct svm_test_context *ctx)
>  }
>  static void svm_shutdown_intercept_test(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	test_set_guest(shutdown_intercept_test_guest);
> -	vcpu0.vmcb->save.idtr.base = (u64)alloc_vpage();
> -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN);
> +	vmcb->save.idtr.base = (u64)alloc_vpage();
> +	vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN);
>  	svm_vmrun(ctx);
> -	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed");
> +	report(vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed");
>  }
>  
>  /*
> @@ -3264,7 +3380,9 @@ static void svm_shutdown_intercept_test(struct svm_test_context *ctx)
>  
>  static void exception_merging_prepare(struct svm_test_context *ctx)
>  {
> -	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
>  
>  	/* break the UD vector IDT entry to get a #GP */
>  	boot_idt[UD_VECTOR].type = 1;
> @@ -3277,15 +3395,17 @@ static void exception_merging_test(struct svm_test_context *ctx)
>  
>  static bool exception_merging_finished(struct svm_test_context *ctx)
>  {
> -	u32 vec = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
> -	u32 type = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
> +	u32 vec = vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
> +	u32 type = vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
>  
> -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + GP_VECTOR) {
> +	if (vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + GP_VECTOR) {
>  		report(false, "unexpected VM exit");
>  		goto out;
>  	}
>  
> -	if (!(vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)) {
> +	if (!(vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)) {
>  		report(false, "EXITINTINFO not valid");
>  		goto out;
>  	}
> @@ -3320,8 +3440,10 @@ static bool exception_merging_check(struct svm_test_context *ctx)
>  
>  static void interrupt_merging_prepare(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> +
>  	/* intercept #GP */
> -	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
> +	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
>  
>  	/* set local APIC to inject external interrupts */
>  	apic_setup_timer(TIMER_VECTOR, APIC_LVT_TIMER_PERIODIC);
> @@ -3342,16 +3464,17 @@ static void interrupt_merging_test(struct svm_test_context *ctx)
>  
>  static bool interrupt_merging_finished(struct svm_test_context *ctx)
>  {
> +	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
> -	u32 vec = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
> -	u32 type = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
> -	u32 error_code = vcpu0.vmcb->control.exit_info_1;
> +	u32 vec = vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
> +	u32 type = vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
> +	u32 error_code = vmcb->control.exit_info_1;
>  
>  	/* exit on external interrupts is disabled, thus delivery of the
>  	 * timer interrupt should be attempted, but due to the incorrect
>  	 * IDT entry a #GP should be raised
>  	 */
> -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + GP_VECTOR) {
> +	if (vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + GP_VECTOR) {
>  		report(false, "unexpected VM exit");
>  		goto cleanup;
>  	}
> @@ -3363,7 +3486,7 @@ static bool interrupt_merging_finished(struct svm_test_context *ctx)
>  	}
>  
>  	/* Original interrupt should be preserved in EXITINTINFO */
> -	if (!(vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)) {
> +	if (!(vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)) {
>  		report(false, "EXITINTINFO not valid");
>  		goto cleanup;
>  	}
> 


^ permalink raw reply	[flat|nested] 56+ messages in thread
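
A note on the recurring pattern in the hunks above: every test and helper
that used to dereference the global vcpu0 now takes the test context and
derives its VMCB via ctx->vcpu->vmcb. Below is a minimal sketch of that
idiom, with stub struct definitions standing in for the real ones in the
series' svm headers (the actual structs carry much more state -- registers,
stack, NPT pointers, and so on), so take it as an illustration rather than
the series' exact code:

	#include <stdint.h>
	#include <stdbool.h>

	/* Stubs only; the real definitions live in the series' svm headers. */
	struct vmcb_control_area { uint32_t exit_code; };
	struct vmcb { struct vmcb_control_area control; };
	struct svm_vcpu { struct vmcb *vmcb; };
	struct svm_test_context { struct svm_vcpu *vcpu; };

	static bool check_exit(struct svm_test_context *ctx, uint32_t expected)
	{
		/* Derive the VMCB from the context, not from a global vCPU. */
		struct vmcb *vmcb = ctx->vcpu->vmcb;

		return vmcb->control.exit_code == expected;
	}

Threading the context through the prepare/test/finished callbacks this way
removes the tests' hidden dependency on a single global vCPU.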

* Re: [kvm-unit-tests PATCH v3 24/27] svm: use svm_test_context in v2 tests
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 24/27] svm: use svm_test_context in v2 tests Maxim Levitsky
@ 2022-12-02 10:27   ` Emanuele Giuseppe Esposito
  0 siblings, 0 replies; 56+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-12-02 10:27 UTC (permalink / raw)
  To: Maxim Levitsky, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank



On 22/11/2022 at 17:11, Maxim Levitsky wrote:
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>

Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> ---
>  x86/svm.c       |  14 +--
>  x86/svm.h       |   7 +-
>  x86/svm_npt.c   |  20 ++--
>  x86/svm_tests.c | 262 ++++++++++++++++++++++++------------------------
>  4 files changed, 152 insertions(+), 151 deletions(-)
> 
> diff --git a/x86/svm.c b/x86/svm.c
> index 6381dee9..06d34ac4 100644
> --- a/x86/svm.c
> +++ b/x86/svm.c
> @@ -76,21 +76,18 @@ static void test_thunk(struct svm_test_context *ctx)
>  }
>  
>  
> -struct svm_test_context *v2_ctx;
> -
> -
> -int __svm_vmrun(u64 rip)
> +int __svm_vmrun(struct svm_test_context *ctx, u64 rip)
>  {
>  	vcpu0.vmcb->save.rip = (ulong)rip;
> -	vcpu0.regs.rdi = (ulong)v2_ctx;
> +	vcpu0.regs.rdi = (ulong)ctx;
>  	vcpu0.regs.rsp = (ulong)(vcpu0.stack);
>  	SVM_VMRUN(&vcpu0);
>  	return vcpu0.vmcb->control.exit_code;
>  }
>  
> -int svm_vmrun(void)
> +int svm_vmrun(struct svm_test_context *ctx)
>  {
> -	return __svm_vmrun((u64)test_thunk);
> +	return __svm_vmrun(ctx, (u64)test_thunk);
>  }
>  
>  static noinline void test_run(struct svm_test_context *ctx)
> @@ -98,8 +95,7 @@ static noinline void test_run(struct svm_test_context *ctx)
>  	svm_vcpu_ident(&vcpu0);
>  
>  	if (ctx->test->v2) {
> -		v2_ctx = ctx;
> -		ctx->test->v2();
> +		ctx->test->v2(ctx);
>  		return;
>  	}
>  
> diff --git a/x86/svm.h b/x86/svm.h
> index 01d07a54..961c4de3 100644
> --- a/x86/svm.h
> +++ b/x86/svm.h
> @@ -23,7 +23,7 @@ struct svm_test {
>  	bool (*finished)(struct svm_test_context *ctx);
>  	bool (*succeeded)(struct svm_test_context *ctx);
>  	/* Alternative test interface. */
> -	void (*v2)(void);
> +	void (*v2)(struct svm_test_context *ctx);
>  	int on_vcpu;
>  };
>  
> @@ -39,9 +39,8 @@ bool default_finished(struct svm_test_context *ctx);
>  int get_test_stage(struct svm_test_context *ctx);
>  void set_test_stage(struct svm_test_context *ctx, int s);
>  void inc_test_stage(struct svm_test_context *ctx);
> -int __svm_vmrun(u64 rip);
> -void __svm_bare_vmrun(void);
> -int svm_vmrun(void);
> +int __svm_vmrun(struct svm_test_context *ctx, u64 rip);
> +int svm_vmrun(struct svm_test_context *ctx);
>  void test_set_guest(test_guest_func func);
>  
>  
> diff --git a/x86/svm_npt.c b/x86/svm_npt.c
> index fe6cbb29..fc16b4be 100644
> --- a/x86/svm_npt.c
> +++ b/x86/svm_npt.c
> @@ -189,7 +189,8 @@ static void basic_guest_main(struct svm_test_context *ctx)
>  {
>  }
>  
> -static void __svm_npt_rsvd_bits_test(u64 * pxe, u64 rsvd_bits, u64 efer,
> +static void __svm_npt_rsvd_bits_test(struct svm_test_context *ctx,
> +				     u64 * pxe, u64 rsvd_bits, u64 efer,
>  				     ulong cr4, u64 guest_efer, ulong guest_cr4)
>  {
>  	u64 pxe_orig = *pxe;
> @@ -204,7 +205,7 @@ static void __svm_npt_rsvd_bits_test(u64 * pxe, u64 rsvd_bits, u64 efer,
>  
>  	*pxe |= rsvd_bits;
>  
> -	exit_reason = svm_vmrun();
> +	exit_reason = svm_vmrun(ctx);
>  
>  	report(exit_reason == SVM_EXIT_NPF,
>  	       "Wanted #NPF on rsvd bits = 0x%lx, got exit = 0x%x", rsvd_bits,
> @@ -236,7 +237,8 @@ static void __svm_npt_rsvd_bits_test(u64 * pxe, u64 rsvd_bits, u64 efer,
>  	*pxe = pxe_orig;
>  }
>  
> -static void _svm_npt_rsvd_bits_test(u64 * pxe, u64 pxe_rsvd_bits, u64 efer,
> +static void _svm_npt_rsvd_bits_test(struct svm_test_context *ctx,
> +				    u64 * pxe, u64 pxe_rsvd_bits, u64 efer,
>  				    ulong cr4, u64 guest_efer, ulong guest_cr4)
>  {
>  	u64 rsvd_bits;
> @@ -277,7 +279,7 @@ static void _svm_npt_rsvd_bits_test(u64 * pxe, u64 pxe_rsvd_bits, u64 efer,
>  		else
>  			guest_cr4 &= ~X86_CR4_SMEP;
>  
> -		__svm_npt_rsvd_bits_test(pxe, rsvd_bits, efer, cr4,
> +		__svm_npt_rsvd_bits_test(ctx, pxe, rsvd_bits, efer, cr4,
>  					 guest_efer, guest_cr4);
>  	}
>  }
> @@ -305,7 +307,7 @@ static u64 get_random_bits(u64 hi, u64 low)
>  	return rsvd_bits;
>  }
>  
> -static void svm_npt_rsvd_bits_test(void)
> +static void svm_npt_rsvd_bits_test(struct svm_test_context *ctx)
>  {
>  	u64 saved_efer, host_efer, sg_efer, guest_efer;
>  	ulong saved_cr4, host_cr4, sg_cr4, guest_cr4;
> @@ -330,22 +332,22 @@ static void svm_npt_rsvd_bits_test(void)
>  	if (cpuid_maxphyaddr() >= 52)
>  		goto skip_pte_test;
>  
> -	_svm_npt_rsvd_bits_test(npt_get_pte((u64) basic_guest_main),
> +	_svm_npt_rsvd_bits_test(ctx, npt_get_pte((u64) basic_guest_main),
>  				get_random_bits(51, cpuid_maxphyaddr()),
>  				host_efer, host_cr4, guest_efer, guest_cr4);
>  
>  skip_pte_test:
> -	_svm_npt_rsvd_bits_test(npt_get_pde((u64) basic_guest_main),
> +	_svm_npt_rsvd_bits_test(ctx, npt_get_pde((u64) basic_guest_main),
>  				get_random_bits(20, 13) | PT_PAGE_SIZE_MASK,
>  				host_efer, host_cr4, guest_efer, guest_cr4);
>  
> -	_svm_npt_rsvd_bits_test(npt_get_pdpe((u64) basic_guest_main),
> +	_svm_npt_rsvd_bits_test(ctx, npt_get_pdpe((u64) basic_guest_main),
>  				PT_PAGE_SIZE_MASK |
>  				(this_cpu_has(X86_FEATURE_GBPAGES) ?
>  				 get_random_bits(29, 13) : 0), host_efer,
>  				host_cr4, guest_efer, guest_cr4);
>  
> -	_svm_npt_rsvd_bits_test(npt_get_pml4e(), BIT_ULL(8),
> +	_svm_npt_rsvd_bits_test(ctx, npt_get_pml4e(), BIT_ULL(8),
>  				host_efer, host_cr4, guest_efer, guest_cr4);
>  
>  	wrmsr(MSR_EFER, saved_efer);
> diff --git a/x86/svm_tests.c b/x86/svm_tests.c
> index c29e9a5d..6041ac24 100644
> --- a/x86/svm_tests.c
> +++ b/x86/svm_tests.c
> @@ -753,7 +753,8 @@ static void svm_tsc_scale_guest(struct svm_test_context *ctx)
>  		cpu_relax();
>  }
>  
> -static void svm_tsc_scale_run_testcase(u64 duration,
> +static void svm_tsc_scale_run_testcase(struct svm_test_context *ctx,
> +				       u64 duration,
>  				       double tsc_scale, u64 tsc_offset)
>  {
>  	u64 start_tsc, actual_duration;
> @@ -766,7 +767,7 @@ static void svm_tsc_scale_run_testcase(u64 duration,
>  
>  	start_tsc = rdtsc();
>  
> -	if (svm_vmrun() != SVM_EXIT_VMMCALL)
> +	if (svm_vmrun(ctx) != SVM_EXIT_VMMCALL)
>  		report_fail("unexpected vm exit code 0x%x", vcpu0.vmcb->control.exit_code);
>  
>  	actual_duration = (rdtsc() - start_tsc) >> TSC_SHIFT;
> @@ -775,7 +776,7 @@ static void svm_tsc_scale_run_testcase(u64 duration,
>  	       duration, actual_duration);
>  }
>  
> -static void svm_tsc_scale_test(void)
> +static void svm_tsc_scale_test(struct svm_test_context *ctx)
>  {
>  	int i;
>  
> @@ -796,11 +797,11 @@ static void svm_tsc_scale_test(void)
>  		report_info("duration=%d, tsc_scale=%d, tsc_offset=%ld",
>  			    duration, (int)(tsc_scale * 100), tsc_offset);
>  
> -		svm_tsc_scale_run_testcase(duration, tsc_scale, tsc_offset);
> +		svm_tsc_scale_run_testcase(ctx, duration, tsc_scale, tsc_offset);
>  	}
>  
> -	svm_tsc_scale_run_testcase(50, 255, rdrand());
> -	svm_tsc_scale_run_testcase(50, 0.0001, rdrand());
> +	svm_tsc_scale_run_testcase(ctx, 50, 255, rdrand());
> +	svm_tsc_scale_run_testcase(ctx, 50, 0.0001, rdrand());
>  }
>  
>  static void latency_prepare(struct svm_test_context *ctx)
> @@ -1983,7 +1984,7 @@ static void svm_cr4_osxsave_test_guest(struct svm_test_context *ctx)
>  	write_cr4(read_cr4() & ~X86_CR4_OSXSAVE);
>  }
>  
> -static void svm_cr4_osxsave_test(void)
> +static void svm_cr4_osxsave_test(struct svm_test_context *ctx)
>  {
>  	if (!this_cpu_has(X86_FEATURE_XSAVE)) {
>  		report_skip("XSAVE not detected");
> @@ -2000,7 +2001,7 @@ static void svm_cr4_osxsave_test(void)
>  	report(this_cpu_has(X86_FEATURE_OSXSAVE), "CPUID.01H:ECX.XSAVE set before VMRUN");
>  
>  	test_set_guest(svm_cr4_osxsave_test_guest);
> -	report(svm_vmrun() == SVM_EXIT_VMMCALL,
> +	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL,
>  	       "svm_cr4_osxsave_test_guest finished with VMMCALL");
>  
>  	report(this_cpu_has(X86_FEATURE_OSXSAVE), "CPUID.01H:ECX.XSAVE set after VMRUN");
> @@ -2011,7 +2012,7 @@ static void basic_guest_main(struct svm_test_context *ctx)
>  }
>  
>  
> -#define SVM_TEST_REG_RESERVED_BITS(start, end, inc, str_name, reg, val,	\
> +#define SVM_TEST_REG_RESERVED_BITS(ctx, start, end, inc, str_name, reg, val,	\
>  				   resv_mask)				\
>  {									\
>  	u64 tmp, mask;							\
> @@ -2023,12 +2024,12 @@ static void basic_guest_main(struct svm_test_context *ctx)
>  			continue;					\
>  		tmp = val | mask;					\
>  		reg = tmp;						\
> -		report(svm_vmrun() == SVM_EXIT_ERR, "Test %s %d:%d: %lx", \
> +		report(svm_vmrun(ctx) == SVM_EXIT_ERR, "Test %s %d:%d: %lx", \
>  		       str_name, end, start, tmp);			\
>  	}								\
>  }
>  
> -#define SVM_TEST_CR_RESERVED_BITS(start, end, inc, cr, val, resv_mask,	\
> +#define SVM_TEST_CR_RESERVED_BITS(ctx, start, end, inc, cr, val, resv_mask,	\
>  				  exit_code, test_name)			\
>  {									\
>  	u64 tmp, mask;							\
> @@ -2050,13 +2051,13 @@ static void basic_guest_main(struct svm_test_context *ctx)
>  		case 4:							\
>  			vcpu0.vmcb->save.cr4 = tmp;				\
>  		}							\
> -		r = svm_vmrun();					\
> +		r = svm_vmrun(ctx);					\
>  		report(r == exit_code, "Test CR%d %s%d:%d: %lx, wanted exit 0x%x, got 0x%x", \
>  		       cr, test_name, end, start, tmp, exit_code, r);	\
>  	}								\
>  }
>  
> -static void test_efer(void)
> +static void test_efer(struct svm_test_context *ctx)
>  {
>  	/*
>  	 * Un-setting EFER.SVME is illegal
> @@ -2064,10 +2065,10 @@ static void test_efer(void)
>  	u64 efer_saved = vcpu0.vmcb->save.efer;
>  	u64 efer = efer_saved;
>  
> -	report (svm_vmrun() == SVM_EXIT_VMMCALL, "EFER.SVME: %lx", efer);
> +	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "EFER.SVME: %lx", efer);
>  	efer &= ~EFER_SVME;
>  	vcpu0.vmcb->save.efer = efer;
> -	report (svm_vmrun() == SVM_EXIT_ERR, "EFER.SVME: %lx", efer);
> +	report (svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.SVME: %lx", efer);
>  	vcpu0.vmcb->save.efer = efer_saved;
>  
>  	/*
> @@ -2075,9 +2076,9 @@ static void test_efer(void)
>  	 */
>  	efer_saved = vcpu0.vmcb->save.efer;
>  
> -	SVM_TEST_REG_RESERVED_BITS(8, 9, 1, "EFER", vcpu0.vmcb->save.efer,
> +	SVM_TEST_REG_RESERVED_BITS(ctx, 8, 9, 1, "EFER", vcpu0.vmcb->save.efer,
>  				   efer_saved, SVM_EFER_RESERVED_MASK);
> -	SVM_TEST_REG_RESERVED_BITS(16, 63, 4, "EFER", vcpu0.vmcb->save.efer,
> +	SVM_TEST_REG_RESERVED_BITS(ctx, 16, 63, 4, "EFER", vcpu0.vmcb->save.efer,
>  				   efer_saved, SVM_EFER_RESERVED_MASK);
>  
>  	/*
> @@ -2094,7 +2095,7 @@ static void test_efer(void)
>  	vcpu0.vmcb->save.cr0 = cr0;
>  	cr4 = cr4_saved & ~X86_CR4_PAE;
>  	vcpu0.vmcb->save.cr4 = cr4;
> -	report(svm_vmrun() == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
> +	report(svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
>  	       "CR0.PG=1 (%lx) and CR4.PAE=0 (%lx)", efer, cr0, cr4);
>  
>  	/*
> @@ -2107,7 +2108,7 @@ static void test_efer(void)
>  	vcpu0.vmcb->save.cr4 = cr4;
>  	cr0 &= ~X86_CR0_PE;
>  	vcpu0.vmcb->save.cr0 = cr0;
> -	report(svm_vmrun() == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
> +	report(svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
>  	       "CR0.PG=1 and CR0.PE=0 (%lx)", efer, cr0);
>  
>  	/*
> @@ -2121,7 +2122,7 @@ static void test_efer(void)
>  	cs_attrib = cs_attrib_saved | SVM_SELECTOR_L_MASK |
>  		SVM_SELECTOR_DB_MASK;
>  	vcpu0.vmcb->save.cs.attrib = cs_attrib;
> -	report(svm_vmrun() == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
> +	report(svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
>  	       "CR0.PG=1 (%lx), CR4.PAE=1 (%lx), CS.L=1 and CS.D=1 (%x)",
>  	       efer, cr0, cr4, cs_attrib);
>  
> @@ -2131,7 +2132,7 @@ static void test_efer(void)
>  	vcpu0.vmcb->save.cs.attrib = cs_attrib_saved;
>  }
>  
> -static void test_cr0(void)
> +static void test_cr0(struct svm_test_context *ctx)
>  {
>  	/*
>  	 * Un-setting CR0.CD and setting CR0.NW is illegal combination
> @@ -2142,20 +2143,20 @@ static void test_cr0(void)
>  	cr0 |= X86_CR0_CD;
>  	cr0 &= ~X86_CR0_NW;
>  	vcpu0.vmcb->save.cr0 = cr0;
> -	report (svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=0: %lx",
> +	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=0: %lx",
>  		cr0);
>  	cr0 |= X86_CR0_NW;
>  	vcpu0.vmcb->save.cr0 = cr0;
> -	report (svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=1: %lx",
> +	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=1: %lx",
>  		cr0);
>  	cr0 &= ~X86_CR0_NW;
>  	cr0 &= ~X86_CR0_CD;
>  	vcpu0.vmcb->save.cr0 = cr0;
> -	report (svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR0 CD=0,NW=0: %lx",
> +	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR0 CD=0,NW=0: %lx",
>  		cr0);
>  	cr0 |= X86_CR0_NW;
>  	vcpu0.vmcb->save.cr0 = cr0;
> -	report (svm_vmrun() == SVM_EXIT_ERR, "Test CR0 CD=0,NW=1: %lx",
> +	report (svm_vmrun(ctx) == SVM_EXIT_ERR, "Test CR0 CD=0,NW=1: %lx",
>  		cr0);
>  	vcpu0.vmcb->save.cr0 = cr0_saved;
>  
> @@ -2164,12 +2165,12 @@ static void test_cr0(void)
>  	 */
>  	cr0 = cr0_saved;
>  
> -	SVM_TEST_REG_RESERVED_BITS(32, 63, 4, "CR0", vcpu0.vmcb->save.cr0, cr0_saved,
> +	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "CR0", vcpu0.vmcb->save.cr0, cr0_saved,
>  				   SVM_CR0_RESERVED_MASK);
>  	vcpu0.vmcb->save.cr0 = cr0_saved;
>  }
>  
> -static void test_cr3(void)
> +static void test_cr3(struct svm_test_context *ctx)
>  {
>  	/*
>  	 * CR3 MBZ bits based on different modes:
> @@ -2177,11 +2178,11 @@ static void test_cr3(void)
>  	 */
>  	u64 cr3_saved = vcpu0.vmcb->save.cr3;
>  
> -	SVM_TEST_CR_RESERVED_BITS(0, 63, 1, 3, cr3_saved,
> +	SVM_TEST_CR_RESERVED_BITS(ctx, 0, 63, 1, 3, cr3_saved,
>  				  SVM_CR3_LONG_MBZ_MASK, SVM_EXIT_ERR, "");
>  
>  	vcpu0.vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_MBZ_MASK;
> -	report(svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx",
> +	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx",
>  	       vcpu0.vmcb->save.cr3);
>  
>  	/*
> @@ -2197,11 +2198,11 @@ static void test_cr3(void)
>  	 */
>  	if (this_cpu_has(X86_FEATURE_PCID)) {
>  		vcpu0.vmcb->save.cr4 = cr4_saved | X86_CR4_PCIDE;
> -		SVM_TEST_CR_RESERVED_BITS(0, 11, 1, 3, cr3_saved,
> +		SVM_TEST_CR_RESERVED_BITS(ctx, 0, 11, 1, 3, cr3_saved,
>  					  SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_VMMCALL, "(PCIDE=1) ");
>  
>  		vcpu0.vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_RESERVED_MASK;
> -		report(svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx",
> +		report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx",
>  		       vcpu0.vmcb->save.cr3);
>  	}
>  
> @@ -2213,7 +2214,7 @@ static void test_cr3(void)
>  	/* Clear P (Present) bit in NPT in order to trigger #NPF */
>  	pdpe[0] &= ~1ULL;
>  
> -	SVM_TEST_CR_RESERVED_BITS(0, 11, 1, 3, cr3_saved,
> +	SVM_TEST_CR_RESERVED_BITS(ctx, 0, 11, 1, 3, cr3_saved,
>  				  SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_NPF, "(PCIDE=0) ");
>  
>  	pdpe[0] |= 1ULL;
> @@ -2224,7 +2225,7 @@ static void test_cr3(void)
>  	 */
>  	pdpe[0] &= ~1ULL;
>  	vcpu0.vmcb->save.cr4 = cr4_saved | X86_CR4_PAE;
> -	SVM_TEST_CR_RESERVED_BITS(0, 2, 1, 3, cr3_saved,
> +	SVM_TEST_CR_RESERVED_BITS(ctx, 0, 2, 1, 3, cr3_saved,
>  				  SVM_CR3_PAE_LEGACY_RESERVED_MASK, SVM_EXIT_NPF, "(PAE) ");
>  
>  	pdpe[0] |= 1ULL;
> @@ -2235,7 +2236,7 @@ skip_npt_only:
>  }
>  
>  /* Test CR4 MBZ bits based on legacy or long modes */
> -static void test_cr4(void)
> +static void test_cr4(struct svm_test_context *ctx)
>  {
>  	u64 cr4_saved = vcpu0.vmcb->save.cr4;
>  	u64 efer_saved = vcpu0.vmcb->save.efer;
> @@ -2243,47 +2244,47 @@ static void test_cr4(void)
>  
>  	efer &= ~EFER_LME;
>  	vcpu0.vmcb->save.efer = efer;
> -	SVM_TEST_CR_RESERVED_BITS(12, 31, 1, 4, cr4_saved,
> +	SVM_TEST_CR_RESERVED_BITS(ctx, 12, 31, 1, 4, cr4_saved,
>  				  SVM_CR4_LEGACY_RESERVED_MASK, SVM_EXIT_ERR, "");
>  
>  	efer |= EFER_LME;
>  	vcpu0.vmcb->save.efer = efer;
> -	SVM_TEST_CR_RESERVED_BITS(12, 31, 1, 4, cr4_saved,
> +	SVM_TEST_CR_RESERVED_BITS(ctx, 12, 31, 1, 4, cr4_saved,
>  				  SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, "");
> -	SVM_TEST_CR_RESERVED_BITS(32, 63, 4, 4, cr4_saved,
> +	SVM_TEST_CR_RESERVED_BITS(ctx, 32, 63, 4, 4, cr4_saved,
>  				  SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, "");
>  
>  	vcpu0.vmcb->save.cr4 = cr4_saved;
>  	vcpu0.vmcb->save.efer = efer_saved;
>  }
>  
> -static void test_dr(void)
> +static void test_dr(struct svm_test_context *ctx)
>  {
>  	/*
>  	 * DR6[63:32] and DR7[63:32] are MBZ
>  	 */
>  	u64 dr_saved = vcpu0.vmcb->save.dr6;
>  
> -	SVM_TEST_REG_RESERVED_BITS(32, 63, 4, "DR6", vcpu0.vmcb->save.dr6, dr_saved,
> +	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "DR6", vcpu0.vmcb->save.dr6, dr_saved,
>  				   SVM_DR6_RESERVED_MASK);
>  	vcpu0.vmcb->save.dr6 = dr_saved;
>  
>  	dr_saved = vcpu0.vmcb->save.dr7;
> -	SVM_TEST_REG_RESERVED_BITS(32, 63, 4, "DR7", vcpu0.vmcb->save.dr7, dr_saved,
> +	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "DR7", vcpu0.vmcb->save.dr7, dr_saved,
>  				   SVM_DR7_RESERVED_MASK);
>  
>  	vcpu0.vmcb->save.dr7 = dr_saved;
>  }
>  
>  /* TODO: verify if high 32-bits are sign- or zero-extended on bare metal */
> -#define	TEST_BITMAP_ADDR(save_intercept, type, addr, exit_code,		\
> +#define	TEST_BITMAP_ADDR(ctx, save_intercept, type, addr, exit_code,		\
>  			 msg) {						\
>  		vcpu0.vmcb->control.intercept = saved_intercept | 1ULL << type; \
>  		if (type == INTERCEPT_MSR_PROT)				\
>  			vcpu0.vmcb->control.msrpm_base_pa = addr;		\
>  		else							\
>  			vcpu0.vmcb->control.iopm_base_pa = addr;		\
> -		report(svm_vmrun() == exit_code,			\
> +		report(svm_vmrun(ctx) == exit_code,			\
>  		       "Test %s address: %lx", msg, addr);		\
>  	}
>  
> @@ -2303,48 +2304,48 @@ static void test_dr(void)
>   * Note: Unallocated MSRPM addresses conforming to the consistency checks
>   * generate #NPF.
>   */
> -static void test_msrpm_iopm_bitmap_addrs(void)
> +static void test_msrpm_iopm_bitmap_addrs(struct svm_test_context *ctx)
>  {
>  	u64 saved_intercept = vcpu0.vmcb->control.intercept;
>  	u64 addr_beyond_limit = 1ull << cpuid_maxphyaddr();
>  	u64 addr = virt_to_phys(svm_get_msr_bitmap()) & (~((1ull << 12) - 1));
>  	u8 *io_bitmap = svm_get_io_bitmap();
>  
> -	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT,
> +	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_MSR_PROT,
>  			 addr_beyond_limit - 2 * PAGE_SIZE, SVM_EXIT_ERR,
>  			 "MSRPM");
> -	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT,
> +	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_MSR_PROT,
>  			 addr_beyond_limit - 2 * PAGE_SIZE + 1, SVM_EXIT_ERR,
>  			 "MSRPM");
> -	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT,
> +	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_MSR_PROT,
>  			 addr_beyond_limit - PAGE_SIZE, SVM_EXIT_ERR,
>  			 "MSRPM");
> -	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT, addr,
> +	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_MSR_PROT, addr,
>  			 SVM_EXIT_VMMCALL, "MSRPM");
>  	addr |= (1ull << 12) - 1;
> -	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT, addr,
> +	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_MSR_PROT, addr,
>  			 SVM_EXIT_VMMCALL, "MSRPM");
>  
> -	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT,
> +	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_IOIO_PROT,
>  			 addr_beyond_limit - 4 * PAGE_SIZE, SVM_EXIT_VMMCALL,
>  			 "IOPM");
> -	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT,
> +	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_IOIO_PROT,
>  			 addr_beyond_limit - 3 * PAGE_SIZE, SVM_EXIT_VMMCALL,
>  			 "IOPM");
> -	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT,
> +	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_IOIO_PROT,
>  			 addr_beyond_limit - 2 * PAGE_SIZE - 2, SVM_EXIT_VMMCALL,
>  			 "IOPM");
> -	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT,
> +	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_IOIO_PROT,
>  			 addr_beyond_limit - 2 * PAGE_SIZE, SVM_EXIT_ERR,
>  			 "IOPM");
> -	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT,
> +	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_IOIO_PROT,
>  			 addr_beyond_limit - PAGE_SIZE, SVM_EXIT_ERR,
>  			 "IOPM");
>  	addr = virt_to_phys(io_bitmap) & (~((1ull << 11) - 1));
> -	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT, addr,
> +	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_IOIO_PROT, addr,
>  			 SVM_EXIT_VMMCALL, "IOPM");
>  	addr |= (1ull << 12) - 1;
> -	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT, addr,
> +	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_IOIO_PROT, addr,
>  			 SVM_EXIT_VMMCALL, "IOPM");
>  
>  	vcpu0.vmcb->control.intercept = saved_intercept;
> @@ -2354,16 +2355,16 @@ static void test_msrpm_iopm_bitmap_addrs(void)
>   * Unlike VMSAVE, VMRUN seems not to update the value of noncanonical
>   * segment bases in the VMCB.  However, VMENTRY succeeds as documented.
>   */
> -#define TEST_CANONICAL_VMRUN(seg_base, msg)				\
> +#define TEST_CANONICAL_VMRUN(ctx, seg_base, msg)				\
>  	saved_addr = seg_base;						\
>  	seg_base = (seg_base & ((1ul << addr_limit) - 1)) | noncanonical_mask; \
> -	return_value = svm_vmrun();					\
> +	return_value = svm_vmrun(ctx);					\
>  	report(return_value == SVM_EXIT_VMMCALL,			\
>  	       "Successful VMRUN with noncanonical %s.base", msg);	\
>  	seg_base = saved_addr;
>  
>  
> -#define TEST_CANONICAL_VMLOAD(seg_base, msg)				\
> +#define TEST_CANONICAL_VMLOAD(ctx, seg_base, msg)				\
>  	saved_addr = seg_base;						\
>  	seg_base = (seg_base & ((1ul << addr_limit) - 1)) | noncanonical_mask; \
>  	asm volatile ("vmload %0" : : "a"(vmcb_phys) : "memory");	\
> @@ -2372,7 +2373,7 @@ static void test_msrpm_iopm_bitmap_addrs(void)
>  	       "Test %s.base for canonical form: %lx", msg, seg_base);	\
>  	seg_base = saved_addr;
>  
> -static void test_canonicalization(void)
> +static void test_canonicalization(struct svm_test_context *ctx)
>  {
>  	u64 saved_addr;
>  	u64 return_value;
> @@ -2382,17 +2383,17 @@ static void test_canonicalization(void)
>  	addr_limit = (this_cpu_has(X86_FEATURE_LA57)) ? 57 : 48;
>  	u64 noncanonical_mask = NONCANONICAL & ~((1ul << addr_limit) - 1);
>  
> -	TEST_CANONICAL_VMLOAD(vcpu0.vmcb->save.fs.base, "FS");
> -	TEST_CANONICAL_VMLOAD(vcpu0.vmcb->save.gs.base, "GS");
> -	TEST_CANONICAL_VMLOAD(vcpu0.vmcb->save.ldtr.base, "LDTR");
> -	TEST_CANONICAL_VMLOAD(vcpu0.vmcb->save.tr.base, "TR");
> -	TEST_CANONICAL_VMLOAD(vcpu0.vmcb->save.kernel_gs_base, "KERNEL GS");
> -	TEST_CANONICAL_VMRUN(vcpu0.vmcb->save.es.base, "ES");
> -	TEST_CANONICAL_VMRUN(vcpu0.vmcb->save.cs.base, "CS");
> -	TEST_CANONICAL_VMRUN(vcpu0.vmcb->save.ss.base, "SS");
> -	TEST_CANONICAL_VMRUN(vcpu0.vmcb->save.ds.base, "DS");
> -	TEST_CANONICAL_VMRUN(vcpu0.vmcb->save.gdtr.base, "GDTR");
> -	TEST_CANONICAL_VMRUN(vcpu0.vmcb->save.idtr.base, "IDTR");
> +	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.fs.base, "FS");
> +	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.gs.base, "GS");
> +	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.ldtr.base, "LDTR");
> +	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.tr.base, "TR");
> +	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.kernel_gs_base, "KERNEL GS");
> +	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.es.base, "ES");
> +	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.cs.base, "CS");
> +	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.ss.base, "SS");
> +	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.ds.base, "DS");
> +	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.gdtr.base, "GDTR");
> +	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.idtr.base, "IDTR");
>  }
>  
>  /*
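
A worked example of the mask arithmetic in the two macros above, assuming
kvm-unit-tests' NONCANONICAL constant of 0xaaaaaaaaaaaaaaaa and a 48-bit
address limit (no LA57); the concrete base value is invented:

    u64 addr_limit = 48;
    u64 noncanonical_mask = 0xaaaaaaaaaaaaaaaaull & ~((1ul << addr_limit) - 1);
    /* noncanonical_mask == 0xaaaa000000000000 */

    u64 seg_base = 0x00007fff12345678ull;
    seg_base = (seg_base & ((1ul << addr_limit) - 1)) | noncanonical_mask;
    /* seg_base == 0xaaaa7fff12345678: noncanonical, because bits 63:47
     * are no longer all copies of bit 47 */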
> @@ -2410,16 +2411,16 @@ static void guest_rflags_test_db_handler(struct ex_regs *r)
>  	r->rflags &= ~X86_EFLAGS_TF;
>  }
>  
> -static void svm_guest_state_test(void)
> +static void svm_guest_state_test(struct svm_test_context *ctx)
>  {
>  	test_set_guest(basic_guest_main);
> -	test_efer();
> -	test_cr0();
> -	test_cr3();
> -	test_cr4();
> -	test_dr();
> -	test_msrpm_iopm_bitmap_addrs();
> -	test_canonicalization();
> +	test_efer(ctx);
> +	test_cr0(ctx);
> +	test_cr3(ctx);
> +	test_cr4(ctx);
> +	test_dr(ctx);
> +	test_msrpm_iopm_bitmap_addrs(ctx);
> +	test_canonicalization(ctx);
>  }
>  
>  extern void guest_rflags_test_guest(struct svm_test_context *ctx);
> @@ -2439,7 +2440,7 @@ asm("guest_rflags_test_guest:\n\t"
>      "pop %rbp\n\t"
>      "ret");
>  
> -static void svm_test_singlestep(void)
> +static void svm_test_singlestep(struct svm_test_context *ctx)
>  {
>  	handle_exception(DB_VECTOR, guest_rflags_test_db_handler);
>  
> @@ -2447,7 +2448,7 @@ static void svm_test_singlestep(void)
>  	 * Trap expected after completion of first guest instruction
>  	 */
>  	vcpu0.vmcb->save.rflags |= X86_EFLAGS_TF;
> -	report (__svm_vmrun((u64)guest_rflags_test_guest) == SVM_EXIT_VMMCALL &&
> +	report (__svm_vmrun(ctx, (u64)guest_rflags_test_guest) == SVM_EXIT_VMMCALL &&
>  		guest_rflags_test_trap_rip == (u64)&insn2,
>  		"Test EFLAGS.TF on VMRUN: trap expected  after completion of first guest instruction");
>  	/*
> @@ -2456,7 +2457,7 @@ static void svm_test_singlestep(void)
>  	guest_rflags_test_trap_rip = 0;
>  	vcpu0.vmcb->save.rip += 3;
>  	vcpu0.vmcb->save.rflags |= X86_EFLAGS_TF;
> -	report(__svm_vmrun(vcpu0.vmcb->save.rip) == SVM_EXIT_VMMCALL &&
> +	report(__svm_vmrun(ctx, vcpu0.vmcb->save.rip) == SVM_EXIT_VMMCALL &&
>  		guest_rflags_test_trap_rip == 0,
>  		"Test EFLAGS.TF on VMRUN: trap not expected");
>  
> @@ -2464,7 +2465,7 @@ static void svm_test_singlestep(void)
>  	 * Let guest finish execution
>  	 */
>  	vcpu0.vmcb->save.rip += 3;
> -	report(__svm_vmrun(vcpu0.vmcb->save.rip) == SVM_EXIT_VMMCALL &&
> +	report(__svm_vmrun(ctx, vcpu0.vmcb->save.rip) == SVM_EXIT_VMMCALL &&
>  		vcpu0.vmcb->save.rip == (u64)&guest_end,
>  		"Test EFLAGS.TF on VMRUN: guest execution completion");
>  }
> @@ -2492,7 +2493,7 @@ static void gp_isr(struct ex_regs *r)
>  	r->rip += 3;
>  }
>  
> -static void svm_vmrun_errata_test(void)
> +static void svm_vmrun_errata_test(struct svm_test_context *ctx)
>  {
>  	unsigned long *last_page = NULL;
>  
> @@ -2543,7 +2544,7 @@ static void vmload_vmsave_guest_main(struct svm_test_context *ctx)
>  	asm volatile ("vmsave %0" : : "a"(vmcb_phys));
>  }
>  
> -static void svm_vmload_vmsave(void)
> +static void svm_vmload_vmsave(struct svm_test_context *ctx)
>  {
>  	u32 intercept_saved = vcpu0.vmcb->control.intercept;
>  
> @@ -2555,7 +2556,7 @@ static void svm_vmload_vmsave(void)
>  	 */
>  	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
>  	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
> -	svm_vmrun();
> +	svm_vmrun(ctx);
>  	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
>  
> @@ -2564,34 +2565,34 @@ static void svm_vmload_vmsave(void)
>  	 * #VMEXIT to host
>  	 */
>  	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
> -	svm_vmrun();
> +	svm_vmrun(ctx);
>  	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
>  	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
>  	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
> -	svm_vmrun();
> +	svm_vmrun(ctx);
>  	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
>  	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
> -	svm_vmrun();
> +	svm_vmrun(ctx);
>  	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
>  
>  	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
> -	svm_vmrun();
> +	svm_vmrun(ctx);
>  	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
>  	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
> -	svm_vmrun();
> +	svm_vmrun(ctx);
>  	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
>  
>  	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
> -	svm_vmrun();
> +	svm_vmrun(ctx);
>  	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
>  	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
> -	svm_vmrun();
> +	svm_vmrun(ctx);
>  	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
>  
> @@ -2683,7 +2684,9 @@ static void pause_filter_test_guest_main(struct svm_test_context *ctx)
>  
>  }
>  
> -static void pause_filter_run_test(int pause_iterations, int filter_value, int wait_iterations, int threshold)
> +static void pause_filter_run_test(struct svm_test_context *ctx,
> +				  int pause_iterations, int filter_value,
> +				  int wait_iterations, int threshold)
>  {
>  	test_set_guest(pause_filter_test_guest_main);
>  
> @@ -2692,7 +2695,7 @@ static void pause_filter_run_test(int pause_iterations, int filter_value, int wa
>  
>  	vcpu0.vmcb->control.pause_filter_count = filter_value;
>  	vcpu0.vmcb->control.pause_filter_thresh = threshold;
> -	svm_vmrun();
> +	svm_vmrun(ctx);
>  
>  	if (filter_value <= pause_iterations || wait_iterations < threshold)
>  		report(vcpu0.vmcb->control.exit_code == SVM_EXIT_PAUSE,
> @@ -2702,7 +2705,7 @@ static void pause_filter_run_test(int pause_iterations, int filter_value, int wa
>  		       "no expected PAUSE vmexit");
>  }
>  
> -static void pause_filter_test(void)
> +static void pause_filter_test(struct svm_test_context *ctx)
>  {
>  	if (!pause_filter_supported()) {
>  		report_skip("PAUSE filter not supported in the guest");
> @@ -2712,20 +2715,20 @@ static void pause_filter_test(void)
>  	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_PAUSE);
>  
>  	// filter count more than pause count - no VMexit
> -	pause_filter_run_test(10, 9, 0, 0);
> +	pause_filter_run_test(ctx, 10, 9, 0, 0);
>  
>  	// filter count smaller pause count - no VMexit
> -	pause_filter_run_test(20, 21, 0, 0);
> +	pause_filter_run_test(ctx, 20, 21, 0, 0);
>  
>  
>  	if (pause_threshold_supported()) {
>  		// filter count smaller pause count - no VMexit +  large enough threshold
>  		// so that filter counter resets
> -		pause_filter_run_test(20, 21, 1000, 10);
> +		pause_filter_run_test(ctx, 20, 21, 1000, 10);
>  
>  		// filter count smaller pause count - no VMexit +  small threshold
>  		// so that filter doesn't reset
> -		pause_filter_run_test(20, 21, 10, 1000);
> +		pause_filter_run_test(ctx, 20, 21, 10, 1000);
>  	} else {
>  		report_skip("PAUSE threshold not supported in the guest");
>  		return;
> @@ -2733,13 +2736,13 @@ static void pause_filter_test(void)
>  }
>  
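
An editorial note on the expectations encoded above: pause_filter_run_test
expects a PAUSE #VMEXIT exactly when filter_value <= pause_iterations or
wait_iterations < threshold. In hardware terms (per the AMD APM), the filter
count is decremented on every guest PAUSE and triggers a #VMEXIT when it
reaches zero, while the threshold, where supported, reloads the count
whenever two PAUSEs are far enough apart. Restated as a sketch:

    /* editorial restatement of the check in pause_filter_run_test */
    bool expect_pause_exit = filter_value <= pause_iterations ||
                             wait_iterations < threshold;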
>  /* If CR0.TS and CR0.EM are cleared in L2, no #NM is generated. */
> -static void svm_no_nm_test(void)
> +static void svm_no_nm_test(struct svm_test_context *ctx)
>  {
>  	write_cr0(read_cr0() & ~X86_CR0_TS);
>  	test_set_guest((test_guest_func)fnop);
>  
>  	vcpu0.vmcb->save.cr0 = vcpu0.vmcb->save.cr0 & ~(X86_CR0_TS | X86_CR0_EM);
> -	report(svm_vmrun() == SVM_EXIT_VMMCALL,
> +	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL,
>  	       "fnop with CR0.TS and CR0.EM unset no #NM exception");
>  }
>  
> @@ -2794,7 +2797,7 @@ extern u64 host_branch4_from, host_branch4_to;
>  
>  u64 dbgctl;
>  
> -static void svm_lbrv_test_guest1(void)
> +static void svm_lbrv_test_guest1(struct svm_test_context *ctx)
>  {
>  	/*
>  	 * This guest expects the LBR to be already enabled when it starts,
> @@ -2818,7 +2821,7 @@ static void svm_lbrv_test_guest1(void)
>  	asm volatile ("vmmcall\n");
>  }
>  
> -static void svm_lbrv_test_guest2(void)
> +static void svm_lbrv_test_guest2(struct svm_test_context *ctx)
>  {
>  	/*
>  	 * This guest expects the LBR to be disabled when it starts,
> @@ -2852,7 +2855,7 @@ static void svm_lbrv_test_guest2(void)
>  	asm volatile ("vmmcall\n");
>  }
>  
> -static void svm_lbrv_test0(void)
> +static void svm_lbrv_test0(struct svm_test_context *ctx)
>  {
>  	report(true, "Basic LBR test");
>  	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
> @@ -2867,7 +2870,7 @@ static void svm_lbrv_test0(void)
>  	check_lbr(&host_branch0_from, &host_branch0_to);
>  }
>  
> -static void svm_lbrv_test1(void)
> +static void svm_lbrv_test1(struct svm_test_context *ctx)
>  {
>  
>  	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(1)");
> @@ -2890,7 +2893,7 @@ static void svm_lbrv_test1(void)
>  	check_lbr(&guest_branch0_from, &guest_branch0_to);
>  }
>  
> -static void svm_lbrv_test2(void)
> +static void svm_lbrv_test2(struct svm_test_context *ctx)
>  {
>  	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(2)");
>  
> @@ -2914,7 +2917,7 @@ static void svm_lbrv_test2(void)
>  	check_lbr(&guest_branch2_from, &guest_branch2_to);
>  }
>  
> -static void svm_lbrv_nested_test1(void)
> +static void svm_lbrv_nested_test1(struct svm_test_context *ctx)
>  {
>  	if (!lbrv_supported()) {
>  		report_skip("LBRV not supported in the guest");
> @@ -2949,7 +2952,7 @@ static void svm_lbrv_nested_test1(void)
>  	check_lbr(&host_branch3_from, &host_branch3_to);
>  }
>  
> -static void svm_lbrv_nested_test2(void)
> +static void svm_lbrv_nested_test2(struct svm_test_context *ctx)
>  {
>  	if (!lbrv_supported()) {
>  		report_skip("LBRV not supported in the guest");
> @@ -2999,7 +3002,8 @@ static void dummy_nmi_handler(struct ex_regs *regs)
>  }
>  
>  
> -static void svm_intr_intercept_mix_run_guest(volatile int *counter, int expected_vmexit)
> +static void svm_intr_intercept_mix_run_guest(struct svm_test_context *ctx,
> +					     volatile int *counter, int expected_vmexit)
>  {
>  	if (counter)
>  		*counter = 0;
> @@ -3007,7 +3011,7 @@ static void svm_intr_intercept_mix_run_guest(volatile int *counter, int expected
>  	sti();  // host IF value should not matter
>  	clgi(); // vmrun will set back GI to 1
>  
> -	svm_vmrun();
> +	svm_vmrun(ctx);
>  
>  	if (counter)
>  		report(!*counter, "No interrupt expected");
> @@ -3032,7 +3036,7 @@ static void svm_intr_intercept_mix_if_guest(struct svm_test_context *ctx)
>  	report(0, "must not reach here");
>  }
>  
> -static void svm_intr_intercept_mix_if(void)
> +static void svm_intr_intercept_mix_if(struct svm_test_context *ctx)
>  {
>  	// make a physical interrupt to be pending
>  	handle_irq(0x55, dummy_isr);
> @@ -3044,7 +3048,7 @@ static void svm_intr_intercept_mix_if(void)
>  	test_set_guest(svm_intr_intercept_mix_if_guest);
>  	cli();
>  	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0);
> -	svm_intr_intercept_mix_run_guest(&dummy_isr_recevied, SVM_EXIT_INTR);
> +	svm_intr_intercept_mix_run_guest(ctx, &dummy_isr_recevied, SVM_EXIT_INTR);
>  }
>  
>  
> @@ -3066,7 +3070,7 @@ static void svm_intr_intercept_mix_gif_guest(struct svm_test_context *ctx)
>  	report(0, "must not reach here");
>  }
>  
> -static void svm_intr_intercept_mix_gif(void)
> +static void svm_intr_intercept_mix_gif(struct svm_test_context *ctx)
>  {
>  	handle_irq(0x55, dummy_isr);
>  
> @@ -3077,7 +3081,7 @@ static void svm_intr_intercept_mix_gif(void)
>  	test_set_guest(svm_intr_intercept_mix_gif_guest);
>  	cli();
>  	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0);
> -	svm_intr_intercept_mix_run_guest(&dummy_isr_recevied, SVM_EXIT_INTR);
> +	svm_intr_intercept_mix_run_guest(ctx, &dummy_isr_recevied, SVM_EXIT_INTR);
>  }
>  
>  // subtest: test that a clever guest can trigger an interrupt by setting GIF
> @@ -3096,7 +3100,7 @@ static void svm_intr_intercept_mix_gif_guest2(struct svm_test_context *ctx)
>  	report(0, "must not reach here");
>  }
>  
> -static void svm_intr_intercept_mix_gif2(void)
> +static void svm_intr_intercept_mix_gif2(struct svm_test_context *ctx)
>  {
>  	handle_irq(0x55, dummy_isr);
>  
> @@ -3105,7 +3109,7 @@ static void svm_intr_intercept_mix_gif2(void)
>  	vcpu0.vmcb->save.rflags |= X86_EFLAGS_IF;
>  
>  	test_set_guest(svm_intr_intercept_mix_gif_guest2);
> -	svm_intr_intercept_mix_run_guest(&dummy_isr_recevied, SVM_EXIT_INTR);
> +	svm_intr_intercept_mix_run_guest(ctx, &dummy_isr_recevied, SVM_EXIT_INTR);
>  }
>  
>  
> @@ -3125,7 +3129,7 @@ static void svm_intr_intercept_mix_nmi_guest(struct svm_test_context *ctx)
>  	report(0, "must not reach here");
>  }
>  
> -static void svm_intr_intercept_mix_nmi(void)
> +static void svm_intr_intercept_mix_nmi(struct svm_test_context *ctx)
>  {
>  	handle_exception(2, dummy_nmi_handler);
>  
> @@ -3134,7 +3138,7 @@ static void svm_intr_intercept_mix_nmi(void)
>  	vcpu0.vmcb->save.rflags |= X86_EFLAGS_IF;
>  
>  	test_set_guest(svm_intr_intercept_mix_nmi_guest);
> -	svm_intr_intercept_mix_run_guest(&nmi_recevied, SVM_EXIT_NMI);
> +	svm_intr_intercept_mix_run_guest(ctx, &nmi_recevied, SVM_EXIT_NMI);
>  }
>  
>  // test that pending SMI will be handled when guest enables GIF
> @@ -3151,12 +3155,12 @@ static void svm_intr_intercept_mix_smi_guest(struct svm_test_context *ctx)
>  	report(0, "must not reach here");
>  }
>  
> -static void svm_intr_intercept_mix_smi(void)
> +static void svm_intr_intercept_mix_smi(struct svm_test_context *ctx)
>  {
>  	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_SMI);
>  	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
>  	test_set_guest(svm_intr_intercept_mix_smi_guest);
> -	svm_intr_intercept_mix_run_guest(NULL, SVM_EXIT_SMI);
> +	svm_intr_intercept_mix_run_guest(ctx, NULL, SVM_EXIT_SMI);
>  }
>  
>  static void svm_l2_ac_test(void)
> @@ -3198,30 +3202,30 @@ static void svm_exception_handler(struct ex_regs *regs)
>  	vmmcall();
>  }
>  
> -static void handle_exception_in_l2(u8 vector)
> +static void handle_exception_in_l2(struct svm_test_context *ctx, u8 vector)
>  {
>  	handler old_handler = handle_exception(vector, svm_exception_handler);
>  	svm_exception_test_vector = vector;
>  
> -	report(svm_vmrun() == SVM_EXIT_VMMCALL,
> +	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL,
>  		"%s handled by L2", exception_mnemonic(vector));
>  
>  	handle_exception(vector, old_handler);
>  }
>  
> -static void handle_exception_in_l1(u32 vector)
> +static void handle_exception_in_l1(struct svm_test_context *ctx, u32 vector)
>  {
>  	u32 old_ie = vcpu0.vmcb->control.intercept_exceptions;
>  
>  	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << vector);
>  
> -	report(svm_vmrun() == (SVM_EXIT_EXCP_BASE + vector),
> +	report(svm_vmrun(ctx) == (SVM_EXIT_EXCP_BASE + vector),
>  		"%s handled by L1",  exception_mnemonic(vector));
>  
>  	vcpu0.vmcb->control.intercept_exceptions = old_ie;
>  }
>  
> -static void svm_exception_test(void)
> +static void svm_exception_test(struct svm_test_context *ctx)
>  {
>  	struct svm_exception_test *t;
>  	int i;
> @@ -3230,10 +3234,10 @@ static void svm_exception_test(void)
>  		t = &svm_exception_tests[i];
>  		test_set_guest((test_guest_func)t->guest_code);
>  
> -		handle_exception_in_l2(t->vector);
> +		handle_exception_in_l2(ctx, t->vector);
>  		svm_vcpu_ident(&vcpu0);
>  
> -		handle_exception_in_l1(t->vector);
> +		handle_exception_in_l1(ctx, t->vector);
>  		svm_vcpu_ident(&vcpu0);
>  	}
>  }
> @@ -3244,12 +3248,12 @@ static void shutdown_intercept_test_guest(struct svm_test_context *ctx)
>  	report_fail("should not reach here\n");
>  
>  }
> -static void svm_shutdown_intercept_test(void)
> +static void svm_shutdown_intercept_test(struct svm_test_context *ctx)
>  {
>  	test_set_guest(shutdown_intercept_test_guest);
>  	vcpu0.vmcb->save.idtr.base = (u64)alloc_vpage();
>  	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN);
> -	svm_vmrun();
> +	svm_vmrun(ctx);
>  	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed");
>  }
>  
> 
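
To summarize the mechanical effect of this conversion on a single v2 test
(the function name is invented for illustration):

    /* Before this patch: the context reached the test via the v2_ctx global. */
    static void my_v2_test(void)
    {
            report(svm_vmrun() == SVM_EXIT_VMMCALL, "guest ran");
    }

    /* After this patch: the context is an explicit parameter end to end. */
    static void my_v2_test(struct svm_test_context *ctx)
    {
            report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "guest ran");
    }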



* Re: [kvm-unit-tests PATCH v3 26/27] svm: move test_guest_func to test context
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 26/27] svm: move test_guest_func " Maxim Levitsky
@ 2022-12-02 10:28   ` Emanuele Giuseppe Esposito
  0 siblings, 0 replies; 56+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-12-02 10:28 UTC (permalink / raw)
  To: Maxim Levitsky, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank



On 22/11/2022 at 17:11, Maxim Levitsky wrote:
> Make the test context hold a pointer to the guest function.
> For V1 tests it is initialized from the test template;
> for V2 tests, the test function sets it.
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>

Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
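
Before the diff itself, the new calling convention in miniature (the test
and guest names are invented for illustration):

    static void my_guest(struct svm_test_context *ctx)
    {
            /* runs as the nested guest; test_thunk issues the final vmmcall */
    }

    static void my_v2_test(struct svm_test_context *ctx)
    {
            ctx->guest_func = my_guest;   /* replaces test_set_guest(my_guest) */
            report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "guest finished");
    }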

> ---
>  x86/svm.c       | 12 ++++--------
>  x86/svm.h       |  4 ++--
>  x86/svm_npt.c   |  2 +-
>  x86/svm_tests.c | 26 +++++++++++++-------------
>  4 files changed, 20 insertions(+), 24 deletions(-)
> 
> diff --git a/x86/svm.c b/x86/svm.c
> index a3279545..244555d4 100644
> --- a/x86/svm.c
> +++ b/x86/svm.c
> @@ -60,16 +60,11 @@ void inc_test_stage(struct svm_test_context *ctx)
>  	barrier();
>  }
>  
> -static test_guest_func guest_main;
> -
> -void test_set_guest(test_guest_func func)
> -{
> -	guest_main = func;
> -}
>  
>  static void test_thunk(struct svm_test_context *ctx)
>  {
> -	guest_main(ctx);
> +	if (ctx->guest_func)
> +		ctx->guest_func(ctx);
>  	vmmcall();
>  }
>  
> @@ -93,6 +88,7 @@ static noinline void test_run(struct svm_test_context *ctx)
>  	svm_vcpu_ident(ctx->vcpu);
>  
>  	if (ctx->test->v2) {
> +		ctx->guest_func = NULL;
>  		ctx->test->v2(ctx);
>  		return;
>  	}
> @@ -100,7 +96,7 @@ static noinline void test_run(struct svm_test_context *ctx)
>  	cli();
>  
>  	ctx->test->prepare(ctx);
> -	guest_main = ctx->test->guest_func;
> +	ctx->guest_func = ctx->test->guest_func;
>  	ctx->vcpu->vmcb->save.rip = (ulong)test_thunk;
>  	ctx->vcpu->regs.rsp = (ulong)(ctx->vcpu->stack);
>  	ctx->vcpu->regs.rdi = (ulong)ctx;
> diff --git a/x86/svm.h b/x86/svm.h
> index ec181715..149b76c4 100644
> --- a/x86/svm.h
> +++ b/x86/svm.h
> @@ -15,6 +15,8 @@ struct svm_test_context {
>  
>  	/* TODO: test cases currently are single threaded */
>  	struct svm_vcpu *vcpu;
> +
> +	void (*guest_func)(struct svm_test_context *ctx);
>  };
>  
>  struct svm_test {
> @@ -44,7 +46,5 @@ void set_test_stage(struct svm_test_context *ctx, int s);
>  void inc_test_stage(struct svm_test_context *ctx);
>  int __svm_vmrun(struct svm_test_context *ctx, u64 rip);
>  int svm_vmrun(struct svm_test_context *ctx);
> -void test_set_guest(test_guest_func func);
> -
>  
>  #endif
> diff --git a/x86/svm_npt.c b/x86/svm_npt.c
> index 39fd7198..1e27f9ef 100644
> --- a/x86/svm_npt.c
> +++ b/x86/svm_npt.c
> @@ -332,7 +332,7 @@ static void svm_npt_rsvd_bits_test(struct svm_test_context *ctx)
>  	sg_efer = guest_efer = vmcb->save.efer;
>  	sg_cr4 = guest_cr4 = vmcb->save.cr4;
>  
> -	test_set_guest(basic_guest_main);
> +	ctx->guest_func = basic_guest_main;
>  
>  	/*
>  	 * 4k PTEs don't have reserved bits if MAXPHYADDR >= 52, just skip the
> diff --git a/x86/svm_tests.c b/x86/svm_tests.c
> index bd92fcee..6d6dfa0e 100644
> --- a/x86/svm_tests.c
> +++ b/x86/svm_tests.c
> @@ -793,7 +793,7 @@ static void svm_tsc_scale_run_testcase(struct svm_test_context *ctx,
>  
>  	guest_tsc_delay_value = (duration << TSC_SHIFT) * tsc_scale;
>  
> -	test_set_guest(svm_tsc_scale_guest);
> +	ctx->guest_func = svm_tsc_scale_guest;
>  	vmcb->control.tsc_offset = tsc_offset;
>  	wrmsr(MSR_AMD64_TSC_RATIO, (u64)(tsc_scale * (1ULL << 32)));
>  
> @@ -2067,7 +2067,7 @@ static void svm_cr4_osxsave_test(struct svm_test_context *ctx)
>  
>  	report(this_cpu_has(X86_FEATURE_OSXSAVE), "CPUID.01H:ECX.XSAVE set before VMRUN");
>  
> -	test_set_guest(svm_cr4_osxsave_test_guest);
> +	ctx->guest_func = svm_cr4_osxsave_test_guest;
>  	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL,
>  	       "svm_cr4_osxsave_test_guest finished with VMMCALL");
>  
> @@ -2494,7 +2494,7 @@ static void guest_rflags_test_db_handler(struct ex_regs *r)
>  
>  static void svm_guest_state_test(struct svm_test_context *ctx)
>  {
> -	test_set_guest(basic_guest_main);
> +	ctx->guest_func = basic_guest_main;
>  	test_efer(ctx);
>  	test_cr0(ctx);
>  	test_cr3(ctx);
> @@ -2633,7 +2633,7 @@ static void svm_vmload_vmsave(struct svm_test_context *ctx)
>  	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  	u32 intercept_saved = vmcb->control.intercept;
>  
> -	test_set_guest(vmload_vmsave_guest_main);
> +	ctx->guest_func = vmload_vmsave_guest_main;
>  
>  	/*
>  	 * Disabling intercept for VMLOAD and VMSAVE doesn't cause
> @@ -2777,7 +2777,7 @@ static void pause_filter_run_test(struct svm_test_context *ctx,
>  {
>  	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
> -	test_set_guest(pause_filter_test_guest_main);
> +	ctx->guest_func = pause_filter_test_guest_main;
>  
>  	pause_test_counter = pause_iterations;
>  	wait_counter = wait_iterations;
> @@ -2832,7 +2832,7 @@ static void svm_no_nm_test(struct svm_test_context *ctx)
>  	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
>  	write_cr0(read_cr0() & ~X86_CR0_TS);
> -	test_set_guest((test_guest_func)fnop);
> +	ctx->guest_func = (test_guest_func)fnop;
>  
>  	vmcb->save.cr0 = vmcb->save.cr0 & ~(X86_CR0_TS | X86_CR0_EM);
>  	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL,
> @@ -3149,7 +3149,7 @@ static void svm_intr_intercept_mix_if(struct svm_test_context *ctx)
>  	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
>  	vmcb->save.rflags &= ~X86_EFLAGS_IF;
>  
> -	test_set_guest(svm_intr_intercept_mix_if_guest);
> +	ctx->guest_func = svm_intr_intercept_mix_if_guest;
>  	cli();
>  	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0);
>  	svm_intr_intercept_mix_run_guest(ctx, &dummy_isr_recevied, SVM_EXIT_INTR);
> @@ -3184,7 +3184,7 @@ static void svm_intr_intercept_mix_gif(struct svm_test_context *ctx)
>  	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
>  	vmcb->save.rflags &= ~X86_EFLAGS_IF;
>  
> -	test_set_guest(svm_intr_intercept_mix_gif_guest);
> +	ctx->guest_func = svm_intr_intercept_mix_gif_guest;
>  	cli();
>  	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0);
>  	svm_intr_intercept_mix_run_guest(ctx, &dummy_isr_recevied, SVM_EXIT_INTR);
> @@ -3216,7 +3216,7 @@ static void svm_intr_intercept_mix_gif2(struct svm_test_context *ctx)
>  	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
>  	vmcb->save.rflags |= X86_EFLAGS_IF;
>  
> -	test_set_guest(svm_intr_intercept_mix_gif_guest2);
> +	ctx->guest_func = svm_intr_intercept_mix_gif_guest2;
>  	svm_intr_intercept_mix_run_guest(ctx, &dummy_isr_recevied, SVM_EXIT_INTR);
>  }
>  
> @@ -3247,7 +3247,7 @@ static void svm_intr_intercept_mix_nmi(struct svm_test_context *ctx)
>  	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
>  	vmcb->save.rflags |= X86_EFLAGS_IF;
>  
> -	test_set_guest(svm_intr_intercept_mix_nmi_guest);
> +	ctx->guest_func = svm_intr_intercept_mix_nmi_guest;
>  	svm_intr_intercept_mix_run_guest(ctx, &nmi_recevied, SVM_EXIT_NMI);
>  }
>  
> @@ -3271,7 +3271,7 @@ static void svm_intr_intercept_mix_smi(struct svm_test_context *ctx)
>  
>  	vmcb->control.intercept |= (1 << INTERCEPT_SMI);
>  	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> -	test_set_guest(svm_intr_intercept_mix_smi_guest);
> +	ctx->guest_func = svm_intr_intercept_mix_smi_guest;
>  	svm_intr_intercept_mix_run_guest(ctx, NULL, SVM_EXIT_SMI);
>  }
>  
> @@ -3346,7 +3346,7 @@ static void svm_exception_test(struct svm_test_context *ctx)
>  
>  	for (i = 0; i < ARRAY_SIZE(svm_exception_tests); i++) {
>  		t = &svm_exception_tests[i];
> -		test_set_guest((test_guest_func)t->guest_code);
> +		ctx->guest_func = (test_guest_func)t->guest_code;
>  
>  		handle_exception_in_l2(ctx, t->vector);
>  		svm_vcpu_ident(ctx->vcpu);
> @@ -3366,7 +3366,7 @@ static void svm_shutdown_intercept_test(struct svm_test_context *ctx)
>  {
>  	struct vmcb *vmcb = ctx->vcpu->vmcb;
>  
> -	test_set_guest(shutdown_intercept_test_guest);
> +	ctx->guest_func = shutdown_intercept_test_guest;
>  	vmcb->save.idtr.base = (u64)alloc_vpage();
>  	vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN);
>  	svm_vmrun(ctx);
> 



* Re: [kvm-unit-tests PATCH v3 01/27] x86: replace irq_{enable|disable}() with sti()/cli()
  2022-12-01 13:46   ` Emanuele Giuseppe Esposito
@ 2022-12-06 13:55     ` Maxim Levitsky
  2022-12-06 14:15       ` Emanuele Giuseppe Esposito
  0 siblings, 1 reply; 56+ messages in thread
From: Maxim Levitsky @ 2022-12-06 13:55 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank

On Thu, 2022-12-01 at 14:46 +0100, Emanuele Giuseppe Esposito wrote:
> 
> On 22/11/2022 at 17:11, Maxim Levitsky wrote:
> > This removes a layer of indirection which is strictly
> > speaking not needed since its x86 code anyway.
> > 
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > ---
> >  lib/x86/processor.h       | 19 +++++-----------
> >  lib/x86/smp.c             |  2 +-
> >  x86/apic.c                |  2 +-
> >  x86/asyncpf.c             |  6 ++---
> >  x86/eventinj.c            | 22 +++++++++---------
> >  x86/hyperv_connections.c  |  2 +-
> >  x86/hyperv_stimer.c       |  4 ++--
> >  x86/hyperv_synic.c        |  6 ++---
> >  x86/intel-iommu.c         |  2 +-
> >  x86/ioapic.c              | 14 ++++++------
> >  x86/pmu.c                 |  4 ++--
> >  x86/svm.c                 |  4 ++--
> >  x86/svm_tests.c           | 48 +++++++++++++++++++--------------------
> >  x86/taskswitch2.c         |  4 ++--
> >  x86/tscdeadline_latency.c |  4 ++--
> >  x86/vmexit.c              | 18 +++++++--------
> >  x86/vmx_tests.c           | 42 +++++++++++++++++-----------------
> >  17 files changed, 98 insertions(+), 105 deletions(-)
> > 
> > diff --git a/lib/x86/processor.h b/lib/x86/processor.h
> > index 7a9e8c82..b89f6a7c 100644
> > --- a/lib/x86/processor.h
> > +++ b/lib/x86/processor.h
> > @@ -653,11 +653,17 @@ static inline void pause(void)
> >  	asm volatile ("pause");
> >  }
> >  
> > +/* Disable interrupts as per x86 spec */
> >  static inline void cli(void)
> >  {
> >  	asm volatile ("cli");
> >  }
> >  
> > +/*
> > + * Enable interrupts.
> > + * Note that next instruction after sti will not have interrupts
> > + * evaluated due to concept of 'interrupt shadow'
> > + */
> >  static inline void sti(void)
> >  {
> >  	asm volatile ("sti");
> > @@ -732,19 +738,6 @@ static inline void wrtsc(u64 tsc)
> >  	wrmsr(MSR_IA32_TSC, tsc);
> >  }
> >  
> > -static inline void irq_disable(void)
> > -{
> > -	asm volatile("cli");
> > -}
> > -
> > -/* Note that irq_enable() does not ensure an interrupt shadow due
> > - * to the vagaries of compiler optimizations.  If you need the
> > - * shadow, use a single asm with "sti" and the instruction after it.
> Minor nitpick: instead of a new doc comment, why not use this same
> above? Looks clearer to me.
> 
> Regardless,
> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> 

I am not 100% sure what you mean.
Note that cli() doesn't have the same interrupt-shadow behavior as sti().

Best regards,
	Maxim Levitsky
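
For readers joining the thread here: the "interrupt shadow" means the
instruction immediately after sti executes before any pending interrupt is
delivered, so a test that wants the interrupt taken at a known RIP pairs
sti with a nop inside a single asm statement. A helper along these lines is
the usual idiom (the name and body here are an editorial sketch):

    static inline void sti_nop(void)
    {
            /* keep sti and nop in one asm statement so the compiler
             * cannot schedule anything else into the shadow */
            asm volatile ("sti; nop");
    }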



* Re: [kvm-unit-tests PATCH v3 09/27] svm: add simple nested shutdown test.
  2022-12-01 13:46   ` Emanuele Giuseppe Esposito
@ 2022-12-06 13:56     ` Maxim Levitsky
  0 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-12-06 13:56 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank

On Thu, 2022-12-01 at 14:46 +0100, Emanuele Giuseppe Esposito wrote:
> 
> On 22/11/2022 at 17:11, Maxim Levitsky wrote:
> > Add a simple test that a shutdown in L2 is intercepted
> > correctly by the L1.
> > 
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > ---
> >  x86/svm_tests.c | 17 +++++++++++++++++
> >  1 file changed, 17 insertions(+)
> > 
> > diff --git a/x86/svm_tests.c b/x86/svm_tests.c
> > index a7641fb8..7a67132a 100644
> > --- a/x86/svm_tests.c
> > +++ b/x86/svm_tests.c
> > @@ -11,6 +11,7 @@
> >  #include "apic.h"
> >  #include "delay.h"
> >  #include "x86/usermode.h"
> > +#include "vmalloc.h"
> >  
> >  #define SVM_EXIT_MAX_DR_INTERCEPT 0x3f
> >  
> > @@ -3238,6 +3239,21 @@ static void svm_exception_test(void)
> >  	}
> >  }
> >  
> > +static void shutdown_intercept_test_guest(struct svm_test *test)
> > +{
> > +	asm volatile ("ud2");
> > +	report_fail("should not reach here\n");
> > +
> Remove empty line here

Will do,
Best regards,
	Maxim Levitsky

> > +}

Add empty line here
> > +static void svm_shutdown_intercept_test(void)
> > +{
> > +	test_set_guest(shutdown_intercept_test_guest);
> > +	vmcb->save.idtr.base = (u64)alloc_vpage();
> > +	vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN);
> > +	svm_vmrun();
> > +	report(vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed");
> > +}
> > +
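
Folding both whitespace comments into the hunk, the respin would presumably
read:

    static void shutdown_intercept_test_guest(struct svm_test *test)
    {
            asm volatile ("ud2");
            report_fail("should not reach here\n");
    }

    static void svm_shutdown_intercept_test(void)
    {
            test_set_guest(shutdown_intercept_test_guest);
            /* point the IDT at an unbacked page so delivering the #UD
             * escalates into a shutdown */
            vmcb->save.idtr.base = (u64)alloc_vpage();
            vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN);
            svm_vmrun();
            report(vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed");
    }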




* Re: [kvm-unit-tests PATCH v3 19/27] svm: rewerite vm entry macros
  2022-12-02 10:14   ` Emanuele Giuseppe Esposito
@ 2022-12-06 13:56     ` Maxim Levitsky
  0 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-12-06 13:56 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank

On Fri, 2022-12-02 at 11:14 +0100, Emanuele Giuseppe Esposito wrote:
> 
> On 22/11/2022 at 17:11, Maxim Levitsky wrote:
> > Make the SVM VM entry macros not use a hardcoded regs label
> > and also simplify them as much as possible.
> > 
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > ---
> >  lib/x86/svm_lib.h | 71 +++++++++++++++++++++++++++++++++++++++++++++++
> >  x86/svm.c         | 58 ++++++++++++--------------------------
> >  x86/svm.h         | 70 ++--------------------------------------------
> >  x86/svm_tests.c   | 24 ++++++++++------
> >  4 files changed, 106 insertions(+), 117 deletions(-)
> > 
> > diff --git a/lib/x86/svm_lib.h b/lib/x86/svm_lib.h
> > index 3bb098dc..f9c2b352 100644
> > --- a/lib/x86/svm_lib.h
> > +++ b/lib/x86/svm_lib.h
> > @@ -66,4 +66,75 @@ u8 *svm_get_io_bitmap(void);
> >  #define MSR_BITMAP_SIZE 8192
> >  
> >  
> > +struct svm_gprs {
> > +	u64 rax;
> > +	u64 rbx;
> > +	u64 rcx;
> > +	u64 rdx;
> > +	u64 rbp;
> > +	u64 rsi;
> > +	u64 rdi;
> > +	u64 r8;
> > +	u64 r9;
> > +	u64 r10;
> > +	u64 r11;
> > +	u64 r12;
> > +	u64 r13;
> > +	u64 r14;
> > +	u64 r15;
> > +	u64 rsp;
> > +};
> > +
> > +#define SWAP_GPRS \
> > +	"xchg %%rbx, 0x08(%%rax)\n"           \
> > +	"xchg %%rcx, 0x10(%%rax)\n"           \
> > +	"xchg %%rdx, 0x18(%%rax)\n"           \
> > +	"xchg %%rbp, 0x20(%%rax)\n"           \
> > +	"xchg %%rsi, 0x28(%%rax)\n"           \
> > +	"xchg %%rdi, 0x30(%%rax)\n"           \
> > +	"xchg %%r8,  0x38(%%rax)\n"           \
> > +	"xchg %%r9,  0x40(%%rax)\n"           \
> > +	"xchg %%r10, 0x48(%%rax)\n"           \
> > +	"xchg %%r11, 0x50(%%rax)\n"           \
> > +	"xchg %%r12, 0x58(%%rax)\n"           \
> > +	"xchg %%r13, 0x60(%%rax)\n"           \
> > +	"xchg %%r14, 0x68(%%rax)\n"           \
> > +	"xchg %%r15, 0x70(%%rax)\n"           \
> > +	\
> > +
> > +
> > +#define __SVM_VMRUN(vmcb, regs, label)        \
> > +{                                             \
> > +	u32 dummy;                            \
> > +\
> > +	(vmcb)->save.rax = (regs)->rax;       \
> > +	(vmcb)->save.rsp = (regs)->rsp;       \
> > +\
> > +	asm volatile (                        \
> > +		"vmload %%rax\n"              \
> > +		"push %%rbx\n"                \
> > +		"push %%rax\n"                \
> > +		"mov %%rbx, %%rax\n"          \
> > +		SWAP_GPRS                     \
> > +		"pop %%rax\n"                 \
> > +		".global " label "\n"         \
> > +		label ": vmrun %%rax\n"       \
> > +		"vmsave %%rax\n"              \
> > +		"pop %%rax\n"                 \
> > +		SWAP_GPRS                     \
> > +		: "=a"(dummy),                \
> > +		  "=b"(dummy)                 \
> > +		: "a" (virt_to_phys(vmcb)),   \
> > +		  "b"(regs)                   \
> > +		/* clobbers*/                 \
> > +		: "memory"                    \
> > +	);                                    \
> > +\
> > +	(regs)->rax = (vmcb)->save.rax;       \
> > +	(regs)->rsp = (vmcb)->save.rsp;       \
> > +}
> > +
> > +#define SVM_VMRUN(vmcb, regs) \
> > +	__SVM_VMRUN(vmcb, regs, "vmrun_dummy_label_%=")
> > +
> >  #endif /* SRC_LIB_X86_SVM_LIB_H_ */
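
An editorial usage sketch of the macro defined above, following the same
pattern __svm_vmrun adopts later in this patch (guest_entry and stack are
invented names):

    struct svm_gprs regs = {};

    regs.rsp = (u64)(stack + ARRAY_SIZE(stack));   /* top of the guest stack */
    vmcb->save.rip = (u64)guest_entry;

    SVM_VMRUN(vmcb, &regs);  /* vmload, swap GPRs in, vmrun, vmsave, swap back */

    printf("exit code: 0x%x\n", vmcb->control.exit_code);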
> > diff --git a/x86/svm.c b/x86/svm.c
> > index 5e2c3a83..220bce66 100644
> > --- a/x86/svm.c
> > +++ b/x86/svm.c
> > @@ -76,16 +76,13 @@ static void test_thunk(struct svm_test *test)
> >  	vmmcall();
> >  }
> >  
> > -struct regs regs;
> > +static struct svm_gprs regs;
> >  
> > -struct regs get_regs(void)
> > +struct svm_gprs *get_regs(void)
> >  {
> > -	return regs;
> > +	return &regs;
> >  }
> >  
> > -// rax handled specially below
> > -
> > -
> >  struct svm_test *v2_test;
> >  
> >  
> > @@ -94,16 +91,10 @@ u64 guest_stack[10000];
> >  int __svm_vmrun(u64 rip)
> >  {
> >  	vmcb->save.rip = (ulong)rip;
> > -	vmcb->save.rsp = (ulong)(guest_stack + ARRAY_SIZE(guest_stack));
> > +	regs.rsp = (ulong)(guest_stack + ARRAY_SIZE(guest_stack));
> >  	regs.rdi = (ulong)v2_test;
> >  
> > -	asm volatile (
> > -		      ASM_PRE_VMRUN_CMD
> > -		      "vmrun %%rax\n\t"               \
> > -		      ASM_POST_VMRUN_CMD
> > -		      :
> > -		      : "a" (virt_to_phys(vmcb))
> > -		      : "memory", "r15");
> > +	SVM_VMRUN(vmcb, &regs);
> >  
> >  	return (vmcb->control.exit_code);
> >  }
> > @@ -113,43 +104,28 @@ int svm_vmrun(void)
> >  	return __svm_vmrun((u64)test_thunk);
> >  }
> >  
> > -extern u8 vmrun_rip;
> > -
> >  static noinline void test_run(struct svm_test *test)
> >  {
> > -	u64 vmcb_phys = virt_to_phys(vmcb);
> > -
> >  	cli();
> >  	vmcb_ident(vmcb);
> >  
> >  	test->prepare(test);
> >  	guest_main = test->guest_func;
> >  	vmcb->save.rip = (ulong)test_thunk;
> > -	vmcb->save.rsp = (ulong)(guest_stack + ARRAY_SIZE(guest_stack));
> > +	regs.rsp = (ulong)(guest_stack + ARRAY_SIZE(guest_stack));
> >  	regs.rdi = (ulong)test;
> >  	do {
> > -		struct svm_test *the_test = test;
> > -		u64 the_vmcb = vmcb_phys;
> > -		asm volatile (
> > -			      "clgi;\n\t" // semi-colon needed for LLVM compatibility
> > -			      "sti \n\t"
> > -			      "call *%c[PREPARE_GIF_CLEAR](%[test]) \n \t"
> > -			      "mov %[vmcb_phys], %%rax \n\t"
> > -			      ASM_PRE_VMRUN_CMD
> > -			      ".global vmrun_rip\n\t"		\
> > -			      "vmrun_rip: vmrun %%rax\n\t"    \
> > -			      ASM_POST_VMRUN_CMD
> > -			      "cli \n\t"
> > -			      "stgi"
> > -			      : // inputs clobbered by the guest:
> > -				"=D" (the_test),            // first argument register
> > -				"=b" (the_vmcb)             // callee save register!
> > -			      : [test] "0" (the_test),
> > -				[vmcb_phys] "1"(the_vmcb),
> > -				[PREPARE_GIF_CLEAR] "i" (offsetof(struct svm_test, prepare_gif_clear))
> > -			      : "rax", "rcx", "rdx", "rsi",
> > -				"r8", "r9", "r10", "r11" , "r12", "r13", "r14", "r15",
> > -				"memory");
> > +
> > +		clgi();
> > +		sti();
> > +
> > +		test->prepare_gif_clear(test);
> > +
> > +		__SVM_VMRUN(vmcb, &regs, "vmrun_rip");
> > +
> > +		cli();
> > +		stgi();
> > +
> >  		++test->exits;
> >  	} while (!test->finished(test));
> >  	sti();
> > diff --git a/x86/svm.h b/x86/svm.h
> > index a4aabeb2..6f809ce3 100644
> > --- a/x86/svm.h
> > +++ b/x86/svm.h
> > @@ -23,26 +23,6 @@ struct svm_test {
> >  	bool on_vcpu_done;
> >  };
> >  
> > -struct regs {
> > -	u64 rax;
> > -	u64 rbx;
> > -	u64 rcx;
> > -	u64 rdx;
> > -	u64 cr2;
> > -	u64 rbp;
> > -	u64 rsi;
> > -	u64 rdi;
> > -	u64 r8;
> > -	u64 r9;
> > -	u64 r10;
> > -	u64 r11;
> > -	u64 r12;
> > -	u64 r13;
> > -	u64 r14;
> > -	u64 r15;
> > -	u64 rflags;
> > -};
> > -
> >  typedef void (*test_guest_func)(struct svm_test *);
> >  
> >  int run_svm_tests(int ac, char **av, struct svm_test *svm_tests);
> > @@ -55,58 +35,12 @@ bool default_finished(struct svm_test *test);
> >  int get_test_stage(struct svm_test *test);
> >  void set_test_stage(struct svm_test *test, int s);
> >  void inc_test_stage(struct svm_test *test);
> > -struct regs get_regs(void);
> > +struct svm_gprs *get_regs(void);
> >  int __svm_vmrun(u64 rip);
> >  void __svm_bare_vmrun(void);
> >  int svm_vmrun(void);
> >  void test_set_guest(test_guest_func func);
> >  
> >  extern struct vmcb *vmcb;
> > -
> > -
> > -#define SAVE_GPR_C                              \
> > -        "xchg %%rbx, regs+0x8\n\t"              \
> > -        "xchg %%rcx, regs+0x10\n\t"             \
> > -        "xchg %%rdx, regs+0x18\n\t"             \
> > -        "xchg %%rbp, regs+0x28\n\t"             \
> > -        "xchg %%rsi, regs+0x30\n\t"             \
> > -        "xchg %%rdi, regs+0x38\n\t"             \
> > -        "xchg %%r8, regs+0x40\n\t"              \
> > -        "xchg %%r9, regs+0x48\n\t"              \
> > -        "xchg %%r10, regs+0x50\n\t"             \
> > -        "xchg %%r11, regs+0x58\n\t"             \
> > -        "xchg %%r12, regs+0x60\n\t"             \
> > -        "xchg %%r13, regs+0x68\n\t"             \
> > -        "xchg %%r14, regs+0x70\n\t"             \
> > -        "xchg %%r15, regs+0x78\n\t"
> > -
> > -#define LOAD_GPR_C      SAVE_GPR_C
> > -
> > -#define ASM_PRE_VMRUN_CMD                       \
> > -                "vmload %%rax\n\t"              \
> > -                "mov regs+0x80, %%r15\n\t"      \
> > -                "mov %%r15, 0x170(%%rax)\n\t"   \
> > -                "mov regs, %%r15\n\t"           \
> > -                "mov %%r15, 0x1f8(%%rax)\n\t"   \
> > -                LOAD_GPR_C                      \
> > -
> > -#define ASM_POST_VMRUN_CMD                      \
> > -                SAVE_GPR_C                      \
> > -                "mov 0x170(%%rax), %%r15\n\t"   \
> > -                "mov %%r15, regs+0x80\n\t"      \
> > -                "mov 0x1f8(%%rax), %%r15\n\t"   \
> > -                "mov %%r15, regs\n\t"           \
> > -                "vmsave %%rax\n\t"              \
> > -
> > -
> > -
> > -#define SVM_BARE_VMRUN \
> > -	asm volatile ( \
> > -		ASM_PRE_VMRUN_CMD \
> > -                "vmrun %%rax\n\t"               \
> > -		ASM_POST_VMRUN_CMD \
> > -		: \
> > -		: "a" (virt_to_phys(vmcb)) \
> > -		: "memory", "r15") \
> > -
> > +extern struct svm_test svm_tests[];
> 
> svm_tests[] has nothing to do with the patch here, and it's probably
> also useless? I see it is being deleted in patch 23...

Probably leftover from refactoring. Will remove.

Best regards,
	Maxim Levitsky

> 
> >  #endif
> > diff --git a/x86/svm_tests.c b/x86/svm_tests.c
> > index 712d24e2..70e41300 100644
> > --- a/x86/svm_tests.c
> > +++ b/x86/svm_tests.c
> > @@ -399,7 +399,7 @@ static bool msr_intercept_finished(struct svm_test *test)
> >  		 * RCX holds the MSR index.
> >  		 */
> >  		printf("%s 0x%lx #GP exception\n",
> > -		       exit_info_1 ? "WRMSR" : "RDMSR", get_regs().rcx);
> > +		       exit_info_1 ? "WRMSR" : "RDMSR", get_regs()->rcx);
> >  	}
> >  
> >  	/* Jump over RDMSR/WRMSR instruction */
> > @@ -415,9 +415,9 @@ static bool msr_intercept_finished(struct svm_test *test)
> >  	 */
> >  	if (exit_info_1)
> >  		test->scratch =
> > -			((get_regs().rdx << 32) | (vmcb->save.rax & 0xffffffff));
> > +			((get_regs()->rdx << 32) | (get_regs()->rax & 0xffffffff));
> >  	else
> > -		test->scratch = get_regs().rcx;
> > +		test->scratch = get_regs()->rcx;
> >  
> >  	return false;
> >  }
> > @@ -1842,7 +1842,7 @@ static volatile bool host_rflags_set_tf = false;
> >  static volatile bool host_rflags_set_rf = false;
> >  static u64 rip_detected;
> >  
> > -extern u64 *vmrun_rip;
> > +extern u64 vmrun_rip;
> >  
> >  static void host_rflags_db_handler(struct ex_regs *r)
> >  {
> > @@ -2878,6 +2878,8 @@ static void svm_lbrv_test0(void)
> >  
> >  static void svm_lbrv_test1(void)
> >  {
> > +	struct svm_gprs *regs = get_regs();
> > +
> >  	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(1)");
> >  
> >  	vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
> > @@ -2885,7 +2887,7 @@ static void svm_lbrv_test1(void)
> >  
> >  	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
> >  	DO_BRANCH(host_branch1);
> > -	SVM_BARE_VMRUN;
> > +	SVM_VMRUN(vmcb, regs);
> >  	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> >  
> >  	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > @@ -2900,6 +2902,8 @@ static void svm_lbrv_test1(void)
> >  
> >  static void svm_lbrv_test2(void)
> >  {
> > +	struct svm_gprs *regs = get_regs();
> > +
> >  	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(2)");
> >  
> >  	vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
> > @@ -2908,7 +2912,7 @@ static void svm_lbrv_test2(void)
> >  	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
> >  	DO_BRANCH(host_branch2);
> >  	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
> > -	SVM_BARE_VMRUN;
> > +	SVM_VMRUN(vmcb, regs);
> >  	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> >  	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
> >  
> > @@ -2924,6 +2928,8 @@ static void svm_lbrv_test2(void)
> >  
> >  static void svm_lbrv_nested_test1(void)
> >  {
> > +	struct svm_gprs *regs = get_regs();
> > +
> >  	if (!lbrv_supported()) {
> >  		report_skip("LBRV not supported in the guest");
> >  		return;
> > @@ -2936,7 +2942,7 @@ static void svm_lbrv_nested_test1(void)
> >  
> >  	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
> >  	DO_BRANCH(host_branch3);
> > -	SVM_BARE_VMRUN;
> > +	SVM_VMRUN(vmcb, regs);
> >  	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> >  	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
> >  
> > @@ -2957,6 +2963,8 @@ static void svm_lbrv_nested_test1(void)
> >  
> >  static void svm_lbrv_nested_test2(void)
> >  {
> > +	struct svm_gprs *regs = get_regs();
> > +
> >  	if (!lbrv_supported()) {
> >  		report_skip("LBRV not supported in the guest");
> >  		return;
> > @@ -2972,7 +2980,7 @@ static void svm_lbrv_nested_test2(void)
> >  
> >  	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
> >  	DO_BRANCH(host_branch4);
> > -	SVM_BARE_VMRUN;
> > +	SVM_VMRUN(vmcb, regs);
> >  	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> >  	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
> >  
> > 




* Re: [kvm-unit-tests PATCH v3 21/27] svm: cleanup the default_prepare
  2022-12-02  9:45   ` Emanuele Giuseppe Esposito
@ 2022-12-06 13:56     ` Maxim Levitsky
  0 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-12-06 13:56 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank

On Fri, 2022-12-02 at 10:45 +0100, Emanuele Giuseppe Esposito wrote:
> 
> On 22/11/2022 17:11, Maxim Levitsky wrote:
> > default_prepare only calls vmcb_ident, which is called before
> > each test anyway
> > 
> > Also, don't call this now-empty function from other
> > .prepare functions.
> > 
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > ---
> >  x86/svm.c       |  1 -
> >  x86/svm_tests.c | 18 ------------------
> >  2 files changed, 19 deletions(-)
> > 
> > diff --git a/x86/svm.c b/x86/svm.c
> > index 2ab553a5..5667402b 100644
> > --- a/x86/svm.c
> > +++ b/x86/svm.c
> > @@ -30,7 +30,6 @@ bool default_supported(void)
> >  
> >  void default_prepare(struct svm_test *test)
> >  {
> > -	vmcb_ident(vmcb);
> >  }
> 
> Makes sense removing it, but maybe remove the function altogether since
> it is not used anymore, and then change test_run() to handle ->prepare ==
> NULL?


I had a version of the refactoring which removed all of these, but that made
the table of unit tests look ugly with all the NULLs there, which suggests
that this table should be rewritten to use named initializers.

I decided to drop this for now; I'll do it later.
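
Roughly what I have in mind for later (just a sketch; the exact set of
initialized fields is illustrative):

	static struct svm_test svm_tests[] = {
		{
			.name = "null",
			.guest_func = null_test,
			.finished = null_check,
		},
		...
	};

with test_run() then skipping the callbacks that are left NULL, e.g.:

	if (test->prepare)
		test->prepare(test);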

Best regards,
	Maxim Levitsky

> 
> >  
> >  void default_prepare_gif_clear(struct svm_test *test)
> > diff --git a/x86/svm_tests.c b/x86/svm_tests.c
> > index 70e41300..3b68718e 100644
> > --- a/x86/svm_tests.c
> > +++ b/x86/svm_tests.c
> > @@ -69,7 +69,6 @@ static bool check_vmrun(struct svm_test *test)
> >  
> >  static void prepare_rsm_intercept(struct svm_test *test)
> >  {
> > -	default_prepare(test);
> >  	vmcb->control.intercept |= 1 << INTERCEPT_RSM;
> >  	vmcb->control.intercept_exceptions |= (1ULL << UD_VECTOR);
> >  }
> > @@ -115,7 +114,6 @@ static bool finished_rsm_intercept(struct svm_test *test)
> >  
> >  static void prepare_cr3_intercept(struct svm_test *test)
> >  {
> > -	default_prepare(test);
> >  	vmcb->control.intercept_cr_read |= 1 << 3;
> >  }
> >  
> > @@ -149,7 +147,6 @@ static void corrupt_cr3_intercept_bypass(void *_test)
> >  
> >  static void prepare_cr3_intercept_bypass(struct svm_test *test)
> >  {
> > -	default_prepare(test);
> >  	vmcb->control.intercept_cr_read |= 1 << 3;
> >  	on_cpu_async(1, corrupt_cr3_intercept_bypass, test);
> >  }
> > @@ -169,7 +166,6 @@ static void test_cr3_intercept_bypass(struct svm_test *test)
> >  
> >  static void prepare_dr_intercept(struct svm_test *test)
> >  {
> > -	default_prepare(test);
> >  	vmcb->control.intercept_dr_read = 0xff;
> >  	vmcb->control.intercept_dr_write = 0xff;
> >  }
> > @@ -310,7 +306,6 @@ static bool check_next_rip(struct svm_test *test)
> >  
> >  static void prepare_msr_intercept(struct svm_test *test)
> >  {
> > -	default_prepare(test);
> >  	vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT);
> >  	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
> >  	memset(svm_get_msr_bitmap(), 0xff, MSR_BITMAP_SIZE);
> > @@ -711,7 +706,6 @@ static bool tsc_adjust_supported(void)
> >  
> >  static void tsc_adjust_prepare(struct svm_test *test)
> >  {
> > -	default_prepare(test);
> >  	vmcb->control.tsc_offset = TSC_OFFSET_VALUE;
> >  
> >  	wrmsr(MSR_IA32_TSC_ADJUST, -TSC_ADJUST_VALUE);
> > @@ -811,7 +805,6 @@ static void svm_tsc_scale_test(void)
> >  
> >  static void latency_prepare(struct svm_test *test)
> >  {
> > -	default_prepare(test);
> >  	runs = LATENCY_RUNS;
> >  	latvmrun_min = latvmexit_min = -1ULL;
> >  	latvmrun_max = latvmexit_max = 0;
> > @@ -884,7 +877,6 @@ static bool latency_check(struct svm_test *test)
> >  
> >  static void lat_svm_insn_prepare(struct svm_test *test)
> >  {
> > -	default_prepare(test);
> >  	runs = LATENCY_RUNS;
> >  	latvmload_min = latvmsave_min = latstgi_min = latclgi_min = -1ULL;
> >  	latvmload_max = latvmsave_max = latstgi_max = latclgi_max = 0;
> > @@ -965,7 +957,6 @@ static void pending_event_prepare(struct svm_test *test)
> >  {
> >  	int ipi_vector = 0xf1;
> >  
> > -	default_prepare(test);
> >  
> >  	pending_event_ipi_fired = false;
> >  
> > @@ -1033,8 +1024,6 @@ static bool pending_event_check(struct svm_test *test)
> >  
> >  static void pending_event_cli_prepare(struct svm_test *test)
> >  {
> > -	default_prepare(test);
> > -
> >  	pending_event_ipi_fired = false;
> >  
> >  	handle_irq(0xf1, pending_event_ipi_isr);
> > @@ -1139,7 +1128,6 @@ static void timer_isr(isr_regs_t *regs)
> >  
> >  static void interrupt_prepare(struct svm_test *test)
> >  {
> > -	default_prepare(test);
> >  	handle_irq(TIMER_VECTOR, timer_isr);
> >  	timer_fired = false;
> >  	set_test_stage(test, 0);
> > @@ -1272,7 +1260,6 @@ static void nmi_handler(struct ex_regs *regs)
> >  
> >  static void nmi_prepare(struct svm_test *test)
> >  {
> > -	default_prepare(test);
> >  	nmi_fired = false;
> >  	handle_exception(NMI_VECTOR, nmi_handler);
> >  	set_test_stage(test, 0);
> > @@ -1450,7 +1437,6 @@ static void my_isr(struct ex_regs *r)
> >  
> >  static void exc_inject_prepare(struct svm_test *test)
> >  {
> > -	default_prepare(test);
> >  	handle_exception(DE_VECTOR, my_isr);
> >  	handle_exception(NMI_VECTOR, my_isr);
> >  }
> > @@ -1519,7 +1505,6 @@ static void virq_isr(isr_regs_t *regs)
> >  static void virq_inject_prepare(struct svm_test *test)
> >  {
> >  	handle_irq(0xf1, virq_isr);
> > -	default_prepare(test);
> >  	vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
> >  		(0x0f << V_INTR_PRIO_SHIFT); // Set to the highest priority
> >  	vmcb->control.int_vector = 0xf1;
> > @@ -1682,7 +1667,6 @@ static void reg_corruption_isr(isr_regs_t *regs)
> >  
> >  static void reg_corruption_prepare(struct svm_test *test)
> >  {
> > -	default_prepare(test);
> >  	set_test_stage(test, 0);
> >  
> >  	vmcb->control.int_ctl = V_INTR_MASKING_MASK;
> > @@ -1877,7 +1861,6 @@ static void host_rflags_db_handler(struct ex_regs *r)
> >  
> >  static void host_rflags_prepare(struct svm_test *test)
> >  {
> > -	default_prepare(test);
> >  	handle_exception(DB_VECTOR, host_rflags_db_handler);
> >  	set_test_stage(test, 0);
> >  }
> > @@ -2610,7 +2593,6 @@ static void svm_vmload_vmsave(void)
> >  
> >  static void prepare_vgif_enabled(struct svm_test *test)
> >  {
> > -	default_prepare(test);
> >  }
> >  
> >  static void test_vgif(struct svm_test *test)
> > 






* Re: [kvm-unit-tests PATCH v3 11/27] lib: Add random number generator
  2022-11-23 12:54     ` Andrew Jones
@ 2022-12-06 13:57       ` Maxim Levitsky
  0 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-12-06 13:57 UTC (permalink / raw)
  To: Andrew Jones, Claudio Imbrenda
  Cc: kvm, Andrew Jones, Alexandru Elisei, Paolo Bonzini, Thomas Huth,
	Alex Bennée, Nico Boehr, Cathy Avery, Janosch Frank

On Wed, 2022-11-23 at 13:54 +0100, Andrew Jones wrote:
> On Wed, Nov 23, 2022 at 10:28:50AM +0100, Claudio Imbrenda wrote:
> > On Tue, 22 Nov 2022 18:11:36 +0200
> > Maxim Levitsky <mlevitsk@redhat.com> wrote:
> > 
> > > Add a simple pseudo random number generator which can be used
> > > in the tests to add randomness in a controlled manner.
> > 
> > ahh, yes I have wanted something like this in the library for quite some
> > time! thanks!
> 
> Here's another approach that we unfortunately never got merged.
> https://lore.kernel.org/all/20211202115352.951548-5-alex.bennee@linaro.org/
> 
> Thanks,
> drew
> 

Looks like a better version of the generator I took (I used the same one that is proposed for the in-kernel selftests).

It is really a pity that patches get lost like that; we should not need to do the work twice.

Best regards,
	Maxim Levitsky



* Re: [kvm-unit-tests PATCH v3 11/27] lib: Add random number generator
  2022-11-23  9:28   ` Claudio Imbrenda
  2022-11-23 12:54     ` Andrew Jones
@ 2022-12-06 14:07     ` Maxim Levitsky
  2022-12-14 10:33       ` Claudio Imbrenda
  1 sibling, 1 reply; 56+ messages in thread
From: Maxim Levitsky @ 2022-12-06 14:07 UTC (permalink / raw)
  To: Claudio Imbrenda
  Cc: kvm, Andrew Jones, Alexandru Elisei, Paolo Bonzini, Thomas Huth,
	Alex Bennée, Nico Boehr, Cathy Avery, Janosch Frank

On Wed, 2022-11-23 at 10:28 +0100, Claudio Imbrenda wrote:
> On Tue, 22 Nov 2022 18:11:36 +0200
> Maxim Levitsky <mlevitsk@redhat.com> wrote:
> 
> > Add a simple pseudo random number generator which can be used
> > in the tests to add randomness in a controlled manner.
> 
> ahh, yes I have wanted something like this in the library for quite some
> time! thanks!
> 
> I have some comments regarding the interfaces (see below), and also a
> request, if you could split the x86 part in a different patch, so we
> can have a "pure" lib patch, and then you can have an x86-only patch
> that uses the new interface
> 
> > For x86 add a wrapper which initializes the PRNG with RDRAND,
> > unless RANDOM_SEED env variable is set, in which case it is used
> > instead.
> > 
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > ---
> >  Makefile              |  3 ++-
> >  README.md             |  1 +
> >  lib/prng.c            | 41 +++++++++++++++++++++++++++++++++++++++++
> >  lib/prng.h            | 23 +++++++++++++++++++++++
> >  lib/x86/random.c      | 33 +++++++++++++++++++++++++++++++++
> >  lib/x86/random.h      | 17 +++++++++++++++++
> >  scripts/arch-run.bash |  2 +-
> >  x86/Makefile.common   |  1 +
> >  8 files changed, 119 insertions(+), 2 deletions(-)
> >  create mode 100644 lib/prng.c
> >  create mode 100644 lib/prng.h
> >  create mode 100644 lib/x86/random.c
> >  create mode 100644 lib/x86/random.h
> > 
> > diff --git a/Makefile b/Makefile
> > index 6ed5deac..384b5acf 100644
> > --- a/Makefile
> > +++ b/Makefile
> > @@ -29,7 +29,8 @@ cflatobjs := \
> >  	lib/string.o \
> >  	lib/abort.o \
> >  	lib/report.o \
> > -	lib/stack.o
> > +	lib/stack.o \
> > +	lib/prng.o
> >  
> >  # libfdt paths
> >  LIBFDT_objdir = lib/libfdt
> > diff --git a/README.md b/README.md
> > index 6e82dc22..5a677a03 100644
> > --- a/README.md
> > +++ b/README.md
> > @@ -91,6 +91,7 @@ the framework.  The list of reserved environment variables is below
> >      QEMU_ACCEL                   either kvm, hvf or tcg
> >      QEMU_VERSION_STRING          string of the form `qemu -h | head -1`
> >      KERNEL_VERSION_STRING        string of the form `uname -r`
> > +    TEST_SEED                    integer to force a fixed seed for the prng
> >  
> >  Additionally these self-explanatory variables are reserved
> >  
> > diff --git a/lib/prng.c b/lib/prng.c
> > new file mode 100644
> > index 00000000..d9342eb3
> > --- /dev/null
> > +++ b/lib/prng.c
> > @@ -0,0 +1,41 @@
> > +
> > +/*
> > + * Random number generator that is usable from guest code. This is the
> > + * Park-Miller LCG using standard constants.
> > + */
> > +
> > +#include "libcflat.h"
> > +#include "prng.h"
> > +
> > +struct random_state new_random_state(uint32_t seed)
> > +{
> > +	struct random_state s = {.seed = seed};
> > +	return s;
> > +}
> > +
> > +uint32_t random_u32(struct random_state *state)
> > +{
> > +	state->seed = (uint64_t)state->seed * 48271 % ((uint32_t)(1 << 31) - 1);
> 
> why not:
> 
> state->seed = state->seed * 48271ULL % (BIT_ULL(31) - 1);
> 
> I think it's more readable

I copied this code verbatim from a patch that was sent to the in-kernel
selftests, as Sean suggested.

To be honest, I would have picked a more complex random generator, like the
Mersenne Twister, since performance is not an issue here and this generator
is, I think, geared toward being as fast as possible.

But again, I don't care much about this; any source of randomness is better
than nothing.

> 
> > +	return state->seed;
> > +}
> > +
> > +
> > +uint32_t random_range(struct random_state *state, uint32_t min, uint32_t max)
> > +{
> > +	uint32_t val = random_u32(state);
> > +
> > +	return val % (max - min + 1) + min;
> 
> what happens if max == UINT_MAX and min = 0 ?
> 
> maybe:
> 
> if (max - min == UINT_MAX)
> 	return val;

Makes sense.
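
So something like this (untested, just folding your check in):

	uint32_t random_range(struct random_state *state, uint32_t min, uint32_t max)
	{
		uint32_t val = random_u32(state);

		/* full u32 range: (max - min + 1) would overflow to 0 */
		if (max - min == UINT32_MAX)
			return val;

		return val % (max - min + 1) + min;
	}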
> 
> > +}
> > +
> > +/*
> > + * Returns true randomly in 'percent_true' cases (e.g if percent_true = 70.0,
> > + * it will return true in 70.0% of cases)
> > + */
> > +bool random_decision(struct random_state *state, float percent_true)
> 
> I'm not a fan of floats in the lib...
> 
> > +{
> > +	if (percent_true == 0)
> > +		return 0;
> > +	if (percent_true == 100)
> > +		return 1;
> > +	return random_range(state, 1, 10000) < (uint32_t)(percent_true * 100);
> 
> ...especially when you are only using 2 decimal places anyway

I was thinking the same about this; there are pros and cons.
Using a fixed-point integer is a bit less convenient, but overall I don't
mind using it.
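
I.e. roughly (untested):

	/* percent_true is in units of 0.01%, e.g. 7123 means 71.23% */
	bool random_decision(struct random_state *state, uint32_t percent_true)
	{
		if (percent_true == 0)
			return false;
		if (percent_true >= 10000)
			return true;
		/* random_range(state, 1, 10000) yields 10000 equally likely values */
		return random_range(state, 1, 10000) <= percent_true;
	}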



> 
> can you rewrite it to take an unsigned int? 
> e.g. if percent_true = 7123, it will return true in 71.23% of the cases
> 
> then you can rewrite the last line like this:
> 
> return random_range(state, 1, 10000) < percent_true;
> 
> > +}
> > diff --git a/lib/prng.h b/lib/prng.h
> > new file mode 100644
> > index 00000000..61d3a48b
> > --- /dev/null
> > +++ b/lib/prng.h
> > @@ -0,0 +1,23 @@
> > +
> > +#ifndef SRC_LIB_PRNG_H_
> > +#define SRC_LIB_PRNG_H_
> > +
> > +struct random_state {
> > +	uint32_t seed;
> > +};
> > +
> > +struct random_state new_random_state(uint32_t seed);
> > +uint32_t random_u32(struct random_state *state);
> > +
> > +/*
> > + * return a random number from min to max (included)
> > + */
> > +uint32_t random_range(struct random_state *state, uint32_t min, uint32_t max);
> > +
> > +/*
> > + * Returns true randomly in 'percent_true' cases (e.g if percent_true = 70.0,
> > + * it will return true in 70.0% of cases)
> > + */
> > +bool random_decision(struct random_state *state, float percent_true);
> > +
> > +#endif /* SRC_LIB_PRNG_H_ */
> 
> and then put the rest below in a new patch

No problem. Note that the x86-specific bits can be minimized,
but that requires plumbing in several things that you would
otherwise take for granted; in particular, both env vars
and the APIC ID are x86-specific.

When the random number generator is wired up for a new arch,
this can be fixed.
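
For reference, usage in a test then is just (per the patch above):

	init_prng();
	...
	struct random_state rs = get_prng();
	u32 val = random_range(&rs, 0, 15);

and each vCPU gets its own stream, since get_prng() adds the APIC ID to
the seed.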

Best regards,
	Maxim Levitsky

> 
> > diff --git a/lib/x86/random.c b/lib/x86/random.c
> > new file mode 100644
> > index 00000000..fcdd5fe8
> > --- /dev/null
> > +++ b/lib/x86/random.c
> > @@ -0,0 +1,33 @@
> > +
> > +#include "libcflat.h"
> > +#include "processor.h"
> > +#include "prng.h"
> > +#include "smp.h"
> > +#include "asm/spinlock.h"
> > +#include "random.h"
> > +
> > +static u32 test_seed;
> > +static bool initialized;
> > +
> > +void init_prng(void)
> > +{
> > +	char *test_seed_str = getenv("TEST_SEED");
> > +
> > +	if (test_seed_str && strlen(test_seed_str))
> > +		test_seed = atol(test_seed_str);
> > +	else
> > +#ifdef __x86_64__
> > +		test_seed =  (u32)rdrand();
> > +#else
> > +		test_seed = (u32)(rdtsc() << 4);
> > +#endif
> > +	initialized = true;
> > +
> > +	printf("Test seed: %u\n", (unsigned int)test_seed);
> > +}
> > +
> > +struct random_state get_prng(void)
> > +{
> > +	assert(initialized);
> > +	return new_random_state(test_seed + this_cpu_read_smp_id());
> > +}
> > diff --git a/lib/x86/random.h b/lib/x86/random.h
> > new file mode 100644
> > index 00000000..795b450b
> > --- /dev/null
> > +++ b/lib/x86/random.h
> > @@ -0,0 +1,17 @@
> > +/*
> > + * prng.h
> > + *
> > + *  Created on: Nov 9, 2022
> > + *      Author: mlevitsk
> > + */
> > +
> > +#ifndef SRC_LIB_X86_RANDOM_H_
> > +#define SRC_LIB_X86_RANDOM_H_
> > +
> > +#include "libcflat.h"
> > +#include "prng.h"
> > +
> > +void init_prng(void);
> > +struct random_state get_prng(void);
> > +
> > +#endif /* SRC_LIB_X86_RANDOM_H_ */
> > diff --git a/scripts/arch-run.bash b/scripts/arch-run.bash
> > index 51e4b97b..238d19f8 100644
> > --- a/scripts/arch-run.bash
> > +++ b/scripts/arch-run.bash
> > @@ -298,7 +298,7 @@ env_params ()
> >  	KERNEL_EXTRAVERSION=${KERNEL_EXTRAVERSION%%[!0-9]*}
> >  	! [[ $KERNEL_SUBLEVEL =~ ^[0-9]+$ ]] && unset $KERNEL_SUBLEVEL
> >  	! [[ $KERNEL_EXTRAVERSION =~ ^[0-9]+$ ]] && unset $KERNEL_EXTRAVERSION
> > -	env_add_params KERNEL_VERSION_STRING KERNEL_VERSION KERNEL_PATCHLEVEL KERNEL_SUBLEVEL KERNEL_EXTRAVERSION
> > +	env_add_params KERNEL_VERSION_STRING KERNEL_VERSION KERNEL_PATCHLEVEL KERNEL_SUBLEVEL KERNEL_EXTRAVERSION TEST_SEED
> >  }
> >  
> >  env_file ()
> > diff --git a/x86/Makefile.common b/x86/Makefile.common
> > index 698a48ab..fa0a50e6 100644
> > --- a/x86/Makefile.common
> > +++ b/x86/Makefile.common
> > @@ -23,6 +23,7 @@ cflatobjs += lib/x86/stack.o
> >  cflatobjs += lib/x86/fault_test.o
> >  cflatobjs += lib/x86/delay.o
> >  cflatobjs += lib/x86/pmu.o
> > +cflatobjs += lib/x86/random.o
> >  ifeq ($(CONFIG_EFI),y)
> >  cflatobjs += lib/x86/amd_sev.o
> >  cflatobjs += lib/efi.o




* Re: [kvm-unit-tests PATCH v3 15/27] svm: move some svm support functions into lib/x86/svm_lib.h
  2022-12-01 13:59   ` Emanuele Giuseppe Esposito
@ 2022-12-06 14:10     ` Maxim Levitsky
  0 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-12-06 14:10 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank

On Thu, 2022-12-01 at 14:59 +0100, Emanuele Giuseppe Esposito wrote:
> 
> On 22/11/2022 17:11, Maxim Levitsky wrote:
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > ---
> >  lib/x86/svm_lib.h | 53 +++++++++++++++++++++++++++++++++++++++++++++++
> >  x86/svm.c         | 36 +-------------------------------
> >  x86/svm.h         | 18 ----------------
> >  x86/svm_npt.c     |  1 +
> >  x86/svm_tests.c   |  1 +
> >  5 files changed, 56 insertions(+), 53 deletions(-)
> >  create mode 100644 lib/x86/svm_lib.h
> > 
> > diff --git a/lib/x86/svm_lib.h b/lib/x86/svm_lib.h
> > new file mode 100644
> > index 00000000..04910281
> > --- /dev/null
> > +++ b/lib/x86/svm_lib.h
> > @@ -0,0 +1,53 @@
> > +#ifndef SRC_LIB_X86_SVM_LIB_H_
> > +#define SRC_LIB_X86_SVM_LIB_H_
> > +
> > +#include <x86/svm.h>
> > +#include "processor.h"
> > +
> > +static inline bool npt_supported(void)
> > +{
> > +	return this_cpu_has(X86_FEATURE_NPT);
> > +}
> > +
> > +static inline bool vgif_supported(void)
> > +{
> > +	return this_cpu_has(X86_FEATURE_VGIF);
> > +}
> > +
> > +static inline bool lbrv_supported(void)
> > +{
> > +	return this_cpu_has(X86_FEATURE_LBRV);
> > +}
> > +
> > +static inline bool tsc_scale_supported(void)
> > +{
> > +	return this_cpu_has(X86_FEATURE_TSCRATEMSR);
> > +}
> > +
> > +static inline bool pause_filter_supported(void)
> > +{
> > +	return this_cpu_has(X86_FEATURE_PAUSEFILTER);
> > +}
> > +
> > +static inline bool pause_threshold_supported(void)
> > +{
> > +	return this_cpu_has(X86_FEATURE_PFTHRESHOLD);
> > +}
> > +
> > +static inline void vmmcall(void)
> > +{
> > +	asm volatile ("vmmcall" : : : "memory");
> > +}
> > +
> > +static inline void stgi(void)
> > +{
> > +	asm volatile ("stgi");
> > +}
> > +
> > +static inline void clgi(void)
> > +{
> > +	asm volatile ("clgi");
> > +}
> > +
> Not an expert at all on this, but sti() and cli() in patch 1 are in
> processor.h and stgi (g stands for global?) and clgi are in a different
> header? What about maybe moving them together?

Well, the GIF (global interrupt flag) is AMD-specific, or more precisely
SVM-specific. The same goes for VMMCALL (Intel has VMCALL instead).

Best regards,
	Maxim Levitsky
> 
> > +
> > +#endif /* SRC_LIB_X86_SVM_LIB_H_ */
> > diff --git a/x86/svm.c b/x86/svm.c
> > index 0b2a1d69..8d90a242 100644
> > --- a/x86/svm.c
> > +++ b/x86/svm.c
> > @@ -14,6 +14,7 @@
> >  #include "alloc_page.h"
> >  #include "isr.h"
> >  #include "apic.h"
> > +#include "svm_lib.h"
> >  
> >  /* for the nested page table*/
> >  u64 *pml4e;
> > @@ -54,32 +55,6 @@ bool default_supported(void)
> >  	return true;
> >  }
> >  
> > -bool vgif_supported(void)
> > -{
> > -	return this_cpu_has(X86_FEATURE_VGIF);
> > -}
> > -
> > -bool lbrv_supported(void)
> > -{
> > -	return this_cpu_has(X86_FEATURE_LBRV);
> > -}
> > -
> > -bool tsc_scale_supported(void)
> > -{
> > -	return this_cpu_has(X86_FEATURE_TSCRATEMSR);
> > -}
> > -
> > -bool pause_filter_supported(void)
> > -{
> > -	return this_cpu_has(X86_FEATURE_PAUSEFILTER);
> > -}
> > -
> > -bool pause_threshold_supported(void)
> > -{
> > -	return this_cpu_has(X86_FEATURE_PFTHRESHOLD);
> > -}
> > -
> > -
> >  void default_prepare(struct svm_test *test)
> >  {
> >  	vmcb_ident(vmcb);
> > @@ -94,10 +69,6 @@ bool default_finished(struct svm_test *test)
> >  	return true; /* one vmexit */
> >  }
> >  
> > -bool npt_supported(void)
> > -{
> > -	return this_cpu_has(X86_FEATURE_NPT);
> > -}
> >  
> >  int get_test_stage(struct svm_test *test)
> >  {
> > @@ -128,11 +99,6 @@ static void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
> >  	seg->base = base;
> >  }
> >  
> > -inline void vmmcall(void)
> > -{
> > -	asm volatile ("vmmcall" : : : "memory");
> > -}
> > -
> >  static test_guest_func guest_main;
> >  
> >  void test_set_guest(test_guest_func func)
> > diff --git a/x86/svm.h b/x86/svm.h
> > index 3cd7ce8b..7cb1b898 100644
> > --- a/x86/svm.h
> > +++ b/x86/svm.h
> > @@ -53,21 +53,14 @@ u64 *npt_get_pdpe(u64 address);
> >  u64 *npt_get_pml4e(void);
> >  bool smp_supported(void);
> >  bool default_supported(void);
> > -bool vgif_supported(void);
> > -bool lbrv_supported(void);
> > -bool tsc_scale_supported(void);
> > -bool pause_filter_supported(void);
> > -bool pause_threshold_supported(void);
> >  void default_prepare(struct svm_test *test);
> >  void default_prepare_gif_clear(struct svm_test *test);
> >  bool default_finished(struct svm_test *test);
> > -bool npt_supported(void);
> >  int get_test_stage(struct svm_test *test);
> >  void set_test_stage(struct svm_test *test, int s);
> >  void inc_test_stage(struct svm_test *test);
> >  void vmcb_ident(struct vmcb *vmcb);
> >  struct regs get_regs(void);
> > -void vmmcall(void);
> >  int __svm_vmrun(u64 rip);
> >  void __svm_bare_vmrun(void);
> >  int svm_vmrun(void);
> > @@ -75,17 +68,6 @@ void test_set_guest(test_guest_func func);
> >  
> >  extern struct vmcb *vmcb;
> >  
> > -static inline void stgi(void)
> > -{
> > -    asm volatile ("stgi");
> > -}
> > -
> > -static inline void clgi(void)
> > -{
> > -    asm volatile ("clgi");
> > -}
> > -
> > -
> >  
> >  #define SAVE_GPR_C                              \
> >          "xchg %%rbx, regs+0x8\n\t"              \
> > diff --git a/x86/svm_npt.c b/x86/svm_npt.c
> > index b791f1ac..8aac0bb6 100644
> > --- a/x86/svm_npt.c
> > +++ b/x86/svm_npt.c
> > @@ -2,6 +2,7 @@
> >  #include "vm.h"
> >  #include "alloc_page.h"
> >  #include "vmalloc.h"
> > +#include "svm_lib.h"
> >  
> >  static void *scratch_page;
> >  
> > diff --git a/x86/svm_tests.c b/x86/svm_tests.c
> > index 202e9271..f86c2fa4 100644
> > --- a/x86/svm_tests.c
> > +++ b/x86/svm_tests.c
> > @@ -12,6 +12,7 @@
> >  #include "delay.h"
> >  #include "x86/usermode.h"
> >  #include "vmalloc.h"
> > +#include "svm_lib.h"
> >  
> >  #define SVM_EXIT_MAX_DR_INTERCEPT 0x3f
> >  
> > 




* Re: [kvm-unit-tests PATCH v3 18/27] svm: move vmcb_ident to svm_lib.c
  2022-12-01 16:18   ` Emanuele Giuseppe Esposito
@ 2022-12-06 14:11     ` Maxim Levitsky
  0 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-12-06 14:11 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank

On Thu, 2022-12-01 at 17:18 +0100, Emanuele Giuseppe Esposito wrote:
> 
> On 22/11/2022 17:11, Maxim Levitsky wrote:
> > Extract vmcb_ident to svm_lib.c
> > 
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> 
> Not sure if it matters for kvm-unit-tests, but the indentation of the
> vmcb_set_seg parameters seems a little bit off.

True, I will fix.
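
I.e. align the continuation line with the opening parenthesis:

	void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
			  u64 base, u32 limit, u32 attr);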

Best regards,
	Maxim Levitsky

> 
> If that's fine:
> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> 
> > ---
> >  lib/x86/svm_lib.c | 54 +++++++++++++++++++++++++++++++++++++++++++++++
> >  lib/x86/svm_lib.h |  4 ++++
> >  x86/svm.c         | 54 -----------------------------------------------
> >  x86/svm.h         |  1 -
> >  4 files changed, 58 insertions(+), 55 deletions(-)
> > 
> > diff --git a/lib/x86/svm_lib.c b/lib/x86/svm_lib.c
> > index c7194909..aed757a1 100644
> > --- a/lib/x86/svm_lib.c
> > +++ b/lib/x86/svm_lib.c
> > @@ -109,3 +109,57 @@ bool setup_svm(void)
> >  	setup_npt();
> >  	return true;
> >  }
> > +
> > +void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
> > +			 u64 base, u32 limit, u32 attr)
> > +{
> > +	seg->selector = selector;
> > +	seg->attrib = attr;
> > +	seg->limit = limit;
> > +	seg->base = base;
> > +}
> > +
> > +void vmcb_ident(struct vmcb *vmcb)
> > +{
> > +	u64 vmcb_phys = virt_to_phys(vmcb);
> > +	struct vmcb_save_area *save = &vmcb->save;
> > +	struct vmcb_control_area *ctrl = &vmcb->control;
> > +	u32 data_seg_attr = 3 | SVM_SELECTOR_S_MASK | SVM_SELECTOR_P_MASK
> > +		| SVM_SELECTOR_DB_MASK | SVM_SELECTOR_G_MASK;
> > +	u32 code_seg_attr = 9 | SVM_SELECTOR_S_MASK | SVM_SELECTOR_P_MASK
> > +		| SVM_SELECTOR_L_MASK | SVM_SELECTOR_G_MASK;
> > +	struct descriptor_table_ptr desc_table_ptr;
> > +
> > +	memset(vmcb, 0, sizeof(*vmcb));
> > +	asm volatile ("vmsave %0" : : "a"(vmcb_phys) : "memory");
> > +	vmcb_set_seg(&save->es, read_es(), 0, -1U, data_seg_attr);
> > +	vmcb_set_seg(&save->cs, read_cs(), 0, -1U, code_seg_attr);
> > +	vmcb_set_seg(&save->ss, read_ss(), 0, -1U, data_seg_attr);
> > +	vmcb_set_seg(&save->ds, read_ds(), 0, -1U, data_seg_attr);
> > +	sgdt(&desc_table_ptr);
> > +	vmcb_set_seg(&save->gdtr, 0, desc_table_ptr.base, desc_table_ptr.limit, 0);
> > +	sidt(&desc_table_ptr);
> > +	vmcb_set_seg(&save->idtr, 0, desc_table_ptr.base, desc_table_ptr.limit, 0);
> > +	ctrl->asid = 1;
> > +	save->cpl = 0;
> > +	save->efer = rdmsr(MSR_EFER);
> > +	save->cr4 = read_cr4();
> > +	save->cr3 = read_cr3();
> > +	save->cr0 = read_cr0();
> > +	save->dr7 = read_dr7();
> > +	save->dr6 = read_dr6();
> > +	save->cr2 = read_cr2();
> > +	save->g_pat = rdmsr(MSR_IA32_CR_PAT);
> > +	save->dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> > +	ctrl->intercept = (1ULL << INTERCEPT_VMRUN) |
> > +		(1ULL << INTERCEPT_VMMCALL) |
> > +		(1ULL << INTERCEPT_SHUTDOWN);
> > +	ctrl->iopm_base_pa = virt_to_phys(io_bitmap);
> > +	ctrl->msrpm_base_pa = virt_to_phys(msr_bitmap);
> > +
> > +	if (npt_supported()) {
> > +		ctrl->nested_ctl = 1;
> > +		ctrl->nested_cr3 = (u64)pml4e;
> > +		ctrl->tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
> > +	}
> > +}
> > diff --git a/lib/x86/svm_lib.h b/lib/x86/svm_lib.h
> > index f603ff93..3bb098dc 100644
> > --- a/lib/x86/svm_lib.h
> > +++ b/lib/x86/svm_lib.h
> > @@ -49,7 +49,11 @@ static inline void clgi(void)
> >  	asm volatile ("clgi");
> >  }
> >  
> > +void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
> > +				  u64 base, u32 limit, u32 attr);
> > +
> >  bool setup_svm(void);
> > +void vmcb_ident(struct vmcb *vmcb);
> >  
> >  u64 *npt_get_pte(u64 address);
> >  u64 *npt_get_pde(u64 address);
> > diff --git a/x86/svm.c b/x86/svm.c
> > index cf246c37..5e2c3a83 100644
> > --- a/x86/svm.c
> > +++ b/x86/svm.c
> > @@ -63,15 +63,6 @@ void inc_test_stage(struct svm_test *test)
> >  	barrier();
> >  }
> >  
> > -static void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
> > -			 u64 base, u32 limit, u32 attr)
> > -{
> > -	seg->selector = selector;
> > -	seg->attrib = attr;
> > -	seg->limit = limit;
> > -	seg->base = base;
> > -}
> > -
> >  static test_guest_func guest_main;
> >  
> >  void test_set_guest(test_guest_func func)
> > @@ -85,51 +76,6 @@ static void test_thunk(struct svm_test *test)
> >  	vmmcall();
> >  }
> >  
> > -void vmcb_ident(struct vmcb *vmcb)
> > -{
> > -	u64 vmcb_phys = virt_to_phys(vmcb);
> > -	struct vmcb_save_area *save = &vmcb->save;
> > -	struct vmcb_control_area *ctrl = &vmcb->control;
> > -	u32 data_seg_attr = 3 | SVM_SELECTOR_S_MASK | SVM_SELECTOR_P_MASK
> > -		| SVM_SELECTOR_DB_MASK | SVM_SELECTOR_G_MASK;
> > -	u32 code_seg_attr = 9 | SVM_SELECTOR_S_MASK | SVM_SELECTOR_P_MASK
> > -		| SVM_SELECTOR_L_MASK | SVM_SELECTOR_G_MASK;
> > -	struct descriptor_table_ptr desc_table_ptr;
> > -
> > -	memset(vmcb, 0, sizeof(*vmcb));
> > -	asm volatile ("vmsave %0" : : "a"(vmcb_phys) : "memory");
> > -	vmcb_set_seg(&save->es, read_es(), 0, -1U, data_seg_attr);
> > -	vmcb_set_seg(&save->cs, read_cs(), 0, -1U, code_seg_attr);
> > -	vmcb_set_seg(&save->ss, read_ss(), 0, -1U, data_seg_attr);
> > -	vmcb_set_seg(&save->ds, read_ds(), 0, -1U, data_seg_attr);
> > -	sgdt(&desc_table_ptr);
> > -	vmcb_set_seg(&save->gdtr, 0, desc_table_ptr.base, desc_table_ptr.limit, 0);
> > -	sidt(&desc_table_ptr);
> > -	vmcb_set_seg(&save->idtr, 0, desc_table_ptr.base, desc_table_ptr.limit, 0);
> > -	ctrl->asid = 1;
> > -	save->cpl = 0;
> > -	save->efer = rdmsr(MSR_EFER);
> > -	save->cr4 = read_cr4();
> > -	save->cr3 = read_cr3();
> > -	save->cr0 = read_cr0();
> > -	save->dr7 = read_dr7();
> > -	save->dr6 = read_dr6();
> > -	save->cr2 = read_cr2();
> > -	save->g_pat = rdmsr(MSR_IA32_CR_PAT);
> > -	save->dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> > -	ctrl->intercept = (1ULL << INTERCEPT_VMRUN) |
> > -		(1ULL << INTERCEPT_VMMCALL) |
> > -		(1ULL << INTERCEPT_SHUTDOWN);
> > -	ctrl->iopm_base_pa = virt_to_phys(svm_get_io_bitmap());
> > -	ctrl->msrpm_base_pa = virt_to_phys(svm_get_msr_bitmap());
> > -
> > -	if (npt_supported()) {
> > -		ctrl->nested_ctl = 1;
> > -		ctrl->nested_cr3 = (u64)npt_get_pml4e();
> > -		ctrl->tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
> > -	}
> > -}
> > -
> >  struct regs regs;
> >  
> >  struct regs get_regs(void)
> > diff --git a/x86/svm.h b/x86/svm.h
> > index 67f3205d..a4aabeb2 100644
> > --- a/x86/svm.h
> > +++ b/x86/svm.h
> > @@ -55,7 +55,6 @@ bool default_finished(struct svm_test *test);
> >  int get_test_stage(struct svm_test *test);
> >  void set_test_stage(struct svm_test *test, int s);
> >  void inc_test_stage(struct svm_test *test);
> > -void vmcb_ident(struct vmcb *vmcb);
> >  struct regs get_regs(void);
> >  int __svm_vmrun(u64 rip);
> >  void __svm_bare_vmrun(void);
> > 




* Re: [kvm-unit-tests PATCH v3 01/27] x86: replace irq_{enable|disable}() with sti()/cli()
  2022-12-06 13:55     ` Maxim Levitsky
@ 2022-12-06 14:15       ` Emanuele Giuseppe Esposito
  0 siblings, 0 replies; 56+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-12-06 14:15 UTC (permalink / raw)
  To: Maxim Levitsky, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank



On 06/12/2022 14:55, Maxim Levitsky wrote:
> On Thu, 2022-12-01 at 14:46 +0100, Emanuele Giuseppe Esposito wrote:
>>
>> On 22/11/2022 17:11, Maxim Levitsky wrote:
>>> This removes a layer of indirection which is strictly
>>> speaking not needed since it's x86 code anyway.
>>>
>>> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
>>> ---
>>>  lib/x86/processor.h       | 19 +++++-----------
>>>  lib/x86/smp.c             |  2 +-
>>>  x86/apic.c                |  2 +-
>>>  x86/asyncpf.c             |  6 ++---
>>>  x86/eventinj.c            | 22 +++++++++---------
>>>  x86/hyperv_connections.c  |  2 +-
>>>  x86/hyperv_stimer.c       |  4 ++--
>>>  x86/hyperv_synic.c        |  6 ++---
>>>  x86/intel-iommu.c         |  2 +-
>>>  x86/ioapic.c              | 14 ++++++------
>>>  x86/pmu.c                 |  4 ++--
>>>  x86/svm.c                 |  4 ++--
>>>  x86/svm_tests.c           | 48 +++++++++++++++++++--------------------
>>>  x86/taskswitch2.c         |  4 ++--
>>>  x86/tscdeadline_latency.c |  4 ++--
>>>  x86/vmexit.c              | 18 +++++++--------
>>>  x86/vmx_tests.c           | 42 +++++++++++++++++-----------------
>>>  17 files changed, 98 insertions(+), 105 deletions(-)
>>>
>>> diff --git a/lib/x86/processor.h b/lib/x86/processor.h
>>> index 7a9e8c82..b89f6a7c 100644
>>> --- a/lib/x86/processor.h
>>> +++ b/lib/x86/processor.h
>>> @@ -653,11 +653,17 @@ static inline void pause(void)
>>>  	asm volatile ("pause");
>>>  }
>>>  
>>> +/* Disable interrupts as per x86 spec */
>>>  static inline void cli(void)
>>>  {
>>>  	asm volatile ("cli");
>>>  }
>>>  
>>> +/*
>>> + * Enable interrupts.
>>> + * Note that next instruction after sti will not have interrupts
>>> + * evaluated due to concept of 'interrupt shadow'
>>> + */
>>>  static inline void sti(void)
>>>  {
>>>  	asm volatile ("sti");
>>> @@ -732,19 +738,6 @@ static inline void wrtsc(u64 tsc)
>>>  	wrmsr(MSR_IA32_TSC, tsc);
>>>  }
>>>  
>>> -static inline void irq_disable(void)
>>> -{
>>> -	asm volatile("cli");
>>> -}
>>> -
>>> -/* Note that irq_enable() does not ensure an interrupt shadow due
>>> - * to the vagaries of compiler optimizations.  If you need the
>>> - * shadow, use a single asm with "sti" and the instruction after it.
>> Minor nitpick: instead of a new doc comment, why not use this same one
>> above? Looks clearer to me.
>>
>> Regardless,
>> Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
>>
> 
> I am not 100% sure what you mean.
> Note that cli() doesn't have the same interrupt shadow semantics as sti().
> 

I mean replacing

>>> +/*
>>> + * Enable interrupts.
>>> + * Note that next instruction after sti will not have interrupts
>>> + * evaluated due to concept of 'interrupt shadow'
>>> + */

with

>>> -/* Note that irq_enable() does not ensure an interrupt shadow due
>>> - * to the vagaries of compiler optimizations.  If you need the
>>> - * shadow, use a single asm with "sti" and the instruction after it.
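
(FWIW, the case the old comment is pointing at is the classic

	asm volatile("sti; hlt" ::: "memory");

pattern: the interrupt shadow guarantees that no interrupt can be delivered
between the sti and the hlt, so the hlt cannot miss its wakeup.)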

Emanuele



* Re: [kvm-unit-tests PATCH v3 25/27] svm: move nested vcpu to test context
  2022-12-02 10:22   ` Emanuele Giuseppe Esposito
@ 2022-12-06 14:29     ` Maxim Levitsky
  0 siblings, 0 replies; 56+ messages in thread
From: Maxim Levitsky @ 2022-12-06 14:29 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito, kvm
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank

On Fri, 2022-12-02 at 11:22 +0100, Emanuele Giuseppe Esposito wrote:
> 
> On 22/11/2022 17:11, Maxim Levitsky wrote:
> > This moves vcpu0 into svm_test_context and renames it to vcpu
> > to show that this is the current test vcpu
> > 
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> 
> Those are exactly the same changes that you did in patch 22. Maybe squash
> them together (reordering the patches, of course)?

I might do this, but I'd like to get some feedback on this refactoring,
which I put last because I'm afraid that I invested too much time into
it and it might be rejected.

Best regards,
	Maxim Levitsky

> 
> > ---
> >  x86/svm.c       |  26 +-
> >  x86/svm.h       |   5 +-
> >  x86/svm_npt.c   |  54 ++--
> >  x86/svm_tests.c | 753 ++++++++++++++++++++++++++++--------------------
> >  4 files changed, 486 insertions(+), 352 deletions(-)
> > 
> > diff --git a/x86/svm.c b/x86/svm.c
> > index 06d34ac4..a3279545 100644
> > --- a/x86/svm.c
> > +++ b/x86/svm.c
> > @@ -16,8 +16,6 @@
> >  #include "apic.h"
> >  #include "svm_lib.h"
> >  
> > -struct svm_vcpu vcpu0;
> > -
> >  bool smp_supported(void)
> >  {
> >  	return cpu_count() > 1;
> > @@ -78,11 +76,11 @@ static void test_thunk(struct svm_test_context *ctx)
> >  
> >  int __svm_vmrun(struct svm_test_context *ctx, u64 rip)
> >  {
> > -	vcpu0.vmcb->save.rip = (ulong)rip;
> > -	vcpu0.regs.rdi = (ulong)ctx;
> > -	vcpu0.regs.rsp = (ulong)(vcpu0.stack);
> > -	SVM_VMRUN(&vcpu0);
> > -	return vcpu0.vmcb->control.exit_code;
> > +	ctx->vcpu->vmcb->save.rip = (ulong)rip;
> > +	ctx->vcpu->regs.rdi = (ulong)ctx;
> > +	ctx->vcpu->regs.rsp = (ulong)(ctx->vcpu->stack);
> > +	SVM_VMRUN(ctx->vcpu);
> > +	return ctx->vcpu->vmcb->control.exit_code;
> >  }
> >  
> >  int svm_vmrun(struct svm_test_context *ctx)
> > @@ -92,7 +90,7 @@ int svm_vmrun(struct svm_test_context *ctx)
> >  
> >  static noinline void test_run(struct svm_test_context *ctx)
> >  {
> > -	svm_vcpu_ident(&vcpu0);
> > +	svm_vcpu_ident(ctx->vcpu);
> >  
> >  	if (ctx->test->v2) {
> >  		ctx->test->v2(ctx);
> > @@ -103,9 +101,9 @@ static noinline void test_run(struct svm_test_context *ctx)
> >  
> >  	ctx->test->prepare(ctx);
> >  	guest_main = ctx->test->guest_func;
> > -	vcpu0.vmcb->save.rip = (ulong)test_thunk;
> > -	vcpu0.regs.rsp = (ulong)(vcpu0.stack);
> > -	vcpu0.regs.rdi = (ulong)ctx;
> > +	ctx->vcpu->vmcb->save.rip = (ulong)test_thunk;
> > +	ctx->vcpu->regs.rsp = (ulong)(ctx->vcpu->stack);
> > +	ctx->vcpu->regs.rdi = (ulong)ctx;
> >  	do {
> >  
> >  		clgi();
> > @@ -113,7 +111,7 @@ static noinline void test_run(struct svm_test_context *ctx)
> >  
> >  		ctx->test->prepare_gif_clear(ctx);
> >  
> > -		__SVM_VMRUN(&vcpu0, "vmrun_rip");
> > +		__SVM_VMRUN(ctx->vcpu, "vmrun_rip");
> >  
> >  		cli();
> >  		stgi();
> > @@ -182,13 +180,15 @@ int run_svm_tests(int ac, char **av, struct svm_test *svm_tests)
> >  		return 0;
> >  
> >  	struct svm_test_context ctx;
> > +	struct svm_vcpu vcpu;
> >  
> > -	svm_vcpu_init(&vcpu0);
> > +	svm_vcpu_init(&vcpu);
> >  
> >  	for (; svm_tests[i].name != NULL; i++) {
> >  
> >  		memset(&ctx, 0, sizeof(ctx));
> >  		ctx.test = &svm_tests[i];
> > +		ctx.vcpu = &vcpu;
> >  
> >  		if (!test_wanted(svm_tests[i].name, av, ac))
> >  			continue;
> > diff --git a/x86/svm.h b/x86/svm.h
> > index 961c4de3..ec181715 100644
> > --- a/x86/svm.h
> > +++ b/x86/svm.h
> > @@ -12,6 +12,9 @@ struct svm_test_context {
> >  	ulong scratch;
> >  	bool on_vcpu_done;
> >  	struct svm_test *test;
> > +
> > +	/* TODO: test cases currently are single threaded */
> > +	struct svm_vcpu *vcpu;
> >  };
> >  
> >  struct svm_test {
> > @@ -44,6 +47,4 @@ int svm_vmrun(struct svm_test_context *ctx);
> >  void test_set_guest(test_guest_func func);
> >  
> >  
> > -extern struct svm_vcpu vcpu0;
> > -
> >  #endif
> > diff --git a/x86/svm_npt.c b/x86/svm_npt.c
> > index fc16b4be..39fd7198 100644
> > --- a/x86/svm_npt.c
> > +++ b/x86/svm_npt.c
> > @@ -27,23 +27,26 @@ static void npt_np_test(struct svm_test_context *ctx)
> >  
> >  static bool npt_np_check(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	u64 *pte = npt_get_pte((u64) scratch_page);
> >  
> >  	*pte |= 1ULL;
> >  
> > -	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
> > -	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000004ULL);
> > +	return (vmcb->control.exit_code == SVM_EXIT_NPF)
> > +	    && (vmcb->control.exit_info_1 == 0x100000004ULL);
> >  }
> >  
> >  static void npt_nx_prepare(struct svm_test_context *ctx)
> >  {
> >  	u64 *pte;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  
> >  	ctx->scratch = rdmsr(MSR_EFER);
> >  	wrmsr(MSR_EFER, ctx->scratch | EFER_NX);
> >  
> >  	/* Clear the guest's EFER.NX, it should not affect NPT behavior. */
> > -	vcpu0.vmcb->save.efer &= ~EFER_NX;
> > +	vmcb->save.efer &= ~EFER_NX;
> >  
> >  	pte = npt_get_pte((u64) null_test);
> >  
> > @@ -53,13 +56,14 @@ static void npt_nx_prepare(struct svm_test_context *ctx)
> >  static bool npt_nx_check(struct svm_test_context *ctx)
> >  {
> >  	u64 *pte = npt_get_pte((u64) null_test);
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  
> >  	wrmsr(MSR_EFER, ctx->scratch);
> >  
> >  	*pte &= ~PT64_NX_MASK;
> >  
> > -	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
> > -	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000015ULL);
> > +	return (vmcb->control.exit_code == SVM_EXIT_NPF)
> > +	    && (vmcb->control.exit_info_1 == 0x100000015ULL);
> >  }
> >  
> >  static void npt_us_prepare(struct svm_test_context *ctx)
> > @@ -80,11 +84,12 @@ static void npt_us_test(struct svm_test_context *ctx)
> >  static bool npt_us_check(struct svm_test_context *ctx)
> >  {
> >  	u64 *pte = npt_get_pte((u64) scratch_page);
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  
> >  	*pte |= (1ULL << 2);
> >  
> > -	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
> > -	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000005ULL);
> > +	return (vmcb->control.exit_code == SVM_EXIT_NPF)
> > +	    && (vmcb->control.exit_info_1 == 0x100000005ULL);
> >  }
> >  
> >  static void npt_rw_prepare(struct svm_test_context *ctx)
> > @@ -107,11 +112,12 @@ static void npt_rw_test(struct svm_test_context *ctx)
> >  static bool npt_rw_check(struct svm_test_context *ctx)
> >  {
> >  	u64 *pte = npt_get_pte(0x80000);
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  
> >  	*pte |= (1ULL << 1);
> >  
> > -	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
> > -	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000007ULL);
> > +	return (vmcb->control.exit_code == SVM_EXIT_NPF)
> > +	    && (vmcb->control.exit_info_1 == 0x100000007ULL);
> >  }
> >  
> >  static void npt_rw_pfwalk_prepare(struct svm_test_context *ctx)
> > @@ -127,12 +133,13 @@ static void npt_rw_pfwalk_prepare(struct svm_test_context *ctx)
> >  static bool npt_rw_pfwalk_check(struct svm_test_context *ctx)
> >  {
> >  	u64 *pte = npt_get_pte(read_cr3());
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  
> >  	*pte |= (1ULL << 1);
> >  
> > -	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
> > -	    && (vcpu0.vmcb->control.exit_info_1 == 0x200000007ULL)
> > -	    && (vcpu0.vmcb->control.exit_info_2 == read_cr3());
> > +	return (vmcb->control.exit_code == SVM_EXIT_NPF)
> > +	    && (vmcb->control.exit_info_1 == 0x200000007ULL)
> > +	    && (vmcb->control.exit_info_2 == read_cr3());
> >  }
> >  
> >  static void npt_l1mmio_prepare(struct svm_test_context *ctx)
> > @@ -178,11 +185,12 @@ static void npt_rw_l1mmio_test(struct svm_test_context *ctx)
> >  static bool npt_rw_l1mmio_check(struct svm_test_context *ctx)
> >  {
> >  	u64 *pte = npt_get_pte(0xfee00080);
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  
> >  	*pte |= (1ULL << 1);
> >  
> > -	return (vcpu0.vmcb->control.exit_code == SVM_EXIT_NPF)
> > -	    && (vcpu0.vmcb->control.exit_info_1 == 0x100000007ULL);
> > +	return (vmcb->control.exit_code == SVM_EXIT_NPF)
> > +	    && (vmcb->control.exit_info_1 == 0x100000007ULL);
> >  }
> >  
> >  static void basic_guest_main(struct svm_test_context *ctx)
> > @@ -193,6 +201,7 @@ static void __svm_npt_rsvd_bits_test(struct svm_test_context *ctx,
> >  				     u64 * pxe, u64 rsvd_bits, u64 efer,
> >  				     ulong cr4, u64 guest_efer, ulong guest_cr4)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  	u64 pxe_orig = *pxe;
> >  	int exit_reason;
> >  	u64 pfec;
> > @@ -200,8 +209,8 @@ static void __svm_npt_rsvd_bits_test(struct svm_test_context *ctx,
> >  	wrmsr(MSR_EFER, efer);
> >  	write_cr4(cr4);
> >  
> > -	vcpu0.vmcb->save.efer = guest_efer;
> > -	vcpu0.vmcb->save.cr4 = guest_cr4;
> > +	vmcb->save.efer = guest_efer;
> > +	vmcb->save.cr4 = guest_cr4;
> >  
> >  	*pxe |= rsvd_bits;
> >  
> > @@ -227,10 +236,10 @@ static void __svm_npt_rsvd_bits_test(struct svm_test_context *ctx,
> >  
> >  	}
> >  
> > -	report(vcpu0.vmcb->control.exit_info_1 == pfec,
> > +	report(vmcb->control.exit_info_1 == pfec,
> >  	       "Wanted PFEC = 0x%lx, got PFEC = %lx, PxE = 0x%lx.  "
> >  	       "host.NX = %u, host.SMEP = %u, guest.NX = %u, guest.SMEP = %u",
> > -	       pfec, vcpu0.vmcb->control.exit_info_1, *pxe,
> > +	       pfec, vmcb->control.exit_info_1, *pxe,
> >  	       !!(efer & EFER_NX), !!(cr4 & X86_CR4_SMEP),
> >  	       !!(guest_efer & EFER_NX), !!(guest_cr4 & X86_CR4_SMEP));
> >  
> > @@ -311,6 +320,7 @@ static void svm_npt_rsvd_bits_test(struct svm_test_context *ctx)
> >  {
> >  	u64 saved_efer, host_efer, sg_efer, guest_efer;
> >  	ulong saved_cr4, host_cr4, sg_cr4, guest_cr4;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  
> >  	if (!npt_supported()) {
> >  		report_skip("NPT not supported");
> > @@ -319,8 +329,8 @@ static void svm_npt_rsvd_bits_test(struct svm_test_context *ctx)
> >  
> >  	saved_efer = host_efer = rdmsr(MSR_EFER);
> >  	saved_cr4 = host_cr4 = read_cr4();
> > -	sg_efer = guest_efer = vcpu0.vmcb->save.efer;
> > -	sg_cr4 = guest_cr4 = vcpu0.vmcb->save.cr4;
> > +	sg_efer = guest_efer = vmcb->save.efer;
> > +	sg_cr4 = guest_cr4 = vmcb->save.cr4;
> >  
> >  	test_set_guest(basic_guest_main);
> >  
> > @@ -352,8 +362,8 @@ skip_pte_test:
> >  
> >  	wrmsr(MSR_EFER, saved_efer);
> >  	write_cr4(saved_cr4);
> > -	vcpu0.vmcb->save.efer = sg_efer;
> > -	vcpu0.vmcb->save.cr4 = sg_cr4;
> > +	vmcb->save.efer = sg_efer;
> > +	vmcb->save.cr4 = sg_cr4;
> >  }
> >  
> >  #define NPT_V1_TEST(name, prepare, guest_code, check)				\
> > diff --git a/x86/svm_tests.c b/x86/svm_tests.c
> > index 6041ac24..bd92fcee 100644
> > --- a/x86/svm_tests.c
> > +++ b/x86/svm_tests.c
> > @@ -44,33 +44,36 @@ static void null_test(struct svm_test_context *ctx)
> >  
> >  static bool null_check(struct svm_test_context *ctx)
> >  {
> > -	return vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL;
> > +	return ctx->vcpu->vmcb->control.exit_code == SVM_EXIT_VMMCALL;
> >  }
> >  
> >  static void prepare_no_vmrun_int(struct svm_test_context *ctx)
> >  {
> > -	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMRUN);
> > +	ctx->vcpu->vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMRUN);
> >  }
> >  
> >  static bool check_no_vmrun_int(struct svm_test_context *ctx)
> >  {
> > -	return vcpu0.vmcb->control.exit_code == SVM_EXIT_ERR;
> > +	return ctx->vcpu->vmcb->control.exit_code == SVM_EXIT_ERR;
> >  }
> >  
> >  static void test_vmrun(struct svm_test_context *ctx)
> >  {
> > -	asm volatile ("vmrun %0" : : "a"(virt_to_phys(vcpu0.vmcb)));
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	asm volatile ("vmrun %0" : : "a"(virt_to_phys(vmcb)));
> >  }
> >  
> >  static bool check_vmrun(struct svm_test_context *ctx)
> >  {
> > -	return vcpu0.vmcb->control.exit_code == SVM_EXIT_VMRUN;
> > +	return ctx->vcpu->vmcb->control.exit_code == SVM_EXIT_VMRUN;
> >  }
> >  
> >  static void prepare_rsm_intercept(struct svm_test_context *ctx)
> >  {
> > -	vcpu0.vmcb->control.intercept |= 1 << INTERCEPT_RSM;
> > -	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << UD_VECTOR);
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +	vmcb->control.intercept |= 1 << INTERCEPT_RSM;
> > +	vmcb->control.intercept_exceptions |= (1ULL << UD_VECTOR);
> >  }
> >  
> >  static void test_rsm_intercept(struct svm_test_context *ctx)
> > @@ -85,24 +88,25 @@ static bool check_rsm_intercept(struct svm_test_context *ctx)
> >  
> >  static bool finished_rsm_intercept(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  	switch (get_test_stage(ctx)) {
> >  	case 0:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_RSM) {
> > +		if (vmcb->control.exit_code != SVM_EXIT_RSM) {
> >  			report_fail("VMEXIT not due to rsm. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> > -		vcpu0.vmcb->control.intercept &= ~(1 << INTERCEPT_RSM);
> > +		vmcb->control.intercept &= ~(1 << INTERCEPT_RSM);
> >  		inc_test_stage(ctx);
> >  		break;
> >  
> >  	case 1:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + UD_VECTOR) {
> > +		if (vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + UD_VECTOR) {
> >  			report_fail("VMEXIT not due to #UD. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> > -		vcpu0.vmcb->save.rip += 2;
> > +		vmcb->save.rip += 2;
> >  		inc_test_stage(ctx);
> >  		break;
> >  
> > @@ -114,7 +118,9 @@ static bool finished_rsm_intercept(struct svm_test_context *ctx)
> >  
> >  static void prepare_cr3_intercept(struct svm_test_context *ctx)
> >  {
> > -	vcpu0.vmcb->control.intercept_cr_read |= 1 << 3;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	vmcb->control.intercept_cr_read |= 1 << 3;
> >  }
> >  
> >  static void test_cr3_intercept(struct svm_test_context *ctx)
> > @@ -124,7 +130,8 @@ static void test_cr3_intercept(struct svm_test_context *ctx)
> >  
> >  static bool check_cr3_intercept(struct svm_test_context *ctx)
> >  {
> > -	return vcpu0.vmcb->control.exit_code == SVM_EXIT_READ_CR3;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +	return vmcb->control.exit_code == SVM_EXIT_READ_CR3;
> >  }
> >  
> >  static bool check_cr3_nointercept(struct svm_test_context *ctx)
> > @@ -147,7 +154,9 @@ static void corrupt_cr3_intercept_bypass(void *_ctx)
> >  
> >  static void prepare_cr3_intercept_bypass(struct svm_test_context *ctx)
> >  {
> > -	vcpu0.vmcb->control.intercept_cr_read |= 1 << 3;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	vmcb->control.intercept_cr_read |= 1 << 3;
> >  	on_cpu_async(1, corrupt_cr3_intercept_bypass, ctx);
> >  }
> >  
> > @@ -166,8 +175,10 @@ static void test_cr3_intercept_bypass(struct svm_test_context *ctx)
> >  
> >  static void prepare_dr_intercept(struct svm_test_context *ctx)
> >  {
> > -	vcpu0.vmcb->control.intercept_dr_read = 0xff;
> > -	vcpu0.vmcb->control.intercept_dr_write = 0xff;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	vmcb->control.intercept_dr_read = 0xff;
> > +	vmcb->control.intercept_dr_write = 0xff;
> >  }
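
0xff in both masks arms read and write intercepts for all eight debug
registers. The decode in dr_intercept_finished() below then relies on the
DR exit codes being laid out as a block of reads followed by a block of
writes starting at SVM_EXIT_READ_DR0, i.e. effectively:

	ulong n = vmcb->control.exit_code - SVM_EXIT_READ_DR0;
	int dr = n % 16;		/* which debug register */
	bool is_write = n >= 16;	/* reads come first, then writes */
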
> >  
> >  static void test_dr_intercept(struct svm_test_context *ctx)
> > @@ -251,7 +262,8 @@ static void test_dr_intercept(struct svm_test_context *ctx)
> >  
> >  static bool dr_intercept_finished(struct svm_test_context *ctx)
> >  {
> > -	ulong n = (vcpu0.vmcb->control.exit_code - SVM_EXIT_READ_DR0);
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +	ulong n = (vmcb->control.exit_code - SVM_EXIT_READ_DR0);
> >  
> >  	/* Only expect DR intercepts */
> >  	if (n > (SVM_EXIT_MAX_DR_INTERCEPT - SVM_EXIT_READ_DR0))
> > @@ -267,7 +279,7 @@ static bool dr_intercept_finished(struct svm_test_context *ctx)
> >  	ctx->scratch = (n % 16);
> >  
> >  	/* Jump over MOV instruction */
> > -	vcpu0.vmcb->save.rip += 3;
> > +	vmcb->save.rip += 3;
> >  
> >  	return false;
> >  }
> > @@ -284,7 +296,8 @@ static bool next_rip_supported(void)
> >  
> >  static void prepare_next_rip(struct svm_test_context *ctx)
> >  {
> > -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_RDTSC);
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +	vmcb->control.intercept |= (1ULL << INTERCEPT_RDTSC);
> >  }
> >  
> >  
> > @@ -299,15 +312,17 @@ static bool check_next_rip(struct svm_test_context *ctx)
> >  {
> >  	extern char exp_next_rip;
> >  	unsigned long address = (unsigned long)&exp_next_rip;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  
> > -	return address == vcpu0.vmcb->control.next_rip;
> > +	return address == vmcb->control.next_rip;
> >  }
> >  
> >  
> >  static void prepare_msr_intercept(struct svm_test_context *ctx)
> >  {
> > -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT);
> > -	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +	vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT);
> > +	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
> >  	memset(svm_get_msr_bitmap(), 0xff, MSR_BITMAP_SIZE);
> >  }
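
The memset works because the MSR permission map holds two consecutive bits
per MSR (read intercept, then write intercept), so all-ones traps every
access in the APM-defined MSR ranges. If a later test needs finer control,
a hypothetical helper along these lines would do (range_offset selects
which MSR range the bit pair lives in; untested sketch, not part of this
series):

static void msrpm_intercept(u8 *msrpm, u32 range_offset, u32 msr)
{
	u32 bit = (msr & 0x1fff) * 2;	/* 2 bits per MSR: read, then write */

	msrpm[range_offset + bit / 8] |= 3u << (bit % 8);
}
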
> >  
> > @@ -359,12 +374,13 @@ static void test_msr_intercept(struct svm_test_context *ctx)
> >  
> >  static bool msr_intercept_finished(struct svm_test_context *ctx)
> >  {
> > -	u32 exit_code = vcpu0.vmcb->control.exit_code;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +	u32 exit_code = vmcb->control.exit_code;
> >  	u64 exit_info_1;
> >  	u8 *opcode;
> >  
> >  	if (exit_code == SVM_EXIT_MSR) {
> > -		exit_info_1 = vcpu0.vmcb->control.exit_info_1;
> > +		exit_info_1 = vmcb->control.exit_info_1;
> >  	} else {
> >  		/*
> >  		 * If #GP exception occurs instead, check that it was
> > @@ -374,7 +390,7 @@ static bool msr_intercept_finished(struct svm_test_context *ctx)
> >  		if (exit_code != (SVM_EXIT_EXCP_BASE + GP_VECTOR))
> >  			return true;
> >  
> > -		opcode = (u8 *)vcpu0.vmcb->save.rip;
> > +		opcode = (u8 *)vmcb->save.rip;
> >  		if (opcode[0] != 0x0f)
> >  			return true;
> >  
> > @@ -394,11 +410,11 @@ static bool msr_intercept_finished(struct svm_test_context *ctx)
> >  		 * RCX holds the MSR index.
> >  		 */
> >  		printf("%s 0x%lx #GP exception\n",
> > -		       exit_info_1 ? "WRMSR" : "RDMSR", vcpu0.regs.rcx);
> > +		       exit_info_1 ? "WRMSR" : "RDMSR", ctx->vcpu->regs.rcx);
> >  	}
> >  
> >  	/* Jump over RDMSR/WRMSR instruction */
> > -	vcpu0.vmcb->save.rip += 2;
> > +	vmcb->save.rip += 2;
> >  
> >  	/*
> >  	 * Test whether the intercept was for RDMSR/WRMSR.
> > @@ -410,9 +426,9 @@ static bool msr_intercept_finished(struct svm_test_context *ctx)
> >  	 */
> >  	if (exit_info_1)
> >  		ctx->scratch =
> > -			((vcpu0.regs.rdx << 32) | (vcpu0.regs.rax & 0xffffffff));
> > +			((ctx->vcpu->regs.rdx << 32) | (ctx->vcpu->regs.rax & 0xffffffff));
> >  	else
> > -		ctx->scratch = vcpu0.regs.rcx;
> > +		ctx->scratch = ctx->vcpu->regs.rcx;
> >  
> >  	return false;
> >  }
> > @@ -425,7 +441,9 @@ static bool check_msr_intercept(struct svm_test_context *ctx)
> >  
> >  static void prepare_mode_switch(struct svm_test_context *ctx)
> >  {
> > -	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR)
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR)
> >  		|  (1ULL << UD_VECTOR)
> >  		|  (1ULL << DF_VECTOR)
> >  		|  (1ULL << PF_VECTOR);
> > @@ -490,17 +508,18 @@ static void test_mode_switch(struct svm_test_context *ctx)
> >  static bool mode_switch_finished(struct svm_test_context *ctx)
> >  {
> >  	u64 cr0, cr4, efer;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  
> > -	cr0  = vcpu0.vmcb->save.cr0;
> > -	cr4  = vcpu0.vmcb->save.cr4;
> > -	efer = vcpu0.vmcb->save.efer;
> > +	cr0  = vmcb->save.cr0;
> > +	cr4  = vmcb->save.cr4;
> > +	efer = vmcb->save.efer;
> >  
> >  	/* Only expect VMMCALL intercepts */
> > -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL)
> > +	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL)
> >  		return true;
> >  
> >  	/* Jump over VMMCALL instruction */
> > -	vcpu0.vmcb->save.rip += 3;
> > +	vmcb->save.rip += 3;
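
Not this patch's fault, and it applies to every rip += 3 in the file: the
magic 3 is the length of the VMMCALL encoding (0f 01 d9). Since the series
rewrites all of these lines anyway, a named constant would make the idiom
self-documenting; hypothetical, not in the series:

#define VMMCALL_INSN_LEN	3

	vmcb->save.rip += VMMCALL_INSN_LEN;
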
> >  
> >  	/* Do sanity checks */
> >  	switch (ctx->scratch) {
> > @@ -534,8 +553,9 @@ static bool check_mode_switch(struct svm_test_context *ctx)
> >  static void prepare_ioio(struct svm_test_context *ctx)
> >  {
> >  	u8 *io_bitmap = svm_get_io_bitmap();
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  
> > -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_IOIO_PROT);
> > +	vmcb->control.intercept |= (1ULL << INTERCEPT_IOIO_PROT);
> >  	ctx->scratch = 0;
> >  	memset(io_bitmap, 0, 8192);
> >  	io_bitmap[8192] = 0xFF;
> > @@ -617,19 +637,20 @@ static bool ioio_finished(struct svm_test_context *ctx)
> >  {
> >  	unsigned port, size;
> >  	u8 *io_bitmap = svm_get_io_bitmap();
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  
> >  	/* Only expect IOIO intercepts */
> > -	if (vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL)
> > +	if (vmcb->control.exit_code == SVM_EXIT_VMMCALL)
> >  		return true;
> >  
> > -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_IOIO)
> > +	if (vmcb->control.exit_code != SVM_EXIT_IOIO)
> >  		return true;
> >  
> >  	/* one step forward */
> >  	ctx->scratch += 1;
> >  
> > -	port = vcpu0.vmcb->control.exit_info_1 >> 16;
> > -	size = (vcpu0.vmcb->control.exit_info_1 >> SVM_IOIO_SIZE_SHIFT) & 7;
> > +	port = vmcb->control.exit_info_1 >> 16;
> > +	size = (vmcb->control.exit_info_1 >> SVM_IOIO_SIZE_SHIFT) & 7;
> >  
> >  	while (size--) {
> >  		io_bitmap[port / 8] &= ~(1 << (port & 7));
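
Decoding note for readers: exit_info_1 for SVM_EXIT_IOIO carries the port
number in bits 31:16 and a one-hot size field (SZ8/SZ16/SZ32) in bits 6:4,
so the `& 7` after the shift yields the access width in bytes (1, 2 or 4)
and the loop clears one permission bit per byte the access touches. A
comment above the decode would help:

	/* exit_info_1: [31:16] port, [6:4] one-hot access size in bytes */
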
> > @@ -649,7 +670,9 @@ static bool check_ioio(struct svm_test_context *ctx)
> >  
> >  static void prepare_asid_zero(struct svm_test_context *ctx)
> >  {
> > -	vcpu0.vmcb->control.asid = 0;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	vmcb->control.asid = 0;
> >  }
> >  
> >  static void test_asid_zero(struct svm_test_context *ctx)
> > @@ -659,12 +682,16 @@ static void test_asid_zero(struct svm_test_context *ctx)
> >  
> >  static bool check_asid_zero(struct svm_test_context *ctx)
> >  {
> > -	return vcpu0.vmcb->control.exit_code == SVM_EXIT_ERR;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	return vmcb->control.exit_code == SVM_EXIT_ERR;
> >  }
> >  
> >  static void sel_cr0_bug_prepare(struct svm_test_context *ctx)
> >  {
> > -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
> >  }
> >  
> >  static bool sel_cr0_bug_finished(struct svm_test_context *ctx)
> > @@ -692,7 +719,9 @@ static void sel_cr0_bug_test(struct svm_test_context *ctx)
> >  
> >  static bool sel_cr0_bug_check(struct svm_test_context *ctx)
> >  {
> > -	return vcpu0.vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
> >  }
> >  
> >  #define TSC_ADJUST_VALUE    (1ll << 32)
> > @@ -706,7 +735,9 @@ static bool tsc_adjust_supported(void)
> >  
> >  static void tsc_adjust_prepare(struct svm_test_context *ctx)
> >  {
> > -	vcpu0.vmcb->control.tsc_offset = TSC_OFFSET_VALUE;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	vmcb->control.tsc_offset = TSC_OFFSET_VALUE;
> >  
> >  	wrmsr(MSR_IA32_TSC_ADJUST, -TSC_ADJUST_VALUE);
> >  	int64_t adjust = rdmsr(MSR_IA32_TSC_ADJUST);
> > @@ -758,17 +789,18 @@ static void svm_tsc_scale_run_testcase(struct svm_test_context *ctx,
> >  				       double tsc_scale, u64 tsc_offset)
> >  {
> >  	u64 start_tsc, actual_duration;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  
> >  	guest_tsc_delay_value = (duration << TSC_SHIFT) * tsc_scale;
> >  
> >  	test_set_guest(svm_tsc_scale_guest);
> > -	vcpu0.vmcb->control.tsc_offset = tsc_offset;
> > +	vmcb->control.tsc_offset = tsc_offset;
> >  	wrmsr(MSR_AMD64_TSC_RATIO, (u64)(tsc_scale * (1ULL << 32)));
> >  
> >  	start_tsc = rdtsc();
> >  
> >  	if (svm_vmrun(ctx) != SVM_EXIT_VMMCALL)
> > -		report_fail("unexpected vm exit code 0x%x", vcpu0.vmcb->control.exit_code);
> > +		report_fail("unexpected vm exit code 0x%x", vmcb->control.exit_code);
> >  
> >  	actual_duration = (rdtsc() - start_tsc) >> TSC_SHIFT;
> >  
> > @@ -839,6 +871,7 @@ start:
> >  static bool latency_finished(struct svm_test_context *ctx)
> >  {
> >  	u64 cycles;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  
> >  	tsc_end = rdtsc();
> >  
> > @@ -852,7 +885,7 @@ static bool latency_finished(struct svm_test_context *ctx)
> >  
> >  	vmexit_sum += cycles;
> >  
> > -	vcpu0.vmcb->save.rip += 3;
> > +	vmcb->save.rip += 3;
> >  
> >  	runs -= 1;
> >  
> > @@ -863,7 +896,10 @@ static bool latency_finished(struct svm_test_context *ctx)
> >  
> >  static bool latency_finished_clean(struct svm_test_context *ctx)
> >  {
> > -	vcpu0.vmcb->control.clean = VMCB_CLEAN_ALL;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	vmcb->control.clean = VMCB_CLEAN_ALL;
> > +
> >  	return latency_finished(ctx);
> >  }
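
Might deserve a comment: VMCB_CLEAN_ALL tells the CPU it may keep using
its cached copy of the VMCB, so this variant measures the best-case VMRUN
latency, while plain latency_finished() (clean bits presumably left at
their default 0) measures the full-state-reload case. The entire
difference between the two benchmarks is effectively:

	vmcb->control.clean = 0;		/* force full VMCB reload */
	vmcb->control.clean = VMCB_CLEAN_ALL;	/* permit cached-state reuse */
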
> >  
> > @@ -886,7 +922,9 @@ static void lat_svm_insn_prepare(struct svm_test_context *ctx)
> >  
> >  static bool lat_svm_insn_finished(struct svm_test_context *ctx)
> >  {
> > -	u64 vmcb_phys = virt_to_phys(vcpu0.vmcb);
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	u64 vmcb_phys = virt_to_phys(vmcb);
> >  	u64 cycles;
> >  
> >  	for ( ; runs != 0; runs--) {
> > @@ -957,6 +995,7 @@ static void pending_event_ipi_isr(isr_regs_t *regs)
> >  static void pending_event_prepare(struct svm_test_context *ctx)
> >  {
> >  	int ipi_vector = 0xf1;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  
> >  	pending_event_ipi_fired = false;
> >  
> > @@ -964,8 +1003,8 @@ static void pending_event_prepare(struct svm_test_context *ctx)
> >  
> >  	pending_event_guest_run = false;
> >  
> > -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
> > -	vcpu0.vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
> > +	vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
> > +	vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
> >  
> >  	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL |
> >  		       APIC_DM_FIXED | ipi_vector, 0);
> > @@ -980,16 +1019,18 @@ static void pending_event_test(struct svm_test_context *ctx)
> >  
> >  static bool pending_event_finished(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	switch (get_test_stage(ctx)) {
> >  	case 0:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_INTR) {
> > +		if (vmcb->control.exit_code != SVM_EXIT_INTR) {
> >  			report_fail("VMEXIT not due to pending interrupt. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> >  
> > -		vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
> > -		vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> > +		vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
> > +		vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> >  
> >  		if (pending_event_guest_run) {
> >  			report_fail("Guest ran before host received IPI\n");
> > @@ -1067,19 +1108,21 @@ static void pending_event_cli_test(struct svm_test_context *ctx)
> >  
> >  static bool pending_event_cli_finished(struct svm_test_context *ctx)
> >  {
> > -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> >  		report_fail("VM_EXIT return to host is not EXIT_VMMCALL exit reason 0x%x",
> > -			    vcpu0.vmcb->control.exit_code);
> > +			    vmcb->control.exit_code);
> >  		return true;
> >  	}
> >  
> >  	switch (get_test_stage(ctx)) {
> >  	case 0:
> > -		vcpu0.vmcb->save.rip += 3;
> > +		vmcb->save.rip += 3;
> >  
> >  		pending_event_ipi_fired = false;
> >  
> > -		vcpu0.vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
> > +		vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
> >  
> >  		/* Now entering again with VINTR_MASKING=1.  */
> >  		apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL |
> > @@ -1206,32 +1249,34 @@ static void interrupt_test(struct svm_test_context *ctx)
> >  
> >  static bool interrupt_finished(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	switch (get_test_stage(ctx)) {
> >  	case 0:
> >  	case 2:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> >  			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> > -		vcpu0.vmcb->save.rip += 3;
> > +		vmcb->save.rip += 3;
> >  
> > -		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
> > -		vcpu0.vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
> > +		vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
> > +		vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
> >  		break;
> >  
> >  	case 1:
> >  	case 3:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_INTR) {
> > +		if (vmcb->control.exit_code != SVM_EXIT_INTR) {
> >  			report_fail("VMEXIT not due to intr intercept. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> >  
> >  		sti_nop_cli();
> >  
> > -		vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
> > -		vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> > +		vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
> > +		vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> >  		break;
> >  
> >  	case 4:
> > @@ -1289,22 +1334,24 @@ static void nmi_test(struct svm_test_context *ctx)
> >  
> >  static bool nmi_finished(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	switch (get_test_stage(ctx)) {
> >  	case 0:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> >  			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> > -		vcpu0.vmcb->save.rip += 3;
> > +		vmcb->save.rip += 3;
> >  
> > -		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
> > +		vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
> >  		break;
> >  
> >  	case 1:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_NMI) {
> > +		if (vmcb->control.exit_code != SVM_EXIT_NMI) {
> >  			report_fail("VMEXIT not due to NMI intercept. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> >  
> > @@ -1391,22 +1438,24 @@ static void nmi_hlt_test(struct svm_test_context *ctx)
> >  
> >  static bool nmi_hlt_finished(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	switch (get_test_stage(ctx)) {
> >  	case 1:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> >  			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> > -		vcpu0.vmcb->save.rip += 3;
> > +		vmcb->save.rip += 3;
> >  
> > -		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
> > +		vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
> >  		break;
> >  
> >  	case 2:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_NMI) {
> > +		if (vmcb->control.exit_code != SVM_EXIT_NMI) {
> >  			report_fail("VMEXIT not due to NMI intercept. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> >  
> > @@ -1449,40 +1498,42 @@ static void exc_inject_test(struct svm_test_context *ctx)
> >  
> >  static bool exc_inject_finished(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	switch (get_test_stage(ctx)) {
> >  	case 0:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> >  			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> > -		vcpu0.vmcb->save.rip += 3;
> > -		vcpu0.vmcb->control.event_inj = NMI_VECTOR |
> > +		vmcb->save.rip += 3;
> > +		vmcb->control.event_inj = NMI_VECTOR |
> >  						SVM_EVTINJ_TYPE_EXEPT |
> >  						SVM_EVTINJ_VALID;
> >  		break;
> >  
> >  	case 1:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_ERR) {
> > +		if (vmcb->control.exit_code != SVM_EXIT_ERR) {
> >  			report_fail("VMEXIT not due to error. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> >  		report(count_exc == 0, "exception with vector 2 not injected");
> > -		vcpu0.vmcb->control.event_inj = DE_VECTOR |
> > +		vmcb->control.event_inj = DE_VECTOR |
> >  						SVM_EVTINJ_TYPE_EXEPT |
> >  						SVM_EVTINJ_VALID;
> >  		break;
> >  
> >  	case 2:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> >  			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> > -		vcpu0.vmcb->save.rip += 3;
> > +		vmcb->save.rip += 3;
> >  		report(count_exc == 1, "divide overflow exception injected");
> > -		report(!(vcpu0.vmcb->control.event_inj & SVM_EVTINJ_VALID),
> > +		report(!(vmcb->control.event_inj & SVM_EVTINJ_VALID),
> >  		       "eventinj.VALID cleared");
> >  		break;
> >  
> > @@ -1509,11 +1560,13 @@ static void virq_isr(isr_regs_t *regs)
> >  
> >  static void virq_inject_prepare(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	handle_irq(0xf1, virq_isr);
> >  
> > -	vcpu0.vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
> > +	vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
> >  		(0x0f << V_INTR_PRIO_SHIFT); // Set to the highest priority
> > -	vcpu0.vmcb->control.int_vector = 0xf1;
> > +	vmcb->control.int_vector = 0xf1;
> >  	virq_fired = false;
> >  	set_test_stage(ctx, 0);
> >  }
> > @@ -1563,66 +1616,68 @@ static void virq_inject_test(struct svm_test_context *ctx)
> >  
> >  static bool virq_inject_finished(struct svm_test_context *ctx)
> >  {
> > -	vcpu0.vmcb->save.rip += 3;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	vmcb->save.rip += 3;
> >  
> >  	switch (get_test_stage(ctx)) {
> >  	case 0:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> >  			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> > -		if (vcpu0.vmcb->control.int_ctl & V_IRQ_MASK) {
> > +		if (vmcb->control.int_ctl & V_IRQ_MASK) {
> >  			report_fail("V_IRQ not cleared on VMEXIT after firing");
> >  			return true;
> >  		}
> >  		virq_fired = false;
> > -		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
> > -		vcpu0.vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
> > +		vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
> > +		vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
> >  			(0x0f << V_INTR_PRIO_SHIFT);
> >  		break;
> >  
> >  	case 1:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VINTR) {
> > +		if (vmcb->control.exit_code != SVM_EXIT_VINTR) {
> >  			report_fail("VMEXIT not due to vintr. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> >  		if (virq_fired) {
> >  			report_fail("V_IRQ fired before SVM_EXIT_VINTR");
> >  			return true;
> >  		}
> > -		vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VINTR);
> > +		vmcb->control.intercept &= ~(1ULL << INTERCEPT_VINTR);
> >  		break;
> >  
> >  	case 2:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> >  			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> >  		virq_fired = false;
> >  		// Set irq to lower priority
> > -		vcpu0.vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
> > +		vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
> >  			(0x08 << V_INTR_PRIO_SHIFT);
> >  		// Raise guest TPR
> > -		vcpu0.vmcb->control.int_ctl |= 0x0a & V_TPR_MASK;
> > +		vmcb->control.int_ctl |= 0x0a & V_TPR_MASK;
> >  		break;
> >  
> >  	case 3:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> >  			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> > -		vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
> > +		vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
> >  		break;
> >  
> >  	case 4:
> >  		// INTERCEPT_VINTR should be ignored because V_INTR_PRIO < V_TPR
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> >  			report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> >  		break;
> > @@ -1673,10 +1728,12 @@ static void reg_corruption_isr(isr_regs_t *regs)
> >  
> >  static void reg_corruption_prepare(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	set_test_stage(ctx, 0);
> >  
> > -	vcpu0.vmcb->control.int_ctl = V_INTR_MASKING_MASK;
> > -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
> > +	vmcb->control.int_ctl = V_INTR_MASKING_MASK;
> > +	vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
> >  
> >  	handle_irq(TIMER_VECTOR, reg_corruption_isr);
> >  
> > @@ -1705,6 +1762,8 @@ static void reg_corruption_test(struct svm_test_context *ctx)
> >  
> >  static bool reg_corruption_finished(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	if (isr_cnt == 10000) {
> >  		report_pass("No RIP corruption detected after %d timer interrupts",
> >  			    isr_cnt);
> > @@ -1712,9 +1771,9 @@ static bool reg_corruption_finished(struct svm_test_context *ctx)
> >  		goto cleanup;
> >  	}
> >  
> > -	if (vcpu0.vmcb->control.exit_code == SVM_EXIT_INTR) {
> > +	if (vmcb->control.exit_code == SVM_EXIT_INTR) {
> >  
> > -		void *guest_rip = (void *)vcpu0.vmcb->save.rip;
> > +		void *guest_rip = (void *)vmcb->save.rip;
> >  
> >  		sti_nop_cli();
> >  
> > @@ -1782,8 +1841,10 @@ static volatile bool init_intercept;
> >  
> >  static void init_intercept_prepare(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	init_intercept = false;
> > -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_INIT);
> > +	vmcb->control.intercept |= (1ULL << INTERCEPT_INIT);
> >  }
> >  
> >  static void init_intercept_test(struct svm_test_context *ctx)
> > @@ -1793,11 +1854,13 @@ static void init_intercept_test(struct svm_test_context *ctx)
> >  
> >  static bool init_intercept_finished(struct svm_test_context *ctx)
> >  {
> > -	vcpu0.vmcb->save.rip += 3;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	vmcb->save.rip += 3;
> >  
> > -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_INIT) {
> > +	if (vmcb->control.exit_code != SVM_EXIT_INIT) {
> >  		report_fail("VMEXIT not due to init intercept. Exit reason 0x%x",
> > -			    vcpu0.vmcb->control.exit_code);
> > +			    vmcb->control.exit_code);
> >  
> >  		return true;
> >  	}
> > @@ -1894,14 +1957,16 @@ static void host_rflags_test(struct svm_test_context *ctx)
> >  
> >  static bool host_rflags_finished(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	switch (get_test_stage(ctx)) {
> >  	case 0:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> >  			report_fail("Unexpected VMEXIT. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> > -		vcpu0.vmcb->save.rip += 3;
> > +		vmcb->save.rip += 3;
> >  		/*
> >  		 * Setting host EFLAGS.TF not immediately before VMRUN causes a
> >  		 * #DB trap before the first guest instruction is executed
> > @@ -1909,14 +1974,14 @@ static bool host_rflags_finished(struct svm_test_context *ctx)
> >  		host_rflags_set_tf = true;
> >  		break;
> >  	case 1:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
> > +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
> >  		    host_rflags_guest_main_flag != 1) {
> >  			report_fail("Unexpected VMEXIT or #DB handler"
> >  				    " invoked before guest main. Exit reason 0x%x",
> > -				    vcpu0.vmcb->control.exit_code);
> > +				    vmcb->control.exit_code);
> >  			return true;
> >  		}
> > -		vcpu0.vmcb->save.rip += 3;
> > +		vmcb->save.rip += 3;
> >  		/*
> >  		 * Setting host EFLAGS.TF immediately before VMRUN causes a #DB
> >  		 * trap after VMRUN completes on the host side (i.e., after
> > @@ -1925,21 +1990,21 @@ static bool host_rflags_finished(struct svm_test_context *ctx)
> >  		host_rflags_ss_on_vmrun = true;
> >  		break;
> >  	case 2:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
> > +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
> >  		    rip_detected != (u64)&vmrun_rip + 3) {
> >  			report_fail("Unexpected VMEXIT or RIP mismatch."
> >  				    " Exit reason 0x%x, RIP actual: %lx, RIP expected: "
> > -				    "%lx", vcpu0.vmcb->control.exit_code,
> > +				    "%lx", vmcb->control.exit_code,
> >  				    (u64)&vmrun_rip + 3, rip_detected);
> >  			return true;
> >  		}
> >  		host_rflags_set_rf = true;
> >  		host_rflags_guest_main_flag = 0;
> >  		host_rflags_vmrun_reached = false;
> > -		vcpu0.vmcb->save.rip += 3;
> > +		vmcb->save.rip += 3;
> >  		break;
> >  	case 3:
> > -		if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
> > +		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
> >  		    rip_detected != (u64)&vmrun_rip ||
> >  		    host_rflags_guest_main_flag != 1 ||
> >  		    host_rflags_db_handler_flag > 1 ||
> > @@ -1947,13 +2012,13 @@ static bool host_rflags_finished(struct svm_test_context *ctx)
> >  			report_fail("Unexpected VMEXIT or RIP mismatch or "
> >  				    "EFLAGS.RF not cleared."
> >  				    " Exit reason 0x%x, RIP actual: %lx, RIP expected: "
> > -				    "%lx", vcpu0.vmcb->control.exit_code,
> > +				    "%lx", vmcb->control.exit_code,
> >  				    (u64)&vmrun_rip, rip_detected);
> >  			return true;
> >  		}
> >  		host_rflags_set_tf = false;
> >  		host_rflags_set_rf = false;
> > -		vcpu0.vmcb->save.rip += 3;
> > +		vmcb->save.rip += 3;
> >  		break;
> >  	default:
> >  		return true;
> > @@ -1986,6 +2051,8 @@ static void svm_cr4_osxsave_test_guest(struct svm_test_context *ctx)
> >  
> >  static void svm_cr4_osxsave_test(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	if (!this_cpu_has(X86_FEATURE_XSAVE)) {
> >  		report_skip("XSAVE not detected");
> >  		return;
> > @@ -1995,7 +2062,7 @@ static void svm_cr4_osxsave_test(struct svm_test_context *ctx)
> >  		unsigned long cr4 = read_cr4() | X86_CR4_OSXSAVE;
> >  
> >  		write_cr4(cr4);
> > -		vcpu0.vmcb->save.cr4 = cr4;
> > +		vmcb->save.cr4 = cr4;
> >  	}
> >  
> >  	report(this_cpu_has(X86_FEATURE_OSXSAVE), "CPUID.01H:ECX.XSAVE set before VMRUN");
> > @@ -2035,6 +2102,7 @@ static void basic_guest_main(struct svm_test_context *ctx)
> >  	u64 tmp, mask;							\
> >  	u32 r;								\
> >  	int i;								\
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;				\
> >  									\
> >  	for (i = start; i <= end; i = i + inc) {			\
> >  		mask = 1ull << i;					\
> > @@ -2043,13 +2111,13 @@ static void basic_guest_main(struct svm_test_context *ctx)
> >  		tmp = val | mask;					\
> >  		switch (cr) {						\
> >  		case 0:							\
> > -			vcpu0.vmcb->save.cr0 = tmp;				\
> > +			vmcb->save.cr0 = tmp;				\
> >  			break;						\
> >  		case 3:							\
> > -			vcpu0.vmcb->save.cr3 = tmp;				\
> > +			vmcb->save.cr3 = tmp;				\
> >  			break;						\
> >  		case 4:							\
> > -			vcpu0.vmcb->save.cr4 = tmp;				\
> > +			vmcb->save.cr4 = tmp;				\
> >  		}							\
> >  		r = svm_vmrun(ctx);					\
> >  		report(r == exit_code, "Test CR%d %s%d:%d: %lx, wanted exit 0x%x, got 0x%x", \
> > @@ -2062,39 +2130,40 @@ static void test_efer(struct svm_test_context *ctx)
> >  	/*
> >  	 * Un-setting EFER.SVME is illegal
> >  	 */
> > -	u64 efer_saved = vcpu0.vmcb->save.efer;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +	u64 efer_saved = vmcb->save.efer;
> >  	u64 efer = efer_saved;
> >  
> >  	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "EFER.SVME: %lx", efer);
> >  	efer &= ~EFER_SVME;
> > -	vcpu0.vmcb->save.efer = efer;
> > +	vmcb->save.efer = efer;
> >  	report (svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.SVME: %lx", efer);
> > -	vcpu0.vmcb->save.efer = efer_saved;
> > +	vmcb->save.efer = efer_saved;
> >  
> >  	/*
> >  	 * EFER MBZ bits: 63:16, 9
> >  	 */
> > -	efer_saved = vcpu0.vmcb->save.efer;
> > +	efer_saved = vmcb->save.efer;
> >  
> > -	SVM_TEST_REG_RESERVED_BITS(ctx, 8, 9, 1, "EFER", vcpu0.vmcb->save.efer,
> > +	SVM_TEST_REG_RESERVED_BITS(ctx, 8, 9, 1, "EFER", vmcb->save.efer,
> >  				   efer_saved, SVM_EFER_RESERVED_MASK);
> > -	SVM_TEST_REG_RESERVED_BITS(ctx, 16, 63, 4, "EFER", vcpu0.vmcb->save.efer,
> > +	SVM_TEST_REG_RESERVED_BITS(ctx, 16, 63, 4, "EFER", vmcb->save.efer,
> >  				   efer_saved, SVM_EFER_RESERVED_MASK);
> >  
> >  	/*
> >  	 * EFER.LME and CR0.PG are both set and CR4.PAE is zero.
> >  	 */
> > -	u64 cr0_saved = vcpu0.vmcb->save.cr0;
> > +	u64 cr0_saved = vmcb->save.cr0;
> >  	u64 cr0;
> > -	u64 cr4_saved = vcpu0.vmcb->save.cr4;
> > +	u64 cr4_saved = vmcb->save.cr4;
> >  	u64 cr4;
> >  
> >  	efer = efer_saved | EFER_LME;
> > -	vcpu0.vmcb->save.efer = efer;
> > +	vmcb->save.efer = efer;
> >  	cr0 = cr0_saved | X86_CR0_PG | X86_CR0_PE;
> > -	vcpu0.vmcb->save.cr0 = cr0;
> > +	vmcb->save.cr0 = cr0;
> >  	cr4 = cr4_saved & ~X86_CR4_PAE;
> > -	vcpu0.vmcb->save.cr4 = cr4;
> > +	vmcb->save.cr4 = cr4;
> >  	report(svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
> >  	       "CR0.PG=1 (%lx) and CR4.PAE=0 (%lx)", efer, cr0, cr4);
> >  
> > @@ -2105,31 +2174,31 @@ static void test_efer(struct svm_test_context *ctx)
> >  	 * SVM_EXIT_ERR.
> >  	 */
> >  	cr4 = cr4_saved | X86_CR4_PAE;
> > -	vcpu0.vmcb->save.cr4 = cr4;
> > +	vmcb->save.cr4 = cr4;
> >  	cr0 &= ~X86_CR0_PE;
> > -	vcpu0.vmcb->save.cr0 = cr0;
> > +	vmcb->save.cr0 = cr0;
> >  	report(svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
> >  	       "CR0.PG=1 and CR0.PE=0 (%lx)", efer, cr0);
> >  
> >  	/*
> >  	 * EFER.LME, CR0.PG, CR4.PAE, CS.L, and CS.D are all non-zero.
> >  	 */
> > -	u32 cs_attrib_saved = vcpu0.vmcb->save.cs.attrib;
> > +	u32 cs_attrib_saved = vmcb->save.cs.attrib;
> >  	u32 cs_attrib;
> >  
> >  	cr0 |= X86_CR0_PE;
> > -	vcpu0.vmcb->save.cr0 = cr0;
> > +	vmcb->save.cr0 = cr0;
> >  	cs_attrib = cs_attrib_saved | SVM_SELECTOR_L_MASK |
> >  		SVM_SELECTOR_DB_MASK;
> > -	vcpu0.vmcb->save.cs.attrib = cs_attrib;
> > +	vmcb->save.cs.attrib = cs_attrib;
> >  	report(svm_vmrun(ctx) == SVM_EXIT_ERR, "EFER.LME=1 (%lx), "
> >  	       "CR0.PG=1 (%lx), CR4.PAE=1 (%lx), CS.L=1 and CS.D=1 (%x)",
> >  	       efer, cr0, cr4, cs_attrib);
> >  
> > -	vcpu0.vmcb->save.cr0 = cr0_saved;
> > -	vcpu0.vmcb->save.cr4 = cr4_saved;
> > -	vcpu0.vmcb->save.efer = efer_saved;
> > -	vcpu0.vmcb->save.cs.attrib = cs_attrib_saved;
> > +	vmcb->save.cr0 = cr0_saved;
> > +	vmcb->save.cr4 = cr4_saved;
> > +	vmcb->save.efer = efer_saved;
> > +	vmcb->save.cs.attrib = cs_attrib_saved;
> >  }
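
All of these illegal-state checks lean on the same pattern: VMRUN's
consistency checks reject the VMCB before any guest code runs, and the
failure surfaces as SVM_EXIT_ERR. A tiny wrapper would shrink the
repetition across test_efer/test_cr0/test_cr4; hypothetical sketch:

static void report_vmrun_rejected(struct svm_test_context *ctx,
				  const char *what)
{
	/* consistency-check failures never enter the guest */
	report(svm_vmrun(ctx) == SVM_EXIT_ERR, "%s: VMRUN rejected", what);
}
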
> >  
> >  static void test_cr0(struct svm_test_context *ctx)
> > @@ -2137,37 +2206,39 @@ static void test_cr0(struct svm_test_context *ctx)
> >  	/*
> >  	 * Un-setting CR0.CD and setting CR0.NW is illegal combination
> >  	 */
> > -	u64 cr0_saved = vcpu0.vmcb->save.cr0;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	u64 cr0_saved = vmcb->save.cr0;
> >  	u64 cr0 = cr0_saved;
> >  
> >  	cr0 |= X86_CR0_CD;
> >  	cr0 &= ~X86_CR0_NW;
> > -	vcpu0.vmcb->save.cr0 = cr0;
> > +	vmcb->save.cr0 = cr0;
> >  	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=0: %lx",
> >  		cr0);
> >  	cr0 |= X86_CR0_NW;
> > -	vcpu0.vmcb->save.cr0 = cr0;
> > +	vmcb->save.cr0 = cr0;
> >  	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=1: %lx",
> >  		cr0);
> >  	cr0 &= ~X86_CR0_NW;
> >  	cr0 &= ~X86_CR0_CD;
> > -	vcpu0.vmcb->save.cr0 = cr0;
> > +	vmcb->save.cr0 = cr0;
> >  	report (svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR0 CD=0,NW=0: %lx",
> >  		cr0);
> >  	cr0 |= X86_CR0_NW;
> > -	vcpu0.vmcb->save.cr0 = cr0;
> > +	vmcb->save.cr0 = cr0;
> >  	report (svm_vmrun(ctx) == SVM_EXIT_ERR, "Test CR0 CD=0,NW=1: %lx",
> >  		cr0);
> > -	vcpu0.vmcb->save.cr0 = cr0_saved;
> > +	vmcb->save.cr0 = cr0_saved;
> >  
> >  	/*
> >  	 * CR0[63:32] are not zero
> >  	 */
> >  	cr0 = cr0_saved;
> >  
> > -	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "CR0", vcpu0.vmcb->save.cr0, cr0_saved,
> > +	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "CR0", vmcb->save.cr0, cr0_saved,
> >  				   SVM_CR0_RESERVED_MASK);
> > -	vcpu0.vmcb->save.cr0 = cr0_saved;
> > +	vmcb->save.cr0 = cr0_saved;
> >  }
> >  
> >  static void test_cr3(struct svm_test_context *ctx)
> > @@ -2176,37 +2247,39 @@ static void test_cr3(struct svm_test_context *ctx)
> >  	 * CR3 MBZ bits based on different modes:
> >  	 *   [63:52] - long mode
> >  	 */
> > -	u64 cr3_saved = vcpu0.vmcb->save.cr3;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	u64 cr3_saved = vmcb->save.cr3;
> >  
> >  	SVM_TEST_CR_RESERVED_BITS(ctx, 0, 63, 1, 3, cr3_saved,
> >  				  SVM_CR3_LONG_MBZ_MASK, SVM_EXIT_ERR, "");
> >  
> > -	vcpu0.vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_MBZ_MASK;
> > +	vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_MBZ_MASK;
> >  	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx",
> > -	       vcpu0.vmcb->save.cr3);
> > +	       vmcb->save.cr3);
> >  
> >  	/*
> >  	 * CR3 non-MBZ reserved bits based on different modes:
> >  	 *   [11:5] [2:0] - long mode (PCIDE=0)
> >  	 *          [2:0] - PAE legacy mode
> >  	 */
> > -	u64 cr4_saved = vcpu0.vmcb->save.cr4;
> > +	u64 cr4_saved = vmcb->save.cr4;
> >  	u64 *pdpe = npt_get_pml4e();
> >  
> >  	/*
> >  	 * Long mode
> >  	 */
> >  	if (this_cpu_has(X86_FEATURE_PCID)) {
> > -		vcpu0.vmcb->save.cr4 = cr4_saved | X86_CR4_PCIDE;
> > +		vmcb->save.cr4 = cr4_saved | X86_CR4_PCIDE;
> >  		SVM_TEST_CR_RESERVED_BITS(ctx, 0, 11, 1, 3, cr3_saved,
> >  					  SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_VMMCALL, "(PCIDE=1) ");
> >  
> > -		vcpu0.vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_RESERVED_MASK;
> > +		vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_RESERVED_MASK;
> >  		report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx",
> > -		       vcpu0.vmcb->save.cr3);
> > +		       vmcb->save.cr3);
> >  	}
> >  
> > -	vcpu0.vmcb->save.cr4 = cr4_saved & ~X86_CR4_PCIDE;
> > +	vmcb->save.cr4 = cr4_saved & ~X86_CR4_PCIDE;
> >  
> >  	if (!npt_supported())
> >  		goto skip_npt_only;
> > @@ -2218,44 +2291,46 @@ static void test_cr3(struct svm_test_context *ctx)
> >  				  SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_NPF, "(PCIDE=0) ");
> >  
> >  	pdpe[0] |= 1ULL;
> > -	vcpu0.vmcb->save.cr3 = cr3_saved;
> > +	vmcb->save.cr3 = cr3_saved;
> >  
> >  	/*
> >  	 * PAE legacy
> >  	 */
> >  	pdpe[0] &= ~1ULL;
> > -	vcpu0.vmcb->save.cr4 = cr4_saved | X86_CR4_PAE;
> > +	vmcb->save.cr4 = cr4_saved | X86_CR4_PAE;
> >  	SVM_TEST_CR_RESERVED_BITS(ctx, 0, 2, 1, 3, cr3_saved,
> >  				  SVM_CR3_PAE_LEGACY_RESERVED_MASK, SVM_EXIT_NPF, "(PAE) ");
> >  
> >  	pdpe[0] |= 1ULL;
> >  
> >  skip_npt_only:
> > -	vcpu0.vmcb->save.cr3 = cr3_saved;
> > -	vcpu0.vmcb->save.cr4 = cr4_saved;
> > +	vmcb->save.cr3 = cr3_saved;
> > +	vmcb->save.cr4 = cr4_saved;
> >  }
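
Readability nit while touching this function: the local is called pdpe but
holds npt_get_pml4e(), and the bare `&= ~1ULL` / `|= 1ULL` pairs toggle
that NPT entry's Present bit so the guest's page walk faults with
SVM_EXIT_NPF instead of completing. A rename plus the named bit would make
that much harder to misread (assuming the usual PT_PRESENT_MASK from
lib/x86):

	u64 *pml4e = npt_get_pml4e();

	pml4e[0] &= ~PT_PRESENT_MASK;	/* guest walks now fault with NPF */
	...
	pml4e[0] |= PT_PRESENT_MASK;
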
> >  
> >  /* Test CR4 MBZ bits based on legacy or long modes */
> >  static void test_cr4(struct svm_test_context *ctx)
> >  {
> > -	u64 cr4_saved = vcpu0.vmcb->save.cr4;
> > -	u64 efer_saved = vcpu0.vmcb->save.efer;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	u64 cr4_saved = vmcb->save.cr4;
> > +	u64 efer_saved = vmcb->save.efer;
> >  	u64 efer = efer_saved;
> >  
> >  	efer &= ~EFER_LME;
> > -	vcpu0.vmcb->save.efer = efer;
> > +	vmcb->save.efer = efer;
> >  	SVM_TEST_CR_RESERVED_BITS(ctx, 12, 31, 1, 4, cr4_saved,
> >  				  SVM_CR4_LEGACY_RESERVED_MASK, SVM_EXIT_ERR, "");
> >  
> >  	efer |= EFER_LME;
> > -	vcpu0.vmcb->save.efer = efer;
> > +	vmcb->save.efer = efer;
> >  	SVM_TEST_CR_RESERVED_BITS(ctx, 12, 31, 1, 4, cr4_saved,
> >  				  SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, "");
> >  	SVM_TEST_CR_RESERVED_BITS(ctx, 32, 63, 4, 4, cr4_saved,
> >  				  SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, "");
> >  
> > -	vcpu0.vmcb->save.cr4 = cr4_saved;
> > -	vcpu0.vmcb->save.efer = efer_saved;
> > +	vmcb->save.cr4 = cr4_saved;
> > +	vmcb->save.efer = efer_saved;
> >  }
> >  
> >  static void test_dr(struct svm_test_context *ctx)
> > @@ -2263,27 +2338,29 @@ static void test_dr(struct svm_test_context *ctx)
> >  	/*
> >  	 * DR6[63:32] and DR7[63:32] are MBZ
> >  	 */
> > -	u64 dr_saved = vcpu0.vmcb->save.dr6;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  
> > -	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "DR6", vcpu0.vmcb->save.dr6, dr_saved,
> > +	u64 dr_saved = vmcb->save.dr6;
> > +
> > +	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "DR6", vmcb->save.dr6, dr_saved,
> >  				   SVM_DR6_RESERVED_MASK);
> > -	vcpu0.vmcb->save.dr6 = dr_saved;
> > +	vmcb->save.dr6 = dr_saved;
> >  
> > -	dr_saved = vcpu0.vmcb->save.dr7;
> > -	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "DR7", vcpu0.vmcb->save.dr7, dr_saved,
> > +	dr_saved = vmcb->save.dr7;
> > +	SVM_TEST_REG_RESERVED_BITS(ctx, 32, 63, 4, "DR7", vmcb->save.dr7, dr_saved,
> >  				   SVM_DR7_RESERVED_MASK);
> >  
> > -	vcpu0.vmcb->save.dr7 = dr_saved;
> > +	vmcb->save.dr7 = dr_saved;
> >  }
> >  
> >  /* TODO: verify if high 32-bits are sign- or zero-extended on bare metal */
> > -#define	TEST_BITMAP_ADDR(ctx, save_intercept, type, addr, exit_code,		\
> > +#define	TEST_BITMAP_ADDR(ctx, save_intercept, type, addr, exit_code,	\
> >  			 msg) {						\
> > -		vcpu0.vmcb->control.intercept = saved_intercept | 1ULL << type; \
> > +		ctx->vcpu->vmcb->control.intercept = saved_intercept | 1ULL << type; \
> >  		if (type == INTERCEPT_MSR_PROT)				\
> > -			vcpu0.vmcb->control.msrpm_base_pa = addr;		\
> > +			ctx->vcpu->vmcb->control.msrpm_base_pa = addr;	\
> >  		else							\
> > -			vcpu0.vmcb->control.iopm_base_pa = addr;		\
> > +			ctx->vcpu->vmcb->control.iopm_base_pa = addr;	\
> >  		report(svm_vmrun(ctx) == exit_code,			\
> >  		       "Test %s address: %lx", msg, addr);		\
> >  	}
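
Separate from the conversion: the second macro parameter is spelled
save_intercept, but the body references saved_intercept, so the parameter
is never actually used -- this only compiles because every caller happens
to have a local named saved_intercept in scope. Renaming the parameter
(and hoisting the vmcb pointer like the rest of this patch does) would
make the data flow explicit; sketch of the first lines only:

#define	TEST_BITMAP_ADDR(ctx, saved_intercept, type, addr, exit_code,	\
			 msg) {						\
		struct vmcb *vmcb = ctx->vcpu->vmcb;			\
		vmcb->control.intercept = saved_intercept | 1ULL << type; \
		...
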
> > @@ -2306,7 +2383,9 @@ static void test_dr(struct svm_test_context *ctx)
> >   */
> >  static void test_msrpm_iopm_bitmap_addrs(struct svm_test_context *ctx)
> >  {
> > -	u64 saved_intercept = vcpu0.vmcb->control.intercept;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	u64 saved_intercept = vmcb->control.intercept;
> >  	u64 addr_beyond_limit = 1ull << cpuid_maxphyaddr();
> >  	u64 addr = virt_to_phys(svm_get_msr_bitmap()) & (~((1ull << 12) - 1));
> >  	u8 *io_bitmap = svm_get_io_bitmap();
> > @@ -2348,7 +2427,7 @@ static void test_msrpm_iopm_bitmap_addrs(struct svm_test_context *ctx)
> >  	TEST_BITMAP_ADDR(ctx, saved_intercept, INTERCEPT_IOIO_PROT, addr,
> >  			 SVM_EXIT_VMMCALL, "IOPM");
> >  
> > -	vcpu0.vmcb->control.intercept = saved_intercept;
> > +	vmcb->control.intercept = saved_intercept;
> >  }
> >  
> >  /*
> > @@ -2378,22 +2457,24 @@ static void test_canonicalization(struct svm_test_context *ctx)
> >  	u64 saved_addr;
> >  	u64 return_value;
> >  	u64 addr_limit;
> > -	u64 vmcb_phys = virt_to_phys(vcpu0.vmcb);
> > +
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +	u64 vmcb_phys = virt_to_phys(vmcb);
> >  
> >  	addr_limit = (this_cpu_has(X86_FEATURE_LA57)) ? 57 : 48;
> >  	u64 noncanonical_mask = NONCANONICAL & ~((1ul << addr_limit) - 1);
> >  
> > -	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.fs.base, "FS");
> > -	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.gs.base, "GS");
> > -	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.ldtr.base, "LDTR");
> > -	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.tr.base, "TR");
> > -	TEST_CANONICAL_VMLOAD(ctx, vcpu0.vmcb->save.kernel_gs_base, "KERNEL GS");
> > -	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.es.base, "ES");
> > -	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.cs.base, "CS");
> > -	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.ss.base, "SS");
> > -	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.ds.base, "DS");
> > -	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.gdtr.base, "GDTR");
> > -	TEST_CANONICAL_VMRUN(ctx, vcpu0.vmcb->save.idtr.base, "IDTR");
> > +	TEST_CANONICAL_VMLOAD(ctx, vmcb->save.fs.base, "FS");
> > +	TEST_CANONICAL_VMLOAD(ctx, vmcb->save.gs.base, "GS");
> > +	TEST_CANONICAL_VMLOAD(ctx, vmcb->save.ldtr.base, "LDTR");
> > +	TEST_CANONICAL_VMLOAD(ctx, vmcb->save.tr.base, "TR");
> > +	TEST_CANONICAL_VMLOAD(ctx, vmcb->save.kernel_gs_base, "KERNEL GS");
> > +	TEST_CANONICAL_VMRUN(ctx, vmcb->save.es.base, "ES");
> > +	TEST_CANONICAL_VMRUN(ctx, vmcb->save.cs.base, "CS");
> > +	TEST_CANONICAL_VMRUN(ctx, vmcb->save.ss.base, "SS");
> > +	TEST_CANONICAL_VMRUN(ctx, vmcb->save.ds.base, "DS");
> > +	TEST_CANONICAL_VMRUN(ctx, vmcb->save.gdtr.base, "GDTR");
> > +	TEST_CANONICAL_VMRUN(ctx, vmcb->save.idtr.base, "IDTR");
> >  }
> >  
> >  /*
> > @@ -2442,12 +2523,14 @@ asm("guest_rflags_test_guest:\n\t"
> >  
> >  static void svm_test_singlestep(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	handle_exception(DB_VECTOR, guest_rflags_test_db_handler);
> >  
> >  	/*
> >  	 * Trap expected after completion of first guest instruction
> >  	 */
> > -	vcpu0.vmcb->save.rflags |= X86_EFLAGS_TF;
> > +	vmcb->save.rflags |= X86_EFLAGS_TF;
> >  	report (__svm_vmrun(ctx, (u64)guest_rflags_test_guest) == SVM_EXIT_VMMCALL &&
> >  		guest_rflags_test_trap_rip == (u64)&insn2,
> >  		"Test EFLAGS.TF on VMRUN: trap expected  after completion of first guest instruction");
> > @@ -2455,18 +2538,18 @@ static void svm_test_singlestep(struct svm_test_context *ctx)
> >  	 * No trap expected
> >  	 */
> >  	guest_rflags_test_trap_rip = 0;
> > -	vcpu0.vmcb->save.rip += 3;
> > -	vcpu0.vmcb->save.rflags |= X86_EFLAGS_TF;
> > -	report(__svm_vmrun(ctx, vcpu0.vmcb->save.rip) == SVM_EXIT_VMMCALL &&
> > +	vmcb->save.rip += 3;
> > +	vmcb->save.rflags |= X86_EFLAGS_TF;
> > +	report(__svm_vmrun(ctx, vmcb->save.rip) == SVM_EXIT_VMMCALL &&
> >  		guest_rflags_test_trap_rip == 0,
> >  		"Test EFLAGS.TF on VMRUN: trap not expected");
> >  
> >  	/*
> >  	 * Let guest finish execution
> >  	 */
> > -	vcpu0.vmcb->save.rip += 3;
> > -	report(__svm_vmrun(ctx, vcpu0.vmcb->save.rip) == SVM_EXIT_VMMCALL &&
> > -		vcpu0.vmcb->save.rip == (u64)&guest_end,
> > +	vmcb->save.rip += 3;
> > +	report(__svm_vmrun(ctx, vmcb->save.rip) == SVM_EXIT_VMMCALL &&
> > +		vmcb->save.rip == (u64)&guest_end,
> >  		"Test EFLAGS.TF on VMRUN: guest execution completion");
> >  }
> >  
> > @@ -2538,7 +2621,8 @@ static void svm_vmrun_errata_test(struct svm_test_context *ctx)
> >  
> >  static void vmload_vmsave_guest_main(struct svm_test_context *ctx)
> >  {
> > -	u64 vmcb_phys = virt_to_phys(vcpu0.vmcb);
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +	u64 vmcb_phys = virt_to_phys(vmcb);
> >  
> >  	asm volatile ("vmload %0" : : "a"(vmcb_phys));
> >  	asm volatile ("vmsave %0" : : "a"(vmcb_phys));
> > @@ -2546,7 +2630,8 @@ static void vmload_vmsave_guest_main(struct svm_test_context *ctx)
> >  
> >  static void svm_vmload_vmsave(struct svm_test_context *ctx)
> >  {
> > -	u32 intercept_saved = vcpu0.vmcb->control.intercept;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +	u32 intercept_saved = vmcb->control.intercept;
> >  
> >  	test_set_guest(vmload_vmsave_guest_main);
> >  
> > @@ -2554,49 +2639,49 @@ static void svm_vmload_vmsave(struct svm_test_context *ctx)
> >  	 * Disabling intercept for VMLOAD and VMSAVE doesn't cause
> >  	 * respective #VMEXIT to host
> >  	 */
> > -	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
> > -	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
> > +	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
> > +	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
> >  	svm_vmrun(ctx);
> > -	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
> > +	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
> >  	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
> >  
> >  	/*
> >  	 * Enabling intercept for VMLOAD and VMSAVE causes respective
> >  	 * #VMEXIT to host
> >  	 */
> > -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
> > +	vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
> >  	svm_vmrun(ctx);
> > -	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
> > +	report(vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
> >  	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
> > -	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
> > -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
> > +	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
> > +	vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
> >  	svm_vmrun(ctx);
> > -	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
> > +	report(vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
> >  	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
> > -	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
> > +	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
> >  	svm_vmrun(ctx);
> > -	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
> > +	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
> >  	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
> >  
> > -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
> > +	vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
> >  	svm_vmrun(ctx);
> > -	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
> > +	report(vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
> >  	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
> > -	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
> > +	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
> >  	svm_vmrun(ctx);
> > -	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
> > +	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
> >  	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
> >  
> > -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
> > +	vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
> >  	svm_vmrun(ctx);
> > -	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
> > +	report(vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
> >  	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
> > -	vcpu0.vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
> > +	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
> >  	svm_vmrun(ctx);
> > -	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
> > +	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
> >  	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
> >  
> > -	vcpu0.vmcb->control.intercept = intercept_saved;
> > +	vmcb->control.intercept = intercept_saved;
> >  }
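
Pre-existing rather than introduced here: intercept_saved is a u32 while
vmcb->control.intercept is a u64 (test_msrpm_iopm_bitmap_addrs above saves
it into a u64). The VMRUN/VMLOAD/VMSAVE intercept bits sit in the upper
half of the field, so the restore at the end of this function silently
drops them. Since the patch touches the declaration anyway:

	u64 intercept_saved = vmcb->control.intercept;
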
> >  
> >  static void prepare_vgif_enabled(struct svm_test_context *ctx)
> > @@ -2610,45 +2695,47 @@ static void test_vgif(struct svm_test_context *ctx)
> >  
> >  static bool vgif_finished(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	switch (get_test_stage(ctx))
> >  		{
> >  		case 0:
> > -			if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > +			if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> >  				report_fail("VMEXIT not due to vmmcall.");
> >  				return true;
> >  			}
> > -			vcpu0.vmcb->control.int_ctl |= V_GIF_ENABLED_MASK;
> > -			vcpu0.vmcb->save.rip += 3;
> > +			vmcb->control.int_ctl |= V_GIF_ENABLED_MASK;
> > +			vmcb->save.rip += 3;
> >  			inc_test_stage(ctx);
> >  			break;
> >  		case 1:
> > -			if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > +			if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> >  				report_fail("VMEXIT not due to vmmcall.");
> >  				return true;
> >  			}
> > -			if (!(vcpu0.vmcb->control.int_ctl & V_GIF_MASK)) {
> > +			if (!(vmcb->control.int_ctl & V_GIF_MASK)) {
> >  				report_fail("Failed to set VGIF when executing STGI.");
> > -				vcpu0.vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
> > +				vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
> >  				return true;
> >  			}
> >  			report_pass("STGI set VGIF bit.");
> > -			vcpu0.vmcb->save.rip += 3;
> > +			vmcb->save.rip += 3;
> >  			inc_test_stage(ctx);
> >  			break;
> >  		case 2:
> > -			if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > +			if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> >  				report_fail("VMEXIT not due to vmmcall.");
> >  				return true;
> >  			}
> > -			if (vcpu0.vmcb->control.int_ctl & V_GIF_MASK) {
> > +			if (vmcb->control.int_ctl & V_GIF_MASK) {
> >  				report_fail("Failed to clear VGIF when executing CLGI.");
> > -				vcpu0.vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
> > +				vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
> >  				return true;
> >  			}
> >  			report_pass("CLGI cleared VGIF bit.");
> > -			vcpu0.vmcb->save.rip += 3;
> > +			vmcb->save.rip += 3;
> >  			inc_test_stage(ctx);
> > -			vcpu0.vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
> > +			vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK;
> >  			break;
> >  		default:
> >  			return true;
> > @@ -2688,31 +2775,35 @@ static void pause_filter_run_test(struct svm_test_context *ctx,
> >  				  int pause_iterations, int filter_value,
> >  				  int wait_iterations, int threshold)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	test_set_guest(pause_filter_test_guest_main);
> >  
> >  	pause_test_counter = pause_iterations;
> >  	wait_counter = wait_iterations;
> >  
> > -	vcpu0.vmcb->control.pause_filter_count = filter_value;
> > -	vcpu0.vmcb->control.pause_filter_thresh = threshold;
> > +	vmcb->control.pause_filter_count = filter_value;
> > +	vmcb->control.pause_filter_thresh = threshold;
> >  	svm_vmrun(ctx);
> >  
> >  	if (filter_value <= pause_iterations || wait_iterations < threshold)
> > -		report(vcpu0.vmcb->control.exit_code == SVM_EXIT_PAUSE,
> > +		report(vmcb->control.exit_code == SVM_EXIT_PAUSE,
> >  		       "expected PAUSE vmexit");
> >  	else
> > -		report(vcpu0.vmcb->control.exit_code == SVM_EXIT_VMMCALL,
> > +		report(vmcb->control.exit_code == SVM_EXIT_VMMCALL,
> >  		       "no expected PAUSE vmexit");
> >  }
> >  
> >  static void pause_filter_test(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	if (!pause_filter_supported()) {
> >  		report_skip("PAUSE filter not supported in the guest");
> >  		return;
> >  	}
> >  
> > -	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_PAUSE);
> > +	vmcb->control.intercept |= (1 << INTERCEPT_PAUSE);
> >  
> >  	// filter count more than pause count - no VMexit
> >  	pause_filter_run_test(ctx, 10, 9, 0, 0);
> > @@ -2738,10 +2829,12 @@ static void pause_filter_test(struct svm_test_context *ctx)
> >  /* If CR0.TS and CR0.EM are cleared in L2, no #NM is generated. */
> >  static void svm_no_nm_test(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	write_cr0(read_cr0() & ~X86_CR0_TS);
> >  	test_set_guest((test_guest_func)fnop);
> >  
> > -	vcpu0.vmcb->save.cr0 = vcpu0.vmcb->save.cr0 & ~(X86_CR0_TS | X86_CR0_EM);
> > +	vmcb->save.cr0 = vmcb->save.cr0 & ~(X86_CR0_TS | X86_CR0_EM);
> >  	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL,
> >  	       "fnop with CR0.TS and CR0.EM unset no #NM excpetion");
> >  }
> > @@ -2872,20 +2965,21 @@ static void svm_lbrv_test0(struct svm_test_context *ctx)
> >  
> >  static void svm_lbrv_test1(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  
> >  	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(1)");
> >  
> > -	vcpu0.vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
> > -	vcpu0.vmcb->control.virt_ext = 0;
> > +	vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
> > +	vmcb->control.virt_ext = 0;
> >  
> >  	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
> >  	DO_BRANCH(host_branch1);
> > -	SVM_VMRUN(&vcpu0);
> > +	SVM_VMRUN(ctx->vcpu);
> >  	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> >  
> > -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > +	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> >  		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
> > -		       vcpu0.vmcb->control.exit_code);
> > +		       vmcb->control.exit_code);
> >  		return;
> >  	}
> >  
> > @@ -2895,21 +2989,23 @@ static void svm_lbrv_test1(struct svm_test_context *ctx)
> >  
> >  static void svm_lbrv_test2(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(2)");
> >  
> > -	vcpu0.vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
> > -	vcpu0.vmcb->control.virt_ext = 0;
> > +	vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
> > +	vmcb->control.virt_ext = 0;
> >  
> >  	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
> >  	DO_BRANCH(host_branch2);
> >  	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
> > -	SVM_VMRUN(&vcpu0);
> > +	SVM_VMRUN(ctx->vcpu);
> >  	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> >  	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
> >  
> > -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > +	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> >  		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
> > -		       vcpu0.vmcb->control.exit_code);
> > +		       vmcb->control.exit_code);
> >  		return;
> >  	}
> >  
> > @@ -2919,32 +3015,34 @@ static void svm_lbrv_test2(struct svm_test_context *ctx)
> >  
> >  static void svm_lbrv_nested_test1(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	if (!lbrv_supported()) {
> >  		report_skip("LBRV not supported in the guest");
> >  		return;
> >  	}
> >  
> >  	report(true, "Test that with LBRV enabled, guest LBR state doesn't leak (1)");
> > -	vcpu0.vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
> > -	vcpu0.vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
> > -	vcpu0.vmcb->save.dbgctl = DEBUGCTLMSR_LBR;
> > +	vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
> > +	vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
> > +	vmcb->save.dbgctl = DEBUGCTLMSR_LBR;
> >  
> >  	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
> >  	DO_BRANCH(host_branch3);
> > -	SVM_VMRUN(&vcpu0);
> > +	SVM_VMRUN(ctx->vcpu);
> >  	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> >  	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
> >  
> > -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > +	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> >  		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
> > -		       vcpu0.vmcb->control.exit_code);
> > +		       vmcb->control.exit_code);
> >  		return;
> >  	}
> >  
> > -	if (vcpu0.vmcb->save.dbgctl != 0) {
> > +	if (vmcb->save.dbgctl != 0) {
> >  		report(false,
> >  		       "unexpected virtual guest MSR_IA32_DEBUGCTLMSR value 0x%lx",
> > -		       vcpu0.vmcb->save.dbgctl);
> > +		       vmcb->save.dbgctl);
> >  		return;
> >  	}
> >  
> > @@ -2954,28 +3052,30 @@ static void svm_lbrv_nested_test1(struct svm_test_context *ctx)
> >  
> >  static void svm_lbrv_nested_test2(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	if (!lbrv_supported()) {
> >  		report_skip("LBRV not supported in the guest");
> >  		return;
> >  	}
> >  
> >  	report(true, "Test that with LBRV enabled, guest LBR state doesn't leak (2)");
> > -	vcpu0.vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
> > -	vcpu0.vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
> > +	vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
> > +	vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
> >  
> > -	vcpu0.vmcb->save.dbgctl = 0;
> > -	vcpu0.vmcb->save.br_from = (u64)&host_branch2_from;
> > -	vcpu0.vmcb->save.br_to = (u64)&host_branch2_to;
> > +	vmcb->save.dbgctl = 0;
> > +	vmcb->save.br_from = (u64)&host_branch2_from;
> > +	vmcb->save.br_to = (u64)&host_branch2_to;
> >  
> >  	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
> >  	DO_BRANCH(host_branch4);
> > -	SVM_VMRUN(&vcpu0);
> > +	SVM_VMRUN(ctx->vcpu);
> >  	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> >  	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
> >  
> > -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> > +	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
> >  		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
> > -		       vcpu0.vmcb->control.exit_code);
> > +		       vmcb->control.exit_code);
> >  		return;
> >  	}
> >  
> > @@ -3005,6 +3105,8 @@ static void dummy_nmi_handler(struct ex_regs *regs)
> >  static void svm_intr_intercept_mix_run_guest(struct svm_test_context *ctx,
> >  					     volatile int *counter, int expected_vmexit)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	if (counter)
> >  		*counter = 0;
> >  
> > @@ -3021,8 +3123,8 @@ static void svm_intr_intercept_mix_run_guest(struct svm_test_context *ctx,
> >  	if (counter)
> >  		report(*counter == 1, "Interrupt is expected");
> >  
> > -	report(vcpu0.vmcb->control.exit_code == expected_vmexit, "Test expected VM exit");
> > -	report(vcpu0.vmcb->save.rflags & X86_EFLAGS_IF, "Guest should have EFLAGS.IF set now");
> > +	report(vmcb->control.exit_code == expected_vmexit, "Test expected VM exit");
> > +	report(vmcb->save.rflags & X86_EFLAGS_IF, "Guest should have EFLAGS.IF set now");
> >  	cli();
> >  }
> >  
> > @@ -3038,12 +3140,14 @@ static void svm_intr_intercept_mix_if_guest(struct svm_test_context *ctx)
> >  
> >  static void svm_intr_intercept_mix_if(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	// make a physical interrupt to be pending
> >  	handle_irq(0x55, dummy_isr);
> >  
> > -	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_INTR);
> > -	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> > -	vcpu0.vmcb->save.rflags &= ~X86_EFLAGS_IF;
> > +	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
> > +	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> > +	vmcb->save.rflags &= ~X86_EFLAGS_IF;
> >  
> >  	test_set_guest(svm_intr_intercept_mix_if_guest);
> >  	cli();
> > @@ -3072,11 +3176,13 @@ static void svm_intr_intercept_mix_gif_guest(struct svm_test_context *ctx)
> >  
> >  static void svm_intr_intercept_mix_gif(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	handle_irq(0x55, dummy_isr);
> >  
> > -	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_INTR);
> > -	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> > -	vcpu0.vmcb->save.rflags &= ~X86_EFLAGS_IF;
> > +	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
> > +	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> > +	vmcb->save.rflags &= ~X86_EFLAGS_IF;
> >  
> >  	test_set_guest(svm_intr_intercept_mix_gif_guest);
> >  	cli();
> > @@ -3102,11 +3208,13 @@ static void svm_intr_intercept_mix_gif_guest2(struct svm_test_context *ctx)
> >  
> >  static void svm_intr_intercept_mix_gif2(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	handle_irq(0x55, dummy_isr);
> >  
> > -	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_INTR);
> > -	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> > -	vcpu0.vmcb->save.rflags |= X86_EFLAGS_IF;
> > +	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
> > +	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> > +	vmcb->save.rflags |= X86_EFLAGS_IF;
> >  
> >  	test_set_guest(svm_intr_intercept_mix_gif_guest2);
> >  	svm_intr_intercept_mix_run_guest(ctx, &dummy_isr_recevied, SVM_EXIT_INTR);
> > @@ -3131,11 +3239,13 @@ static void svm_intr_intercept_mix_nmi_guest(struct svm_test_context *ctx)
> >  
> >  static void svm_intr_intercept_mix_nmi(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	handle_exception(2, dummy_nmi_handler);
> >  
> > -	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_NMI);
> > -	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> > -	vcpu0.vmcb->save.rflags |= X86_EFLAGS_IF;
> > +	vmcb->control.intercept |= (1 << INTERCEPT_NMI);
> > +	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> > +	vmcb->save.rflags |= X86_EFLAGS_IF;
> >  
> >  	test_set_guest(svm_intr_intercept_mix_nmi_guest);
> >  	svm_intr_intercept_mix_run_guest(ctx, &nmi_recevied, SVM_EXIT_NMI);
> > @@ -3157,8 +3267,10 @@ static void svm_intr_intercept_mix_smi_guest(struct svm_test_context *ctx)
> >  
> >  static void svm_intr_intercept_mix_smi(struct svm_test_context *ctx)
> >  {
> > -	vcpu0.vmcb->control.intercept |= (1 << INTERCEPT_SMI);
> > -	vcpu0.vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	vmcb->control.intercept |= (1 << INTERCEPT_SMI);
> > +	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
> >  	test_set_guest(svm_intr_intercept_mix_smi_guest);
> >  	svm_intr_intercept_mix_run_guest(ctx, NULL, SVM_EXIT_SMI);
> >  }
> > @@ -3215,14 +3327,16 @@ static void handle_exception_in_l2(struct svm_test_context *ctx, u8 vector)
> >  
> >  static void handle_exception_in_l1(struct svm_test_context *ctx, u32 vector)
> >  {
> > -	u32 old_ie = vcpu0.vmcb->control.intercept_exceptions;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	u32 old_ie = vmcb->control.intercept_exceptions;
> >  
> > -	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << vector);
> > +	vmcb->control.intercept_exceptions |= (1ULL << vector);
> >  
> >  	report(svm_vmrun(ctx) == (SVM_EXIT_EXCP_BASE + vector),
> >  		"%s handled by L1",  exception_mnemonic(vector));
> >  
> > -	vcpu0.vmcb->control.intercept_exceptions = old_ie;
> > +	vmcb->control.intercept_exceptions = old_ie;
> >  }
> >  
> >  static void svm_exception_test(struct svm_test_context *ctx)
> > @@ -3235,10 +3349,10 @@ static void svm_exception_test(struct svm_test_context *ctx)
> >  		test_set_guest((test_guest_func)t->guest_code);
> >  
> >  		handle_exception_in_l2(ctx, t->vector);
> > -		svm_vcpu_ident(&vcpu0);
> > +		svm_vcpu_ident(ctx->vcpu);
> >  
> >  		handle_exception_in_l1(ctx, t->vector);
> > -		svm_vcpu_ident(&vcpu0);
> > +		svm_vcpu_ident(ctx->vcpu);
> >  	}
> >  }
> >  
> > @@ -3250,11 +3364,13 @@ static void shutdown_intercept_test_guest(struct svm_test_context *ctx)
> >  }
> >  static void svm_shutdown_intercept_test(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	test_set_guest(shutdown_intercept_test_guest);
> > -	vcpu0.vmcb->save.idtr.base = (u64)alloc_vpage();
> > -	vcpu0.vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN);
> > +	vmcb->save.idtr.base = (u64)alloc_vpage();
> > +	vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN);
> >  	svm_vmrun(ctx);
> > -	report(vcpu0.vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed");
> > +	report(vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed");
> >  }
> >  
> >  /*
> > @@ -3264,7 +3380,9 @@ static void svm_shutdown_intercept_test(struct svm_test_context *ctx)
> >  
> >  static void exception_merging_prepare(struct svm_test_context *ctx)
> >  {
> > -	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
> >  
> >  	/* break UD vector idt entry to get #GP*/
> >  	boot_idt[UD_VECTOR].type = 1;
> > @@ -3277,15 +3395,17 @@ static void exception_merging_test(struct svm_test_context *ctx)
> >  
> >  static bool exception_merging_finished(struct svm_test_context *ctx)
> >  {
> > -	u32 vec = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
> > -	u32 type = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> > +	u32 vec = vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
> > +	u32 type = vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
> >  
> > -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + GP_VECTOR) {
> > +	if (vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + GP_VECTOR) {
> >  		report(false, "unexpected VM exit");
> >  		goto out;
> >  	}
> >  
> > -	if (!(vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)) {
> > +	if (!(vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)) {
> >  		report(false, "EXITINTINFO not valid");
> >  		goto out;
> >  	}
> > @@ -3320,8 +3440,10 @@ static bool exception_merging_check(struct svm_test_context *ctx)
> >  
> >  static void interrupt_merging_prepare(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> > +
> >  	/* intercept #GP */
> > -	vcpu0.vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
> > +	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR);
> >  
> >  	/* set local APIC to inject external interrupts */
> >  	apic_setup_timer(TIMER_VECTOR, APIC_LVT_TIMER_PERIODIC);
> > @@ -3342,16 +3464,17 @@ static void interrupt_merging_test(struct svm_test_context *ctx)
> >  
> >  static bool interrupt_merging_finished(struct svm_test_context *ctx)
> >  {
> > +	struct vmcb *vmcb = ctx->vcpu->vmcb;
> >  
> > -	u32 vec = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
> > -	u32 type = vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
> > -	u32 error_code = vcpu0.vmcb->control.exit_info_1;
> > +	u32 vec = vmcb->control.exit_int_info & SVM_EXITINTINFO_VEC_MASK;
> > +	u32 type = vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK;
> > +	u32 error_code = vmcb->control.exit_info_1;
> >  
> >  	/* Exits on external interrupts are disabled, so delivery of the
> >  	 * timer interrupt will be attempted; the incorrect IDT entry should
> >  	 * turn it into a #GP instead.
> >  	 */
> > -	if (vcpu0.vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + GP_VECTOR) {
> > +	if (vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + GP_VECTOR) {
> >  		report(false, "unexpected VM exit");
> >  		goto cleanup;
> >  	}
> > @@ -3363,7 +3486,7 @@ static bool interrupt_merging_finished(struct svm_test_context *ctx)
> >  	}
> >  
> >  	/* Original interrupt should be preserved in EXITINTINFO */
> > -	if (!(vcpu0.vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)) {
> > +	if (!(vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)) {
> >  		report(false, "EXITINTINFO not valid");
> >  		goto cleanup;
> >  	}
> > 
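Every hunk above is the same mechanical conversion: drop the global vcpu0,
reach all state through the test context, and optionally cache the VMCB
pointer once at the top. As a rough sketch of the shape of a converted test
(the test name, guest function and flag tweak are illustrative only, not
taken from the series):

static void example_test(struct svm_test_context *ctx)
{
	/* cache the VMCB pointer once instead of repeating ctx->vcpu->vmcb */
	struct vmcb *vmcb = ctx->vcpu->vmcb;

	/* guest state is now tweaked through the context, not a global */
	vmcb->save.rflags &= ~X86_EFLAGS_IF;
	test_set_guest(example_guest_func);	/* hypothetical guest body */

	/* both entry paths take the context/vcpu explicitly:
	 * svm_vmrun(ctx) or SVM_VMRUN(ctx->vcpu) */
	report(svm_vmrun(ctx) == SVM_EXIT_VMMCALL, "guest reached vmmcall");
}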



^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [kvm-unit-tests PATCH v3 11/27] lib: Add random number generator
  2022-12-06 14:07     ` Maxim Levitsky
@ 2022-12-14 10:33       ` Claudio Imbrenda
  0 siblings, 0 replies; 56+ messages in thread
From: Claudio Imbrenda @ 2022-12-14 10:33 UTC (permalink / raw)
  To: Maxim Levitsky
  Cc: kvm, Andrew Jones, Alexandru Elisei, Paolo Bonzini, Thomas Huth,
	Alex Bennée, Nico Boehr, Cathy Avery, Janosch Frank

On Tue, 06 Dec 2022 16:07:39 +0200
Maxim Levitsky <mlevitsk@redhat.com> wrote:

> On Wed, 2022-11-23 at 10:28 +0100, Claudio Imbrenda wrote:
> > On Tue, 22 Nov 2022 18:11:36 +0200
> > Maxim Levitsky <mlevitsk@redhat.com> wrote:
> >   
> > > Add a simple pseudo random number generator which can be used
> > > in the tests to add randomness in a controlled manner.  
> > 
> > ahh, yes I have wanted something like this in the library for quite some
> > time! thanks!
> > 
> > I have some comments regarding the interfaces (see below), and also a
> > request: could you split the x86 part into a separate patch, so we
> > can have a "pure" lib patch, and then an x86-only patch
> > that uses the new interface?
> >   
> > > For x86 add a wrapper which initializes the PRNG with RDRAND,
> > > unless RANDOM_SEED env variable is set, in which case it is used
> > > instead.
> > > 
> > > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > > ---
> > >  Makefile              |  3 ++-
> > >  README.md             |  1 +
> > >  lib/prng.c            | 41 +++++++++++++++++++++++++++++++++++++++++
> > >  lib/prng.h            | 23 +++++++++++++++++++++++
> > >  lib/x86/random.c      | 33 +++++++++++++++++++++++++++++++++
> > >  lib/x86/random.h      | 17 +++++++++++++++++
> > >  scripts/arch-run.bash |  2 +-
> > >  x86/Makefile.common   |  1 +
> > >  8 files changed, 119 insertions(+), 2 deletions(-)
> > >  create mode 100644 lib/prng.c
> > >  create mode 100644 lib/prng.h
> > >  create mode 100644 lib/x86/random.c
> > >  create mode 100644 lib/x86/random.h
> > > 
> > > diff --git a/Makefile b/Makefile
> > > index 6ed5deac..384b5acf 100644
> > > --- a/Makefile
> > > +++ b/Makefile
> > > @@ -29,7 +29,8 @@ cflatobjs := \
> > >  	lib/string.o \
> > >  	lib/abort.o \
> > >  	lib/report.o \
> > > -	lib/stack.o
> > > +	lib/stack.o \
> > > +	lib/prng.o
> > >  
> > >  # libfdt paths
> > >  LIBFDT_objdir = lib/libfdt
> > > diff --git a/README.md b/README.md
> > > index 6e82dc22..5a677a03 100644
> > > --- a/README.md
> > > +++ b/README.md
> > > @@ -91,6 +91,7 @@ the framework.  The list of reserved environment variables is below
> > >      QEMU_ACCEL                   either kvm, hvf or tcg
> > >      QEMU_VERSION_STRING          string of the form `qemu -h | head -1`
> > >      KERNEL_VERSION_STRING        string of the form `uname -r`
> > > +    TEST_SEED                    integer to force a fixed seed for the prng
> > >  
> > >  Additionally these self-explanatory variables are reserved
> > >  
> > > diff --git a/lib/prng.c b/lib/prng.c
> > > new file mode 100644
> > > index 00000000..d9342eb3
> > > --- /dev/null
> > > +++ b/lib/prng.c
> > > @@ -0,0 +1,41 @@
> > > +
> > > +/*
> > > + * Random number generator that is usable from guest code. This is the
> > > + * Park-Miller LCG using standard constants.
> > > + */
> > > +
> > > +#include "libcflat.h"
> > > +#include "prng.h"
> > > +
> > > +struct random_state new_random_state(uint32_t seed)
> > > +{
> > > +	struct random_state s = {.seed = seed};
> > > +	return s;
> > > +}
> > > +
> > > +uint32_t random_u32(struct random_state *state)
> > > +{
> > > +	state->seed = (uint64_t)state->seed * 48271 % ((uint32_t)(1 << 31) - 1);  
> > 
> > why not:
> > 
> > state->seed = state->seed * 48271ULL % (BIT_ULL(31) - 1);
> > 
> > I think it's more readable  
> 
> I copied this code verbatim from a patch that was sent to the in-kernel
> selftests, as Sean suggested I do.

fair enough :)

> 
> To be honest, I would have picked a more complex random generator like the
> Mersenne Twister, since performance is not an issue here, and this generator
> is, I think, geared toward being as fast as possible.

I think that the important thing is that the generator is random enough
to confuse the various branch predictors and prefetchers. If the code
is simple, it's even better, because it's easier to understand.

> 
> But again, I don't care much about this; any source of randomness is better
> than nothing.

exactly
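For reference, the whole generator under discussion is small enough to show
standalone. This is only an illustrative sketch, not the patch code: the
function name is mine, and the constant is spelled the more readable way
suggested above.

/* Park-Miller "minimal standard" LCG: x' = x * 48271 mod (2^31 - 1).
 * A seed of 0 is a fixed point of the recurrence and must be avoided. */
static uint32_t lcg_next(uint32_t *seed)
{
	*seed = (uint64_t)*seed * 48271ULL % ((1ULL << 31) - 1);
	return *seed;
}

Starting from seed 1, the first outputs are 48271, 182605794, 1291394886,
which makes a quick self-check of any reimplementation easy.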

> 
> >   
> > > +	return state->seed;
> > > +}
> > > +
> > > +
> > > +uint32_t random_range(struct random_state *state, uint32_t min, uint32_t max)
> > > +{
> > > +	uint32_t val = random_u32(state);
> > > +
> > > +	return val % (max - min + 1) + min;  
> > 
> > what happens if max == UINT_MAX and min = 0 ?
> > 
> > maybe:
> > 
> > if (max - min == UINT_MAX)
> > 	return val;  
> 
> Makes sense.
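With that check folded in, the helper could look like the sketch below
(spelling the limit for the fixed-width type; the guard covers any min/max
pair spanning the whole 32-bit range, not just 0..UINT_MAX):

uint32_t random_range(struct random_state *state, uint32_t min, uint32_t max)
{
	uint32_t val = random_u32(state);

	/* max - min + 1 wraps to 0 when the range covers all of u32 */
	if (max - min == UINT32_MAX)
		return val;

	return val % (max - min + 1) + min;
}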
> >   
> > > +}
> > > +
> > > +/*
> > > + * Returns true randomly in 'percent_true' cases (e.g if percent_true = 70.0,
> > > + * it will return true in 70.0% of cases)
> > > + */
> > > +bool random_decision(struct random_state *state, float percent_true)  
> > 
> > I'm not a fan of floats in the lib...
> >   
> > > +{
> > > +	if (percent_true == 0)
> > > +		return 0;
> > > +	if (percent_true == 100)
> > > +		return 1;
> > > +	return random_range(state, 1, 10000) < (uint32_t)(percent_true * 100);  
> > 
> > ...especially when you are only using 2 decimal places anyway  
> 
> I was thinking the same about this, there are pros and cons,
> Using a fixed point integer is a bit less usable but overall I don't mind
> using it.

maybe you can add a wrapper macro?

#define RANDOM_DECISION_F(state, percent) \
	random_decision((state), 100 * (percent))

would be functionally equivalent (I think), but only generate FP code
when actually used. (feel free to choose a nicer name for it)
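Putting the two suggestions together, an integer-only version plus the
wrapper might look like this (untested sketch; it uses <= rather than the
< proposed above so the probability comes out to exactly percent_true/10000):

/* percent_true is in hundredths of a percent: 7123 -> true 71.23% of the time */
bool random_decision(struct random_state *state, uint32_t percent_true)
{
	if (percent_true == 0)
		return false;
	if (percent_true >= 10000)
		return true;
	/* random_range() is inclusive on both ends, hence 1..10000 and <= */
	return random_range(state, 1, 10000) <= percent_true;
}

#define RANDOM_DECISION_F(state, percent) \
	random_decision((state), (uint32_t)((percent) * 100))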

> 
> 
> 
> > 
> > can you rewrite it to take an unsigned int? 
> > e.g. if percent_true = 7123, it will return true in 71.23% of the cases
> > 
> > then you can rewrite the last line like this:
> > 
> > return random_range(state, 1, 10000) < percent_true;
> >   
> > > +}
> > > diff --git a/lib/prng.h b/lib/prng.h
> > > new file mode 100644
> > > index 00000000..61d3a48b
> > > --- /dev/null
> > > +++ b/lib/prng.h
> > > @@ -0,0 +1,23 @@
> > > +
> > > +#ifndef SRC_LIB_PRNG_H_
> > > +#define SRC_LIB_PRNG_H_
> > > +
> > > +struct random_state {
> > > +	uint32_t seed;
> > > +};
> > > +
> > > +struct random_state new_random_state(uint32_t seed);
> > > +uint32_t random_u32(struct random_state *state);
> > > +
> > > +/*
> > > + * return a random number from min to max (included)
> > > + */
> > > +uint32_t random_range(struct random_state *state, uint32_t min, uint32_t max);
> > > +
> > > +/*
> > > + * Returns true randomly in 'percent_true' cases (e.g if percent_true = 70.0,
> > > + * it will return true in 70.0% of cases)
> > > + */
> > > +bool random_decision(struct random_state *state, float percent_true);
> > > +
> > > +#endif /* SRC_LIB_PRNG_H_ */  
> > 
> > and then put the rest below in a new patch  
> 
> No problem. Note that the x86-specific bits can be minimized, but that
> requires plumbing through several things you would otherwise take for
> granted; in particular, env vars and the APIC id are both x86-specific.

I'm not sure I understand your point. The lib code above does not depend
on the arch code below, so you can split this into a non-arch patch, and
an arch patch that uses the non-arch patch. I think it's cleaner to not
mix common code and arch code.

> 
> When the random number generator is wired up for a new arch,
> this can be fixed.
> 
> Best regards,
> 	Maxim Levitsky
> 
> >   
> > > diff --git a/lib/x86/random.c b/lib/x86/random.c
> > > new file mode 100644
> > > index 00000000..fcdd5fe8
> > > --- /dev/null
> > > +++ b/lib/x86/random.c
> > > @@ -0,0 +1,33 @@
> > > +
> > > +#include "libcflat.h"
> > > +#include "processor.h"
> > > +#include "prng.h"
> > > +#include "smp.h"
> > > +#include "asm/spinlock.h"
> > > +#include "random.h"
> > > +
> > > +static u32 test_seed;
> > > +static bool initialized;
> > > +
> > > +void init_prng(void)
> > > +{
> > > +	char *test_seed_str = getenv("TEST_SEED");
> > > +
> > > +	if (test_seed_str && strlen(test_seed_str))
> > > +		test_seed = atol(test_seed_str);
> > > +	else
> > > +#ifdef __x86_64__
> > > +		test_seed =  (u32)rdrand();
> > > +#else
> > > +		test_seed = (u32)(rdtsc() << 4);
> > > +#endif
> > > +	initialized = true;
> > > +
> > > +	printf("Test seed: %u\n", (unsigned int)test_seed);
> > > +}
> > > +
> > > +struct random_state get_prng(void)
> > > +{
> > > +	assert(initialized);
> > > +	return new_random_state(test_seed + this_cpu_read_smp_id());
> > > +}
> > > diff --git a/lib/x86/random.h b/lib/x86/random.h
> > > new file mode 100644
> > > index 00000000..795b450b
> > > --- /dev/null
> > > +++ b/lib/x86/random.h
> > > @@ -0,0 +1,17 @@
> > > +/*
> > > + * prng.h
> > > + *
> > > + *  Created on: Nov 9, 2022
> > > + *      Author: mlevitsk
> > > + */
> > > +
> > > +#ifndef SRC_LIB_X86_RANDOM_H_
> > > +#define SRC_LIB_X86_RANDOM_H_
> > > +
> > > +#include "libcflat.h"
> > > +#include "prng.h"
> > > +
> > > +void init_prng(void);
> > > +struct random_state get_prng(void);
> > > +
> > > +#endif /* SRC_LIB_X86_RANDOM_H_ */
> > > diff --git a/scripts/arch-run.bash b/scripts/arch-run.bash
> > > index 51e4b97b..238d19f8 100644
> > > --- a/scripts/arch-run.bash
> > > +++ b/scripts/arch-run.bash
> > > @@ -298,7 +298,7 @@ env_params ()
> > >  	KERNEL_EXTRAVERSION=${KERNEL_EXTRAVERSION%%[!0-9]*}
> > >  	! [[ $KERNEL_SUBLEVEL =~ ^[0-9]+$ ]] && unset $KERNEL_SUBLEVEL
> > >  	! [[ $KERNEL_EXTRAVERSION =~ ^[0-9]+$ ]] && unset $KERNEL_EXTRAVERSION
> > > -	env_add_params KERNEL_VERSION_STRING KERNEL_VERSION KERNEL_PATCHLEVEL KERNEL_SUBLEVEL KERNEL_EXTRAVERSION
> > > +	env_add_params KERNEL_VERSION_STRING KERNEL_VERSION KERNEL_PATCHLEVEL KERNEL_SUBLEVEL KERNEL_EXTRAVERSION TEST_SEED
> > >  }
> > >  
> > >  env_file ()
> > > diff --git a/x86/Makefile.common b/x86/Makefile.common
> > > index 698a48ab..fa0a50e6 100644
> > > --- a/x86/Makefile.common
> > > +++ b/x86/Makefile.common
> > > @@ -23,6 +23,7 @@ cflatobjs += lib/x86/stack.o
> > >  cflatobjs += lib/x86/fault_test.o
> > >  cflatobjs += lib/x86/delay.o
> > >  cflatobjs += lib/x86/pmu.o
> > > +cflatobjs += lib/x86/random.o
> > >  ifeq ($(CONFIG_EFI),y)
> > >  cflatobjs += lib/x86/amd_sev.o
> > >  cflatobjs += lib/efi.o  
> 
> 
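To make the intended flow concrete, here is a sketch of how a test would
consume this API; the iteration helper and loop bounds are made up for
illustration, only init_prng()/get_prng()/random_range() come from the patch.

/* once, early in main(), before any CPU calls get_prng() */
init_prng();

/* then, on each CPU that wants randomness */
static void stress_body(void)
{
	/* seeded from TEST_SEED (or RDRAND) plus this CPU's id, so runs
	 * are reproducible per CPU whenever TEST_SEED is set */
	struct random_state rng = get_prng();
	int i;

	for (i = 0; i < 1000; i++) {
		u32 pause = random_range(&rng, 0, 128);

		do_test_iteration(pause);	/* hypothetical test work */
	}
}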


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests
  2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
                   ` (26 preceding siblings ...)
  2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 27/27] x86: ipi_stress: add optional SVM support Maxim Levitsky
@ 2023-06-07 23:25 ` Sean Christopherson
  27 siblings, 0 replies; 56+ messages in thread
From: Sean Christopherson @ 2023-06-07 23:25 UTC (permalink / raw)
  To: Sean Christopherson, kvm, Maxim Levitsky
  Cc: Andrew Jones, Alexandru Elisei, Paolo Bonzini, Claudio Imbrenda,
	Thomas Huth, Alex Bennée, Nico Boehr, Cathy Avery,
	Janosch Frank

On Tue, 22 Nov 2022 18:11:25 +0200, Maxim Levitsky wrote:
> This is set of fixes and new unit tests that I developed for the
> KVM unit tests.
> 
> I also did some work to separate the SVM code into a minimal
> support library so that you could use it from an arbitrary test.
> 
> V2:
> 
> [...]

Applied select patches to kvm-x86 next, mostly things that are smallish and
straightforward.

Please, please split all of this stuff into more manageable series, with
one theme per series.  Even with the patches I applied out of the way, there
are at least 4 or 5 distinct series here.

[01/27] x86: replace irq_{enable|disable}() with sti()/cli()
        https://github.com/kvm-x86/kvm-unit-tests/commit/ed31b56333aa
[02/27] x86: introduce sti_nop() and sti_nop_cli()
        https://github.com/kvm-x86/kvm-unit-tests/commit/a159f4c91608
[03/27] x86: add few helper functions for apic local timer
        https://github.com/kvm-x86/kvm-unit-tests/commit/7a507c9f5b74
[04/27] svm: remove nop after stgi/clgi
        https://github.com/kvm-x86/kvm-unit-tests/commit/783f817a17f1
[05/27] svm: make svm_intr_intercept_mix_if/gif test a bit more robust
        https://github.com/kvm-x86/kvm-unit-tests/commit/d0ffdee8f95b
[06/27] svm: use apic_start_timer/apic_stop_timer instead of open coding it
        https://github.com/kvm-x86/kvm-unit-tests/commit/2e4e8a4fe921
[09/27] svm: add simple nested shutdown test.
        https://github.com/kvm-x86/kvm-unit-tests/commit/e5bedc838c3b

[13/27] svm: remove get_npt_pte extern
        https://github.com/kvm-x86/kvm-unit-tests/commit/cc15e55699e9

--
https://github.com/kvm-x86/kvm-unit-tests/tree/next

^ permalink raw reply	[flat|nested] 56+ messages in thread

end of thread, other threads:[~2023-06-07 23:27 UTC | newest]

Thread overview: 56+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-22 16:11 [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Maxim Levitsky
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 01/27] x86: replace irq_{enable|disable}() with sti()/cli() Maxim Levitsky
2022-12-01 13:46   ` Emanuele Giuseppe Esposito
2022-12-06 13:55     ` Maxim Levitsky
2022-12-06 14:15       ` Emanuele Giuseppe Esposito
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 02/27] x86: introduce sti_nop() and sti_nop_cli() Maxim Levitsky
2022-12-01 13:46   ` Emanuele Giuseppe Esposito
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 03/27] x86: add few helper functions for apic local timer Maxim Levitsky
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 04/27] svm: remove nop after stgi/clgi Maxim Levitsky
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 05/27] svm: make svm_intr_intercept_mix_if/gif test a bit more robust Maxim Levitsky
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 06/27] svm: use apic_start_timer/apic_stop_timer instead of open coding it Maxim Levitsky
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 07/27] x86: Add test for #SMI during interrupt window Maxim Levitsky
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 08/27] x86: Add a simple test for SYSENTER instruction Maxim Levitsky
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 09/27] svm: add simple nested shutdown test Maxim Levitsky
2022-12-01 13:46   ` Emanuele Giuseppe Esposito
2022-12-06 13:56     ` Maxim Levitsky
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 10/27] SVM: add two tests for exitintinto on exception Maxim Levitsky
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 11/27] lib: Add random number generator Maxim Levitsky
2022-11-23  9:28   ` Claudio Imbrenda
2022-11-23 12:54     ` Andrew Jones
2022-12-06 13:57       ` Maxim Levitsky
2022-12-06 14:07     ` Maxim Levitsky
2022-12-14 10:33       ` Claudio Imbrenda
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 12/27] x86: add IPI stress test Maxim Levitsky
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 13/27] svm: remove get_npt_pte extern Maxim Levitsky
2022-12-01 13:46   ` Emanuele Giuseppe Esposito
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 14/27] svm: move svm spec definitions to lib/x86/svm.h Maxim Levitsky
2022-12-01 13:54   ` Emanuele Giuseppe Esposito
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 15/27] svm: move some svm support functions into lib/x86/svm_lib.h Maxim Levitsky
2022-12-01 13:59   ` Emanuele Giuseppe Esposito
2022-12-06 14:10     ` Maxim Levitsky
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 16/27] svm: move setup_svm() to svm_lib.c Maxim Levitsky
2022-12-01 16:14   ` Emanuele Giuseppe Esposito
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 17/27] svm: correctly skip if NPT not supported Maxim Levitsky
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 18/27] svm: move vmcb_ident to svm_lib.c Maxim Levitsky
2022-12-01 16:18   ` Emanuele Giuseppe Esposito
2022-12-06 14:11     ` Maxim Levitsky
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 19/27] svm: rewerite vm entry macros Maxim Levitsky
2022-12-02 10:14   ` Emanuele Giuseppe Esposito
2022-12-06 13:56     ` Maxim Levitsky
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 20/27] svm: move v2 tests run into test_run Maxim Levitsky
2022-12-02  9:53   ` Emanuele Giuseppe Esposito
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 21/27] svm: cleanup the default_prepare Maxim Levitsky
2022-12-02  9:45   ` Emanuele Giuseppe Esposito
2022-12-06 13:56     ` Maxim Levitsky
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 22/27] svm: introduce svm_vcpu Maxim Levitsky
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 23/27] svm: introduce struct svm_test_context Maxim Levitsky
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 24/27] svm: use svm_test_context in v2 tests Maxim Levitsky
2022-12-02 10:27   ` Emanuele Giuseppe Esposito
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 25/27] svm: move nested vcpu to test context Maxim Levitsky
2022-12-02 10:22   ` Emanuele Giuseppe Esposito
2022-12-06 14:29     ` Maxim Levitsky
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 26/27] svm: move test_guest_func " Maxim Levitsky
2022-12-02 10:28   ` Emanuele Giuseppe Esposito
2022-11-22 16:11 ` [kvm-unit-tests PATCH v3 27/27] x86: ipi_stress: add optional SVM support Maxim Levitsky
2023-06-07 23:25 ` [kvm-unit-tests PATCH v3 00/27] kvm-unit-tests: set of fixes and new tests Sean Christopherson

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.