* [PATCH v2 0/3] KVM: arm64: selftests: Add edge cases tests for the arch timer
From: Ricardo Koller @ 2022-03-17 4:51 UTC
To: kvm, kvmarm, drjones; +Cc: maz, pbonzini
Add a new selftest that validates some edge cases related to the virtual
arch timer, for example:
- timers across counter roll-overs.
- moving counters ahead and behind pending timers.
- having the same timer condition firing multiple times.
The tests run while checking the state of the IRQs (e.g., pending when they
are supposed to be) and stressing things a bit by waiting for interrupts
while re-scheduling the vcpu (with sched_yield()), migrating the vcpu
between cores, or sleeping in userspace (with usleep()).
The first commit adds a timer utility function. The second commit adds
some sanity checks and basic tests for the timer. The third commit adds
the actual edge case tests (like forcing rollovers).
v1 -> v2:
- Remove the checks for timers firing within some margin; only leave the
checks for timers not firing ahead of time. Also remove the tests that
depend on timers firing within some margin. [Oliver, Marc]
- Collect R-b tag from Oliver (first commit). [Oliver]
- Multiple nits: replace wfi_ functions with wait_, reduce the use of macros,
drop typedefs, use IAR_SPURIOUS from the header, and move some comments and
functions to the top. [Oliver]
- Don't fail if the test has a single cpu available. [Oliver]
- Don't fail if there's no GICv3 available. [Oliver]
v1: https://lore.kernel.org/kvmarm/20220302172144.2734258-1-ricarkol@google.com/
There is a slight complication with where this series applies. The test added
here fails without commit cc94d47ce16d ("kvm: selftests: aarch64: fix assert in
gicv3_access_reg") which lives in kvmarm/next. However, it can't be built on
top of kvmarm/next as it depends on commit 456f89e0928a ("KVM: selftests:
aarch64: Skip tests if we can't create a vgic-v3") which is not in kvmarm/next.
Ricardo Koller (3):
KVM: arm64: selftests: add timer_get_tval() lib function
KVM: arm64: selftests: add arch_timer_edge_cases
KVM: arm64: selftests: add edge cases tests into arch_timer_edge_cases
tools/testing/selftests/kvm/.gitignore | 1 +
tools/testing/selftests/kvm/Makefile | 1 +
.../kvm/aarch64/arch_timer_edge_cases.c | 891 ++++++++++++++++++
.../kvm/include/aarch64/arch_timer.h | 18 +-
4 files changed, 910 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
--
2.35.1.723.g4982287a31-goog
* [PATCH v2 1/3] KVM: arm64: selftests: add timer_get_tval() lib function
From: Ricardo Koller @ 2022-03-17 4:51 UTC
To: kvm, kvmarm, drjones; +Cc: maz, pbonzini
Add timer_get_tval() to the arch timer library functions in
selftests/kvm. Bonus: change the set_tval function to take an int32_t
(tval is signed).
Reviewed-by: Oliver Upton <oupton@google.com>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
.../selftests/kvm/include/aarch64/arch_timer.h | 18 +++++++++++++++++-
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/include/aarch64/arch_timer.h b/tools/testing/selftests/kvm/include/aarch64/arch_timer.h
index cb7c03de3a21..93f35a4fc1aa 100644
--- a/tools/testing/selftests/kvm/include/aarch64/arch_timer.h
+++ b/tools/testing/selftests/kvm/include/aarch64/arch_timer.h
@@ -79,7 +79,7 @@ static inline uint64_t timer_get_cval(enum arch_timer timer)
return 0;
}
-static inline void timer_set_tval(enum arch_timer timer, uint32_t tval)
+static inline void timer_set_tval(enum arch_timer timer, int32_t tval)
{
switch (timer) {
case VIRTUAL:
@@ -95,6 +95,22 @@ static inline void timer_set_tval(enum arch_timer timer, uint32_t tval)
isb();
}
+static inline int32_t timer_get_tval(enum arch_timer timer)
+{
+ isb();
+ switch (timer) {
+ case VIRTUAL:
+ return (int32_t)read_sysreg(cntv_tval_el0);
+ case PHYSICAL:
+ return (int32_t)read_sysreg(cntp_tval_el0);
+ default:
+ GUEST_ASSERT_1(0, timer);
+ }
+
+ /* We should not reach here */
+ return 0;
+}
+
static inline void timer_set_ctl(enum arch_timer timer, uint32_t ctl)
{
switch (timer) {
--
2.35.1.723.g4982287a31-goog
* [PATCH v2 2/3] KVM: arm64: selftests: add arch_timer_edge_cases
From: Ricardo Koller @ 2022-03-17 4:51 UTC
To: kvm, kvmarm, drjones; +Cc: maz, pbonzini
Add an arch_timer edge-cases selftest. For now, just add some basic
sanity checks, and some stress conditions (like waiting for the timers
while re-scheduling the vcpu). The next commit will add the actual edge
case tests.
This test fails without commit a867e9d0cc1 ("KVM: arm64: Don't miss pending
interrupts for suspended vCPU").
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Reviewed-by: Raghavendra Rao Ananta <rananta@google.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
tools/testing/selftests/kvm/.gitignore | 1 +
tools/testing/selftests/kvm/Makefile | 1 +
.../kvm/aarch64/arch_timer_edge_cases.c | 568 ++++++++++++++++++
3 files changed, 570 insertions(+)
create mode 100644 tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index dce7de7755e6..8f7e0123dd28 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -1,5 +1,6 @@
# SPDX-License-Identifier: GPL-2.0-only
/aarch64/arch_timer
+/aarch64/arch_timer_edge_cases
/aarch64/debug-exceptions
/aarch64/get-reg-list
/aarch64/psci_cpu_on_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 17c3f0749f05..d4466ca76f21 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -100,6 +100,7 @@ TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
TEST_GEN_PROGS_x86_64 += system_counter_offset_test
TEST_GEN_PROGS_aarch64 += aarch64/arch_timer
+TEST_GEN_PROGS_aarch64 += aarch64/arch_timer_edge_cases
TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions
TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list
TEST_GEN_PROGS_aarch64 += aarch64/psci_cpu_on_test
diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c b/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
new file mode 100644
index 000000000000..dc399482e35d
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
@@ -0,0 +1,568 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * arch_timer_edge_cases.c - Tests the aarch64 timer IRQ functionality.
+ *
+ * Some of these tests program timers and then wait indefinitely for them to
+ * fire. We rely on having a timeout mechanism in the "runner", like
+ * tools/testing/selftests/kselftest/runner.sh.
+ *
+ * Copyright (c) 2021, Google LLC.
+ */
+
+#define _GNU_SOURCE
+
+#include <stdlib.h>
+#include <pthread.h>
+#include <linux/kvm.h>
+#include <linux/sizes.h>
+#include <linux/bitmap.h>
+#include <sched.h>
+#include <sys/sysinfo.h>
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "delay.h"
+#include "arch_timer.h"
+#include "gic.h"
+#include "vgic.h"
+
+#define VCPUID 0
+
+#define msecs_to_usecs(msec) ((msec) * 1000LL)
+
+#define CVAL_MAX (~0ULL)
+/* tval is a signed 32-bit int. */
+#define TVAL_MAX INT_MAX
+#define TVAL_MIN INT_MIN
+
+#define GICD_BASE_GPA 0x8000000ULL
+#define GICR_BASE_GPA 0x80A0000ULL
+
+/* After how much time we say there is no IRQ. */
+#define TIMEOUT_NO_IRQ_US msecs_to_usecs(50)
+
+/* A nice counter value to use as the starting one for most tests. */
+#define DEF_CNT (CVAL_MAX / 2)
+
+/* Number of runs. */
+#define NR_TEST_ITERS_DEF 5
+
+/* Shared with IRQ handler. */
+volatile struct test_vcpu_shared_data {
+ int handled;
+} shared_data;
+
+struct test_args {
+ /* Virtual or physical timer and counter tests. */
+ enum arch_timer timer;
+ /* Number of iterations. */
+ int iterations;
+};
+
+struct test_args test_args = {
+ /* Only testing VIRTUAL timers for now. */
+ .timer = VIRTUAL,
+ .iterations = NR_TEST_ITERS_DEF,
+};
+
+static int vtimer_irq, ptimer_irq;
+
+enum sync_cmd {
+ SET_REG_KVM_REG_ARM_TIMER_CNT,
+ USERSPACE_SCHED_YIELD,
+ USERSPACE_MIGRATE_SELF,
+};
+
+typedef void (*wait_method_t)(void);
+
+static void wait_for_non_spurious_irq(void);
+static void wait_poll_for_irq(void);
+static void wait_sched_poll_for_irq(void);
+static void wait_migrate_poll_for_irq(void);
+
+wait_method_t wait_method[] = {
+ wait_for_non_spurious_irq,
+ wait_poll_for_irq,
+ wait_sched_poll_for_irq,
+ wait_migrate_poll_for_irq,
+};
+
+enum timer_view {
+ TIMER_CVAL,
+ TIMER_TVAL,
+};
+
+/* Pair of pcpus for the test to alternate between. */
+static int pcpus[2] = {-1, -1};
+static int pcpus_idx;
+
+static uint32_t next_pcpu(void)
+{
+ pcpus_idx = 1 - pcpus_idx;
+ return pcpus[pcpus_idx];
+}
+
+#define ASSERT_IRQS_HANDLED_2(__nr, arg1, arg2) do { \
+ int __h = shared_data.handled; \
+ GUEST_ASSERT_4(__h == (__nr), __h, __nr, arg1, arg2); \
+} while (0)
+
+#define ASSERT_IRQS_HANDLED_1(__nr, arg1) \
+ ASSERT_IRQS_HANDLED_2((__nr), arg1, 0)
+
+#define ASSERT_IRQS_HANDLED(__nr) \
+ ASSERT_IRQS_HANDLED_2((__nr), 0, 0)
+
+#define SET_COUNTER(__ctr, __t) \
+ GUEST_SYNC_ARGS(SET_REG_KVM_REG_ARM_TIMER_CNT, (__ctr), (__t), 0, 0)
+
+#define USERSPACE_CMD(__cmd) \
+ GUEST_SYNC_ARGS(__cmd, 0, 0, 0, 0)
+
+#define USERSPACE_SCHEDULE() \
+ USERSPACE_CMD(USERSPACE_SCHED_YIELD)
+
+#define USERSPACE_MIGRATE_VCPU() \
+ USERSPACE_CMD(USERSPACE_MIGRATE_SELF)
+
+static void guest_irq_handler(struct ex_regs *regs)
+{
+ unsigned int intid = gic_get_and_ack_irq();
+ uint64_t cnt, cval;
+ uint32_t ctl;
+
+ if (intid == IAR_SPURIOUS)
+ return;
+
+ GUEST_ASSERT(gic_irq_get_pending(intid));
+
+ ctl = timer_get_ctl(test_args.timer);
+ cnt = timer_get_cntct(test_args.timer);
+ cval = timer_get_cval(test_args.timer);
+
+ GUEST_ASSERT_1(ctl & CTL_ISTATUS, ctl);
+
+ /* Disable and mask the timer. */
+ timer_set_ctl(test_args.timer, CTL_IMASK);
+ GUEST_ASSERT(!gic_irq_get_pending(intid));
+
+ shared_data.handled++;
+
+ /* The IRQ should not fire before time. */
+ GUEST_ASSERT_2(cnt >= cval, cnt, cval);
+
+ gic_set_eoi(intid);
+}
+
+static void program_timer_irq(uint64_t xval, uint32_t ctl, enum timer_view tv)
+{
+ shared_data.handled = 0;
+
+ switch (tv) {
+ case TIMER_CVAL:
+ timer_set_cval(test_args.timer, xval);
+ timer_set_ctl(test_args.timer, ctl);
+ break;
+ case TIMER_TVAL:
+ timer_set_tval(test_args.timer, xval);
+ timer_set_ctl(test_args.timer, ctl);
+ break;
+ default:
+ GUEST_ASSERT(0);
+ }
+}
+
+/*
+ * Should be called with IRQs masked.
+ */
+static void wait_for_non_spurious_irq(void)
+{
+ int h;
+
+ for (h = shared_data.handled; h == shared_data.handled;) {
+ asm volatile("wfi\n"
+ "msr daifclr, #2\n"
+ /* handle IRQ */
+ "msr daifset, #2\n"
+ : : : "memory");
+ }
+}
+
+/*
+ * Wait for a non-spurious IRQ by polling in the guest (userspace=0) or in
+ * userspace (e.g., userspace=1 and userspace_cmd=USERSPACE_SCHED_YIELD).
+ *
+ * Should be called with IRQs masked. Unlike the wfi-based wait above, masking
+ * is not strictly needed here, but it keeps the wait methods consistent.
+ */
+static void poll_for_non_spurious_irq(bool userspace,
+ enum sync_cmd userspace_cmd)
+{
+ int h;
+
+ h = shared_data.handled;
+
+ local_irq_enable();
+ while (h == shared_data.handled) {
+ if (userspace)
+ USERSPACE_CMD(userspace_cmd);
+ else
+ cpu_relax();
+ }
+ local_irq_disable();
+}
+
+static void wait_poll_for_irq(void)
+{
+ poll_for_non_spurious_irq(false, -1);
+}
+
+static void wait_sched_poll_for_irq(void)
+{
+ poll_for_non_spurious_irq(true, USERSPACE_SCHED_YIELD);
+}
+
+static void wait_migrate_poll_for_irq(void)
+{
+ poll_for_non_spurious_irq(true, USERSPACE_MIGRATE_SELF);
+}
+
+/*
+ * Reset the timer state to some nice values like the counter not being close
+ * to the edge, and the control register masked and disabled.
+ */
+static void reset_timer_state(uint64_t cnt)
+{
+ SET_COUNTER(cnt, test_args.timer);
+ timer_set_ctl(test_args.timer, CTL_IMASK);
+}
+
+static void test_timer(uint64_t reset_cnt, uint64_t xval,
+ wait_method_t wm, enum timer_view tv)
+{
+ local_irq_disable();
+
+ reset_timer_state(reset_cnt);
+
+ program_timer_irq(xval, CTL_ENABLE, tv);
+ wm();
+
+ ASSERT_IRQS_HANDLED_1(1, tv);
+ local_irq_enable();
+}
+
+static void test_basic_functionality(void)
+{
+ int32_t tval = (int32_t)msec_to_cycles(10);
+ uint64_t cval;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
+ wait_method_t wm = wait_method[i];
+
+ cval = DEF_CNT + msec_to_cycles(10);
+
+ test_timer(DEF_CNT, cval, wm, TIMER_CVAL);
+ test_timer(DEF_CNT, tval, wm, TIMER_TVAL);
+ }
+}
+
+/*
+ * This test checks basic timer behavior without actually firing timers, things
+ * like: the relationship between cval and tval, tval down-counting.
+ */
+static void timers_sanity_checks(bool use_sched)
+{
+ uint64_t cval;
+
+ reset_timer_state(DEF_CNT);
+
+ local_irq_disable();
+
+ /* cval in the past */
+ timer_set_cval(test_args.timer, timer_get_cntct(test_args.timer) - 1);
+ if (use_sched)
+ USERSPACE_SCHEDULE();
+ GUEST_ASSERT(timer_get_tval(test_args.timer) < 0);
+
+ /* tval in the past */
+ timer_set_tval(test_args.timer, -1);
+ if (use_sched)
+ USERSPACE_SCHEDULE();
+ GUEST_ASSERT(timer_get_cval(test_args.timer) <
+ timer_get_cntct(test_args.timer));
+
+ /* tval larger than TVAL_MAX. */
+ cval = timer_get_cntct(test_args.timer) + 2ULL * TVAL_MAX - 1;
+ timer_set_cval(test_args.timer, cval);
+ if (use_sched)
+ USERSPACE_SCHEDULE();
+ GUEST_ASSERT(timer_get_tval(test_args.timer) <= 0);
+ GUEST_ASSERT_EQ(cval, timer_get_cval(test_args.timer));
+
+ /* tval should keep down-counting from 0 to -1. */
+ SET_COUNTER(DEF_CNT, test_args.timer);
+ timer_set_tval(test_args.timer, 0);
+ if (use_sched)
+ USERSPACE_SCHEDULE();
+ /* We just need 1 cycle to pass. */
+ isb();
+ GUEST_ASSERT(timer_get_tval(test_args.timer) < 0);
+
+ local_irq_enable();
+
+ /* Mask and disable any pending timer. */
+ timer_set_ctl(test_args.timer, CTL_IMASK);
+}
+
+static void test_timers_sanity_checks(void)
+{
+ timers_sanity_checks(false);
+ /* Check how KVM saves/restores these edge-case values. */
+ timers_sanity_checks(true);
+}
+
+static void guest_run_iteration(void)
+{
+ test_timers_sanity_checks();
+ test_basic_functionality();
+}
+
+static void guest_code(void)
+{
+ int i;
+
+ local_irq_disable();
+
+ gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
+
+ timer_set_ctl(test_args.timer, CTL_IMASK);
+ timer_set_ctl(PHYSICAL, CTL_IMASK);
+
+ gic_irq_enable(vtimer_irq);
+ gic_irq_enable(ptimer_irq);
+ local_irq_enable();
+
+ for (i = 0; i < test_args.iterations; i++) {
+ GUEST_SYNC(i);
+ guest_run_iteration();
+ }
+
+ GUEST_DONE();
+}
+
+static void migrate_self(uint32_t new_pcpu)
+{
+ int ret;
+ cpu_set_t cpuset;
+ pthread_t thread;
+
+ thread = pthread_self();
+
+ CPU_ZERO(&cpuset);
+ CPU_SET(new_pcpu, &cpuset);
+
+ pr_debug("Migrating from %u to %u\n", sched_getcpu(), new_pcpu);
+
+ ret = pthread_setaffinity_np(thread, sizeof(cpuset), &cpuset);
+
+ TEST_ASSERT(ret == 0, "Failed to migrate to pCPU: %u; ret: %d\n",
+ new_pcpu, ret);
+}
+
+/*
+ * Set the two pcpus that the test will use to alternate between. Default to
+ * use the current cpu as pcpus[0] and the one right after in the affinity set
+ * as pcpus[1].
+ */
+static void set_default_pcpus(void)
+{
+ int max = get_nprocs();
+ int curr = sched_getcpu();
+ cpu_set_t cpuset;
+ long i;
+
+ TEST_ASSERT(max > 1, "Need at least 2 online pcpus.");
+
+ pcpus[0] = curr;
+
+ sched_getaffinity(getpid(), sizeof(cpu_set_t), &cpuset);
+ for (i = (curr + 1) % CPU_SETSIZE; i != curr; i = (i + 1) % CPU_SETSIZE) {
+ if (CPU_ISSET(i, &cpuset)) {
+ pcpus[1] = i;
+ break;
+ }
+ }
+
+ TEST_ASSERT(pcpus[1] != -1, "Couldn't find a second pcpu.");
+ pr_debug("pcpus: %d %d\n", pcpus[0], pcpus[1]);
+}
+
+static void kvm_set_cntxct(struct kvm_vm *vm, uint64_t cnt, enum arch_timer timer)
+{
+ TEST_ASSERT(timer == VIRTUAL,
+ "Only supports setting the virtual counter for now.");
+
+ struct kvm_one_reg reg = {
+ .id = KVM_REG_ARM_TIMER_CNT,
+ .addr = (uint64_t)&cnt,
+ };
+ vcpu_set_reg(vm, 0, &reg);
+}
+
+static void handle_sync(struct kvm_vm *vm, struct ucall *uc)
+{
+ enum sync_cmd cmd = uc->args[1];
+ uint64_t val = uc->args[2];
+ enum arch_timer timer = uc->args[3];
+
+ switch (cmd) {
+ case SET_REG_KVM_REG_ARM_TIMER_CNT:
+ kvm_set_cntxct(vm, val, timer);
+ break;
+ case USERSPACE_SCHED_YIELD:
+ sched_yield();
+ break;
+ case USERSPACE_MIGRATE_SELF:
+ migrate_self(next_pcpu());
+ break;
+ default:
+ break;
+ }
+}
+
+static void test_run(struct kvm_vm *vm)
+{
+ struct ucall uc;
+ int stage = 0;
+
+ /* Start on the first pcpu. */
+ migrate_self(pcpus[0]);
+
+ sync_global_to_guest(vm, test_args);
+
+ for (stage = 0; ; stage++) {
+ vcpu_run(vm, VCPUID);
+ switch (get_ucall(vm, VCPUID, &uc)) {
+ case UCALL_SYNC:
+ handle_sync(vm, &uc);
+ break;
+ case UCALL_DONE:
+ goto out;
+ case UCALL_ABORT:
+ TEST_FAIL("%s at %s:%ld\n\tvalues: %lu, %lu; %lu",
+ (const char *)uc.args[0], __FILE__, uc.args[1],
+ uc.args[2], uc.args[3], uc.args[4]);
+ goto out;
+ default:
+ TEST_FAIL("Unexpected guest exit\n");
+ }
+ }
+
+out:
+ return;
+}
+
+static void test_init_timer_irq(struct kvm_vm *vm)
+{
+ int vcpu_fd = vcpu_get_fd(vm, VCPUID);
+
+ kvm_device_access(vcpu_fd, KVM_ARM_VCPU_TIMER_CTRL,
+ KVM_ARM_VCPU_TIMER_IRQ_PTIMER, &ptimer_irq, false);
+ kvm_device_access(vcpu_fd, KVM_ARM_VCPU_TIMER_CTRL,
+ KVM_ARM_VCPU_TIMER_IRQ_VTIMER, &vtimer_irq, false);
+
+ sync_global_to_guest(vm, ptimer_irq);
+ sync_global_to_guest(vm, vtimer_irq);
+
+ pr_debug("ptimer_irq: %d; vtimer_irq: %d\n", ptimer_irq, vtimer_irq);
+}
+
+static struct kvm_vm *test_vm_create(void)
+{
+ struct kvm_vm *vm;
+ int ret;
+
+ vm = vm_create_default(VCPUID, 0, guest_code);
+
+ vm_init_descriptor_tables(vm);
+ vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
+
+ vcpu_init_descriptor_tables(vm, 0);
+
+ ucall_init(vm, NULL);
+ test_init_timer_irq(vm);
+ ret = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+ if (ret < 0) {
+ print_skip("Failed to create vgic-v3, skipping");
+ exit(KSFT_SKIP);
+ }
+
+ return vm;
+}
+
+static void test_print_help(char *name)
+{
+ pr_info("Usage: %s [-h] [-i iterations] [-w] [-p pcpu1,pcpu2]\n",
+ name);
+ pr_info("\t-i: Number of iterations (default: %u)\n",
+ NR_TEST_ITERS_DEF);
+ pr_info("\t-p: Pair of pcpus for the vcpus to alternate between.\n");
+ pr_info("\t-h: Print this help message\n");
+}
+
+static bool parse_args(int argc, char *argv[])
+{
+ int opt, ret;
+
+ while ((opt = getopt(argc, argv, "hi:p:")) != -1) {
+ switch (opt) {
+ case 'i':
+ test_args.iterations = atoi(optarg);
+ if (test_args.iterations <= 0) {
+ pr_info("Positive value needed for -i\n");
+ goto err;
+ }
+ break;
+ case 'p':
+ ret = sscanf(optarg, "%u,%u", &pcpus[0], &pcpus[1]);
+ if (ret != 2) {
+ pr_info("Invalid pcpus pair");
+ goto err;
+ }
+ break;
+ case 'h':
+ default:
+ goto err;
+ }
+ }
+
+ return true;
+
+err:
+ test_print_help(argv[0]);
+ return false;
+}
+
+int main(int argc, char *argv[])
+{
+ struct kvm_vm *vm;
+
+ /* Tell stdout not to buffer its content */
+ setbuf(stdout, NULL);
+
+ if (!parse_args(argc, argv))
+ exit(KSFT_SKIP);
+
+ if (get_nprocs() < 2)
+ exit(KSFT_SKIP);
+
+ if (pcpus[0] == -1 || pcpus[1] == -1)
+ set_default_pcpus();
+
+ vm = test_vm_create();
+ test_run(vm);
+ kvm_vm_free(vm);
+
+ return 0;
+}
--
2.35.1.723.g4982287a31-goog
+ timer_get_cntct(test_args.timer));
+
+ /* tval larger than TVAL_MAX. */
+ cval = timer_get_cntct(test_args.timer) + 2ULL * TVAL_MAX - 1;
+ timer_set_cval(test_args.timer, cval);
+ if (use_sched)
+ USERSPACE_SCHEDULE();
+ GUEST_ASSERT(timer_get_tval(test_args.timer) <= 0);
+ GUEST_ASSERT_EQ(cval, timer_get_cval(test_args.timer));
+
+ /* tval should keep down-counting from 0 to -1. */
+ SET_COUNTER(DEF_CNT, test_args.timer);
+ timer_set_tval(test_args.timer, 0);
+ if (use_sched)
+ USERSPACE_SCHEDULE();
+ /* We just need 1 cycle to pass. */
+ isb();
+ GUEST_ASSERT(timer_get_tval(test_args.timer) < 0);
+
+ local_irq_enable();
+
+ /* Mask and disable any pending timer. */
+ timer_set_ctl(test_args.timer, CTL_IMASK);
+}
+
+static void test_timers_sanity_checks(void)
+{
+ timers_sanity_checks(false);
+ /* Check how KVM saves/restores these edge-case values. */
+ timers_sanity_checks(true);
+}
+
+static void guest_run_iteration(void)
+{
+ test_timers_sanity_checks();
+ test_basic_functionality();
+}
+
+static void guest_code(void)
+{
+ int i;
+
+ local_irq_disable();
+
+ gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
+
+ timer_set_ctl(test_args.timer, CTL_IMASK);
+ timer_set_ctl(PHYSICAL, CTL_IMASK);
+
+ gic_irq_enable(vtimer_irq);
+ gic_irq_enable(ptimer_irq);
+ local_irq_enable();
+
+ for (i = 0; i < test_args.iterations; i++) {
+ GUEST_SYNC(i);
+ guest_run_iteration();
+ }
+
+ GUEST_DONE();
+}
+
+static void migrate_self(uint32_t new_pcpu)
+{
+ int ret;
+ cpu_set_t cpuset;
+ pthread_t thread;
+
+ thread = pthread_self();
+
+ CPU_ZERO(&cpuset);
+ CPU_SET(new_pcpu, &cpuset);
+
+ pr_debug("Migrating from %u to %u\n", sched_getcpu(), new_pcpu);
+
+ ret = pthread_setaffinity_np(thread, sizeof(cpuset), &cpuset);
+
+ TEST_ASSERT(ret == 0, "Failed to migrate to pCPU: %u; ret: %d\n",
+ new_pcpu, ret);
+}
+
+/*
+ * Set the two pcpus that the test will use to alternate between. Default to
+ * use the current cpu as pcpus[0] and the one right after in the affinity set
+ * as pcpus[1].
+ */
+static void set_default_pcpus(void)
+{
+ int max = get_nprocs();
+ int curr = sched_getcpu();
+ cpu_set_t cpuset;
+ long i;
+
+ TEST_ASSERT(max > 1, "Need at least 2 online pcpus.");
+
+ pcpus[0] = curr;
+
+ sched_getaffinity(getpid(), sizeof(cpu_set_t), &cpuset);
+ for (i = (curr + 1) % CPU_SETSIZE; i != curr; i = (i + 1) % CPU_SETSIZE) {
+ if (CPU_ISSET(i, &cpuset)) {
+ pcpus[1] = i;
+ break;
+ }
+ }
+
+ TEST_ASSERT(pcpus[1] != -1, "Couldn't find a second pcpu.");
+ pr_debug("pcpus: %d %d\n", pcpus[0], pcpus[1]);
+}
+
+static void kvm_set_cntxct(struct kvm_vm *vm, uint64_t cnt, enum arch_timer timer)
+{
+ TEST_ASSERT(timer == VIRTUAL,
+ "Only supports setting the virtual counter for now.");
+
+ struct kvm_one_reg reg = {
+ .id = KVM_REG_ARM_TIMER_CNT,
+ .addr = (uint64_t)&cnt,
+ };
+ vcpu_set_reg(vm, 0, &reg);
+}
+
+static void handle_sync(struct kvm_vm *vm, struct ucall *uc)
+{
+ enum sync_cmd cmd = uc->args[1];
+ uint64_t val = uc->args[2];
+ enum arch_timer timer = uc->args[3];
+
+ switch (cmd) {
+ case SET_REG_KVM_REG_ARM_TIMER_CNT:
+ kvm_set_cntxct(vm, val, timer);
+ break;
+ case USERSPACE_SCHED_YIELD:
+ sched_yield();
+ break;
+ case USERSPACE_MIGRATE_SELF:
+ migrate_self(next_pcpu());
+ break;
+ default:
+ break;
+ }
+}
+
+static void test_run(struct kvm_vm *vm)
+{
+ struct ucall uc;
+ int stage = 0;
+
+ /* Start on the first pcpu. */
+ migrate_self(pcpus[0]);
+
+ sync_global_to_guest(vm, test_args);
+
+ for (stage = 0; ; stage++) {
+ vcpu_run(vm, VCPUID);
+ switch (get_ucall(vm, VCPUID, &uc)) {
+ case UCALL_SYNC:
+ handle_sync(vm, &uc);
+ break;
+ case UCALL_DONE:
+ goto out;
+ case UCALL_ABORT:
+ TEST_FAIL("%s at %s:%ld\n\tvalues: %lu, %lu; %lu",
+ (const char *)uc.args[0], __FILE__, uc.args[1],
+ uc.args[2], uc.args[3], uc.args[4]);
+ goto out;
+ default:
+ TEST_FAIL("Unexpected guest exit\n");
+ }
+ }
+
+out:
+ return;
+}
+
+static void test_init_timer_irq(struct kvm_vm *vm)
+{
+ int vcpu_fd = vcpu_get_fd(vm, VCPUID);
+
+ kvm_device_access(vcpu_fd, KVM_ARM_VCPU_TIMER_CTRL,
+ KVM_ARM_VCPU_TIMER_IRQ_PTIMER, &ptimer_irq, false);
+ kvm_device_access(vcpu_fd, KVM_ARM_VCPU_TIMER_CTRL,
+ KVM_ARM_VCPU_TIMER_IRQ_VTIMER, &vtimer_irq, false);
+
+ sync_global_to_guest(vm, ptimer_irq);
+ sync_global_to_guest(vm, vtimer_irq);
+
+ pr_debug("ptimer_irq: %d; vtimer_irq: %d\n", ptimer_irq, vtimer_irq);
+}
+
+static struct kvm_vm *test_vm_create(void)
+{
+ struct kvm_vm *vm;
+ int ret;
+
+ vm = vm_create_default(VCPUID, 0, guest_code);
+
+ vm_init_descriptor_tables(vm);
+ vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
+
+ vcpu_init_descriptor_tables(vm, 0);
+
+ ucall_init(vm, NULL);
+ test_init_timer_irq(vm);
+ ret = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+ if (ret < 0) {
+ print_skip("Failed to create vgic-v3, skipping");
+ exit(KSFT_SKIP);
+ }
+
+ return vm;
+}
+
+static void test_print_help(char *name)
+{
+ pr_info("Usage: %s [-h] [-i iterations] [-w] [-p pcpu1,pcpu2]\n",
+ name);
+ pr_info("\t-i: Number of iterations (default: %u)\n",
+ NR_TEST_ITERS_DEF);
+ pr_info("\t-p: Pair of pcpus for the vcpus to alternate between.\n");
+ pr_info("\t-h: Print this help message\n");
+}
+
+static bool parse_args(int argc, char *argv[])
+{
+ int opt, ret;
+
+ while ((opt = getopt(argc, argv, "hi:p:")) != -1) {
+ switch (opt) {
+ case 'i':
+ test_args.iterations = atoi(optarg);
+ if (test_args.iterations <= 0) {
+ pr_info("Positive value needed for -i\n");
+ goto err;
+ }
+ break;
+ case 'p':
+ ret = sscanf(optarg, "%u,%u", &pcpus[0], &pcpus[1]);
+ if (ret != 2) {
+ pr_info("Invalid pcpus pair");
+ goto err;
+ }
+ break;
+ case 'h':
+ default:
+ goto err;
+ }
+ }
+
+ return true;
+
+err:
+ test_print_help(argv[0]);
+ return false;
+}
+
+int main(int argc, char *argv[])
+{
+ struct kvm_vm *vm;
+
+ /* Tell stdout not to buffer its content */
+ setbuf(stdout, NULL);
+
+ if (!parse_args(argc, argv))
+ exit(KSFT_SKIP);
+
+ if (get_nprocs() < 2)
+ exit(KSFT_SKIP);
+
+ if (pcpus[0] == -1 || pcpus[1] == -1)
+ set_default_pcpus();
+
+ vm = test_vm_create();
+ test_run(vm);
+ kvm_vm_free(vm);
+
+ return 0;
+}
--
2.35.1.723.g4982287a31-goog
* [PATCH v2 3/3] KVM: arm64: selftests: add edge cases tests into arch_timer_edge_cases
2022-03-17 4:51 ` Ricardo Koller
@ 2022-03-17 4:51 ` Ricardo Koller
-1 siblings, 0 replies; 20+ messages in thread
From: Ricardo Koller @ 2022-03-17 4:51 UTC (permalink / raw)
To: kvm, kvmarm, drjones; +Cc: maz, pbonzini
Add tests that validate some edge cases related to the virtual
arch-timer:
- timers in the past, including TVALs that roll over from 0.
- timers across counter roll-overs.
- moving counters ahead and behind pending timers.
- reprogramming timers.
- the same timer condition firing multiple times.
- masking/unmasking using the timer control mask.
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
.../kvm/aarch64/arch_timer_edge_cases.c | 329 +++++++++++++++++-
1 file changed, 326 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c b/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
index dc399482e35d..0eeb72767bea 100644
--- a/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
+++ b/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
@@ -2,6 +2,12 @@
/*
* arch_timer_edge_cases.c - Tests the aarch64 timer IRQ functionality.
*
+ * The test validates some edge cases related to the virtual arch-timer:
+ * - timers across counter roll-overs.
+ * - moving counters ahead and behind pending timers.
+ * - reprogramming timers.
+ * - the same timer condition firing multiple times.
+ *
* Some of these tests program timers and then wait indefinitely for them to
* fire. We rely on having a timeout mechanism in the "runner", like
* tools/testing/selftests/kselftest/runner.sh.
@@ -47,6 +53,9 @@
/* Number of runs. */
#define NR_TEST_ITERS_DEF 5
+/* Default "long" wait test time in ms. */
+#define LONG_WAIT_TEST_MS 100
+
/* Shared with IRQ handler. */
volatile struct test_vcpu_shared_data {
int handled;
@@ -55,6 +64,8 @@ volatile struct test_vcpu_shared_data {
struct test_args {
/* Virtual or physical timer and counter tests. */
enum arch_timer timer;
+ /* Delay used in the test_long_timer_delays test. */
+ uint64_t long_wait_ms;
/* Number of iterations. */
int iterations;
};
@@ -62,6 +73,7 @@ struct test_args {
struct test_args test_args = {
/* Only testing VIRTUAL timers for now. */
.timer = VIRTUAL,
+ .long_wait_ms = LONG_WAIT_TEST_MS,
.iterations = NR_TEST_ITERS_DEF,
};
@@ -69,10 +81,25 @@ static int vtimer_irq, ptimer_irq;
enum sync_cmd {
SET_REG_KVM_REG_ARM_TIMER_CNT,
+ USERSPACE_USLEEP,
USERSPACE_SCHED_YIELD,
USERSPACE_MIGRATE_SELF,
};
+typedef void (*sleep_method_t)(uint64_t usec);
+
+static void sleep_poll(uint64_t usec);
+static void sleep_sched_poll(uint64_t usec);
+static void sleep_in_userspace(uint64_t usec);
+static void sleep_migrate(uint64_t usec);
+
+sleep_method_t sleep_method[] = {
+ sleep_poll,
+ sleep_sched_poll,
+ sleep_migrate,
+ sleep_in_userspace,
+};
+
typedef void (*wait_method_t)(void);
static void wait_for_non_spurious_irq(void);
@@ -125,6 +152,9 @@ static uint32_t next_pcpu(void)
#define USERSPACE_MIGRATE_VCPU() \
USERSPACE_CMD(USERSPACE_MIGRATE_SELF)
+#define SLEEP_IN_USERSPACE(__usecs) \
+ GUEST_SYNC_ARGS(USERSPACE_USLEEP, (__usecs), 0, 0, 0)
+
static void guest_irq_handler(struct ex_regs *regs)
{
unsigned int intid = gic_get_and_ack_irq();
@@ -227,6 +257,60 @@ static void wait_migrate_poll_for_irq(void)
poll_for_non_spurious_irq(true, USERSPACE_MIGRATE_SELF);
}
+/*
+ * Sleep for usec microseconds by polling in the guest (userspace=0) or in
+ * userspace (e.g., userspace=1 and userspace_cmd=USERSPACE_SCHED_YIELD).
+ */
+static void guest_poll(enum arch_timer timer, uint64_t usec,
+ bool userspace, enum sync_cmd userspace_cmd)
+{
+ uint64_t cycles = usec_to_cycles(usec);
+ uint64_t start = timer_get_cntct(timer);
+
+ /*
+ * TODO: Take care of roll-overs. Right now, we are fine as we use the
+ * virtual timer/counter for all of our roll-over tests, and so we can use
+ * the physical counter for this function. Assert this (temporarily):
+ */
+ GUEST_ASSERT(test_args.timer == VIRTUAL && timer == PHYSICAL);
+
+ while ((timer_get_cntct(timer) - start) < cycles) {
+ if (userspace)
+ USERSPACE_CMD(userspace_cmd);
+ else
+ cpu_relax();
+ }
+}
+
+static void sleep_poll(uint64_t usec)
+{
+ if (test_args.timer == VIRTUAL)
+ guest_poll(PHYSICAL, usec, false, -1);
+ else
+ GUEST_ASSERT(0); /* Not implemented. */
+}
+
+static void sleep_sched_poll(uint64_t usec)
+{
+ if (test_args.timer == VIRTUAL)
+ guest_poll(PHYSICAL, usec, true, USERSPACE_SCHED_YIELD);
+ else
+ GUEST_ASSERT(0); /* Not implemented. */
+}
+
+static void sleep_migrate(uint64_t usec)
+{
+ if (test_args.timer == VIRTUAL)
+ guest_poll(PHYSICAL, usec, true, USERSPACE_MIGRATE_SELF);
+ else
+ GUEST_ASSERT(0); /* Not implemented. */
+}
+
+static void sleep_in_userspace(uint64_t usec)
+{
+ SLEEP_IN_USERSPACE(usec);
+}
+
/*
* Reset the timer state to some nice values like the counter not being close
* to the edge, and the control register masked and disabled.
@@ -251,6 +335,156 @@ static void test_timer(uint64_t reset_cnt, uint64_t xval,
local_irq_enable();
}
+/*
+ * Set the counter to just below the edge (CVAL_MAX) and set a timer that
+ * crosses it over.
+ */
+static void test_timers_across_rollovers(void)
+{
+ uint64_t edge_minus_5ms = CVAL_MAX - msec_to_cycles(5);
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
+ wait_method_t wm = wait_method[i];
+
+ test_timer(edge_minus_5ms, msec_to_cycles(10), wm, TIMER_TVAL);
+ test_timer(edge_minus_5ms, TVAL_MAX, wm, TIMER_TVAL);
+ test_timer(edge_minus_5ms, TVAL_MIN, wm, TIMER_TVAL);
+ }
+}
+
+/* Check that timer control masks actually mask a timer being fired. */
+static void test_timer_control_masked(sleep_method_t guest_sleep)
+{
+ reset_timer_state(DEF_CNT);
+
+ /* Local IRQs are not masked at this point. */
+
+ program_timer_irq(-1, CTL_ENABLE | CTL_IMASK, TIMER_TVAL);
+
+ /* Assume no IRQ after waiting TIMEOUT_NO_IRQ_US microseconds */
+ guest_sleep(TIMEOUT_NO_IRQ_US);
+
+ ASSERT_IRQS_HANDLED(0);
+ timer_set_ctl(test_args.timer, CTL_IMASK);
+}
+
+/* Test masking/unmasking a timer using the timer mask (not the IRQ mask). */
+static void test_timer_control_mask_then_unmask(wait_method_t wm)
+{
+ reset_timer_state(DEF_CNT);
+ program_timer_irq(-1, CTL_ENABLE | CTL_IMASK, TIMER_TVAL);
+
+ /* No IRQs because the timer is still masked. */
+ ASSERT_IRQS_HANDLED(0);
+
+ /* Unmask the timer, and then get an IRQ. */
+ local_irq_disable();
+ timer_set_ctl(test_args.timer, CTL_ENABLE);
+ wm();
+
+ ASSERT_IRQS_HANDLED(1);
+ local_irq_enable();
+}
+
+/*
+ * Set a timer at the edge, and wait with irqs masked for so long that the
+ * counter rolls over and the "Timer Condition" doesn't apply anymore. We
+ * should still get an IRQ.
+ */
+static void test_irq_masked_timer_across_rollover(sleep_method_t guest_sleep)
+{
+ local_irq_disable();
+ reset_timer_state(CVAL_MAX - msec_to_cycles(5));
+
+ program_timer_irq(-1, CTL_ENABLE, TIMER_TVAL);
+
+ GUEST_ASSERT(timer_get_ctl(test_args.timer) & CTL_ISTATUS);
+ guest_sleep(msecs_to_usecs(10));
+ GUEST_ASSERT((timer_get_ctl(test_args.timer) & CTL_ISTATUS) == 0);
+
+ local_irq_enable();
+ isb();
+
+ ASSERT_IRQS_HANDLED(0);
+}
+
+static void test_control_masks(void)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(sleep_method); i++)
+ test_timer_control_masked(sleep_method[i]);
+
+ for (i = 0; i < ARRAY_SIZE(wait_method); i++)
+ test_timer_control_mask_then_unmask(wait_method[i]);
+
+ for (i = 0; i < ARRAY_SIZE(sleep_method); i++)
+ test_irq_masked_timer_across_rollover(sleep_method[i]);
+}
+
+static void test_fire_a_timer_multiple_times(wait_method_t wm, int num)
+{
+ int i;
+
+ local_irq_disable();
+ reset_timer_state(DEF_CNT);
+
+ program_timer_irq(0, CTL_ENABLE, TIMER_TVAL);
+
+ for (i = 1; i <= num; i++) {
+ wm();
+
+ /*
+ * The IRQ handler masked and disabled the timer.
+ * Enable and unmask it again.
+ */
+ timer_set_ctl(test_args.timer, CTL_ENABLE);
+
+ ASSERT_IRQS_HANDLED(i);
+ }
+
+ local_irq_enable();
+}
+
+static void test_timers_fired_multiple_times(void)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(wait_method); i++)
+ test_fire_a_timer_multiple_times(wait_method[i], 1000);
+}
+
+/* Set a timer for cval1 then reprogram it to cval2. */
+static void test_reprogram_timer(wait_method_t wm, bool use_sched,
+ uint64_t cnt, uint64_t cval1, uint64_t cval2)
+{
+ local_irq_disable();
+ reset_timer_state(cnt);
+
+ program_timer_irq(cval1, CTL_ENABLE, TIMER_CVAL);
+
+ if (use_sched)
+ USERSPACE_SCHEDULE();
+
+ timer_set_cval(test_args.timer, cval2);
+
+ wm();
+
+ local_irq_enable();
+ ASSERT_IRQS_HANDLED(1);
+}
+
+static void test_reprogram_timers(void)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
+ test_reprogram_timer(wait_method[i], true, 0, CVAL_MAX, 0);
+ test_reprogram_timer(wait_method[i], false, 0, CVAL_MAX, 0);
+ }
+}
+
static void test_basic_functionality(void)
{
int32_t tval = (int32_t)msec_to_cycles(10);
@@ -306,7 +540,7 @@ static void timers_sanity_checks(bool use_sched)
if (use_sched)
USERSPACE_SCHEDULE();
/* We just need 1 cycle to pass. */
- isb();
+ sleep_poll(1);
GUEST_ASSERT(timer_get_tval(test_args.timer) < 0);
local_irq_enable();
@@ -322,10 +556,86 @@ static void test_timers_sanity_checks(void)
timers_sanity_checks(true);
}
+/*
+ * Set the counter to cnt_1, the [c|t]val to xval, the counter to cnt_2, and
+ * then wait for an IRQ.
+ */
+static void test_set_counter_after_programming_timer(uint64_t cnt_1,
+ uint64_t xval, uint64_t cnt_2, wait_method_t wm,
+ enum timer_view tv)
+{
+ local_irq_disable();
+
+ SET_COUNTER(cnt_1, test_args.timer);
+ timer_set_ctl(test_args.timer, CTL_IMASK);
+
+ program_timer_irq(xval, CTL_ENABLE, tv);
+ SET_COUNTER(cnt_2, test_args.timer);
+ wm();
+
+ ASSERT_IRQS_HANDLED(1);
+ local_irq_enable();
+}
+
+/* Set a timer and then move the counter ahead of it. */
+static void test_move_counters_after_timers(void)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
+ wait_method_t wm = wait_method[i];
+
+ test_set_counter_after_programming_timer(0, DEF_CNT,
+ DEF_CNT + 1, wm, TIMER_CVAL);
+ test_set_counter_after_programming_timer(CVAL_MAX, 1,
+ 2, wm, TIMER_CVAL);
+ test_set_counter_after_programming_timer(0, TVAL_MAX,
+ (uint64_t)TVAL_MAX + 1, wm, TIMER_TVAL);
+ }
+}
+
+static void test_timers_in_the_past(void)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
+ wait_method_t wm = wait_method[i];
+
+ test_timer(DEF_CNT, DEF_CNT - 1, wm, TIMER_CVAL);
+ test_timer(DEF_CNT, TVAL_MIN, wm, TIMER_TVAL);
+ test_timer(CVAL_MAX, 0, wm, TIMER_CVAL);
+ test_timer(DEF_CNT, 0, wm, TIMER_CVAL);
+ test_timer(DEF_CNT, 0, wm, TIMER_TVAL);
+ }
+}
+
+static void test_long_timer_delays(void)
+{
+ uint64_t wait_ms = test_args.long_wait_ms;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
+ wait_method_t wm = wait_method[i];
+
+ test_timer(0, msec_to_cycles(wait_ms), wm, TIMER_CVAL);
+ test_timer(0, msec_to_cycles(wait_ms), wm, TIMER_TVAL);
+ }
+}
+
static void guest_run_iteration(void)
{
test_timers_sanity_checks();
test_basic_functionality();
+
+ test_timers_in_the_past();
+ test_timers_across_rollovers();
+
+ test_move_counters_after_timers();
+ test_reprogram_timers();
+
+ test_control_masks();
+
+ test_timers_fired_multiple_times();
}
static void guest_code(void)
@@ -348,6 +658,7 @@ static void guest_code(void)
guest_run_iteration();
}
+ test_long_timer_delays();
GUEST_DONE();
}
@@ -420,6 +731,9 @@ static void handle_sync(struct kvm_vm *vm, struct ucall *uc)
case SET_REG_KVM_REG_ARM_TIMER_CNT:
kvm_set_cntxct(vm, val, timer);
break;
+ case USERSPACE_USLEEP:
+ usleep(val);
+ break;
case USERSPACE_SCHED_YIELD:
sched_yield();
break;
@@ -503,11 +817,13 @@ static struct kvm_vm *test_vm_create(void)
static void test_print_help(char *name)
{
- pr_info("Usage: %s [-h] [-i iterations] [-w] [-p pcpu1,pcpu2]\n",
+ pr_info("Usage: %s [-h] [-i iterations] [-p pcpu1,pcpu2] [-l long_wait_ms]\n",
name);
pr_info("\t-i: Number of iterations (default: %u)\n",
NR_TEST_ITERS_DEF);
pr_info("\t-p: Pair of pcpus for the vcpus to alternate between.\n");
+ pr_info("\t-l: Delta (in ms) used for long wait time test (default: %u)\n",
+ LONG_WAIT_TEST_MS);
pr_info("\t-h: Print this help message\n");
}
@@ -515,7 +831,7 @@ static bool parse_args(int argc, char *argv[])
{
int opt, ret;
- while ((opt = getopt(argc, argv, "hi:p:")) != -1) {
+ while ((opt = getopt(argc, argv, "hi:p:l:")) != -1) {
switch (opt) {
case 'i':
test_args.iterations = atoi(optarg);
@@ -531,6 +847,13 @@ static bool parse_args(int argc, char *argv[])
goto err;
}
break;
+ case 'l':
+ test_args.long_wait_ms = atoi(optarg);
+ if (test_args.long_wait_ms <= 0) {
+ pr_info("Positive value needed for -l\n");
+ goto err;
+ }
+ break;
case 'h':
default:
goto err;
--
2.35.1.723.g4982287a31-goog
+ pr_info("Positive value needed for -l\n");
+ goto err;
+ }
+ break;
case 'h':
default:
goto err;
--
2.35.1.723.g4982287a31-goog
* Re: [PATCH v2 2/3] KVM: arm64: selftests: add arch_timer_edge_cases
2022-03-17 4:51 ` Ricardo Koller
@ 2022-03-17 6:44 ` Oliver Upton
-1 siblings, 0 replies; 20+ messages in thread
From: Oliver Upton @ 2022-03-17 6:44 UTC (permalink / raw)
To: Ricardo Koller
Cc: kvm, kvmarm, drjones, pbonzini, maz, alexandru.elisei,
eric.auger, reijiw, rananta
On Wed, Mar 16, 2022 at 09:51:26PM -0700, Ricardo Koller wrote:
> Add an arch_timer edge-cases selftest. For now, just add some basic
> sanity checks, and some stress conditions (like waiting for the timers
> while re-scheduling the vcpu). The next commit will add the actual edge
> case tests.
>
> This test fails without a867e9d0cc1 "KVM: arm64: Don't miss pending
> interrupts for suspended vCPU".
>
> Reviewed-by: Reiji Watanabe <reijiw@google.com>
> Reviewed-by: Raghavendra Rao Ananta <rananta@google.com>
> Signed-off-by: Ricardo Koller <ricarkol@google.com>
> ---
> tools/testing/selftests/kvm/.gitignore | 1 +
> tools/testing/selftests/kvm/Makefile | 1 +
> .../kvm/aarch64/arch_timer_edge_cases.c | 568 ++++++++++++++++++
> 3 files changed, 570 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
>
> diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
> index dce7de7755e6..8f7e0123dd28 100644
> --- a/tools/testing/selftests/kvm/.gitignore
> +++ b/tools/testing/selftests/kvm/.gitignore
> @@ -1,5 +1,6 @@
> # SPDX-License-Identifier: GPL-2.0-only
> /aarch64/arch_timer
> +/aarch64/arch_timer_edge_cases
> /aarch64/debug-exceptions
> /aarch64/get-reg-list
> /aarch64/psci_cpu_on_test
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index 17c3f0749f05..d4466ca76f21 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -100,6 +100,7 @@ TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
> TEST_GEN_PROGS_x86_64 += system_counter_offset_test
>
> TEST_GEN_PROGS_aarch64 += aarch64/arch_timer
> +TEST_GEN_PROGS_aarch64 += aarch64/arch_timer_edge_cases
> TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions
> TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list
> TEST_GEN_PROGS_aarch64 += aarch64/psci_cpu_on_test
> diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c b/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
> new file mode 100644
> index 000000000000..dc399482e35d
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
> @@ -0,0 +1,568 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * arch_timer_edge_cases.c - Tests the aarch64 timer IRQ functionality.
> + *
> + * Some of these tests program timers and then wait indefinitely for them to
> + * fire. We rely on having a timeout mechanism in the "runner", like
> + * tools/testing/selftests/kselftest/runner.sh.
> + *
> + * Copyright (c) 2021, Google LLC.
> + */
> +
> +#define _GNU_SOURCE
> +
> +#include <stdlib.h>
> +#include <pthread.h>
> +#include <linux/kvm.h>
> +#include <linux/sizes.h>
> +#include <linux/bitmap.h>
> +#include <sched.h>
> +#include <sys/sysinfo.h>
> +
> +#include "kvm_util.h"
> +#include "processor.h"
> +#include "delay.h"
> +#include "arch_timer.h"
> +#include "gic.h"
> +#include "vgic.h"
> +
> +#define VCPUID 0
> +
> +#define msecs_to_usecs(msec) ((msec) * 1000LL)
nit: drop the macro and simply open-code the multiplication using
USEC_PER_MSEC
> +
> +#define CVAL_MAX (~0ULL)
> +/* tval is a signed 32-bit int. */
> +#define TVAL_MAX INT_MAX
> +#define TVAL_MIN INT_MIN
> +
> +#define GICD_BASE_GPA 0x8000000ULL
> +#define GICR_BASE_GPA 0x80A0000ULL
> +
> +/* After how much time we say there is no IRQ. */
> +#define TIMEOUT_NO_IRQ_US msecs_to_usecs(50)
> +
> +/* A nice counter value to use as the starting one for most tests. */
> +#define DEF_CNT (CVAL_MAX / 2)
> +
> +/* Number of runs. */
> +#define NR_TEST_ITERS_DEF 5
> +
> +/* Shared with IRQ handler. */
> +volatile struct test_vcpu_shared_data {
> + int handled;
> +} shared_data;
> +
> +struct test_args {
> + /* Virtual or physical timer and counter tests. */
> + enum arch_timer timer;
> + /* Number of iterations. */
> + int iterations;
> +};
> +
> +struct test_args test_args = {
> + /* Only testing VIRTUAL timers for now. */
> + .timer = VIRTUAL,
> + .iterations = NR_TEST_ITERS_DEF,
> +};
> +
> +static int vtimer_irq, ptimer_irq;
> +
> +enum sync_cmd {
> + SET_REG_KVM_REG_ARM_TIMER_CNT,
> + USERSPACE_SCHED_YIELD,
> + USERSPACE_MIGRATE_SELF,
> +};
> +
> +typedef void (*wait_method_t)(void);
> +
> +static void wait_for_non_spurious_irq(void);
> +static void wait_poll_for_irq(void);
> +static void wait_sched_poll_for_irq(void);
> +static void wait_migrate_poll_for_irq(void);
> +
> +wait_method_t wait_method[] = {
> + wait_for_non_spurious_irq,
> + wait_poll_for_irq,
> + wait_sched_poll_for_irq,
> + wait_migrate_poll_for_irq,
> +};
> +
> +enum timer_view {
> + TIMER_CVAL,
> + TIMER_TVAL,
> +};
> +
> +/* Pair of pcpus for the test to alternate between. */
> +static int pcpus[2] = {-1, -1};
> +static int pcpus_idx;
> +
> +static uint32_t next_pcpu(void)
> +{
> + pcpus_idx = 1 - pcpus_idx;
> + return pcpus[pcpus_idx];
> +}
> +
> +#define ASSERT_IRQS_HANDLED_2(__nr, arg1, arg2) do { \
> + int __h = shared_data.handled; \
> + GUEST_ASSERT_4(__h == (__nr), __h, __nr, arg1, arg2); \
> +} while (0)
> +
> +#define ASSERT_IRQS_HANDLED_1(__nr, arg1) \
> + ASSERT_IRQS_HANDLED_2((__nr), arg1, 0)
> +
> +#define ASSERT_IRQS_HANDLED(__nr) \
> + ASSERT_IRQS_HANDLED_2((__nr), 0, 0)
> +
> +#define SET_COUNTER(__ctr, __t) \
> + GUEST_SYNC_ARGS(SET_REG_KVM_REG_ARM_TIMER_CNT, (__ctr), (__t), 0, 0)
> +
> +#define USERSPACE_CMD(__cmd) \
> + GUEST_SYNC_ARGS(__cmd, 0, 0, 0, 0)
> +
> +#define USERSPACE_SCHEDULE() \
> + USERSPACE_CMD(USERSPACE_SCHED_YIELD)
> +
> +#define USERSPACE_MIGRATE_VCPU() \
> + USERSPACE_CMD(USERSPACE_MIGRATE_SELF)
> +
> +static void guest_irq_handler(struct ex_regs *regs)
> +{
> + unsigned int intid = gic_get_and_ack_irq();
> + uint64_t cnt, cval;
> + uint32_t ctl;
> +
> + if (intid == IAR_SPURIOUS)
> + return;
> +
> + GUEST_ASSERT(gic_irq_get_pending(intid));
> +
> + ctl = timer_get_ctl(test_args.timer);
> + cnt = timer_get_cntct(test_args.timer);
> + cval = timer_get_cval(test_args.timer);
> +
> + GUEST_ASSERT_1(ctl & CTL_ISTATUS, ctl);
> +
> + /* Disable and mask the timer. */
> + timer_set_ctl(test_args.timer, CTL_IMASK);
> + GUEST_ASSERT(!gic_irq_get_pending(intid));
> +
> + shared_data.handled++;
> +
> + /* The IRQ should not fire before time. */
> + GUEST_ASSERT_2(cnt >= cval, cnt, cval);
> +
> + gic_set_eoi(intid);
> +}
> +
> +static void program_timer_irq(uint64_t xval, uint32_t ctl, enum timer_view tv)
> +{
> + shared_data.handled = 0;
> +
> + switch (tv) {
> + case TIMER_CVAL:
> + timer_set_cval(test_args.timer, xval);
> + timer_set_ctl(test_args.timer, ctl);
> + break;
> + case TIMER_TVAL:
> + timer_set_tval(test_args.timer, xval);
> + timer_set_ctl(test_args.timer, ctl);
> + break;
> + default:
> + GUEST_ASSERT(0);
> + }
> +}
> +
> +/*
> + * Should be called with IRQs masked.
> + */
> +static void wait_for_non_spurious_irq(void)
> +{
> + int h;
> +
> + for (h = shared_data.handled; h == shared_data.handled;) {
how about:
h = shared_data.handled;
/* Wait for the IRQ handler to process an interrupt */
while (h == shared_data.handled) {
}
Brain is trained to think iteration when I see a for loop, and a while
loop does a better job hinting that we are waiting for a condition to be
met.
> + asm volatile("wfi\n"
> + "msr daifclr, #2\n"
> + /* handle IRQ */
I believe an isb is owed here (DDI0487G.b D1.13.4). Annoyingly, I am
having a hard time finding the same language in the H.a revision of the
manual :-/
> + "msr daifset, #2\n"
> + : : : "memory");
> + }
> +}
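For what it's worth, assuming the cited barrier requirement applies, the loop body above with the `isb` added would look something like this (untested sketch):

```
	asm volatile("wfi\n"
		     "msr daifclr, #2\n"
		     /* handle IRQ */
		     "isb\n"
		     "msr daifset, #2\n"
		     : : : "memory");
```

The `isb` after `daifclr` forces the PSTATE.I change to be synchronized, so the pending IRQ is guaranteed to be taken before interrupts are masked again.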
> +
> +/*
> + * Wait for a non-spurious IRQ by polling in the guest (userspace=0) or in
> + * userspace (e.g., userspace=1 and userspace_cmd=USERSPACE_SCHED_YIELD).
> + *
> + * Should be called with IRQs masked. Unlike the wfi variant above, masking is
> + * not strictly needed here, but it keeps all the wait methods consistent.
> + */
> +static void poll_for_non_spurious_irq(bool userspace,
> + enum sync_cmd userspace_cmd)
> +{
> + int h;
> +
> + h = shared_data.handled;
> +
> + local_irq_enable();
> + while (h == shared_data.handled) {
> + if (userspace)
> + USERSPACE_CMD(userspace_cmd);
> + else
> + cpu_relax();
> + }
> + local_irq_disable();
> +}
> +
> +static void wait_poll_for_irq(void)
> +{
> + poll_for_non_spurious_irq(false, -1);
> +}
> +
> +static void wait_sched_poll_for_irq(void)
> +{
> + poll_for_non_spurious_irq(true, USERSPACE_SCHED_YIELD);
> +}
> +
> +static void wait_migrate_poll_for_irq(void)
> +{
> + poll_for_non_spurious_irq(true, USERSPACE_MIGRATE_SELF);
> +}
> +
> +/*
> + * Reset the timer state to some nice values like the counter not being close
> + * to the edge, and the control register masked and disabled.
> + */
> +static void reset_timer_state(uint64_t cnt)
> +{
> + SET_COUNTER(cnt, test_args.timer);
> + timer_set_ctl(test_args.timer, CTL_IMASK);
> +}
> +
> +static void test_timer(uint64_t reset_cnt, uint64_t xval,
> + wait_method_t wm, enum timer_view tv)
> +{
> + local_irq_disable();
> +
> + reset_timer_state(reset_cnt);
> +
> + program_timer_irq(xval, CTL_ENABLE, tv);
> + wm();
> +
> + ASSERT_IRQS_HANDLED_1(1, tv);
> + local_irq_enable();
> +}
> +
> +static void test_basic_functionality(void)
> +{
> + int32_t tval = (int32_t)msec_to_cycles(10);
> + uint64_t cval;
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
> + wait_method_t wm = wait_method[i];
> +
> + cval = DEF_CNT + msec_to_cycles(10);
> +
> + test_timer(DEF_CNT, cval, wm, TIMER_CVAL);
> + test_timer(DEF_CNT, tval, wm, TIMER_TVAL);
> + }
> +}
> +
> +/*
> + * This test checks basic timer behavior without actually firing timers, things
> + * like: the relationship between cval and tval, tval down-counting.
> + */
> +static void timers_sanity_checks(bool use_sched)
> +{
> + uint64_t cval;
> +
> + reset_timer_state(DEF_CNT);
> +
> + local_irq_disable();
> +
> + /* cval in the past */
> + timer_set_cval(test_args.timer, timer_get_cntct(test_args.timer) - 1);
> + if (use_sched)
> + USERSPACE_SCHEDULE();
> + GUEST_ASSERT(timer_get_tval(test_args.timer) < 0);
> +
> + /* tval in the past */
> + timer_set_tval(test_args.timer, -1);
> + if (use_sched)
> + USERSPACE_SCHEDULE();
> + GUEST_ASSERT(timer_get_cval(test_args.timer) <
> + timer_get_cntct(test_args.timer));
> +
> + /* tval larger than TVAL_MAX. */
> + cval = timer_get_cntct(test_args.timer) + 2ULL * TVAL_MAX - 1;
> + timer_set_cval(test_args.timer, cval);
> + if (use_sched)
> + USERSPACE_SCHEDULE();
> + GUEST_ASSERT(timer_get_tval(test_args.timer) <= 0);
> + GUEST_ASSERT_EQ(cval, timer_get_cval(test_args.timer));
> +
> + /* tval should keep down-counting from 0 to -1. */
> + SET_COUNTER(DEF_CNT, test_args.timer);
> + timer_set_tval(test_args.timer, 0);
> + if (use_sched)
> + USERSPACE_SCHEDULE();
> + /* We just need 1 cycle to pass. */
> + isb();
Somewhat paranoid, but:
If the CPU retires the ISB much more quickly than the counter ticks, it's
possible that you could observe an invalid TVAL even with a valid
implementation.
What if you spin waiting for CNT to increment before the assertion? Then,
if the assertion still fails, you know for sure (and can tell by reading
the test) that the implementation is broken.
> + GUEST_ASSERT(timer_get_tval(test_args.timer) < 0);
> +
> + local_irq_enable();
> +
> + /* Mask and disable any pending timer. */
> + timer_set_ctl(test_args.timer, CTL_IMASK);
> +}
> +
> +static void test_timers_sanity_checks(void)
> +{
> + timers_sanity_checks(false);
> + /* Check how KVM saves/restores these edge-case values. */
> + timers_sanity_checks(true);
> +}
> +
> +static void guest_run_iteration(void)
> +{
> + test_timers_sanity_checks();
> + test_basic_functionality();
> +}
> +
> +static void guest_code(void)
> +{
> + int i;
> +
> + local_irq_disable();
> +
> + gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
> +
> + timer_set_ctl(test_args.timer, CTL_IMASK);
> + timer_set_ctl(PHYSICAL, CTL_IMASK);
> +
> + gic_irq_enable(vtimer_irq);
> + gic_irq_enable(ptimer_irq);
> + local_irq_enable();
> +
> + for (i = 0; i < test_args.iterations; i++) {
> + GUEST_SYNC(i);
> + guest_run_iteration();
> + }
> +
> + GUEST_DONE();
> +}
> +
> +static void migrate_self(uint32_t new_pcpu)
> +{
> + int ret;
> + cpu_set_t cpuset;
> + pthread_t thread;
> +
> + thread = pthread_self();
> +
> + CPU_ZERO(&cpuset);
> + CPU_SET(new_pcpu, &cpuset);
> +
> + pr_debug("Migrating from %u to %u\n", sched_getcpu(), new_pcpu);
> +
> + ret = pthread_setaffinity_np(thread, sizeof(cpuset), &cpuset);
> +
> + TEST_ASSERT(ret == 0, "Failed to migrate to pCPU: %u; ret: %d\n",
> + new_pcpu, ret);
> +}
> +
> +/*
> + * Set the two pcpus that the test will use to alternate between. Default to
> + * use the current cpu as pcpus[0] and the one right after in the affinity set
> + * as pcpus[1].
> + */
> +static void set_default_pcpus(void)
> +{
> + int max = get_nprocs();
> + int curr = sched_getcpu();
> + cpu_set_t cpuset;
> + long i;
> +
> + TEST_ASSERT(max > 1, "Need at least 2 online pcpus.");
> +
I don't think this could ever be reached, given that you exit(KSFT_SKIP)
for this condition. Maybe just take the string from this assertion and
add a pr_skip() to your early exit.
> + pcpus[0] = curr;
> +
> + sched_getaffinity(getpid(), sizeof(cpu_set_t), &cpuset);
> + for (i = (curr + 1) % CPU_SETSIZE; i != curr; i = (i + 1) % CPU_SETSIZE) {
> + if (CPU_ISSET(i, &cpuset)) {
> + pcpus[1] = i;
> + break;
> + }
> + }
> +
> + TEST_ASSERT(pcpus[1] != -1, "Couldn't find a second pcpu.");
> + pr_debug("pcpus: %d %d\n", pcpus[0], pcpus[1]);
> +}
> +
> +static void kvm_set_cntxct(struct kvm_vm *vm, uint64_t cnt, enum arch_timer timer)
> +{
> + TEST_ASSERT(timer == VIRTUAL,
> + "Only supports setting the virtual counter for now.");
> +
> + struct kvm_one_reg reg = {
> + .id = KVM_REG_ARM_TIMER_CNT,
> + .addr = (uint64_t)&cnt,
> + };
> + vcpu_set_reg(vm, 0, &reg);
> +}
> +
> +static void handle_sync(struct kvm_vm *vm, struct ucall *uc)
> +{
> + enum sync_cmd cmd = uc->args[1];
> + uint64_t val = uc->args[2];
> + enum arch_timer timer = uc->args[3];
> +
> + switch (cmd) {
> + case SET_REG_KVM_REG_ARM_TIMER_CNT:
> + kvm_set_cntxct(vm, val, timer);
> + break;
> + case USERSPACE_SCHED_YIELD:
> + sched_yield();
> + break;
> + case USERSPACE_MIGRATE_SELF:
> + migrate_self(next_pcpu());
> + break;
> + default:
> + break;
> + }
> +}
> +
> +static void test_run(struct kvm_vm *vm)
> +{
> + struct ucall uc;
> + int stage = 0;
> +
> + /* Start on the first pcpu. */
> + migrate_self(pcpus[0]);
> +
> + sync_global_to_guest(vm, test_args);
> +
> + for (stage = 0; ; stage++) {
> + vcpu_run(vm, VCPUID);
> + switch (get_ucall(vm, VCPUID, &uc)) {
> + case UCALL_SYNC:
> + handle_sync(vm, &uc);
> + break;
> + case UCALL_DONE:
> + goto out;
> + case UCALL_ABORT:
> + TEST_FAIL("%s at %s:%ld\n\tvalues: %lu, %lu; %lu",
> + (const char *)uc.args[0], __FILE__, uc.args[1],
> + uc.args[2], uc.args[3], uc.args[4]);
> + goto out;
> + default:
> + TEST_FAIL("Unexpected guest exit\n");
> + }
> + }
> +
> +out:
> + return;
> +}
> +
> +static void test_init_timer_irq(struct kvm_vm *vm)
> +{
> + int vcpu_fd = vcpu_get_fd(vm, VCPUID);
> +
> + kvm_device_access(vcpu_fd, KVM_ARM_VCPU_TIMER_CTRL,
> + KVM_ARM_VCPU_TIMER_IRQ_PTIMER, &ptimer_irq, false);
> + kvm_device_access(vcpu_fd, KVM_ARM_VCPU_TIMER_CTRL,
> + KVM_ARM_VCPU_TIMER_IRQ_VTIMER, &vtimer_irq, false);
> +
> + sync_global_to_guest(vm, ptimer_irq);
> + sync_global_to_guest(vm, vtimer_irq);
> +
> + pr_debug("ptimer_irq: %d; vtimer_irq: %d\n", ptimer_irq, vtimer_irq);
> +}
> +
> +static struct kvm_vm *test_vm_create(void)
> +{
> + struct kvm_vm *vm;
> + int ret;
> +
> + vm = vm_create_default(VCPUID, 0, guest_code);
> +
> + vm_init_descriptor_tables(vm);
> + vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
> +
> + vcpu_init_descriptor_tables(vm, 0);
> +
> + ucall_init(vm, NULL);
> + test_init_timer_irq(vm);
> + ret = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
> + if (ret < 0) {
> + print_skip("Failed to create vgic-v3, skipping");
I believe print_skip() already appends something about skipping the test.
> + exit(KSFT_SKIP);
> + }
> +
> + return vm;
> +}
> +
> +static void test_print_help(char *name)
> +{
> + pr_info("Usage: %s [-h] [-i iterations] [-w] [-p pcpu1,pcpu2]\n",
> + name);
> + pr_info("\t-i: Number of iterations (default: %u)\n",
> + NR_TEST_ITERS_DEF);
> + pr_info("\t-p: Pair of pcpus for the vcpus to alternate between.\n");
> + pr_info("\t-h: Print this help message\n");
> +}
> +
> +static bool parse_args(int argc, char *argv[])
> +{
> + int opt, ret;
> +
> + while ((opt = getopt(argc, argv, "hi:p:")) != -1) {
> + switch (opt) {
> + case 'i':
> + test_args.iterations = atoi(optarg);
> + if (test_args.iterations <= 0) {
> + pr_info("Positive value needed for -i\n");
> + goto err;
> + }
> + break;
> + case 'p':
> + ret = sscanf(optarg, "%u,%u", &pcpus[0], &pcpus[1]);
It may also be useful to check if the desired CPUs are in the set of
online CPUs, exiting early if not.
> + if (ret != 2) {
> + pr_info("Invalid pcpus pair");
> + goto err;
> + }
> + break;
> + case 'h':
> + default:
> + goto err;
> + }
> + }
> +
> + return true;
> +
> +err:
> + test_print_help(argv[0]);
> + return false;
> +}
> +
> +int main(int argc, char *argv[])
> +{
> + struct kvm_vm *vm;
> +
> + /* Tell stdout not to buffer its content */
> + setbuf(stdout, NULL);
> +
> + if (!parse_args(argc, argv))
> + exit(KSFT_SKIP);
> +
> + if (get_nprocs() < 2)
> + exit(KSFT_SKIP);
> +
> + if (pcpus[0] == -1 || pcpus[1] == -1)
> + set_default_pcpus();
> +
> + vm = test_vm_create();
> + test_run(vm);
> + kvm_vm_free(vm);
> +
> + return 0;
> +}
> --
> 2.35.1.723.g4982287a31-goog
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v2 2/3] KVM: arm64: selftests: add arch_timer_edge_cases
@ 2022-03-17 6:44 ` Oliver Upton
0 siblings, 0 replies; 20+ messages in thread
From: Oliver Upton @ 2022-03-17 6:44 UTC (permalink / raw)
To: Ricardo Koller; +Cc: kvm, maz, pbonzini, kvmarm
On Wed, Mar 16, 2022 at 09:51:26PM -0700, Ricardo Koller wrote:
> Add an arch_timer edge-cases selftest. For now, just add some basic
> sanity checks, and some stress conditions (like waiting for the timers
> while re-scheduling the vcpu). The next commit will add the actual edge
> case tests.
>
> This test fails without a867e9d0cc1 "KVM: arm64: Don't miss pending
> interrupts for suspended vCPU".
>
> Reviewed-by: Reiji Watanabe <reijiw@google.com>
> Reviewed-by: Raghavendra Rao Ananta <rananta@google.com>
> Signed-off-by: Ricardo Koller <ricarkol@google.com>
> ---
> tools/testing/selftests/kvm/.gitignore | 1 +
> tools/testing/selftests/kvm/Makefile | 1 +
> .../kvm/aarch64/arch_timer_edge_cases.c | 568 ++++++++++++++++++
> 3 files changed, 570 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
>
> diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
> index dce7de7755e6..8f7e0123dd28 100644
> --- a/tools/testing/selftests/kvm/.gitignore
> +++ b/tools/testing/selftests/kvm/.gitignore
> @@ -1,5 +1,6 @@
> # SPDX-License-Identifier: GPL-2.0-only
> /aarch64/arch_timer
> +/aarch64/arch_timer_edge_cases
> /aarch64/debug-exceptions
> /aarch64/get-reg-list
> /aarch64/psci_cpu_on_test
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index 17c3f0749f05..d4466ca76f21 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -100,6 +100,7 @@ TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
> TEST_GEN_PROGS_x86_64 += system_counter_offset_test
>
> TEST_GEN_PROGS_aarch64 += aarch64/arch_timer
> +TEST_GEN_PROGS_aarch64 += aarch64/arch_timer_edge_cases
> TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions
> TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list
> TEST_GEN_PROGS_aarch64 += aarch64/psci_cpu_on_test
> diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c b/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
> new file mode 100644
> index 000000000000..dc399482e35d
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
> @@ -0,0 +1,568 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * arch_timer_edge_cases.c - Tests the aarch64 timer IRQ functionality.
> + *
> + * Some of these tests program timers and then wait indefinitely for them to
> + * fire. We rely on having a timeout mechanism in the "runner", like
> + * tools/testing/selftests/kselftest/runner.sh.
> + *
> + * Copyright (c) 2021, Google LLC.
> + */
> +
> +#define _GNU_SOURCE
> +
> +#include <stdlib.h>
> +#include <pthread.h>
> +#include <linux/kvm.h>
> +#include <linux/sizes.h>
> +#include <linux/bitmap.h>
> +#include <sched.h>
> +#include <sys/sysinfo.h>
> +
> +#include "kvm_util.h"
> +#include "processor.h"
> +#include "delay.h"
> +#include "arch_timer.h"
> +#include "gic.h"
> +#include "vgic.h"
> +
> +#define VCPUID 0
> +
> +#define msecs_to_usecs(msec) ((msec) * 1000LL)
nit: drop the macro and simply open code the multiplication of
USEC_PER_MSEC
> +
> +#define CVAL_MAX (~0ULL)
> +/* tval is a signed 32-bit int. */
> +#define TVAL_MAX INT_MAX
> +#define TVAL_MIN INT_MIN
> +
> +#define GICD_BASE_GPA 0x8000000ULL
> +#define GICR_BASE_GPA 0x80A0000ULL
> +
> +/* After how much time we say there is no IRQ. */
> +#define TIMEOUT_NO_IRQ_US msecs_to_usecs(50)
> +
> +/* A nice counter value to use as the starting one for most tests. */
> +#define DEF_CNT (CVAL_MAX / 2)
> +
> +/* Number of runs. */
> +#define NR_TEST_ITERS_DEF 5
> +
> +/* Shared with IRQ handler. */
> +volatile struct test_vcpu_shared_data {
> + int handled;
> +} shared_data;
> +
> +struct test_args {
> + /* Virtual or physical timer and counter tests. */
> + enum arch_timer timer;
> + /* Number of iterations. */
> + int iterations;
> +};
> +
> +struct test_args test_args = {
> + /* Only testing VIRTUAL timers for now. */
> + .timer = VIRTUAL,
> + .iterations = NR_TEST_ITERS_DEF,
> +};
> +
> +static int vtimer_irq, ptimer_irq;
> +
> +enum sync_cmd {
> + SET_REG_KVM_REG_ARM_TIMER_CNT,
> + USERSPACE_SCHED_YIELD,
> + USERSPACE_MIGRATE_SELF,
> +};
> +
> +typedef void (*wait_method_t)(void);
> +
> +static void wait_for_non_spurious_irq(void);
> +static void wait_poll_for_irq(void);
> +static void wait_sched_poll_for_irq(void);
> +static void wait_migrate_poll_for_irq(void);
> +
> +wait_method_t wait_method[] = {
> + wait_for_non_spurious_irq,
> + wait_poll_for_irq,
> + wait_sched_poll_for_irq,
> + wait_migrate_poll_for_irq,
> +};
> +
> +enum timer_view {
> + TIMER_CVAL,
> + TIMER_TVAL,
> +};
> +
> +/* Pair of pcpus for the test to alternate between. */
> +static int pcpus[2] = {-1, -1};
> +static int pcpus_idx;
> +
> +static uint32_t next_pcpu(void)
> +{
> + pcpus_idx = 1 - pcpus_idx;
> + return pcpus[pcpus_idx];
> +}
> +
> +#define ASSERT_IRQS_HANDLED_2(__nr, arg1, arg2) do { \
> + int __h = shared_data.handled; \
> + GUEST_ASSERT_4(__h == (__nr), __h, __nr, arg1, arg2); \
> +} while (0)
> +
> +#define ASSERT_IRQS_HANDLED_1(__nr, arg1) \
> + ASSERT_IRQS_HANDLED_2((__nr), arg1, 0)
> +
> +#define ASSERT_IRQS_HANDLED(__nr) \
> + ASSERT_IRQS_HANDLED_2((__nr), 0, 0)
> +
> +#define SET_COUNTER(__ctr, __t) \
> + GUEST_SYNC_ARGS(SET_REG_KVM_REG_ARM_TIMER_CNT, (__ctr), (__t), 0, 0)
> +
> +#define USERSPACE_CMD(__cmd) \
> + GUEST_SYNC_ARGS(__cmd, 0, 0, 0, 0)
> +
> +#define USERSPACE_SCHEDULE() \
> + USERSPACE_CMD(USERSPACE_SCHED_YIELD)
> +
> +#define USERSPACE_MIGRATE_VCPU() \
> + USERSPACE_CMD(USERSPACE_MIGRATE_SELF)
> +
> +static void guest_irq_handler(struct ex_regs *regs)
> +{
> + unsigned int intid = gic_get_and_ack_irq();
> + uint64_t cnt, cval;
> + uint32_t ctl;
> +
> + if (intid == IAR_SPURIOUS)
> + return;
> +
> + GUEST_ASSERT(gic_irq_get_pending(intid));
> +
> + ctl = timer_get_ctl(test_args.timer);
> + cnt = timer_get_cntct(test_args.timer);
> + cval = timer_get_cval(test_args.timer);
> +
> + GUEST_ASSERT_1(ctl & CTL_ISTATUS, ctl);
> +
> + /* Disable and mask the timer. */
> + timer_set_ctl(test_args.timer, CTL_IMASK);
> + GUEST_ASSERT(!gic_irq_get_pending(intid));
> +
> + shared_data.handled++;
> +
> + /* The IRQ should not fire before time. */
> + GUEST_ASSERT_2(cnt >= cval, cnt, cval);
> +
> + gic_set_eoi(intid);
> +}
> +
> +static void program_timer_irq(uint64_t xval, uint32_t ctl, enum timer_view tv) > +{
> + shared_data.handled = 0;
> +
> + switch (tv) {
> + case TIMER_CVAL:
> + timer_set_cval(test_args.timer, xval);
> + timer_set_ctl(test_args.timer, ctl);
> + break;
> + case TIMER_TVAL:
> + timer_set_tval(test_args.timer, xval);
> + timer_set_ctl(test_args.timer, ctl);
> + break;
> + default:
> + GUEST_ASSERT(0);
> + }
> +}
> +
> +/*
> + * Should be called with IRQs masked.
> + */
> +static void wait_for_non_spurious_irq(void)
> +{
> + int h;
> +
> + for (h = shared_data.handled; h == shared_data.handled;) {
how about:
h = shared_data.handled;
/* Wait for the IRQ handler to process an interrupt */
while (h == shared_data.handled) {
}
Brain is trained to think iteration when I see a for loop, and a while
loop does a better job hinting that we are waiting for a condition to be
met.
> + asm volatile("wfi\n"
> + "msr daifclr, #2\n"
> + /* handle IRQ */
I believe an isb is owed here (DDI0487G.b D1.13.4). Annoyingly, I am
having a hard time finding the same language in the H.a revision of the
manual :-/
> + "msr daifset, #2\n"
> + : : : "memory");
> + }
> +}
> +
> +/*
> + * Wait for an non-spurious IRQ by polling in the guest (userspace=0) or in
> + * userspace (e.g., userspace=1 and userspace_cmd=USERSPACE_SCHED_YIELD).
> + *
> + * Should be called with IRQs masked. Not really needed like the wfi above, but
> + * it should match the others.
> + */
> +static void poll_for_non_spurious_irq(bool userspace,
> + enum sync_cmd userspace_cmd)
> +{
> + int h;
> +
> + h = shared_data.handled;
> +
> + local_irq_enable();
> + while (h == shared_data.handled) {
> + if (userspace)
> + USERSPACE_CMD(userspace_cmd);
> + else
> + cpu_relax();
> + }
> + local_irq_disable();
> +}
> +
> +static void wait_poll_for_irq(void)
> +{
> + poll_for_non_spurious_irq(false, -1);
> +}
> +
> +static void wait_sched_poll_for_irq(void)
> +{
> + poll_for_non_spurious_irq(true, USERSPACE_SCHED_YIELD);
> +}
> +
> +static void wait_migrate_poll_for_irq(void)
> +{
> + poll_for_non_spurious_irq(true, USERSPACE_MIGRATE_SELF);
> +}
> +
> +/*
> + * Reset the timer state to some nice values like the counter not being close
> + * to the edge, and the control register masked and disabled.
> + */
> +static void reset_timer_state(uint64_t cnt)
> +{
> + SET_COUNTER(cnt, test_args.timer);
> + timer_set_ctl(test_args.timer, CTL_IMASK);
> +}
> +
> +static void test_timer(uint64_t reset_cnt, uint64_t xval,
> + wait_method_t wm, enum timer_view tv)
> +{
> + local_irq_disable();
> +
> + reset_timer_state(reset_cnt);
> +
> + program_timer_irq(xval, CTL_ENABLE, tv);
> + wm();
> +
> + ASSERT_IRQS_HANDLED_1(1, tv);
> + local_irq_enable();
> +}
> +
> +static void test_basic_functionality(void)
> +{
> + int32_t tval = (int32_t)msec_to_cycles(10);
> + uint64_t cval;
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
> + wait_method_t wm = wait_method[i];
> +
> + cval = DEF_CNT + msec_to_cycles(10);
> +
> + test_timer(DEF_CNT, cval, wm, TIMER_CVAL);
> + test_timer(DEF_CNT, tval, wm, TIMER_TVAL);
> + }
> +}
> +
> +/*
> + * This test checks basic timer behavior without actually firing timers, things
> + * like: the relationship between cval and tval, tval down-counting.
> + */
> +static void timers_sanity_checks(bool use_sched)
> +{
> + uint64_t cval;
> +
> + reset_timer_state(DEF_CNT);
> +
> + local_irq_disable();
> +
> + /* cval in the past */
> + timer_set_cval(test_args.timer, timer_get_cntct(test_args.timer) - 1);
> + if (use_sched)
> + USERSPACE_SCHEDULE();
> + GUEST_ASSERT(timer_get_tval(test_args.timer) < 0);
> +
> + /* tval in the past */
> + timer_set_tval(test_args.timer, -1);
> + if (use_sched)
> + USERSPACE_SCHEDULE();
> + GUEST_ASSERT(timer_get_cval(test_args.timer) <
> + timer_get_cntct(test_args.timer));
> +
> + /* tval larger than TVAL_MAX. */
> + cval = timer_get_cntct(test_args.timer) + 2ULL * TVAL_MAX - 1;
> + timer_set_cval(test_args.timer, cval);
> + if (use_sched)
> + USERSPACE_SCHEDULE();
> + GUEST_ASSERT(timer_get_tval(test_args.timer) <= 0);
> + GUEST_ASSERT_EQ(cval, timer_get_cval(test_args.timer));
> +
> + /* tval should keep down-counting from 0 to -1. */
> + SET_COUNTER(DEF_CNT, test_args.timer);
> + timer_set_tval(test_args.timer, 0);
> + if (use_sched)
> + USERSPACE_SCHEDULE();
> + /* We just need 1 cycle to pass. */
> + isb();
Somewhat paranoid, but:
If the CPU retires the ISB much more quickly than the counter ticks, its
possible that you could observe an invalid TVAL even with a valid
implementation.
What if you spin waiting for CNT to increment before the assertion? Then
you for sureknow (and can tell by reading the test) that the
implementation is broken.
> + GUEST_ASSERT(timer_get_tval(test_args.timer) < 0);
> +
> + local_irq_enable();
> +
> + /* Mask and disable any pending timer. */
> + timer_set_ctl(test_args.timer, CTL_IMASK);
> +}
> +
> +static void test_timers_sanity_checks(void)
> +{
> + timers_sanity_checks(false);
> + /* Check how KVM saves/restores these edge-case values. */
> + timers_sanity_checks(true);
> +}
> +
> +static void guest_run_iteration(void)
> +{
> + test_timers_sanity_checks();
> + test_basic_functionality();
> +}
> +
> +static void guest_code(void)
> +{
> + int i;
> +
> + local_irq_disable();
> +
> + gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
> +
> + timer_set_ctl(test_args.timer, CTL_IMASK);
> + timer_set_ctl(PHYSICAL, CTL_IMASK);
> +
> + gic_irq_enable(vtimer_irq);
> + gic_irq_enable(ptimer_irq);
> + local_irq_enable();
> +
> + for (i = 0; i < test_args.iterations; i++) {
> + GUEST_SYNC(i);
> + guest_run_iteration();
> + }
> +
> + GUEST_DONE();
> +}
> +
> +static void migrate_self(uint32_t new_pcpu)
> +{
> + int ret;
> + cpu_set_t cpuset;
> + pthread_t thread;
> +
> + thread = pthread_self();
> +
> + CPU_ZERO(&cpuset);
> + CPU_SET(new_pcpu, &cpuset);
> +
> + pr_debug("Migrating from %u to %u\n", sched_getcpu(), new_pcpu);
> +
> + ret = pthread_setaffinity_np(thread, sizeof(cpuset), &cpuset);
> +
> + TEST_ASSERT(ret == 0, "Failed to migrate to pCPU: %u; ret: %d\n",
> + new_pcpu, ret);
> +}
> +
> +/*
> + * Set the two pcpus that the test will use to alternate between. Default to
> + * use the current cpu as pcpus[0] and the one right after in the affinity set
> + * as pcpus[1].
> + */
> +static void set_default_pcpus(void)
> +{
> + int max = get_nprocs();
> + int curr = sched_getcpu();
> + cpu_set_t cpuset;
> + long i;
> +
> + TEST_ASSERT(max > 1, "Need at least 2 online pcpus.");
> +
I don't think this could ever be reached, given that you exit(KSFT_SKIP)
for this condition. Maybe just take the string from this assertion and
add a pr_skip() to your early exit.
> + pcpus[0] = curr;
> +
> + sched_getaffinity(getpid(), sizeof(cpu_set_t), &cpuset);
> + for (i = (curr + 1) % CPU_SETSIZE; i != curr; i = (i + 1) % CPU_SETSIZE) {
> + if (CPU_ISSET(i, &cpuset)) {
> + pcpus[1] = i;
> + break;
> + }
> + }
> +
> + TEST_ASSERT(pcpus[1] != -1, "Couldn't find a second pcpu.");
> + pr_debug("pcpus: %d %d\n", pcpus[0], pcpus[1]);
> +}
> +
> +static void kvm_set_cntxct(struct kvm_vm *vm, uint64_t cnt, enum arch_timer timer)
> +{
> + TEST_ASSERT(timer == VIRTUAL,
> + "Only supports setting the virtual counter for now.");
> +
> + struct kvm_one_reg reg = {
> + .id = KVM_REG_ARM_TIMER_CNT,
> + .addr = (uint64_t)&cnt,
> + };
> + vcpu_set_reg(vm, 0, &reg);
> +}
> +
> +static void handle_sync(struct kvm_vm *vm, struct ucall *uc)
> +{
> + enum sync_cmd cmd = uc->args[1];
> + uint64_t val = uc->args[2];
> + enum arch_timer timer = uc->args[3];
> +
> + switch (cmd) {
> + case SET_REG_KVM_REG_ARM_TIMER_CNT:
> + kvm_set_cntxct(vm, val, timer);
> + break;
> + case USERSPACE_SCHED_YIELD:
> + sched_yield();
> + break;
> + case USERSPACE_MIGRATE_SELF:
> + migrate_self(next_pcpu());
> + break;
> + default:
> + break;
> + }
> +}
> +
> +static void test_run(struct kvm_vm *vm)
> +{
> + struct ucall uc;
> + int stage = 0;
> +
> + /* Start on the first pcpu. */
> + migrate_self(pcpus[0]);
> +
> + sync_global_to_guest(vm, test_args);
> +
> + for (stage = 0; ; stage++) {
> + vcpu_run(vm, VCPUID);
> + switch (get_ucall(vm, VCPUID, &uc)) {
> + case UCALL_SYNC:
> + handle_sync(vm, &uc);
> + break;
> + case UCALL_DONE:
> + goto out;
> + case UCALL_ABORT:
> + TEST_FAIL("%s at %s:%ld\n\tvalues: %lu, %lu; %lu",
> + (const char *)uc.args[0], __FILE__, uc.args[1],
> + uc.args[2], uc.args[3], uc.args[4]);
> + goto out;
> + default:
> + TEST_FAIL("Unexpected guest exit\n");
> + }
> + }
> +
> +out:
> + return;
> +}
> +
> +static void test_init_timer_irq(struct kvm_vm *vm)
> +{
> + int vcpu_fd = vcpu_get_fd(vm, VCPUID);
> +
> + kvm_device_access(vcpu_fd, KVM_ARM_VCPU_TIMER_CTRL,
> + KVM_ARM_VCPU_TIMER_IRQ_PTIMER, &ptimer_irq, false);
> + kvm_device_access(vcpu_fd, KVM_ARM_VCPU_TIMER_CTRL,
> + KVM_ARM_VCPU_TIMER_IRQ_VTIMER, &vtimer_irq, false);
> +
> + sync_global_to_guest(vm, ptimer_irq);
> + sync_global_to_guest(vm, vtimer_irq);
> +
> + pr_debug("ptimer_irq: %d; vtimer_irq: %d\n", ptimer_irq, vtimer_irq);
> +}
> +
> +static struct kvm_vm *test_vm_create(void)
> +{
> + struct kvm_vm *vm;
> + int ret;
> +
> + vm = vm_create_default(VCPUID, 0, guest_code);
> +
> + vm_init_descriptor_tables(vm);
> + vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
> +
> + vcpu_init_descriptor_tables(vm, 0);
> +
> + ucall_init(vm, NULL);
> + test_init_timer_irq(vm);
> + ret = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
> + if (ret < 0) {
> + print_skip("Failed to create vgic-v3, skipping");
I believe print_skip() already appends something about skipping the test.
> + exit(KSFT_SKIP);
> + }
> +
> + return vm;
> +}
> +
> +static void test_print_help(char *name)
> +{
> + pr_info("Usage: %s [-h] [-i iterations] [-w] [-p pcpu1,pcpu2]\n",
> + name);
> + pr_info("\t-i: Number of iterations (default: %u)\n",
> + NR_TEST_ITERS_DEF);
> + pr_info("\t-p: Pair of pcpus for the vcpus to alternate between.\n");
> + pr_info("\t-h: Print this help message\n");
> +}
> +
> +static bool parse_args(int argc, char *argv[])
> +{
> + int opt, ret;
> +
> + while ((opt = getopt(argc, argv, "hi:p:")) != -1) {
> + switch (opt) {
> + case 'i':
> + test_args.iterations = atoi(optarg);
> + if (test_args.iterations <= 0) {
> + pr_info("Positive value needed for -i\n");
> + goto err;
> + }
> + break;
> + case 'p':
> + ret = sscanf(optarg, "%u,%u", &pcpus[0], &pcpus[1]);
It may also be useful to check if the desired CPUs are in the set of
online CPUs, exiting early if not.
> + if (ret != 2) {
> + pr_info("Invalid pcpus pair");
> + goto err;
> + }
> + break;
> + case 'h':
> + default:
> + goto err;
> + }
> + }
> +
> + return true;
> +
> +err:
> + test_print_help(argv[0]);
> + return false;
> +}
> +
> +int main(int argc, char *argv[])
> +{
> + struct kvm_vm *vm;
> +
> + /* Tell stdout not to buffer its content */
> + setbuf(stdout, NULL);
> +
> + if (!parse_args(argc, argv))
> + exit(KSFT_SKIP);
> +
> + if (get_nprocs() < 2)
> + exit(KSFT_SKIP);
> +
> + if (pcpus[0] == -1 || pcpus[1] == -1)
> + set_default_pcpus();
> +
> + vm = test_vm_create();
> + test_run(vm);
> + kvm_vm_free(vm);
> +
> + return 0;
> +}
> --
> 2.35.1.723.g4982287a31-goog
>
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
* Re: [PATCH v2 3/3] KVM: arm64: selftests: add edge cases tests into arch_timer_edge_cases
2022-03-17 4:51 ` Ricardo Koller
@ 2022-03-17 7:27 ` Oliver Upton
-1 siblings, 0 replies; 20+ messages in thread
From: Oliver Upton @ 2022-03-17 7:27 UTC (permalink / raw)
To: Ricardo Koller; +Cc: kvm, maz, pbonzini, kvmarm
Hi Ricardo,
On Wed, Mar 16, 2022 at 09:51:27PM -0700, Ricardo Koller wrote:
> Add tests that validate some edge cases related to the virtual
> arch-timer:
> - timers in the past, including TVALs that rollover from 0.
> - timers across counter roll-overs.
> - moving counters ahead and behind pending timers.
> - reprogramming timers.
> - the same timer condition firing multiple times.
> - masking/unmasking using the timer control mask.
>
> Reviewed-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Ricardo Koller <ricarkol@google.com>
> ---
> .../kvm/aarch64/arch_timer_edge_cases.c | 329 +++++++++++++++++-
> 1 file changed, 326 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c b/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
> index dc399482e35d..0eeb72767bea 100644
> --- a/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
> +++ b/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
> @@ -2,6 +2,12 @@
> /*
> * arch_timer_edge_cases.c - Tests the aarch64 timer IRQ functionality.
> *
> + * The test validates some edge cases related to the virtual arch-timer:
> + * - timers across counter roll-overs.
> + * - moving counters ahead and behind pending timers.
> + * - reprogramming timers.
> + * - the same timer condition firing multiple times.
> + *
> * Some of these tests program timers and then wait indefinitely for them to
> * fire. We rely on having a timeout mechanism in the "runner", like
> * tools/testing/selftests/kselftest/runner.sh.
> @@ -47,6 +53,9 @@
> /* Number of runs. */
> #define NR_TEST_ITERS_DEF 5
>
> +/* Default "long" wait test time in ms. */
> +#define LONG_WAIT_TEST_MS 100
> +
> /* Shared with IRQ handler. */
> volatile struct test_vcpu_shared_data {
> int handled;
> @@ -55,6 +64,8 @@ volatile struct test_vcpu_shared_data {
> struct test_args {
> /* Virtual or physical timer and counter tests. */
> enum arch_timer timer;
> + /* Delay used in the test_long_timer_delays test. */
> + uint64_t long_wait_ms;
> /* Number of iterations. */
> int iterations;
> };
> @@ -62,6 +73,7 @@ struct test_args {
> struct test_args test_args = {
> /* Only testing VIRTUAL timers for now. */
> .timer = VIRTUAL,
> + .long_wait_ms = LONG_WAIT_TEST_MS,
> .iterations = NR_TEST_ITERS_DEF,
> };
>
> @@ -69,10 +81,25 @@ static int vtimer_irq, ptimer_irq;
>
> enum sync_cmd {
> SET_REG_KVM_REG_ARM_TIMER_CNT,
> + USERSPACE_USLEEP,
> USERSPACE_SCHED_YIELD,
> USERSPACE_MIGRATE_SELF,
> };
>
> +typedef void (*sleep_method_t)(uint64_t usec);
> +
> +static void sleep_poll(uint64_t usec);
> +static void sleep_sched_poll(uint64_t usec);
> +static void sleep_in_userspace(uint64_t usec);
> +static void sleep_migrate(uint64_t usec);
> +
> +sleep_method_t sleep_method[] = {
> + sleep_poll,
> + sleep_sched_poll,
> + sleep_migrate,
> + sleep_in_userspace,
> +};
> +
> typedef void (*wait_method_t)(void);
>
> static void wait_for_non_spurious_irq(void);
> @@ -125,6 +152,9 @@ static uint32_t next_pcpu(void)
> #define USERSPACE_MIGRATE_VCPU() \
> USERSPACE_CMD(USERSPACE_MIGRATE_SELF)
>
> +#define SLEEP_IN_USERSPACE(__usecs) \
> + GUEST_SYNC_ARGS(USERSPACE_USLEEP, (__usecs), 0, 0, 0)
> +
> static void guest_irq_handler(struct ex_regs *regs)
> {
> unsigned int intid = gic_get_and_ack_irq();
> @@ -227,6 +257,60 @@ static void wait_migrate_poll_for_irq(void)
> poll_for_non_spurious_irq(true, USERSPACE_MIGRATE_SELF);
> }
>
> +/*
> + * Sleep for usec microseconds by polling in the guest (userspace=0) or in
> + * userspace (e.g., userspace=1 and userspace_cmd=USERSPACE_SCHEDULE).
> + */
> +static void guest_poll(enum arch_timer timer, uint64_t usec,
> + bool userspace, enum sync_cmd userspace_cmd)
> +{
> + uint64_t cycles = usec_to_cycles(usec);
> + uint64_t start = timer_get_cntct(timer);
> +
> + /*
> + * TODO: Take care of roll-overs. Right now, we are fine as we use the
> + * virtual timer/counter for all of our roll-over tests, and so we can use
> + * the physical counter for this function. Assert this (temporarily):
> + */
> + GUEST_ASSERT(test_args.timer == VIRTUAL && timer == PHYSICAL);
> +
> + while ((timer_get_cntct(timer) - start) < cycles) {
> + if (userspace)
> + USERSPACE_CMD(userspace_cmd);
> + else
> + cpu_relax();
> + }
> +}
> +
> +static void sleep_poll(uint64_t usec)
> +{
> + if (test_args.timer == VIRTUAL)
> + guest_poll(PHYSICAL, usec, false, -1);
> + else
> + GUEST_ASSERT(0); /* Not implemented. */
> +}
> +
> +static void sleep_sched_poll(uint64_t usec)
> +{
> + if (test_args.timer == VIRTUAL)
> + guest_poll(PHYSICAL, usec, true, USERSPACE_SCHED_YIELD);
> + else
> + GUEST_ASSERT(0); /* Not implemented. */
> +}
> +
> +static void sleep_migrate(uint64_t usec)
> +{
> + if (test_args.timer == VIRTUAL)
> + guest_poll(PHYSICAL, usec, true, USERSPACE_MIGRATE_SELF);
> + else
> + GUEST_ASSERT(0); /* Not implemented. */
> +}
It may be worth mentioning that you're polling the counter that is *not*
under test. I presume you do this so your sleep implementation is not
affected by the test's manipulations of the virtual counter.
> +static void sleep_in_userspace(uint64_t usec)
> +{
> + SLEEP_IN_USERSPACE(usec);
> +}
> +
> /*
> * Reset the timer state to some nice values like the counter not being close
> * to the edge, and the control register masked and disabled.
> @@ -251,6 +335,156 @@ static void test_timer(uint64_t reset_cnt, uint64_t xval,
> local_irq_enable();
> }
>
> +/*
> + * Set the counter to just below the edge (CVAL_MAX) and set a timer that
> + * crosses it over.
> + */
> +static void test_timers_across_rollovers(void)
> +{
> + uint64_t edge_minus_5ms = CVAL_MAX - msec_to_cycles(5);
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
> + wait_method_t wm = wait_method[i];
> +
> + test_timer(edge_minus_5ms, msec_to_cycles(10), wm, TIMER_TVAL);
> + test_timer(edge_minus_5ms, TVAL_MAX, wm, TIMER_TVAL);
> + test_timer(edge_minus_5ms, TVAL_MIN, wm, TIMER_TVAL);
> + }
> +}
> +
> +/* Check that timer control masks actually mask a timer being fired. */
> +static void test_timer_control_masked(sleep_method_t guest_sleep)
> +{
> + reset_timer_state(DEF_CNT);
> +
> + /* Local IRQs are not masked at this point. */
> +
> + program_timer_irq(-1, CTL_ENABLE | CTL_IMASK, TIMER_TVAL);
> +
> + /* Assume no IRQ after waiting TIMEOUT_NO_IRQ_US microseconds */
> + guest_sleep(TIMEOUT_NO_IRQ_US);
> +
> + ASSERT_IRQS_HANDLED(0);
> + timer_set_ctl(test_args.timer, CTL_IMASK);
> +}
> +
> +/* Test masking/unmasking a timer using the timer mask (not the IRQ mask). */
> +static void test_timer_control_mask_then_unmask(wait_method_t wm)
> +{
> + reset_timer_state(DEF_CNT);
> + program_timer_irq(-1, CTL_ENABLE | CTL_IMASK, TIMER_TVAL);
> +
> + /* No IRQs because the timer is still masked. */
> + ASSERT_IRQS_HANDLED(0);
> +
> + /* Unmask the timer, and then get an IRQ. */
> + local_irq_disable();
> + timer_set_ctl(test_args.timer, CTL_ENABLE);
> + wm();
> +
> + ASSERT_IRQS_HANDLED(1);
> + local_irq_enable();
> +}
> +
> +/*
> + * Set a timer at the edge, and wait with irqs masked for so long that the
> + * counter rolls over and the "Timer Condition" doesn't apply anymore. We
> + * should still get an IRQ.
It looks like the test asserts we do not get an interrupt if the timer
condition is no longer valid.
> + */
> +static void test_irq_masked_timer_across_rollover(sleep_method_t guest_sleep)
> +{
> + local_irq_disable();
> + reset_timer_state(CVAL_MAX - msec_to_cycles(5));
> +
> + program_timer_irq(-1, CTL_ENABLE, TIMER_TVAL);
> +
> + GUEST_ASSERT(timer_get_ctl(test_args.timer) & CTL_ISTATUS);
> + guest_sleep(msecs_to_usecs(10));
> + GUEST_ASSERT((timer_get_ctl(test_args.timer) & CTL_ISTATUS) == 0);
> +
> + local_irq_enable();
> + isb();
> +
> + ASSERT_IRQS_HANDLED(0);
> +}
> +
> +static void test_control_masks(void)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(sleep_method); i++)
> + test_timer_control_masked(sleep_method[i]);
> +
> + for (i = 0; i < ARRAY_SIZE(wait_method); i++)
> + test_timer_control_mask_then_unmask(wait_method[i]);
> +
> + for (i = 0; i < ARRAY_SIZE(sleep_method); i++)
> + test_irq_masked_timer_across_rollover(sleep_method[i]);
> +}
> +
> +static void test_fire_a_timer_multiple_times(wait_method_t wm, int num)
> +{
> + int i;
> +
> + local_irq_disable();
> + reset_timer_state(DEF_CNT);
> +
> + program_timer_irq(0, CTL_ENABLE, TIMER_TVAL);
> +
> + for (i = 1; i <= num; i++) {
> + wm();
> +
> + /*
> + * The IRQ handler masked and disabled the timer.
> + * Enable and unmask it again.
> + */
> + timer_set_ctl(test_args.timer, CTL_ENABLE);
> +
> + ASSERT_IRQS_HANDLED(i);
> + }
> +
> + local_irq_enable();
> +}
> +
> +static void test_timers_fired_multiple_times(void)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(wait_method); i++)
> + test_fire_a_timer_multiple_times(wait_method[i], 1000);
> +}
> +
> +/* Set a timer for cval1 then reprogram it to cval1. */
reprogram it to cval2
> +static void test_reprogram_timer(wait_method_t wm, bool use_sched,
> + uint64_t cnt, uint64_t cval1, uint64_t cval2)
> +{
> + local_irq_disable();
> + reset_timer_state(cnt);
> +
> + program_timer_irq(cval1, CTL_ENABLE, TIMER_CVAL);
> +
> + if (use_sched)
> + USERSPACE_SCHEDULE();
> +
> + timer_set_cval(test_args.timer, cval2);
> +
> + wm();
> +
> + local_irq_enable();
> + ASSERT_IRQS_HANDLED(1);
> +};
> +
> +static void test_reprogram_timers(void)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
> + test_reprogram_timer(wait_method[i], true, 0, CVAL_MAX, 0);
> + test_reprogram_timer(wait_method[i], true, 0, CVAL_MAX, 0);
> + }
> +}
> +
> static void test_basic_functionality(void)
> {
> int32_t tval = (int32_t)msec_to_cycles(10);
> @@ -306,7 +540,7 @@ static void timers_sanity_checks(bool use_sched)
> if (use_sched)
> USERSPACE_SCHEDULE();
> /* We just need 1 cycle to pass. */
> - isb();
> + sleep_poll(1);
Ah, this is exactly what I was asking for in patch 2. Is it possible to
hoist the needed pieces into that patch so you always poll CNTPCT before
reaching the assertion?
--
Thanks,
Oliver
> GUEST_ASSERT(timer_get_tval(test_args.timer) < 0);
>
> local_irq_enable();
> @@ -322,10 +556,86 @@ static void test_timers_sanity_checks(void)
> timers_sanity_checks(true);
> }
>
> +/*
> + * Set the counter to cnt_1, the [c|t]val to xval, the counter to cnt_2, and
> + * then wait for an IRQ.
> + */
> +static void test_set_counter_after_programming_timer(uint64_t cnt_1,
> + uint64_t xval, uint64_t cnt_2, wait_method_t wm,
> + enum timer_view tv)
> +{
> + local_irq_disable();
> +
> + SET_COUNTER(cnt_1, test_args.timer);
> + timer_set_ctl(test_args.timer, CTL_IMASK);
> +
> + program_timer_irq(xval, CTL_ENABLE, tv);
> + SET_COUNTER(cnt_2, test_args.timer);
> + wm();
> +
> + ASSERT_IRQS_HANDLED(1);
> + local_irq_enable();
> +}
> +
> +/* Set a timer and then move the counter ahead of it. */
> +static void test_move_counters_after_timers(void)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
> + wait_method_t wm = wait_method[i];
> +
> + test_set_counter_after_programming_timer(0, DEF_CNT,
> + DEF_CNT + 1, wm, TIMER_CVAL);
> + test_set_counter_after_programming_timer(CVAL_MAX, 1,
> + 2, wm, TIMER_CVAL);
> + test_set_counter_after_programming_timer(0, TVAL_MAX,
> + (uint64_t)TVAL_MAX + 1, wm, TIMER_TVAL);
> + }
> +}
> +
> +static void test_timers_in_the_past(void)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
> + wait_method_t wm = wait_method[i];
> +
> + test_timer(DEF_CNT, DEF_CNT - 1, wm, TIMER_CVAL);
> + test_timer(DEF_CNT, TVAL_MIN, wm, TIMER_TVAL);
> + test_timer(CVAL_MAX, 0, wm, TIMER_CVAL);
> + test_timer(DEF_CNT, 0, wm, TIMER_CVAL);
> + test_timer(DEF_CNT, 0, wm, TIMER_TVAL);
> + }
> +}
> +
> +static void test_long_timer_delays(void)
> +{
> + uint64_t wait_ms = test_args.long_wait_ms;
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
> + wait_method_t wm = wait_method[i];
> +
> + test_timer(0, msec_to_cycles(wait_ms), wm, TIMER_CVAL);
> + test_timer(0, msec_to_cycles(wait_ms), wm, TIMER_TVAL);
> + }
> +}
> +
> static void guest_run_iteration(void)
> {
> test_timers_sanity_checks();
> test_basic_functionality();
> +
> + test_timers_in_the_past();
> + test_timers_across_rollovers();
> +
> + test_move_counters_after_timers();
> + test_reprogram_timers();
> +
> + test_control_masks();
> +
> + test_timers_fired_multiple_times();
> }
>
> static void guest_code(void)
> @@ -348,6 +658,7 @@ static void guest_code(void)
> guest_run_iteration();
> }
>
> + test_long_timer_delays();
> GUEST_DONE();
> }
>
> @@ -420,6 +731,9 @@ static void handle_sync(struct kvm_vm *vm, struct ucall *uc)
> case SET_REG_KVM_REG_ARM_TIMER_CNT:
> kvm_set_cntxct(vm, val, timer);
> break;
> + case USERSPACE_USLEEP:
> + usleep(val);
> + break;
> case USERSPACE_SCHED_YIELD:
> sched_yield();
> break;
> @@ -503,11 +817,13 @@ static struct kvm_vm *test_vm_create(void)
>
> static void test_print_help(char *name)
> {
> - pr_info("Usage: %s [-h] [-i iterations] [-w] [-p pcpu1,pcpu2]\n",
> + pr_info("Usage: %s [-h] [-i iterations] [-p pcpu1,pcpu2] [-l long_wait_ms]\n",
> name);
> pr_info("\t-i: Number of iterations (default: %u)\n",
> NR_TEST_ITERS_DEF);
> pr_info("\t-p: Pair of pcpus for the vcpus to alternate between.\n");
> + pr_info("\t-l: Delta (in ms) used for long wait time test (default: %u)\n",
> + LONG_WAIT_TEST_MS);
> pr_info("\t-h: Print this help message\n");
> }
>
> @@ -515,7 +831,7 @@ static bool parse_args(int argc, char *argv[])
> {
> int opt, ret;
>
> - while ((opt = getopt(argc, argv, "hi:p:")) != -1) {
> + while ((opt = getopt(argc, argv, "hi:p:l:")) != -1) {
> switch (opt) {
> case 'i':
> test_args.iterations = atoi(optarg);
> @@ -531,6 +847,13 @@ static bool parse_args(int argc, char *argv[])
> goto err;
> }
> break;
> + case 'l':
> + test_args.long_wait_ms = atoi(optarg);
> + if (test_args.long_wait_ms <= 0) {
> + pr_info("Positive value needed for -l\n");
> + goto err;
> + }
> + break;
> case 'h':
> default:
> goto err;
> --
> 2.35.1.723.g4982287a31-goog
>
> NR_TEST_ITERS_DEF);
> pr_info("\t-p: Pair of pcpus for the vcpus to alternate between.\n");
> + pr_info("\t-l: Delta (in ms) used for long wait time test (default: %u)\n",
> + LONG_WAIT_TEST_MS);
> pr_info("\t-h: Print this help message\n");
> }
>
> @@ -515,7 +831,7 @@ static bool parse_args(int argc, char *argv[])
> {
> int opt, ret;
>
> - while ((opt = getopt(argc, argv, "hi:p:")) != -1) {
> + while ((opt = getopt(argc, argv, "hi:p:l:")) != -1) {
> switch (opt) {
> case 'i':
> test_args.iterations = atoi(optarg);
> @@ -531,6 +847,13 @@ static bool parse_args(int argc, char *argv[])
> goto err;
> }
> break;
> + case 'l':
> + test_args.long_wait_ms = atoi(optarg);
> + if (test_args.long_wait_ms <= 0) {
> + pr_info("Positive value needed for -l\n");
> + goto err;
> + }
> + break;
> case 'h':
> default:
> goto err;
> --
> 2.35.1.723.g4982287a31-goog
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v2 2/3] KVM: arm64: selftests: add arch_timer_edge_cases
2022-03-17 6:44 ` Oliver Upton
@ 2022-03-17 8:52 ` Marc Zyngier
-1 siblings, 0 replies; 20+ messages in thread
From: Marc Zyngier @ 2022-03-17 8:52 UTC (permalink / raw)
To: Oliver Upton
Cc: Ricardo Koller, kvm, kvmarm, drjones, pbonzini, alexandru.elisei,
eric.auger, reijiw, rananta
On 2022-03-17 06:44, Oliver Upton wrote:
> On Wed, Mar 16, 2022 at 09:51:26PM -0700, Ricardo Koller wrote:
>> Add an arch_timer edge-cases selftest. For now, just add some basic
>> sanity checks, and some stress conditions (like waiting for the timers
>> while re-scheduling the vcpu). The next commit will add the actual
>> edge
>> case tests.
>>
>> This test fails without a867e9d0cc1 "KVM: arm64: Don't miss pending
>> interrupts for suspended vCPU".
>>
>> Reviewed-by: Reiji Watanabe <reijiw@google.com>
>> Reviewed-by: Raghavendra Rao Ananta <rananta@google.com>
>> Signed-off-by: Ricardo Koller <ricarkol@google.com>
[...]
>> + asm volatile("wfi\n"
>> + "msr daifclr, #2\n"
>> + /* handle IRQ */
>
> I believe an isb is owed here (DDI0487G.b D1.13.4). Annoyingly, I am
> having a hard time finding the same language in the H.a revision of the
> manual :-/
D1.3.6 probably is what you are looking for.
"Context synchronization event" is the key phrase to remember
when grepping through the ARM ARM. And yes, the new layout is
a nightmare (as if we really needed an additional 2800 pages...).
>
>> + "msr daifset, #2\n"
>> + : : : "memory");
>> + }
>> +}
[...]
>> + /* tval should keep down-counting from 0 to -1. */
>> + SET_COUNTER(DEF_CNT, test_args.timer);
>> + timer_set_tval(test_args.timer, 0);
>> + if (use_sched)
>> + USERSPACE_SCHEDULE();
>> + /* We just need 1 cycle to pass. */
>> + isb();
>
> Somewhat paranoid, but:
>
> If the CPU retires the ISB much more quickly than the counter ticks, it's
> possible that you could observe an invalid TVAL even with a valid
> implementation.
Worse than that:
- ISB doesn't need to take any time at all. It just needs to ensure
that everything is synchronised. Depending on how the CPU is built,
this can come for free.
- There is no relation between the counter ticks and CPU cycles.
> What if you spin waiting for CNT to increment before the assertion? Then
> you know for sure (and can tell by reading the test) that the
> implementation is broken.
That's basically the only way to implement this. You can't rely
on any other event.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* Re: [PATCH v2 2/3] KVM: arm64: selftests: add arch_timer_edge_cases
2022-03-17 8:52 ` Marc Zyngier
@ 2022-03-17 16:56 ` Oliver Upton
-1 siblings, 0 replies; 20+ messages in thread
From: Oliver Upton @ 2022-03-17 16:56 UTC (permalink / raw)
To: Marc Zyngier
Cc: Ricardo Koller, kvm, kvmarm, drjones, pbonzini, Alexandru.Elisei,
eric.auger, reijiw, rananta
On Thu, Mar 17, 2022 at 1:52 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On 2022-03-17 06:44, Oliver Upton wrote:
> > On Wed, Mar 16, 2022 at 09:51:26PM -0700, Ricardo Koller wrote:
> >> Add an arch_timer edge-cases selftest. For now, just add some basic
> >> sanity checks, and some stress conditions (like waiting for the timers
> >> while re-scheduling the vcpu). The next commit will add the actual
> >> edge
> >> case tests.
> >>
> >> This test fails without a867e9d0cc1 "KVM: arm64: Don't miss pending
> >> interrupts for suspended vCPU".
> >>
> >> Reviewed-by: Reiji Watanabe <reijiw@google.com>
> >> Reviewed-by: Raghavendra Rao Ananta <rananta@google.com>
> >> Signed-off-by: Ricardo Koller <ricarkol@google.com>
>
> [...]
>
> >> + asm volatile("wfi\n"
> >> + "msr daifclr, #2\n"
> >> + /* handle IRQ */
> >
> > I believe an isb is owed here (DDI0487G.b D1.13.4). Annoyingly, I am
> > having a hard time finding the same language in the H.a revision of the
> > manual :-/
>
> D1.3.6 probably is what you are looking for.
>
> "Context synchronization event" is the key phrase to remember
> when grepping through the ARM ARM. And yes, the new layout is
> a nightmare (as if we really needed an additional 2800 pages...).
Thanks! I have yet to find a PDF viewer that can chew through such a
document for a search term at a decent clip. And all the extra pages
just made the problem even worse.
--
Thanks,
Oliver
* Re: [PATCH v2 2/3] KVM: arm64: selftests: add arch_timer_edge_cases
2022-03-17 8:52 ` Marc Zyngier
@ 2022-03-18 20:49 ` Ricardo Koller
-1 siblings, 0 replies; 20+ messages in thread
From: Ricardo Koller @ 2022-03-18 20:49 UTC (permalink / raw)
To: Marc Zyngier
Cc: Oliver Upton, kvm, kvmarm, drjones, pbonzini, alexandru.elisei,
eric.auger, reijiw, rananta
On Thu, Mar 17, 2022 at 08:52:38AM +0000, Marc Zyngier wrote:
> On 2022-03-17 06:44, Oliver Upton wrote:
> > On Wed, Mar 16, 2022 at 09:51:26PM -0700, Ricardo Koller wrote:
> > > Add an arch_timer edge-cases selftest. For now, just add some basic
> > > sanity checks, and some stress conditions (like waiting for the timers
> > > while re-scheduling the vcpu). The next commit will add the actual
> > > edge
> > > case tests.
> > >
> > > This test fails without a867e9d0cc1 "KVM: arm64: Don't miss pending
> > > interrupts for suspended vCPU".
> > >
> > > Reviewed-by: Reiji Watanabe <reijiw@google.com>
> > > Reviewed-by: Raghavendra Rao Ananta <rananta@google.com>
> > > Signed-off-by: Ricardo Koller <ricarkol@google.com>
>
> [...]
>
> > > + asm volatile("wfi\n"
> > > + "msr daifclr, #2\n"
> > > + /* handle IRQ */
> >
> > I believe an isb is owed here (DDI0487G.b D1.13.4). Annoyingly, I am
> > having a hard time finding the same language in the H.a revision of the
> > manual :-/
Got it, adding it. Saw that there is a similar pattern in the kernel and
it has an ISB in the middle.
>
> D1.3.6 probably is what you are looking for.
>
> "Context synchronization event" is the key phrase to remember
> when grepping through the ARM ARM. And yes, the new layout is
> a nightmare (as if we really needed an additional 2800 pages...).
>
> >
> > > + "msr daifset, #2\n"
> > > + : : : "memory");
> > > + }
> > > +}
>
> [...]
>
> > > + /* tval should keep down-counting from 0 to -1. */
> > > + SET_COUNTER(DEF_CNT, test_args.timer);
> > > + timer_set_tval(test_args.timer, 0);
> > > + if (use_sched)
> > > + USERSPACE_SCHEDULE();
> > > + /* We just need 1 cycle to pass. */
> > > + isb();
> >
> > Somewhat paranoid, but:
> >
> > If the CPU retires the ISB much more quickly than the counter ticks, it's
> > possible that you could observe an invalid TVAL even with a valid
> > implementation.
>
> Worse than that:
>
> - ISB doesn't need to take any time at all. It just needs to ensure
> that everything is synchronised. Depending on how the CPU is built,
> this can come for free.
>
> - There is no relation between the counter ticks and CPU cycles.
Good point.
>
> > What if you spin waiting for CNT to increment before the assertion? Then
> > you know for sure (and can tell by reading the test) that the
> > implementation is broken.
>
> That's basically the only way to implement this. You can't rely
> on any other event.
The next commit fixes this (by spinning on the counter). Will move it
here.
>
> Thanks,
>
> M.
> --
> Jazz is not dead. It just smells funny...
Thank you both for the review.
* Re: [PATCH v2 3/3] KVM: arm64: selftests: add edge cases tests into arch_timer_edge_cases
2022-03-17 7:27 ` Oliver Upton
@ 2022-03-18 20:54 ` Ricardo Koller
-1 siblings, 0 replies; 20+ messages in thread
From: Ricardo Koller @ 2022-03-18 20:54 UTC (permalink / raw)
To: Oliver Upton
Cc: kvm, kvmarm, drjones, pbonzini, maz, alexandru.elisei,
eric.auger, reijiw, rananta
On Thu, Mar 17, 2022 at 07:27:50AM +0000, Oliver Upton wrote:
> Hi Ricardo,
>
> On Wed, Mar 16, 2022 at 09:51:27PM -0700, Ricardo Koller wrote:
> > Add tests that validate some edge cases related to the virtual
> > arch-timer:
> > - timers in the past, including TVALs that rollover from 0.
> > - timers across counter roll-overs.
> > - moving counters ahead and behind pending timers.
> > - reprogramming timers.
> > - the same timer condition firing multiple times.
> > - masking/unmasking using the timer control mask.
> >
> > Reviewed-by: Reiji Watanabe <reijiw@google.com>
> > Signed-off-by: Ricardo Koller <ricarkol@google.com>
> > ---
> > .../kvm/aarch64/arch_timer_edge_cases.c | 329 +++++++++++++++++-
> > 1 file changed, 326 insertions(+), 3 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c b/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
> > index dc399482e35d..0eeb72767bea 100644
> > --- a/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
> > +++ b/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
> > @@ -2,6 +2,12 @@
> > /*
> > * arch_timer_edge_cases.c - Tests the aarch64 timer IRQ functionality.
> > *
> > + * The test validates some edge cases related to the virtual arch-timer:
> > + * - timers across counter roll-overs.
> > + * - moving counters ahead and behind pending timers.
> > + * - reprogramming timers.
> > + * - the same timer condition firing multiple times.
> > + *
> > * Some of these tests program timers and then wait indefinitely for them to
> > * fire. We rely on having a timeout mechanism in the "runner", like
> > * tools/testing/selftests/kselftest/runner.sh.
> > @@ -47,6 +53,9 @@
> > /* Number of runs. */
> > #define NR_TEST_ITERS_DEF 5
> >
> > +/* Default "long" wait test time in ms. */
> > +#define LONG_WAIT_TEST_MS 100
> > +
> > /* Shared with IRQ handler. */
> > volatile struct test_vcpu_shared_data {
> > int handled;
> > @@ -55,6 +64,8 @@ volatile struct test_vcpu_shared_data {
> > struct test_args {
> > /* Virtual or physical timer and counter tests. */
> > enum arch_timer timer;
> > + /* Delay used in the test_long_timer_delays test. */
> > + uint64_t long_wait_ms;
> > /* Number of iterations. */
> > int iterations;
> > };
> > @@ -62,6 +73,7 @@ struct test_args {
> > struct test_args test_args = {
> > /* Only testing VIRTUAL timers for now. */
> > .timer = VIRTUAL,
> > + .long_wait_ms = LONG_WAIT_TEST_MS,
> > .iterations = NR_TEST_ITERS_DEF,
> > };
> >
> > @@ -69,10 +81,25 @@ static int vtimer_irq, ptimer_irq;
> >
> > enum sync_cmd {
> > SET_REG_KVM_REG_ARM_TIMER_CNT,
> > + USERSPACE_USLEEP,
> > USERSPACE_SCHED_YIELD,
> > USERSPACE_MIGRATE_SELF,
> > };
> >
> > +typedef void (*sleep_method_t)(uint64_t usec);
> > +
> > +static void sleep_poll(uint64_t usec);
> > +static void sleep_sched_poll(uint64_t usec);
> > +static void sleep_in_userspace(uint64_t usec);
> > +static void sleep_migrate(uint64_t usec);
> > +
> > +sleep_method_t sleep_method[] = {
> > + sleep_poll,
> > + sleep_sched_poll,
> > + sleep_migrate,
> > + sleep_in_userspace,
> > +};
> > +
> > typedef void (*wait_method_t)(void);
> >
> > static void wait_for_non_spurious_irq(void);
> > @@ -125,6 +152,9 @@ static uint32_t next_pcpu(void)
> > #define USERSPACE_MIGRATE_VCPU() \
> > USERSPACE_CMD(USERSPACE_MIGRATE_SELF)
> >
> > +#define SLEEP_IN_USERSPACE(__usecs) \
> > + GUEST_SYNC_ARGS(USERSPACE_USLEEP, (__usecs), 0, 0, 0)
> > +
> > static void guest_irq_handler(struct ex_regs *regs)
> > {
> > unsigned int intid = gic_get_and_ack_irq();
> > @@ -227,6 +257,60 @@ static void wait_migrate_poll_for_irq(void)
> > poll_for_non_spurious_irq(true, USERSPACE_MIGRATE_SELF);
> > }
> >
> > +/*
> > + * Sleep for usec microseconds by polling in the guest (userspace=0) or in
> > + * userspace (e.g., userspace=1 and userspace_cmd=USERSPACE_SCHEDULE).
> > + */
> > +static void guest_poll(enum arch_timer timer, uint64_t usec,
> > + bool userspace, enum sync_cmd userspace_cmd)
> > +{
> > + uint64_t cycles = usec_to_cycles(usec);
> > + uint64_t start = timer_get_cntct(timer);
> > +
> > + /*
> > + * TODO: Take care of roll-overs. Right now, we are fine as we use the
> > + * virtual timer/counter for all of our roll-over tests, and so we can use
> > + * the physical counter for this function. Assert this (temporarily):
> > + */
> > + GUEST_ASSERT(test_args.timer == VIRTUAL && timer == PHYSICAL);
> > +
> > + while ((timer_get_cntct(timer) - start) < cycles) {
> > + if (userspace)
> > + USERSPACE_CMD(userspace_cmd);
> > + else
> > + cpu_relax();
> > + }
> > +}
> > +
> > +static void sleep_poll(uint64_t usec)
> > +{
> > + if (test_args.timer == VIRTUAL)
> > + guest_poll(PHYSICAL, usec, false, -1);
> > + else
> > + GUEST_ASSERT(0); /* Not implemented. */
> > +}
> > +
> > +static void sleep_sched_poll(uint64_t usec)
> > +{
> > + if (test_args.timer == VIRTUAL)
> > + guest_poll(PHYSICAL, usec, true, USERSPACE_SCHED_YIELD);
> > + else
> > + GUEST_ASSERT(0); /* Not implemented. */
> > +}
> > +
> > +static void sleep_migrate(uint64_t usec)
> > +{
> > + if (test_args.timer == VIRTUAL)
> > + guest_poll(PHYSICAL, usec, true, USERSPACE_MIGRATE_SELF);
> > + else
> > + GUEST_ASSERT(0); /* Not implemented. */
> > +}
>
> It may be worth mentioning that you're polling the counter that is *not*
> under test. I presume you do this so your sleep implementation is not
> affected by the test manipulations to the virtual counter.
>
> > +static void sleep_in_userspace(uint64_t usec)
> > +{
> > + SLEEP_IN_USERSPACE(usec);
> > +}
> > +
> > /*
> > * Reset the timer state to some nice values like the counter not being close
> > * to the edge, and the control register masked and disabled.
> > @@ -251,6 +335,156 @@ static void test_timer(uint64_t reset_cnt, uint64_t xval,
> > local_irq_enable();
> > }
> >
> > +/*
> > + * Set the counter to just below the edge (CVAL_MAX) and set a timer that
> > + * crosses it over.
> > + */
> > +static void test_timers_across_rollovers(void)
> > +{
> > + uint64_t edge_minus_5ms = CVAL_MAX - msec_to_cycles(5);
> > + int i;
> > +
> > + for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
> > + wait_method_t wm = wait_method[i];
> > +
> > + test_timer(edge_minus_5ms, msec_to_cycles(10), wm, TIMER_TVAL);
> > + test_timer(edge_minus_5ms, TVAL_MAX, wm, TIMER_TVAL);
> > + test_timer(edge_minus_5ms, TVAL_MIN, wm, TIMER_TVAL);
> > + }
> > +}
> > +
> > +/* Check that timer control masks actually mask a timer being fired. */
> > +static void test_timer_control_masked(sleep_method_t guest_sleep)
> > +{
> > + reset_timer_state(DEF_CNT);
> > +
> > + /* Local IRQs are not masked at this point. */
> > +
> > + program_timer_irq(-1, CTL_ENABLE | CTL_IMASK, TIMER_TVAL);
> > +
> > + /* Assume no IRQ after waiting TIMEOUT_NO_IRQ_US microseconds */
> > + guest_sleep(TIMEOUT_NO_IRQ_US);
> > +
> > + ASSERT_IRQS_HANDLED(0);
> > + timer_set_ctl(test_args.timer, CTL_IMASK);
> > +}
> > +
> > +/* Test masking/unmasking a timer using the timer mask (not the IRQ mask). */
> > +static void test_timer_control_mask_then_unmask(wait_method_t wm)
> > +{
> > + reset_timer_state(DEF_CNT);
> > + program_timer_irq(-1, CTL_ENABLE | CTL_IMASK, TIMER_TVAL);
> > +
> > + /* No IRQs because the timer is still masked. */
> > + ASSERT_IRQS_HANDLED(0);
> > +
> > + /* Unmask the timer, and then get an IRQ. */
> > + local_irq_disable();
> > + timer_set_ctl(test_args.timer, CTL_ENABLE);
> > + wm();
> > +
> > + ASSERT_IRQS_HANDLED(1);
> > + local_irq_enable();
> > +}
> > +
> > +/*
> > + * Set a timer at the edge, and wait with irqs masked for so long that the
> > + * counter rolls over and the "Timer Condition" doesn't apply anymore. We
> > + * should still get an IRQ.
>
> It looks like the test asserts we do not get an interrupt if the timer
> condition is no longer valid.
>
> > + */
> > +static void test_irq_masked_timer_across_rollover(sleep_method_t guest_sleep)
> > +{
> > + local_irq_disable();
> > + reset_timer_state(CVAL_MAX - msec_to_cycles(5));
> > +
> > + program_timer_irq(-1, CTL_ENABLE, TIMER_TVAL);
> > +
> > + GUEST_ASSERT(timer_get_ctl(test_args.timer) & CTL_ISTATUS);
> > + guest_sleep(msecs_to_usecs(10));
> > + GUEST_ASSERT((timer_get_ctl(test_args.timer) & CTL_ISTATUS) == 0);
> > +
> > + local_irq_enable();
> > + isb();
> > +
> > + ASSERT_IRQS_HANDLED(0);
> > +}
> > +
> > +static void test_control_masks(void)
> > +{
> > + int i;
> > +
> > + for (i = 0; i < ARRAY_SIZE(sleep_method); i++)
> > + test_timer_control_masked(sleep_method[i]);
> > +
> > + for (i = 0; i < ARRAY_SIZE(wait_method); i++)
> > + test_timer_control_mask_then_unmask(wait_method[i]);
> > +
> > + for (i = 0; i < ARRAY_SIZE(sleep_method); i++)
> > + test_irq_masked_timer_across_rollover(sleep_method[i]);
> > +}
> > +
> > +static void test_fire_a_timer_multiple_times(wait_method_t wm, int num)
> > +{
> > + int i;
> > +
> > + local_irq_disable();
> > + reset_timer_state(DEF_CNT);
> > +
> > + program_timer_irq(0, CTL_ENABLE, TIMER_TVAL);
> > +
> > + for (i = 1; i <= num; i++) {
> > + wm();
> > +
> > + /*
> > + * The IRQ handler masked and disabled the timer.
> > + * Enable and unmask it again.
> > + */
> > + timer_set_ctl(test_args.timer, CTL_ENABLE);
> > +
> > + ASSERT_IRQS_HANDLED(i);
> > + }
> > +
> > + local_irq_enable();
> > +}
> > +
> > +static void test_timers_fired_multiple_times(void)
> > +{
> > + int i;
> > +
> > + for (i = 0; i < ARRAY_SIZE(wait_method); i++)
> > + test_fire_a_timer_multiple_times(wait_method[i], 1000);
> > +}
> > +
> > +/* Set a timer for cval1 then reprogram it to cval1. */
>
> reprogram it to cval2
>
> > +static void test_reprogram_timer(wait_method_t wm, bool use_sched,
> > + uint64_t cnt, uint64_t cval1, uint64_t cval2)
> > +{
> > + local_irq_disable();
> > + reset_timer_state(cnt);
> > +
> > + program_timer_irq(cval1, CTL_ENABLE, TIMER_CVAL);
> > +
> > + if (use_sched)
> > + USERSPACE_SCHEDULE();
> > +
> > + timer_set_cval(test_args.timer, cval2);
> > +
> > + wm();
> > +
> > + local_irq_enable();
> > + ASSERT_IRQS_HANDLED(1);
> > +};
> > +
> > +static void test_reprogram_timers(void)
> > +{
> > + int i;
> > +
> > + for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
> > + test_reprogram_timer(wait_method[i], true, 0, CVAL_MAX, 0);
> > + test_reprogram_timer(wait_method[i], true, 0, CVAL_MAX, 0);
> > + }
> > +}
> > +
> > static void test_basic_functionality(void)
> > {
> > int32_t tval = (int32_t)msec_to_cycles(10);
> > @@ -306,7 +540,7 @@ static void timers_sanity_checks(bool use_sched)
> > if (use_sched)
> > USERSPACE_SCHEDULE();
> > /* We just need 1 cycle to pass. */
> > - isb();
> > + sleep_poll(1);
>
> Ah, this is exactly what I was asking for in patch 2. Is it possible to
> hoist the needed pieces into that patch so you always poll CNTPCT before
> reaching the assertion?
>
> --
> Thanks,
> Oliver
ACK on all the comments. Will send a v3.
Thanks,
Ricardo
>
> > GUEST_ASSERT(timer_get_tval(test_args.timer) < 0);
> >
> > local_irq_enable();
> > @@ -322,10 +556,86 @@ static void test_timers_sanity_checks(void)
> > timers_sanity_checks(true);
> > }
> >
> > +/*
> > + * Set the counter to cnt_1, the [c|t]val to xval, the counter to cnt_2, and
> > + * then wait for an IRQ.
> > + */
> > +static void test_set_counter_after_programming_timer(uint64_t cnt_1,
> > + uint64_t xval, uint64_t cnt_2, wait_method_t wm,
> > + enum timer_view tv)
> > +{
> > + local_irq_disable();
> > +
> > + SET_COUNTER(cnt_1, test_args.timer);
> > + timer_set_ctl(test_args.timer, CTL_IMASK);
> > +
> > + program_timer_irq(xval, CTL_ENABLE, tv);
> > + SET_COUNTER(cnt_2, test_args.timer);
> > + wm();
> > +
> > + ASSERT_IRQS_HANDLED(1);
> > + local_irq_enable();
> > +}
> > +
> > +/* Set a timer and then move the counter ahead of it. */
> > +static void test_move_counters_after_timers(void)
> > +{
> > + int i;
> > +
> > + for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
> > + wait_method_t wm = wait_method[i];
> > +
> > + test_set_counter_after_programming_timer(0, DEF_CNT,
> > + DEF_CNT + 1, wm, TIMER_CVAL);
> > + test_set_counter_after_programming_timer(CVAL_MAX, 1,
> > + 2, wm, TIMER_CVAL);
> > + test_set_counter_after_programming_timer(0, TVAL_MAX,
> > + (uint64_t)TVAL_MAX + 1, wm, TIMER_TVAL);
> > + }
> > +}
> > +
> > +static void test_timers_in_the_past(void)
> > +{
> > + int i;
> > +
> > + for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
> > + wait_method_t wm = wait_method[i];
> > +
> > + test_timer(DEF_CNT, DEF_CNT - 1, wm, TIMER_CVAL);
> > + test_timer(DEF_CNT, TVAL_MIN, wm, TIMER_TVAL);
> > + test_timer(CVAL_MAX, 0, wm, TIMER_CVAL);
> > + test_timer(DEF_CNT, 0, wm, TIMER_CVAL);
> > + test_timer(DEF_CNT, 0, wm, TIMER_TVAL);
> > + }
> > +}
> > +
> > +static void test_long_timer_delays(void)
> > +{
> > + uint64_t wait_ms = test_args.long_wait_ms;
> > + int i;
> > +
> > + for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
> > + wait_method_t wm = wait_method[i];
> > +
> > + test_timer(0, msec_to_cycles(wait_ms), wm, TIMER_CVAL);
> > + test_timer(0, msec_to_cycles(wait_ms), wm, TIMER_TVAL);
> > + }
> > +}
> > +
> > static void guest_run_iteration(void)
> > {
> > test_timers_sanity_checks();
> > test_basic_functionality();
> > +
> > + test_timers_in_the_past();
> > + test_timers_across_rollovers();
> > +
> > + test_move_counters_after_timers();
> > + test_reprogram_timers();
> > +
> > + test_control_masks();
> > +
> > + test_timers_fired_multiple_times();
> > }
> >
> > static void guest_code(void)
> > @@ -348,6 +658,7 @@ static void guest_code(void)
> > guest_run_iteration();
> > }
> >
> > + test_long_timer_delays();
> > GUEST_DONE();
> > }
> >
> > @@ -420,6 +731,9 @@ static void handle_sync(struct kvm_vm *vm, struct ucall *uc)
> > case SET_REG_KVM_REG_ARM_TIMER_CNT:
> > kvm_set_cntxct(vm, val, timer);
> > break;
> > + case USERSPACE_USLEEP:
> > + usleep(val);
> > + break;
> > case USERSPACE_SCHED_YIELD:
> > sched_yield();
> > break;
> > @@ -503,11 +817,13 @@ static struct kvm_vm *test_vm_create(void)
> >
> > static void test_print_help(char *name)
> > {
> > - pr_info("Usage: %s [-h] [-i iterations] [-w] [-p pcpu1,pcpu2]\n",
> > + pr_info("Usage: %s [-h] [-i iterations] [-p pcpu1,pcpu2] [-l long_wait_ms]\n",
> > name);
> > pr_info("\t-i: Number of iterations (default: %u)\n",
> > NR_TEST_ITERS_DEF);
> > pr_info("\t-p: Pair of pcpus for the vcpus to alternate between.\n");
> > + pr_info("\t-l: Delta (in ms) used for long wait time test (default: %u)\n",
> > + LONG_WAIT_TEST_MS);
> > pr_info("\t-h: Print this help message\n");
> > }
> >
> > @@ -515,7 +831,7 @@ static bool parse_args(int argc, char *argv[])
> > {
> > int opt, ret;
> >
> > - while ((opt = getopt(argc, argv, "hi:p:")) != -1) {
> > + while ((opt = getopt(argc, argv, "hi:p:l:")) != -1) {
> > switch (opt) {
> > case 'i':
> > test_args.iterations = atoi(optarg);
> > @@ -531,6 +847,13 @@ static bool parse_args(int argc, char *argv[])
> > goto err;
> > }
> > break;
> > + case 'l':
> > + test_args.long_wait_ms = atoi(optarg);
> > + if (test_args.long_wait_ms <= 0) {
> > + pr_info("Positive value needed for -l\n");
> > + goto err;
> > + }
> > + break;
> > case 'h':
> > default:
> > goto err;
> > --
> > 2.35.1.723.g4982287a31-goog
> >
* Re: [PATCH v2 3/3] KVM: arm64: selftests: add edge cases tests into arch_timer_edge_cases
@ 2022-03-18 20:54 ` Ricardo Koller
0 siblings, 0 replies; 20+ messages in thread
From: Ricardo Koller @ 2022-03-18 20:54 UTC (permalink / raw)
To: Oliver Upton; +Cc: kvm, maz, pbonzini, kvmarm
On Thu, Mar 17, 2022 at 07:27:50AM +0000, Oliver Upton wrote:
> Hi Ricardo,
>
> On Wed, Mar 16, 2022 at 09:51:27PM -0700, Ricardo Koller wrote:
> > Add tests that validate some edge cases related to the virtual
> > arch-timer:
> > - timers in the past, including TVALs that roll over from 0.
> > - timers across counter roll-overs.
> > - moving counters ahead and behind pending timers.
> > - reprogramming timers.
> > - the same timer condition firing multiple times.
> > - masking/unmasking using the timer control mask.
> >
> > Reviewed-by: Reiji Watanabe <reijiw@google.com>
> > Signed-off-by: Ricardo Koller <ricarkol@google.com>
> > ---
> > .../kvm/aarch64/arch_timer_edge_cases.c | 329 +++++++++++++++++-
> > 1 file changed, 326 insertions(+), 3 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c b/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
> > index dc399482e35d..0eeb72767bea 100644
> > --- a/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
> > +++ b/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
> > @@ -2,6 +2,12 @@
> > /*
> > * arch_timer_edge_cases.c - Tests the aarch64 timer IRQ functionality.
> > *
> > + * The test validates some edge cases related to the virtual arch-timer:
> > + * - timers across counter roll-overs.
> > + * - moving counters ahead and behind pending timers.
> > + * - reprogramming timers.
> > + * - the same timer condition firing multiple times.
> > + *
> > * Some of these tests program timers and then wait indefinitely for them to
> > * fire. We rely on having a timeout mechanism in the "runner", like
> > * tools/testing/selftests/kselftest/runner.sh.
> > @@ -47,6 +53,9 @@
> > /* Number of runs. */
> > #define NR_TEST_ITERS_DEF 5
> >
> > +/* Default "long" wait test time in ms. */
> > +#define LONG_WAIT_TEST_MS 100
> > +
> > /* Shared with IRQ handler. */
> > volatile struct test_vcpu_shared_data {
> > int handled;
> > @@ -55,6 +64,8 @@ volatile struct test_vcpu_shared_data {
> > struct test_args {
> > /* Virtual or physical timer and counter tests. */
> > enum arch_timer timer;
> > + /* Delay used in the test_long_timer_delays test. */
> > + uint64_t long_wait_ms;
> > /* Number of iterations. */
> > int iterations;
> > };
> > @@ -62,6 +73,7 @@ struct test_args {
> > struct test_args test_args = {
> > /* Only testing VIRTUAL timers for now. */
> > .timer = VIRTUAL,
> > + .long_wait_ms = LONG_WAIT_TEST_MS,
> > .iterations = NR_TEST_ITERS_DEF,
> > };
> >
> > @@ -69,10 +81,25 @@ static int vtimer_irq, ptimer_irq;
> >
> > enum sync_cmd {
> > SET_REG_KVM_REG_ARM_TIMER_CNT,
> > + USERSPACE_USLEEP,
> > USERSPACE_SCHED_YIELD,
> > USERSPACE_MIGRATE_SELF,
> > };
> >
> > +typedef void (*sleep_method_t)(uint64_t usec);
> > +
> > +static void sleep_poll(uint64_t usec);
> > +static void sleep_sched_poll(uint64_t usec);
> > +static void sleep_in_userspace(uint64_t usec);
> > +static void sleep_migrate(uint64_t usec);
> > +
> > +sleep_method_t sleep_method[] = {
> > + sleep_poll,
> > + sleep_sched_poll,
> > + sleep_migrate,
> > + sleep_in_userspace,
> > +};
> > +
> > typedef void (*wait_method_t)(void);
> >
> > static void wait_for_non_spurious_irq(void);
> > @@ -125,6 +152,9 @@ static uint32_t next_pcpu(void)
> > #define USERSPACE_MIGRATE_VCPU() \
> > USERSPACE_CMD(USERSPACE_MIGRATE_SELF)
> >
> > +#define SLEEP_IN_USERSPACE(__usecs) \
> > + GUEST_SYNC_ARGS(USERSPACE_USLEEP, (__usecs), 0, 0, 0)
> > +
> > static void guest_irq_handler(struct ex_regs *regs)
> > {
> > unsigned int intid = gic_get_and_ack_irq();
> > @@ -227,6 +257,60 @@ static void wait_migrate_poll_for_irq(void)
> > poll_for_non_spurious_irq(true, USERSPACE_MIGRATE_SELF);
> > }
> >
> > +/*
> > + * Sleep for usec microseconds by polling in the guest (userspace=0) or in
> > + * userspace (e.g., userspace=1 and userspace_cmd=USERSPACE_SCHEDULE).
> > + */
> > +static void guest_poll(enum arch_timer timer, uint64_t usec,
> > + bool userspace, enum sync_cmd userspace_cmd)
> > +{
> > + uint64_t cycles = usec_to_cycles(usec);
> > + uint64_t start = timer_get_cntct(timer);
> > +
> > + /*
> > + * TODO: Take care of roll-overs. Right now, we are fine as we use the
> > + * virtual timer/counter for all of our roll-over tests, and so we can use
> > + * the physical counter for this function. Assert this (temporarily):
> > + */
> > + GUEST_ASSERT(test_args.timer == VIRTUAL && timer == PHYSICAL);
> > +
> > + while ((timer_get_cntct(timer) - start) < cycles) {
> > + if (userspace)
> > + USERSPACE_CMD(userspace_cmd);
> > + else
> > + cpu_relax();
> > + }
> > +}
> > +
> > +static void sleep_poll(uint64_t usec)
> > +{
> > + if (test_args.timer == VIRTUAL)
> > + guest_poll(PHYSICAL, usec, false, -1);
> > + else
> > + GUEST_ASSERT(0); /* Not implemented. */
> > +}
> > +
> > +static void sleep_sched_poll(uint64_t usec)
> > +{
> > + if (test_args.timer == VIRTUAL)
> > + guest_poll(PHYSICAL, usec, true, USERSPACE_SCHED_YIELD);
> > + else
> > + GUEST_ASSERT(0); /* Not implemented. */
> > +}
> > +
> > +static void sleep_migrate(uint64_t usec)
> > +{
> > + if (test_args.timer == VIRTUAL)
> > + guest_poll(PHYSICAL, usec, true, USERSPACE_MIGRATE_SELF);
> > + else
> > + GUEST_ASSERT(0); /* Not implemented. */
> > +}
>
> It may be worth mentioning that you're polling the counter that is *not*
> under test. I presume you do this so your sleep implementation is not
> affected by the test's manipulations of the virtual counter.
>
> > +static void sleep_in_userspace(uint64_t usec)
> > +{
> > + SLEEP_IN_USERSPACE(usec);
> > +}
> > +
> > /*
> > * Reset the timer state to some nice values like the counter not being close
> > * to the edge, and the control register masked and disabled.
> > @@ -251,6 +335,156 @@ static void test_timer(uint64_t reset_cnt, uint64_t xval,
> > local_irq_enable();
> > }
> >
> > +/*
> > + * Set the counter to just below the edge (CVAL_MAX) and set a timer that
> > + * crosses it over.
> > + */
> > +static void test_timers_across_rollovers(void)
> > +{
> > + uint64_t edge_minus_5ms = CVAL_MAX - msec_to_cycles(5);
> > + int i;
> > +
> > + for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
> > + wait_method_t wm = wait_method[i];
> > +
> > + test_timer(edge_minus_5ms, msec_to_cycles(10), wm, TIMER_TVAL);
> > + test_timer(edge_minus_5ms, TVAL_MAX, wm, TIMER_TVAL);
> > + test_timer(edge_minus_5ms, TVAL_MIN, wm, TIMER_TVAL);
> > + }
> > +}
> > +
> > +/* Check that timer control masks actually mask a timer being fired. */
> > +static void test_timer_control_masked(sleep_method_t guest_sleep)
> > +{
> > + reset_timer_state(DEF_CNT);
> > +
> > + /* Local IRQs are not masked at this point. */
> > +
> > + program_timer_irq(-1, CTL_ENABLE | CTL_IMASK, TIMER_TVAL);
> > +
> > + /* Assume no IRQ after waiting TIMEOUT_NO_IRQ_US microseconds */
> > + guest_sleep(TIMEOUT_NO_IRQ_US);
> > +
> > + ASSERT_IRQS_HANDLED(0);
> > + timer_set_ctl(test_args.timer, CTL_IMASK);
> > +}
> > +
> > +/* Test masking/unmasking a timer using the timer mask (not the IRQ mask). */
> > +static void test_timer_control_mask_then_unmask(wait_method_t wm)
> > +{
> > + reset_timer_state(DEF_CNT);
> > + program_timer_irq(-1, CTL_ENABLE | CTL_IMASK, TIMER_TVAL);
> > +
> > + /* No IRQs because the timer is still masked. */
> > + ASSERT_IRQS_HANDLED(0);
> > +
> > + /* Unmask the timer, and then get an IRQ. */
> > + local_irq_disable();
> > + timer_set_ctl(test_args.timer, CTL_ENABLE);
> > + wm();
> > +
> > + ASSERT_IRQS_HANDLED(1);
> > + local_irq_enable();
> > +}
> > +
> > +/*
> > + * Set a timer at the edge, and wait with irqs masked for so long that the
> > + * counter rolls over and the "Timer Condition" doesn't apply anymore. We
> > + * should still get an IRQ.
>
> It looks like the test asserts we do not get an interrupt if the timer
> condition is no longer valid.
>
> > + */
> > +static void test_irq_masked_timer_across_rollover(sleep_method_t guest_sleep)
> > +{
> > + local_irq_disable();
> > + reset_timer_state(CVAL_MAX - msec_to_cycles(5));
> > +
> > + program_timer_irq(-1, CTL_ENABLE, TIMER_TVAL);
> > +
> > + GUEST_ASSERT(timer_get_ctl(test_args.timer) & CTL_ISTATUS);
> > + guest_sleep(msecs_to_usecs(10));
> > + GUEST_ASSERT((timer_get_ctl(test_args.timer) & CTL_ISTATUS) == 0);
> > +
> > + local_irq_enable();
> > + isb();
> > +
> > + ASSERT_IRQS_HANDLED(0);
> > +}
> > +
> > +static void test_control_masks(void)
> > +{
> > + int i;
> > +
> > + for (i = 0; i < ARRAY_SIZE(sleep_method); i++)
> > + test_timer_control_masked(sleep_method[i]);
> > +
> > + for (i = 0; i < ARRAY_SIZE(wait_method); i++)
> > + test_timer_control_mask_then_unmask(wait_method[i]);
> > +
> > + for (i = 0; i < ARRAY_SIZE(sleep_method); i++)
> > + test_irq_masked_timer_across_rollover(sleep_method[i]);
> > +}
> > +
> > +static void test_fire_a_timer_multiple_times(wait_method_t wm, int num)
> > +{
> > + int i;
> > +
> > + local_irq_disable();
> > + reset_timer_state(DEF_CNT);
> > +
> > + program_timer_irq(0, CTL_ENABLE, TIMER_TVAL);
> > +
> > + for (i = 1; i <= num; i++) {
> > + wm();
> > +
> > + /*
> > + * The IRQ handler masked and disabled the timer.
> > + * Enable and unmask it again.
> > + */
> > + timer_set_ctl(test_args.timer, CTL_ENABLE);
> > +
> > + ASSERT_IRQS_HANDLED(i);
> > + }
> > +
> > + local_irq_enable();
> > +}
> > +
> > +static void test_timers_fired_multiple_times(void)
> > +{
> > + int i;
> > +
> > + for (i = 0; i < ARRAY_SIZE(wait_method); i++)
> > + test_fire_a_timer_multiple_times(wait_method[i], 1000);
> > +}
> > +
> > +/* Set a timer for cval1 then reprogram it to cval1. */
>
> reprogram it to cval2
>
> > +static void test_reprogram_timer(wait_method_t wm, bool use_sched,
> > + uint64_t cnt, uint64_t cval1, uint64_t cval2)
> > +{
> > + local_irq_disable();
> > + reset_timer_state(cnt);
> > +
> > + program_timer_irq(cval1, CTL_ENABLE, TIMER_CVAL);
> > +
> > + if (use_sched)
> > + USERSPACE_SCHEDULE();
> > +
> > + timer_set_cval(test_args.timer, cval2);
> > +
> > + wm();
> > +
> > + local_irq_enable();
> > + ASSERT_IRQS_HANDLED(1);
> > +};
> > +
> > +static void test_reprogram_timers(void)
> > +{
> > + int i;
> > +
> > + for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
> > + test_reprogram_timer(wait_method[i], true, 0, CVAL_MAX, 0);
> > + test_reprogram_timer(wait_method[i], true, 0, CVAL_MAX, 0);
> > + }
> > +}
> > +
> > static void test_basic_functionality(void)
> > {
> > int32_t tval = (int32_t)msec_to_cycles(10);
> > @@ -306,7 +540,7 @@ static void timers_sanity_checks(bool use_sched)
> > if (use_sched)
> > USERSPACE_SCHEDULE();
> > /* We just need 1 cycle to pass. */
> > - isb();
> > + sleep_poll(1);
>
> Ah, this is exactly what I was asking for in patch 2. Is it possible to
> hoist the needed pieces into that patch so you always poll CNTPCT before
> reaching the assertion?
>
> --
> Thanks,
> Oliver
ACK on all the comments. Will send a v3.
Thanks,
Ricardo
>
> > GUEST_ASSERT(timer_get_tval(test_args.timer) < 0);
> >
> > local_irq_enable();
> > @@ -322,10 +556,86 @@ static void test_timers_sanity_checks(void)
> > timers_sanity_checks(true);
> > }
> >
> > +/*
> > + * Set the counter to cnt_1, the [c|t]val to xval, the counter to cnt_2, and
> > + * then wait for an IRQ.
> > + */
> > +static void test_set_counter_after_programming_timer(uint64_t cnt_1,
> > + uint64_t xval, uint64_t cnt_2, wait_method_t wm,
> > + enum timer_view tv)
> > +{
> > + local_irq_disable();
> > +
> > + SET_COUNTER(cnt_1, test_args.timer);
> > + timer_set_ctl(test_args.timer, CTL_IMASK);
> > +
> > + program_timer_irq(xval, CTL_ENABLE, tv);
> > + SET_COUNTER(cnt_2, test_args.timer);
> > + wm();
> > +
> > + ASSERT_IRQS_HANDLED(1);
> > + local_irq_enable();
> > +}
> > +
> > +/* Set a timer and then move the counter ahead of it. */
> > +static void test_move_counters_after_timers(void)
> > +{
> > + int i;
> > +
> > + for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
> > + wait_method_t wm = wait_method[i];
> > +
> > + test_set_counter_after_programming_timer(0, DEF_CNT,
> > + DEF_CNT + 1, wm, TIMER_CVAL);
> > + test_set_counter_after_programming_timer(CVAL_MAX, 1,
> > + 2, wm, TIMER_CVAL);
> > + test_set_counter_after_programming_timer(0, TVAL_MAX,
> > + (uint64_t)TVAL_MAX + 1, wm, TIMER_TVAL);
> > + }
> > +}
> > +
> > +static void test_timers_in_the_past(void)
> > +{
> > + int i;
> > +
> > + for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
> > + wait_method_t wm = wait_method[i];
> > +
> > + test_timer(DEF_CNT, DEF_CNT - 1, wm, TIMER_CVAL);
> > + test_timer(DEF_CNT, TVAL_MIN, wm, TIMER_TVAL);
> > + test_timer(CVAL_MAX, 0, wm, TIMER_CVAL);
> > + test_timer(DEF_CNT, 0, wm, TIMER_CVAL);
> > + test_timer(DEF_CNT, 0, wm, TIMER_TVAL);
> > + }
> > +}
> > +
> > +static void test_long_timer_delays(void)
> > +{
> > + uint64_t wait_ms = test_args.long_wait_ms;
> > + int i;
> > +
> > + for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
> > + wait_method_t wm = wait_method[i];
> > +
> > + test_timer(0, msec_to_cycles(wait_ms), wm, TIMER_CVAL);
> > + test_timer(0, msec_to_cycles(wait_ms), wm, TIMER_TVAL);
> > + }
> > +}
> > +
> > static void guest_run_iteration(void)
> > {
> > test_timers_sanity_checks();
> > test_basic_functionality();
> > +
> > + test_timers_in_the_past();
> > + test_timers_across_rollovers();
> > +
> > + test_move_counters_after_timers();
> > + test_reprogram_timers();
> > +
> > + test_control_masks();
> > +
> > + test_timers_fired_multiple_times();
> > }
> >
> > static void guest_code(void)
> > @@ -348,6 +658,7 @@ static void guest_code(void)
> > guest_run_iteration();
> > }
> >
> > + test_long_timer_delays();
> > GUEST_DONE();
> > }
> >
> > @@ -420,6 +731,9 @@ static void handle_sync(struct kvm_vm *vm, struct ucall *uc)
> > case SET_REG_KVM_REG_ARM_TIMER_CNT:
> > kvm_set_cntxct(vm, val, timer);
> > break;
> > + case USERSPACE_USLEEP:
> > + usleep(val);
> > + break;
> > case USERSPACE_SCHED_YIELD:
> > sched_yield();
> > break;
> > @@ -503,11 +817,13 @@ static struct kvm_vm *test_vm_create(void)
> >
> > static void test_print_help(char *name)
> > {
> > - pr_info("Usage: %s [-h] [-i iterations] [-w] [-p pcpu1,pcpu2]\n",
> > + pr_info("Usage: %s [-h] [-i iterations] [-p pcpu1,pcpu2] [-l long_wait_ms]\n",
> > name);
> > pr_info("\t-i: Number of iterations (default: %u)\n",
> > NR_TEST_ITERS_DEF);
> > pr_info("\t-p: Pair of pcpus for the vcpus to alternate between.\n");
> > + pr_info("\t-l: Delta (in ms) used for long wait time test (default: %u)\n",
> > + LONG_WAIT_TEST_MS);
> > pr_info("\t-h: Print this help message\n");
> > }
> >
> > @@ -515,7 +831,7 @@ static bool parse_args(int argc, char *argv[])
> > {
> > int opt, ret;
> >
> > - while ((opt = getopt(argc, argv, "hi:p:")) != -1) {
> > + while ((opt = getopt(argc, argv, "hi:p:l:")) != -1) {
> > switch (opt) {
> > case 'i':
> > test_args.iterations = atoi(optarg);
> > @@ -531,6 +847,13 @@ static bool parse_args(int argc, char *argv[])
> > goto err;
> > }
> > break;
> > + case 'l':
> > + test_args.long_wait_ms = atoi(optarg);
> > + if (test_args.long_wait_ms <= 0) {
> > + pr_info("Positive value needed for -l\n");
> > + goto err;
> > + }
> > + break;
> > case 'h':
> > default:
> > goto err;
> > --
> > 2.35.1.723.g4982287a31-goog
> >
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
end of thread, other threads:[~2022-03-18 20:54 UTC | newest]
Thread overview: 20+ messages
2022-03-17 4:51 [PATCH v2 0/3] KVM: arm64: selftests: Add edge cases tests for the arch timer Ricardo Koller
2022-03-17 4:51 ` Ricardo Koller
2022-03-17 4:51 ` [PATCH v2 1/3] KVM: arm64: selftests: add timer_get_tval() lib function Ricardo Koller
2022-03-17 4:51 ` Ricardo Koller
2022-03-17 4:51 ` [PATCH v2 2/3] KVM: arm64: selftests: add arch_timer_edge_cases Ricardo Koller
2022-03-17 4:51 ` Ricardo Koller
2022-03-17 6:44 ` Oliver Upton
2022-03-17 6:44 ` Oliver Upton
2022-03-17 8:52 ` Marc Zyngier
2022-03-17 8:52 ` Marc Zyngier
2022-03-17 16:56 ` Oliver Upton
2022-03-17 16:56 ` Oliver Upton
2022-03-18 20:49 ` Ricardo Koller
2022-03-18 20:49 ` Ricardo Koller
2022-03-17 4:51 ` [PATCH v2 3/3] KVM: arm64: selftests: add edge cases tests into arch_timer_edge_cases Ricardo Koller
2022-03-17 4:51 ` Ricardo Koller
2022-03-17 7:27 ` Oliver Upton
2022-03-17 7:27 ` Oliver Upton
2022-03-18 20:54 ` Ricardo Koller
2022-03-18 20:54 ` Ricardo Koller