* [PATCH v4 0/4] KVM: arm64: selftests: Add edge cases tests for the arch timer
From: Ricardo Koller @ 2022-04-04 21:46 UTC
To: kvm, kvmarm, drjones
Cc: pbonzini, maz, alexandru.elisei, eric.auger, oupton, reijiw, rananta, Ricardo Koller
Add a new selftest that validates some edge cases related to the virtual
arch-timer, for example:
- timers across counter roll-overs.
- moving counters ahead of and behind pending timers.
- the same timer condition firing multiple times.
The tests check the state of the IRQs along the way (e.g., pending when they
are supposed to be) and stress things a bit by waiting for interrupts while
re-scheduling the vcpu (with sched_yield()), migrating the vcpu between
cores, or sleeping in userspace (with usleep()).
The first commit adds a timer utility function, the second a CPU-affinity
helper, the third the selftest itself with some sanity checks and basic
timer tests, and the fourth the actual edge case tests (like forcing
rollovers).
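As a concrete flavor of "forcing rollovers": the test moves the counter to
just below its maximum and programs a timer whose deadline lands past the
wrap. A condensed guest-side sketch, using the helpers the series introduces
(CVAL_MAX, SET_COUNTER() and the wait functions are defined in the third
patch):

	/* Counter starts ~5ms below the 64-bit edge; the timer crosses it. */
	uint64_t edge_minus_5ms = CVAL_MAX - msec_to_cycles(5);

	SET_COUNTER(edge_minus_5ms, VIRTUAL);
	timer_set_tval(VIRTUAL, msec_to_cycles(10));	/* deadline past the wrap */
	timer_set_ctl(VIRTUAL, CTL_ENABLE);
	wait_for_non_spurious_irq();			/* must still fire */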
v3 -> v4:
- Fix is_cpu_online to also check whether the cpu is allowed to run the current
process (on top of being online). Renamed is_cpu_online() to
is_cpu_eligible_to_run(). [Sean]
v2 -> v3: https://lore.kernel.org/kvmarm/20220322172319.2943101-1-ricarkol@google.com/
- Add missing isb when polling for IRQ being handled. [Oliver, Marc]
- Wait for a counter pass by polling on it (instead of the previous isb).
[Oliver, Marc]
- Edits in some comments. [Oliver]
- Dropping the msecs_to_usecs macro. [Oliver]
- Skipping test if desired pcpus are not online. This needed adding a
library function (is_cpu_online). [Oliver]
v1 -> v2: https://lore.kernel.org/kvmarm/20220317045127.124602-1-ricarkol@google.com/
- Remove the checks for timers firing within some margin; only leave the
checks for timers not firing ahead of time. Also remove the tests that
depend on timers firing within some margin. [Oliver, Marc]
- Collect R-b tag from Oliver (first commit). [Oliver]
- Multiple nits: replace wfi_ functions with wait_, reduce use of macros,
drop typedefs, use IAR_SPURIOUS from header, move some comments and
functions to the top. [Oliver]
- Don't fail if the test has a single cpu available. [Oliver]
- Don't fail if there's no GICv3 available. [Oliver]
v1: https://lore.kernel.org/kvmarm/20220302172144.2734258-1-ricarkol@google.com/
Ricardo Koller (4):
KVM: arm64: selftests: add timer_get_tval() lib function
KVM: selftests: add is_cpu_eligible_to_run() utility function
KVM: arm64: selftests: add arch_timer_edge_cases
KVM: arm64: selftests: add edge cases tests into arch_timer_edge_cases
tools/testing/selftests/kvm/.gitignore | 1 +
tools/testing/selftests/kvm/Makefile | 1 +
.../kvm/aarch64/arch_timer_edge_cases.c | 904 ++++++++++++++++++
.../kvm/include/aarch64/arch_timer.h | 18 +-
.../testing/selftests/kvm/include/test_util.h | 2 +
tools/testing/selftests/kvm/lib/test_util.c | 20 +-
6 files changed, 944 insertions(+), 2 deletions(-)
create mode 100644 tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
--
2.35.1.1094.g7c7d902a7c-goog
* [PATCH v4 1/4] KVM: arm64: selftests: add timer_get_tval() lib function
From: Ricardo Koller @ 2022-04-04 21:46 UTC
To: kvm, kvmarm, drjones
Cc: pbonzini, maz, alexandru.elisei, eric.auger, oupton, reijiw, rananta, Ricardo Koller
Add timer_get_tval() to the arch timer library functions in
selftests/kvm. Bonus: change the set_tval function to take an int32_t
(tval is signed).
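To see why the signedness matters: TVAL reads back as the signed 32-bit
distance to the deadline (roughly CVAL - CNT), so a negative value means the
timer condition has already been met. A minimal guest-side sketch using the
new getter:

	/* A deadline in the past reads back as a negative tval. */
	timer_set_tval(VIRTUAL, -1);
	GUEST_ASSERT(timer_get_tval(VIRTUAL) < 0);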
Reviewed-by: Oliver Upton <oupton@google.com>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
.../selftests/kvm/include/aarch64/arch_timer.h | 18 +++++++++++++++++-
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/include/aarch64/arch_timer.h b/tools/testing/selftests/kvm/include/aarch64/arch_timer.h
index cb7c03de3a21..93f35a4fc1aa 100644
--- a/tools/testing/selftests/kvm/include/aarch64/arch_timer.h
+++ b/tools/testing/selftests/kvm/include/aarch64/arch_timer.h
@@ -79,7 +79,7 @@ static inline uint64_t timer_get_cval(enum arch_timer timer)
return 0;
}
-static inline void timer_set_tval(enum arch_timer timer, uint32_t tval)
+static inline void timer_set_tval(enum arch_timer timer, int32_t tval)
{
switch (timer) {
case VIRTUAL:
@@ -95,6 +95,22 @@ static inline void timer_set_tval(enum arch_timer timer, uint32_t tval)
isb();
}
+static inline int32_t timer_get_tval(enum arch_timer timer)
+{
+ isb();
+ switch (timer) {
+ case VIRTUAL:
+ return (int32_t)read_sysreg(cntv_tval_el0);
+ case PHYSICAL:
+ return (int32_t)read_sysreg(cntp_tval_el0);
+ default:
+ GUEST_ASSERT_1(0, timer);
+ }
+
+ /* We should not reach here */
+ return 0;
+}
+
static inline void timer_set_ctl(enum arch_timer timer, uint32_t ctl)
{
switch (timer) {
--
2.35.1.1094.g7c7d902a7c-goog
* [PATCH v4 2/4] KVM: selftests: add is_cpu_eligible_to_run() utility function
From: Ricardo Koller @ 2022-04-04 21:46 UTC
To: kvm, kvmarm, drjones
Cc: pbonzini, maz, alexandru.elisei, eric.auger, oupton, reijiw, rananta, Ricardo Koller
Add an is_cpu_eligible_to_run() utility function, which checks whether the
current process, or one of its threads, is eligible to run on a particular
CPU. This information is obtained via sched_getaffinity().
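A sketch of the intended call site (this is how the arch timer test later in
this series validates a user-supplied pcpu before pinning to it):

	if (!is_cpu_eligible_to_run(pcpu)) {
		print_skip("Invalid pcpu %d", pcpu);
		exit(KSFT_SKIP);
	}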
Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
.../testing/selftests/kvm/include/test_util.h | 2 ++
tools/testing/selftests/kvm/lib/test_util.c | 20 ++++++++++++++++++-
2 files changed, 21 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index 99e0dcdc923f..a7653f369b6c 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -143,4 +143,6 @@ static inline void *align_ptr_up(void *x, size_t size)
return (void *)align_up((unsigned long)x, size);
}
+bool is_cpu_eligible_to_run(int pcpu);
+
#endif /* SELFTEST_KVM_TEST_UTIL_H */
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index 6d23878bbfe1..7813a68333c0 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -4,6 +4,7 @@
*
* Copyright (C) 2020, Google LLC.
*/
+#define _GNU_SOURCE
#include <assert.h>
#include <ctype.h>
@@ -13,7 +14,9 @@
#include <sys/stat.h>
#include <sys/syscall.h>
#include <linux/mman.h>
-#include "linux/kernel.h"
+#include <linux/kernel.h>
+#include <sched.h>
+#include <sys/sysinfo.h>
#include "test_util.h"
@@ -334,3 +337,18 @@ long get_run_delay(void)
return val[1];
}
+
+bool is_cpu_eligible_to_run(int pcpu)
+{
+ cpu_set_t cpuset;
+ long i, nprocs;
+
+ nprocs = get_nprocs_conf();
+ sched_getaffinity(0, sizeof(cpu_set_t), &cpuset);
+ for (i = 0; i < nprocs; i++) {
+ if (i == pcpu)
+ return CPU_ISSET(i, &cpuset);
+ }
+
+ return false;
+}
--
2.35.1.1094.g7c7d902a7c-goog
* [PATCH v4 3/4] KVM: arm64: selftests: add arch_timer_edge_cases
From: Ricardo Koller @ 2022-04-04 21:46 UTC
To: kvm, kvmarm, drjones
Cc: pbonzini, maz, alexandru.elisei, eric.auger, oupton, reijiw, rananta, Ricardo Koller
Add an arch_timer edge-cases selftest. For now, just add some basic
sanity checks, and some stress conditions (like waiting for the timers
while re-scheduling the vcpu). The next commit will add the actual edge
case tests.
This test fails without commit a867e9d0cc1 ("KVM: arm64: Don't miss pending
interrupts for suspended vCPU").
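The re-scheduling stress works through the ucall sync channel: while waiting
for the IRQ, the guest repeatedly bounces out to the runner, which calls
sched_yield() (or migrates itself) before re-entering the vcpu. Condensed
from poll_for_non_spurious_irq() below:

	/* Guest: spin until the handler sees the IRQ, yielding each pass. */
	int h = shared_data.handled;

	local_irq_enable();
	while (h == shared_data.handled)
		USERSPACE_CMD(USERSPACE_SCHED_YIELD);	/* host runs sched_yield() */
	local_irq_disable();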
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Reviewed-by: Raghavendra Rao Ananta <rananta@google.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
tools/testing/selftests/kvm/.gitignore | 1 +
tools/testing/selftests/kvm/Makefile | 1 +
.../kvm/aarch64/arch_timer_edge_cases.c | 608 ++++++++++++++++++
3 files changed, 610 insertions(+)
create mode 100644 tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index dce7de7755e6..8f7e0123dd28 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -1,5 +1,6 @@
# SPDX-License-Identifier: GPL-2.0-only
/aarch64/arch_timer
+/aarch64/arch_timer_edge_cases
/aarch64/debug-exceptions
/aarch64/get-reg-list
/aarch64/psci_cpu_on_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 0e4926bc9a58..17a0f32cfc91 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -101,6 +101,7 @@ TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
TEST_GEN_PROGS_x86_64 += system_counter_offset_test
TEST_GEN_PROGS_aarch64 += aarch64/arch_timer
+TEST_GEN_PROGS_aarch64 += aarch64/arch_timer_edge_cases
TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions
TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list
TEST_GEN_PROGS_aarch64 += aarch64/psci_cpu_on_test
diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c b/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
new file mode 100644
index 000000000000..5f1e9c050b99
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
@@ -0,0 +1,608 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * arch_timer_edge_cases.c - Tests the aarch64 timer IRQ functionality.
+ *
+ * Some of these tests program timers and then wait indefinitely for them to
+ * fire. We rely on having a timeout mechanism in the "runner", like
+ * tools/testing/selftests/kselftest/runner.sh.
+ *
+ * Copyright (c) 2022, Google LLC.
+ */
+
+#define _GNU_SOURCE
+
+#include <stdlib.h>
+#include <pthread.h>
+#include <linux/kvm.h>
+#include <linux/sizes.h>
+#include <linux/bitmap.h>
+#include <sched.h>
+#include <sys/sysinfo.h>
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "delay.h"
+#include "arch_timer.h"
+#include "gic.h"
+#include "vgic.h"
+
+#define VCPUID 0
+
+#define CVAL_MAX (~0ULL)
+/* tval is a signed 32-bit int. */
+#define TVAL_MAX INT_MAX
+#define TVAL_MIN INT_MIN
+
+#define GICD_BASE_GPA 0x8000000ULL
+#define GICR_BASE_GPA 0x80A0000ULL
+
+/* How long to wait before deciding that no IRQ is coming. */
+#define TIMEOUT_NO_IRQ_US (50 * 1000LL)
+
+/* A nice counter value to use as the starting one for most tests. */
+#define DEF_CNT (CVAL_MAX / 2)
+
+/* Number of runs. */
+#define NR_TEST_ITERS_DEF 5
+
+/* Shared with IRQ handler. */
+volatile struct test_vcpu_shared_data {
+ int handled;
+} shared_data;
+
+struct test_args {
+ /* Virtual or physical timer and counter tests. */
+ enum arch_timer timer;
+ /* Number of iterations. */
+ int iterations;
+};
+
+struct test_args test_args = {
+ /* Only testing VIRTUAL timers for now. */
+ .timer = VIRTUAL,
+ .iterations = NR_TEST_ITERS_DEF,
+};
+
+static int vtimer_irq, ptimer_irq;
+
+enum sync_cmd {
+ SET_REG_KVM_REG_ARM_TIMER_CNT,
+ USERSPACE_SCHED_YIELD,
+ USERSPACE_MIGRATE_SELF,
+};
+
+typedef void (*wait_method_t)(void);
+
+static void wait_for_non_spurious_irq(void);
+static void wait_poll_for_irq(void);
+static void wait_sched_poll_for_irq(void);
+static void wait_migrate_poll_for_irq(void);
+
+wait_method_t wait_method[] = {
+ wait_for_non_spurious_irq,
+ wait_poll_for_irq,
+ wait_sched_poll_for_irq,
+ wait_migrate_poll_for_irq,
+};
+
+enum timer_view {
+ TIMER_CVAL,
+ TIMER_TVAL,
+};
+
+/* Pair of pcpus for the test to alternate between. */
+static int pcpus[2] = {-1, -1};
+static int pcpus_idx;
+
+static uint32_t next_pcpu(void)
+{
+ pcpus_idx = 1 - pcpus_idx;
+ return pcpus[pcpus_idx];
+}
+
+#define ASSERT_IRQS_HANDLED_2(__nr, arg1, arg2) do { \
+ int __h = shared_data.handled; \
+ GUEST_ASSERT_4(__h == (__nr), __h, __nr, arg1, arg2); \
+} while (0)
+
+#define ASSERT_IRQS_HANDLED_1(__nr, arg1) \
+ ASSERT_IRQS_HANDLED_2((__nr), arg1, 0)
+
+#define ASSERT_IRQS_HANDLED(__nr) \
+ ASSERT_IRQS_HANDLED_2((__nr), 0, 0)
+
+#define SET_COUNTER(__ctr, __t) \
+ GUEST_SYNC_ARGS(SET_REG_KVM_REG_ARM_TIMER_CNT, (__ctr), (__t), 0, 0)
+
+#define USERSPACE_CMD(__cmd) \
+ GUEST_SYNC_ARGS(__cmd, 0, 0, 0, 0)
+
+#define USERSPACE_SCHEDULE() \
+ USERSPACE_CMD(USERSPACE_SCHED_YIELD)
+
+#define USERSPACE_MIGRATE_VCPU() \
+ USERSPACE_CMD(USERSPACE_MIGRATE_SELF)
+
+static void guest_irq_handler(struct ex_regs *regs)
+{
+ unsigned int intid = gic_get_and_ack_irq();
+ uint64_t cnt, cval;
+ uint32_t ctl;
+
+ if (intid == IAR_SPURIOUS)
+ return;
+
+ GUEST_ASSERT(gic_irq_get_pending(intid));
+
+ ctl = timer_get_ctl(test_args.timer);
+ cnt = timer_get_cntct(test_args.timer);
+ cval = timer_get_cval(test_args.timer);
+
+ GUEST_ASSERT_1(ctl & CTL_ISTATUS, ctl);
+
+ /* Disable and mask the timer. */
+ timer_set_ctl(test_args.timer, CTL_IMASK);
+ GUEST_ASSERT(!gic_irq_get_pending(intid));
+
+ shared_data.handled++;
+
+ /* The IRQ should not fire before time. */
+ GUEST_ASSERT_2(cnt >= cval, cnt, cval);
+
+ gic_set_eoi(intid);
+}
+
+/*
+ * Sleep for usec microseconds by polling in the guest (userspace=0) or in
+ * userspace (e.g., userspace=1 and userspace_cmd=USERSPACE_SCHED_YIELD).
+ */
+static void guest_poll(enum arch_timer timer, uint64_t usec,
+ bool userspace, enum sync_cmd userspace_cmd)
+{
+ uint64_t cycles = usec_to_cycles(usec);
+ uint64_t start = timer_get_cntct(timer);
+
+ /*
+ * TODO: Take care of roll-overs. Right now, we are fine as we use the
+ * virtual timer/counter for all of our roll-over tests, and so we can use
+ * the physical counter for this function. Assert this (temporarily):
+ */
+ GUEST_ASSERT(test_args.timer == VIRTUAL && timer == PHYSICAL);
+
+ while ((timer_get_cntct(timer) - start) < cycles) {
+ if (userspace)
+ USERSPACE_CMD(userspace_cmd);
+ else
+ cpu_relax();
+ }
+}
+
+static void sleep_poll(uint64_t usec)
+{
+ if (test_args.timer == VIRTUAL)
+ guest_poll(PHYSICAL, usec, false, -1);
+ else
+ GUEST_ASSERT(0); /* Not implemented. */
+}
+
+static void program_timer_irq(uint64_t xval, uint32_t ctl, enum timer_view tv)
+{
+ shared_data.handled = 0;
+
+ switch (tv) {
+ case TIMER_CVAL:
+ timer_set_cval(test_args.timer, xval);
+ timer_set_ctl(test_args.timer, ctl);
+ break;
+ case TIMER_TVAL:
+ timer_set_tval(test_args.timer, xval);
+ timer_set_ctl(test_args.timer, ctl);
+ break;
+ default:
+ GUEST_ASSERT(0);
+ }
+}
+
+/*
+ * Should be called with IRQs masked.
+ */
+static void wait_for_non_spurious_irq(void)
+{
+ int h = shared_data.handled;
+
+ /* Wait for the IRQ handler to process an interrupt */
+ while (h == shared_data.handled) {
+ asm volatile("wfi\n"
+ "msr daifclr, #2\n"
+ /* handle IRQ */
+ "isb\n"
+ "msr daifset, #2\n"
+ : : : "memory");
+ }
+}
+
+/*
+ * Wait for a non-spurious IRQ by polling in the guest (userspace=0) or in
+ * userspace (e.g., userspace=1 and userspace_cmd=USERSPACE_SCHED_YIELD).
+ *
+ * Should be called with IRQs masked. Unlike the wfi-based wait above this
+ * is not strictly required, but it keeps all the wait methods consistent.
+ */
+static void poll_for_non_spurious_irq(bool userspace,
+ enum sync_cmd userspace_cmd)
+{
+ int h;
+
+ h = shared_data.handled;
+
+ local_irq_enable();
+ while (h == shared_data.handled) {
+ if (userspace)
+ USERSPACE_CMD(userspace_cmd);
+ else
+ cpu_relax();
+ }
+ local_irq_disable();
+}
+
+static void wait_poll_for_irq(void)
+{
+ poll_for_non_spurious_irq(false, -1);
+}
+
+static void wait_sched_poll_for_irq(void)
+{
+ poll_for_non_spurious_irq(true, USERSPACE_SCHED_YIELD);
+}
+
+static void wait_migrate_poll_for_irq(void)
+{
+ poll_for_non_spurious_irq(true, USERSPACE_MIGRATE_SELF);
+}
+
+/*
+ * Reset the timer state to some nice values like the counter not being close
+ * to the edge, and the control register masked and disabled.
+ */
+static void reset_timer_state(uint64_t cnt)
+{
+ SET_COUNTER(cnt, test_args.timer);
+ timer_set_ctl(test_args.timer, CTL_IMASK);
+}
+
+static void test_timer(uint64_t reset_cnt, uint64_t xval,
+ wait_method_t wm, enum timer_view tv)
+{
+ local_irq_disable();
+
+ reset_timer_state(reset_cnt);
+
+ program_timer_irq(xval, CTL_ENABLE, tv);
+ wm();
+
+ ASSERT_IRQS_HANDLED_1(1, tv);
+ local_irq_enable();
+}
+
+static void test_basic_functionality(void)
+{
+ int32_t tval = (int32_t)msec_to_cycles(10);
+ uint64_t cval;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
+ wait_method_t wm = wait_method[i];
+
+ cval = DEF_CNT + msec_to_cycles(10);
+
+ test_timer(DEF_CNT, cval, wm, TIMER_CVAL);
+ test_timer(DEF_CNT, tval, wm, TIMER_TVAL);
+ }
+}
+
+/*
+ * This test checks basic timer behavior without actually firing timers, things
+ * like: the relationship between cval and tval, tval down-counting.
+ */
+static void timers_sanity_checks(bool use_sched)
+{
+ uint64_t cval;
+
+ reset_timer_state(DEF_CNT);
+
+ local_irq_disable();
+
+ /* cval in the past */
+ timer_set_cval(test_args.timer, timer_get_cntct(test_args.timer) - 1);
+ if (use_sched)
+ USERSPACE_SCHEDULE();
+ GUEST_ASSERT(timer_get_tval(test_args.timer) < 0);
+
+ /* tval in the past */
+ timer_set_tval(test_args.timer, -1);
+ if (use_sched)
+ USERSPACE_SCHEDULE();
+ GUEST_ASSERT(timer_get_cval(test_args.timer) <
+ timer_get_cntct(test_args.timer));
+
+ /* tval larger than TVAL_MAX. */
+ cval = timer_get_cntct(test_args.timer) + 2ULL * TVAL_MAX - 1;
+ timer_set_cval(test_args.timer, cval);
+ if (use_sched)
+ USERSPACE_SCHEDULE();
+ GUEST_ASSERT(timer_get_tval(test_args.timer) <= 0);
+ GUEST_ASSERT_EQ(cval, timer_get_cval(test_args.timer));
+
+ /* tval should keep down-counting from 0 to -1. */
+ SET_COUNTER(DEF_CNT, test_args.timer);
+ timer_set_tval(test_args.timer, 0);
+ if (use_sched)
+ USERSPACE_SCHEDULE();
+ /* We just need 1 counter tick to pass. */
+ sleep_poll(1);
+ GUEST_ASSERT(timer_get_tval(test_args.timer) < 0);
+
+ local_irq_enable();
+
+ /* Mask and disable any pending timer. */
+ timer_set_ctl(test_args.timer, CTL_IMASK);
+}
+
+static void test_timers_sanity_checks(void)
+{
+ timers_sanity_checks(false);
+ /* Check how KVM saves/restores these edge-case values. */
+ timers_sanity_checks(true);
+}
+
+static void guest_run_iteration(void)
+{
+ test_timers_sanity_checks();
+ test_basic_functionality();
+}
+
+static void guest_code(void)
+{
+ int i;
+
+ local_irq_disable();
+
+ gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
+
+ timer_set_ctl(test_args.timer, CTL_IMASK);
+ timer_set_ctl(PHYSICAL, CTL_IMASK);
+
+ gic_irq_enable(vtimer_irq);
+ gic_irq_enable(ptimer_irq);
+ local_irq_enable();
+
+ for (i = 0; i < test_args.iterations; i++) {
+ GUEST_SYNC(i);
+ guest_run_iteration();
+ }
+
+ GUEST_DONE();
+}
+
+static void migrate_self(uint32_t new_pcpu)
+{
+ int ret;
+ cpu_set_t cpuset;
+ pthread_t thread;
+
+ thread = pthread_self();
+
+ CPU_ZERO(&cpuset);
+ CPU_SET(new_pcpu, &cpuset);
+
+ pr_debug("Migrating from %u to %u\n", sched_getcpu(), new_pcpu);
+
+ ret = pthread_setaffinity_np(thread, sizeof(cpuset), &cpuset);
+
+ TEST_ASSERT(ret == 0, "Failed to migrate to pCPU: %u; ret: %d\n",
+ new_pcpu, ret);
+}
+
+/*
+ * Set the two pcpus that the test will use to alternate between. Default to
+ * use the current cpu as pcpus[0] and the one right after in the affinity set
+ * as pcpus[1].
+ */
+static void set_default_pcpus(void)
+{
+ int curr = sched_getcpu();
+ cpu_set_t cpuset;
+ long i;
+
+ pcpus[0] = curr;
+
+ sched_getaffinity(getpid(), sizeof(cpu_set_t), &cpuset);
+ for (i = (curr + 1) % CPU_SETSIZE; i != curr; i = (i + 1) % CPU_SETSIZE) {
+ if (CPU_ISSET(i, &cpuset)) {
+ pcpus[1] = i;
+ break;
+ }
+ }
+
+ TEST_ASSERT(pcpus[1] != -1, "Couldn't find a second pcpu.");
+ pr_debug("pcpus: %d %d\n", pcpus[0], pcpus[1]);
+}
+
+static void kvm_set_cntxct(struct kvm_vm *vm, uint64_t cnt, enum arch_timer timer)
+{
+ TEST_ASSERT(timer == VIRTUAL,
+ "Only supports setting the virtual counter for now.");
+
+ struct kvm_one_reg reg = {
+ .id = KVM_REG_ARM_TIMER_CNT,
+ .addr = (uint64_t)&cnt,
+ };
+ vcpu_set_reg(vm, 0, &reg);
+}
+
+static void handle_sync(struct kvm_vm *vm, struct ucall *uc)
+{
+ enum sync_cmd cmd = uc->args[1];
+ uint64_t val = uc->args[2];
+ enum arch_timer timer = uc->args[3];
+
+ switch (cmd) {
+ case SET_REG_KVM_REG_ARM_TIMER_CNT:
+ kvm_set_cntxct(vm, val, timer);
+ break;
+ case USERSPACE_SCHED_YIELD:
+ sched_yield();
+ break;
+ case USERSPACE_MIGRATE_SELF:
+ migrate_self(next_pcpu());
+ break;
+ default:
+ break;
+ }
+}
+
+static void test_run(struct kvm_vm *vm)
+{
+ struct ucall uc;
+ int stage = 0;
+
+ /* Start on the first pcpu. */
+ migrate_self(pcpus[0]);
+
+ sync_global_to_guest(vm, test_args);
+
+ for (stage = 0; ; stage++) {
+ vcpu_run(vm, VCPUID);
+ switch (get_ucall(vm, VCPUID, &uc)) {
+ case UCALL_SYNC:
+ handle_sync(vm, &uc);
+ break;
+ case UCALL_DONE:
+ goto out;
+ case UCALL_ABORT:
+ TEST_FAIL("%s at %s:%ld\n\tvalues: %lu, %lu; %lu",
+ (const char *)uc.args[0], __FILE__, uc.args[1],
+ uc.args[2], uc.args[3], uc.args[4]);
+ goto out;
+ default:
+ TEST_FAIL("Unexpected guest exit\n");
+ }
+ }
+
+out:
+ return;
+}
+
+static void test_init_timer_irq(struct kvm_vm *vm)
+{
+ int vcpu_fd = vcpu_get_fd(vm, VCPUID);
+
+ kvm_device_access(vcpu_fd, KVM_ARM_VCPU_TIMER_CTRL,
+ KVM_ARM_VCPU_TIMER_IRQ_PTIMER, &ptimer_irq, false);
+ kvm_device_access(vcpu_fd, KVM_ARM_VCPU_TIMER_CTRL,
+ KVM_ARM_VCPU_TIMER_IRQ_VTIMER, &vtimer_irq, false);
+
+ sync_global_to_guest(vm, ptimer_irq);
+ sync_global_to_guest(vm, vtimer_irq);
+
+ pr_debug("ptimer_irq: %d; vtimer_irq: %d\n", ptimer_irq, vtimer_irq);
+}
+
+static struct kvm_vm *test_vm_create(void)
+{
+ struct kvm_vm *vm;
+ int ret;
+
+ vm = vm_create_default(VCPUID, 0, guest_code);
+
+ vm_init_descriptor_tables(vm);
+ vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
+
+ vcpu_init_descriptor_tables(vm, 0);
+
+ ucall_init(vm, NULL);
+ test_init_timer_irq(vm);
+ ret = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+ if (ret < 0) {
+ print_skip("Failed to create vgic-v3");
+ exit(KSFT_SKIP);
+ }
+
+ return vm;
+}
+
+static void test_print_help(char *name)
+{
+ pr_info("Usage: %s [-h] [-i iterations] [-w] [-p pcpu1,pcpu2]\n",
+ name);
+ pr_info("\t-i: Number of iterations (default: %u)\n",
+ NR_TEST_ITERS_DEF);
+ pr_info("\t-p: Pair of pcpus for the vcpus to alternate between.\n");
+ pr_info("\t-h: Print this help message\n");
+}
+
+static bool parse_args(int argc, char *argv[])
+{
+ int opt, ret;
+
+ while ((opt = getopt(argc, argv, "hi:p:")) != -1) {
+ switch (opt) {
+ case 'i':
+ test_args.iterations = atoi(optarg);
+ if (test_args.iterations <= 0) {
+ print_skip("Positive value needed for -i");
+ goto err;
+ }
+ break;
+ case 'p':
+ ret = sscanf(optarg, "%d,%d", &pcpus[0], &pcpus[1]);
+ if (ret != 2) {
+ print_skip("Could not parse the pcpus string");
+ goto err;
+ }
+ if (!is_cpu_eligible_to_run(pcpus[0])) {
+ print_skip("Invalid pcpu %d", pcpus[0]);
+ goto err;
+ }
+ if (!is_cpu_eligible_to_run(pcpus[1])) {
+ print_skip("Invalid pcpu %d", pcpus[1]);
+ goto err;
+ }
+ break;
+ case 'h':
+ default:
+ goto err;
+ }
+ }
+
+ return true;
+
+err:
+ test_print_help(argv[0]);
+ return false;
+}
+
+int main(int argc, char *argv[])
+{
+ struct kvm_vm *vm;
+
+ /* Tell stdout not to buffer its content */
+ setbuf(stdout, NULL);
+
+ if (!parse_args(argc, argv))
+ exit(KSFT_SKIP);
+
+ if (get_nprocs() < 2) {
+ print_skip("Need at least 2 online pcpus");
+ exit(KSFT_SKIP);
+ }
+
+ if (pcpus[0] == -1 || pcpus[1] == -1)
+ set_default_pcpus();
+
+ vm = test_vm_create();
+ test_run(vm);
+ kvm_vm_free(vm);
+
+ return 0;
+}
--
2.35.1.1094.g7c7d902a7c-goog
* [PATCH v4 4/4] KVM: arm64: selftests: add edge cases tests into arch_timer_edge_cases
From: Ricardo Koller @ 2022-04-04 21:46 UTC
To: kvm, kvmarm, drjones
Cc: pbonzini, maz, alexandru.elisei, eric.auger, oupton, reijiw, rananta, Ricardo Koller
Add tests that validate some edge cases related to the virtual
arch-timer:
- timers in the past, including TVALs that roll over from 0 (see the
sketch below).
- timers across counter roll-overs.
- moving counters ahead of and behind pending timers.
- reprogramming timers.
- the same timer condition firing multiple times.
- masking/unmasking using the timer control mask.
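For example, the "timers in the past" case checks that an already-due timer
still delivers its IRQ. A condensed sketch of what test_timers_in_the_past()
does for a zero TVAL, using the helpers from the previous patch:

	/* A TVAL of 0 is already due: the IRQ must fire right away. */
	local_irq_disable();
	reset_timer_state(DEF_CNT);
	program_timer_irq(0, CTL_ENABLE, TIMER_TVAL);
	wait_for_non_spurious_irq();
	ASSERT_IRQS_HANDLED(1);
	local_irq_enable();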
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
.../kvm/aarch64/arch_timer_edge_cases.c | 300 +++++++++++++++++-
1 file changed, 298 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c b/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
index 5f1e9c050b99..fa8d1dca9118 100644
--- a/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
+++ b/tools/testing/selftests/kvm/aarch64/arch_timer_edge_cases.c
@@ -2,6 +2,12 @@
/*
* arch_timer_edge_cases.c - Tests the aarch64 timer IRQ functionality.
*
+ * The test validates some edge cases related to the virtual arch-timer:
+ * - timers across counter roll-overs.
+ * - moving counters ahead and behind pending timers.
+ * - reprogramming timers.
+ * - the same timer condition firing multiple times.
+ *
* Some of these tests program timers and then wait indefinitely for them to
* fire. We rely on having a timeout mechanism in the "runner", like
* tools/testing/selftests/kselftest/runner.sh.
@@ -45,6 +51,9 @@
/* Number of runs. */
#define NR_TEST_ITERS_DEF 5
+/* Default "long" wait test time in ms. */
+#define LONG_WAIT_TEST_MS 100
+
/* Shared with IRQ handler. */
volatile struct test_vcpu_shared_data {
int handled;
@@ -53,6 +62,8 @@ volatile struct test_vcpu_shared_data {
struct test_args {
/* Virtual or physical timer and counter tests. */
enum arch_timer timer;
+ /* Delay used in the test_long_timer_delays test. */
+ uint64_t long_wait_ms;
/* Number of iterations. */
int iterations;
};
@@ -60,6 +71,7 @@ struct test_args {
struct test_args test_args = {
/* Only testing VIRTUAL timers for now. */
.timer = VIRTUAL,
+ .long_wait_ms = LONG_WAIT_TEST_MS,
.iterations = NR_TEST_ITERS_DEF,
};
@@ -67,10 +79,25 @@ static int vtimer_irq, ptimer_irq;
enum sync_cmd {
SET_REG_KVM_REG_ARM_TIMER_CNT,
+ USERSPACE_USLEEP,
USERSPACE_SCHED_YIELD,
USERSPACE_MIGRATE_SELF,
};
+typedef void (*sleep_method_t)(uint64_t usec);
+
+static void sleep_poll(uint64_t usec);
+static void sleep_sched_poll(uint64_t usec);
+static void sleep_in_userspace(uint64_t usec);
+static void sleep_migrate(uint64_t usec);
+
+sleep_method_t sleep_method[] = {
+ sleep_poll,
+ sleep_sched_poll,
+ sleep_migrate,
+ sleep_in_userspace,
+};
+
typedef void (*wait_method_t)(void);
static void wait_for_non_spurious_irq(void);
@@ -123,6 +150,9 @@ static uint32_t next_pcpu(void)
#define USERSPACE_MIGRATE_VCPU() \
USERSPACE_CMD(USERSPACE_MIGRATE_SELF)
+#define SLEEP_IN_USERSPACE(__usecs) \
+ GUEST_SYNC_ARGS(USERSPACE_USLEEP, (__usecs), 0, 0, 0)
+
static void guest_irq_handler(struct ex_regs *regs)
{
unsigned int intid = gic_get_and_ack_irq();
@@ -260,6 +290,33 @@ static void wait_migrate_poll_for_irq(void)
poll_for_non_spurious_irq(true, USERSPACE_MIGRATE_SELF);
}
+/*
+ * Both sleep_sched_poll and sleep_migrate temporarily check that the timer in
+ * use (test_args.timer) is the virtual one. That's because the sleep
+ * implementation polls on the physical one.
+ */
+
+static void sleep_sched_poll(uint64_t usec)
+{
+ if (test_args.timer == VIRTUAL)
+ guest_poll(PHYSICAL, usec, true, USERSPACE_SCHED_YIELD);
+ else
+ GUEST_ASSERT(0); /* Not implemented. */
+}
+
+static void sleep_migrate(uint64_t usec)
+{
+ if (test_args.timer == VIRTUAL)
+ guest_poll(PHYSICAL, usec, true, USERSPACE_MIGRATE_SELF);
+ else
+ GUEST_ASSERT(0); /* Not implemented. */
+}
+
+static void sleep_in_userspace(uint64_t usec)
+{
+ SLEEP_IN_USERSPACE(usec);
+}
+
/*
* Reset the timer state to some nice values like the counter not being close
* to the edge, and the control register masked and disabled.
@@ -284,6 +341,156 @@ static void test_timer(uint64_t reset_cnt, uint64_t xval,
local_irq_enable();
}
+/*
+ * Set the counter to just below the edge (CVAL_MAX) and set a timer that
+ * crosses it over.
+ */
+static void test_timers_across_rollovers(void)
+{
+ uint64_t edge_minus_5ms = CVAL_MAX - msec_to_cycles(5);
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
+ wait_method_t wm = wait_method[i];
+
+ test_timer(edge_minus_5ms, msec_to_cycles(10), wm, TIMER_TVAL);
+ test_timer(edge_minus_5ms, TVAL_MAX, wm, TIMER_TVAL);
+ test_timer(edge_minus_5ms, TVAL_MIN, wm, TIMER_TVAL);
+ }
+}
+
+/* Check that timer control masks actually mask a timer being fired. */
+static void test_timer_control_masked(sleep_method_t guest_sleep)
+{
+ reset_timer_state(DEF_CNT);
+
+ /* Local IRQs are not masked at this point. */
+
+ program_timer_irq(-1, CTL_ENABLE | CTL_IMASK, TIMER_TVAL);
+
+ /* Assume no IRQ after waiting TIMEOUT_NO_IRQ_US microseconds */
+ guest_sleep(TIMEOUT_NO_IRQ_US);
+
+ ASSERT_IRQS_HANDLED(0);
+ timer_set_ctl(test_args.timer, CTL_IMASK);
+}
+
+/* Test masking/unmasking a timer using the timer mask (not the IRQ mask). */
+static void test_timer_control_mask_then_unmask(wait_method_t wm)
+{
+ reset_timer_state(DEF_CNT);
+ program_timer_irq(-1, CTL_ENABLE | CTL_IMASK, TIMER_TVAL);
+
+ /* No IRQs because the timer is still masked. */
+ ASSERT_IRQS_HANDLED(0);
+
+ /* Unmask the timer, and then get an IRQ. */
+ local_irq_disable();
+ timer_set_ctl(test_args.timer, CTL_ENABLE);
+ wm();
+
+ ASSERT_IRQS_HANDLED(1);
+ local_irq_enable();
+}
+
+/*
+ * Set a timer at the edge, and wait with irqs masked for so long that the
+ * counter rolls over and the "Timer Condition" doesn't apply anymore. We
+ * should not get an IRQ fired.
+ */
+static void test_irq_masked_timer_across_rollover(sleep_method_t guest_sleep)
+{
+ local_irq_disable();
+ reset_timer_state(CVAL_MAX - msec_to_cycles(5));
+
+ program_timer_irq(-1, CTL_ENABLE, TIMER_TVAL);
+
+ GUEST_ASSERT(timer_get_ctl(test_args.timer) & CTL_ISTATUS);
+ guest_sleep(10 * 1000LL);
+ GUEST_ASSERT((timer_get_ctl(test_args.timer) & CTL_ISTATUS) == 0);
+
+ local_irq_enable();
+ isb();
+
+ ASSERT_IRQS_HANDLED(0);
+}
+
+static void test_control_masks(void)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(sleep_method); i++)
+ test_timer_control_masked(sleep_method[i]);
+
+ for (i = 0; i < ARRAY_SIZE(wait_method); i++)
+ test_timer_control_mask_then_unmask(wait_method[i]);
+
+ for (i = 0; i < ARRAY_SIZE(sleep_method); i++)
+ test_irq_masked_timer_across_rollover(sleep_method[i]);
+}
+
+static void test_fire_a_timer_multiple_times(wait_method_t wm, int num)
+{
+ int i;
+
+ local_irq_disable();
+ reset_timer_state(DEF_CNT);
+
+ program_timer_irq(0, CTL_ENABLE, TIMER_TVAL);
+
+ for (i = 1; i <= num; i++) {
+ wm();
+
+ /*
+ * The IRQ handler masked and disabled the timer.
+ * Enable and unmask it again.
+ */
+ timer_set_ctl(test_args.timer, CTL_ENABLE);
+
+ ASSERT_IRQS_HANDLED(i);
+ }
+
+ local_irq_enable();
+}
+
+static void test_timers_fired_multiple_times(void)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(wait_method); i++)
+ test_fire_a_timer_multiple_times(wait_method[i], 1000);
+}
+
+/* Set a timer for cval1 then reprogram it to cval2. */
+static void test_reprogram_timer(wait_method_t wm, bool use_sched,
+ uint64_t cnt, uint64_t cval1, uint64_t cval2)
+{
+ local_irq_disable();
+ reset_timer_state(cnt);
+
+ program_timer_irq(cval1, CTL_ENABLE, TIMER_CVAL);
+
+ if (use_sched)
+ USERSPACE_SCHEDULE();
+
+ timer_set_cval(test_args.timer, cval2);
+
+ wm();
+
+ local_irq_enable();
+ ASSERT_IRQS_HANDLED(1);
+}
+
+static void test_reprogram_timers(void)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
+ test_reprogram_timer(wait_method[i], true, 0, CVAL_MAX, 0);
+ test_reprogram_timer(wait_method[i], false, 0, CVAL_MAX, 0);
+ }
+}
+
static void test_basic_functionality(void)
{
int32_t tval = (int32_t)msec_to_cycles(10);
@@ -355,10 +562,86 @@ static void test_timers_sanity_checks(void)
timers_sanity_checks(true);
}
+/*
+ * Set the counter to cnt_1, the [c|t]val to xval, the counter to cnt_2, and
+ * then wait for an IRQ.
+ */
+static void test_set_counter_after_programming_timer(uint64_t cnt_1,
+ uint64_t xval, uint64_t cnt_2, wait_method_t wm,
+ enum timer_view tv)
+{
+ local_irq_disable();
+
+ SET_COUNTER(cnt_1, test_args.timer);
+ timer_set_ctl(test_args.timer, CTL_IMASK);
+
+ program_timer_irq(xval, CTL_ENABLE, tv);
+ SET_COUNTER(cnt_2, test_args.timer);
+ wm();
+
+ ASSERT_IRQS_HANDLED(1);
+ local_irq_enable();
+}
+
+/* Set a timer and then move the counter ahead of it. */
+static void test_move_counters_after_timers(void)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
+ wait_method_t wm = wait_method[i];
+
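+		/*
+		 * Jump the counter past a pending cval, move it across the
+		 * rollover, and jump it past the largest possible tval.
+		 */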
+ test_set_counter_after_programming_timer(0, DEF_CNT,
+ DEF_CNT + 1, wm, TIMER_CVAL);
+ test_set_counter_after_programming_timer(CVAL_MAX, 1,
+ 2, wm, TIMER_CVAL);
+ test_set_counter_after_programming_timer(0, TVAL_MAX,
+ (uint64_t)TVAL_MAX + 1, wm, TIMER_TVAL);
+ }
+}
+
+static void test_timers_in_the_past(void)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
+ wait_method_t wm = wait_method[i];
+
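+		/* In every case the timer condition already holds when enabled. */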
+ test_timer(DEF_CNT, DEF_CNT - 1, wm, TIMER_CVAL);
+ test_timer(DEF_CNT, TVAL_MIN, wm, TIMER_TVAL);
+ test_timer(CVAL_MAX, 0, wm, TIMER_CVAL);
+ test_timer(DEF_CNT, 0, wm, TIMER_CVAL);
+ test_timer(DEF_CNT, 0, wm, TIMER_TVAL);
+ }
+}
+
+static void test_long_timer_delays(void)
+{
+ uint64_t wait_ms = test_args.long_wait_ms;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(wait_method); i++) {
+ wait_method_t wm = wait_method[i];
+
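+		/* Deadlines long enough that the vcpu actually has to wait. */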
+ test_timer(0, msec_to_cycles(wait_ms), wm, TIMER_CVAL);
+ test_timer(0, msec_to_cycles(wait_ms), wm, TIMER_TVAL);
+ }
+}
+
static void guest_run_iteration(void)
{
test_timers_sanity_checks();
test_basic_functionality();
+
+ test_timers_in_the_past();
+ test_timers_across_rollovers();
+
+ test_move_counters_after_timers();
+ test_reprogram_timers();
+
+ test_control_masks();
+
+ test_timers_fired_multiple_times();
}
static void guest_code(void)
@@ -381,6 +664,7 @@ static void guest_code(void)
guest_run_iteration();
}
+ test_long_timer_delays();
GUEST_DONE();
}
@@ -450,6 +734,9 @@ static void handle_sync(struct kvm_vm *vm, struct ucall *uc)
case SET_REG_KVM_REG_ARM_TIMER_CNT:
kvm_set_cntxct(vm, val, timer);
break;
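+	/*
+	 * Sleep on the vcpu thread, so timers can expire while the
+	 * vcpu is out of the guest.
+	 */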
+ case USERSPACE_USLEEP:
+ usleep(val);
+ break;
case USERSPACE_SCHED_YIELD:
sched_yield();
break;
@@ -533,11 +820,13 @@ static struct kvm_vm *test_vm_create(void)
static void test_print_help(char *name)
{
- pr_info("Usage: %s [-h] [-i iterations] [-w] [-p pcpu1,pcpu2]\n",
+ pr_info("Usage: %s [-h] [-i iterations] [-p pcpu1,pcpu2] [-l long_wait_ms]\n",
name);
pr_info("\t-i: Number of iterations (default: %u)\n",
NR_TEST_ITERS_DEF);
pr_info("\t-p: Pair of pcpus for the vcpus to alternate between.\n");
+ pr_info("\t-l: Delta (in ms) used for long wait time test (default: %u)\n",
+ LONG_WAIT_TEST_MS);
pr_info("\t-h: Print this help message\n");
}
@@ -545,7 +834,7 @@ static bool parse_args(int argc, char *argv[])
{
int opt, ret;
- while ((opt = getopt(argc, argv, "hi:p:")) != -1) {
+ while ((opt = getopt(argc, argv, "hi:p:l:")) != -1) {
switch (opt) {
case 'i':
test_args.iterations = atoi(optarg);
@@ -569,6 +858,13 @@ static bool parse_args(int argc, char *argv[])
goto err;
}
break;
+		case 'l':
+			ret = atoi(optarg);
+			if (ret <= 0) {
+				print_skip("Positive value needed for -l");
+				goto err;
+			}
+			test_args.long_wait_ms = ret;
+			break;
case 'h':
default:
goto err;
--
2.35.1.1094.g7c7d902a7c-goog
^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH v4 2/4] KVM: selftests: add is_cpu_eligible_to_run() utility function
2022-04-04 21:46 ` Ricardo Koller
@ 2022-04-05 0:13 ` Oliver Upton
-1 siblings, 0 replies; 14+ messages in thread
From: Oliver Upton @ 2022-04-05 0:13 UTC (permalink / raw)
To: Ricardo Koller; +Cc: kvm, maz, pbonzini, kvmarm
Hi Ricardo,
On Mon, Apr 04, 2022 at 02:46:40PM -0700, Ricardo Koller wrote:
> Add is_cpu_eligible_to_run() utility function, which checks whether the current
> process, or one of its threads, is eligible to run on a particular CPU.
> This information is obtained using sched_getaffinity.
>
> Signed-off-by: Ricardo Koller <ricarkol@google.com>
> ---
> .../testing/selftests/kvm/include/test_util.h | 2 ++
> tools/testing/selftests/kvm/lib/test_util.c | 20 ++++++++++++++++++-
> 2 files changed, 21 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
> index 99e0dcdc923f..a7653f369b6c 100644
> --- a/tools/testing/selftests/kvm/include/test_util.h
> +++ b/tools/testing/selftests/kvm/include/test_util.h
> @@ -143,4 +143,6 @@ static inline void *align_ptr_up(void *x, size_t size)
> return (void *)align_up((unsigned long)x, size);
> }
>
> +bool is_cpu_eligible_to_run(int pcpu);
> +
> #endif /* SELFTEST_KVM_TEST_UTIL_H */
> diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
> index 6d23878bbfe1..7813a68333c0 100644
> --- a/tools/testing/selftests/kvm/lib/test_util.c
> +++ b/tools/testing/selftests/kvm/lib/test_util.c
> @@ -4,6 +4,7 @@
> *
> * Copyright (C) 2020, Google LLC.
> */
> +#define _GNU_SOURCE
>
> #include <assert.h>
> #include <ctype.h>
> @@ -13,7 +14,9 @@
> #include <sys/stat.h>
> #include <sys/syscall.h>
> #include <linux/mman.h>
> -#include "linux/kernel.h"
> +#include <linux/kernel.h>
> +#include <sched.h>
> +#include <sys/sysinfo.h>
>
> #include "test_util.h"
>
> @@ -334,3 +337,18 @@ long get_run_delay(void)
>
> return val[1];
> }
> +
> +bool is_cpu_eligible_to_run(int pcpu)
> +{
> + cpu_set_t cpuset;
> + long i, nprocs;
> +
> + nprocs = get_nprocs_conf();
> + sched_getaffinity(0, sizeof(cpu_set_t), &cpuset);
> + for (i = 0; i < nprocs; i++) {
> + if (i == pcpu)
> + return CPU_ISSET(i, &cpuset);
> + }
I don't think you need the loop and can just do CPU_ISSET(pcpu, &cpuset),
right?
--
Thanks,
Oliver
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v4 2/4] KVM: selftests: add is_cpu_eligible_to_run() utility function
2022-04-05 0:13 ` Oliver Upton
@ 2022-04-05 1:56 ` Ricardo Koller
-1 siblings, 0 replies; 14+ messages in thread
From: Ricardo Koller @ 2022-04-05 1:56 UTC (permalink / raw)
To: Oliver Upton; +Cc: kvm, maz, pbonzini, kvmarm
On Tue, Apr 05, 2022 at 12:13:33AM +0000, Oliver Upton wrote:
> Hi Ricardo,
>
> On Mon, Apr 04, 2022 at 02:46:40PM -0700, Ricardo Koller wrote:
> > Add is_cpu_eligible_to_run() utility function, which checks whether the current
> > process, or one of its threads, is eligible to run on a particular CPU.
> > This information is obtained using sched_getaffinity.
> >
> > Signed-off-by: Ricardo Koller <ricarkol@google.com>
> > ---
> > .../testing/selftests/kvm/include/test_util.h | 2 ++
> > tools/testing/selftests/kvm/lib/test_util.c | 20 ++++++++++++++++++-
> > 2 files changed, 21 insertions(+), 1 deletion(-)
> >
> > diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
> > index 99e0dcdc923f..a7653f369b6c 100644
> > --- a/tools/testing/selftests/kvm/include/test_util.h
> > +++ b/tools/testing/selftests/kvm/include/test_util.h
> > @@ -143,4 +143,6 @@ static inline void *align_ptr_up(void *x, size_t size)
> > return (void *)align_up((unsigned long)x, size);
> > }
> >
> > +bool is_cpu_eligible_to_run(int pcpu);
> > +
> > #endif /* SELFTEST_KVM_TEST_UTIL_H */
> > diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
> > index 6d23878bbfe1..7813a68333c0 100644
> > --- a/tools/testing/selftests/kvm/lib/test_util.c
> > +++ b/tools/testing/selftests/kvm/lib/test_util.c
> > @@ -4,6 +4,7 @@
> > *
> > * Copyright (C) 2020, Google LLC.
> > */
> > +#define _GNU_SOURCE
> >
> > #include <assert.h>
> > #include <ctype.h>
> > @@ -13,7 +14,9 @@
> > #include <sys/stat.h>
> > #include <sys/syscall.h>
> > #include <linux/mman.h>
> > -#include "linux/kernel.h"
> > +#include <linux/kernel.h>
> > +#include <sched.h>
> > +#include <sys/sysinfo.h>
> >
> > #include "test_util.h"
> >
> > @@ -334,3 +337,18 @@ long get_run_delay(void)
> >
> > return val[1];
> > }
> > +
> > +bool is_cpu_eligible_to_run(int pcpu)
> > +{
> > + cpu_set_t cpuset;
> > + long i, nprocs;
> > +
> > + nprocs = get_nprocs_conf();
> > + sched_getaffinity(0, sizeof(cpu_set_t), &cpuset);
> > + for (i = 0; i < nprocs; i++) {
> > + if (i == pcpu)
> > + return CPU_ISSET(i, &cpuset);
> > + }
>
> I don't think you need the loop and can just do CPU_ISSET(pcpu, &cpuset),
> right?
Oops, definitely not. Thanks for catching this.
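
Something like this should work instead (untested sketch; it also
bounds-checks pcpu against CPU_SETSIZE, which the loop version never
did):

	bool is_cpu_eligible_to_run(int pcpu)
	{
		cpu_set_t cpuset;

		if (sched_getaffinity(0, sizeof(cpu_set_t), &cpuset))
			return false;

		return pcpu >= 0 && pcpu < CPU_SETSIZE &&
		       CPU_ISSET(pcpu, &cpuset);
	}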
>
> --
> Thanks,
> Oliver
^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread, other threads:[~2022-04-05 2:51 UTC | newest]
Thread overview: 7 messages
-- links below jump to the message on this page --
2022-04-04 21:46 [PATCH v4 0/4] KVM: arm64: selftests: Add edge cases tests for the arch timer Ricardo Koller
2022-04-04 21:46 ` [PATCH v4 1/4] KVM: arm64: selftests: add timer_get_tval() lib function Ricardo Koller
2022-04-04 21:46 ` [PATCH v4 2/4] KVM: selftests: add is_cpu_eligible_to_run() utility function Ricardo Koller
2022-04-05 0:13 ` Oliver Upton
2022-04-05 1:56 ` Ricardo Koller
2022-04-04 21:46 ` [PATCH v4 3/4] KVM: arm64: selftests: add arch_timer_edge_cases Ricardo Koller
2022-04-04 21:46 ` [PATCH v4 4/4] KVM: arm64: selftests: add edge cases tests into arch_timer_edge_cases Ricardo Koller