* [kvm-unit-tests PATCH v4 0/6] arm: pmu: Add support for PMUv3p5
@ 2023-01-26 16:53 Ricardo Koller
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 1/6] arm: pmu: Fix overflow checks for PMUv3p5 long counters Ricardo Koller
` (6 more replies)
0 siblings, 7 replies; 14+ messages in thread
From: Ricardo Koller @ 2023-01-26 16:53 UTC (permalink / raw)
To: kvm, kvmarm, andrew.jones
Cc: maz, alexandru.elisei, eric.auger, oliver.upton, reijiw, Ricardo Koller
The first commit fixes the tests when running on PMUv3p5. The issue is that
PMUv3p5 uses 64-bit counters irrespective of whether the PMU is configured
for overflowing at 32 or 64-bits. Tests are currently failing [0] on
PMUv3p5 because of this. They wrongly assume that values wrap around at
32 bits, but on PMUv3p5 they overflow into the upper half of the 64-bit
counters.
The second and third commits add new tests for 64-bit overflows, a feature
added with PMUv3p5 (PMCR_EL0.LP == 1). This is done by running all
overflow-related tests in two modes: with 32-bit and 64-bit overflows.
The fourth commit changes the value reporting to use %lx instead of %ld.
This series was tested on PMUv3p5 and PMUv3p4 using the ARM Fast Model and
kvmtool. All tests pass on both PMUv3p5 and PMUv3p4 when using Marc's
PMUv3p5 series [0], plus the suggestion made at [1]. Didn't test AArch32.
Changes from v3:
- Added commit to fix test_overflow_interrupt(). (Reiji and Eric)
- Separated s/ALL_SET/ALL_SET_32/ and s/PRE_OVERFLOW/PRE_OVERFLOW_32/
into its own commit. (Reiji and Eric)
- Fix s/200/20. (Eric)
Changes from v2:
- used Oliver's suggestion of using pmevcntr_mask() for masking counters to
32 or 64 bits, instead of casting to uint32_t or uint64_t.
- removed ALL_SET_AT() in favor of pmevcntr_mask(). (Oliver)
- moved the change to have odd counter overflows at 64-bits from first to
third commit.
- renamed PRE_OVERFLOW macro to PRE_OVERFLOW_32, and PRE_OVERFLOW_AT() to
PRE_OVERFLOW().
Changes from v1 (all suggested by Alexandru):
- report counter values in hexadecimal
- s/overflow_at_64bits/unused for all chained tests
- check that odd counters do not increment when using overflow_at_64bits
(pmcr.LP=1)
- test 64-bit odd counters overflows
- switch confusing var names in test_chained_sw_incr(): cntr0 <-> cntr1
[0] https://lore.kernel.org/kvmarm/20221113163832.3154370-1-maz@kernel.org/
[1] https://lore.kernel.org/kvmarm/Y4jasyxvFRNvvmox@google.com/
Ricardo Koller (6):
arm: pmu: Fix overflow checks for PMUv3p5 long counters
arm: pmu: Prepare for testing 64-bit overflows
arm: pmu: Rename ALL_SET and PRE_OVERFLOW
arm: pmu: Add tests for 64-bit overflows
arm: pmu: Print counter values as hexadecimals
arm: pmu: Fix test_overflow_interrupt()
arm/pmu.c | 298 ++++++++++++++++++++++++++++++++++--------------------
1 file changed, 188 insertions(+), 110 deletions(-)
--
2.39.1.456.gfc5497dd1b-goog
^ permalink raw reply [flat|nested] 14+ messages in thread
* [kvm-unit-tests PATCH v4 1/6] arm: pmu: Fix overflow checks for PMUv3p5 long counters
2023-01-26 16:53 [kvm-unit-tests PATCH v4 0/6] arm: pmu: Add support for PMUv3p5 Ricardo Koller
@ 2023-01-26 16:53 ` Ricardo Koller
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 2/6] arm: pmu: Prepare for testing 64-bit overflows Ricardo Koller
` (5 subsequent siblings)
6 siblings, 0 replies; 14+ messages in thread
From: Ricardo Koller @ 2023-01-26 16:53 UTC (permalink / raw)
To: kvm, kvmarm, andrew.jones
Cc: maz, alexandru.elisei, eric.auger, oliver.upton, reijiw, Ricardo Koller
PMUv3p5 uses 64-bit counters irrespective of whether the PMU is configured
for overflowing at 32 or 64-bits. The consequence is that tests that check
the counter values after overflowing should not assume that values wrap
around at 32 bits: on PMUv3p5 they overflow into the upper half of the
64-bit counters.
Fix tests by correctly checking overflowing-counters against the expected
64-bit value.
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
---
arm/pmu.c | 38 ++++++++++++++++++++++++++++----------
1 file changed, 28 insertions(+), 10 deletions(-)
diff --git a/arm/pmu.c b/arm/pmu.c
index cd47b14..7f0794d 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -54,10 +54,10 @@
#define EXT_COMMON_EVENTS_LOW 0x4000
#define EXT_COMMON_EVENTS_HIGH 0x403F
-#define ALL_SET 0xFFFFFFFF
-#define ALL_CLEAR 0x0
-#define PRE_OVERFLOW 0xFFFFFFF0
-#define PRE_OVERFLOW2 0xFFFFFFDC
+#define ALL_SET 0x00000000FFFFFFFFULL
+#define ALL_CLEAR 0x0000000000000000ULL
+#define PRE_OVERFLOW 0x00000000FFFFFFF0ULL
+#define PRE_OVERFLOW2 0x00000000FFFFFFDCULL
#define PMU_PPI 23
@@ -419,6 +419,22 @@ static bool satisfy_prerequisites(uint32_t *events, unsigned int nb_events)
return true;
}
+static uint64_t pmevcntr_mask(void)
+{
+ /*
+ * Bits [63:0] are always incremented for 64-bit counters,
+ * even if the PMU is configured to generate an overflow at
+ * bits [31:0]
+ *
+ * For more details see the AArch64.IncrementEventCounter()
+ * pseudo-code in the ARM ARM DDI 0487I.a, section J1.1.1.
+ */
+ if (pmu.version >= ID_DFR0_PMU_V3_8_5)
+ return ~0;
+
+ return (uint32_t)~0;
+}
+
static void test_basic_event_count(void)
{
uint32_t implemented_counter_mask, non_implemented_counter_mask;
@@ -538,6 +554,7 @@ static void test_mem_access(void)
static void test_sw_incr(void)
{
uint32_t events[] = {SW_INCR, SW_INCR};
+ uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
int i;
if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
@@ -572,9 +589,8 @@ static void test_sw_incr(void)
write_sysreg(0x3, pmswinc_el0);
isb();
- report(read_regn_el0(pmevcntr, 0) == 84, "counter #1 after + 100 SW_INCR");
- report(read_regn_el0(pmevcntr, 1) == 100,
- "counter #0 after + 100 SW_INCR");
+ report(read_regn_el0(pmevcntr, 0) == cntr0, "counter #0 after + 100 SW_INCR");
+ report(read_regn_el0(pmevcntr, 1) == 100, "counter #1 after + 100 SW_INCR");
report_info("counter values after 100 SW_INCR #0=%ld #1=%ld",
read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
report(read_sysreg(pmovsclr_el0) == 0x1,
@@ -625,6 +641,8 @@ static void test_chained_counters(void)
static void test_chained_sw_incr(void)
{
uint32_t events[] = {SW_INCR, CHAIN};
+ uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
+ uint64_t cntr1 = (ALL_SET + 1) & pmevcntr_mask();
int i;
if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
@@ -666,9 +684,9 @@ static void test_chained_sw_incr(void)
isb();
report((read_sysreg(pmovsclr_el0) == 0x3) &&
- (read_regn_el0(pmevcntr, 1) == 0) &&
- (read_regn_el0(pmevcntr, 0) == 84),
- "expected overflows and values after 100 SW_INCR/CHAIN");
+ (read_regn_el0(pmevcntr, 0) == cntr0) &&
+ (read_regn_el0(pmevcntr, 1) == cntr1),
+ "expected overflows and values after 100 SW_INCR/CHAIN");
report_info("overflow=0x%lx, #0=%ld #1=%ld", read_sysreg(pmovsclr_el0),
read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
}
--
2.39.1.456.gfc5497dd1b-goog
* [kvm-unit-tests PATCH v4 2/6] arm: pmu: Prepare for testing 64-bit overflows
2023-01-26 16:53 [kvm-unit-tests PATCH v4 0/6] arm: pmu: Add support for PMUv3p5 Ricardo Koller
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 1/6] arm: pmu: Fix overflow checks for PMUv3p5 long counters Ricardo Koller
@ 2023-01-26 16:53 ` Ricardo Koller
2023-02-07 15:45 ` Eric Auger
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 3/6] arm: pmu: Rename ALL_SET and PRE_OVERFLOW Ricardo Koller
` (4 subsequent siblings)
6 siblings, 1 reply; 14+ messages in thread
From: Ricardo Koller @ 2023-01-26 16:53 UTC (permalink / raw)
To: kvm, kvmarm, andrew.jones
Cc: maz, alexandru.elisei, eric.auger, oliver.upton, reijiw, Ricardo Koller
PMUv3p5 adds a knob, PMCR_EL0.LP == 1, that allows overflowing at 64 bits
instead of 32. Prepare for testing it by doing these three things:
1. Add a "bool overflow_at_64bits" argument to all tests checking
overflows.
2. Extend satisfy_prerequisites() to check if the machine supports
"overflow_at_64bits".
3. Refactor the test invocations to use the new "run_test()" which adds a
report prefix indicating whether the test uses 64 or 32-bit overflows.
A subsequent commit will actually add the 64-bit overflow tests.
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
---
arm/pmu.c | 99 +++++++++++++++++++++++++++++++++----------------------
1 file changed, 60 insertions(+), 39 deletions(-)
diff --git a/arm/pmu.c b/arm/pmu.c
index 7f0794d..06cbd73 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -164,13 +164,13 @@ static void pmu_reset(void)
/* event counter tests only implemented for aarch64 */
static void test_event_introspection(void) {}
static void test_event_counter_config(void) {}
-static void test_basic_event_count(void) {}
-static void test_mem_access(void) {}
-static void test_sw_incr(void) {}
-static void test_chained_counters(void) {}
-static void test_chained_sw_incr(void) {}
-static void test_chain_promotion(void) {}
-static void test_overflow_interrupt(void) {}
+static void test_basic_event_count(bool overflow_at_64bits) {}
+static void test_mem_access(bool overflow_at_64bits) {}
+static void test_sw_incr(bool overflow_at_64bits) {}
+static void test_chained_counters(bool unused) {}
+static void test_chained_sw_incr(bool unused) {}
+static void test_chain_promotion(bool unused) {}
+static void test_overflow_interrupt(bool overflow_at_64bits) {}
#elif defined(__aarch64__)
#define ID_AA64DFR0_PERFMON_SHIFT 8
@@ -435,13 +435,24 @@ static uint64_t pmevcntr_mask(void)
return (uint32_t)~0;
}
-static void test_basic_event_count(void)
+static bool check_overflow_prerequisites(bool overflow_at_64bits)
+{
+ if (overflow_at_64bits && pmu.version < ID_DFR0_PMU_V3_8_5) {
+ report_skip("Skip test as 64 overflows need FEAT_PMUv3p5");
+ return false;
+ }
+
+ return true;
+}
+
+static void test_basic_event_count(bool overflow_at_64bits)
{
uint32_t implemented_counter_mask, non_implemented_counter_mask;
uint32_t counter_mask;
uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
- if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+ if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
+ !check_overflow_prerequisites(overflow_at_64bits))
return;
implemented_counter_mask = BIT(pmu.nb_implemented_counters) - 1;
@@ -515,12 +526,13 @@ static void test_basic_event_count(void)
"check overflow happened on #0 only");
}
-static void test_mem_access(void)
+static void test_mem_access(bool overflow_at_64bits)
{
void *addr = malloc(PAGE_SIZE);
uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
- if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+ if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
+ !check_overflow_prerequisites(overflow_at_64bits))
return;
pmu_reset();
@@ -551,13 +563,14 @@ static void test_mem_access(void)
read_sysreg(pmovsclr_el0));
}
-static void test_sw_incr(void)
+static void test_sw_incr(bool overflow_at_64bits)
{
uint32_t events[] = {SW_INCR, SW_INCR};
uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
int i;
- if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+ if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
+ !check_overflow_prerequisites(overflow_at_64bits))
return;
pmu_reset();
@@ -597,7 +610,7 @@ static void test_sw_incr(void)
"overflow on counter #0 after 100 SW_INCR");
}
-static void test_chained_counters(void)
+static void test_chained_counters(bool unused)
{
uint32_t events[] = {CPU_CYCLES, CHAIN};
@@ -638,7 +651,7 @@ static void test_chained_counters(void)
report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
}
-static void test_chained_sw_incr(void)
+static void test_chained_sw_incr(bool unused)
{
uint32_t events[] = {SW_INCR, CHAIN};
uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
@@ -691,7 +704,7 @@ static void test_chained_sw_incr(void)
read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
}
-static void test_chain_promotion(void)
+static void test_chain_promotion(bool unused)
{
uint32_t events[] = {MEM_ACCESS, CHAIN};
void *addr = malloc(PAGE_SIZE);
@@ -840,13 +853,14 @@ static bool expect_interrupts(uint32_t bitmap)
return true;
}
-static void test_overflow_interrupt(void)
+static void test_overflow_interrupt(bool overflow_at_64bits)
{
uint32_t events[] = {MEM_ACCESS, SW_INCR};
void *addr = malloc(PAGE_SIZE);
int i;
- if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+ if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
+ !check_overflow_prerequisites(overflow_at_64bits))
return;
gic_enable_defaults();
@@ -1070,6 +1084,27 @@ static bool pmu_probe(void)
return true;
}
+static void run_test(const char *name, const char *prefix,
+ void (*test)(bool), void *arg)
+{
+ report_prefix_push(name);
+ report_prefix_push(prefix);
+
+ test(arg);
+
+ report_prefix_pop();
+ report_prefix_pop();
+}
+
+static void run_event_test(char *name, void (*test)(bool),
+ bool overflow_at_64bits)
+{
+ const char *prefix = overflow_at_64bits ? "64-bit overflows"
+ : "32-bit overflows";
+
+ run_test(name, prefix, test, (void *)overflow_at_64bits);
+}
+
int main(int argc, char *argv[])
{
int cpi = 0;
@@ -1102,33 +1137,19 @@ int main(int argc, char *argv[])
test_event_counter_config();
report_prefix_pop();
} else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
- report_prefix_push(argv[1]);
- test_basic_event_count();
- report_prefix_pop();
+ run_event_test(argv[1], test_basic_event_count, false);
} else if (strcmp(argv[1], "pmu-mem-access") == 0) {
- report_prefix_push(argv[1]);
- test_mem_access();
- report_prefix_pop();
+ run_event_test(argv[1], test_mem_access, false);
} else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
- report_prefix_push(argv[1]);
- test_sw_incr();
- report_prefix_pop();
+ run_event_test(argv[1], test_sw_incr, false);
} else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
- report_prefix_push(argv[1]);
- test_chained_counters();
- report_prefix_pop();
+ run_event_test(argv[1], test_chained_counters, false);
} else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
- report_prefix_push(argv[1]);
- test_chained_sw_incr();
- report_prefix_pop();
+ run_event_test(argv[1], test_chained_sw_incr, false);
} else if (strcmp(argv[1], "pmu-chain-promotion") == 0) {
- report_prefix_push(argv[1]);
- test_chain_promotion();
- report_prefix_pop();
+ run_event_test(argv[1], test_chain_promotion, false);
} else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
- report_prefix_push(argv[1]);
- test_overflow_interrupt();
- report_prefix_pop();
+ run_event_test(argv[1], test_overflow_interrupt, false);
} else {
report_abort("Unknown sub-test '%s'", argv[1]);
}
--
2.39.1.456.gfc5497dd1b-goog
* [kvm-unit-tests PATCH v4 3/6] arm: pmu: Rename ALL_SET and PRE_OVERFLOW
2023-01-26 16:53 [kvm-unit-tests PATCH v4 0/6] arm: pmu: Add support for PMUv3p5 Ricardo Koller
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 1/6] arm: pmu: Fix overflow checks for PMUv3p5 long counters Ricardo Koller
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 2/6] arm: pmu: Prepare for testing 64-bit overflows Ricardo Koller
@ 2023-01-26 16:53 ` Ricardo Koller
2023-02-07 15:48 ` Eric Auger
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 4/6] arm: pmu: Add tests for 64-bit overflows Ricardo Koller
` (3 subsequent siblings)
6 siblings, 1 reply; 14+ messages in thread
From: Ricardo Koller @ 2023-01-26 16:53 UTC (permalink / raw)
To: kvm, kvmarm, andrew.jones
Cc: maz, alexandru.elisei, eric.auger, oliver.upton, reijiw, Ricardo Koller
Now that the arm PMU tests handle 64-bit counters and overflows, it's
better to be precise about what the ALL_SET and PRE_OVERFLOW macros
actually are. As both are 32-bit values, just add a _32 suffix to each
of them.
Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
arm/pmu.c | 78 +++++++++++++++++++++++++++----------------------------
1 file changed, 39 insertions(+), 39 deletions(-)
diff --git a/arm/pmu.c b/arm/pmu.c
index 06cbd73..08e956d 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -54,9 +54,9 @@
#define EXT_COMMON_EVENTS_LOW 0x4000
#define EXT_COMMON_EVENTS_HIGH 0x403F
-#define ALL_SET 0x00000000FFFFFFFFULL
+#define ALL_SET_32 0x00000000FFFFFFFFULL
#define ALL_CLEAR 0x0000000000000000ULL
-#define PRE_OVERFLOW 0x00000000FFFFFFF0ULL
+#define PRE_OVERFLOW_32 0x00000000FFFFFFF0ULL
#define PRE_OVERFLOW2 0x00000000FFFFFFDCULL
#define PMU_PPI 23
@@ -153,11 +153,11 @@ static void pmu_reset(void)
/* reset all counters, counting disabled at PMCR level*/
set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
/* Disable all counters */
- write_sysreg(ALL_SET, PMCNTENCLR);
+ write_sysreg(ALL_SET_32, PMCNTENCLR);
/* clear overflow reg */
- write_sysreg(ALL_SET, PMOVSR);
+ write_sysreg(ALL_SET_32, PMOVSR);
/* disable overflow interrupts on all counters */
- write_sysreg(ALL_SET, PMINTENCLR);
+ write_sysreg(ALL_SET_32, PMINTENCLR);
isb();
}
@@ -322,7 +322,7 @@ static void irq_handler(struct pt_regs *regs)
pmu_stats.bitmap |= 1 << i;
}
}
- write_sysreg(ALL_SET, pmovsclr_el0);
+ write_sysreg(ALL_SET_32, pmovsclr_el0);
isb();
} else {
pmu_stats.unexpected = true;
@@ -346,11 +346,11 @@ static void pmu_reset(void)
/* reset all counters, counting disabled at PMCR level*/
set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
/* Disable all counters */
- write_sysreg_s(ALL_SET, PMCNTENCLR_EL0);
+ write_sysreg_s(ALL_SET_32, PMCNTENCLR_EL0);
/* clear overflow reg */
- write_sysreg(ALL_SET, pmovsclr_el0);
+ write_sysreg(ALL_SET_32, pmovsclr_el0);
/* disable overflow interrupts on all counters */
- write_sysreg(ALL_SET, pmintenclr_el1);
+ write_sysreg(ALL_SET_32, pmintenclr_el1);
pmu_reset_stats();
isb();
}
@@ -463,7 +463,7 @@ static void test_basic_event_count(bool overflow_at_64bits)
write_regn_el0(pmevtyper, 1, INST_RETIRED | PMEVTYPER_EXCLUDE_EL0);
/* disable all counters */
- write_sysreg_s(ALL_SET, PMCNTENCLR_EL0);
+ write_sysreg_s(ALL_SET_32, PMCNTENCLR_EL0);
report(!read_sysreg_s(PMCNTENCLR_EL0) && !read_sysreg_s(PMCNTENSET_EL0),
"pmcntenclr: disable all counters");
@@ -476,8 +476,8 @@ static void test_basic_event_count(bool overflow_at_64bits)
report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC), "pmcr: reset counters");
/* Preset counter #0 to pre overflow value to trigger an overflow */
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
- report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
+ write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+ report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW_32,
"counter #0 preset to pre-overflow value");
report(!read_regn_el0(pmevcntr, 1), "counter #1 is 0");
@@ -499,11 +499,11 @@ static void test_basic_event_count(bool overflow_at_64bits)
"pmcntenset: just enabled #0 and #1");
/* clear overflow register */
- write_sysreg(ALL_SET, pmovsclr_el0);
+ write_sysreg(ALL_SET_32, pmovsclr_el0);
report(!read_sysreg(pmovsclr_el0), "check overflow reg is 0");
/* disable overflow interrupts on all counters*/
- write_sysreg(ALL_SET, pmintenclr_el1);
+ write_sysreg(ALL_SET_32, pmintenclr_el1);
report(!read_sysreg(pmintenclr_el1),
"pmintenclr_el1=0, all interrupts disabled");
@@ -551,8 +551,8 @@ static void test_mem_access(bool overflow_at_64bits)
pmu_reset();
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
- write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
+ write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+ write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
write_sysreg_s(0x3, PMCNTENSET_EL0);
isb();
mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
@@ -566,7 +566,7 @@ static void test_mem_access(bool overflow_at_64bits)
static void test_sw_incr(bool overflow_at_64bits)
{
uint32_t events[] = {SW_INCR, SW_INCR};
- uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
+ uint64_t cntr0 = (PRE_OVERFLOW_32 + 100) & pmevcntr_mask();
int i;
if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
@@ -580,7 +580,7 @@ static void test_sw_incr(bool overflow_at_64bits)
/* enable counters #0 and #1 */
write_sysreg_s(0x3, PMCNTENSET_EL0);
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+ write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
isb();
for (i = 0; i < 100; i++)
@@ -588,12 +588,12 @@ static void test_sw_incr(bool overflow_at_64bits)
isb();
report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
- report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
+ report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW_32,
"PWSYNC does not increment if PMCR.E is unset");
pmu_reset();
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+ write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
write_sysreg_s(0x3, PMCNTENSET_EL0);
set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
isb();
@@ -623,7 +623,7 @@ static void test_chained_counters(bool unused)
write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
/* enable counters #0 and #1 */
write_sysreg_s(0x3, PMCNTENSET_EL0);
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+ write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
@@ -635,15 +635,15 @@ static void test_chained_counters(bool unused)
pmu_reset();
write_sysreg_s(0x3, PMCNTENSET_EL0);
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+ write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
write_regn_el0(pmevcntr, 1, 0x1);
precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
report(read_regn_el0(pmevcntr, 1) == 2, "CHAIN counter #1 set to 2");
report(read_sysreg(pmovsclr_el0) == 0x1, "overflow recorded for chained incr #2");
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
- write_regn_el0(pmevcntr, 1, ALL_SET);
+ write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+ write_regn_el0(pmevcntr, 1, ALL_SET_32);
precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
@@ -654,8 +654,8 @@ static void test_chained_counters(bool unused)
static void test_chained_sw_incr(bool unused)
{
uint32_t events[] = {SW_INCR, CHAIN};
- uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
- uint64_t cntr1 = (ALL_SET + 1) & pmevcntr_mask();
+ uint64_t cntr0 = (PRE_OVERFLOW_32 + 100) & pmevcntr_mask();
+ uint64_t cntr1 = (ALL_SET_32 + 1) & pmevcntr_mask();
int i;
if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
@@ -668,7 +668,7 @@ static void test_chained_sw_incr(bool unused)
/* enable counters #0 and #1 */
write_sysreg_s(0x3, PMCNTENSET_EL0);
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+ write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
isb();
@@ -686,8 +686,8 @@ static void test_chained_sw_incr(bool unused)
pmu_reset();
write_regn_el0(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0);
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
- write_regn_el0(pmevcntr, 1, ALL_SET);
+ write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+ write_regn_el0(pmevcntr, 1, ALL_SET_32);
write_sysreg_s(0x3, PMCNTENSET_EL0);
set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
isb();
@@ -725,7 +725,7 @@ static void test_chain_promotion(bool unused)
/* Only enable even counter */
pmu_reset();
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+ write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
write_sysreg_s(0x1, PMCNTENSET_EL0);
isb();
@@ -873,8 +873,8 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
write_regn_el0(pmevtyper, 1, SW_INCR | PMEVTYPER_EXCLUDE_EL0);
write_sysreg_s(0x3, PMCNTENSET_EL0);
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
- write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
+ write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+ write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
isb();
/* interrupts are disabled (PMINTENSET_EL1 == 0) */
@@ -893,13 +893,13 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
isb();
report(expect_interrupts(0), "no overflow interrupt after counting");
- /* enable interrupts (PMINTENSET_EL1 <= ALL_SET) */
+ /* enable interrupts (PMINTENSET_EL1 <= ALL_SET_32) */
pmu_reset_stats();
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
- write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
- write_sysreg(ALL_SET, pmintenset_el1);
+ write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+ write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
+ write_sysreg(ALL_SET_32, pmintenset_el1);
isb();
mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
@@ -916,7 +916,7 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
pmu_reset_stats();
write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+ write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
isb();
mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
report(expect_interrupts(0x1),
@@ -924,8 +924,8 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
/* overflow on odd counter */
pmu_reset_stats();
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
- write_regn_el0(pmevcntr, 1, ALL_SET);
+ write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+ write_regn_el0(pmevcntr, 1, ALL_SET_32);
isb();
mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E);
report(expect_interrupts(0x3),
--
2.39.1.456.gfc5497dd1b-goog
* [kvm-unit-tests PATCH v4 4/6] arm: pmu: Add tests for 64-bit overflows
2023-01-26 16:53 [kvm-unit-tests PATCH v4 0/6] arm: pmu: Add support for PMUv3p5 Ricardo Koller
` (2 preceding siblings ...)
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 3/6] arm: pmu: Rename ALL_SET and PRE_OVERFLOW Ricardo Koller
@ 2023-01-26 16:53 ` Ricardo Koller
2023-02-07 17:25 ` Eric Auger
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 5/6] arm: pmu: Print counter values as hexadecimals Ricardo Koller
` (2 subsequent siblings)
6 siblings, 1 reply; 14+ messages in thread
From: Ricardo Koller @ 2023-01-26 16:53 UTC (permalink / raw)
To: kvm, kvmarm, andrew.jones
Cc: maz, alexandru.elisei, eric.auger, oliver.upton, reijiw, Ricardo Koller
Modify all tests checking overflows to support both 32-bit
(PMCR_EL0.LP == 0) and 64-bit (PMCR_EL0.LP == 1) overflows. 64-bit
overflows are only supported on PMUv3p5.
Note that chained tests do not implement "overflow_at_64bits == true".
That's because there are no CHAIN events when "PMCR_EL0.LP == 1" (for more
details see AArch64.IncrementEventCounter() pseudocode in the ARM ARM DDI
0487H.a, J1.1.1 "aarch64/debug").
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
---
arm/pmu.c | 100 ++++++++++++++++++++++++++++++++++++------------------
1 file changed, 67 insertions(+), 33 deletions(-)
diff --git a/arm/pmu.c b/arm/pmu.c
index 08e956d..082fb41 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -28,6 +28,7 @@
#define PMU_PMCR_X (1 << 4)
#define PMU_PMCR_DP (1 << 5)
#define PMU_PMCR_LC (1 << 6)
+#define PMU_PMCR_LP (1 << 7)
#define PMU_PMCR_N_SHIFT 11
#define PMU_PMCR_N_MASK 0x1f
#define PMU_PMCR_ID_SHIFT 16
@@ -57,8 +58,12 @@
#define ALL_SET_32 0x00000000FFFFFFFFULL
#define ALL_CLEAR 0x0000000000000000ULL
#define PRE_OVERFLOW_32 0x00000000FFFFFFF0ULL
+#define PRE_OVERFLOW_64 0xFFFFFFFFFFFFFFF0ULL
#define PRE_OVERFLOW2 0x00000000FFFFFFDCULL
+#define PRE_OVERFLOW(__overflow_at_64bits) \
+ (__overflow_at_64bits ? PRE_OVERFLOW_64 : PRE_OVERFLOW_32)
+
#define PMU_PPI 23
struct pmu {
@@ -448,8 +453,10 @@ static bool check_overflow_prerequisites(bool overflow_at_64bits)
static void test_basic_event_count(bool overflow_at_64bits)
{
uint32_t implemented_counter_mask, non_implemented_counter_mask;
- uint32_t counter_mask;
+ uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
+ uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
+ uint32_t counter_mask;
if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
!check_overflow_prerequisites(overflow_at_64bits))
@@ -471,13 +478,13 @@ static void test_basic_event_count(bool overflow_at_64bits)
* clear cycle and all event counters and allow counter enablement
* through PMCNTENSET. LC is RES1.
*/
- set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
+ set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P | pmcr_lp);
isb();
- report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC), "pmcr: reset counters");
+ report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC | pmcr_lp), "pmcr: reset counters");
/* Preset counter #0 to pre overflow value to trigger an overflow */
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
- report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW_32,
+ write_regn_el0(pmevcntr, 0, pre_overflow);
+ report(read_regn_el0(pmevcntr, 0) == pre_overflow,
"counter #0 preset to pre-overflow value");
report(!read_regn_el0(pmevcntr, 1), "counter #1 is 0");
@@ -530,6 +537,8 @@ static void test_mem_access(bool overflow_at_64bits)
{
void *addr = malloc(PAGE_SIZE);
uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
+ uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
+ uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
!check_overflow_prerequisites(overflow_at_64bits))
@@ -541,7 +550,7 @@ static void test_mem_access(bool overflow_at_64bits)
write_regn_el0(pmevtyper, 1, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
write_sysreg_s(0x3, PMCNTENSET_EL0);
isb();
- mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+ mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
report_info("counter #0 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
report_info("counter #1 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
/* We may measure more than 20 mem access depending on the core */
@@ -551,11 +560,11 @@ static void test_mem_access(bool overflow_at_64bits)
pmu_reset();
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
- write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
+ write_regn_el0(pmevcntr, 0, pre_overflow);
+ write_regn_el0(pmevcntr, 1, pre_overflow);
write_sysreg_s(0x3, PMCNTENSET_EL0);
isb();
- mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+ mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
report(read_sysreg(pmovsclr_el0) == 0x3,
"Ran 20 mem accesses with expected overflows on both counters");
report_info("cnt#0 = %ld cnt#1=%ld overflow=0x%lx",
@@ -565,8 +574,10 @@ static void test_mem_access(bool overflow_at_64bits)
static void test_sw_incr(bool overflow_at_64bits)
{
+ uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
+ uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
uint32_t events[] = {SW_INCR, SW_INCR};
- uint64_t cntr0 = (PRE_OVERFLOW_32 + 100) & pmevcntr_mask();
+ uint64_t cntr0 = (pre_overflow + 100) & pmevcntr_mask();
int i;
if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
@@ -580,7 +591,7 @@ static void test_sw_incr(bool overflow_at_64bits)
/* enable counters #0 and #1 */
write_sysreg_s(0x3, PMCNTENSET_EL0);
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+ write_regn_el0(pmevcntr, 0, pre_overflow);
isb();
for (i = 0; i < 100; i++)
@@ -588,14 +599,14 @@ static void test_sw_incr(bool overflow_at_64bits)
isb();
report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
- report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW_32,
+ report(read_regn_el0(pmevcntr, 0) == pre_overflow,
"PWSYNC does not increment if PMCR.E is unset");
pmu_reset();
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+ write_regn_el0(pmevcntr, 0, pre_overflow);
write_sysreg_s(0x3, PMCNTENSET_EL0);
- set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
+ set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
isb();
for (i = 0; i < 100; i++)
@@ -613,6 +624,7 @@ static void test_sw_incr(bool overflow_at_64bits)
static void test_chained_counters(bool unused)
{
uint32_t events[] = {CPU_CYCLES, CHAIN};
+ uint64_t all_set = pmevcntr_mask();
if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
return;
@@ -643,11 +655,11 @@ static void test_chained_counters(bool unused)
report(read_sysreg(pmovsclr_el0) == 0x1, "overflow recorded for chained incr #2");
write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
- write_regn_el0(pmevcntr, 1, ALL_SET_32);
+ write_regn_el0(pmevcntr, 1, all_set);
precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
- report(!read_regn_el0(pmevcntr, 1), "CHAIN counter #1 wrapped");
+ report(read_regn_el0(pmevcntr, 1) == 0, "CHAIN counter #1 wrapped");
report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
}
@@ -855,6 +867,9 @@ static bool expect_interrupts(uint32_t bitmap)
static void test_overflow_interrupt(bool overflow_at_64bits)
{
+ uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
+ uint64_t all_set = pmevcntr_mask();
+ uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
uint32_t events[] = {MEM_ACCESS, SW_INCR};
void *addr = malloc(PAGE_SIZE);
int i;
@@ -873,16 +888,16 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
write_regn_el0(pmevtyper, 1, SW_INCR | PMEVTYPER_EXCLUDE_EL0);
write_sysreg_s(0x3, PMCNTENSET_EL0);
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
- write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
+ write_regn_el0(pmevcntr, 0, pre_overflow);
+ write_regn_el0(pmevcntr, 1, pre_overflow);
isb();
/* interrupts are disabled (PMINTENSET_EL1 == 0) */
- mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
+ mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
report(expect_interrupts(0), "no overflow interrupt after preset");
- set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
+ set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
isb();
for (i = 0; i < 100; i++)
@@ -897,12 +912,12 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
pmu_reset_stats();
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
- write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
+ write_regn_el0(pmevcntr, 0, pre_overflow);
+ write_regn_el0(pmevcntr, 1, pre_overflow);
write_sysreg(ALL_SET_32, pmintenset_el1);
isb();
- mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
+ mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
for (i = 0; i < 100; i++)
write_sysreg(0x3, pmswinc_el0);
@@ -911,25 +926,40 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
report(expect_interrupts(0x3),
"overflow interrupts expected on #0 and #1");
- /* promote to 64-b */
+ /*
+ * promote to 64-b:
+ *
+ * This only applies to the !overflow_at_64bits case, as
+ * overflow_at_64bits doesn't implement CHAIN events. The
+ * overflow_at_64bits case just checks that chained counters are
+ * not incremented when PMCR.LP == 1.
+ */
pmu_reset_stats();
write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+ write_regn_el0(pmevcntr, 0, pre_overflow);
isb();
- mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
- report(expect_interrupts(0x1),
- "expect overflow interrupt on 32b boundary");
+ mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
+ report(expect_interrupts(0x1), "expect overflow interrupt");
/* overflow on odd counter */
pmu_reset_stats();
- write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
- write_regn_el0(pmevcntr, 1, ALL_SET_32);
+ write_regn_el0(pmevcntr, 0, pre_overflow);
+ write_regn_el0(pmevcntr, 1, all_set);
isb();
- mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E);
- report(expect_interrupts(0x3),
- "expect overflow interrupt on even and odd counter");
+ mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
+ if (overflow_at_64bits) {
+ report(expect_interrupts(0x1),
+ "expect overflow interrupt on even counter");
+ report(read_regn_el0(pmevcntr, 1) == all_set,
+ "Odd counter did not change");
+ } else {
+ report(expect_interrupts(0x3),
+ "expect overflow interrupt on even and odd counter");
+ report(read_regn_el0(pmevcntr, 1) != all_set,
+ "Odd counter wrapped");
+ }
}
#endif
@@ -1138,10 +1168,13 @@ int main(int argc, char *argv[])
report_prefix_pop();
} else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
run_event_test(argv[1], test_basic_event_count, false);
+ run_event_test(argv[1], test_basic_event_count, true);
} else if (strcmp(argv[1], "pmu-mem-access") == 0) {
run_event_test(argv[1], test_mem_access, false);
+ run_event_test(argv[1], test_mem_access, true);
} else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
run_event_test(argv[1], test_sw_incr, false);
+ run_event_test(argv[1], test_sw_incr, true);
} else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
run_event_test(argv[1], test_chained_counters, false);
} else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
@@ -1150,6 +1183,7 @@ int main(int argc, char *argv[])
run_event_test(argv[1], test_chain_promotion, false);
} else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
run_event_test(argv[1], test_overflow_interrupt, false);
+ run_event_test(argv[1], test_overflow_interrupt, true);
} else {
report_abort("Unknown sub-test '%s'", argv[1]);
}
--
2.39.1.456.gfc5497dd1b-goog
* [kvm-unit-tests PATCH v4 5/6] arm: pmu: Print counter values as hexadecimals
2023-01-26 16:53 [kvm-unit-tests PATCH v4 0/6] arm: pmu: Add support for PMUv3p5 Ricardo Koller
` (3 preceding siblings ...)
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 4/6] arm: pmu: Add tests for 64-bit overflows Ricardo Koller
@ 2023-01-26 16:53 ` Ricardo Koller
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 6/6] arm: pmu: Fix test_overflow_interrupt() Ricardo Koller
2023-02-14 20:18 ` [kvm-unit-tests PATCH v4 0/6] arm: pmu: Add support for PMUv3p5 Andrew Jones
6 siblings, 0 replies; 14+ messages in thread
From: Ricardo Koller @ 2023-01-26 16:53 UTC (permalink / raw)
To: kvm, kvmarm, andrew.jones
Cc: maz, alexandru.elisei, eric.auger, oliver.upton, reijiw, Ricardo Koller
The arm/pmu test prints counter values with %ld. Most tests start
with counters around 0 or UINT_MAX, so output like -16 instead of
0xffff_fff0 is not very useful.
Report counter values as hexadecimals.
Reported-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
---
arm/pmu.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/arm/pmu.c b/arm/pmu.c
index 082fb41..1e93ea2 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -551,8 +551,8 @@ static void test_mem_access(bool overflow_at_64bits)
write_sysreg_s(0x3, PMCNTENSET_EL0);
isb();
mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
- report_info("counter #0 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
- report_info("counter #1 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
+ report_info("counter #0 is 0x%lx (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
+ report_info("counter #1 is 0x%lx (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
/* We may measure more than 20 mem access depending on the core */
report((read_regn_el0(pmevcntr, 0) == read_regn_el0(pmevcntr, 1)) &&
(read_regn_el0(pmevcntr, 0) >= 20) && !read_sysreg(pmovsclr_el0),
@@ -567,7 +567,7 @@ static void test_mem_access(bool overflow_at_64bits)
mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
report(read_sysreg(pmovsclr_el0) == 0x3,
"Ran 20 mem accesses with expected overflows on both counters");
- report_info("cnt#0 = %ld cnt#1=%ld overflow=0x%lx",
+ report_info("cnt#0=0x%lx cnt#1=0x%lx overflow=0x%lx",
read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1),
read_sysreg(pmovsclr_el0));
}
@@ -598,7 +598,7 @@ static void test_sw_incr(bool overflow_at_64bits)
write_sysreg(0x1, pmswinc_el0);
isb();
- report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
+ report_info("SW_INCR counter #0 has value 0x%lx", read_regn_el0(pmevcntr, 0));
report(read_regn_el0(pmevcntr, 0) == pre_overflow,
"PWSYNC does not increment if PMCR.E is unset");
@@ -615,7 +615,7 @@ static void test_sw_incr(bool overflow_at_64bits)
isb();
report(read_regn_el0(pmevcntr, 0) == cntr0, "counter #0 after + 100 SW_INCR");
report(read_regn_el0(pmevcntr, 1) == 100, "counter #1 after + 100 SW_INCR");
- report_info("counter values after 100 SW_INCR #0=%ld #1=%ld",
+ report_info("counter values after 100 SW_INCR #0=0x%lx #1=0x%lx",
read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
report(read_sysreg(pmovsclr_el0) == 0x1,
"overflow on counter #0 after 100 SW_INCR");
@@ -691,7 +691,7 @@ static void test_chained_sw_incr(bool unused)
report((read_sysreg(pmovsclr_el0) == 0x1) &&
(read_regn_el0(pmevcntr, 1) == 1),
"overflow and chain counter incremented after 100 SW_INCR/CHAIN");
- report_info("overflow=0x%lx, #0=%ld #1=%ld", read_sysreg(pmovsclr_el0),
+ report_info("overflow=0x%lx, #0=0x%lx #1=0x%lx", read_sysreg(pmovsclr_el0),
read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
/* 64b SW_INCR and overflow on CHAIN counter*/
@@ -712,7 +712,7 @@ static void test_chained_sw_incr(bool unused)
(read_regn_el0(pmevcntr, 0) == cntr0) &&
(read_regn_el0(pmevcntr, 1) == cntr1),
"expected overflows and values after 100 SW_INCR/CHAIN");
- report_info("overflow=0x%lx, #0=%ld #1=%ld", read_sysreg(pmovsclr_el0),
+ report_info("overflow=0x%lx, #0=0x%lx #1=0x%lx", read_sysreg(pmovsclr_el0),
read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
}
@@ -744,11 +744,11 @@ static void test_chain_promotion(bool unused)
mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
report(!read_regn_el0(pmevcntr, 1) && (read_sysreg(pmovsclr_el0) == 0x1),
"odd counter did not increment on overflow if disabled");
- report_info("MEM_ACCESS counter #0 has value %ld",
+ report_info("MEM_ACCESS counter #0 has value 0x%lx",
read_regn_el0(pmevcntr, 0));
- report_info("CHAIN counter #1 has value %ld",
+ report_info("CHAIN counter #1 has value 0x%lx",
read_regn_el0(pmevcntr, 1));
- report_info("overflow counter %ld", read_sysreg(pmovsclr_el0));
+ report_info("overflow counter 0x%lx", read_sysreg(pmovsclr_el0));
/* start at 0xFFFFFFDC, +20 with CHAIN enabled, +20 with CHAIN disabled */
pmu_reset();
--
2.39.1.456.gfc5497dd1b-goog
* [kvm-unit-tests PATCH v4 6/6] arm: pmu: Fix test_overflow_interrupt()
2023-01-26 16:53 [kvm-unit-tests PATCH v4 0/6] arm: pmu: Add support for PMUv3p5 Ricardo Koller
` (4 preceding siblings ...)
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 5/6] arm: pmu: Print counter values as hexadecimals Ricardo Koller
@ 2023-01-26 16:53 ` Ricardo Koller
2023-02-07 17:31 ` Eric Auger
2023-02-14 20:18 ` [kvm-unit-tests PATCH v4 0/6] arm: pmu: Add support for PMUv3p5 Andrew Jones
6 siblings, 1 reply; 14+ messages in thread
From: Ricardo Koller @ 2023-01-26 16:53 UTC (permalink / raw)
To: kvm, kvmarm, andrew.jones
Cc: maz, alexandru.elisei, eric.auger, oliver.upton, reijiw, Ricardo Koller
test_overflow_interrupt() (from arm/pmu.c) has a test that only passes
because the previous test leaves behind the state needed for it to
pass: an overflow status register with the expected bits already set.
The test (that should fail) does not re-enable the PMU after
mem_access_loop(), which clears PMCR.E, and before writing into the
software increment register.
Fix by clearing the previous test state (pmovsclr_el0) and by enabling
the PMU before the sw_incr test.
Fixes: 4f5ef94f3aac ("arm: pmu: Test overflow interrupts")
Reported-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
arm/pmu.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arm/pmu.c b/arm/pmu.c
index 1e93ea2..f91b5ca 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -914,10 +914,15 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
write_regn_el0(pmevcntr, 0, pre_overflow);
write_regn_el0(pmevcntr, 1, pre_overflow);
+ write_sysreg(ALL_SET_32, pmovsclr_el0);
write_sysreg(ALL_SET_32, pmintenset_el1);
isb();
mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
+
+ set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
+ isb();
+
for (i = 0; i < 100; i++)
write_sysreg(0x3, pmswinc_el0);
--
2.39.1.456.gfc5497dd1b-goog
* Re: [kvm-unit-tests PATCH v4 2/6] arm: pmu: Prepare for testing 64-bit overflows
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 2/6] arm: pmu: Prepare for testing 64-bit overflows Ricardo Koller
@ 2023-02-07 15:45 ` Eric Auger
2023-02-09 14:59 ` Ricardo Koller
0 siblings, 1 reply; 14+ messages in thread
From: Eric Auger @ 2023-02-07 15:45 UTC (permalink / raw)
To: Ricardo Koller, kvm, kvmarm, andrew.jones
Cc: maz, alexandru.elisei, oliver.upton, reijiw
Hi Ricardo,
On 1/26/23 17:53, Ricardo Koller wrote:
> PMUv3p5 adds a knob, PMCR_EL0.LP == 1, that allows overflowing at 64-bits
> instead of 32. Prepare by doing these 3 things:
>
> 1. Add a "bool overflow_at_64bits" argument to all tests checking
> overflows.
Actually test_chained_sw_incr() and test_chained_counters() also test
overflows but they feature CHAIN events. I guess that's why you don't
need the LP flag. Just tweak the commit msg if you need to respin.
> 2. Extend satisfy_prerequisites() to check if the machine supports
> "overflow_at_64bits".
> 3. Refactor the test invocations to use the new "run_test()" which adds a
> report prefix indicating whether the test uses 64 or 32-bit overflows.
>
> A subsequent commit will actually add the 64-bit overflow tests.
>
> Signed-off-by: Ricardo Koller <ricarkol@google.com>
> Reviewed-by: Reiji Watanabe <reijiw@google.com>
Besides
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Thanks
Eric
> ---
> arm/pmu.c | 99 +++++++++++++++++++++++++++++++++----------------------
> 1 file changed, 60 insertions(+), 39 deletions(-)
>
> diff --git a/arm/pmu.c b/arm/pmu.c
> index 7f0794d..06cbd73 100644
> --- a/arm/pmu.c
> +++ b/arm/pmu.c
> @@ -164,13 +164,13 @@ static void pmu_reset(void)
> /* event counter tests only implemented for aarch64 */
> static void test_event_introspection(void) {}
> static void test_event_counter_config(void) {}
> -static void test_basic_event_count(void) {}
> -static void test_mem_access(void) {}
> -static void test_sw_incr(void) {}
> -static void test_chained_counters(void) {}
> -static void test_chained_sw_incr(void) {}
> -static void test_chain_promotion(void) {}
> -static void test_overflow_interrupt(void) {}
> +static void test_basic_event_count(bool overflow_at_64bits) {}
> +static void test_mem_access(bool overflow_at_64bits) {}
> +static void test_sw_incr(bool overflow_at_64bits) {}
> +static void test_chained_counters(bool unused) {}
> +static void test_chained_sw_incr(bool unused) {}
> +static void test_chain_promotion(bool unused) {}
> +static void test_overflow_interrupt(bool overflow_at_64bits) {}
>
> #elif defined(__aarch64__)
> #define ID_AA64DFR0_PERFMON_SHIFT 8
> @@ -435,13 +435,24 @@ static uint64_t pmevcntr_mask(void)
> return (uint32_t)~0;
> }
>
> -static void test_basic_event_count(void)
> +static bool check_overflow_prerequisites(bool overflow_at_64bits)
> +{
> + if (overflow_at_64bits && pmu.version < ID_DFR0_PMU_V3_8_5) {
> + report_skip("Skip test as 64 overflows need FEAT_PMUv3p5");
> + return false;
> + }
> +
> + return true;
> +}
> +
> +static void test_basic_event_count(bool overflow_at_64bits)
> {
> uint32_t implemented_counter_mask, non_implemented_counter_mask;
> uint32_t counter_mask;
> uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
>
> - if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> + if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> + !check_overflow_prerequisites(overflow_at_64bits))
> return;
>
> implemented_counter_mask = BIT(pmu.nb_implemented_counters) - 1;
> @@ -515,12 +526,13 @@ static void test_basic_event_count(void)
> "check overflow happened on #0 only");
> }
>
> -static void test_mem_access(void)
> +static void test_mem_access(bool overflow_at_64bits)
> {
> void *addr = malloc(PAGE_SIZE);
> uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
>
> - if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> + if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> + !check_overflow_prerequisites(overflow_at_64bits))
> return;
>
> pmu_reset();
> @@ -551,13 +563,14 @@ static void test_mem_access(void)
> read_sysreg(pmovsclr_el0));
> }
>
> -static void test_sw_incr(void)
> +static void test_sw_incr(bool overflow_at_64bits)
> {
> uint32_t events[] = {SW_INCR, SW_INCR};
> uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> int i;
>
> - if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> + if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> + !check_overflow_prerequisites(overflow_at_64bits))
> return;
>
> pmu_reset();
> @@ -597,7 +610,7 @@ static void test_sw_incr(void)
> "overflow on counter #0 after 100 SW_INCR");
> }
>
> -static void test_chained_counters(void)
> +static void test_chained_counters(bool unused)
> {
> uint32_t events[] = {CPU_CYCLES, CHAIN};
>
> @@ -638,7 +651,7 @@ static void test_chained_counters(void)
> report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
> }
>
> -static void test_chained_sw_incr(void)
> +static void test_chained_sw_incr(bool unused)
> {
> uint32_t events[] = {SW_INCR, CHAIN};
> uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> @@ -691,7 +704,7 @@ static void test_chained_sw_incr(void)
> read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
> }
>
> -static void test_chain_promotion(void)
> +static void test_chain_promotion(bool unused)
> {
> uint32_t events[] = {MEM_ACCESS, CHAIN};
> void *addr = malloc(PAGE_SIZE);
> @@ -840,13 +853,14 @@ static bool expect_interrupts(uint32_t bitmap)
> return true;
> }
>
> -static void test_overflow_interrupt(void)
> +static void test_overflow_interrupt(bool overflow_at_64bits)
> {
> uint32_t events[] = {MEM_ACCESS, SW_INCR};
> void *addr = malloc(PAGE_SIZE);
> int i;
>
> - if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> + if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> + !check_overflow_prerequisites(overflow_at_64bits))
> return;
>
> gic_enable_defaults();
> @@ -1070,6 +1084,27 @@ static bool pmu_probe(void)
> return true;
> }
>
> +static void run_test(const char *name, const char *prefix,
> + void (*test)(bool), void *arg)
> +{
> + report_prefix_push(name);
> + report_prefix_push(prefix);
> +
> + test(arg);
> +
> + report_prefix_pop();
> + report_prefix_pop();
> +}
> +
> +static void run_event_test(char *name, void (*test)(bool),
> + bool overflow_at_64bits)
> +{
> + const char *prefix = overflow_at_64bits ? "64-bit overflows"
> + : "32-bit overflows";
> +
> + run_test(name, prefix, test, (void *)overflow_at_64bits);
> +}
> +
> int main(int argc, char *argv[])
> {
> int cpi = 0;
> @@ -1102,33 +1137,19 @@ int main(int argc, char *argv[])
> test_event_counter_config();
> report_prefix_pop();
> } else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
> - report_prefix_push(argv[1]);
> - test_basic_event_count();
> - report_prefix_pop();
> + run_event_test(argv[1], test_basic_event_count, false);
> } else if (strcmp(argv[1], "pmu-mem-access") == 0) {
> - report_prefix_push(argv[1]);
> - test_mem_access();
> - report_prefix_pop();
> + run_event_test(argv[1], test_mem_access, false);
> } else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
> - report_prefix_push(argv[1]);
> - test_sw_incr();
> - report_prefix_pop();
> + run_event_test(argv[1], test_sw_incr, false);
> } else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
> - report_prefix_push(argv[1]);
> - test_chained_counters();
> - report_prefix_pop();
> + run_event_test(argv[1], test_chained_counters, false);
> } else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
> - report_prefix_push(argv[1]);
> - test_chained_sw_incr();
> - report_prefix_pop();
> + run_event_test(argv[1], test_chained_sw_incr, false);
> } else if (strcmp(argv[1], "pmu-chain-promotion") == 0) {
> - report_prefix_push(argv[1]);
> - test_chain_promotion();
> - report_prefix_pop();
> + run_event_test(argv[1], test_chain_promotion, false);
> } else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
> - report_prefix_push(argv[1]);
> - test_overflow_interrupt();
> - report_prefix_pop();
> + run_event_test(argv[1], test_overflow_interrupt, false);
> } else {
> report_abort("Unknown sub-test '%s'", argv[1]);
> }
* Re: [kvm-unit-tests PATCH v4 3/6] arm: pmu: Rename ALL_SET and PRE_OVERFLOW
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 3/6] arm: pmu: Rename ALL_SET and PRE_OVERFLOW Ricardo Koller
@ 2023-02-07 15:48 ` Eric Auger
2023-02-14 18:51 ` Andrew Jones
0 siblings, 1 reply; 14+ messages in thread
From: Eric Auger @ 2023-02-07 15:48 UTC (permalink / raw)
To: Ricardo Koller, kvm, kvmarm, andrew.jones
Cc: maz, alexandru.elisei, oliver.upton, reijiw
Hi Ricardo,
On 1/26/23 17:53, Ricardo Koller wrote:
> Given that the arm PMU tests now handle 64-bit counters and overflows,
> it's better to be precise about what the ALL_SET and PRE_OVERFLOW
> macros actually are. Given that they are both 32-bit counters,
> just add _32 to both of them.
nit: wouldn't have hurt to rename OVERFLOW2 too even if it is only used
in tests using chain events.
Besides:
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Thanks
Eric
>
> Signed-off-by: Ricardo Koller <ricarkol@google.com>
> ---
> arm/pmu.c | 78 +++++++++++++++++++++++++++----------------------------
> 1 file changed, 39 insertions(+), 39 deletions(-)
>
> diff --git a/arm/pmu.c b/arm/pmu.c
> index 06cbd73..08e956d 100644
> --- a/arm/pmu.c
> +++ b/arm/pmu.c
> @@ -54,9 +54,9 @@
> #define EXT_COMMON_EVENTS_LOW 0x4000
> #define EXT_COMMON_EVENTS_HIGH 0x403F
>
> -#define ALL_SET 0x00000000FFFFFFFFULL
> +#define ALL_SET_32 0x00000000FFFFFFFFULL
> #define ALL_CLEAR 0x0000000000000000ULL
> -#define PRE_OVERFLOW 0x00000000FFFFFFF0ULL
> +#define PRE_OVERFLOW_32 0x00000000FFFFFFF0ULL
> #define PRE_OVERFLOW2 0x00000000FFFFFFDCULL
>
> #define PMU_PPI 23
> @@ -153,11 +153,11 @@ static void pmu_reset(void)
> /* reset all counters, counting disabled at PMCR level*/
> set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
> /* Disable all counters */
> - write_sysreg(ALL_SET, PMCNTENCLR);
> + write_sysreg(ALL_SET_32, PMCNTENCLR);
> /* clear overflow reg */
> - write_sysreg(ALL_SET, PMOVSR);
> + write_sysreg(ALL_SET_32, PMOVSR);
> /* disable overflow interrupts on all counters */
> - write_sysreg(ALL_SET, PMINTENCLR);
> + write_sysreg(ALL_SET_32, PMINTENCLR);
> isb();
> }
>
> @@ -322,7 +322,7 @@ static void irq_handler(struct pt_regs *regs)
> pmu_stats.bitmap |= 1 << i;
> }
> }
> - write_sysreg(ALL_SET, pmovsclr_el0);
> + write_sysreg(ALL_SET_32, pmovsclr_el0);
> isb();
> } else {
> pmu_stats.unexpected = true;
> @@ -346,11 +346,11 @@ static void pmu_reset(void)
> /* reset all counters, counting disabled at PMCR level*/
> set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
> /* Disable all counters */
> - write_sysreg_s(ALL_SET, PMCNTENCLR_EL0);
> + write_sysreg_s(ALL_SET_32, PMCNTENCLR_EL0);
> /* clear overflow reg */
> - write_sysreg(ALL_SET, pmovsclr_el0);
> + write_sysreg(ALL_SET_32, pmovsclr_el0);
> /* disable overflow interrupts on all counters */
> - write_sysreg(ALL_SET, pmintenclr_el1);
> + write_sysreg(ALL_SET_32, pmintenclr_el1);
> pmu_reset_stats();
> isb();
> }
> @@ -463,7 +463,7 @@ static void test_basic_event_count(bool overflow_at_64bits)
> write_regn_el0(pmevtyper, 1, INST_RETIRED | PMEVTYPER_EXCLUDE_EL0);
>
> /* disable all counters */
> - write_sysreg_s(ALL_SET, PMCNTENCLR_EL0);
> + write_sysreg_s(ALL_SET_32, PMCNTENCLR_EL0);
> report(!read_sysreg_s(PMCNTENCLR_EL0) && !read_sysreg_s(PMCNTENSET_EL0),
> "pmcntenclr: disable all counters");
>
> @@ -476,8 +476,8 @@ static void test_basic_event_count(bool overflow_at_64bits)
> report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC), "pmcr: reset counters");
>
> /* Preset counter #0 to pre overflow value to trigger an overflow */
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> - report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
> + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> + report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW_32,
> "counter #0 preset to pre-overflow value");
> report(!read_regn_el0(pmevcntr, 1), "counter #1 is 0");
>
> @@ -499,11 +499,11 @@ static void test_basic_event_count(bool overflow_at_64bits)
> "pmcntenset: just enabled #0 and #1");
>
> /* clear overflow register */
> - write_sysreg(ALL_SET, pmovsclr_el0);
> + write_sysreg(ALL_SET_32, pmovsclr_el0);
> report(!read_sysreg(pmovsclr_el0), "check overflow reg is 0");
>
> /* disable overflow interrupts on all counters*/
> - write_sysreg(ALL_SET, pmintenclr_el1);
> + write_sysreg(ALL_SET_32, pmintenclr_el1);
> report(!read_sysreg(pmintenclr_el1),
> "pmintenclr_el1=0, all interrupts disabled");
>
> @@ -551,8 +551,8 @@ static void test_mem_access(bool overflow_at_64bits)
>
> pmu_reset();
>
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> - write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> + write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
> write_sysreg_s(0x3, PMCNTENSET_EL0);
> isb();
> mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
> @@ -566,7 +566,7 @@ static void test_mem_access(bool overflow_at_64bits)
> static void test_sw_incr(bool overflow_at_64bits)
> {
> uint32_t events[] = {SW_INCR, SW_INCR};
> - uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> + uint64_t cntr0 = (PRE_OVERFLOW_32 + 100) & pmevcntr_mask();
> int i;
>
> if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> @@ -580,7 +580,7 @@ static void test_sw_incr(bool overflow_at_64bits)
> /* enable counters #0 and #1 */
> write_sysreg_s(0x3, PMCNTENSET_EL0);
>
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> isb();
>
> for (i = 0; i < 100; i++)
> @@ -588,12 +588,12 @@ static void test_sw_incr(bool overflow_at_64bits)
>
> isb();
> report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
> - report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
> + report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW_32,
> "PWSYNC does not increment if PMCR.E is unset");
>
> pmu_reset();
>
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> write_sysreg_s(0x3, PMCNTENSET_EL0);
> set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> isb();
> @@ -623,7 +623,7 @@ static void test_chained_counters(bool unused)
> write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
> /* enable counters #0 and #1 */
> write_sysreg_s(0x3, PMCNTENSET_EL0);
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
>
> precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
>
> @@ -635,15 +635,15 @@ static void test_chained_counters(bool unused)
> pmu_reset();
> write_sysreg_s(0x3, PMCNTENSET_EL0);
>
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> write_regn_el0(pmevcntr, 1, 0x1);
> precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
> report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
> report(read_regn_el0(pmevcntr, 1) == 2, "CHAIN counter #1 set to 2");
> report(read_sysreg(pmovsclr_el0) == 0x1, "overflow recorded for chained incr #2");
>
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> - write_regn_el0(pmevcntr, 1, ALL_SET);
> + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> + write_regn_el0(pmevcntr, 1, ALL_SET_32);
>
> precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
> report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
> @@ -654,8 +654,8 @@ static void test_chained_counters(bool unused)
> static void test_chained_sw_incr(bool unused)
> {
> uint32_t events[] = {SW_INCR, CHAIN};
> - uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> - uint64_t cntr1 = (ALL_SET + 1) & pmevcntr_mask();
> + uint64_t cntr0 = (PRE_OVERFLOW_32 + 100) & pmevcntr_mask();
> + uint64_t cntr1 = (ALL_SET_32 + 1) & pmevcntr_mask();
> int i;
>
> if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> @@ -668,7 +668,7 @@ static void test_chained_sw_incr(bool unused)
> /* enable counters #0 and #1 */
> write_sysreg_s(0x3, PMCNTENSET_EL0);
>
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> isb();
>
> @@ -686,8 +686,8 @@ static void test_chained_sw_incr(bool unused)
> pmu_reset();
>
> write_regn_el0(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0);
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> - write_regn_el0(pmevcntr, 1, ALL_SET);
> + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> + write_regn_el0(pmevcntr, 1, ALL_SET_32);
> write_sysreg_s(0x3, PMCNTENSET_EL0);
> set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> isb();
> @@ -725,7 +725,7 @@ static void test_chain_promotion(bool unused)
>
> /* Only enable even counter */
> pmu_reset();
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> write_sysreg_s(0x1, PMCNTENSET_EL0);
> isb();
>
> @@ -873,8 +873,8 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
> write_regn_el0(pmevtyper, 1, SW_INCR | PMEVTYPER_EXCLUDE_EL0);
> write_sysreg_s(0x3, PMCNTENSET_EL0);
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> - write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> + write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
> isb();
>
> /* interrupts are disabled (PMINTENSET_EL1 == 0) */
> @@ -893,13 +893,13 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> isb();
> report(expect_interrupts(0), "no overflow interrupt after counting");
>
> - /* enable interrupts (PMINTENSET_EL1 <= ALL_SET) */
> + /* enable interrupts (PMINTENSET_EL1 <= ALL_SET_32) */
>
> pmu_reset_stats();
>
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> - write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> - write_sysreg(ALL_SET, pmintenset_el1);
> + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> + write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
> + write_sysreg(ALL_SET_32, pmintenset_el1);
> isb();
>
> mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> @@ -916,7 +916,7 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> pmu_reset_stats();
>
> write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> isb();
> mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> report(expect_interrupts(0x1),
> @@ -924,8 +924,8 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
>
> /* overflow on odd counter */
> pmu_reset_stats();
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> - write_regn_el0(pmevcntr, 1, ALL_SET);
> + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> + write_regn_el0(pmevcntr, 1, ALL_SET_32);
> isb();
> mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E);
> report(expect_interrupts(0x3),
* Re: [kvm-unit-tests PATCH v4 4/6] arm: pmu: Add tests for 64-bit overflows
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 4/6] arm: pmu: Add tests for 64-bit overflows Ricardo Koller
@ 2023-02-07 17:25 ` Eric Auger
0 siblings, 0 replies; 14+ messages in thread
From: Eric Auger @ 2023-02-07 17:25 UTC (permalink / raw)
To: Ricardo Koller, kvm, kvmarm, andrew.jones
Cc: maz, alexandru.elisei, oliver.upton, reijiw
Hi Ricardo,
On 1/26/23 17:53, Ricardo Koller wrote:
> Modify all tests checking overflows to support both 32 (PMCR_EL0.LP == 0)
> and 64-bit overflows (PMCR_EL0.LP == 1). 64-bit overflows are only
> supported on PMUv3p5.
>
> Note that chained tests do not implement "overflow_at_64bits == true".
> That's because there are no CHAIN events when "PMCR_EL0.LP == 1" (for more
> details see AArch64.IncrementEventCounter() pseudocode in the ARM ARM DDI
> 0487H.a, J1.1.1 "aarch64/debug").
>
> Signed-off-by: Ricardo Koller <ricarkol@google.com>
> Reviewed-by: Reiji Watanabe <reijiw@google.com>
Looks good to me
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Thanks
Eric
> ---
> arm/pmu.c | 100 ++++++++++++++++++++++++++++++++++++------------------
> 1 file changed, 67 insertions(+), 33 deletions(-)
>
> diff --git a/arm/pmu.c b/arm/pmu.c
> index 08e956d..082fb41 100644
> --- a/arm/pmu.c
> +++ b/arm/pmu.c
> @@ -28,6 +28,7 @@
> #define PMU_PMCR_X (1 << 4)
> #define PMU_PMCR_DP (1 << 5)
> #define PMU_PMCR_LC (1 << 6)
> +#define PMU_PMCR_LP (1 << 7)
> #define PMU_PMCR_N_SHIFT 11
> #define PMU_PMCR_N_MASK 0x1f
> #define PMU_PMCR_ID_SHIFT 16
> @@ -57,8 +58,12 @@
> #define ALL_SET_32 0x00000000FFFFFFFFULL
> #define ALL_CLEAR 0x0000000000000000ULL
> #define PRE_OVERFLOW_32 0x00000000FFFFFFF0ULL
> +#define PRE_OVERFLOW_64 0xFFFFFFFFFFFFFFF0ULL
> #define PRE_OVERFLOW2 0x00000000FFFFFFDCULL
>
> +#define PRE_OVERFLOW(__overflow_at_64bits) \
> + (__overflow_at_64bits ? PRE_OVERFLOW_64 : PRE_OVERFLOW_32)
> +
> #define PMU_PPI 23
>
> struct pmu {
> @@ -448,8 +453,10 @@ static bool check_overflow_prerequisites(bool overflow_at_64bits)
> static void test_basic_event_count(bool overflow_at_64bits)
> {
> uint32_t implemented_counter_mask, non_implemented_counter_mask;
> - uint32_t counter_mask;
> + uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> + uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
> uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
> + uint32_t counter_mask;
>
> if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> !check_overflow_prerequisites(overflow_at_64bits))
> @@ -471,13 +478,13 @@ static void test_basic_event_count(bool overflow_at_64bits)
> * clear cycle and all event counters and allow counter enablement
> * through PMCNTENSET. LC is RES1.
> */
> - set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
> + set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P | pmcr_lp);
> isb();
> - report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC), "pmcr: reset counters");
> + report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC | pmcr_lp), "pmcr: reset counters");
>
> /* Preset counter #0 to pre overflow value to trigger an overflow */
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> - report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW_32,
> + write_regn_el0(pmevcntr, 0, pre_overflow);
> + report(read_regn_el0(pmevcntr, 0) == pre_overflow,
> "counter #0 preset to pre-overflow value");
> report(!read_regn_el0(pmevcntr, 1), "counter #1 is 0");
>
> @@ -530,6 +537,8 @@ static void test_mem_access(bool overflow_at_64bits)
> {
> void *addr = malloc(PAGE_SIZE);
> uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
> + uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> + uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
>
> if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> !check_overflow_prerequisites(overflow_at_64bits))
> @@ -541,7 +550,7 @@ static void test_mem_access(bool overflow_at_64bits)
> write_regn_el0(pmevtyper, 1, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
> write_sysreg_s(0x3, PMCNTENSET_EL0);
> isb();
> - mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
> + mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> report_info("counter #0 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
> report_info("counter #1 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
> /* We may measure more than 20 mem access depending on the core */
> @@ -551,11 +560,11 @@ static void test_mem_access(bool overflow_at_64bits)
>
> pmu_reset();
>
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> - write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
> + write_regn_el0(pmevcntr, 0, pre_overflow);
> + write_regn_el0(pmevcntr, 1, pre_overflow);
> write_sysreg_s(0x3, PMCNTENSET_EL0);
> isb();
> - mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
> + mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> report(read_sysreg(pmovsclr_el0) == 0x3,
> "Ran 20 mem accesses with expected overflows on both counters");
> report_info("cnt#0 = %ld cnt#1=%ld overflow=0x%lx",
> @@ -565,8 +574,10 @@ static void test_mem_access(bool overflow_at_64bits)
>
> static void test_sw_incr(bool overflow_at_64bits)
> {
> + uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> + uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
> uint32_t events[] = {SW_INCR, SW_INCR};
> - uint64_t cntr0 = (PRE_OVERFLOW_32 + 100) & pmevcntr_mask();
> + uint64_t cntr0 = (pre_overflow + 100) & pmevcntr_mask();
> int i;
>
> if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> @@ -580,7 +591,7 @@ static void test_sw_incr(bool overflow_at_64bits)
> /* enable counters #0 and #1 */
> write_sysreg_s(0x3, PMCNTENSET_EL0);
>
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> + write_regn_el0(pmevcntr, 0, pre_overflow);
> isb();
>
> for (i = 0; i < 100; i++)
> @@ -588,14 +599,14 @@ static void test_sw_incr(bool overflow_at_64bits)
>
> isb();
> report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
> - report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW_32,
> + report(read_regn_el0(pmevcntr, 0) == pre_overflow,
> "PWSYNC does not increment if PMCR.E is unset");
>
> pmu_reset();
>
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> + write_regn_el0(pmevcntr, 0, pre_overflow);
> write_sysreg_s(0x3, PMCNTENSET_EL0);
> - set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> + set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> isb();
>
> for (i = 0; i < 100; i++)
> @@ -613,6 +624,7 @@ static void test_sw_incr(bool overflow_at_64bits)
> static void test_chained_counters(bool unused)
> {
> uint32_t events[] = {CPU_CYCLES, CHAIN};
> + uint64_t all_set = pmevcntr_mask();
>
> if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> return;
> @@ -643,11 +655,11 @@ static void test_chained_counters(bool unused)
> report(read_sysreg(pmovsclr_el0) == 0x1, "overflow recorded for chained incr #2");
>
> write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> - write_regn_el0(pmevcntr, 1, ALL_SET_32);
> + write_regn_el0(pmevcntr, 1, all_set);
>
> precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
> report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
> - report(!read_regn_el0(pmevcntr, 1), "CHAIN counter #1 wrapped");
> + report(read_regn_el0(pmevcntr, 1) == 0, "CHAIN counter #1 wrapped");
> report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
> }
>
> @@ -855,6 +867,9 @@ static bool expect_interrupts(uint32_t bitmap)
>
> static void test_overflow_interrupt(bool overflow_at_64bits)
> {
> + uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> + uint64_t all_set = pmevcntr_mask();
> + uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
> uint32_t events[] = {MEM_ACCESS, SW_INCR};
> void *addr = malloc(PAGE_SIZE);
> int i;
> @@ -873,16 +888,16 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
> write_regn_el0(pmevtyper, 1, SW_INCR | PMEVTYPER_EXCLUDE_EL0);
> write_sysreg_s(0x3, PMCNTENSET_EL0);
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> - write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
> + write_regn_el0(pmevcntr, 0, pre_overflow);
> + write_regn_el0(pmevcntr, 1, pre_overflow);
> isb();
>
> /* interrupts are disabled (PMINTENSET_EL1 == 0) */
>
> - mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> + mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> report(expect_interrupts(0), "no overflow interrupt after preset");
>
> - set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> + set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> isb();
>
> for (i = 0; i < 100; i++)
> @@ -897,12 +912,12 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
>
> pmu_reset_stats();
>
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> - write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
> + write_regn_el0(pmevcntr, 0, pre_overflow);
> + write_regn_el0(pmevcntr, 1, pre_overflow);
> write_sysreg(ALL_SET_32, pmintenset_el1);
> isb();
>
> - mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> + mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> for (i = 0; i < 100; i++)
> write_sysreg(0x3, pmswinc_el0);
>
> @@ -911,25 +926,40 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> report(expect_interrupts(0x3),
> "overflow interrupts expected on #0 and #1");
>
> - /* promote to 64-b */
> + /*
> + * promote to 64-b:
> + *
> + * This only applies to the !overflow_at_64bits case, as
> + * overflow_at_64bits doesn't implement CHAIN events. The
> + * overflow_at_64bits case just checks that chained counters are
> + * not incremented when PMCR.LP == 1.
> + */
>
> pmu_reset_stats();
>
> write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> + write_regn_el0(pmevcntr, 0, pre_overflow);
> isb();
> - mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> - report(expect_interrupts(0x1),
> - "expect overflow interrupt on 32b boundary");
> + mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> + report(expect_interrupts(0x1), "expect overflow interrupt");
>
> /* overflow on odd counter */
> pmu_reset_stats();
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> - write_regn_el0(pmevcntr, 1, ALL_SET_32);
> + write_regn_el0(pmevcntr, 0, pre_overflow);
> + write_regn_el0(pmevcntr, 1, all_set);
> isb();
> - mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E);
> - report(expect_interrupts(0x3),
> - "expect overflow interrupt on even and odd counter");
> + mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> + if (overflow_at_64bits) {
> + report(expect_interrupts(0x1),
> + "expect overflow interrupt on even counter");
> + report(read_regn_el0(pmevcntr, 1) == all_set,
> + "Odd counter did not change");
> + } else {
> + report(expect_interrupts(0x3),
> + "expect overflow interrupt on even and odd counter");
> + report(read_regn_el0(pmevcntr, 1) != all_set,
> + "Odd counter wrapped");
> + }
> }
> #endif
>
> @@ -1138,10 +1168,13 @@ int main(int argc, char *argv[])
> report_prefix_pop();
> } else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
> run_event_test(argv[1], test_basic_event_count, false);
> + run_event_test(argv[1], test_basic_event_count, true);
> } else if (strcmp(argv[1], "pmu-mem-access") == 0) {
> run_event_test(argv[1], test_mem_access, false);
> + run_event_test(argv[1], test_mem_access, true);
> } else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
> run_event_test(argv[1], test_sw_incr, false);
> + run_event_test(argv[1], test_sw_incr, true);
> } else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
> run_event_test(argv[1], test_chained_counters, false);
> } else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
> @@ -1150,6 +1183,7 @@ int main(int argc, char *argv[])
> run_event_test(argv[1], test_chain_promotion, false);
> } else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
> run_event_test(argv[1], test_overflow_interrupt, false);
> + run_event_test(argv[1], test_overflow_interrupt, true);
> } else {
> report_abort("Unknown sub-test '%s'", argv[1]);
> }
* Re: [kvm-unit-tests PATCH v4 6/6] arm: pmu: Fix test_overflow_interrupt()
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 6/6] arm: pmu: Fix test_overflow_interrupt() Ricardo Koller
@ 2023-02-07 17:31 ` Eric Auger
0 siblings, 0 replies; 14+ messages in thread
From: Eric Auger @ 2023-02-07 17:31 UTC (permalink / raw)
To: Ricardo Koller, kvm, kvmarm, andrew.jones
Cc: maz, alexandru.elisei, oliver.upton, reijiw
Hi Ricardo,
On 1/26/23 17:53, Ricardo Koller wrote:
> test_overflow_interrupt() (from arm/pmu.c) has a test that passes
> because the previous test leaves the state needed to pass: the
> overflow status register with the expected bits. The test (that should
> fail) does not enable the PMU after mem_access_loop(), which clears
> the PMCR, and before writing into the software increment register.
>
> Fix by clearing the previous test state (pmovsclr_el0) and by enabling
> the PMU before the sw_incr test.
>
> Fixes: 4f5ef94f3aac ("arm: pmu: Test overflow interrupts")
> Reported-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Thank you for the fix!
Eric
> ---
> arm/pmu.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/arm/pmu.c b/arm/pmu.c
> index 1e93ea2..f91b5ca 100644
> --- a/arm/pmu.c
> +++ b/arm/pmu.c
> @@ -914,10 +914,15 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
>
> write_regn_el0(pmevcntr, 0, pre_overflow);
> write_regn_el0(pmevcntr, 1, pre_overflow);
> + write_sysreg(ALL_SET_32, pmovsclr_el0);
> write_sysreg(ALL_SET_32, pmintenset_el1);
> isb();
>
> mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> +
> + set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> + isb();
> +
> for (i = 0; i < 100; i++)
> write_sysreg(0x3, pmswinc_el0);
>
* Re: [kvm-unit-tests PATCH v4 2/6] arm: pmu: Prepare for testing 64-bit overflows
2023-02-07 15:45 ` Eric Auger
@ 2023-02-09 14:59 ` Ricardo Koller
0 siblings, 0 replies; 14+ messages in thread
From: Ricardo Koller @ 2023-02-09 14:59 UTC (permalink / raw)
To: eric.auger
Cc: kvm, kvmarm, andrew.jones, maz, alexandru.elisei, oliver.upton, reijiw
On Tue, Feb 7, 2023 at 7:46 AM Eric Auger <eric.auger@redhat.com> wrote:
>
> Hi Ricardo,
>
> On 1/26/23 17:53, Ricardo Koller wrote:
> > PMUv3p5 adds a knob, PMCR_EL0.LP == 1, that allows overflowing at 64-bits
> > instead of 32. Prepare by doing these 3 things:
> >
> > 1. Add a "bool overflow_at_64bits" argument to all tests checking
> > overflows.
> Actually test_chained_sw_incr() and test_chained_counters() also test
> overflows but they feature CHAIN events. I guess that's why you don't
> need the LP flag. Just tweak the commit msg if you need to respin.
> > 2. Extend satisfy_prerequisites() to check if the machine supports
> > "overflow_at_64bits".
> > 3. Refactor the test invocations to use the new "run_test()" which adds a
> > report prefix indicating whether the test uses 64 or 32-bit overflows.
> >
> > A subsequent commit will actually add the 64-bit overflow tests.
> >
> > Signed-off-by: Ricardo Koller <ricarkol@google.com>
> > Reviewed-by: Reiji Watanabe <reijiw@google.com>
> Besides
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
Thanks for the reviews, I'll respin the series with the changes you suggest.
Thanks,
Ricardo
>
> Thanks
>
> Eric
>
> > ---
> > arm/pmu.c | 99 +++++++++++++++++++++++++++++++++----------------------
> > 1 file changed, 60 insertions(+), 39 deletions(-)
> >
> > diff --git a/arm/pmu.c b/arm/pmu.c
> > index 7f0794d..06cbd73 100644
> > --- a/arm/pmu.c
> > +++ b/arm/pmu.c
> > @@ -164,13 +164,13 @@ static void pmu_reset(void)
> > /* event counter tests only implemented for aarch64 */
> > static void test_event_introspection(void) {}
> > static void test_event_counter_config(void) {}
> > -static void test_basic_event_count(void) {}
> > -static void test_mem_access(void) {}
> > -static void test_sw_incr(void) {}
> > -static void test_chained_counters(void) {}
> > -static void test_chained_sw_incr(void) {}
> > -static void test_chain_promotion(void) {}
> > -static void test_overflow_interrupt(void) {}
> > +static void test_basic_event_count(bool overflow_at_64bits) {}
> > +static void test_mem_access(bool overflow_at_64bits) {}
> > +static void test_sw_incr(bool overflow_at_64bits) {}
> > +static void test_chained_counters(bool unused) {}
> > +static void test_chained_sw_incr(bool unused) {}
> > +static void test_chain_promotion(bool unused) {}
> > +static void test_overflow_interrupt(bool overflow_at_64bits) {}
> >
> > #elif defined(__aarch64__)
> > #define ID_AA64DFR0_PERFMON_SHIFT 8
> > @@ -435,13 +435,24 @@ static uint64_t pmevcntr_mask(void)
> > return (uint32_t)~0;
> > }
> >
> > -static void test_basic_event_count(void)
> > +static bool check_overflow_prerequisites(bool overflow_at_64bits)
> > +{
> > + if (overflow_at_64bits && pmu.version < ID_DFR0_PMU_V3_8_5) {
> > + report_skip("Skip test as 64 overflows need FEAT_PMUv3p5");
> > + return false;
> > + }
> > +
> > + return true;
> > +}
> > +
> > +static void test_basic_event_count(bool overflow_at_64bits)
> > {
> > uint32_t implemented_counter_mask, non_implemented_counter_mask;
> > uint32_t counter_mask;
> > uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
> >
> > - if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> > + if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> > + !check_overflow_prerequisites(overflow_at_64bits))
> > return;
> >
> > implemented_counter_mask = BIT(pmu.nb_implemented_counters) - 1;
> > @@ -515,12 +526,13 @@ static void test_basic_event_count(void)
> > "check overflow happened on #0 only");
> > }
> >
> > -static void test_mem_access(void)
> > +static void test_mem_access(bool overflow_at_64bits)
> > {
> > void *addr = malloc(PAGE_SIZE);
> > uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
> >
> > - if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> > + if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> > + !check_overflow_prerequisites(overflow_at_64bits))
> > return;
> >
> > pmu_reset();
> > @@ -551,13 +563,14 @@ static void test_mem_access(void)
> > read_sysreg(pmovsclr_el0));
> > }
> >
> > -static void test_sw_incr(void)
> > +static void test_sw_incr(bool overflow_at_64bits)
> > {
> > uint32_t events[] = {SW_INCR, SW_INCR};
> > uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> > int i;
> >
> > - if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> > + if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> > + !check_overflow_prerequisites(overflow_at_64bits))
> > return;
> >
> > pmu_reset();
> > @@ -597,7 +610,7 @@ static void test_sw_incr(void)
> > "overflow on counter #0 after 100 SW_INCR");
> > }
> >
> > -static void test_chained_counters(void)
> > +static void test_chained_counters(bool unused)
> > {
> > uint32_t events[] = {CPU_CYCLES, CHAIN};
> >
> > @@ -638,7 +651,7 @@ static void test_chained_counters(void)
> > report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
> > }
> >
> > -static void test_chained_sw_incr(void)
> > +static void test_chained_sw_incr(bool unused)
> > {
> > uint32_t events[] = {SW_INCR, CHAIN};
> > uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> > @@ -691,7 +704,7 @@ static void test_chained_sw_incr(void)
> > read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
> > }
> >
> > -static void test_chain_promotion(void)
> > +static void test_chain_promotion(bool unused)
> > {
> > uint32_t events[] = {MEM_ACCESS, CHAIN};
> > void *addr = malloc(PAGE_SIZE);
> > @@ -840,13 +853,14 @@ static bool expect_interrupts(uint32_t bitmap)
> > return true;
> > }
> >
> > -static void test_overflow_interrupt(void)
> > +static void test_overflow_interrupt(bool overflow_at_64bits)
> > {
> > uint32_t events[] = {MEM_ACCESS, SW_INCR};
> > void *addr = malloc(PAGE_SIZE);
> > int i;
> >
> > - if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> > + if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> > + !check_overflow_prerequisites(overflow_at_64bits))
> > return;
> >
> > gic_enable_defaults();
> > @@ -1070,6 +1084,27 @@ static bool pmu_probe(void)
> > return true;
> > }
> >
> > +static void run_test(const char *name, const char *prefix,
> > + void (*test)(bool), void *arg)
> > +{
> > + report_prefix_push(name);
> > + report_prefix_push(prefix);
> > +
> > + test(arg);
> > +
> > + report_prefix_pop();
> > + report_prefix_pop();
> > +}
> > +
> > +static void run_event_test(char *name, void (*test)(bool),
> > + bool overflow_at_64bits)
> > +{
> > + const char *prefix = overflow_at_64bits ? "64-bit overflows"
> > + : "32-bit overflows";
> > +
> > + run_test(name, prefix, test, (void *)overflow_at_64bits);
> > +}
> > +
> > int main(int argc, char *argv[])
> > {
> > int cpi = 0;
> > @@ -1102,33 +1137,19 @@ int main(int argc, char *argv[])
> > test_event_counter_config();
> > report_prefix_pop();
> > } else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
> > - report_prefix_push(argv[1]);
> > - test_basic_event_count();
> > - report_prefix_pop();
> > + run_event_test(argv[1], test_basic_event_count, false);
> > } else if (strcmp(argv[1], "pmu-mem-access") == 0) {
> > - report_prefix_push(argv[1]);
> > - test_mem_access();
> > - report_prefix_pop();
> > + run_event_test(argv[1], test_mem_access, false);
> > } else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
> > - report_prefix_push(argv[1]);
> > - test_sw_incr();
> > - report_prefix_pop();
> > + run_event_test(argv[1], test_sw_incr, false);
> > } else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
> > - report_prefix_push(argv[1]);
> > - test_chained_counters();
> > - report_prefix_pop();
> > + run_event_test(argv[1], test_chained_counters, false);
> > } else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
> > - report_prefix_push(argv[1]);
> > - test_chained_sw_incr();
> > - report_prefix_pop();
> > + run_event_test(argv[1], test_chained_sw_incr, false);
> > } else if (strcmp(argv[1], "pmu-chain-promotion") == 0) {
> > - report_prefix_push(argv[1]);
> > - test_chain_promotion();
> > - report_prefix_pop();
> > + run_event_test(argv[1], test_chain_promotion, false);
> > } else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
> > - report_prefix_push(argv[1]);
> > - test_overflow_interrupt();
> > - report_prefix_pop();
> > + run_event_test(argv[1], test_overflow_interrupt, false);
> > } else {
> > report_abort("Unknown sub-test '%s'", argv[1]);
> > }
>
* Re: [kvm-unit-tests PATCH v4 3/6] arm: pmu: Rename ALL_SET and PRE_OVERFLOW
2023-02-07 15:48 ` Eric Auger
@ 2023-02-14 18:51 ` Andrew Jones
0 siblings, 0 replies; 14+ messages in thread
From: Andrew Jones @ 2023-02-14 18:51 UTC (permalink / raw)
To: Eric Auger
Cc: Ricardo Koller, kvm, kvmarm, maz, alexandru.elisei, oliver.upton, reijiw
On Tue, Feb 07, 2023 at 04:48:59PM +0100, Eric Auger wrote:
> Hi Ricardo,
>
> On 1/26/23 17:53, Ricardo Koller wrote:
> > Given that the arm PMU tests now handle 64-bit counters and overflows,
> > it's better to be precise about what the ALL_SET and PRE_OVERFLOW
> > macros actually are. Given that they are both 32-bit counters,
> > just add _32 to both of them.
> nit: wouldn't have hurt to rename OVERFLOW2 too even if it is only used
> in tests using chain events.
I'll rename OVERFLOW2 when merging.
Thanks,
drew
>
> Besides:
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
>
> Thanks
>
> Eric
>
> >
> > Signed-off-by: Ricardo Koller <ricarkol@google.com>
> > ---
> > arm/pmu.c | 78 +++++++++++++++++++++++++++----------------------------
> > 1 file changed, 39 insertions(+), 39 deletions(-)
> >
> > diff --git a/arm/pmu.c b/arm/pmu.c
> > index 06cbd73..08e956d 100644
> > --- a/arm/pmu.c
> > +++ b/arm/pmu.c
> > @@ -54,9 +54,9 @@
> > #define EXT_COMMON_EVENTS_LOW 0x4000
> > #define EXT_COMMON_EVENTS_HIGH 0x403F
> >
> > -#define ALL_SET 0x00000000FFFFFFFFULL
> > +#define ALL_SET_32 0x00000000FFFFFFFFULL
> > #define ALL_CLEAR 0x0000000000000000ULL
> > -#define PRE_OVERFLOW 0x00000000FFFFFFF0ULL
> > +#define PRE_OVERFLOW_32 0x00000000FFFFFFF0ULL
> > #define PRE_OVERFLOW2 0x00000000FFFFFFDCULL
> >
> > #define PMU_PPI 23
> > @@ -153,11 +153,11 @@ static void pmu_reset(void)
> > /* reset all counters, counting disabled at PMCR level*/
> > set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
> > /* Disable all counters */
> > - write_sysreg(ALL_SET, PMCNTENCLR);
> > + write_sysreg(ALL_SET_32, PMCNTENCLR);
> > /* clear overflow reg */
> > - write_sysreg(ALL_SET, PMOVSR);
> > + write_sysreg(ALL_SET_32, PMOVSR);
> > /* disable overflow interrupts on all counters */
> > - write_sysreg(ALL_SET, PMINTENCLR);
> > + write_sysreg(ALL_SET_32, PMINTENCLR);
> > isb();
> > }
> >
> > @@ -322,7 +322,7 @@ static void irq_handler(struct pt_regs *regs)
> > pmu_stats.bitmap |= 1 << i;
> > }
> > }
> > - write_sysreg(ALL_SET, pmovsclr_el0);
> > + write_sysreg(ALL_SET_32, pmovsclr_el0);
> > isb();
> > } else {
> > pmu_stats.unexpected = true;
> > @@ -346,11 +346,11 @@ static void pmu_reset(void)
> > /* reset all counters, counting disabled at PMCR level*/
> > set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
> > /* Disable all counters */
> > - write_sysreg_s(ALL_SET, PMCNTENCLR_EL0);
> > + write_sysreg_s(ALL_SET_32, PMCNTENCLR_EL0);
> > /* clear overflow reg */
> > - write_sysreg(ALL_SET, pmovsclr_el0);
> > + write_sysreg(ALL_SET_32, pmovsclr_el0);
> > /* disable overflow interrupts on all counters */
> > - write_sysreg(ALL_SET, pmintenclr_el1);
> > + write_sysreg(ALL_SET_32, pmintenclr_el1);
> > pmu_reset_stats();
> > isb();
> > }
> > @@ -463,7 +463,7 @@ static void test_basic_event_count(bool overflow_at_64bits)
> > write_regn_el0(pmevtyper, 1, INST_RETIRED | PMEVTYPER_EXCLUDE_EL0);
> >
> > /* disable all counters */
> > - write_sysreg_s(ALL_SET, PMCNTENCLR_EL0);
> > + write_sysreg_s(ALL_SET_32, PMCNTENCLR_EL0);
> > report(!read_sysreg_s(PMCNTENCLR_EL0) && !read_sysreg_s(PMCNTENSET_EL0),
> > "pmcntenclr: disable all counters");
> >
> > @@ -476,8 +476,8 @@ static void test_basic_event_count(bool overflow_at_64bits)
> > report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC), "pmcr: reset counters");
> >
> > /* Preset counter #0 to pre overflow value to trigger an overflow */
> > - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > - report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
> > + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> > + report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW_32,
> > "counter #0 preset to pre-overflow value");
> > report(!read_regn_el0(pmevcntr, 1), "counter #1 is 0");
> >
> > @@ -499,11 +499,11 @@ static void test_basic_event_count(bool overflow_at_64bits)
> > "pmcntenset: just enabled #0 and #1");
> >
> > /* clear overflow register */
> > - write_sysreg(ALL_SET, pmovsclr_el0);
> > + write_sysreg(ALL_SET_32, pmovsclr_el0);
> > report(!read_sysreg(pmovsclr_el0), "check overflow reg is 0");
> >
> > /* disable overflow interrupts on all counters*/
> > - write_sysreg(ALL_SET, pmintenclr_el1);
> > + write_sysreg(ALL_SET_32, pmintenclr_el1);
> > report(!read_sysreg(pmintenclr_el1),
> > "pmintenclr_el1=0, all interrupts disabled");
> >
> > @@ -551,8 +551,8 @@ static void test_mem_access(bool overflow_at_64bits)
> >
> > pmu_reset();
> >
> > - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > - write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> > + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> > + write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
> > write_sysreg_s(0x3, PMCNTENSET_EL0);
> > isb();
> > mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
> > @@ -566,7 +566,7 @@ static void test_mem_access(bool overflow_at_64bits)
> > static void test_sw_incr(bool overflow_at_64bits)
> > {
> > uint32_t events[] = {SW_INCR, SW_INCR};
> > - uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> > + uint64_t cntr0 = (PRE_OVERFLOW_32 + 100) & pmevcntr_mask();
> > int i;
> >
> > if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> > @@ -580,7 +580,7 @@ static void test_sw_incr(bool overflow_at_64bits)
> > /* enable counters #0 and #1 */
> > write_sysreg_s(0x3, PMCNTENSET_EL0);
> >
> > - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> > isb();
> >
> > for (i = 0; i < 100; i++)
> > @@ -588,12 +588,12 @@ static void test_sw_incr(bool overflow_at_64bits)
> >
> > isb();
> > report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
> > - report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
> > + report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW_32,
> > "PWSYNC does not increment if PMCR.E is unset");
> >
> > pmu_reset();
> >
> > - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> > write_sysreg_s(0x3, PMCNTENSET_EL0);
> > set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> > isb();
> > @@ -623,7 +623,7 @@ static void test_chained_counters(bool unused)
> > write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
> > /* enable counters #0 and #1 */
> > write_sysreg_s(0x3, PMCNTENSET_EL0);
> > - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> >
> > precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
> >
> > @@ -635,15 +635,15 @@ static void test_chained_counters(bool unused)
> > pmu_reset();
> > write_sysreg_s(0x3, PMCNTENSET_EL0);
> >
> > - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> > write_regn_el0(pmevcntr, 1, 0x1);
> > precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
> > report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
> > report(read_regn_el0(pmevcntr, 1) == 2, "CHAIN counter #1 set to 2");
> > report(read_sysreg(pmovsclr_el0) == 0x1, "overflow recorded for chained incr #2");
> >
> > - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > - write_regn_el0(pmevcntr, 1, ALL_SET);
> > + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> > + write_regn_el0(pmevcntr, 1, ALL_SET_32);
> >
> > precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
> > report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
> > @@ -654,8 +654,8 @@ static void test_chained_counters(bool unused)
> > static void test_chained_sw_incr(bool unused)
> > {
> > uint32_t events[] = {SW_INCR, CHAIN};
> > - uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> > - uint64_t cntr1 = (ALL_SET + 1) & pmevcntr_mask();
> > + uint64_t cntr0 = (PRE_OVERFLOW_32 + 100) & pmevcntr_mask();
> > + uint64_t cntr1 = (ALL_SET_32 + 1) & pmevcntr_mask();
> > int i;
> >
> > if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> > @@ -668,7 +668,7 @@ static void test_chained_sw_incr(bool unused)
> > /* enable counters #0 and #1 */
> > write_sysreg_s(0x3, PMCNTENSET_EL0);
> >
> > - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> > set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> > isb();
> >
> > @@ -686,8 +686,8 @@ static void test_chained_sw_incr(bool unused)
> > pmu_reset();
> >
> > write_regn_el0(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0);
> > - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > - write_regn_el0(pmevcntr, 1, ALL_SET);
> > + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> > + write_regn_el0(pmevcntr, 1, ALL_SET_32);
> > write_sysreg_s(0x3, PMCNTENSET_EL0);
> > set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> > isb();
> > @@ -725,7 +725,7 @@ static void test_chain_promotion(bool unused)
> >
> > /* Only enable even counter */
> > pmu_reset();
> > - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> > write_sysreg_s(0x1, PMCNTENSET_EL0);
> > isb();
> >
> > @@ -873,8 +873,8 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> > write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
> > write_regn_el0(pmevtyper, 1, SW_INCR | PMEVTYPER_EXCLUDE_EL0);
> > write_sysreg_s(0x3, PMCNTENSET_EL0);
> > - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > - write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> > + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> > + write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
> > isb();
> >
> > /* interrupts are disabled (PMINTENSET_EL1 == 0) */
> > @@ -893,13 +893,13 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> > isb();
> > report(expect_interrupts(0), "no overflow interrupt after counting");
> >
> > - /* enable interrupts (PMINTENSET_EL1 <= ALL_SET) */
> > + /* enable interrupts (PMINTENSET_EL1 <= ALL_SET_32) */
> >
> > pmu_reset_stats();
> >
> > - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > - write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> > - write_sysreg(ALL_SET, pmintenset_el1);
> > + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> > + write_regn_el0(pmevcntr, 1, PRE_OVERFLOW_32);
> > + write_sysreg(ALL_SET_32, pmintenset_el1);
> > isb();
> >
> > mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> > @@ -916,7 +916,7 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> > pmu_reset_stats();
> >
> > write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
> > - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> > isb();
> > mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> > report(expect_interrupts(0x1),
> > @@ -924,8 +924,8 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> >
> > /* overflow on odd counter */
> > pmu_reset_stats();
> > - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > - write_regn_el0(pmevcntr, 1, ALL_SET);
> > + write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> > + write_regn_el0(pmevcntr, 1, ALL_SET_32);
> > isb();
> > mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E);
> > report(expect_interrupts(0x3),
>
* Re: [kvm-unit-tests PATCH v4 0/6] arm: pmu: Add support for PMUv3p5
2023-01-26 16:53 [kvm-unit-tests PATCH v4 0/6] arm: pmu: Add support for PMUv3p5 Ricardo Koller
` (5 preceding siblings ...)
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 6/6] arm: pmu: Fix test_overflow_interrupt() Ricardo Koller
@ 2023-02-14 20:18 ` Andrew Jones
6 siblings, 0 replies; 14+ messages in thread
From: Andrew Jones @ 2023-02-14 20:18 UTC (permalink / raw)
To: Ricardo Koller
Cc: kvm, kvmarm, maz, alexandru.elisei, eric.auger, oliver.upton, reijiw
On Thu, Jan 26, 2023 at 04:53:45PM +0000, Ricardo Koller wrote:
> The first commit fixes the tests when running on PMUv3p5. The issue is that
> PMUv3p5 uses 64-bit counters irrespective of whether the PMU is configured
> for overflowing at 32 or 64 bits. Tests are currently failing [0] on
> PMUv3p5 because of this. They wrongly assume that values will wrap around
> at 32 bits, but they instead overflow into the upper half of the 64-bit
> counters.
>
> The second and third commits add new tests for 64-bit overflows, a feature
> added with PMUv3p5 (PMCR_EL0.LP == 1). This is done by running all
> overflow-related tests in two modes: with 32-bit and 64-bit overflows.
> The fourth commit changes the value reporting to use %lx instead of %ld.
>
> This series was tested on PMUv3p5 and PMUv3p4 using the ARM Fast Model and
> kvmtool. All tests pass on both PMUv3p5 and PMUv3p4 when using Marc's
> PMUv3p5 series [0], plus the suggestion made at [1]. Didn't test AArch32.
>
> Changes from v3:
> - Added commit to fix test_overflow_interrupt(). (Reiji and Eric)
> - Separated s/ALL_SET/ALL_SET_32/ and s/PRE_OVERFLOW/PRE_OVERFLOW_32
> into its own commit. (Reiji and Eric)
> - Fix s/200/20. (Eric)
>
> Changes from v2:
> - used Oliver's suggestion of using pmevcntr_mask() for masking counters to
> 32 or 64 bits, instead of casting to uint32_t or uint64_t.
> - removed ALL_SET_AT() in favor of pmevcntr_mask(). (Oliver)
> - moved the change to have odd counter overflows at 64-bits from first to
> third commit.
> - renamed PRE_OVERFLOW macro to PRE_OVERFLOW_32, and PRE_OVERFLOW_AT() to
> PRE_OVERFLOW().
>
> Changes from v1 (all suggested by Alexandru):
> - report counter values in hexadecimal
> - s/overflow_at_64bits/unused for all chained tests
> - check that odd counters do not increment when using overflow_at_64bits
> (pmcr.LP=1)
> - test 64-bit odd counters overflows
> - switch confusing var names in test_chained_sw_incr(): cntr0 <-> cntr1
>
> [0] https://lore.kernel.org/kvmarm/20221113163832.3154370-1-maz@kernel.org/
> [1] https://lore.kernel.org/kvmarm/Y4jasyxvFRNvvmox@google.com/
>
> Ricardo Koller (6):
> arm: pmu: Fix overflow checks for PMUv3p5 long counters
> arm: pmu: Prepare for testing 64-bit overflows
> arm: pmu: Rename ALL_SET and PRE_OVERFLOW
> arm: pmu: Add tests for 64-bit overflows
> arm: pmu: Print counter values as hexadecimals
> arm: pmu: Fix test_overflow_interrupt()
>
> arm/pmu.c | 298 ++++++++++++++++++++++++++++++++++--------------------
> 1 file changed, 188 insertions(+), 110 deletions(-)
>
> --
> 2.39.1.456.gfc5497dd1b-goog
>
Applied, thanks
2023-01-26 16:53 [kvm-unit-tests PATCH v4 0/6] arm: pmu: Add support for PMUv3p5 Ricardo Koller
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 1/6] arm: pmu: Fix overflow checks for PMUv3p5 long counters Ricardo Koller
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 2/6] arm: pmu: Prepare for testing 64-bit overflows Ricardo Koller
2023-02-07 15:45 ` Eric Auger
2023-02-09 14:59 ` Ricardo Koller
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 3/6] arm: pmu: Rename ALL_SET and PRE_OVERFLOW Ricardo Koller
2023-02-07 15:48 ` Eric Auger
2023-02-14 18:51 ` Andrew Jones
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 4/6] arm: pmu: Add tests for 64-bit overflows Ricardo Koller
2023-02-07 17:25 ` Eric Auger
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 5/6] arm: pmu: Print counter values as hexadecimals Ricardo Koller
2023-01-26 16:53 ` [kvm-unit-tests PATCH v4 6/6] arm: pmu: Fix test_overflow_interrupt() Ricardo Koller
2023-02-07 17:31 ` Eric Auger
2023-02-14 20:18 ` [kvm-unit-tests PATCH v4 0/6] arm: pmu: Add support for PMUv3p5 Andrew Jones