All of lore.kernel.org
* [kvm-unit-tests PATCH v3 0/4] arm: pmu: Add support for PMUv3p5
@ 2023-01-09 21:17 Ricardo Koller
  2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 1/4] arm: pmu: Fix overflow checks for PMUv3p5 long counters Ricardo Koller
                   ` (3 more replies)
  0 siblings, 4 replies; 22+ messages in thread
From: Ricardo Koller @ 2023-01-09 21:17 UTC (permalink / raw)
  To: kvm, kvmarm, andrew.jones
  Cc: maz, alexandru.elisei, eric.auger, oliver.upton, reijiw, Ricardo Koller

The first commit fixes the tests when running on PMUv3p5. The issue is that
PMUv3p5 uses 64-bit counters irrespective of whether the PMU is configured
for overflowing at 32 or 64 bits. Tests are currently failing [0] on
PMUv3p5 because of this: they wrongly assume that values wrap around at
32 bits, when they actually overflow into the upper half of the 64-bit
counters.

The second and third commits add new tests for 64-bit overflows, a feature
added with PMUv3p5 (PMCR_EL0.LP == 1). This is done by running all
overflow-related tests in two modes: with 32-bit and 64-bit overflows.
The fourth commit changes the value reporting to use %lx instead of %ld.

This series was tested on PMUv3p5 and PMUv3p4 using the ARM Fast Model and
kvmtool.  All tests pass on both PMUv3p5 and PMUv3p4 when using Marc's
PMUv3p5 series [0], plus the suggestion made at [1]. Didn't test AArch32.

Changes from v2:
- used Oliver's suggestion of using pmevcntr_mask() for masking counters to
  32 or 64 bits, instead of casting to uint32_t or uint64_t.
- removed ALL_SET_AT() in favor of pmevcntr_mask(). (Oliver)
- moved the change to have odd counter overflows at 64-bits from first to
  third commit.
- renamed PRE_OVERFLOW macro to PRE_OVERFLOW_32, and PRE_OVERFLOW_AT() to
  PRE_OVERFLOW().

Changes from v1 (all suggested by Alexandru):
- report counter values in hexadecimal
- s/overflow_at_64bits/unused for all chained tests
- check that odd counters do not increment when using overflow_at_64bits
  (pmcr.LP=1)
- test 64-bit odd counters overflows
- switch confusing var names in test_chained_sw_incr(): cntr0 <-> cntr1

[0] https://lore.kernel.org/kvmarm/20221113163832.3154370-1-maz@kernel.org/
[1] https://lore.kernel.org/kvmarm/Y4jasyxvFRNvvmox@google.com/

Ricardo Koller (4):
  arm: pmu: Fix overflow checks for PMUv3p5 long counters
  arm: pmu: Prepare for testing 64-bit overflows
  arm: pmu: Add tests for 64-bit overflows
  arm: pmu: Print counter values as hexadecimals

 arm/pmu.c | 260 ++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 163 insertions(+), 97 deletions(-)

-- 
2.39.0.314.g84b9a713c41-goog


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [kvm-unit-tests PATCH v3 1/4] arm: pmu: Fix overflow checks for PMUv3p5 long counters
  2023-01-09 21:17 [kvm-unit-tests PATCH v3 0/4] arm: pmu: Add support for PMUv3p5 Ricardo Koller
@ 2023-01-09 21:17 ` Ricardo Koller
  2023-01-09 21:42   ` Oliver Upton
  2023-01-23 20:16   ` Eric Auger
  2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 2/4] arm: pmu: Prepare for testing 64-bit overflows Ricardo Koller
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 22+ messages in thread
From: Ricardo Koller @ 2023-01-09 21:17 UTC (permalink / raw)
  To: kvm, kvmarm, andrew.jones
  Cc: maz, alexandru.elisei, eric.auger, oliver.upton, reijiw, Ricardo Koller

PMUv3p5 uses 64-bit counters irrespective of whether the PMU is configured
for overflowing at 32 or 64 bits. The consequence is that tests checking
counter values after an overflow should not assume that values wrap around
at 32 bits: on PMUv3p5 they overflow into the upper half of the 64-bit
counters.

Fix tests by correctly checking overflowing-counters against the expected
64-bit value.

Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
 arm/pmu.c | 38 ++++++++++++++++++++++++++++----------
 1 file changed, 28 insertions(+), 10 deletions(-)

diff --git a/arm/pmu.c b/arm/pmu.c
index cd47b14..7f0794d 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -54,10 +54,10 @@
 #define EXT_COMMON_EVENTS_LOW	0x4000
 #define EXT_COMMON_EVENTS_HIGH	0x403F
 
-#define ALL_SET			0xFFFFFFFF
-#define ALL_CLEAR		0x0
-#define PRE_OVERFLOW		0xFFFFFFF0
-#define PRE_OVERFLOW2		0xFFFFFFDC
+#define ALL_SET			0x00000000FFFFFFFFULL
+#define ALL_CLEAR		0x0000000000000000ULL
+#define PRE_OVERFLOW		0x00000000FFFFFFF0ULL
+#define PRE_OVERFLOW2		0x00000000FFFFFFDCULL
 
 #define PMU_PPI			23
 
@@ -419,6 +419,22 @@ static bool satisfy_prerequisites(uint32_t *events, unsigned int nb_events)
 	return true;
 }
 
+static uint64_t pmevcntr_mask(void)
+{
+	/*
+	 * Bits [63:0] are always incremented for 64-bit counters,
+	 * even if the PMU is configured to generate an overflow at
+	 * bits [31:0]
+	 *
+	 * For more details see the AArch64.IncrementEventCounter()
+	 * pseudo-code in the ARM ARM DDI 0487I.a, section J1.1.1.
+	 */
+	if (pmu.version >= ID_DFR0_PMU_V3_8_5)
+		return ~0;
+
+	return (uint32_t)~0;
+}
+
 static void test_basic_event_count(void)
 {
 	uint32_t implemented_counter_mask, non_implemented_counter_mask;
@@ -538,6 +554,7 @@ static void test_mem_access(void)
 static void test_sw_incr(void)
 {
 	uint32_t events[] = {SW_INCR, SW_INCR};
+	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
 	int i;
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
@@ -572,9 +589,8 @@ static void test_sw_incr(void)
 		write_sysreg(0x3, pmswinc_el0);
 
 	isb();
-	report(read_regn_el0(pmevcntr, 0)  == 84, "counter #1 after + 100 SW_INCR");
-	report(read_regn_el0(pmevcntr, 1)  == 100,
-		"counter #0 after + 100 SW_INCR");
+	report(read_regn_el0(pmevcntr, 0) == cntr0, "counter #0 after + 100 SW_INCR");
+	report(read_regn_el0(pmevcntr, 1) == 100, "counter #1 after + 100 SW_INCR");
 	report_info("counter values after 100 SW_INCR #0=%ld #1=%ld",
 		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
 	report(read_sysreg(pmovsclr_el0) == 0x1,
@@ -625,6 +641,8 @@ static void test_chained_counters(void)
 static void test_chained_sw_incr(void)
 {
 	uint32_t events[] = {SW_INCR, CHAIN};
+	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
+	uint64_t cntr1 = (ALL_SET + 1) & pmevcntr_mask();
 	int i;
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
@@ -666,9 +684,9 @@ static void test_chained_sw_incr(void)
 
 	isb();
 	report((read_sysreg(pmovsclr_el0) == 0x3) &&
-		(read_regn_el0(pmevcntr, 1) == 0) &&
-		(read_regn_el0(pmevcntr, 0) == 84),
-		"expected overflows and values after 100 SW_INCR/CHAIN");
+	       (read_regn_el0(pmevcntr, 0) == cntr0) &&
+	       (read_regn_el0(pmevcntr, 1) == cntr1),
+	       "expected overflows and values after 100 SW_INCR/CHAIN");
 	report_info("overflow=0x%lx, #0=%ld #1=%ld", read_sysreg(pmovsclr_el0),
 		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
 }
-- 
2.39.0.314.g84b9a713c41-goog



* [kvm-unit-tests PATCH v3 2/4] arm: pmu: Prepare for testing 64-bit overflows
  2023-01-09 21:17 [kvm-unit-tests PATCH v3 0/4] arm: pmu: Add support for PMUv3p5 Ricardo Koller
  2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 1/4] arm: pmu: Fix overflow checks for PMUv3p5 long counters Ricardo Koller
@ 2023-01-09 21:17 ` Ricardo Koller
  2023-01-12  5:56   ` Reiji Watanabe
  2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 3/4] arm: pmu: Add tests for " Ricardo Koller
  2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 4/4] arm: pmu: Print counter values as hexadecimals Ricardo Koller
  3 siblings, 1 reply; 22+ messages in thread
From: Ricardo Koller @ 2023-01-09 21:17 UTC (permalink / raw)
  To: kvm, kvmarm, andrew.jones
  Cc: maz, alexandru.elisei, eric.auger, oliver.upton, reijiw, Ricardo Koller

PMUv3p5 adds a knob, PMCR_EL0.LP == 1, that allows overflowing at 64 bits
instead of 32. Prepare by doing these three things:

1. Add a "bool overflow_at_64bits" argument to all tests checking
   overflows.
2. Extend satisfy_prerequisites() to check if the machine supports
   "overflow_at_64bits".
3. Refactor the test invocations to use the new "run_test()" which adds a
   report prefix indicating whether the test uses 64 or 32-bit overflows.

A subsequent commit will actually add the 64-bit overflow tests.

Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
 arm/pmu.c | 92 ++++++++++++++++++++++++++++++++-----------------------
 1 file changed, 53 insertions(+), 39 deletions(-)

diff --git a/arm/pmu.c b/arm/pmu.c
index 7f0794d..0d06b59 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -164,13 +164,13 @@ static void pmu_reset(void)
 /* event counter tests only implemented for aarch64 */
 static void test_event_introspection(void) {}
 static void test_event_counter_config(void) {}
-static void test_basic_event_count(void) {}
-static void test_mem_access(void) {}
-static void test_sw_incr(void) {}
-static void test_chained_counters(void) {}
-static void test_chained_sw_incr(void) {}
-static void test_chain_promotion(void) {}
-static void test_overflow_interrupt(void) {}
+static void test_basic_event_count(bool overflow_at_64bits) {}
+static void test_mem_access(bool overflow_at_64bits) {}
+static void test_sw_incr(bool overflow_at_64bits) {}
+static void test_chained_counters(bool unused) {}
+static void test_chained_sw_incr(bool unused) {}
+static void test_chain_promotion(bool unused) {}
+static void test_overflow_interrupt(bool overflow_at_64bits) {}
 
 #elif defined(__aarch64__)
 #define ID_AA64DFR0_PERFMON_SHIFT 8
@@ -416,6 +416,7 @@ static bool satisfy_prerequisites(uint32_t *events, unsigned int nb_events)
 			return false;
 		}
 	}
+
 	return true;
 }
 
@@ -435,13 +436,24 @@ static uint64_t pmevcntr_mask(void)
 	return (uint32_t)~0;
 }
 
-static void test_basic_event_count(void)
+static bool check_overflow_prerequisites(bool overflow_at_64bits)
+{
+	if (overflow_at_64bits && pmu.version < ID_DFR0_PMU_V3_8_5) {
+		report_skip("Skip test as 64 overflows need FEAT_PMUv3p5");
+		return false;
+	}
+
+	return true;
+}
+
+static void test_basic_event_count(bool overflow_at_64bits)
 {
 	uint32_t implemented_counter_mask, non_implemented_counter_mask;
 	uint32_t counter_mask;
 	uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
 
-	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
+	    !check_overflow_prerequisites(overflow_at_64bits))
 		return;
 
 	implemented_counter_mask = BIT(pmu.nb_implemented_counters) - 1;
@@ -515,12 +527,13 @@ static void test_basic_event_count(void)
 		"check overflow happened on #0 only");
 }
 
-static void test_mem_access(void)
+static void test_mem_access(bool overflow_at_64bits)
 {
 	void *addr = malloc(PAGE_SIZE);
 	uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
 
-	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
+	    !check_overflow_prerequisites(overflow_at_64bits))
 		return;
 
 	pmu_reset();
@@ -551,13 +564,14 @@ static void test_mem_access(void)
 			read_sysreg(pmovsclr_el0));
 }
 
-static void test_sw_incr(void)
+static void test_sw_incr(bool overflow_at_64bits)
 {
 	uint32_t events[] = {SW_INCR, SW_INCR};
 	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
 	int i;
 
-	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
+	    !check_overflow_prerequisites(overflow_at_64bits))
 		return;
 
 	pmu_reset();
@@ -597,7 +611,7 @@ static void test_sw_incr(void)
 		"overflow on counter #0 after 100 SW_INCR");
 }
 
-static void test_chained_counters(void)
+static void test_chained_counters(bool unused)
 {
 	uint32_t events[] = {CPU_CYCLES, CHAIN};
 
@@ -638,7 +652,7 @@ static void test_chained_counters(void)
 	report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
 }
 
-static void test_chained_sw_incr(void)
+static void test_chained_sw_incr(bool unused)
 {
 	uint32_t events[] = {SW_INCR, CHAIN};
 	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
@@ -691,7 +705,7 @@ static void test_chained_sw_incr(void)
 		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
 }
 
-static void test_chain_promotion(void)
+static void test_chain_promotion(bool unused)
 {
 	uint32_t events[] = {MEM_ACCESS, CHAIN};
 	void *addr = malloc(PAGE_SIZE);
@@ -840,13 +854,14 @@ static bool expect_interrupts(uint32_t bitmap)
 	return true;
 }
 
-static void test_overflow_interrupt(void)
+static void test_overflow_interrupt(bool overflow_at_64bits)
 {
 	uint32_t events[] = {MEM_ACCESS, SW_INCR};
 	void *addr = malloc(PAGE_SIZE);
 	int i;
 
-	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
+	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
+	    !check_overflow_prerequisites(overflow_at_64bits))
 		return;
 
 	gic_enable_defaults();
@@ -1070,6 +1085,19 @@ static bool pmu_probe(void)
 	return true;
 }
 
+static void run_test(char *name, void (*test)(bool), bool overflow_at_64bits)
+{
+	const char *prefix = overflow_at_64bits ? "64-bit overflows" : "32-bit overflows";
+
+	report_prefix_push(name);
+	report_prefix_push(prefix);
+
+	test(overflow_at_64bits);
+
+	report_prefix_pop();
+	report_prefix_pop();
+}
+
 int main(int argc, char *argv[])
 {
 	int cpi = 0;
@@ -1102,33 +1130,19 @@ int main(int argc, char *argv[])
 		test_event_counter_config();
 		report_prefix_pop();
 	} else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
-		report_prefix_push(argv[1]);
-		test_basic_event_count();
-		report_prefix_pop();
+		run_test(argv[1], test_basic_event_count, false);
 	} else if (strcmp(argv[1], "pmu-mem-access") == 0) {
-		report_prefix_push(argv[1]);
-		test_mem_access();
-		report_prefix_pop();
+		run_test(argv[1], test_mem_access, false);
 	} else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
-		report_prefix_push(argv[1]);
-		test_sw_incr();
-		report_prefix_pop();
+		run_test(argv[1], test_sw_incr, false);
 	} else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
-		report_prefix_push(argv[1]);
-		test_chained_counters();
-		report_prefix_pop();
+		run_test(argv[1], test_chained_counters, false);
 	} else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
-		report_prefix_push(argv[1]);
-		test_chained_sw_incr();
-		report_prefix_pop();
+		run_test(argv[1], test_chained_sw_incr, false);
 	} else if (strcmp(argv[1], "pmu-chain-promotion") == 0) {
-		report_prefix_push(argv[1]);
-		test_chain_promotion();
-		report_prefix_pop();
+		run_test(argv[1], test_chain_promotion, false);
 	} else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
-		report_prefix_push(argv[1]);
-		test_overflow_interrupt();
-		report_prefix_pop();
+		run_test(argv[1], test_overflow_interrupt, false);
 	} else {
 		report_abort("Unknown sub-test '%s'", argv[1]);
 	}
-- 
2.39.0.314.g84b9a713c41-goog



* [kvm-unit-tests PATCH v3 3/4] arm: pmu: Add tests for 64-bit overflows
  2023-01-09 21:17 [kvm-unit-tests PATCH v3 0/4] arm: pmu: Add support for PMUv3p5 Ricardo Koller
  2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 1/4] arm: pmu: Fix overflow checks for PMUv3p5 long counters Ricardo Koller
  2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 2/4] arm: pmu: Prepare for testing 64-bit overflows Ricardo Koller
@ 2023-01-09 21:17 ` Ricardo Koller
  2023-01-19  5:58   ` Reiji Watanabe
  2023-01-23 20:33   ` Eric Auger
  2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 4/4] arm: pmu: Print counter values as hexadecimals Ricardo Koller
  3 siblings, 2 replies; 22+ messages in thread
From: Ricardo Koller @ 2023-01-09 21:17 UTC (permalink / raw)
  To: kvm, kvmarm, andrew.jones
  Cc: maz, alexandru.elisei, eric.auger, oliver.upton, reijiw, Ricardo Koller

Modify all tests checking overflows to support both 32 (PMCR_EL0.LP == 0)
and 64-bit overflows (PMCR_EL0.LP == 1). 64-bit overflows are only
supported on PMUv3p5.

Note that chained tests do not implement "overflow_at_64bits == true".
That's because there are no CHAIN events when "PMCR_EL0.LP == 1" (for more
details see AArch64.IncrementEventCounter() pseudocode in the ARM ARM DDI
0487H.a, J1.1.1 "aarch64/debug").

Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
 arm/pmu.c | 116 +++++++++++++++++++++++++++++++++++-------------------
 1 file changed, 75 insertions(+), 41 deletions(-)

diff --git a/arm/pmu.c b/arm/pmu.c
index 0d06b59..72d0f50 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -28,6 +28,7 @@
 #define PMU_PMCR_X         (1 << 4)
 #define PMU_PMCR_DP        (1 << 5)
 #define PMU_PMCR_LC        (1 << 6)
+#define PMU_PMCR_LP        (1 << 7)
 #define PMU_PMCR_N_SHIFT   11
 #define PMU_PMCR_N_MASK    0x1f
 #define PMU_PMCR_ID_SHIFT  16
@@ -56,9 +57,13 @@
 
 #define ALL_SET			0x00000000FFFFFFFFULL
 #define ALL_CLEAR		0x0000000000000000ULL
-#define PRE_OVERFLOW		0x00000000FFFFFFF0ULL
+#define PRE_OVERFLOW_32		0x00000000FFFFFFF0ULL
+#define PRE_OVERFLOW_64		0xFFFFFFFFFFFFFFF0ULL
 #define PRE_OVERFLOW2		0x00000000FFFFFFDCULL
 
+#define PRE_OVERFLOW(__overflow_at_64bits)				\
+	(__overflow_at_64bits ? PRE_OVERFLOW_64 : PRE_OVERFLOW_32)
+
 #define PMU_PPI			23
 
 struct pmu {
@@ -449,8 +454,10 @@ static bool check_overflow_prerequisites(bool overflow_at_64bits)
 static void test_basic_event_count(bool overflow_at_64bits)
 {
 	uint32_t implemented_counter_mask, non_implemented_counter_mask;
-	uint32_t counter_mask;
+	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
+	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
 	uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
+	uint32_t counter_mask;
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
 	    !check_overflow_prerequisites(overflow_at_64bits))
@@ -472,13 +479,13 @@ static void test_basic_event_count(bool overflow_at_64bits)
 	 * clear cycle and all event counters and allow counter enablement
 	 * through PMCNTENSET. LC is RES1.
 	 */
-	set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
+	set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P | pmcr_lp);
 	isb();
-	report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC), "pmcr: reset counters");
+	report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC | pmcr_lp), "pmcr: reset counters");
 
 	/* Preset counter #0 to pre overflow value to trigger an overflow */
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
-	report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
+	write_regn_el0(pmevcntr, 0, pre_overflow);
+	report(read_regn_el0(pmevcntr, 0) == pre_overflow,
 		"counter #0 preset to pre-overflow value");
 	report(!read_regn_el0(pmevcntr, 1), "counter #1 is 0");
 
@@ -531,6 +538,8 @@ static void test_mem_access(bool overflow_at_64bits)
 {
 	void *addr = malloc(PAGE_SIZE);
 	uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
+	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
+	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
 	    !check_overflow_prerequisites(overflow_at_64bits))
@@ -542,7 +551,7 @@ static void test_mem_access(bool overflow_at_64bits)
 	write_regn_el0(pmevtyper, 1, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	isb();
-	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	report_info("counter #0 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
 	report_info("counter #1 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
 	/* We may measure more than 20 mem access depending on the core */
@@ -552,11 +561,11 @@ static void test_mem_access(bool overflow_at_64bits)
 
 	pmu_reset();
 
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
-	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
+	write_regn_el0(pmevcntr, 1, pre_overflow);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	isb();
-	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	report(read_sysreg(pmovsclr_el0) == 0x3,
 	       "Ran 20 mem accesses with expected overflows on both counters");
 	report_info("cnt#0 = %ld cnt#1=%ld overflow=0x%lx",
@@ -566,8 +575,10 @@ static void test_mem_access(bool overflow_at_64bits)
 
 static void test_sw_incr(bool overflow_at_64bits)
 {
+	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
+	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
 	uint32_t events[] = {SW_INCR, SW_INCR};
-	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
+	uint64_t cntr0 = (pre_overflow + 100) & pmevcntr_mask();
 	int i;
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
@@ -581,7 +592,7 @@ static void test_sw_incr(bool overflow_at_64bits)
 	/* enable counters #0 and #1 */
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
 	isb();
 
 	for (i = 0; i < 100; i++)
@@ -589,14 +600,14 @@ static void test_sw_incr(bool overflow_at_64bits)
 
 	isb();
 	report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
-	report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
+	report(read_regn_el0(pmevcntr, 0) == pre_overflow,
 		"PWSYNC does not increment if PMCR.E is unset");
 
 	pmu_reset();
 
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
-	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
+	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	isb();
 
 	for (i = 0; i < 100; i++)
@@ -614,6 +625,7 @@ static void test_sw_incr(bool overflow_at_64bits)
 static void test_chained_counters(bool unused)
 {
 	uint32_t events[] = {CPU_CYCLES, CHAIN};
+	uint64_t all_set = pmevcntr_mask();
 
 	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
 		return;
@@ -624,7 +636,7 @@ static void test_chained_counters(bool unused)
 	write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
 	/* enable counters #0 and #1 */
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
 
 	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
 
@@ -636,26 +648,26 @@ static void test_chained_counters(bool unused)
 	pmu_reset();
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
 	write_regn_el0(pmevcntr, 1, 0x1);
 	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
 	report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
 	report(read_regn_el0(pmevcntr, 1) == 2, "CHAIN counter #1 set to 2");
 	report(read_sysreg(pmovsclr_el0) == 0x1, "overflow recorded for chained incr #2");
 
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
-	write_regn_el0(pmevcntr, 1, ALL_SET);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
+	write_regn_el0(pmevcntr, 1, all_set);
 
 	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
 	report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
-	report(!read_regn_el0(pmevcntr, 1), "CHAIN counter #1 wrapped");
+	report(read_regn_el0(pmevcntr, 1) == 0, "CHAIN counter #1 wrapped");
 	report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
 }
 
 static void test_chained_sw_incr(bool unused)
 {
 	uint32_t events[] = {SW_INCR, CHAIN};
-	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
+	uint64_t cntr0 = (PRE_OVERFLOW_32 + 100) & pmevcntr_mask();
 	uint64_t cntr1 = (ALL_SET + 1) & pmevcntr_mask();
 	int i;
 
@@ -669,7 +681,7 @@ static void test_chained_sw_incr(bool unused)
 	/* enable counters #0 and #1 */
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
 	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
 	isb();
 
@@ -687,7 +699,7 @@ static void test_chained_sw_incr(bool unused)
 	pmu_reset();
 
 	write_regn_el0(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0);
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
 	write_regn_el0(pmevcntr, 1, ALL_SET);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
@@ -726,7 +738,7 @@ static void test_chain_promotion(bool unused)
 
 	/* Only enable even counter */
 	pmu_reset();
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
 	write_sysreg_s(0x1, PMCNTENSET_EL0);
 	isb();
 
@@ -856,6 +868,9 @@ static bool expect_interrupts(uint32_t bitmap)
 
 static void test_overflow_interrupt(bool overflow_at_64bits)
 {
+	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
+	uint64_t all_set = pmevcntr_mask();
+	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
 	uint32_t events[] = {MEM_ACCESS, SW_INCR};
 	void *addr = malloc(PAGE_SIZE);
 	int i;
@@ -874,16 +889,16 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
 	write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
 	write_regn_el0(pmevtyper, 1, SW_INCR | PMEVTYPER_EXCLUDE_EL0);
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
-	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
+	write_regn_el0(pmevcntr, 1, pre_overflow);
 	isb();
 
 	/* interrupts are disabled (PMINTENSET_EL1 == 0) */
 
-	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	report(expect_interrupts(0), "no overflow interrupt after preset");
 
-	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
+	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	isb();
 
 	for (i = 0; i < 100; i++)
@@ -898,12 +913,12 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
 
 	pmu_reset_stats();
 
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
-	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
+	write_regn_el0(pmevcntr, 1, pre_overflow);
 	write_sysreg(ALL_SET, pmintenset_el1);
 	isb();
 
-	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
+	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	for (i = 0; i < 100; i++)
 		write_sysreg(0x3, pmswinc_el0);
 
@@ -912,25 +927,40 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
 	report(expect_interrupts(0x3),
 		"overflow interrupts expected on #0 and #1");
 
-	/* promote to 64-b */
+	/*
+	 * promote to 64-b:
+	 *
+	 * This only applies to the !overflow_at_64bits case, as
+	 * overflow_at_64bits doesn't implement CHAIN events. The
+	 * overflow_at_64bits case just checks that chained counters are
+	 * not incremented when PMCR.LP == 1.
+	 */
 
 	pmu_reset_stats();
 
 	write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
 	isb();
-	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
-	report(expect_interrupts(0x1),
-		"expect overflow interrupt on 32b boundary");
+	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
+	report(expect_interrupts(0x1), "expect overflow interrupt");
 
 	/* overflow on odd counter */
 	pmu_reset_stats();
-	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
-	write_regn_el0(pmevcntr, 1, ALL_SET);
+	write_regn_el0(pmevcntr, 0, pre_overflow);
+	write_regn_el0(pmevcntr, 1, all_set);
 	isb();
-	mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E);
-	report(expect_interrupts(0x3),
-		"expect overflow interrupt on even and odd counter");
+	mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
+	if (overflow_at_64bits) {
+		report(expect_interrupts(0x1),
+		       "expect overflow interrupt on even counter");
+		report(read_regn_el0(pmevcntr, 1) == all_set,
+		       "Odd counter did not change");
+	} else {
+		report(expect_interrupts(0x3),
+		       "expect overflow interrupt on even and odd counter");
+		report(read_regn_el0(pmevcntr, 1) != all_set,
+		       "Odd counter wrapped");
+	}
 }
 #endif
 
@@ -1131,10 +1161,13 @@ int main(int argc, char *argv[])
 		report_prefix_pop();
 	} else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
 		run_test(argv[1], test_basic_event_count, false);
+		run_test(argv[1], test_basic_event_count, true);
 	} else if (strcmp(argv[1], "pmu-mem-access") == 0) {
 		run_test(argv[1], test_mem_access, false);
+		run_test(argv[1], test_mem_access, true);
 	} else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
 		run_test(argv[1], test_sw_incr, false);
+		run_test(argv[1], test_sw_incr, true);
 	} else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
 		run_test(argv[1], test_chained_counters, false);
 	} else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
@@ -1143,6 +1176,7 @@ int main(int argc, char *argv[])
 		run_test(argv[1], test_chain_promotion, false);
 	} else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
 		run_test(argv[1], test_overflow_interrupt, false);
+		run_test(argv[1], test_overflow_interrupt, true);
 	} else {
 		report_abort("Unknown sub-test '%s'", argv[1]);
 	}
-- 
2.39.0.314.g84b9a713c41-goog



* [kvm-unit-tests PATCH v3 4/4] arm: pmu: Print counter values as hexadecimals
  2023-01-09 21:17 [kvm-unit-tests PATCH v3 0/4] arm: pmu: Add support for PMUv3p5 Ricardo Koller
                   ` (2 preceding siblings ...)
  2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 3/4] arm: pmu: Add tests for " Ricardo Koller
@ 2023-01-09 21:17 ` Ricardo Koller
  2023-01-09 21:43   ` Oliver Upton
  2023-01-23 20:17   ` Eric Auger
  3 siblings, 2 replies; 22+ messages in thread
From: Ricardo Koller @ 2023-01-09 21:17 UTC (permalink / raw)
  To: kvm, kvmarm, andrew.jones
  Cc: maz, alexandru.elisei, eric.auger, oliver.upton, reijiw, Ricardo Koller

The arm/pmu test prints the value of counters as %ld.  Most tests start
with counters around 0 or UINT_MAX, so having something like -16 instead of
0xffff_fff0 is not very useful.

Report counter values as hexadecimals.

Reported-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
 arm/pmu.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arm/pmu.c b/arm/pmu.c
index 72d0f50..77b0a70 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -552,8 +552,8 @@ static void test_mem_access(bool overflow_at_64bits)
 	write_sysreg_s(0x3, PMCNTENSET_EL0);
 	isb();
 	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
-	report_info("counter #0 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
-	report_info("counter #1 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
+	report_info("counter #0 is 0x%lx (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
+	report_info("counter #1 is 0x%lx (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
 	/* We may measure more than 20 mem access depending on the core */
 	report((read_regn_el0(pmevcntr, 0) == read_regn_el0(pmevcntr, 1)) &&
 	       (read_regn_el0(pmevcntr, 0) >= 20) && !read_sysreg(pmovsclr_el0),
@@ -568,7 +568,7 @@ static void test_mem_access(bool overflow_at_64bits)
 	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
 	report(read_sysreg(pmovsclr_el0) == 0x3,
 	       "Ran 20 mem accesses with expected overflows on both counters");
-	report_info("cnt#0 = %ld cnt#1=%ld overflow=0x%lx",
+	report_info("cnt#0=0x%lx cnt#1=0x%lx overflow=0x%lx",
 			read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1),
 			read_sysreg(pmovsclr_el0));
 }
@@ -599,7 +599,7 @@ static void test_sw_incr(bool overflow_at_64bits)
 		write_sysreg(0x1, pmswinc_el0);
 
 	isb();
-	report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
+	report_info("SW_INCR counter #0 has value 0x%lx", read_regn_el0(pmevcntr, 0));
 	report(read_regn_el0(pmevcntr, 0) == pre_overflow,
 		"PWSYNC does not increment if PMCR.E is unset");
 
@@ -616,7 +616,7 @@ static void test_sw_incr(bool overflow_at_64bits)
 	isb();
 	report(read_regn_el0(pmevcntr, 0) == cntr0, "counter #0 after + 100 SW_INCR");
 	report(read_regn_el0(pmevcntr, 1) == 100, "counter #1 after + 100 SW_INCR");
-	report_info("counter values after 100 SW_INCR #0=%ld #1=%ld",
+	report_info("counter values after 100 SW_INCR #0=0x%lx #1=0x%lx",
 		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
 	report(read_sysreg(pmovsclr_el0) == 0x1,
 		"overflow on counter #0 after 100 SW_INCR");
@@ -692,7 +692,7 @@ static void test_chained_sw_incr(bool unused)
 	report((read_sysreg(pmovsclr_el0) == 0x1) &&
 		(read_regn_el0(pmevcntr, 1) == 1),
 		"overflow and chain counter incremented after 100 SW_INCR/CHAIN");
-	report_info("overflow=0x%lx, #0=%ld #1=%ld", read_sysreg(pmovsclr_el0),
+	report_info("overflow=0x%lx, #0=0x%lx #1=0x%lx", read_sysreg(pmovsclr_el0),
 		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
 
 	/* 64b SW_INCR and overflow on CHAIN counter*/
@@ -713,7 +713,7 @@ static void test_chained_sw_incr(bool unused)
 	       (read_regn_el0(pmevcntr, 0) == cntr0) &&
 	       (read_regn_el0(pmevcntr, 1) == cntr1),
 	       "expected overflows and values after 100 SW_INCR/CHAIN");
-	report_info("overflow=0x%lx, #0=%ld #1=%ld", read_sysreg(pmovsclr_el0),
+	report_info("overflow=0x%lx, #0=0x%lx #1=0x%lx", read_sysreg(pmovsclr_el0),
 		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
 }
 
@@ -745,11 +745,11 @@ static void test_chain_promotion(bool unused)
 	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
 	report(!read_regn_el0(pmevcntr, 1) && (read_sysreg(pmovsclr_el0) == 0x1),
 		"odd counter did not increment on overflow if disabled");
-	report_info("MEM_ACCESS counter #0 has value %ld",
+	report_info("MEM_ACCESS counter #0 has value 0x%lx",
 		    read_regn_el0(pmevcntr, 0));
-	report_info("CHAIN counter #1 has value %ld",
+	report_info("CHAIN counter #1 has value 0x%lx",
 		    read_regn_el0(pmevcntr, 1));
-	report_info("overflow counter %ld", read_sysreg(pmovsclr_el0));
+	report_info("overflow counter 0x%lx", read_sysreg(pmovsclr_el0));
 
 	/* start at 0xFFFFFFDC, +20 with CHAIN enabled, +20 with CHAIN disabled */
 	pmu_reset();
-- 
2.39.0.314.g84b9a713c41-goog


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [kvm-unit-tests PATCH v3 1/4] arm: pmu: Fix overflow checks for PMUv3p5 long counters
  2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 1/4] arm: pmu: Fix overflow checks for PMUv3p5 long counters Ricardo Koller
@ 2023-01-09 21:42   ` Oliver Upton
  2023-01-23 20:16   ` Eric Auger
  1 sibling, 0 replies; 22+ messages in thread
From: Oliver Upton @ 2023-01-09 21:42 UTC (permalink / raw)
  To: Ricardo Koller
  Cc: kvm, kvmarm, andrew.jones, maz, alexandru.elisei, eric.auger, reijiw

On Mon, Jan 09, 2023 at 09:17:51PM +0000, Ricardo Koller wrote:
> PMUv3p5 uses 64-bit counters irrespective of whether the PMU is configured
> for overflowing at 32 or 64-bits. The consequence is that tests that check
> the counter values after overflowing should not assume that values will be
> wrapped around 32-bits: they overflow into the other half of the 64-bit
> counters on PMUv3p5.
> 
> Fix tests by correctly checking overflowing-counters against the expected
> 64-bit value.
> 
> Signed-off-by: Ricardo Koller <ricarkol@google.com>

Reviewed-by: Oliver Upton <oliver.upton@linux.dev>

--
Thanks,
Oliver



* Re: [kvm-unit-tests PATCH v3 4/4] arm: pmu: Print counter values as hexadecimals
  2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 4/4] arm: pmu: Print counter values as hexadecimals Ricardo Koller
@ 2023-01-09 21:43   ` Oliver Upton
  2023-01-23 20:17   ` Eric Auger
  1 sibling, 0 replies; 22+ messages in thread
From: Oliver Upton @ 2023-01-09 21:43 UTC (permalink / raw)
  To: Ricardo Koller
  Cc: kvm, kvmarm, andrew.jones, maz, alexandru.elisei, eric.auger, reijiw

On Mon, Jan 09, 2023 at 09:17:54PM +0000, Ricardo Koller wrote:
> The arm/pmu test prints the value of counters as %ld.  Most tests start
> with counters around 0 or UINT_MAX, so having something like -16 instead of
> 0xffff_fff0 is not very useful.
> 
> Report counter values as hexadecimals.
> 
> Reported-by: Alexandru Elisei <alexandru.elisei@arm.com>
> Signed-off-by: Ricardo Koller <ricarkol@google.com>

Reviewed-by: Oliver Upton <oliver.upton@linux.dev>

--
Thanks,
Oliver


* Re: [kvm-unit-tests PATCH v3 2/4] arm: pmu: Prepare for testing 64-bit overflows
  2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 2/4] arm: pmu: Prepare for testing 64-bit overflows Ricardo Koller
@ 2023-01-12  5:56   ` Reiji Watanabe
  2023-01-13 15:20     ` Ricardo Koller
  0 siblings, 1 reply; 22+ messages in thread
From: Reiji Watanabe @ 2023-01-12  5:56 UTC (permalink / raw)
  To: Ricardo Koller
  Cc: kvm, kvmarm, andrew.jones, maz, alexandru.elisei, eric.auger,
	oliver.upton

Hi Ricardo,

On Mon, Jan 9, 2023 at 1:18 PM Ricardo Koller <ricarkol@google.com> wrote:
>
> PMUv3p5 adds a knob, PMCR_EL0.LP == 1, that allows overflowing at 64-bits
> instead of 32. Prepare by doing these 3 things:
>
> 1. Add a "bool overflow_at_64bits" argument to all tests checking
>    overflows.
> 2. Extend satisfy_prerequisites() to check if the machine supports
>    "overflow_at_64bits".
> 3. Refactor the test invocations to use the new "run_test()" which adds a
>    report prefix indicating whether the test uses 64 or 32-bit overflows.
>
> A subsequent commit will actually add the 64-bit overflow tests.
>
> Signed-off-by: Ricardo Koller <ricarkol@google.com>
> ---
>  arm/pmu.c | 92 ++++++++++++++++++++++++++++++++-----------------------
>  1 file changed, 53 insertions(+), 39 deletions(-)
>
> diff --git a/arm/pmu.c b/arm/pmu.c
> index 7f0794d..0d06b59 100644
> --- a/arm/pmu.c
> +++ b/arm/pmu.c
> @@ -164,13 +164,13 @@ static void pmu_reset(void)
>  /* event counter tests only implemented for aarch64 */
>  static void test_event_introspection(void) {}
>  static void test_event_counter_config(void) {}
> -static void test_basic_event_count(void) {}
> -static void test_mem_access(void) {}
> -static void test_sw_incr(void) {}
> -static void test_chained_counters(void) {}
> -static void test_chained_sw_incr(void) {}
> -static void test_chain_promotion(void) {}
> -static void test_overflow_interrupt(void) {}
> +static void test_basic_event_count(bool overflow_at_64bits) {}
> +static void test_mem_access(bool overflow_at_64bits) {}
> +static void test_sw_incr(bool overflow_at_64bits) {}
> +static void test_chained_counters(bool unused) {}
> +static void test_chained_sw_incr(bool unused) {}
> +static void test_chain_promotion(bool unused) {}
> +static void test_overflow_interrupt(bool overflow_at_64bits) {}
>
>  #elif defined(__aarch64__)
>  #define ID_AA64DFR0_PERFMON_SHIFT 8
> @@ -416,6 +416,7 @@ static bool satisfy_prerequisites(uint32_t *events, unsigned int nb_events)
>                         return false;
>                 }
>         }
> +

Nit: Unnecessary addition of a blank line.

>         return true;
>  }
>
> @@ -435,13 +436,24 @@ static uint64_t pmevcntr_mask(void)
>         return (uint32_t)~0;
>  }
>
> -static void test_basic_event_count(void)
> +static bool check_overflow_prerequisites(bool overflow_at_64bits)
> +{
> +       if (overflow_at_64bits && pmu.version < ID_DFR0_PMU_V3_8_5) {
> +               report_skip("Skip test as 64 overflows need FEAT_PMUv3p5");
> +               return false;
> +       }
> +
> +       return true;
> +}
> +
> +static void test_basic_event_count(bool overflow_at_64bits)
>  {
>         uint32_t implemented_counter_mask, non_implemented_counter_mask;
>         uint32_t counter_mask;
>         uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
>
> -       if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> +       if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> +           !check_overflow_prerequisites(overflow_at_64bits))
>                 return;
>
>         implemented_counter_mask = BIT(pmu.nb_implemented_counters) - 1;
> @@ -515,12 +527,13 @@ static void test_basic_event_count(void)
>                 "check overflow happened on #0 only");
>  }
>
> -static void test_mem_access(void)
> +static void test_mem_access(bool overflow_at_64bits)
>  {
>         void *addr = malloc(PAGE_SIZE);
>         uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
>
> -       if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> +       if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> +           !check_overflow_prerequisites(overflow_at_64bits))
>                 return;
>
>         pmu_reset();
> @@ -551,13 +564,14 @@ static void test_mem_access(void)
>                         read_sysreg(pmovsclr_el0));
>  }
>
> -static void test_sw_incr(void)
> +static void test_sw_incr(bool overflow_at_64bits)
>  {
>         uint32_t events[] = {SW_INCR, SW_INCR};
>         uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
>         int i;
>
> -       if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> +       if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> +           !check_overflow_prerequisites(overflow_at_64bits))
>                 return;
>
>         pmu_reset();
> @@ -597,7 +611,7 @@ static void test_sw_incr(void)
>                 "overflow on counter #0 after 100 SW_INCR");
>  }
>
> -static void test_chained_counters(void)
> +static void test_chained_counters(bool unused)
>  {
>         uint32_t events[] = {CPU_CYCLES, CHAIN};
>
> @@ -638,7 +652,7 @@ static void test_chained_counters(void)
>         report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
>  }
>
> -static void test_chained_sw_incr(void)
> +static void test_chained_sw_incr(bool unused)
>  {
>         uint32_t events[] = {SW_INCR, CHAIN};
>         uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> @@ -691,7 +705,7 @@ static void test_chained_sw_incr(void)
>                     read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
>  }
>
> -static void test_chain_promotion(void)
> +static void test_chain_promotion(bool unused)
>  {
>         uint32_t events[] = {MEM_ACCESS, CHAIN};
>         void *addr = malloc(PAGE_SIZE);
> @@ -840,13 +854,14 @@ static bool expect_interrupts(uint32_t bitmap)
>         return true;
>  }
>
> -static void test_overflow_interrupt(void)
> +static void test_overflow_interrupt(bool overflow_at_64bits)
>  {
>         uint32_t events[] = {MEM_ACCESS, SW_INCR};
>         void *addr = malloc(PAGE_SIZE);
>         int i;
>
> -       if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> +       if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> +           !check_overflow_prerequisites(overflow_at_64bits))
>                 return;
>
>         gic_enable_defaults();
> @@ -1070,6 +1085,19 @@ static bool pmu_probe(void)
>         return true;
>  }
>
> +static void run_test(char *name, void (*test)(bool), bool overflow_at_64bits)
> +{
> +       const char *prefix = overflow_at_64bits ? "64-bit overflows" : "32-bit overflows";
> +
> +       report_prefix_push(name);
> +       report_prefix_push(prefix);
> +
> +       test(overflow_at_64bits);
> +
> +       report_prefix_pop();
> +       report_prefix_pop();
> +}
> +
>  int main(int argc, char *argv[])
>  {
>         int cpi = 0;
> @@ -1102,33 +1130,19 @@ int main(int argc, char *argv[])
>                 test_event_counter_config();
>                 report_prefix_pop();
>         } else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
> -               report_prefix_push(argv[1]);
> -               test_basic_event_count();
> -               report_prefix_pop();
> +               run_test(argv[1], test_basic_event_count, false);
>         } else if (strcmp(argv[1], "pmu-mem-access") == 0) {
> -               report_prefix_push(argv[1]);
> -               test_mem_access();
> -               report_prefix_pop();
> +               run_test(argv[1], test_mem_access, false);
>         } else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
> -               report_prefix_push(argv[1]);
> -               test_sw_incr();
> -               report_prefix_pop();
> +               run_test(argv[1], test_sw_incr, false);
>         } else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
> -               report_prefix_push(argv[1]);
> -               test_chained_counters();
> -               report_prefix_pop();
> +               run_test(argv[1], test_chained_counters, false);
>         } else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
> -               report_prefix_push(argv[1]);
> -               test_chained_sw_incr();
> -               report_prefix_pop();
> +               run_test(argv[1], test_chained_sw_incr, false);
>         } else if (strcmp(argv[1], "pmu-chain-promotion") == 0) {
> -               report_prefix_push(argv[1]);
> -               test_chain_promotion();
> -               report_prefix_pop();
> +               run_test(argv[1], test_chain_promotion, false);
>         } else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
> -               report_prefix_push(argv[1]);
> -               test_overflow_interrupt();
> -               report_prefix_pop();
> +               run_test(argv[1], test_overflow_interrupt, false);
>         } else {
>                 report_abort("Unknown sub-test '%s'", argv[1]);
>         }

Perhaps it might be useful to generalize run_test() a bit more so that it
can be used for other existing test cases as well?
(e.g. "pmu-event-counter-config", etc.)
---
i.e. (the following are not all of the changes, though):

-static void run_test(char *name, void (*test)(bool), bool overflow_at_64bits)
+static void run_test(const char *name, const char *prefix, void
(*test)(bool), void *arg)
 {
-       const char *prefix = overflow_at_64bits ? "64-bit overflows" :
"32-bit overflows";
-
        report_prefix_push(name);
        report_prefix_push(prefix);

-       test(overflow_at_64bits);
+       test(arg);

        report_prefix_pop();
        report_prefix_pop();
 }

+static void run_event_test(char *name, void (*test)(bool), bool
overflow_at_64bits)
+{
+       const char *prefix = overflow_at_64bits ? "64-bit overflows" :
"32-bit overflows";
+
+       run_test(name, prefix, test, (void *)overflow_at_64bits);
+}
---

Having said that, the patch already improves the code,
and I don't see any issue.

Reviewed-by: Reiji Watanabe <reijiw@google.com>

Thank you,
Reiji


* Re: [kvm-unit-tests PATCH v3 2/4] arm: pmu: Prepare for testing 64-bit overflows
  2023-01-12  5:56   ` Reiji Watanabe
@ 2023-01-13 15:20     ` Ricardo Koller
  0 siblings, 0 replies; 22+ messages in thread
From: Ricardo Koller @ 2023-01-13 15:20 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: kvm, kvmarm, andrew.jones, maz, alexandru.elisei, eric.auger,
	oliver.upton

On Wed, Jan 11, 2023 at 09:56:45PM -0800, Reiji Watanabe wrote:
> Hi Ricardo,
> 
> On Mon, Jan 9, 2023 at 1:18 PM Ricardo Koller <ricarkol@google.com> wrote:
> >
> > PMUv3p5 adds a knob, PMCR_EL0.LP == 1, that allows overflowing at 64-bits
> > instead of 32. Prepare by doing these 3 things:
> >
> > 1. Add a "bool overflow_at_64bits" argument to all tests checking
> >    overflows.
> > 2. Extend satisfy_prerequisites() to check if the machine supports
> >    "overflow_at_64bits".
> > 3. Refactor the test invocations to use the new "run_test()" which adds a
> >    report prefix indicating whether the test uses 64 or 32-bit overflows.
> >
> > A subsequent commit will actually add the 64-bit overflow tests.
> >
> > Signed-off-by: Ricardo Koller <ricarkol@google.com>
> > ---
> >  arm/pmu.c | 92 ++++++++++++++++++++++++++++++++-----------------------
> >  1 file changed, 53 insertions(+), 39 deletions(-)
> >
> > diff --git a/arm/pmu.c b/arm/pmu.c
> > index 7f0794d..0d06b59 100644
> > --- a/arm/pmu.c
> > +++ b/arm/pmu.c
> > @@ -164,13 +164,13 @@ static void pmu_reset(void)
> >  /* event counter tests only implemented for aarch64 */
> >  static void test_event_introspection(void) {}
> >  static void test_event_counter_config(void) {}
> > -static void test_basic_event_count(void) {}
> > -static void test_mem_access(void) {}
> > -static void test_sw_incr(void) {}
> > -static void test_chained_counters(void) {}
> > -static void test_chained_sw_incr(void) {}
> > -static void test_chain_promotion(void) {}
> > -static void test_overflow_interrupt(void) {}
> > +static void test_basic_event_count(bool overflow_at_64bits) {}
> > +static void test_mem_access(bool overflow_at_64bits) {}
> > +static void test_sw_incr(bool overflow_at_64bits) {}
> > +static void test_chained_counters(bool unused) {}
> > +static void test_chained_sw_incr(bool unused) {}
> > +static void test_chain_promotion(bool unused) {}
> > +static void test_overflow_interrupt(bool overflow_at_64bits) {}
> >
> >  #elif defined(__aarch64__)
> >  #define ID_AA64DFR0_PERFMON_SHIFT 8
> > @@ -416,6 +416,7 @@ static bool satisfy_prerequisites(uint32_t *events, unsigned int nb_events)
> >                         return false;
> >                 }
> >         }
> > +
> 
> Nit: Unnecessary addition of the line.
> 
> >         return true;
> >  }
> >
> > @@ -435,13 +436,24 @@ static uint64_t pmevcntr_mask(void)
> >         return (uint32_t)~0;
> >  }
> >
> > -static void test_basic_event_count(void)
> > +static bool check_overflow_prerequisites(bool overflow_at_64bits)
> > +{
> > +       if (overflow_at_64bits && pmu.version < ID_DFR0_PMU_V3_8_5) {
> > +               report_skip("Skip test as 64 overflows need FEAT_PMUv3p5");
> > +               return false;
> > +       }
> > +
> > +       return true;
> > +}
> > +
> > +static void test_basic_event_count(bool overflow_at_64bits)
> >  {
> >         uint32_t implemented_counter_mask, non_implemented_counter_mask;
> >         uint32_t counter_mask;
> >         uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
> >
> > -       if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> > +       if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> > +           !check_overflow_prerequisites(overflow_at_64bits))
> >                 return;
> >
> >         implemented_counter_mask = BIT(pmu.nb_implemented_counters) - 1;
> > @@ -515,12 +527,13 @@ static void test_basic_event_count(void)
> >                 "check overflow happened on #0 only");
> >  }
> >
> > -static void test_mem_access(void)
> > +static void test_mem_access(bool overflow_at_64bits)
> >  {
> >         void *addr = malloc(PAGE_SIZE);
> >         uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
> >
> > -       if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> > +       if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> > +           !check_overflow_prerequisites(overflow_at_64bits))
> >                 return;
> >
> >         pmu_reset();
> > @@ -551,13 +564,14 @@ static void test_mem_access(void)
> >                         read_sysreg(pmovsclr_el0));
> >  }
> >
> > -static void test_sw_incr(void)
> > +static void test_sw_incr(bool overflow_at_64bits)
> >  {
> >         uint32_t events[] = {SW_INCR, SW_INCR};
> >         uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> >         int i;
> >
> > -       if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> > +       if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> > +           !check_overflow_prerequisites(overflow_at_64bits))
> >                 return;
> >
> >         pmu_reset();
> > @@ -597,7 +611,7 @@ static void test_sw_incr(void)
> >                 "overflow on counter #0 after 100 SW_INCR");
> >  }
> >
> > -static void test_chained_counters(void)
> > +static void test_chained_counters(bool unused)
> >  {
> >         uint32_t events[] = {CPU_CYCLES, CHAIN};
> >
> > @@ -638,7 +652,7 @@ static void test_chained_counters(void)
> >         report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
> >  }
> >
> > -static void test_chained_sw_incr(void)
> > +static void test_chained_sw_incr(bool unused)
> >  {
> >         uint32_t events[] = {SW_INCR, CHAIN};
> >         uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> > @@ -691,7 +705,7 @@ static void test_chained_sw_incr(void)
> >                     read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
> >  }
> >
> > -static void test_chain_promotion(void)
> > +static void test_chain_promotion(bool unused)
> >  {
> >         uint32_t events[] = {MEM_ACCESS, CHAIN};
> >         void *addr = malloc(PAGE_SIZE);
> > @@ -840,13 +854,14 @@ static bool expect_interrupts(uint32_t bitmap)
> >         return true;
> >  }
> >
> > -static void test_overflow_interrupt(void)
> > +static void test_overflow_interrupt(bool overflow_at_64bits)
> >  {
> >         uint32_t events[] = {MEM_ACCESS, SW_INCR};
> >         void *addr = malloc(PAGE_SIZE);
> >         int i;
> >
> > -       if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> > +       if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> > +           !check_overflow_prerequisites(overflow_at_64bits))
> >                 return;
> >
> >         gic_enable_defaults();
> > @@ -1070,6 +1085,19 @@ static bool pmu_probe(void)
> >         return true;
> >  }
> >
> > +static void run_test(char *name, void (*test)(bool), bool overflow_at_64bits)
> > +{
> > +       const char *prefix = overflow_at_64bits ? "64-bit overflows" : "32-bit overflows";
> > +
> > +       report_prefix_push(name);
> > +       report_prefix_push(prefix);
> > +
> > +       test(overflow_at_64bits);
> > +
> > +       report_prefix_pop();
> > +       report_prefix_pop();
> > +}
> > +
> >  int main(int argc, char *argv[])
> >  {
> >         int cpi = 0;
> > @@ -1102,33 +1130,19 @@ int main(int argc, char *argv[])
> >                 test_event_counter_config();
> >                 report_prefix_pop();
> >         } else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
> > -               report_prefix_push(argv[1]);
> > -               test_basic_event_count();
> > -               report_prefix_pop();
> > +               run_test(argv[1], test_basic_event_count, false);
> >         } else if (strcmp(argv[1], "pmu-mem-access") == 0) {
> > -               report_prefix_push(argv[1]);
> > -               test_mem_access();
> > -               report_prefix_pop();
> > +               run_test(argv[1], test_mem_access, false);
> >         } else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
> > -               report_prefix_push(argv[1]);
> > -               test_sw_incr();
> > -               report_prefix_pop();
> > +               run_test(argv[1], test_sw_incr, false);
> >         } else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
> > -               report_prefix_push(argv[1]);
> > -               test_chained_counters();
> > -               report_prefix_pop();
> > +               run_test(argv[1], test_chained_counters, false);
> >         } else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
> > -               report_prefix_push(argv[1]);
> > -               test_chained_sw_incr();
> > -               report_prefix_pop();
> > +               run_test(argv[1], test_chained_sw_incr, false);
> >         } else if (strcmp(argv[1], "pmu-chain-promotion") == 0) {
> > -               report_prefix_push(argv[1]);
> > -               test_chain_promotion();
> > -               report_prefix_pop();
> > +               run_test(argv[1], test_chain_promotion, false);
> >         } else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
> > -               report_prefix_push(argv[1]);
> > -               test_overflow_interrupt();
> > -               report_prefix_pop();
> > +               run_test(argv[1], test_overflow_interrupt, false);
> >         } else {
> >                 report_abort("Unknown sub-test '%s'", argv[1]);
> >         }
> 
> Perhaps it might be useful to generalize run_test() a bit more so that it
> can be used for other existing test cases as well ?

Good idea, that's much better. Will send a v4 with this suggestion.

> (e.g. "pmu-event-counter-config", etc)
> ---
> i.e (The following are not all of the changes though).
> 
> -static void run_test(char *name, void (*test)(bool), bool overflow_at_64bits)
> +static void run_test(const char *name, const char *prefix, void
> (*test)(bool), void *arg)
>  {
> -       const char *prefix = overflow_at_64bits ? "64-bit overflows" :
> "32-bit overflows";
> -
>         report_prefix_push(name);
>         report_prefix_push(prefix);
> 
> -       test(overflow_at_64bits);
> +       test(arg);
> 
>         report_prefix_pop();
>         report_prefix_pop();
>  }
> 
> +static void run_event_test(char *name, void (*test)(bool), bool
> overflow_at_64bits)
> +{
> +       const char *prefix = overflow_at_64bits ? "64-bit overflows" :
> "32-bit overflows";
> +
> +       run_test(name, prefix, test, (void *)overflow_at_64bits);
> +}
> ---
> 
> Having said that, the patch already improves the code,
> and I don't see any issue.
> 
> Reviewed-by: Reiji Watanabe <reijiw@google.com>
> 
> Thank you,
> Reiji


* Re: [kvm-unit-tests PATCH v3 3/4] arm: pmu: Add tests for 64-bit overflows
  2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 3/4] arm: pmu: Add tests for " Ricardo Koller
@ 2023-01-19  5:58   ` Reiji Watanabe
  2023-01-24 15:11     ` Ricardo Koller
  2023-01-25  2:19     ` Ricardo Koller
  2023-01-23 20:33   ` Eric Auger
  1 sibling, 2 replies; 22+ messages in thread
From: Reiji Watanabe @ 2023-01-19  5:58 UTC (permalink / raw)
  To: Ricardo Koller
  Cc: kvm, kvmarm, andrew.jones, maz, alexandru.elisei, eric.auger,
	oliver.upton

Hi Ricardo,

On Mon, Jan 9, 2023 at 1:18 PM Ricardo Koller <ricarkol@google.com> wrote:
>
> Modify all tests checking overflows to support both 32 (PMCR_EL0.LP == 0)
> and 64-bit overflows (PMCR_EL0.LP == 1). 64-bit overflows are only
> supported on PMUv3p5.
>
> Note that chained tests do not implement "overflow_at_64bits == true".
> That's because there are no CHAIN events when "PMCR_EL0.LP == 1" (for more
> details see AArch64.IncrementEventCounter() pseudocode in the ARM ARM DDI
> 0487H.a, J1.1.1 "aarch64/debug").
>
> Signed-off-by: Ricardo Koller <ricarkol@google.com>

Reviewed-by: Reiji Watanabe <reijiw@google.com>

I have a few nits below.

> ---
>  arm/pmu.c | 116 +++++++++++++++++++++++++++++++++++-------------------
>  1 file changed, 75 insertions(+), 41 deletions(-)
>
> diff --git a/arm/pmu.c b/arm/pmu.c
> index 0d06b59..72d0f50 100644
> --- a/arm/pmu.c
> +++ b/arm/pmu.c
> @@ -28,6 +28,7 @@
>  #define PMU_PMCR_X         (1 << 4)
>  #define PMU_PMCR_DP        (1 << 5)
>  #define PMU_PMCR_LC        (1 << 6)
> +#define PMU_PMCR_LP        (1 << 7)
>  #define PMU_PMCR_N_SHIFT   11
>  #define PMU_PMCR_N_MASK    0x1f
>  #define PMU_PMCR_ID_SHIFT  16
> @@ -56,9 +57,13 @@
>
>  #define ALL_SET                        0x00000000FFFFFFFFULL

Nit: Perhaps renaming this to ALL_SET_32 might be clearer.
Since the test is now aware of 64-bit counters, the name 'ALL_SET'
makes me think that all 64 bits are set.

>  #define ALL_CLEAR              0x0000000000000000ULL
> -#define PRE_OVERFLOW           0x00000000FFFFFFF0ULL
> +#define PRE_OVERFLOW_32                0x00000000FFFFFFF0ULL
> +#define PRE_OVERFLOW_64                0xFFFFFFFFFFFFFFF0ULL
>  #define PRE_OVERFLOW2          0x00000000FFFFFFDCULL
>
> +#define PRE_OVERFLOW(__overflow_at_64bits)                             \
> +       (__overflow_at_64bits ? PRE_OVERFLOW_64 : PRE_OVERFLOW_32)
> +
>  #define PMU_PPI                        23
>
>  struct pmu {
> @@ -449,8 +454,10 @@ static bool check_overflow_prerequisites(bool overflow_at_64bits)
>  static void test_basic_event_count(bool overflow_at_64bits)
>  {
>         uint32_t implemented_counter_mask, non_implemented_counter_mask;
> -       uint32_t counter_mask;
> +       uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> +       uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
>         uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
> +       uint32_t counter_mask;
>
>         if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
>             !check_overflow_prerequisites(overflow_at_64bits))
> @@ -472,13 +479,13 @@ static void test_basic_event_count(bool overflow_at_64bits)
>          * clear cycle and all event counters and allow counter enablement
>          * through PMCNTENSET. LC is RES1.
>          */
> -       set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
> +       set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P | pmcr_lp);
>         isb();
> -       report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC), "pmcr: reset counters");
> +       report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC | pmcr_lp), "pmcr: reset counters");
>
>         /* Preset counter #0 to pre overflow value to trigger an overflow */
> -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> -       report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
> +       write_regn_el0(pmevcntr, 0, pre_overflow);
> +       report(read_regn_el0(pmevcntr, 0) == pre_overflow,
>                 "counter #0 preset to pre-overflow value");
>         report(!read_regn_el0(pmevcntr, 1), "counter #1 is 0");
>
> @@ -531,6 +538,8 @@ static void test_mem_access(bool overflow_at_64bits)
>  {
>         void *addr = malloc(PAGE_SIZE);
>         uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
> +       uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> +       uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
>
>         if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
>             !check_overflow_prerequisites(overflow_at_64bits))
> @@ -542,7 +551,7 @@ static void test_mem_access(bool overflow_at_64bits)
>         write_regn_el0(pmevtyper, 1, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
>         write_sysreg_s(0x3, PMCNTENSET_EL0);
>         isb();
> -       mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
> +       mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>         report_info("counter #0 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
>         report_info("counter #1 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
>         /* We may measure more than 20 mem access depending on the core */
> @@ -552,11 +561,11 @@ static void test_mem_access(bool overflow_at_64bits)
>
>         pmu_reset();
>
> -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> -       write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> +       write_regn_el0(pmevcntr, 0, pre_overflow);
> +       write_regn_el0(pmevcntr, 1, pre_overflow);
>         write_sysreg_s(0x3, PMCNTENSET_EL0);
>         isb();
> -       mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
> +       mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>         report(read_sysreg(pmovsclr_el0) == 0x3,
>                "Ran 20 mem accesses with expected overflows on both counters");
>         report_info("cnt#0 = %ld cnt#1=%ld overflow=0x%lx",
> @@ -566,8 +575,10 @@ static void test_mem_access(bool overflow_at_64bits)
>
>  static void test_sw_incr(bool overflow_at_64bits)
>  {
> +       uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> +       uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
>         uint32_t events[] = {SW_INCR, SW_INCR};
> -       uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> +       uint64_t cntr0 = (pre_overflow + 100) & pmevcntr_mask();
>         int i;
>
>         if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> @@ -581,7 +592,7 @@ static void test_sw_incr(bool overflow_at_64bits)
>         /* enable counters #0 and #1 */
>         write_sysreg_s(0x3, PMCNTENSET_EL0);
>
> -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> +       write_regn_el0(pmevcntr, 0, pre_overflow);
>         isb();
>
>         for (i = 0; i < 100; i++)
> @@ -589,14 +600,14 @@ static void test_sw_incr(bool overflow_at_64bits)
>
>         isb();
>         report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
> -       report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
> +       report(read_regn_el0(pmevcntr, 0) == pre_overflow,
>                 "PWSYNC does not increment if PMCR.E is unset");
>
>         pmu_reset();
>
> -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> +       write_regn_el0(pmevcntr, 0, pre_overflow);
>         write_sysreg_s(0x3, PMCNTENSET_EL0);
> -       set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> +       set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>         isb();
>
>         for (i = 0; i < 100; i++)
> @@ -614,6 +625,7 @@ static void test_sw_incr(bool overflow_at_64bits)
>  static void test_chained_counters(bool unused)
>  {
>         uint32_t events[] = {CPU_CYCLES, CHAIN};
> +       uint64_t all_set = pmevcntr_mask();
>
>         if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
>                 return;
> @@ -624,7 +636,7 @@ static void test_chained_counters(bool unused)
>         write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
>         /* enable counters #0 and #1 */
>         write_sysreg_s(0x3, PMCNTENSET_EL0);
> -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> +       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
>
>         precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
>
> @@ -636,26 +648,26 @@ static void test_chained_counters(bool unused)
>         pmu_reset();
>         write_sysreg_s(0x3, PMCNTENSET_EL0);
>
> -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> +       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
>         write_regn_el0(pmevcntr, 1, 0x1);
>         precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
>         report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
>         report(read_regn_el0(pmevcntr, 1) == 2, "CHAIN counter #1 set to 2");
>         report(read_sysreg(pmovsclr_el0) == 0x1, "overflow recorded for chained incr #2");
>
> -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> -       write_regn_el0(pmevcntr, 1, ALL_SET);
> +       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> +       write_regn_el0(pmevcntr, 1, all_set);
>
>         precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
>         report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
> -       report(!read_regn_el0(pmevcntr, 1), "CHAIN counter #1 wrapped");
> +       report(read_regn_el0(pmevcntr, 1) == 0, "CHAIN counter #1 wrapped");
>         report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
>  }
>
>  static void test_chained_sw_incr(bool unused)
>  {
>         uint32_t events[] = {SW_INCR, CHAIN};
> -       uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> +       uint64_t cntr0 = (PRE_OVERFLOW_32 + 100) & pmevcntr_mask();
>         uint64_t cntr1 = (ALL_SET + 1) & pmevcntr_mask();
>         int i;
>
> @@ -669,7 +681,7 @@ static void test_chained_sw_incr(bool unused)
>         /* enable counters #0 and #1 */
>         write_sysreg_s(0x3, PMCNTENSET_EL0);
>
> -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> +       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
>         set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
>         isb();
>
> @@ -687,7 +699,7 @@ static void test_chained_sw_incr(bool unused)
>         pmu_reset();
>
>         write_regn_el0(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0);
> -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> +       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
>         write_regn_el0(pmevcntr, 1, ALL_SET);
>         write_sysreg_s(0x3, PMCNTENSET_EL0);
>         set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> @@ -726,7 +738,7 @@ static void test_chain_promotion(bool unused)
>
>         /* Only enable even counter */
>         pmu_reset();
> -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> +       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
>         write_sysreg_s(0x1, PMCNTENSET_EL0);
>         isb();
>
> @@ -856,6 +868,9 @@ static bool expect_interrupts(uint32_t bitmap)
>
>  static void test_overflow_interrupt(bool overflow_at_64bits)
>  {
> +       uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> +       uint64_t all_set = pmevcntr_mask();
> +       uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
>         uint32_t events[] = {MEM_ACCESS, SW_INCR};
>         void *addr = malloc(PAGE_SIZE);
>         int i;
> @@ -874,16 +889,16 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
>         write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
>         write_regn_el0(pmevtyper, 1, SW_INCR | PMEVTYPER_EXCLUDE_EL0);
>         write_sysreg_s(0x3, PMCNTENSET_EL0);
> -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> -       write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> +       write_regn_el0(pmevcntr, 0, pre_overflow);
> +       write_regn_el0(pmevcntr, 1, pre_overflow);
>         isb();
>
>         /* interrupts are disabled (PMINTENSET_EL1 == 0) */
>
> -       mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> +       mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>         report(expect_interrupts(0), "no overflow interrupt after preset");
>
> -       set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> +       set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>         isb();
>
>         for (i = 0; i < 100; i++)
> @@ -898,12 +913,12 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
>
>         pmu_reset_stats();

This isn't directly related to the patch, but since some bits of
pmovsclr_el0 are already set at this point (even though interrupts
are disabled), I think it would be good to clear pmovsclr_el0 here.

Thank you,
Reiji



>
> -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> -       write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> +       write_regn_el0(pmevcntr, 0, pre_overflow);
> +       write_regn_el0(pmevcntr, 1, pre_overflow);
>         write_sysreg(ALL_SET, pmintenset_el1);
>         isb();
>
> -       mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> +       mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>         for (i = 0; i < 100; i++)
>                 write_sysreg(0x3, pmswinc_el0);
>
> @@ -912,25 +927,40 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
>         report(expect_interrupts(0x3),
>                 "overflow interrupts expected on #0 and #1");
>
> -       /* promote to 64-b */
> +       /*
> +        * promote to 64-b:
> +        *
> +        * This only applies to the !overflow_at_64bits case, as
> +        * overflow_at_64bits doesn't implement CHAIN events. The
> +        * overflow_at_64bits case just checks that chained counters are
> +        * not incremented when PMCR.LP == 1.
> +        */
>
>         pmu_reset_stats();
>
>         write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
> -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> +       write_regn_el0(pmevcntr, 0, pre_overflow);
>         isb();
> -       mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> -       report(expect_interrupts(0x1),
> -               "expect overflow interrupt on 32b boundary");
> +       mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> +       report(expect_interrupts(0x1), "expect overflow interrupt");
>
>         /* overflow on odd counter */
>         pmu_reset_stats();
> -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> -       write_regn_el0(pmevcntr, 1, ALL_SET);
> +       write_regn_el0(pmevcntr, 0, pre_overflow);
> +       write_regn_el0(pmevcntr, 1, all_set);
>         isb();
> -       mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E);
> -       report(expect_interrupts(0x3),
> -               "expect overflow interrupt on even and odd counter");
> +       mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> +       if (overflow_at_64bits) {
> +               report(expect_interrupts(0x1),
> +                      "expect overflow interrupt on even counter");
> +               report(read_regn_el0(pmevcntr, 1) == all_set,
> +                      "Odd counter did not change");
> +       } else {
> +               report(expect_interrupts(0x3),
> +                      "expect overflow interrupt on even and odd counter");
> +               report(read_regn_el0(pmevcntr, 1) != all_set,
> +                      "Odd counter wrapped");
> +       }
>  }
>  #endif
>
> @@ -1131,10 +1161,13 @@ int main(int argc, char *argv[])
>                 report_prefix_pop();
>         } else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
>                 run_test(argv[1], test_basic_event_count, false);
> +               run_test(argv[1], test_basic_event_count, true);
>         } else if (strcmp(argv[1], "pmu-mem-access") == 0) {
>                 run_test(argv[1], test_mem_access, false);
> +               run_test(argv[1], test_mem_access, true);
>         } else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
>                 run_test(argv[1], test_sw_incr, false);
> +               run_test(argv[1], test_sw_incr, true);
>         } else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
>                 run_test(argv[1], test_chained_counters, false);
>         } else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
> @@ -1143,6 +1176,7 @@ int main(int argc, char *argv[])
>                 run_test(argv[1], test_chain_promotion, false);
>         } else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
>                 run_test(argv[1], test_overflow_interrupt, false);
> +               run_test(argv[1], test_overflow_interrupt, true);
>         } else {
>                 report_abort("Unknown sub-test '%s'", argv[1]);
>         }
> --
> 2.39.0.314.g84b9a713c41-goog
>

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [kvm-unit-tests PATCH v3 1/4] arm: pmu: Fix overflow checks for PMUv3p5 long counters
  2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 1/4] arm: pmu: Fix overflow checks for PMUv3p5 long counters Ricardo Koller
  2023-01-09 21:42   ` Oliver Upton
@ 2023-01-23 20:16   ` Eric Auger
  1 sibling, 0 replies; 22+ messages in thread
From: Eric Auger @ 2023-01-23 20:16 UTC (permalink / raw)
  To: Ricardo Koller, kvm, kvmarm, andrew.jones
  Cc: maz, alexandru.elisei, oliver.upton, reijiw

Hi Ricardo,

On 1/9/23 22:17, Ricardo Koller wrote:
> PMUv3p5 uses 64-bit counters irrespective of whether the PMU is configured
> for overflowing at 32 or 64-bits. The consequence is that tests that check
> the counter values after overflowing should not assume that values will be
> wrapped around 32-bits: they overflow into the other half of the 64-bit
> counters on PMUv3p5.
>
> Fix tests by correctly checking overflowing-counters against the expected
> 64-bit value.
>
> Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>

Thanks

Eric
> ---
>  arm/pmu.c | 38 ++++++++++++++++++++++++++++----------
>  1 file changed, 28 insertions(+), 10 deletions(-)
>
> diff --git a/arm/pmu.c b/arm/pmu.c
> index cd47b14..7f0794d 100644
> --- a/arm/pmu.c
> +++ b/arm/pmu.c
> @@ -54,10 +54,10 @@
>  #define EXT_COMMON_EVENTS_LOW	0x4000
>  #define EXT_COMMON_EVENTS_HIGH	0x403F
>  
> -#define ALL_SET			0xFFFFFFFF
> -#define ALL_CLEAR		0x0
> -#define PRE_OVERFLOW		0xFFFFFFF0
> -#define PRE_OVERFLOW2		0xFFFFFFDC
> +#define ALL_SET			0x00000000FFFFFFFFULL
> +#define ALL_CLEAR		0x0000000000000000ULL
> +#define PRE_OVERFLOW		0x00000000FFFFFFF0ULL
> +#define PRE_OVERFLOW2		0x00000000FFFFFFDCULL
>  
>  #define PMU_PPI			23
>  
> @@ -419,6 +419,22 @@ static bool satisfy_prerequisites(uint32_t *events, unsigned int nb_events)
>  	return true;
>  }
>  
> +static uint64_t pmevcntr_mask(void)
> +{
> +	/*
> +	 * Bits [63:0] are always incremented for 64-bit counters,
> +	 * even if the PMU is configured to generate an overflow at
> +	 * bits [31:0]
> +	 *
> +	 * For more details see the AArch64.IncrementEventCounter()
> +	 * pseudo-code in the ARM ARM DDI 0487I.a, section J1.1.1.
> +	 */
> +	if (pmu.version >= ID_DFR0_PMU_V3_8_5)
> +		return ~0;
> +
> +	return (uint32_t)~0;
> +}
> +
>  static void test_basic_event_count(void)
>  {
>  	uint32_t implemented_counter_mask, non_implemented_counter_mask;
> @@ -538,6 +554,7 @@ static void test_mem_access(void)
>  static void test_sw_incr(void)
>  {
>  	uint32_t events[] = {SW_INCR, SW_INCR};
> +	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
>  	int i;
>  
>  	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> @@ -572,9 +589,8 @@ static void test_sw_incr(void)
>  		write_sysreg(0x3, pmswinc_el0);
>  
>  	isb();
> -	report(read_regn_el0(pmevcntr, 0)  == 84, "counter #1 after + 100 SW_INCR");
> -	report(read_regn_el0(pmevcntr, 1)  == 100,
> -		"counter #0 after + 100 SW_INCR");
> +	report(read_regn_el0(pmevcntr, 0) == cntr0, "counter #0 after + 100 SW_INCR");
> +	report(read_regn_el0(pmevcntr, 1) == 100, "counter #1 after + 100 SW_INCR");
>  	report_info("counter values after 100 SW_INCR #0=%ld #1=%ld",
>  		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
>  	report(read_sysreg(pmovsclr_el0) == 0x1,
> @@ -625,6 +641,8 @@ static void test_chained_counters(void)
>  static void test_chained_sw_incr(void)
>  {
>  	uint32_t events[] = {SW_INCR, CHAIN};
> +	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> +	uint64_t cntr1 = (ALL_SET + 1) & pmevcntr_mask();
>  	int i;
>  
>  	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> @@ -666,9 +684,9 @@ static void test_chained_sw_incr(void)
>  
>  	isb();
>  	report((read_sysreg(pmovsclr_el0) == 0x3) &&
> -		(read_regn_el0(pmevcntr, 1) == 0) &&
> -		(read_regn_el0(pmevcntr, 0) == 84),
> -		"expected overflows and values after 100 SW_INCR/CHAIN");
> +	       (read_regn_el0(pmevcntr, 0) == cntr0) &&
> +	       (read_regn_el0(pmevcntr, 1) == cntr1),
> +	       "expected overflows and values after 100 SW_INCR/CHAIN");
>  	report_info("overflow=0x%lx, #0=%ld #1=%ld", read_sysreg(pmovsclr_el0),
>  		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
>  }


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [kvm-unit-tests PATCH v3 4/4] arm: pmu: Print counter values as hexadecimals
  2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 4/4] arm: pmu: Print counter values as hexadecimals Ricardo Koller
  2023-01-09 21:43   ` Oliver Upton
@ 2023-01-23 20:17   ` Eric Auger
  2023-01-25  4:37     ` Reiji Watanabe
  1 sibling, 1 reply; 22+ messages in thread
From: Eric Auger @ 2023-01-23 20:17 UTC (permalink / raw)
  To: Ricardo Koller, kvm, kvmarm, andrew.jones
  Cc: maz, alexandru.elisei, oliver.upton, reijiw



On 1/9/23 22:17, Ricardo Koller wrote:
> The arm/pmu test prints the value of counters as %ld.  Most tests start
> with counters around 0 or UINT_MAX, so having something like -16 instead of
> 0xffff_fff0 is not very useful.
>
> Report counter values as hexadecimals.
>
> Reported-by: Alexandru Elisei <alexandru.elisei@arm.com>
> Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>

Thanks

Eric
> ---
>  arm/pmu.c | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/arm/pmu.c b/arm/pmu.c
> index 72d0f50..77b0a70 100644
> --- a/arm/pmu.c
> +++ b/arm/pmu.c
> @@ -552,8 +552,8 @@ static void test_mem_access(bool overflow_at_64bits)
>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
>  	isb();
>  	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> -	report_info("counter #0 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
> -	report_info("counter #1 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
> +	report_info("counter #0 is 0x%lx (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
> +	report_info("counter #1 is 0x%lx (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
>  	/* We may measure more than 20 mem access depending on the core */
>  	report((read_regn_el0(pmevcntr, 0) == read_regn_el0(pmevcntr, 1)) &&
>  	       (read_regn_el0(pmevcntr, 0) >= 20) && !read_sysreg(pmovsclr_el0),
> @@ -568,7 +568,7 @@ static void test_mem_access(bool overflow_at_64bits)
>  	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>  	report(read_sysreg(pmovsclr_el0) == 0x3,
>  	       "Ran 20 mem accesses with expected overflows on both counters");
> -	report_info("cnt#0 = %ld cnt#1=%ld overflow=0x%lx",
> +	report_info("cnt#0=0x%lx cnt#1=0x%lx overflow=0x%lx",
>  			read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1),
>  			read_sysreg(pmovsclr_el0));
>  }
> @@ -599,7 +599,7 @@ static void test_sw_incr(bool overflow_at_64bits)
>  		write_sysreg(0x1, pmswinc_el0);
>  
>  	isb();
> -	report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
> +	report_info("SW_INCR counter #0 has value 0x%lx", read_regn_el0(pmevcntr, 0));
>  	report(read_regn_el0(pmevcntr, 0) == pre_overflow,
>  		"PWSYNC does not increment if PMCR.E is unset");
>  
> @@ -616,7 +616,7 @@ static void test_sw_incr(bool overflow_at_64bits)
>  	isb();
>  	report(read_regn_el0(pmevcntr, 0) == cntr0, "counter #0 after + 100 SW_INCR");
>  	report(read_regn_el0(pmevcntr, 1) == 100, "counter #1 after + 100 SW_INCR");
> -	report_info("counter values after 100 SW_INCR #0=%ld #1=%ld",
> +	report_info("counter values after 100 SW_INCR #0=0x%lx #1=0x%lx",
>  		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
>  	report(read_sysreg(pmovsclr_el0) == 0x1,
>  		"overflow on counter #0 after 100 SW_INCR");
> @@ -692,7 +692,7 @@ static void test_chained_sw_incr(bool unused)
>  	report((read_sysreg(pmovsclr_el0) == 0x1) &&
>  		(read_regn_el0(pmevcntr, 1) == 1),
>  		"overflow and chain counter incremented after 100 SW_INCR/CHAIN");
> -	report_info("overflow=0x%lx, #0=%ld #1=%ld", read_sysreg(pmovsclr_el0),
> +	report_info("overflow=0x%lx, #0=0x%lx #1=0x%lx", read_sysreg(pmovsclr_el0),
>  		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
>  
>  	/* 64b SW_INCR and overflow on CHAIN counter*/
> @@ -713,7 +713,7 @@ static void test_chained_sw_incr(bool unused)
>  	       (read_regn_el0(pmevcntr, 0) == cntr0) &&
>  	       (read_regn_el0(pmevcntr, 1) == cntr1),
>  	       "expected overflows and values after 100 SW_INCR/CHAIN");
> -	report_info("overflow=0x%lx, #0=%ld #1=%ld", read_sysreg(pmovsclr_el0),
> +	report_info("overflow=0x%lx, #0=0x%lx #1=0x%lx", read_sysreg(pmovsclr_el0),
>  		    read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
>  }
>  
> @@ -745,11 +745,11 @@ static void test_chain_promotion(bool unused)
>  	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
>  	report(!read_regn_el0(pmevcntr, 1) && (read_sysreg(pmovsclr_el0) == 0x1),
>  		"odd counter did not increment on overflow if disabled");
> -	report_info("MEM_ACCESS counter #0 has value %ld",
> +	report_info("MEM_ACCESS counter #0 has value 0x%lx",
>  		    read_regn_el0(pmevcntr, 0));
> -	report_info("CHAIN counter #1 has value %ld",
> +	report_info("CHAIN counter #1 has value 0x%lx",
>  		    read_regn_el0(pmevcntr, 1));
> -	report_info("overflow counter %ld", read_sysreg(pmovsclr_el0));
> +	report_info("overflow counter 0x%lx", read_sysreg(pmovsclr_el0));
>  
>  	/* start at 0xFFFFFFDC, +20 with CHAIN enabled, +20 with CHAIN disabled */
>  	pmu_reset();


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [kvm-unit-tests PATCH v3 3/4] arm: pmu: Add tests for 64-bit overflows
  2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 3/4] arm: pmu: Add tests for " Ricardo Koller
  2023-01-19  5:58   ` Reiji Watanabe
@ 2023-01-23 20:33   ` Eric Auger
  2023-01-24 15:26     ` Ricardo Koller
  1 sibling, 1 reply; 22+ messages in thread
From: Eric Auger @ 2023-01-23 20:33 UTC (permalink / raw)
  To: Ricardo Koller, kvm, kvmarm, andrew.jones
  Cc: maz, alexandru.elisei, oliver.upton, reijiw

Hi Ricardo,

On 1/9/23 22:17, Ricardo Koller wrote:
> Modify all tests checking overflows to support both 32 (PMCR_EL0.LP == 0)
> and 64-bit overflows (PMCR_EL0.LP == 1). 64-bit overflows are only
> supported on PMUv3p5.
>
> Note that chained tests do not implement "overflow_at_64bits == true".
> That's because there are no CHAIN events when "PMCR_EL0.LP == 1" (for more
> details see AArch64.IncrementEventCounter() pseudocode in the ARM ARM DDI
> 0487H.a, J1.1.1 "aarch64/debug").
>
> Signed-off-by: Ricardo Koller <ricarkol@google.com>
> ---
>  arm/pmu.c | 116 +++++++++++++++++++++++++++++++++++-------------------
>  1 file changed, 75 insertions(+), 41 deletions(-)
>
> diff --git a/arm/pmu.c b/arm/pmu.c
> index 0d06b59..72d0f50 100644
> --- a/arm/pmu.c
> +++ b/arm/pmu.c
> @@ -28,6 +28,7 @@
>  #define PMU_PMCR_X         (1 << 4)
>  #define PMU_PMCR_DP        (1 << 5)
>  #define PMU_PMCR_LC        (1 << 6)
> +#define PMU_PMCR_LP        (1 << 7)
>  #define PMU_PMCR_N_SHIFT   11
>  #define PMU_PMCR_N_MASK    0x1f
>  #define PMU_PMCR_ID_SHIFT  16
> @@ -56,9 +57,13 @@
>  
>  #define ALL_SET			0x00000000FFFFFFFFULL
>  #define ALL_CLEAR		0x0000000000000000ULL
> -#define PRE_OVERFLOW		0x00000000FFFFFFF0ULL
> +#define PRE_OVERFLOW_32		0x00000000FFFFFFF0ULL
> +#define PRE_OVERFLOW_64		0xFFFFFFFFFFFFFFF0ULL
>  #define PRE_OVERFLOW2		0x00000000FFFFFFDCULL
>  
> +#define PRE_OVERFLOW(__overflow_at_64bits)				\
> +	(__overflow_at_64bits ? PRE_OVERFLOW_64 : PRE_OVERFLOW_32)
> +
>  #define PMU_PPI			23
>  
>  struct pmu {
> @@ -449,8 +454,10 @@ static bool check_overflow_prerequisites(bool overflow_at_64bits)
>  static void test_basic_event_count(bool overflow_at_64bits)
>  {
>  	uint32_t implemented_counter_mask, non_implemented_counter_mask;
> -	uint32_t counter_mask;
> +	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> +	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
>  	uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
> +	uint32_t counter_mask;
>  
>  	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
>  	    !check_overflow_prerequisites(overflow_at_64bits))
> @@ -472,13 +479,13 @@ static void test_basic_event_count(bool overflow_at_64bits)
>  	 * clear cycle and all event counters and allow counter enablement
>  	 * through PMCNTENSET. LC is RES1.
>  	 */
> -	set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
> +	set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P | pmcr_lp);
>  	isb();
> -	report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC), "pmcr: reset counters");
> +	report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC | pmcr_lp), "pmcr: reset counters");
>  
>  	/* Preset counter #0 to pre overflow value to trigger an overflow */
> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> -	report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
> +	write_regn_el0(pmevcntr, 0, pre_overflow);
> +	report(read_regn_el0(pmevcntr, 0) == pre_overflow,
>  		"counter #0 preset to pre-overflow value");
>  	report(!read_regn_el0(pmevcntr, 1), "counter #1 is 0");
>  
> @@ -531,6 +538,8 @@ static void test_mem_access(bool overflow_at_64bits)
>  {
>  	void *addr = malloc(PAGE_SIZE);
>  	uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
> +	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> +	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
>  
>  	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
>  	    !check_overflow_prerequisites(overflow_at_64bits))
> @@ -542,7 +551,7 @@ static void test_mem_access(bool overflow_at_64bits)
>  	write_regn_el0(pmevtyper, 1, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
>  	isb();
> -	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
> +	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>  	report_info("counter #0 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
>  	report_info("counter #1 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
>  	/* We may measure more than 20 mem access depending on the core */
> @@ -552,11 +561,11 @@ static void test_mem_access(bool overflow_at_64bits)
>  
>  	pmu_reset();
>  
> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> -	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> +	write_regn_el0(pmevcntr, 0, pre_overflow);
> +	write_regn_el0(pmevcntr, 1, pre_overflow);
>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
>  	isb();
> -	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
> +	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>  	report(read_sysreg(pmovsclr_el0) == 0x3,
>  	       "Ran 20 mem accesses with expected overflows on both counters");
>  	report_info("cnt#0 = %ld cnt#1=%ld overflow=0x%lx",
> @@ -566,8 +575,10 @@ static void test_mem_access(bool overflow_at_64bits)
>  
>  static void test_sw_incr(bool overflow_at_64bits)
>  {
> +	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> +	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
>  	uint32_t events[] = {SW_INCR, SW_INCR};
> -	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> +	uint64_t cntr0 = (pre_overflow + 100) & pmevcntr_mask();
>  	int i;
>  
>  	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> @@ -581,7 +592,7 @@ static void test_sw_incr(bool overflow_at_64bits)
>  	/* enable counters #0 and #1 */
>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
>  
> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> +	write_regn_el0(pmevcntr, 0, pre_overflow);
>  	isb();
>  
>  	for (i = 0; i < 100; i++)
> @@ -589,14 +600,14 @@ static void test_sw_incr(bool overflow_at_64bits)
>  
>  	isb();
>  	report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
> -	report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
> +	report(read_regn_el0(pmevcntr, 0) == pre_overflow,
>  		"PWSYNC does not increment if PMCR.E is unset");
>  
>  	pmu_reset();
>  
> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> +	write_regn_el0(pmevcntr, 0, pre_overflow);
>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
> -	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> +	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>  	isb();
>  
>  	for (i = 0; i < 100; i++)
> @@ -614,6 +625,7 @@ static void test_sw_incr(bool overflow_at_64bits)
>  static void test_chained_counters(bool unused)
>  {
>  	uint32_t events[] = {CPU_CYCLES, CHAIN};
> +	uint64_t all_set = pmevcntr_mask();
ALL_SET_32 may be enough here, as suggested by Reiji.
>  
>  	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
>  		return;
> @@ -624,7 +636,7 @@ static void test_chained_counters(bool unused)
>  	write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
>  	/* enable counters #0 and #1 */
>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> +	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
>  
>  	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
>  
> @@ -636,26 +648,26 @@ static void test_chained_counters(bool unused)
>  	pmu_reset();
>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
>  
> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> +	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
>  	write_regn_el0(pmevcntr, 1, 0x1);
>  	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
>  	report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
>  	report(read_regn_el0(pmevcntr, 1) == 2, "CHAIN counter #1 set to 2");
>  	report(read_sysreg(pmovsclr_el0) == 0x1, "overflow recorded for chained incr #2");
>  
> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> -	write_regn_el0(pmevcntr, 1, ALL_SET);
> +	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> +	write_regn_el0(pmevcntr, 1, all_set);
and here
>  
>  	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
>  	report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
> -	report(!read_regn_el0(pmevcntr, 1), "CHAIN counter #1 wrapped");
> +	report(read_regn_el0(pmevcntr, 1) == 0, "CHAIN counter #1 wrapped");
>  	report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
>  }
>  
>  static void test_chained_sw_incr(bool unused)
>  {
>  	uint32_t events[] = {SW_INCR, CHAIN};
> -	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> +	uint64_t cntr0 = (PRE_OVERFLOW_32 + 100) & pmevcntr_mask();
>  	uint64_t cntr1 = (ALL_SET + 1) & pmevcntr_mask();
>  	int i;
The chained-counter changes could be moved to a separate patch to help
the reviewer focus on the new 64-bit overflow tests.
>  
> @@ -669,7 +681,7 @@ static void test_chained_sw_incr(bool unused)
>  	/* enable counters #0 and #1 */
>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
>  
> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> +	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
>  	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
>  	isb();
>  
> @@ -687,7 +699,7 @@ static void test_chained_sw_incr(bool unused)
>  	pmu_reset();
>  
>  	write_regn_el0(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0);
> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> +	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
>  	write_regn_el0(pmevcntr, 1, ALL_SET);
>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
>  	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> @@ -726,7 +738,7 @@ static void test_chain_promotion(bool unused)
>  
>  	/* Only enable even counter */
>  	pmu_reset();
> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> +	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
>  	write_sysreg_s(0x1, PMCNTENSET_EL0);
>  	isb();
>  
> @@ -856,6 +868,9 @@ static bool expect_interrupts(uint32_t bitmap)
>  
>  static void test_overflow_interrupt(bool overflow_at_64bits)
>  {
> +	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> +	uint64_t all_set = pmevcntr_mask();
> +	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
>  	uint32_t events[] = {MEM_ACCESS, SW_INCR};
>  	void *addr = malloc(PAGE_SIZE);
>  	int i;
> @@ -874,16 +889,16 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
>  	write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
>  	write_regn_el0(pmevtyper, 1, SW_INCR | PMEVTYPER_EXCLUDE_EL0);
>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> -	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> +	write_regn_el0(pmevcntr, 0, pre_overflow);
> +	write_regn_el0(pmevcntr, 1, pre_overflow);
>  	isb();
>  
>  	/* interrupts are disabled (PMINTENSET_EL1 == 0) */
>  
> -	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> +	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
I noticed this undocumented change (the loop count drops from 200 to 20).
Was it done on purpose?
>  	report(expect_interrupts(0), "no overflow interrupt after preset");
>  
> -	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> +	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>  	isb();
>  
>  	for (i = 0; i < 100; i++)
> @@ -898,12 +913,12 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
>  
>  	pmu_reset_stats();
>  
> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> -	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> +	write_regn_el0(pmevcntr, 0, pre_overflow);
> +	write_regn_el0(pmevcntr, 1, pre_overflow);
>  	write_sysreg(ALL_SET, pmintenset_el1);
>  	isb();
>  
> -	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> +	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>  	for (i = 0; i < 100; i++)
>  		write_sysreg(0x3, pmswinc_el0);
>  
> @@ -912,25 +927,40 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
>  	report(expect_interrupts(0x3),
>  		"overflow interrupts expected on #0 and #1");
>  
> -	/* promote to 64-b */
> +	/*
> +	 * promote to 64-b:
> +	 *
> +	 * This only applies to the !overflow_at_64bits case, as
> +	 * overflow_at_64bits doesn't implement CHAIN events. The
> +	 * overflow_at_64bits case just checks that chained counters are
> +	 * not incremented when PMCR.LP == 1.
> +	 */
>  
>  	pmu_reset_stats();
>  
>  	write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> +	write_regn_el0(pmevcntr, 0, pre_overflow);
>  	isb();
> -	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> -	report(expect_interrupts(0x1),
> -		"expect overflow interrupt on 32b boundary");
> +	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> +	report(expect_interrupts(0x1), "expect overflow interrupt");
>  
>  	/* overflow on odd counter */
>  	pmu_reset_stats();
> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> -	write_regn_el0(pmevcntr, 1, ALL_SET);
> +	write_regn_el0(pmevcntr, 0, pre_overflow);
> +	write_regn_el0(pmevcntr, 1, all_set);
>  	isb();
> -	mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E);
> -	report(expect_interrupts(0x3),
> -		"expect overflow interrupt on even and odd counter");
> +	mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> +	if (overflow_at_64bits) {
> +		report(expect_interrupts(0x1),
> +		       "expect overflow interrupt on even counter");
> +		report(read_regn_el0(pmevcntr, 1) == all_set,
> +		       "Odd counter did not change");
> +	} else {
> +		report(expect_interrupts(0x3),
> +		       "expect overflow interrupt on even and odd counter");
> +		report(read_regn_el0(pmevcntr, 1) != all_set,
> +		       "Odd counter wrapped");
> +	}
>  }
>  #endif
>  
> @@ -1131,10 +1161,13 @@ int main(int argc, char *argv[])
>  		report_prefix_pop();
>  	} else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
>  		run_test(argv[1], test_basic_event_count, false);
> +		run_test(argv[1], test_basic_event_count, true);
>  	} else if (strcmp(argv[1], "pmu-mem-access") == 0) {
>  		run_test(argv[1], test_mem_access, false);
> +		run_test(argv[1], test_mem_access, true);
>  	} else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
>  		run_test(argv[1], test_sw_incr, false);
> +		run_test(argv[1], test_sw_incr, true);
>  	} else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
>  		run_test(argv[1], test_chained_counters, false);
>  	} else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
> @@ -1143,6 +1176,7 @@ int main(int argc, char *argv[])
>  		run_test(argv[1], test_chain_promotion, false);
>  	} else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
>  		run_test(argv[1], test_overflow_interrupt, false);
> +		run_test(argv[1], test_overflow_interrupt, true);
>  	} else {
>  		report_abort("Unknown sub-test '%s'", argv[1]);
>  	}
Thanks

Eric


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [kvm-unit-tests PATCH v3 3/4] arm: pmu: Add tests for 64-bit overflows
  2023-01-19  5:58   ` Reiji Watanabe
@ 2023-01-24 15:11     ` Ricardo Koller
  2023-01-25  2:19     ` Ricardo Koller
  1 sibling, 0 replies; 22+ messages in thread
From: Ricardo Koller @ 2023-01-24 15:11 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: kvm, kvmarm, andrew.jones, maz, alexandru.elisei, eric.auger,
	oliver.upton

On Wed, Jan 18, 2023 at 09:58:38PM -0800, Reiji Watanabe wrote:
> Hi Ricardo,
> 
> On Mon, Jan 9, 2023 at 1:18 PM Ricardo Koller <ricarkol@google.com> wrote:
> >
> > Modify all tests checking overflows to support both 32 (PMCR_EL0.LP == 0)
> > and 64-bit overflows (PMCR_EL0.LP == 1). 64-bit overflows are only
> > supported on PMUv3p5.
> >
> > Note that chained tests do not implement "overflow_at_64bits == true".
> > That's because there are no CHAIN events when "PMCR_EL0.LP == 1" (for more
> > details see AArch64.IncrementEventCounter() pseudocode in the ARM ARM DDI
> > 0487H.a, J1.1.1 "aarch64/debug").
> >
> > Signed-off-by: Ricardo Koller <ricarkol@google.com>
> 
> Reviewed-by: Reiji Watanabe <reijiw@google.com>
> 
> I have a few nits below.
> 
> > ---
> >  arm/pmu.c | 116 +++++++++++++++++++++++++++++++++++-------------------
> >  1 file changed, 75 insertions(+), 41 deletions(-)
> >
> > diff --git a/arm/pmu.c b/arm/pmu.c
> > index 0d06b59..72d0f50 100644
> > --- a/arm/pmu.c
> > +++ b/arm/pmu.c
> > @@ -28,6 +28,7 @@
> >  #define PMU_PMCR_X         (1 << 4)
> >  #define PMU_PMCR_DP        (1 << 5)
> >  #define PMU_PMCR_LC        (1 << 6)
> > +#define PMU_PMCR_LP        (1 << 7)
> >  #define PMU_PMCR_N_SHIFT   11
> >  #define PMU_PMCR_N_MASK    0x1f
> >  #define PMU_PMCR_ID_SHIFT  16
> > @@ -56,9 +57,13 @@
> >
> >  #define ALL_SET                        0x00000000FFFFFFFFULL
> 
> Nit: Perhaps renaming this to ALL_SET_32 would be clearer.
> Since the test is now aware of 64-bit counters, the name 'ALL_SET'
> makes me think that all 64 bits are set.
> 

Sounds good, will do that in the next version.

> >  #define ALL_CLEAR              0x0000000000000000ULL
> > -#define PRE_OVERFLOW           0x00000000FFFFFFF0ULL
> > +#define PRE_OVERFLOW_32                0x00000000FFFFFFF0ULL
> > +#define PRE_OVERFLOW_64                0xFFFFFFFFFFFFFFF0ULL
> >  #define PRE_OVERFLOW2          0x00000000FFFFFFDCULL
> >
> > +#define PRE_OVERFLOW(__overflow_at_64bits)                             \
> > +       (__overflow_at_64bits ? PRE_OVERFLOW_64 : PRE_OVERFLOW_32)
> > +
> >  #define PMU_PPI                        23
> >
> >  struct pmu {
> > @@ -449,8 +454,10 @@ static bool check_overflow_prerequisites(bool overflow_at_64bits)
> >  static void test_basic_event_count(bool overflow_at_64bits)
> >  {
> >         uint32_t implemented_counter_mask, non_implemented_counter_mask;
> > -       uint32_t counter_mask;
> > +       uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> > +       uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
> >         uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
> > +       uint32_t counter_mask;
> >
> >         if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> >             !check_overflow_prerequisites(overflow_at_64bits))
> > @@ -472,13 +479,13 @@ static void test_basic_event_count(bool overflow_at_64bits)
> >          * clear cycle and all event counters and allow counter enablement
> >          * through PMCNTENSET. LC is RES1.
> >          */
> > -       set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
> > +       set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P | pmcr_lp);
> >         isb();
> > -       report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC), "pmcr: reset counters");
> > +       report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC | pmcr_lp), "pmcr: reset counters");
> >
> >         /* Preset counter #0 to pre overflow value to trigger an overflow */
> > -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > -       report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
> > +       write_regn_el0(pmevcntr, 0, pre_overflow);
> > +       report(read_regn_el0(pmevcntr, 0) == pre_overflow,
> >                 "counter #0 preset to pre-overflow value");
> >         report(!read_regn_el0(pmevcntr, 1), "counter #1 is 0");
> >
> > @@ -531,6 +538,8 @@ static void test_mem_access(bool overflow_at_64bits)
> >  {
> >         void *addr = malloc(PAGE_SIZE);
> >         uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
> > +       uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> > +       uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
> >
> >         if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> >             !check_overflow_prerequisites(overflow_at_64bits))
> > @@ -542,7 +551,7 @@ static void test_mem_access(bool overflow_at_64bits)
> >         write_regn_el0(pmevtyper, 1, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
> >         write_sysreg_s(0x3, PMCNTENSET_EL0);
> >         isb();
> > -       mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
> > +       mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >         report_info("counter #0 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
> >         report_info("counter #1 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
> >         /* We may measure more than 20 mem access depending on the core */
> > @@ -552,11 +561,11 @@ static void test_mem_access(bool overflow_at_64bits)
> >
> >         pmu_reset();
> >
> > -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > -       write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> > +       write_regn_el0(pmevcntr, 0, pre_overflow);
> > +       write_regn_el0(pmevcntr, 1, pre_overflow);
> >         write_sysreg_s(0x3, PMCNTENSET_EL0);
> >         isb();
> > -       mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
> > +       mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >         report(read_sysreg(pmovsclr_el0) == 0x3,
> >                "Ran 20 mem accesses with expected overflows on both counters");
> >         report_info("cnt#0 = %ld cnt#1=%ld overflow=0x%lx",
> > @@ -566,8 +575,10 @@ static void test_mem_access(bool overflow_at_64bits)
> >
> >  static void test_sw_incr(bool overflow_at_64bits)
> >  {
> > +       uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> > +       uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
> >         uint32_t events[] = {SW_INCR, SW_INCR};
> > -       uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> > +       uint64_t cntr0 = (pre_overflow + 100) & pmevcntr_mask();
> >         int i;
> >
> >         if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> > @@ -581,7 +592,7 @@ static void test_sw_incr(bool overflow_at_64bits)
> >         /* enable counters #0 and #1 */
> >         write_sysreg_s(0x3, PMCNTENSET_EL0);
> >
> > -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > +       write_regn_el0(pmevcntr, 0, pre_overflow);
> >         isb();
> >
> >         for (i = 0; i < 100; i++)
> > @@ -589,14 +600,14 @@ static void test_sw_incr(bool overflow_at_64bits)
> >
> >         isb();
> >         report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
> > -       report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
> > +       report(read_regn_el0(pmevcntr, 0) == pre_overflow,
> >                 "PWSYNC does not increment if PMCR.E is unset");
> >
> >         pmu_reset();
> >
> > -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > +       write_regn_el0(pmevcntr, 0, pre_overflow);
> >         write_sysreg_s(0x3, PMCNTENSET_EL0);
> > -       set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> > +       set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >         isb();
> >
> >         for (i = 0; i < 100; i++)
> > @@ -614,6 +625,7 @@ static void test_sw_incr(bool overflow_at_64bits)
> >  static void test_chained_counters(bool unused)
> >  {
> >         uint32_t events[] = {CPU_CYCLES, CHAIN};
> > +       uint64_t all_set = pmevcntr_mask();
> >
> >         if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> >                 return;
> > @@ -624,7 +636,7 @@ static void test_chained_counters(bool unused)
> >         write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
> >         /* enable counters #0 and #1 */
> >         write_sysreg_s(0x3, PMCNTENSET_EL0);
> > -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > +       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> >
> >         precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
> >
> > @@ -636,26 +648,26 @@ static void test_chained_counters(bool unused)
> >         pmu_reset();
> >         write_sysreg_s(0x3, PMCNTENSET_EL0);
> >
> > -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > +       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> >         write_regn_el0(pmevcntr, 1, 0x1);
> >         precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
> >         report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
> >         report(read_regn_el0(pmevcntr, 1) == 2, "CHAIN counter #1 set to 2");
> >         report(read_sysreg(pmovsclr_el0) == 0x1, "overflow recorded for chained incr #2");
> >
> > -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > -       write_regn_el0(pmevcntr, 1, ALL_SET);
> > +       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> > +       write_regn_el0(pmevcntr, 1, all_set);
> >
> >         precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
> >         report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
> > -       report(!read_regn_el0(pmevcntr, 1), "CHAIN counter #1 wrapped");
> > +       report(read_regn_el0(pmevcntr, 1) == 0, "CHAIN counter #1 wrapped");
> >         report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
> >  }
> >
> >  static void test_chained_sw_incr(bool unused)
> >  {
> >         uint32_t events[] = {SW_INCR, CHAIN};
> > -       uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> > +       uint64_t cntr0 = (PRE_OVERFLOW_32 + 100) & pmevcntr_mask();
> >         uint64_t cntr1 = (ALL_SET + 1) & pmevcntr_mask();
> >         int i;
> >
> > @@ -669,7 +681,7 @@ static void test_chained_sw_incr(bool unused)
> >         /* enable counters #0 and #1 */
> >         write_sysreg_s(0x3, PMCNTENSET_EL0);
> >
> > -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > +       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> >         set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> >         isb();
> >
> > @@ -687,7 +699,7 @@ static void test_chained_sw_incr(bool unused)
> >         pmu_reset();
> >
> >         write_regn_el0(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0);
> > -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > +       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> >         write_regn_el0(pmevcntr, 1, ALL_SET);
> >         write_sysreg_s(0x3, PMCNTENSET_EL0);
> >         set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> > @@ -726,7 +738,7 @@ static void test_chain_promotion(bool unused)
> >
> >         /* Only enable even counter */
> >         pmu_reset();
> > -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > +       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> >         write_sysreg_s(0x1, PMCNTENSET_EL0);
> >         isb();
> >
> > @@ -856,6 +868,9 @@ static bool expect_interrupts(uint32_t bitmap)
> >
> >  static void test_overflow_interrupt(bool overflow_at_64bits)
> >  {
> > +       uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> > +       uint64_t all_set = pmevcntr_mask();
> > +       uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
> >         uint32_t events[] = {MEM_ACCESS, SW_INCR};
> >         void *addr = malloc(PAGE_SIZE);
> >         int i;
> > @@ -874,16 +889,16 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> >         write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
> >         write_regn_el0(pmevtyper, 1, SW_INCR | PMEVTYPER_EXCLUDE_EL0);
> >         write_sysreg_s(0x3, PMCNTENSET_EL0);
> > -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > -       write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> > +       write_regn_el0(pmevcntr, 0, pre_overflow);
> > +       write_regn_el0(pmevcntr, 1, pre_overflow);
> >         isb();
> >
> >         /* interrupts are disabled (PMINTENSET_EL1 == 0) */
> >
> > -       mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> > +       mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >         report(expect_interrupts(0), "no overflow interrupt after preset");
> >
> > -       set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> > +       set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >         isb();
> >
> >         for (i = 0; i < 100; i++)
> > @@ -898,12 +913,12 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> >
> >         pmu_reset_stats();
> 
> This isn't directly related to the patch, but since bits of
> pmovsclr_el0 are already set (even though interrupts are disabled),
> I think it would be good to clear pmovsclr_el0 here.
> 
> Thank you,
> Reiji
> 
> 
> 
> >
> > -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > -       write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> > +       write_regn_el0(pmevcntr, 0, pre_overflow);
> > +       write_regn_el0(pmevcntr, 1, pre_overflow);
> >         write_sysreg(ALL_SET, pmintenset_el1);
> >         isb();
> >
> > -       mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> > +       mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >         for (i = 0; i < 100; i++)
> >                 write_sysreg(0x3, pmswinc_el0);
> >
> > @@ -912,25 +927,40 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> >         report(expect_interrupts(0x3),
> >                 "overflow interrupts expected on #0 and #1");
> >
> > -       /* promote to 64-b */
> > +       /*
> > +        * promote to 64-b:
> > +        *
> > +        * This only applies to the !overflow_at_64bits case, as
> > +        * overflow_at_64bits doesn't implement CHAIN events. The
> > +        * overflow_at_64bits case just checks that chained counters are
> > +        * not incremented when PMCR.LP == 1.
> > +        */
> >
> >         pmu_reset_stats();
> >
> >         write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
> > -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > +       write_regn_el0(pmevcntr, 0, pre_overflow);
> >         isb();
> > -       mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> > -       report(expect_interrupts(0x1),
> > -               "expect overflow interrupt on 32b boundary");
> > +       mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> > +       report(expect_interrupts(0x1), "expect overflow interrupt");
> >
> >         /* overflow on odd counter */
> >         pmu_reset_stats();
> > -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > -       write_regn_el0(pmevcntr, 1, ALL_SET);
> > +       write_regn_el0(pmevcntr, 0, pre_overflow);
> > +       write_regn_el0(pmevcntr, 1, all_set);
> >         isb();
> > -       mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E);
> > -       report(expect_interrupts(0x3),
> > -               "expect overflow interrupt on even and odd counter");
> > +       mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> > +       if (overflow_at_64bits) {
> > +               report(expect_interrupts(0x1),
> > +                      "expect overflow interrupt on even counter");
> > +               report(read_regn_el0(pmevcntr, 1) == all_set,
> > +                      "Odd counter did not change");
> > +       } else {
> > +               report(expect_interrupts(0x3),
> > +                      "expect overflow interrupt on even and odd counter");
> > +               report(read_regn_el0(pmevcntr, 1) != all_set,
> > +                      "Odd counter wrapped");
> > +       }
> >  }
> >  #endif
> >
> > @@ -1131,10 +1161,13 @@ int main(int argc, char *argv[])
> >                 report_prefix_pop();
> >         } else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
> >                 run_test(argv[1], test_basic_event_count, false);
> > +               run_test(argv[1], test_basic_event_count, true);
> >         } else if (strcmp(argv[1], "pmu-mem-access") == 0) {
> >                 run_test(argv[1], test_mem_access, false);
> > +               run_test(argv[1], test_mem_access, true);
> >         } else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
> >                 run_test(argv[1], test_sw_incr, false);
> > +               run_test(argv[1], test_sw_incr, true);
> >         } else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
> >                 run_test(argv[1], test_chained_counters, false);
> >         } else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
> > @@ -1143,6 +1176,7 @@ int main(int argc, char *argv[])
> >                 run_test(argv[1], test_chain_promotion, false);
> >         } else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
> >                 run_test(argv[1], test_overflow_interrupt, false);
> > +               run_test(argv[1], test_overflow_interrupt, true);
> >         } else {
> >                 report_abort("Unknown sub-test '%s'", argv[1]);
> >         }
> > --
> > 2.39.0.314.g84b9a713c41-goog
> >


* Re: [kvm-unit-tests PATCH v3 3/4] arm: pmu: Add tests for 64-bit overflows
  2023-01-23 20:33   ` Eric Auger
@ 2023-01-24 15:26     ` Ricardo Koller
  2023-01-24 20:15       ` Eric Auger
  0 siblings, 1 reply; 22+ messages in thread
From: Ricardo Koller @ 2023-01-24 15:26 UTC (permalink / raw)
  To: Eric Auger
  Cc: kvm, kvmarm, andrew.jones, maz, alexandru.elisei, oliver.upton, reijiw

On Mon, Jan 23, 2023 at 09:33:50PM +0100, Eric Auger wrote:
> Hi Ricardo,
> 
> On 1/9/23 22:17, Ricardo Koller wrote:
> > Modify all tests checking overflows to support both 32 (PMCR_EL0.LP == 0)
> > and 64-bit overflows (PMCR_EL0.LP == 1). 64-bit overflows are only
> > supported on PMUv3p5.
> >
> > Note that chained tests do not implement "overflow_at_64bits == true".
> > That's because there are no CHAIN events when "PMCR_EL0.LP == 1" (for more
> > details see AArch64.IncrementEventCounter() pseudocode in the ARM ARM DDI
> > 0487H.a, J1.1.1 "aarch64/debug").
> >
> > Signed-off-by: Ricardo Koller <ricarkol@google.com>
> > ---
> >  arm/pmu.c | 116 +++++++++++++++++++++++++++++++++++-------------------
> >  1 file changed, 75 insertions(+), 41 deletions(-)
> >
> > diff --git a/arm/pmu.c b/arm/pmu.c
> > index 0d06b59..72d0f50 100644
> > --- a/arm/pmu.c
> > +++ b/arm/pmu.c
> > @@ -28,6 +28,7 @@
> >  #define PMU_PMCR_X         (1 << 4)
> >  #define PMU_PMCR_DP        (1 << 5)
> >  #define PMU_PMCR_LC        (1 << 6)
> > +#define PMU_PMCR_LP        (1 << 7)
> >  #define PMU_PMCR_N_SHIFT   11
> >  #define PMU_PMCR_N_MASK    0x1f
> >  #define PMU_PMCR_ID_SHIFT  16
> > @@ -56,9 +57,13 @@
> >  
> >  #define ALL_SET			0x00000000FFFFFFFFULL
> >  #define ALL_CLEAR		0x0000000000000000ULL
> > -#define PRE_OVERFLOW		0x00000000FFFFFFF0ULL
> > +#define PRE_OVERFLOW_32		0x00000000FFFFFFF0ULL
> > +#define PRE_OVERFLOW_64		0xFFFFFFFFFFFFFFF0ULL
> >  #define PRE_OVERFLOW2		0x00000000FFFFFFDCULL
> >  
> > +#define PRE_OVERFLOW(__overflow_at_64bits)				\
> > +	(__overflow_at_64bits ? PRE_OVERFLOW_64 : PRE_OVERFLOW_32)
> > +
> >  #define PMU_PPI			23
> >  
> >  struct pmu {
> > @@ -449,8 +454,10 @@ static bool check_overflow_prerequisites(bool overflow_at_64bits)
> >  static void test_basic_event_count(bool overflow_at_64bits)
> >  {
> >  	uint32_t implemented_counter_mask, non_implemented_counter_mask;
> > -	uint32_t counter_mask;
> > +	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> > +	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
> >  	uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
> > +	uint32_t counter_mask;
> >  
> >  	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> >  	    !check_overflow_prerequisites(overflow_at_64bits))
> > @@ -472,13 +479,13 @@ static void test_basic_event_count(bool overflow_at_64bits)
> >  	 * clear cycle and all event counters and allow counter enablement
> >  	 * through PMCNTENSET. LC is RES1.
> >  	 */
> > -	set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
> > +	set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P | pmcr_lp);
> >  	isb();
> > -	report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC), "pmcr: reset counters");
> > +	report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC | pmcr_lp), "pmcr: reset counters");
> >  
> >  	/* Preset counter #0 to pre overflow value to trigger an overflow */
> > -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > -	report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
> > +	write_regn_el0(pmevcntr, 0, pre_overflow);
> > +	report(read_regn_el0(pmevcntr, 0) == pre_overflow,
> >  		"counter #0 preset to pre-overflow value");
> >  	report(!read_regn_el0(pmevcntr, 1), "counter #1 is 0");
> >  
> > @@ -531,6 +538,8 @@ static void test_mem_access(bool overflow_at_64bits)
> >  {
> >  	void *addr = malloc(PAGE_SIZE);
> >  	uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
> > +	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> > +	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
> >  
> >  	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> >  	    !check_overflow_prerequisites(overflow_at_64bits))
> > @@ -542,7 +551,7 @@ static void test_mem_access(bool overflow_at_64bits)
> >  	write_regn_el0(pmevtyper, 1, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
> >  	write_sysreg_s(0x3, PMCNTENSET_EL0);
> >  	isb();
> > -	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
> > +	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >  	report_info("counter #0 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
> >  	report_info("counter #1 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
> >  	/* We may measure more than 20 mem access depending on the core */
> > @@ -552,11 +561,11 @@ static void test_mem_access(bool overflow_at_64bits)
> >  
> >  	pmu_reset();
> >  
> > -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > -	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> > +	write_regn_el0(pmevcntr, 0, pre_overflow);
> > +	write_regn_el0(pmevcntr, 1, pre_overflow);
> >  	write_sysreg_s(0x3, PMCNTENSET_EL0);
> >  	isb();
> > -	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
> > +	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >  	report(read_sysreg(pmovsclr_el0) == 0x3,
> >  	       "Ran 20 mem accesses with expected overflows on both counters");
> >  	report_info("cnt#0 = %ld cnt#1=%ld overflow=0x%lx",
> > @@ -566,8 +575,10 @@ static void test_mem_access(bool overflow_at_64bits)
> >  
> >  static void test_sw_incr(bool overflow_at_64bits)
> >  {
> > +	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> > +	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
> >  	uint32_t events[] = {SW_INCR, SW_INCR};
> > -	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> > +	uint64_t cntr0 = (pre_overflow + 100) & pmevcntr_mask();
> >  	int i;
> >  
> >  	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> > @@ -581,7 +592,7 @@ static void test_sw_incr(bool overflow_at_64bits)
> >  	/* enable counters #0 and #1 */
> >  	write_sysreg_s(0x3, PMCNTENSET_EL0);
> >  
> > -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > +	write_regn_el0(pmevcntr, 0, pre_overflow);
> >  	isb();
> >  
> >  	for (i = 0; i < 100; i++)
> > @@ -589,14 +600,14 @@ static void test_sw_incr(bool overflow_at_64bits)
> >  
> >  	isb();
> >  	report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
> > -	report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
> > +	report(read_regn_el0(pmevcntr, 0) == pre_overflow,
> >  		"PWSYNC does not increment if PMCR.E is unset");
> >  
> >  	pmu_reset();
> >  
> > -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > +	write_regn_el0(pmevcntr, 0, pre_overflow);
> >  	write_sysreg_s(0x3, PMCNTENSET_EL0);
> > -	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> > +	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >  	isb();
> >  
> >  	for (i = 0; i < 100; i++)
> > @@ -614,6 +625,7 @@ static void test_sw_incr(bool overflow_at_64bits)
> >  static void test_chained_counters(bool unused)
> >  {
> >  	uint32_t events[] = {CPU_CYCLES, CHAIN};
> > +	uint64_t all_set = pmevcntr_mask();
> ALL_SET_32 may be enough, as suggested by Reiji.

I think we still want to conditionally use ALL_SET_64 for the
pre-overflow value of the odd counter in the case of PMUv3p5.
More below.

> >  
> >  	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> >  		return;
> > @@ -624,7 +636,7 @@ static void test_chained_counters(bool unused)
> >  	write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
> >  	/* enable counters #0 and #1 */
> >  	write_sysreg_s(0x3, PMCNTENSET_EL0);
> > -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > +	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> >  
> >  	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
> >  
> > @@ -636,26 +648,26 @@ static void test_chained_counters(bool unused)
> >  	pmu_reset();
> >  	write_sysreg_s(0x3, PMCNTENSET_EL0);
> >  
> > -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > +	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> >  	write_regn_el0(pmevcntr, 1, 0x1);
> >  	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
> >  	report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
> >  	report(read_regn_el0(pmevcntr, 1) == 2, "CHAIN counter #1 set to 2");
> >  	report(read_sysreg(pmovsclr_el0) == 0x1, "overflow recorded for chained incr #2");

Since we want to test the overflow of the chained counter, we need
ALL_SET_64 for its pre-overflow value.

> >  
> > -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > -	write_regn_el0(pmevcntr, 1, ALL_SET);
> > +	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> > +	write_regn_el0(pmevcntr, 1, all_set);
> and here
> >  
> >  	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
> >  	report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
> > -	report(!read_regn_el0(pmevcntr, 1), "CHAIN counter #1 wrapped");
> > +	report(read_regn_el0(pmevcntr, 1) == 0, "CHAIN counter #1 wrapped");
> >  	report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
> >  }
> >  
> >  static void test_chained_sw_incr(bool unused)
> >  {
> >  	uint32_t events[] = {SW_INCR, CHAIN};
> > -	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> > +	uint64_t cntr0 = (PRE_OVERFLOW_32 + 100) & pmevcntr_mask();
> >  	uint64_t cntr1 = (ALL_SET + 1) & pmevcntr_mask();
> >  	int i;
> The chained-counter changes could be put in a separate patch to help
> the reviewer focus on the new 64-bit overflow tests.

Got it, will try that for the next version.

> >  
> > @@ -669,7 +681,7 @@ static void test_chained_sw_incr(bool unused)
> >  	/* enable counters #0 and #1 */
> >  	write_sysreg_s(0x3, PMCNTENSET_EL0);
> >  
> > -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > +	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> >  	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> >  	isb();
> >  
> > @@ -687,7 +699,7 @@ static void test_chained_sw_incr(bool unused)
> >  	pmu_reset();
> >  
> >  	write_regn_el0(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0);
> > -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > +	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> >  	write_regn_el0(pmevcntr, 1, ALL_SET);
> >  	write_sysreg_s(0x3, PMCNTENSET_EL0);
> >  	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> > @@ -726,7 +738,7 @@ static void test_chain_promotion(bool unused)
> >  
> >  	/* Only enable even counter */
> >  	pmu_reset();
> > -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > +	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> >  	write_sysreg_s(0x1, PMCNTENSET_EL0);
> >  	isb();
> >  
> > @@ -856,6 +868,9 @@ static bool expect_interrupts(uint32_t bitmap)
> >  
> >  static void test_overflow_interrupt(bool overflow_at_64bits)
> >  {
> > +	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> > +	uint64_t all_set = pmevcntr_mask();
> > +	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
> >  	uint32_t events[] = {MEM_ACCESS, SW_INCR};
> >  	void *addr = malloc(PAGE_SIZE);
> >  	int i;
> > @@ -874,16 +889,16 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> >  	write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
> >  	write_regn_el0(pmevtyper, 1, SW_INCR | PMEVTYPER_EXCLUDE_EL0);
> >  	write_sysreg_s(0x3, PMCNTENSET_EL0);
> > -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > -	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> > +	write_regn_el0(pmevcntr, 0, pre_overflow);
> > +	write_regn_el0(pmevcntr, 1, pre_overflow);
> >  	isb();
> >  
> >  	/* interrupts are disabled (PMINTENSET_EL1 == 0) */
> >  
> > -	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> > +	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> I noticed this undocumented change. Was it done on purpose?

Yes, it's intentional. It conditionally sets PMCR.LP for the next
subtest when overflow_at_64bits==true. That's also why the pre-overflow
value is close to ALL_SET_64 (and not ALL_SET_32).

> >  	report(expect_interrupts(0), "no overflow interrupt after preset");
> >  
> > -	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> > +	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >  	isb();
> >  
> >  	for (i = 0; i < 100; i++)
> > @@ -898,12 +913,12 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> >  
> >  	pmu_reset_stats();
> >  
> > -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > -	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> > +	write_regn_el0(pmevcntr, 0, pre_overflow);
> > +	write_regn_el0(pmevcntr, 1, pre_overflow);
> >  	write_sysreg(ALL_SET, pmintenset_el1);
> >  	isb();
> >  
> > -	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> > +	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >  	for (i = 0; i < 100; i++)
> >  		write_sysreg(0x3, pmswinc_el0);
> >  
> > @@ -912,25 +927,40 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> >  	report(expect_interrupts(0x3),
> >  		"overflow interrupts expected on #0 and #1");
> >  
> > -	/* promote to 64-b */
> > +	/*
> > +	 * promote to 64-b:
> > +	 *
> > +	 * This only applies to the !overflow_at_64bits case, as
> > +	 * overflow_at_64bits doesn't implement CHAIN events. The
> > +	 * overflow_at_64bits case just checks that chained counters are
> > +	 * not incremented when PMCR.LP == 1.
> > +	 */
> >  
> >  	pmu_reset_stats();
> >  
> >  	write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
> > -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > +	write_regn_el0(pmevcntr, 0, pre_overflow);
> >  	isb();
> > -	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> > -	report(expect_interrupts(0x1),
> > -		"expect overflow interrupt on 32b boundary");
> > +	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> > +	report(expect_interrupts(0x1), "expect overflow interrupt");
> >  
> >  	/* overflow on odd counter */
> >  	pmu_reset_stats();
> > -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > -	write_regn_el0(pmevcntr, 1, ALL_SET);
> > +	write_regn_el0(pmevcntr, 0, pre_overflow);
> > +	write_regn_el0(pmevcntr, 1, all_set);
> >  	isb();
> > -	mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E);
> > -	report(expect_interrupts(0x3),
> > -		"expect overflow interrupt on even and odd counter");
> > +	mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> > +	if (overflow_at_64bits) {
> > +		report(expect_interrupts(0x1),
> > +		       "expect overflow interrupt on even counter");
> > +		report(read_regn_el0(pmevcntr, 1) == all_set,
> > +		       "Odd counter did not change");
> > +	} else {
> > +		report(expect_interrupts(0x3),
> > +		       "expect overflow interrupt on even and odd counter");
> > +		report(read_regn_el0(pmevcntr, 1) != all_set,
> > +		       "Odd counter wrapped");
> > +	}
> >  }
> >  #endif
> >  
> > @@ -1131,10 +1161,13 @@ int main(int argc, char *argv[])
> >  		report_prefix_pop();
> >  	} else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
> >  		run_test(argv[1], test_basic_event_count, false);
> > +		run_test(argv[1], test_basic_event_count, true);
> >  	} else if (strcmp(argv[1], "pmu-mem-access") == 0) {
> >  		run_test(argv[1], test_mem_access, false);
> > +		run_test(argv[1], test_mem_access, true);
> >  	} else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
> >  		run_test(argv[1], test_sw_incr, false);
> > +		run_test(argv[1], test_sw_incr, true);
> >  	} else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
> >  		run_test(argv[1], test_chained_counters, false);
> >  	} else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
> > @@ -1143,6 +1176,7 @@ int main(int argc, char *argv[])
> >  		run_test(argv[1], test_chain_promotion, false);
> >  	} else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
> >  		run_test(argv[1], test_overflow_interrupt, false);
> > +		run_test(argv[1], test_overflow_interrupt, true);
> >  	} else {
> >  		report_abort("Unknown sub-test '%s'", argv[1]);
> >  	}
> Thanks
> 
> Eric
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [kvm-unit-tests PATCH v3 3/4] arm: pmu: Add tests for 64-bit overflows
  2023-01-24 15:26     ` Ricardo Koller
@ 2023-01-24 20:15       ` Eric Auger
  2023-01-26 16:45         ` Ricardo Koller
  0 siblings, 1 reply; 22+ messages in thread
From: Eric Auger @ 2023-01-24 20:15 UTC (permalink / raw)
  To: Ricardo Koller
  Cc: kvm, kvmarm, andrew.jones, maz, alexandru.elisei, oliver.upton, reijiw



On 1/24/23 16:26, Ricardo Koller wrote:
> On Mon, Jan 23, 2023 at 09:33:50PM +0100, Eric Auger wrote:
>> Hi Ricardo,
>>
>> On 1/9/23 22:17, Ricardo Koller wrote:
>>> Modify all tests checking overflows to support both 32 (PMCR_EL0.LP == 0)
>>> and 64-bit overflows (PMCR_EL0.LP == 1). 64-bit overflows are only
>>> supported on PMUv3p5.
>>>
>>> Note that chained tests do not implement "overflow_at_64bits == true".
>>> That's because there are no CHAIN events when "PMCR_EL0.LP == 1" (for more
>>> details see AArch64.IncrementEventCounter() pseudocode in the ARM ARM DDI
>>> 0487H.a, J1.1.1 "aarch64/debug").
>>>
>>> Signed-off-by: Ricardo Koller <ricarkol@google.com>
>>> ---
>>>  arm/pmu.c | 116 +++++++++++++++++++++++++++++++++++-------------------
>>>  1 file changed, 75 insertions(+), 41 deletions(-)
>>>
>>> diff --git a/arm/pmu.c b/arm/pmu.c
>>> index 0d06b59..72d0f50 100644
>>> --- a/arm/pmu.c
>>> +++ b/arm/pmu.c
>>> @@ -28,6 +28,7 @@
>>>  #define PMU_PMCR_X         (1 << 4)
>>>  #define PMU_PMCR_DP        (1 << 5)
>>>  #define PMU_PMCR_LC        (1 << 6)
>>> +#define PMU_PMCR_LP        (1 << 7)
>>>  #define PMU_PMCR_N_SHIFT   11
>>>  #define PMU_PMCR_N_MASK    0x1f
>>>  #define PMU_PMCR_ID_SHIFT  16
>>> @@ -56,9 +57,13 @@
>>>  
>>>  #define ALL_SET			0x00000000FFFFFFFFULL
>>>  #define ALL_CLEAR		0x0000000000000000ULL
>>> -#define PRE_OVERFLOW		0x00000000FFFFFFF0ULL
>>> +#define PRE_OVERFLOW_32		0x00000000FFFFFFF0ULL
>>> +#define PRE_OVERFLOW_64		0xFFFFFFFFFFFFFFF0ULL
>>>  #define PRE_OVERFLOW2		0x00000000FFFFFFDCULL
>>>  
>>> +#define PRE_OVERFLOW(__overflow_at_64bits)				\
>>> +	(__overflow_at_64bits ? PRE_OVERFLOW_64 : PRE_OVERFLOW_32)
>>> +
>>>  #define PMU_PPI			23
>>>  
>>>  struct pmu {
>>> @@ -449,8 +454,10 @@ static bool check_overflow_prerequisites(bool overflow_at_64bits)
>>>  static void test_basic_event_count(bool overflow_at_64bits)
>>>  {
>>>  	uint32_t implemented_counter_mask, non_implemented_counter_mask;
>>> -	uint32_t counter_mask;
>>> +	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
>>> +	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
>>>  	uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
>>> +	uint32_t counter_mask;
>>>  
>>>  	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
>>>  	    !check_overflow_prerequisites(overflow_at_64bits))
>>> @@ -472,13 +479,13 @@ static void test_basic_event_count(bool overflow_at_64bits)
>>>  	 * clear cycle and all event counters and allow counter enablement
>>>  	 * through PMCNTENSET. LC is RES1.
>>>  	 */
>>> -	set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
>>> +	set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P | pmcr_lp);
>>>  	isb();
>>> -	report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC), "pmcr: reset counters");
>>> +	report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC | pmcr_lp), "pmcr: reset counters");
>>>  
>>>  	/* Preset counter #0 to pre overflow value to trigger an overflow */
>>> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
>>> -	report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
>>> +	write_regn_el0(pmevcntr, 0, pre_overflow);
>>> +	report(read_regn_el0(pmevcntr, 0) == pre_overflow,
>>>  		"counter #0 preset to pre-overflow value");
>>>  	report(!read_regn_el0(pmevcntr, 1), "counter #1 is 0");
>>>  
>>> @@ -531,6 +538,8 @@ static void test_mem_access(bool overflow_at_64bits)
>>>  {
>>>  	void *addr = malloc(PAGE_SIZE);
>>>  	uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
>>> +	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
>>> +	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
>>>  
>>>  	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
>>>  	    !check_overflow_prerequisites(overflow_at_64bits))
>>> @@ -542,7 +551,7 @@ static void test_mem_access(bool overflow_at_64bits)
>>>  	write_regn_el0(pmevtyper, 1, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
>>>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
>>>  	isb();
>>> -	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
>>> +	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>>>  	report_info("counter #0 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
>>>  	report_info("counter #1 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
>>>  	/* We may measure more than 20 mem access depending on the core */
>>> @@ -552,11 +561,11 @@ static void test_mem_access(bool overflow_at_64bits)
>>>  
>>>  	pmu_reset();
>>>  
>>> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
>>> -	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
>>> +	write_regn_el0(pmevcntr, 0, pre_overflow);
>>> +	write_regn_el0(pmevcntr, 1, pre_overflow);
>>>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
>>>  	isb();
>>> -	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
>>> +	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>>>  	report(read_sysreg(pmovsclr_el0) == 0x3,
>>>  	       "Ran 20 mem accesses with expected overflows on both counters");
>>>  	report_info("cnt#0 = %ld cnt#1=%ld overflow=0x%lx",
>>> @@ -566,8 +575,10 @@ static void test_mem_access(bool overflow_at_64bits)
>>>  
>>>  static void test_sw_incr(bool overflow_at_64bits)
>>>  {
>>> +	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
>>> +	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
>>>  	uint32_t events[] = {SW_INCR, SW_INCR};
>>> -	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
>>> +	uint64_t cntr0 = (pre_overflow + 100) & pmevcntr_mask();
>>>  	int i;
>>>  
>>>  	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
>>> @@ -581,7 +592,7 @@ static void test_sw_incr(bool overflow_at_64bits)
>>>  	/* enable counters #0 and #1 */
>>>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
>>>  
>>> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
>>> +	write_regn_el0(pmevcntr, 0, pre_overflow);
>>>  	isb();
>>>  
>>>  	for (i = 0; i < 100; i++)
>>> @@ -589,14 +600,14 @@ static void test_sw_incr(bool overflow_at_64bits)
>>>  
>>>  	isb();
>>>  	report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
>>> -	report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
>>> +	report(read_regn_el0(pmevcntr, 0) == pre_overflow,
>>>  		"PWSYNC does not increment if PMCR.E is unset");
>>>  
>>>  	pmu_reset();
>>>  
>>> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
>>> +	write_regn_el0(pmevcntr, 0, pre_overflow);
>>>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
>>> -	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
>>> +	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>>>  	isb();
>>>  
>>>  	for (i = 0; i < 100; i++)
>>> @@ -614,6 +625,7 @@ static void test_sw_incr(bool overflow_at_64bits)
>>>  static void test_chained_counters(bool unused)
>>>  {
>>>  	uint32_t events[] = {CPU_CYCLES, CHAIN};
>>> +	uint64_t all_set = pmevcntr_mask();
>> ALL_SET_32 may be enough, as suggested by Reiji;
> I think we still want to conditionally use ALL_SET_64 for the
> pre-overflow value of the odd counter in the case of PMUv3p5.
> More below.
Ah OK, I initially understood there were no chain events at all with
PMUv3p5. However, they do exist if LP=0. Sorry for the noise.
>
>>>  
>>>  	if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
>>>  		return;
>>> @@ -624,7 +636,7 @@ static void test_chained_counters(bool unused)
>>>  	write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
>>>  	/* enable counters #0 and #1 */
>>>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
>>> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
>>> +	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
>>>  
>>>  	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
>>>  
>>> @@ -636,26 +648,26 @@ static void test_chained_counters(bool unused)
>>>  	pmu_reset();
>>>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
>>>  
>>> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
>>> +	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
>>>  	write_regn_el0(pmevcntr, 1, 0x1);
>>>  	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
>>>  	report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
>>>  	report(read_regn_el0(pmevcntr, 1) == 2, "CHAIN counter #1 set to 2");
>>>  	report(read_sysreg(pmovsclr_el0) == 0x1, "overflow recorded for chained incr #2");
> As we want to test the overflow of the chained counter, we need
> ALL_SET_64 for its preoverflow value.
>
>>>  
>>> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
>>> -	write_regn_el0(pmevcntr, 1, ALL_SET);
>>> +	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
>>> +	write_regn_el0(pmevcntr, 1, all_set);
>> and here
>>>  
>>>  	precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
>>>  	report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
>>> -	report(!read_regn_el0(pmevcntr, 1), "CHAIN counter #1 wrapped");
>>> +	report(read_regn_el0(pmevcntr, 1) == 0, "CHAIN counter #1 wrapped");
>>>  	report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
>>>  }
>>>  
>>>  static void test_chained_sw_incr(bool unused)
>>>  {
>>>  	uint32_t events[] = {SW_INCR, CHAIN};
>>> -	uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
>>> +	uint64_t cntr0 = (PRE_OVERFLOW_32 + 100) & pmevcntr_mask();
>>>  	uint64_t cntr1 = (ALL_SET + 1) & pmevcntr_mask();
>>>  	int i;
>> the chained counter changes may be put in a separate patch to help the
>> reviewer focus on the new 64b overflow tests.
> Got it, will try that for the next version.
>
>>>  
>>> @@ -669,7 +681,7 @@ static void test_chained_sw_incr(bool unused)
>>>  	/* enable counters #0 and #1 */
>>>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
>>>  
>>> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
>>> +	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
>>>  	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
>>>  	isb();
>>>  
>>> @@ -687,7 +699,7 @@ static void test_chained_sw_incr(bool unused)
>>>  	pmu_reset();
>>>  
>>>  	write_regn_el0(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0);
>>> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
>>> +	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
>>>  	write_regn_el0(pmevcntr, 1, ALL_SET);
>>>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
>>>  	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
>>> @@ -726,7 +738,7 @@ static void test_chain_promotion(bool unused)
>>>  
>>>  	/* Only enable even counter */
>>>  	pmu_reset();
>>> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
>>> +	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
>>>  	write_sysreg_s(0x1, PMCNTENSET_EL0);
>>>  	isb();
>>>  
>>> @@ -856,6 +868,9 @@ static bool expect_interrupts(uint32_t bitmap)
>>>  
>>>  static void test_overflow_interrupt(bool overflow_at_64bits)
>>>  {
>>> +	uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
>>> +	uint64_t all_set = pmevcntr_mask();
>>> +	uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
>>>  	uint32_t events[] = {MEM_ACCESS, SW_INCR};
>>>  	void *addr = malloc(PAGE_SIZE);
>>>  	int i;
>>> @@ -874,16 +889,16 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
>>>  	write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
>>>  	write_regn_el0(pmevtyper, 1, SW_INCR | PMEVTYPER_EXCLUDE_EL0);
>>>  	write_sysreg_s(0x3, PMCNTENSET_EL0);
>>> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
>>> -	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
>>> +	write_regn_el0(pmevcntr, 0, pre_overflow);
>>> +	write_regn_el0(pmevcntr, 1, pre_overflow);
>>>  	isb();
>>>  
>>>  	/* interrupts are disabled (PMINTENSET_EL1 == 0) */
>>>  
>>> -	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
>>> +	mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>> I noticed that undocumented change. Is it done on purpose?
> Yes, it's intentional. It conditionally sets PMCR.LP for the next
> subtest when overflow_at_64bits==true. That's also why the pre-overflow
> value is close to ALL_SET_64 (and not ALL_SET_32).
I rather meant s/200/20

Thanks

Eric
>
>>>  	report(expect_interrupts(0), "no overflow interrupt after preset");
>>>  
>>> -	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
>>> +	set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>>>  	isb();
>>>  
>>>  	for (i = 0; i < 100; i++)
>>> @@ -898,12 +913,12 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
>>>  
>>>  	pmu_reset_stats();
>>>  
>>> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
>>> -	write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
>>> +	write_regn_el0(pmevcntr, 0, pre_overflow);
>>> +	write_regn_el0(pmevcntr, 1, pre_overflow);
>>>  	write_sysreg(ALL_SET, pmintenset_el1);
>>>  	isb();
>>>  
>>> -	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
>>> +	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>>>  	for (i = 0; i < 100; i++)
>>>  		write_sysreg(0x3, pmswinc_el0);
>>>  
>>> @@ -912,25 +927,40 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
>>>  	report(expect_interrupts(0x3),
>>>  		"overflow interrupts expected on #0 and #1");
>>>  
>>> -	/* promote to 64-b */
>>> +	/*
>>> +	 * promote to 64-b:
>>> +	 *
>>> +	 * This only applies to the !overflow_at_64bits case, as
>>> +	 * overflow_at_64bits doesn't implement CHAIN events. The
>>> +	 * overflow_at_64bits case just checks that chained counters are
>>> +	 * not incremented when PMCR.LP == 1.
>>> +	 */
>>>  
>>>  	pmu_reset_stats();
>>>  
>>>  	write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
>>> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
>>> +	write_regn_el0(pmevcntr, 0, pre_overflow);
>>>  	isb();
>>> -	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
>>> -	report(expect_interrupts(0x1),
>>> -		"expect overflow interrupt on 32b boundary");
>>> +	mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>>> +	report(expect_interrupts(0x1), "expect overflow interrupt");
>>>  
>>>  	/* overflow on odd counter */
>>>  	pmu_reset_stats();
>>> -	write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
>>> -	write_regn_el0(pmevcntr, 1, ALL_SET);
>>> +	write_regn_el0(pmevcntr, 0, pre_overflow);
>>> +	write_regn_el0(pmevcntr, 1, all_set);
>>>  	isb();
>>> -	mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E);
>>> -	report(expect_interrupts(0x3),
>>> -		"expect overflow interrupt on even and odd counter");
>>> +	mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>>> +	if (overflow_at_64bits) {
>>> +		report(expect_interrupts(0x1),
>>> +		       "expect overflow interrupt on even counter");
>>> +		report(read_regn_el0(pmevcntr, 1) == all_set,
>>> +		       "Odd counter did not change");
>>> +	} else {
>>> +		report(expect_interrupts(0x3),
>>> +		       "expect overflow interrupt on even and odd counter");
>>> +		report(read_regn_el0(pmevcntr, 1) != all_set,
>>> +		       "Odd counter wrapped");
>>> +	}
>>>  }
>>>  #endif
>>>  
>>> @@ -1131,10 +1161,13 @@ int main(int argc, char *argv[])
>>>  		report_prefix_pop();
>>>  	} else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
>>>  		run_test(argv[1], test_basic_event_count, false);
>>> +		run_test(argv[1], test_basic_event_count, true);
>>>  	} else if (strcmp(argv[1], "pmu-mem-access") == 0) {
>>>  		run_test(argv[1], test_mem_access, false);
>>> +		run_test(argv[1], test_mem_access, true);
>>>  	} else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
>>>  		run_test(argv[1], test_sw_incr, false);
>>> +		run_test(argv[1], test_sw_incr, true);
>>>  	} else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
>>>  		run_test(argv[1], test_chained_counters, false);
>>>  	} else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
>>> @@ -1143,6 +1176,7 @@ int main(int argc, char *argv[])
>>>  		run_test(argv[1], test_chain_promotion, false);
>>>  	} else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
>>>  		run_test(argv[1], test_overflow_interrupt, false);
>>> +		run_test(argv[1], test_overflow_interrupt, true);
>>>  	} else {
>>>  		report_abort("Unknown sub-test '%s'", argv[1]);
>>>  	}
>> Thanks
>>
>> Eric
>>



* Re: [kvm-unit-tests PATCH v3 3/4] arm: pmu: Add tests for 64-bit overflows
  2023-01-19  5:58   ` Reiji Watanabe
  2023-01-24 15:11     ` Ricardo Koller
@ 2023-01-25  2:19     ` Ricardo Koller
  2023-01-25  4:11       ` Reiji Watanabe
  1 sibling, 1 reply; 22+ messages in thread
From: Ricardo Koller @ 2023-01-25  2:19 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: kvm, kvmarm, andrew.jones, maz, alexandru.elisei, eric.auger,
	oliver.upton

On Wed, Jan 18, 2023 at 09:58:38PM -0800, Reiji Watanabe wrote:
> Hi Ricardo,
> 
> On Mon, Jan 9, 2023 at 1:18 PM Ricardo Koller <ricarkol@google.com> wrote:
> > @@ -898,12 +913,12 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> >
> >         pmu_reset_stats();
> 
> This isn't directly related to the patch.
> But, as bits of pmovsclr_el0 are already set (although interrupts
> are disabled), I would think it's good to clear pmovsclr_el0 here.
> 
> Thank you,
> Reiji
> 

There's no need in this case, as there's this check immediately before
the pmu_reset_stats():

	report(expect_interrupts(0), "no overflow interrupt after counting");

so pmovsclr_el0 should be clear.

> 
> >
> > -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > -       write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> > +       write_regn_el0(pmevcntr, 0, pre_overflow);
> > +       write_regn_el0(pmevcntr, 1, pre_overflow);
> >         write_sysreg(ALL_SET, pmintenset_el1);
> >         isb();
> >



* Re: [kvm-unit-tests PATCH v3 3/4] arm: pmu: Add tests for 64-bit overflows
  2023-01-25  2:19     ` Ricardo Koller
@ 2023-01-25  4:11       ` Reiji Watanabe
  2023-01-25  7:55         ` Eric Auger
  0 siblings, 1 reply; 22+ messages in thread
From: Reiji Watanabe @ 2023-01-25  4:11 UTC (permalink / raw)
  To: Ricardo Koller
  Cc: kvm, kvmarm, andrew.jones, maz, alexandru.elisei, eric.auger,
	oliver.upton

On Tue, Jan 24, 2023 at 6:19 PM Ricardo Koller <ricarkol@google.com> wrote:
>
> On Wed, Jan 18, 2023 at 09:58:38PM -0800, Reiji Watanabe wrote:
> > Hi Ricardo,
> >
> > On Mon, Jan 9, 2023 at 1:18 PM Ricardo Koller <ricarkol@google.com> wrote:
> > > @@ -898,12 +913,12 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> > >
> > >         pmu_reset_stats();
> >
> > This isn't directly related to the patch.
> > But, as bits of pmovsclr_el0 are already set (although interrupts
> > are disabled), I would think it's good to clear pmovsclr_el0 here.
> >
> > Thank you,
> > Reiji
> >
>
> There's no need in this case as there's this immediately before the
> pmu_reset_stats();
>
>         report(expect_interrupts(0), "no overflow interrupt after counting");
>
> so pmovsclr_el0 should be clear.

In my understanding, it means that no overflow *interrupt* was reported,
as the interrupt is not enabled yet (pmintenset_el1 is not set).
But (as far as I checked the test case) both counters should be
overflowing here, so pmovsclr_el0 must be 0x3.

Or am I misunderstanding something?

Thank you,
Reiji


>
> >
> > >
> > > -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > > -       write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> > > +       write_regn_el0(pmevcntr, 0, pre_overflow);
> > > +       write_regn_el0(pmevcntr, 1, pre_overflow);
> > >         write_sysreg(ALL_SET, pmintenset_el1);
> > >         isb();
> > >
>


* Re: [kvm-unit-tests PATCH v3 4/4] arm: pmu: Print counter values as hexadecimals
  2023-01-23 20:17   ` Eric Auger
@ 2023-01-25  4:37     ` Reiji Watanabe
  0 siblings, 0 replies; 22+ messages in thread
From: Reiji Watanabe @ 2023-01-25  4:37 UTC (permalink / raw)
  To: eric.auger
  Cc: Ricardo Koller, kvm, kvmarm, andrew.jones, maz, alexandru.elisei,
	oliver.upton

> On 1/9/23 22:17, Ricardo Koller wrote:
> > The arm/pmu test prints the value of counters as %ld.  Most tests start
> > with counters around 0 or UINT_MAX, so having something like -16 instead of
> > 0xffff_fff0 is not very useful.
> >
> > Report counter values as hexadecimals.
> >
> > Reported-by: Alexandru Elisei <alexandru.elisei@arm.com>
> > Signed-off-by: Ricardo Koller <ricarkol@google.com>
> Reviewed-by: Eric Auger <eric.auger@redhat.com>

Reviewed-by: Reiji Watanabe <reijiw@google.com>


* Re: [kvm-unit-tests PATCH v3 3/4] arm: pmu: Add tests for 64-bit overflows
  2023-01-25  4:11       ` Reiji Watanabe
@ 2023-01-25  7:55         ` Eric Auger
  2023-01-25 14:17           ` Ricardo Koller
  0 siblings, 1 reply; 22+ messages in thread
From: Eric Auger @ 2023-01-25  7:55 UTC (permalink / raw)
  To: Reiji Watanabe, Ricardo Koller
  Cc: kvm, kvmarm, andrew.jones, maz, alexandru.elisei, oliver.upton

Hi,

On 1/25/23 05:11, Reiji Watanabe wrote:
> On Tue, Jan 24, 2023 at 6:19 PM Ricardo Koller <ricarkol@google.com> wrote:
>> On Wed, Jan 18, 2023 at 09:58:38PM -0800, Reiji Watanabe wrote:
>>> Hi Ricardo,
>>>
>>> On Mon, Jan 9, 2023 at 1:18 PM Ricardo Koller <ricarkol@google.com> wrote:
>>>> @@ -898,12 +913,12 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
>>>>
>>>>         pmu_reset_stats();
>>> This isn't directly related to the patch.
>>> But, as bits of pmovsclr_el0 are already set (although interrupts
>>> are disabled), I would think it's good to clear pmovsclr_el0 here.
>>>
>>> Thank you,
>>> Reiji
>>>
>> There's no need in this case as there's this immediately before the
>> pmu_reset_stats();
>>
>>         report(expect_interrupts(0), "no overflow interrupt after counting");
>>
>> so pmovsclr_el0 should be clear.
> In my understanding, it means that no overflow *interrupt* was reported,
> as the interrupt is not enabled yet (pmintenset_el1 is not set).
> But, (as far as I checked the test case,) the both counters should be
> overflowing here. So, pmovsclr_el0 must be 0x3.

I would tend to agree with Reiji. The PMOVSSET_EL0<idx> bits will impact
the next test according to aarch64/debug/pmu/AArch64.CheckForPMUOverflow,
and I think the overflow reg should be reset.

Eric
>
> Or am I misunderstanding something?
>
> Thank you,
> Reiji
>
>
>>>> -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
>>>> -       write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
>>>> +       write_regn_el0(pmevcntr, 0, pre_overflow);
>>>> +       write_regn_el0(pmevcntr, 1, pre_overflow);
>>>>         write_sysreg(ALL_SET, pmintenset_el1);
>>>>         isb();
>>>>



* Re: [kvm-unit-tests PATCH v3 3/4] arm: pmu: Add tests for 64-bit overflows
  2023-01-25  7:55         ` Eric Auger
@ 2023-01-25 14:17           ` Ricardo Koller
  0 siblings, 0 replies; 22+ messages in thread
From: Ricardo Koller @ 2023-01-25 14:17 UTC (permalink / raw)
  To: Eric Auger
  Cc: Reiji Watanabe, kvm, kvmarm, andrew.jones, maz, alexandru.elisei,
	oliver.upton

On Wed, Jan 25, 2023 at 08:55:50AM +0100, Eric Auger wrote:
> Hi,
> 
> On 1/25/23 05:11, Reiji Watanabe wrote:
> > On Tue, Jan 24, 2023 at 6:19 PM Ricardo Koller <ricarkol@google.com> wrote:
> >> On Wed, Jan 18, 2023 at 09:58:38PM -0800, Reiji Watanabe wrote:
> >>> Hi Ricardo,
> >>>
> >>> On Mon, Jan 9, 2023 at 1:18 PM Ricardo Koller <ricarkol@google.com> wrote:
> >>>> @@ -898,12 +913,12 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> >>>>
> >>>>         pmu_reset_stats();
> >>> This isn't directly related to the patch.
> >>> But, as bits of pmovsclr_el0 are already set (although interrupts
> >>> are disabled), I would think it's good to clear pmovsclr_el0 here.
> >>>
> >>> Thank you,
> >>> Reiji
> >>>
> >> There's no need in this case as there's this immediately before the
> >> pmu_reset_stats();
> >>
> >>         report(expect_interrupts(0), "no overflow interrupt after counting");
> >>
> >> so pmovsclr_el0 should be clear.
> > In my understanding, it means that no overflow *interrupt* was reported,
> > as the interrupt is not enabled yet (pmintenset_el1 is not set).
> > But, (as far as I checked the test case,) the both counters should be
> > overflowing here. So, pmovsclr_el0 must be 0x3.
> 
> I would tend to agree with Reiji.  The PMOVSSET_EL0<idx> will impact the
> next test according to aarch64/debug/pmu/AArch64.CheckForPMUOverflow and
> I think the overflow reg should be reset.

Ahh, right. The overflow interrupt could fire as soon as interrupts are
enabled again. OK, will add a fix in the next version.
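For reference, the fix being discussed would presumably amount to clearing
the stale overflow flags before re-arming the interrupt, along these lines
(a sketch against the test's existing helpers, not the actual follow-up
patch):

```c
	pmu_reset_stats();

	/* clear the overflow flags left over from the previous subtest;
	 * otherwise the interrupt fires as soon as pmintenset_el1 is set */
	write_sysreg(ALL_SET, pmovsclr_el0);

	write_regn_el0(pmevcntr, 0, pre_overflow);
	write_regn_el0(pmevcntr, 1, pre_overflow);
	write_sysreg(ALL_SET, pmintenset_el1);
	isb();
```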

> 
> Eric
> >
> > Or am I misunderstanding something?
> >
> > Thank you,
> > Reiji
> >
> >
> >>>> -       write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> >>>> -       write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> >>>> +       write_regn_el0(pmevcntr, 0, pre_overflow);
> >>>> +       write_regn_el0(pmevcntr, 1, pre_overflow);
> >>>>         write_sysreg(ALL_SET, pmintenset_el1);
> >>>>         isb();
> >>>>
> 


* Re: [kvm-unit-tests PATCH v3 3/4] arm: pmu: Add tests for 64-bit overflows
  2023-01-24 20:15       ` Eric Auger
@ 2023-01-26 16:45         ` Ricardo Koller
  0 siblings, 0 replies; 22+ messages in thread
From: Ricardo Koller @ 2023-01-26 16:45 UTC (permalink / raw)
  To: eric.auger
  Cc: kvm, kvmarm, andrew.jones, maz, alexandru.elisei, oliver.upton, reijiw

On Tue, Jan 24, 2023 at 12:15 PM Eric Auger <eric.auger@redhat.com> wrote:
>
>
>
> On 1/24/23 16:26, Ricardo Koller wrote:
> > On Mon, Jan 23, 2023 at 09:33:50PM +0100, Eric Auger wrote:
> >> Hi Ricardo,
> >>
> >> On 1/9/23 22:17, Ricardo Koller wrote:
> >>> Modify all tests checking overflows to support both 32 (PMCR_EL0.LP == 0)
> >>> and 64-bit overflows (PMCR_EL0.LP == 1). 64-bit overflows are only
> >>> supported on PMUv3p5.
> >>>
> >>> Note that chained tests do not implement "overflow_at_64bits == true".
> >>> That's because there are no CHAIN events when "PMCR_EL0.LP == 1" (for more
> >>> details see AArch64.IncrementEventCounter() pseudocode in the ARM ARM DDI
> >>> 0487H.a, J1.1.1 "aarch64/debug").
> >>>
> >>> Signed-off-by: Ricardo Koller <ricarkol@google.com>
> >>> ---
> >>>  arm/pmu.c | 116 +++++++++++++++++++++++++++++++++++-------------------
> >>>  1 file changed, 75 insertions(+), 41 deletions(-)
> >>>
> >>> diff --git a/arm/pmu.c b/arm/pmu.c
> >>> index 0d06b59..72d0f50 100644
> >>> --- a/arm/pmu.c
> >>> +++ b/arm/pmu.c
> >>> @@ -28,6 +28,7 @@
> >>>  #define PMU_PMCR_X         (1 << 4)
> >>>  #define PMU_PMCR_DP        (1 << 5)
> >>>  #define PMU_PMCR_LC        (1 << 6)
> >>> +#define PMU_PMCR_LP        (1 << 7)
> >>>  #define PMU_PMCR_N_SHIFT   11
> >>>  #define PMU_PMCR_N_MASK    0x1f
> >>>  #define PMU_PMCR_ID_SHIFT  16
> >>> @@ -56,9 +57,13 @@
> >>>
> >>>  #define ALL_SET                    0x00000000FFFFFFFFULL
> >>>  #define ALL_CLEAR          0x0000000000000000ULL
> >>> -#define PRE_OVERFLOW               0x00000000FFFFFFF0ULL
> >>> +#define PRE_OVERFLOW_32            0x00000000FFFFFFF0ULL
> >>> +#define PRE_OVERFLOW_64            0xFFFFFFFFFFFFFFF0ULL
> >>>  #define PRE_OVERFLOW2              0x00000000FFFFFFDCULL
> >>>
> >>> +#define PRE_OVERFLOW(__overflow_at_64bits)                         \
> >>> +   (__overflow_at_64bits ? PRE_OVERFLOW_64 : PRE_OVERFLOW_32)
> >>> +
> >>>  #define PMU_PPI                    23
> >>>
> >>>  struct pmu {
> >>> @@ -449,8 +454,10 @@ static bool check_overflow_prerequisites(bool overflow_at_64bits)
> >>>  static void test_basic_event_count(bool overflow_at_64bits)
> >>>  {
> >>>     uint32_t implemented_counter_mask, non_implemented_counter_mask;
> >>> -   uint32_t counter_mask;
> >>> +   uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> >>> +   uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
> >>>     uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
> >>> +   uint32_t counter_mask;
> >>>
> >>>     if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> >>>         !check_overflow_prerequisites(overflow_at_64bits))
> >>> @@ -472,13 +479,13 @@ static void test_basic_event_count(bool overflow_at_64bits)
> >>>      * clear cycle and all event counters and allow counter enablement
> >>>      * through PMCNTENSET. LC is RES1.
> >>>      */
> >>> -   set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
> >>> +   set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P | pmcr_lp);
> >>>     isb();
> >>> -   report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC), "pmcr: reset counters");
> >>> +   report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC | pmcr_lp), "pmcr: reset counters");
> >>>
> >>>     /* Preset counter #0 to pre overflow value to trigger an overflow */
> >>> -   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> >>> -   report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
> >>> +   write_regn_el0(pmevcntr, 0, pre_overflow);
> >>> +   report(read_regn_el0(pmevcntr, 0) == pre_overflow,
> >>>             "counter #0 preset to pre-overflow value");
> >>>     report(!read_regn_el0(pmevcntr, 1), "counter #1 is 0");
> >>>
> >>> @@ -531,6 +538,8 @@ static void test_mem_access(bool overflow_at_64bits)
> >>>  {
> >>>     void *addr = malloc(PAGE_SIZE);
> >>>     uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
> >>> +   uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> >>> +   uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
> >>>
> >>>     if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> >>>         !check_overflow_prerequisites(overflow_at_64bits))
> >>> @@ -542,7 +551,7 @@ static void test_mem_access(bool overflow_at_64bits)
> >>>     write_regn_el0(pmevtyper, 1, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
> >>>     write_sysreg_s(0x3, PMCNTENSET_EL0);
> >>>     isb();
> >>> -   mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
> >>> +   mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >>>     report_info("counter #0 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 0));
> >>>     report_info("counter #1 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 1));
> >>>     /* We may measure more than 20 mem access depending on the core */
> >>> @@ -552,11 +561,11 @@ static void test_mem_access(bool overflow_at_64bits)
> >>>
> >>>     pmu_reset();
> >>>
> >>> -   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> >>> -   write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> >>> +   write_regn_el0(pmevcntr, 0, pre_overflow);
> >>> +   write_regn_el0(pmevcntr, 1, pre_overflow);
> >>>     write_sysreg_s(0x3, PMCNTENSET_EL0);
> >>>     isb();
> >>> -   mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
> >>> +   mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >>>     report(read_sysreg(pmovsclr_el0) == 0x3,
> >>>            "Ran 20 mem accesses with expected overflows on both counters");
> >>>     report_info("cnt#0 = %ld cnt#1=%ld overflow=0x%lx",
> >>> @@ -566,8 +575,10 @@ static void test_mem_access(bool overflow_at_64bits)
> >>>
> >>>  static void test_sw_incr(bool overflow_at_64bits)
> >>>  {
> >>> +   uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> >>> +   uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
> >>>     uint32_t events[] = {SW_INCR, SW_INCR};
> >>> -   uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> >>> +   uint64_t cntr0 = (pre_overflow + 100) & pmevcntr_mask();
> >>>     int i;
> >>>
> >>>     if (!satisfy_prerequisites(events, ARRAY_SIZE(events)) ||
> >>> @@ -581,7 +592,7 @@ static void test_sw_incr(bool overflow_at_64bits)
> >>>     /* enable counters #0 and #1 */
> >>>     write_sysreg_s(0x3, PMCNTENSET_EL0);
> >>>
> >>> -   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> >>> +   write_regn_el0(pmevcntr, 0, pre_overflow);
> >>>     isb();
> >>>
> >>>     for (i = 0; i < 100; i++)
> >>> @@ -589,14 +600,14 @@ static void test_sw_incr(bool overflow_at_64bits)
> >>>
> >>>     isb();
> >>>     report_info("SW_INCR counter #0 has value %ld", read_regn_el0(pmevcntr, 0));
> >>> -   report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
> >>> +   report(read_regn_el0(pmevcntr, 0) == pre_overflow,
> >>>             "PWSYNC does not increment if PMCR.E is unset");
> >>>
> >>>     pmu_reset();
> >>>
> >>> -   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> >>> +   write_regn_el0(pmevcntr, 0, pre_overflow);
> >>>     write_sysreg_s(0x3, PMCNTENSET_EL0);
> >>> -   set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> >>> +   set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >>>     isb();
> >>>
> >>>     for (i = 0; i < 100; i++)
> >>> @@ -614,6 +625,7 @@ static void test_sw_incr(bool overflow_at_64bits)
> >>>  static void test_chained_counters(bool unused)
> >>>  {
> >>>     uint32_t events[] = {CPU_CYCLES, CHAIN};
> >>> +   uint64_t all_set = pmevcntr_mask();
> >> ALL_SET_32 may be enough, as suggested by Reiji.
> > I think we still want to conditionally use ALL_SET_64 for the
> > pre-overflow value of the odd counter in the case of PMUv3p5.
> > More below.
> Ah OK, I initially understood there were no chain events at all with
> PMUv3p5. However they do exist if LP=0. Sorry for the noise.
> >
> >>>
> >>>     if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> >>>             return;
> >>> @@ -624,7 +636,7 @@ static void test_chained_counters(bool unused)
> >>>     write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
> >>>     /* enable counters #0 and #1 */
> >>>     write_sysreg_s(0x3, PMCNTENSET_EL0);
> >>> -   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> >>> +   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> >>>
> >>>     precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
> >>>
> >>> @@ -636,26 +648,26 @@ static void test_chained_counters(bool unused)
> >>>     pmu_reset();
> >>>     write_sysreg_s(0x3, PMCNTENSET_EL0);
> >>>
> >>> -   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> >>> +   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> >>>     write_regn_el0(pmevcntr, 1, 0x1);
> >>>     precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
> >>>     report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
> >>>     report(read_regn_el0(pmevcntr, 1) == 2, "CHAIN counter #1 set to 2");
> >>>     report(read_sysreg(pmovsclr_el0) == 0x1, "overflow recorded for chained incr #2");
> > As we want to test the overflow of the chained counter, we need
> > ALL_SET_64 for its pre-overflow value.
> >
> >>>
> >>> -   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> >>> -   write_regn_el0(pmevcntr, 1, ALL_SET);
> >>> +   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> >>> +   write_regn_el0(pmevcntr, 1, all_set);
> >> and here
> >>>
> >>>     precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
> >>>     report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
> >>> -   report(!read_regn_el0(pmevcntr, 1), "CHAIN counter #1 wrapped");
> >>> +   report(read_regn_el0(pmevcntr, 1) == 0, "CHAIN counter #1 wrapped");
> >>>     report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
> >>>  }
> >>>
> >>>  static void test_chained_sw_incr(bool unused)
> >>>  {
> >>>     uint32_t events[] = {SW_INCR, CHAIN};
> >>> -   uint64_t cntr0 = (PRE_OVERFLOW + 100) & pmevcntr_mask();
> >>> +   uint64_t cntr0 = (PRE_OVERFLOW_32 + 100) & pmevcntr_mask();
> >>>     uint64_t cntr1 = (ALL_SET + 1) & pmevcntr_mask();
> >>>     int i;
> >> the chained counter changes may be put in a separate patch to help the
> >> reviewer focus on the new 64b overflow tests.
> > Got it, will try that for the next version.
> >
> >>>
> >>> @@ -669,7 +681,7 @@ static void test_chained_sw_incr(bool unused)
> >>>     /* enable counters #0 and #1 */
> >>>     write_sysreg_s(0x3, PMCNTENSET_EL0);
> >>>
> >>> -   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> >>> +   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> >>>     set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> >>>     isb();
> >>>
> >>> @@ -687,7 +699,7 @@ static void test_chained_sw_incr(bool unused)
> >>>     pmu_reset();
> >>>
> >>>     write_regn_el0(pmevtyper, 1, events[1] | PMEVTYPER_EXCLUDE_EL0);
> >>> -   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> >>> +   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> >>>     write_regn_el0(pmevcntr, 1, ALL_SET);
> >>>     write_sysreg_s(0x3, PMCNTENSET_EL0);
> >>>     set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> >>> @@ -726,7 +738,7 @@ static void test_chain_promotion(bool unused)
> >>>
> >>>     /* Only enable even counter */
> >>>     pmu_reset();
> >>> -   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> >>> +   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW_32);
> >>>     write_sysreg_s(0x1, PMCNTENSET_EL0);
> >>>     isb();
> >>>
> >>> @@ -856,6 +868,9 @@ static bool expect_interrupts(uint32_t bitmap)
> >>>
> >>>  static void test_overflow_interrupt(bool overflow_at_64bits)
> >>>  {
> >>> +   uint64_t pre_overflow = PRE_OVERFLOW(overflow_at_64bits);
> >>> +   uint64_t all_set = pmevcntr_mask();
> >>> +   uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
> >>>     uint32_t events[] = {MEM_ACCESS, SW_INCR};
> >>>     void *addr = malloc(PAGE_SIZE);
> >>>     int i;
> >>> @@ -874,16 +889,16 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> >>>     write_regn_el0(pmevtyper, 0, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
> >>>     write_regn_el0(pmevtyper, 1, SW_INCR | PMEVTYPER_EXCLUDE_EL0);
> >>>     write_sysreg_s(0x3, PMCNTENSET_EL0);
> >>> -   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> >>> -   write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> >>> +   write_regn_el0(pmevcntr, 0, pre_overflow);
> >>> +   write_regn_el0(pmevcntr, 1, pre_overflow);
> >>>     isb();
> >>>
> >>>     /* interrupts are disabled (PMINTENSET_EL1 == 0) */
> >>>
> >>> -   mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> >>> +   mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >> I noticed that undocumented change. Was it done on purpose?
> > Yes, it's intentional. It conditionally sets PMCR.LP for the next
> > subtest when overflow_at_64bits==true. That's also why the pre-overflow
> > value is close to ALL_SET_64 (and not ALL_SET_32).
> I rather meant s/200/20

Ooh, yes, that was a mistake. Fixing it in v4.

Thanks!

>
> Thanks
>
> Eric
> >
> >>>     report(expect_interrupts(0), "no overflow interrupt after preset");
> >>>
> >>> -   set_pmcr(pmu.pmcr_ro | PMU_PMCR_E);
> >>> +   set_pmcr(pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >>>     isb();
> >>>
> >>>     for (i = 0; i < 100; i++)
> >>> @@ -898,12 +913,12 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> >>>
> >>>     pmu_reset_stats();
> >>>
> >>> -   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> >>> -   write_regn_el0(pmevcntr, 1, PRE_OVERFLOW);
> >>> +   write_regn_el0(pmevcntr, 0, pre_overflow);
> >>> +   write_regn_el0(pmevcntr, 1, pre_overflow);
> >>>     write_sysreg(ALL_SET, pmintenset_el1);
> >>>     isb();
> >>>
> >>> -   mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> >>> +   mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >>>     for (i = 0; i < 100; i++)
> >>>             write_sysreg(0x3, pmswinc_el0);
> >>>
> >>> @@ -912,25 +927,40 @@ static void test_overflow_interrupt(bool overflow_at_64bits)
> >>>     report(expect_interrupts(0x3),
> >>>             "overflow interrupts expected on #0 and #1");
> >>>
> >>> -   /* promote to 64-b */
> >>> +   /*
> >>> +    * promote to 64-b:
> >>> +    *
> >>> +    * This only applies to the !overflow_at_64bits case, as
> >>> +    * overflow_at_64bits doesn't implement CHAIN events. The
> >>> +    * overflow_at_64bits case just checks that chained counters are
> >>> +    * not incremented when PMCR.LP == 1.
> >>> +    */
> >>>
> >>>     pmu_reset_stats();
> >>>
> >>>     write_regn_el0(pmevtyper, 1, CHAIN | PMEVTYPER_EXCLUDE_EL0);
> >>> -   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> >>> +   write_regn_el0(pmevcntr, 0, pre_overflow);
> >>>     isb();
> >>> -   mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E);
> >>> -   report(expect_interrupts(0x1),
> >>> -           "expect overflow interrupt on 32b boundary");
> >>> +   mem_access_loop(addr, 200, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >>> +   report(expect_interrupts(0x1), "expect overflow interrupt");
> >>>
> >>>     /* overflow on odd counter */
> >>>     pmu_reset_stats();
> >>> -   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> >>> -   write_regn_el0(pmevcntr, 1, ALL_SET);
> >>> +   write_regn_el0(pmevcntr, 0, pre_overflow);
> >>> +   write_regn_el0(pmevcntr, 1, all_set);
> >>>     isb();
> >>> -   mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E);
> >>> -   report(expect_interrupts(0x3),
> >>> -           "expect overflow interrupt on even and odd counter");
> >>> +   mem_access_loop(addr, 400, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
> >>> +   if (overflow_at_64bits) {
> >>> +           report(expect_interrupts(0x1),
> >>> +                  "expect overflow interrupt on even counter");
> >>> +           report(read_regn_el0(pmevcntr, 1) == all_set,
> >>> +                  "Odd counter did not change");
> >>> +   } else {
> >>> +           report(expect_interrupts(0x3),
> >>> +                  "expect overflow interrupt on even and odd counter");
> >>> +           report(read_regn_el0(pmevcntr, 1) != all_set,
> >>> +                  "Odd counter wrapped");
> >>> +   }
> >>>  }
> >>>  #endif
> >>>
> >>> @@ -1131,10 +1161,13 @@ int main(int argc, char *argv[])
> >>>             report_prefix_pop();
> >>>     } else if (strcmp(argv[1], "pmu-basic-event-count") == 0) {
> >>>             run_test(argv[1], test_basic_event_count, false);
> >>> +           run_test(argv[1], test_basic_event_count, true);
> >>>     } else if (strcmp(argv[1], "pmu-mem-access") == 0) {
> >>>             run_test(argv[1], test_mem_access, false);
> >>> +           run_test(argv[1], test_mem_access, true);
> >>>     } else if (strcmp(argv[1], "pmu-sw-incr") == 0) {
> >>>             run_test(argv[1], test_sw_incr, false);
> >>> +           run_test(argv[1], test_sw_incr, true);
> >>>     } else if (strcmp(argv[1], "pmu-chained-counters") == 0) {
> >>>             run_test(argv[1], test_chained_counters, false);
> >>>     } else if (strcmp(argv[1], "pmu-chained-sw-incr") == 0) {
> >>> @@ -1143,6 +1176,7 @@ int main(int argc, char *argv[])
> >>>             run_test(argv[1], test_chain_promotion, false);
> >>>     } else if (strcmp(argv[1], "pmu-overflow-interrupt") == 0) {
> >>>             run_test(argv[1], test_overflow_interrupt, false);
> >>> +           run_test(argv[1], test_overflow_interrupt, true);
> >>>     } else {
> >>>             report_abort("Unknown sub-test '%s'", argv[1]);
> >>>     }
> >> Thanks
> >>
> >> Eric
> >>
>

Thread overview: 22+ messages
2023-01-09 21:17 [kvm-unit-tests PATCH v3 0/4] arm: pmu: Add support for PMUv3p5 Ricardo Koller
2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 1/4] arm: pmu: Fix overflow checks for PMUv3p5 long counters Ricardo Koller
2023-01-09 21:42   ` Oliver Upton
2023-01-23 20:16   ` Eric Auger
2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 2/4] arm: pmu: Prepare for testing 64-bit overflows Ricardo Koller
2023-01-12  5:56   ` Reiji Watanabe
2023-01-13 15:20     ` Ricardo Koller
2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 3/4] arm: pmu: Add tests for " Ricardo Koller
2023-01-19  5:58   ` Reiji Watanabe
2023-01-24 15:11     ` Ricardo Koller
2023-01-25  2:19     ` Ricardo Koller
2023-01-25  4:11       ` Reiji Watanabe
2023-01-25  7:55         ` Eric Auger
2023-01-25 14:17           ` Ricardo Koller
2023-01-23 20:33   ` Eric Auger
2023-01-24 15:26     ` Ricardo Koller
2023-01-24 20:15       ` Eric Auger
2023-01-26 16:45         ` Ricardo Koller
2023-01-09 21:17 ` [kvm-unit-tests PATCH v3 4/4] arm: pmu: Print counter values as hexadecimals Ricardo Koller
2023-01-09 21:43   ` Oliver Upton
2023-01-23 20:17   ` Eric Auger
2023-01-25  4:37     ` Reiji Watanabe
