* [PATCH v2 0/8] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU
@ 2023-01-17  1:35 ` Reiji Watanabe
  0 siblings, 0 replies; 46+ messages in thread
From: Reiji Watanabe @ 2023-01-17  1:35 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Oliver Upton,
	Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

The goal of this series is to allow userspace to limit the number
of PMU event counters on the vCPU. We need this to support migration
across systems that implement different numbers of counters.

The number of PMU event counters is indicated in PMCR_EL0.N.
For a vCPU with PMUv3 configured, its value will be the same as
the host value by default. Userspace can set PMCR_EL0.N for the
vCPU to a lower value than the host value, using KVM_SET_ONE_REG.
However, it is practically unsupported, as KVM resets PMCR_EL0.N
to the host value on vCPU reset and some KVM code uses the host
value to identify (un)implemented event counters on the vCPU.

This series will ensure that the PMCR_EL0.N value is preserved
on vCPU reset and that KVM doesn't use the host value
to identify (un)implemented event counters on the vCPU.
This allows userspace to limit the number of PMU event
counters on the vCPU.
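To make the intended userspace flow concrete, here is a minimal standalone sketch (not code from this series; the ARM64_SYS_REG()/KVM_SET_ONE_REG ioctl plumbing is deliberately elided, and the helper name is hypothetical) of how a VMM would lower the N field of a PMCR_EL0 value before writing it back to the vCPU:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative only: PMCR_EL0.N lives in bits [15:11]. A VMM would
 * KVM_GET_ONE_REG the register, rewrite N with this helper, and
 * KVM_SET_ONE_REG the result (ioctl calls omitted here).
 */
#define PMCR_N_SHIFT	11
#define PMCR_N_MASK	0x1fULL

static uint64_t pmcr_limit_n(uint64_t pmcr, uint64_t new_n)
{
	uint64_t cur_n = (pmcr >> PMCR_N_SHIFT) & PMCR_N_MASK;

	/* Per patch 4 of this series, growing N past the host fails. */
	if (new_n > cur_n)
		return pmcr;

	pmcr &= ~(PMCR_N_MASK << PMCR_N_SHIFT);
	return pmcr | (new_n << PMCR_N_SHIFT);
}
```

The bitfield arithmetic matches the PMCR_EL0 layout; everything around it is a sketch of the expected usage, not the kernel or selftest code.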

Patch 1 fixes reset_pmu_reg() to ensure that (RAZ) bits of
{PMCNTEN,PMOVS}{SET,CLR}_EL1 corresponding to unimplemented event
counters on the vCPU are reset to zero even when PMCR_EL0.N for
the vCPU is different from the host.

Patch 2 is a minor refactoring to use the default PMU register reset
function (reset_pmu_reg()) for PMUSERENR_EL0 and PMCCFILTR_EL0.
(With the Patch 1 change, reset_pmu_reg() can now be used for
those registers)

Patch 3 fixes reset_pmcr() to preserve PMCR_EL0.N for the vCPU on
vCPU reset.

Patch 4 adds a sys_reg set_user() handler for PMCR_EL0 to
disallow userspace from setting PMCR_EL0.N for the vCPU to a
value greater than the host value.

Patches 5-8 add a selftest to verify reading and writing PMU registers
for implemented and unimplemented PMU event counters on the vCPU.

The series is based on v6.2-rc4.

v2:
 - Added a sys_reg set_user() handler for PMCR_EL0 to
   disallow userspace from setting PMCR_EL0.N for the vCPU
   to a value greater than the host value (and added a new
   test case for this behavior). [Oliver]
 - Added to the commit log of patch 2 that PMUSERENR_EL0 and
   PMCCFILTR_EL0 have UNKNOWN reset values.

v1: https://lore.kernel.org/all/20221230035928.3423990-1-reijiw@google.com/

Reiji Watanabe (8):
  KVM: arm64: PMU: Have reset_pmu_reg() to clear a register
  KVM: arm64: PMU: Use reset_pmu_reg() for PMUSERENR_EL0 and
    PMCCFILTR_EL0
  KVM: arm64: PMU: Preserve vCPU's PMCR_EL0.N value on vCPU reset
  KVM: arm64: PMU: Disallow userspace to set PMCR.N greater than the
    host value
  tools: arm64: Import perf_event.h
  KVM: selftests: aarch64: Introduce vpmu_counter_access test
  KVM: selftests: aarch64: vPMU register test for implemented counters
  KVM: selftests: aarch64: vPMU register test for unimplemented counters

 arch/arm64/kvm/pmu-emul.c                     |   6 +
 arch/arm64/kvm/sys_regs.c                     |  57 +-
 tools/arch/arm64/include/asm/perf_event.h     | 258 +++++++
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../kvm/aarch64/vpmu_counter_access.c         | 644 ++++++++++++++++++
 .../selftests/kvm/include/aarch64/processor.h |   1 +
 6 files changed, 954 insertions(+), 13 deletions(-)
 create mode 100644 tools/arch/arm64/include/asm/perf_event.h
 create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c


base-commit: 5dc4c995db9eb45f6373a956eb1f69460e69e6d4
-- 
2.39.0.314.g84b9a713c41-goog


^ permalink raw reply	[flat|nested] 46+ messages in thread


* [PATCH v2 1/8] KVM: arm64: PMU: Have reset_pmu_reg() to clear a register
  2023-01-17  1:35 ` Reiji Watanabe
@ 2023-01-17  1:35   ` Reiji Watanabe
  -1 siblings, 0 replies; 46+ messages in thread
From: Reiji Watanabe @ 2023-01-17  1:35 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Oliver Upton,
	Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

On vCPU reset, PMCNTEN{SET,CLR}_EL0, PMINTEN{SET,CLR}_EL1, and
PMOVS{SET,CLR}_EL1 for a vCPU are reset by reset_pmu_reg().
This function clears RAZ bits of those registers corresponding
to unimplemented event counters on the vCPU, and sets bits
corresponding to implemented event counters to a predefined
pseudo UNKNOWN value (some bits are set to 1).

The function identifies (un)implemented event counters on the
vCPU based on the host's PMCR_EL0.N value. Using the host
value for this would be problematic once KVM supports letting
userspace set PMCR_EL0.N to a value different from the host value
(some of the RAZ bits of those registers could end up being set to 1).

Fix reset_pmu_reg() to simply clear the registers, so that all
the RAZ bits are cleared even when the PMCR_EL0.N value for the
vCPU differs from the host value.
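To make the failure mode concrete, here is a minimal standalone sketch (simplified names, not the kernel code) of the pre-patch mask derivation: when the host implements six event counters but the vCPU is limited to four, counters 4 and 5 still land inside the mask, so reset_unknown() can leave RAZ bits set for them.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the mask computed by the old reset_pmu_reg():
 * bit 31 is the cycle counter (ARMV8_PMU_CYCLE_IDX), and bits
 * 0..n-1 cover the event counters, with n read from the *host*
 * PMCR_EL0.N rather than the vCPU's.
 */
static uint64_t pmu_reset_mask(uint64_t n)
{
	uint64_t mask = 1ULL << 31;		/* cycle counter */

	if (n)
		mask |= (1ULL << n) - 1;	/* GENMASK(n - 1, 0) */
	return mask;
}
```

With host n = 6, bits 4 and 5 are in the mask even though a vCPU limited to N = 4 treats them as RAZ, which is exactly what the patch avoids by clearing the whole register instead.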

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c6cbfe6b854b..ec4bdaf71a15 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -604,19 +604,11 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
 
 static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
-	u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
-
 	/* No PMU available, any PMU reg may UNDEF... */
 	if (!kvm_arm_support_pmu_v3())
 		return;
 
-	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
-	n &= ARMV8_PMU_PMCR_N_MASK;
-	if (n)
-		mask |= GENMASK(n - 1, 0);
-
-	reset_unknown(vcpu, r);
-	__vcpu_sys_reg(vcpu, r->reg) &= mask;
+	__vcpu_sys_reg(vcpu, r->reg) = 0;
 }
 
 static void reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
-- 
2.39.0.314.g84b9a713c41-goog


^ permalink raw reply related	[flat|nested] 46+ messages in thread


* [PATCH v2 2/8] KVM: arm64: PMU: Use reset_pmu_reg() for PMUSERENR_EL0 and PMCCFILTR_EL0
  2023-01-17  1:35 ` Reiji Watanabe
@ 2023-01-17  1:35   ` Reiji Watanabe
  -1 siblings, 0 replies; 46+ messages in thread
From: Reiji Watanabe @ 2023-01-17  1:35 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Oliver Upton,
	Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

The default reset function for PMU registers (reset_pmu_reg())
now simply clears a specified register. Use that function for
PMUSERENR_EL0 and PMCCFILTR_EL0, as KVM currently clears those
registers on vCPU reset (NOTE: All non-RES0 fields of those
registers have UNKNOWN reset values, and the same fields of
their AArch32 registers have 0 reset values).

No functional change intended.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index ec4bdaf71a15..4959658b502c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1747,7 +1747,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	 * in 32bit mode. Here we choose to reset it as zero for consistency.
 	 */
 	{ PMU_SYS_REG(SYS_PMUSERENR_EL0), .access = access_pmuserenr,
-	  .reset = reset_val, .reg = PMUSERENR_EL0, .val = 0 },
+	  .reg = PMUSERENR_EL0 },
 	{ PMU_SYS_REG(SYS_PMOVSSET_EL0),
 	  .access = access_pmovs, .reg = PMOVSSET_EL0 },
 
@@ -1903,7 +1903,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	 * in 32bit mode. Here we choose to reset it as zero for consistency.
 	 */
 	{ PMU_SYS_REG(SYS_PMCCFILTR_EL0), .access = access_pmu_evtyper,
-	  .reset = reset_val, .reg = PMCCFILTR_EL0, .val = 0 },
+	  .reg = PMCCFILTR_EL0 },
 
 	{ SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 },
 	{ SYS_DESC(SYS_IFSR32_EL2), NULL, reset_unknown, IFSR32_EL2 },
-- 
2.39.0.314.g84b9a713c41-goog


^ permalink raw reply related	[flat|nested] 46+ messages in thread


* [PATCH v2 3/8] KVM: arm64: PMU: Preserve vCPU's PMCR_EL0.N value on vCPU reset
  2023-01-17  1:35 ` Reiji Watanabe
@ 2023-01-17  1:35   ` Reiji Watanabe
  -1 siblings, 0 replies; 46+ messages in thread
From: Reiji Watanabe @ 2023-01-17  1:35 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Oliver Upton,
	Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

The number of PMU event counters is indicated in PMCR_EL0.N.
For a vCPU with PMUv3 configured, its value will be the same as
the host value by default. Userspace can set PMCR_EL0.N for the
vCPU to a lower value than the host value using KVM_SET_ONE_REG.
However, it is practically unsupported, as reset_pmcr() resets
PMCR_EL0.N to the host value on vCPU reset.

Change reset_pmcr() to preserve the vCPU's PMCR_EL0.N value on
vCPU reset so that userspace can limit the number of PMU
event counters on the vCPU.
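The resulting lifecycle can be sketched as a tiny standalone model (hypothetical helper names; the real code lives in kvm_pmu_vcpu_init() and reset_pmcr(), and the LC-bit handling is elided): N is seeded from the host exactly once at vCPU creation, then survives every subsequent reset while the rest of PMCR_EL0 is cleared.

```c
#include <assert.h>
#include <stdint.h>

#define PMCR_N_SHIFT	11
#define PMCR_N_FIELD	(0x1fULL << PMCR_N_SHIFT)

static uint64_t vcpu_pmcr;	/* stand-in for __vcpu_sys_reg(vcpu, PMCR_EL0) */

/* Models kvm_pmu_vcpu_init(): seed PMCR_EL0 with the host value. */
static void model_vcpu_init(uint64_t host_pmcr)
{
	vcpu_pmcr = host_pmcr;
}

/* Models reset_pmcr() after this patch: only N survives the reset. */
static void model_reset_pmcr(void)
{
	vcpu_pmcr &= PMCR_N_FIELD;
}
```

If userspace lowers N between resets, the lowered value (not the host's) is what the next reset preserves, which is the whole point of the patch.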

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/pmu-emul.c | 6 ++++++
 arch/arm64/kvm/sys_regs.c | 4 +++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 24908400e190..937a272b00a5 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -213,6 +213,12 @@ void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu)
 
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
 		pmu->pmc[i].idx = i;
+
+	/*
+	 * Initialize PMCR_EL0 for the vCPU with the host value so that
+	 * the value is available at the very first vCPU reset.
+	 */
+	__vcpu_sys_reg(vcpu, PMCR_EL0) = read_sysreg(pmcr_el0);
 }
 
 /**
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 4959658b502c..67c1bd39b478 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -637,8 +637,10 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	if (!kvm_arm_support_pmu_v3())
 		return;
 
+	/* PMCR_EL0 for the vCPU is set to the host value at vCPU creation. */
+
 	/* Only preserve PMCR_EL0.N, and reset the rest to 0 */
-	pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
+	pmcr = __vcpu_sys_reg(vcpu, r->reg) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
 	if (!kvm_supports_32bit_el0())
 		pmcr |= ARMV8_PMU_PMCR_LC;
 
-- 
2.39.0.314.g84b9a713c41-goog


^ permalink raw reply related	[flat|nested] 46+ messages in thread


* [PATCH v2 4/8] KVM: arm64: PMU: Disallow userspace to set PMCR.N greater than the host value
  2023-01-17  1:35 ` Reiji Watanabe
@ 2023-01-17  1:35   ` Reiji Watanabe
  -1 siblings, 0 replies; 46+ messages in thread
From: Reiji Watanabe @ 2023-01-17  1:35 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Oliver Upton,
	Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Currently, KVM allows userspace to set PMCR_EL0 to any values
with KVM_SET_ONE_REG for a vCPU with PMUv3 configured.

Disallow userspace from setting PMCR_EL0.N to a value greater
than the host value (KVM_SET_ONE_REG will fail), as KVM doesn't
support more event counters than the host hardware implements.
Although this is an ABI change, it only affects userspace
setting PMCR_EL0.N to a larger value than the host's.
Since accesses to unadvertised event counter indices are
CONSTRAINED UNPREDICTABLE behavior, and PMCR_EL0.N was reset to
the host value on every vCPU reset before this series, it is hard
to think of any use case where userspace would do that.

Also, ignore writes to read-only bits that are cleared on vCPU reset,
and RES{0,1} bits (including writable bits that KVM doesn't support
yet), as those bits shouldn't be modified (at least with
the current KVM).

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 39 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 38 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 67c1bd39b478..e4bff9621473 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -958,6 +958,43 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	return true;
 }
 
+static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
+		    u64 val)
+{
+	u64 host_pmcr, host_n, new_n, mutable_mask;
+
+	new_n = (val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
+
+	host_pmcr = read_sysreg(pmcr_el0);
+	host_n = (host_pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
+
+	/* The vCPU can't have more counters than the host has. */
+	if (new_n > host_n)
+		return -EINVAL;
+
+	/*
+	 * Ignore writes to RES0 bits, read-only bits that are cleared on
+	 * vCPU reset, and writable bits that KVM doesn't support yet.
+	 * (i.e. only PMCR.N and bits [7:0] are mutable from userspace)
+	 * The LP bit is RES0 when FEAT_PMUv3p5 is not supported on the vCPU.
+	 * But, we leave the bit as it is here, as the vCPU's PMUver might
+	 * be changed later (NOTE: the bit will be cleared on first vCPU run
+	 * if necessary).
+	 */
+	mutable_mask = (ARMV8_PMU_PMCR_MASK |
+			(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT));
+	val &= mutable_mask;
+	val |= (__vcpu_sys_reg(vcpu, r->reg) & ~mutable_mask);
+
+	/* The LC bit is RES1 when AArch32 is not supported */
+	if (!kvm_supports_32bit_el0())
+		val |= ARMV8_PMU_PMCR_LC;
+
+	__vcpu_sys_reg(vcpu, r->reg) = val;
+
+	return 0;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	{ SYS_DESC(SYS_DBGBVRn_EL1(n)),					\
@@ -1718,7 +1755,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_SVCR), undef_access },
 
 	{ PMU_SYS_REG(SYS_PMCR_EL0), .access = access_pmcr,
-	  .reset = reset_pmcr, .reg = PMCR_EL0 },
+	  .reset = reset_pmcr, .reg = PMCR_EL0, .set_user = set_pmcr },
 	{ PMU_SYS_REG(SYS_PMCNTENSET_EL0),
 	  .access = access_pmcnten, .reg = PMCNTENSET_EL0 },
 	{ PMU_SYS_REG(SYS_PMCNTENCLR_EL0),
-- 
2.39.0.314.g84b9a713c41-goog


^ permalink raw reply related	[flat|nested] 46+ messages in thread


* [PATCH v2 5/8] tools: arm64: Import perf_event.h
  2023-01-17  1:35 ` Reiji Watanabe
@ 2023-01-17  1:35   ` Reiji Watanabe
  -1 siblings, 0 replies; 46+ messages in thread
From: Reiji Watanabe @ 2023-01-17  1:35 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Oliver Upton,
	Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Copy perf_event.h from the kernel's arch/arm64/include/asm/perf_event.h.
The following patches will use macros defined in this header.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 tools/arch/arm64/include/asm/perf_event.h | 258 ++++++++++++++++++++++
 1 file changed, 258 insertions(+)
 create mode 100644 tools/arch/arm64/include/asm/perf_event.h

diff --git a/tools/arch/arm64/include/asm/perf_event.h b/tools/arch/arm64/include/asm/perf_event.h
new file mode 100644
index 000000000000..b2ae51f5f93d
--- /dev/null
+++ b/tools/arch/arm64/include/asm/perf_event.h
@@ -0,0 +1,258 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#ifndef __ASM_PERF_EVENT_H
+#define __ASM_PERF_EVENT_H
+
+#define	ARMV8_PMU_MAX_COUNTERS	32
+#define	ARMV8_PMU_COUNTER_MASK	(ARMV8_PMU_MAX_COUNTERS - 1)
+
+/*
+ * Common architectural and microarchitectural event numbers.
+ */
+#define ARMV8_PMUV3_PERFCTR_SW_INCR				0x0000
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL			0x0001
+#define ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL			0x0002
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL			0x0003
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE				0x0004
+#define ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL			0x0005
+#define ARMV8_PMUV3_PERFCTR_LD_RETIRED				0x0006
+#define ARMV8_PMUV3_PERFCTR_ST_RETIRED				0x0007
+#define ARMV8_PMUV3_PERFCTR_INST_RETIRED			0x0008
+#define ARMV8_PMUV3_PERFCTR_EXC_TAKEN				0x0009
+#define ARMV8_PMUV3_PERFCTR_EXC_RETURN				0x000A
+#define ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED			0x000B
+#define ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED			0x000C
+#define ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED			0x000D
+#define ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED			0x000E
+#define ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED		0x000F
+#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED				0x0010
+#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES				0x0011
+#define ARMV8_PMUV3_PERFCTR_BR_PRED				0x0012
+#define ARMV8_PMUV3_PERFCTR_MEM_ACCESS				0x0013
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE				0x0014
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB			0x0015
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE				0x0016
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL			0x0017
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB			0x0018
+#define ARMV8_PMUV3_PERFCTR_BUS_ACCESS				0x0019
+#define ARMV8_PMUV3_PERFCTR_MEMORY_ERROR			0x001A
+#define ARMV8_PMUV3_PERFCTR_INST_SPEC				0x001B
+#define ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED			0x001C
+#define ARMV8_PMUV3_PERFCTR_BUS_CYCLES				0x001D
+#define ARMV8_PMUV3_PERFCTR_CHAIN				0x001E
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE			0x001F
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE			0x0020
+#define ARMV8_PMUV3_PERFCTR_BR_RETIRED				0x0021
+#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED_RETIRED			0x0022
+#define ARMV8_PMUV3_PERFCTR_STALL_FRONTEND			0x0023
+#define ARMV8_PMUV3_PERFCTR_STALL_BACKEND			0x0024
+#define ARMV8_PMUV3_PERFCTR_L1D_TLB				0x0025
+#define ARMV8_PMUV3_PERFCTR_L1I_TLB				0x0026
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE				0x0027
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_REFILL			0x0028
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_ALLOCATE			0x0029
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL			0x002A
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE				0x002B
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB			0x002C
+#define ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL			0x002D
+#define ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL			0x002E
+#define ARMV8_PMUV3_PERFCTR_L2D_TLB				0x002F
+#define ARMV8_PMUV3_PERFCTR_L2I_TLB				0x0030
+#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS			0x0031
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE				0x0032
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS			0x0033
+#define ARMV8_PMUV3_PERFCTR_DTLB_WALK				0x0034
+#define ARMV8_PMUV3_PERFCTR_ITLB_WALK				0x0035
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_RD				0x0036
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS_RD			0x0037
+#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS_RD			0x0038
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_LMISS_RD			0x0039
+#define ARMV8_PMUV3_PERFCTR_OP_RETIRED				0x003A
+#define ARMV8_PMUV3_PERFCTR_OP_SPEC				0x003B
+#define ARMV8_PMUV3_PERFCTR_STALL				0x003C
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND			0x003D
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND			0x003E
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT				0x003F
+
+/* Statistical profiling extension microarchitectural events */
+#define	ARMV8_SPE_PERFCTR_SAMPLE_POP				0x4000
+#define	ARMV8_SPE_PERFCTR_SAMPLE_FEED				0x4001
+#define	ARMV8_SPE_PERFCTR_SAMPLE_FILTRATE			0x4002
+#define	ARMV8_SPE_PERFCTR_SAMPLE_COLLISION			0x4003
+
+/* AMUv1 architecture events */
+#define	ARMV8_AMU_PERFCTR_CNT_CYCLES				0x4004
+#define	ARMV8_AMU_PERFCTR_STALL_BACKEND_MEM			0x4005
+
+/* long-latency read miss events */
+#define	ARMV8_PMUV3_PERFCTR_L1I_CACHE_LMISS			0x4006
+#define	ARMV8_PMUV3_PERFCTR_L2D_CACHE_LMISS_RD			0x4009
+#define	ARMV8_PMUV3_PERFCTR_L2I_CACHE_LMISS			0x400A
+#define	ARMV8_PMUV3_PERFCTR_L3D_CACHE_LMISS_RD			0x400B
+
+/* Trace buffer events */
+#define ARMV8_PMUV3_PERFCTR_TRB_WRAP				0x400C
+#define ARMV8_PMUV3_PERFCTR_TRB_TRIG				0x400E
+
+/* Trace unit events */
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT0				0x4010
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT1				0x4011
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT2				0x4012
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT3				0x4013
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT4			0x4018
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT5			0x4019
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT6			0x401A
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT7			0x401B
+
+/* additional latency from alignment events */
+#define	ARMV8_PMUV3_PERFCTR_LDST_ALIGN_LAT			0x4020
+#define	ARMV8_PMUV3_PERFCTR_LD_ALIGN_LAT			0x4021
+#define	ARMV8_PMUV3_PERFCTR_ST_ALIGN_LAT			0x4022
+
+/* Armv8.5 Memory Tagging Extension events */
+#define	ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED			0x4024
+#define	ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_RD			0x4025
+#define	ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_WR			0x4026
+
+/* ARMv8 recommended implementation defined event types */
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD			0x0040
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR			0x0041
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD		0x0042
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR		0x0043
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_INNER		0x0044
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_OUTER		0x0045
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_VICTIM		0x0046
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_CLEAN			0x0047
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_INVAL			0x0048
+
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD			0x004C
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR			0x004D
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD				0x004E
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR				0x004F
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_RD			0x0050
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WR			0x0051
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_RD		0x0052
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_WR		0x0053
+
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_VICTIM		0x0056
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_CLEAN			0x0057
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_INVAL			0x0058
+
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_RD			0x005C
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_WR			0x005D
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_RD				0x005E
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_WR				0x005F
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD			0x0060
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR			0x0061
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_SHARED			0x0062
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NOT_SHARED		0x0063
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NORMAL			0x0064
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_PERIPH			0x0065
+#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_RD			0x0066
+#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_WR			0x0067
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LD_SPEC			0x0068
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_ST_SPEC			0x0069
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LDST_SPEC		0x006A
+
+#define ARMV8_IMPDEF_PERFCTR_LDREX_SPEC				0x006C
+#define ARMV8_IMPDEF_PERFCTR_STREX_PASS_SPEC			0x006D
+#define ARMV8_IMPDEF_PERFCTR_STREX_FAIL_SPEC			0x006E
+#define ARMV8_IMPDEF_PERFCTR_STREX_SPEC				0x006F
+#define ARMV8_IMPDEF_PERFCTR_LD_SPEC				0x0070
+#define ARMV8_IMPDEF_PERFCTR_ST_SPEC				0x0071
+#define ARMV8_IMPDEF_PERFCTR_LDST_SPEC				0x0072
+#define ARMV8_IMPDEF_PERFCTR_DP_SPEC				0x0073
+#define ARMV8_IMPDEF_PERFCTR_ASE_SPEC				0x0074
+#define ARMV8_IMPDEF_PERFCTR_VFP_SPEC				0x0075
+#define ARMV8_IMPDEF_PERFCTR_PC_WRITE_SPEC			0x0076
+#define ARMV8_IMPDEF_PERFCTR_CRYPTO_SPEC			0x0077
+#define ARMV8_IMPDEF_PERFCTR_BR_IMMED_SPEC			0x0078
+#define ARMV8_IMPDEF_PERFCTR_BR_RETURN_SPEC			0x0079
+#define ARMV8_IMPDEF_PERFCTR_BR_INDIRECT_SPEC			0x007A
+
+#define ARMV8_IMPDEF_PERFCTR_ISB_SPEC				0x007C
+#define ARMV8_IMPDEF_PERFCTR_DSB_SPEC				0x007D
+#define ARMV8_IMPDEF_PERFCTR_DMB_SPEC				0x007E
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_UNDEF				0x0081
+#define ARMV8_IMPDEF_PERFCTR_EXC_SVC				0x0082
+#define ARMV8_IMPDEF_PERFCTR_EXC_PABORT				0x0083
+#define ARMV8_IMPDEF_PERFCTR_EXC_DABORT				0x0084
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_IRQ				0x0086
+#define ARMV8_IMPDEF_PERFCTR_EXC_FIQ				0x0087
+#define ARMV8_IMPDEF_PERFCTR_EXC_SMC				0x0088
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_HVC				0x008A
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_PABORT			0x008B
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_DABORT			0x008C
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_OTHER			0x008D
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_IRQ			0x008E
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_FIQ			0x008F
+#define ARMV8_IMPDEF_PERFCTR_RC_LD_SPEC				0x0090
+#define ARMV8_IMPDEF_PERFCTR_RC_ST_SPEC				0x0091
+
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_RD			0x00A0
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WR			0x00A1
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_RD		0x00A2
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_WR		0x00A3
+
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_VICTIM		0x00A6
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_CLEAN			0x00A7
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_INVAL			0x00A8
+
+/*
+ * Per-CPU PMCR: config reg
+ */
+#define ARMV8_PMU_PMCR_E	(1 << 0) /* Enable all counters */
+#define ARMV8_PMU_PMCR_P	(1 << 1) /* Reset all counters */
+#define ARMV8_PMU_PMCR_C	(1 << 2) /* Cycle counter reset */
+#define ARMV8_PMU_PMCR_D	(1 << 3) /* CCNT counts every 64th cpu cycle */
+#define ARMV8_PMU_PMCR_X	(1 << 4) /* Export to ETM */
+#define ARMV8_PMU_PMCR_DP	(1 << 5) /* Disable CCNT if non-invasive debug*/
+#define ARMV8_PMU_PMCR_LC	(1 << 6) /* Overflow on 64 bit cycle counter */
+#define ARMV8_PMU_PMCR_LP	(1 << 7) /* Long event counter enable */
+#define	ARMV8_PMU_PMCR_N_SHIFT	11	 /* Number of counters supported */
+#define	ARMV8_PMU_PMCR_N_MASK	0x1f
+#define	ARMV8_PMU_PMCR_MASK	0xff	 /* Mask for writable bits */
+
+/*
+ * PMOVSR: counters overflow flag status reg
+ */
+#define	ARMV8_PMU_OVSR_MASK		0xffffffff	/* Mask for writable bits */
+#define	ARMV8_PMU_OVERFLOWED_MASK	ARMV8_PMU_OVSR_MASK
+
+/*
+ * PMXEVTYPER: Event selection reg
+ */
+#define	ARMV8_PMU_EVTYPE_MASK	0xc800ffff	/* Mask for writable bits */
+#define	ARMV8_PMU_EVTYPE_EVENT	0xffff		/* Mask for EVENT bits */
+
+/*
+ * Event filters for PMUv3
+ */
+#define	ARMV8_PMU_EXCLUDE_EL1	(1U << 31)
+#define	ARMV8_PMU_EXCLUDE_EL0	(1U << 30)
+#define	ARMV8_PMU_INCLUDE_EL2	(1U << 27)
+
+/*
+ * PMUSERENR: user enable reg
+ */
+#define ARMV8_PMU_USERENR_MASK	0xf		/* Mask for writable bits */
+#define ARMV8_PMU_USERENR_EN	(1 << 0) /* PMU regs can be accessed at EL0 */
+#define ARMV8_PMU_USERENR_SW	(1 << 1) /* PMSWINC can be written at EL0 */
+#define ARMV8_PMU_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
+#define ARMV8_PMU_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */
+
+/* PMMIR_EL1.SLOTS mask */
+#define ARMV8_PMU_SLOTS_MASK	0xff
+
+#define ARMV8_PMU_BUS_SLOTS_SHIFT 8
+#define ARMV8_PMU_BUS_SLOTS_MASK 0xff
+#define ARMV8_PMU_BUS_WIDTH_SHIFT 16
+#define ARMV8_PMU_BUS_WIDTH_MASK 0xf
+
+#endif /* __ASM_PERF_EVENT_H */
-- 
2.39.0.314.g84b9a713c41-goog


^ permalink raw reply related	[flat|nested] 46+ messages in thread


* [PATCH v2 6/8] KVM: selftests: aarch64: Introduce vpmu_counter_access test
  2023-01-17  1:35 ` Reiji Watanabe
@ 2023-01-17  1:35   ` Reiji Watanabe
  0 siblings, 0 replies; 46+ messages in thread
From: Reiji Watanabe @ 2023-01-17  1:35 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Oliver Upton,
	Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Introduce the vpmu_counter_access test for arm64 platforms.
The test configures PMUv3 for a vCPU, sets PMCR_EL0.N for the vCPU,
and checks whether the guest consistently sees the same number of
PMU event counters (PMCR_EL0.N) that userspace sets.
The test is run with each of the PMCR_EL0.N values from 0 to 31
(for PMCR_EL0.N values greater than the host value, the test
expects KVM_SET_ONE_REG for PMCR_EL0 to fail).

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../kvm/aarch64/vpmu_counter_access.c         | 212 ++++++++++++++++++
 2 files changed, 213 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 1750f91dd936..b27fea0ce591 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -143,6 +143,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/psci_test
 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
+TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
 TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
new file mode 100644
index 000000000000..704a2500b7e1
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -0,0 +1,212 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * vpmu_counter_access - Test vPMU event counter access
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
+ * This test checks if the guest can see the same number of the PMU event
+ * counters (PMCR_EL1.N) that userspace sets.
+ * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
+ */
+#include <kvm_util.h>
+#include <processor.h>
+#include <test_util.h>
+#include <vgic.h>
+#include <asm/perf_event.h>
+#include <linux/bitfield.h>
+
+/* The max number of the PMU event counters (excluding the cycle counter) */
+#define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
+
+static uint64_t pmcr_extract_n(uint64_t pmcr_val)
+{
+	return (pmcr_val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
+}
+
+/*
+ * The guest is configured with PMUv3 with @expected_pmcr_n number of
+ * event counters.
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
+ */
+static void guest_code(uint64_t expected_pmcr_n)
+{
+	uint64_t pmcr, pmcr_n;
+
+	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
+
+	pmcr = read_sysreg(pmcr_el0);
+	pmcr_n = pmcr_extract_n(pmcr);
+
+	/* Make sure that PMCR_EL0.N indicates the value userspace set */
+	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
+
+	GUEST_DONE();
+}
+
+#define GICD_BASE_GPA	0x8000000ULL
+#define GICR_BASE_GPA	0x80A0000ULL
+
+/* Create a VM that has one vCPU with PMUv3 configured. */
+static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
+				     int *gic_fd)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	struct kvm_vcpu_init init;
+	uint8_t pmuver;
+	uint64_t dfr0, irq = 23;
+	struct kvm_device_attr irq_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
+		.addr = (uint64_t)&irq,
+	};
+	struct kvm_device_attr init_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
+	};
+
+	vm = vm_create(1);
+
+	/* Create vCPU with PMUv3 */
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+	*gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+
+	/* Make sure that PMUv3 support is indicated in the ID register */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
+	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
+	TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
+		    pmuver >= ID_AA64DFR0_PMUVER_8_0,
+		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
+
+	/* Initialize vPMU */
+	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
+	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+
+	*vcpup = vcpu;
+	return vm;
+}
+
+static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
+{
+	struct ucall uc;
+
+	vcpu_args_set(vcpu, 1, pmcr_n);
+	vcpu_run(vcpu);
+	switch (get_ucall(vcpu, &uc)) {
+	case UCALL_ABORT:
+		REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
+		break;
+	case UCALL_DONE:
+		break;
+	default:
+		TEST_FAIL("Unknown ucall %lu", uc.cmd);
+		break;
+	}
+}
+
+/*
+ * Create a guest with one vCPU, set PMCR_EL0.N for the vCPU to @pmcr_n,
+ * and run the test.
+ */
+static void run_test(uint64_t pmcr_n)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+	uint64_t sp, pmcr, pmcr_orig;
+	struct kvm_vcpu_init init;
+
+	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+
+	/* Save the initial sp to restore it later when rerunning the guest */
+	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
+
+	/* Update PMCR_EL0.N with @pmcr_n */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
+	pmcr = pmcr_orig & ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
+	pmcr |= (pmcr_n & ARMV8_PMU_PMCR_N_MASK) << ARMV8_PMU_PMCR_N_SHIFT;
+	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
+
+	run_vcpu(vcpu, pmcr_n);
+
+	/*
+	 * Reset and re-initialize the vCPU, and run the guest code again to
+	 * check if PMCR_EL0.N is preserved.
+	 */
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	aarch64_vcpu_setup(vcpu, &init);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
+
+	run_vcpu(vcpu, pmcr_n);
+
+	close(gic_fd);
+	kvm_vm_free(vm);
+}
+
+/*
+ * Create a guest with one vCPU, and attempt to set PMCR_EL0.N for
+ * the vCPU to @pmcr_n, which is larger than the host value.
+ * The attempt should fail as @pmcr_n is too big to set for the vCPU.
+ */
+static void run_error_test(uint64_t pmcr_n)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd, ret;
+	uint64_t pmcr, pmcr_orig;
+
+	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+
+	/* Update PMCR_EL0.N with @pmcr_n */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
+	pmcr = pmcr_orig & ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
+	pmcr |= (pmcr_n & ARMV8_PMU_PMCR_N_MASK) << ARMV8_PMU_PMCR_N_SHIFT;
+
+	/* This should fail as @pmcr_n is too big to set for the vCPU */
+	ret = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
+	TEST_ASSERT(ret, "Setting PMCR to 0x%lx (orig PMCR 0x%lx) didn't fail",
+		    pmcr, pmcr_orig);
+
+	close(gic_fd);
+	kvm_vm_free(vm);
+}
+
+/*
+ * Return the default number of implemented PMU event counters excluding
+ * the cycle counter (i.e. the PMCR_EL0.N value) for the guest.
+ */
+static uint64_t get_pmcr_n_limit(void)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+	uint64_t pmcr;
+
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	close(gic_fd);
+	kvm_vm_free(vm);
+	return pmcr_extract_n(pmcr);
+}
+
+int main(void)
+{
+	uint64_t i, pmcr_n;
+
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+
+	pmcr_n = get_pmcr_n_limit();
+	for (i = 0; i <= pmcr_n; i++)
+		run_test(i);
+
+	for (i = pmcr_n + 1; i <= ARMV8_PMU_PMCR_N_MASK; i++)
+		run_error_test(i);
+
+	return 0;
+}
-- 
2.39.0.314.g84b9a713c41-goog


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH v2 6/8] KVM: selftests: aarch64: Introduce vpmu_counter_access test
@ 2023-01-17  1:35   ` Reiji Watanabe
  0 siblings, 0 replies; 46+ messages in thread
From: Reiji Watanabe @ 2023-01-17  1:35 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Oliver Upton,
	Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Introduce the vpmu_counter_access test for arm64 platforms.
The test configures PMUv3 for a vCPU, sets PMCR_EL0.N for the vCPU,
and checks whether the guest consistently sees the same number of
PMU event counters (PMCR_EL0.N) that userspace sets.
The test is run with each of the PMCR_EL0.N values from 0 to 31
(for PMCR_EL0.N values greater than the host value, the test
expects KVM_SET_ONE_REG for PMCR_EL0 to fail).

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../kvm/aarch64/vpmu_counter_access.c         | 212 ++++++++++++++++++
 2 files changed, 213 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 1750f91dd936..b27fea0ce591 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -143,6 +143,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/psci_test
 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
+TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
 TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
new file mode 100644
index 000000000000..704a2500b7e1
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -0,0 +1,212 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * vpmu_counter_access - Test vPMU event counter access
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
+ * This test checks if the guest can see the same number of the PMU event
+ * counters (PMCR_EL1.N) that userspace sets.
+ * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
+ */
+#include <kvm_util.h>
+#include <processor.h>
+#include <test_util.h>
+#include <vgic.h>
+#include <asm/perf_event.h>
+#include <linux/bitfield.h>
+
+/* The max number of the PMU event counters (excluding the cycle counter) */
+#define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
+
+static uint64_t pmcr_extract_n(uint64_t pmcr_val)
+{
+	return (pmcr_val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
+}
+
+/*
+ * The guest is configured with PMUv3 with @expected_pmcr_n number of
+ * event counters.
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
+ */
+static void guest_code(uint64_t expected_pmcr_n)
+{
+	uint64_t pmcr, pmcr_n;
+
+	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
+
+	pmcr = read_sysreg(pmcr_el0);
+	pmcr_n = pmcr_extract_n(pmcr);
+
+	/* Make sure that PMCR_EL0.N indicates the value userspace set */
+	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
+
+	GUEST_DONE();
+}
+
+#define GICD_BASE_GPA	0x8000000ULL
+#define GICR_BASE_GPA	0x80A0000ULL
+
+/* Create a VM that has one vCPU with PMUv3 configured. */
+static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
+				     int *gic_fd)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	struct kvm_vcpu_init init;
+	uint8_t pmuver;
+	uint64_t dfr0, irq = 23;
+	struct kvm_device_attr irq_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
+		.addr = (uint64_t)&irq,
+	};
+	struct kvm_device_attr init_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
+	};
+
+	vm = vm_create(1);
+
+	/* Create vCPU with PMUv3 */
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+	*gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+
+	/* Make sure that PMUv3 support is indicated in the ID register */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
+	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
+	TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
+		    pmuver >= ID_AA64DFR0_PMUVER_8_0,
+		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
+
+	/* Initialize vPMU */
+	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
+	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+
+	*vcpup = vcpu;
+	return vm;
+}
+
+static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
+{
+	struct ucall uc;
+
+	vcpu_args_set(vcpu, 1, pmcr_n);
+	vcpu_run(vcpu);
+	switch (get_ucall(vcpu, &uc)) {
+	case UCALL_ABORT:
+		REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
+		break;
+	case UCALL_DONE:
+		break;
+	default:
+		TEST_FAIL("Unknown ucall %lu", uc.cmd);
+		break;
+	}
+}
+
+/*
+ * Create a guest with one vCPU, set the PMCR_EL1.N for the vCPU to @pmcr_n,
+ * and run the test.
+ */
+static void run_test(uint64_t pmcr_n)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+	uint64_t sp, pmcr, pmcr_orig;
+	struct kvm_vcpu_init init;
+
+	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+
+	/* Save the initial sp to restore them later to run the guest again */
+	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
+
+	/* Update the PMCR_EL1.N with @pmcr_n */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
+	pmcr = pmcr_orig & ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
+	pmcr |= (pmcr_n & ARMV8_PMU_PMCR_N_MASK) << ARMV8_PMU_PMCR_N_SHIFT;
+	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
+
+	run_vcpu(vcpu, pmcr_n);
+
+	/*
+	 * Reset and re-initialize the vCPU, and run the guest code again to
+	 * check if PMCR_EL1.N is preserved.
+	 */
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	aarch64_vcpu_setup(vcpu, &init);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
+
+	run_vcpu(vcpu, pmcr_n);
+
+	close(gic_fd);
+	kvm_vm_free(vm);
+}
+
+/*
+ * Create a guest with one vCPU, and attempt to set the PMCR_EL1.N for
+ * the vCPU to @pmcr_n, which is larger than the host value.
+ * The attempt should fail as @pmcr_n is too big to set for the vCPU.
+ */
+static void run_error_test(uint64_t pmcr_n)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd, ret;
+	uint64_t pmcr, pmcr_orig;
+
+	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+
+	/* Update the PMCR_EL1.N with @pmcr_n */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
+	pmcr = pmcr_orig & ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
+	pmcr |= (pmcr_n & ARMV8_PMU_PMCR_N_MASK) << ARMV8_PMU_PMCR_N_SHIFT;
+
+	/* This should fail as @pmcr_n is too big to set for the vCPU */
+	ret = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
+	TEST_ASSERT(ret, "Setting PMCR to 0x%lx (orig PMCR 0x%lx) didn't fail",
+		    pmcr, pmcr_orig);
+
+	close(gic_fd);
+	kvm_vm_free(vm);
+}
+
+/*
+ * Return the default number of implemented PMU event counters excluding
+ * the cycle counter (i.e. PMCR_EL1.N value) for the guest.
+ */
+static uint64_t get_pmcr_n_limit(void)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+	uint64_t pmcr;
+
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	close(gic_fd);
+	kvm_vm_free(vm);
+	return pmcr_extract_n(pmcr);
+}
+
+int main(void)
+{
+	uint64_t i, pmcr_n;
+
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+
+	pmcr_n = get_pmcr_n_limit();
+	for (i = 0; i <= pmcr_n; i++)
+		run_test(i);
+
+	for (i = pmcr_n + 1; i < ARMV8_PMU_PMCR_N_MASK; i++)
+		run_error_test(i);
+
+	return 0;
+}
-- 
2.39.0.314.g84b9a713c41-goog


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH v2 7/8] KVM: selftests: aarch64: vPMU register test for implemented counters
  2023-01-17  1:35 ` Reiji Watanabe
@ 2023-01-17  1:35   ` Reiji Watanabe
  -1 siblings, 0 replies; 46+ messages in thread
From: Reiji Watanabe @ 2023-01-17  1:35 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Oliver Upton,
	Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add a new test case to the vpmu_counter_access test to check if PMU
registers or their bits for implemented counters on the vCPU are
readable/writable as expected, and can be programmed to count events.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 .../kvm/aarch64/vpmu_counter_access.c         | 347 +++++++++++++++++-
 1 file changed, 344 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index 704a2500b7e1..54b69c76c824 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -5,7 +5,8 @@
  * Copyright (c) 2022 Google LLC.
  *
  * This test checks if the guest can see the same number of the PMU event
- * counters (PMCR_EL1.N) that userspace sets.
+ * counters (PMCR_EL1.N) that userspace sets, and if the guest can access
+ * those counters.
  * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
  */
 #include <kvm_util.h>
@@ -18,19 +19,350 @@
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
 
+/*
+ * The macros and functions below for reading/writing PMEV{CNTR,TYPER}<n>_EL0
+ * were basically copied from arch/arm64/kernel/perf_event.c.
+ */
+#define PMEVN_CASE(n, case_macro) \
+	case n: case_macro(n); break
+
+#define PMEVN_SWITCH(x, case_macro)				\
+	do {							\
+		switch (x) {					\
+		PMEVN_CASE(0,  case_macro);			\
+		PMEVN_CASE(1,  case_macro);			\
+		PMEVN_CASE(2,  case_macro);			\
+		PMEVN_CASE(3,  case_macro);			\
+		PMEVN_CASE(4,  case_macro);			\
+		PMEVN_CASE(5,  case_macro);			\
+		PMEVN_CASE(6,  case_macro);			\
+		PMEVN_CASE(7,  case_macro);			\
+		PMEVN_CASE(8,  case_macro);			\
+		PMEVN_CASE(9,  case_macro);			\
+		PMEVN_CASE(10, case_macro);			\
+		PMEVN_CASE(11, case_macro);			\
+		PMEVN_CASE(12, case_macro);			\
+		PMEVN_CASE(13, case_macro);			\
+		PMEVN_CASE(14, case_macro);			\
+		PMEVN_CASE(15, case_macro);			\
+		PMEVN_CASE(16, case_macro);			\
+		PMEVN_CASE(17, case_macro);			\
+		PMEVN_CASE(18, case_macro);			\
+		PMEVN_CASE(19, case_macro);			\
+		PMEVN_CASE(20, case_macro);			\
+		PMEVN_CASE(21, case_macro);			\
+		PMEVN_CASE(22, case_macro);			\
+		PMEVN_CASE(23, case_macro);			\
+		PMEVN_CASE(24, case_macro);			\
+		PMEVN_CASE(25, case_macro);			\
+		PMEVN_CASE(26, case_macro);			\
+		PMEVN_CASE(27, case_macro);			\
+		PMEVN_CASE(28, case_macro);			\
+		PMEVN_CASE(29, case_macro);			\
+		PMEVN_CASE(30, case_macro);			\
+		default:					\
+			GUEST_ASSERT_1(0, x);			\
+		}						\
+	} while (0)
+
+#define RETURN_READ_PMEVCNTRN(n) \
+	return read_sysreg(pmevcntr##n##_el0)
+static unsigned long read_pmevcntrn(int n)
+{
+	PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
+	return 0;
+}
+
+#define WRITE_PMEVCNTRN(n) \
+	write_sysreg(val, pmevcntr##n##_el0)
+static void write_pmevcntrn(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
+	isb();
+}
+
+#define READ_PMEVTYPERN(n) \
+	return read_sysreg(pmevtyper##n##_el0)
+static unsigned long read_pmevtypern(int n)
+{
+	PMEVN_SWITCH(n, READ_PMEVTYPERN);
+	return 0;
+}
+
+#define WRITE_PMEVTYPERN(n) \
+	write_sysreg(val, pmevtyper##n##_el0)
+static void write_pmevtypern(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
+	isb();
+}
+
+/* Read PMEVCNTR<n>_EL0 through PMXEVCNTR_EL0 */
+static inline unsigned long read_sel_evcntr(int sel)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	return read_sysreg(pmxevcntr_el0);
+}
+
+/* Write PMEVCNTR<n>_EL0 through PMXEVCNTR_EL0 */
+static inline void write_sel_evcntr(int sel, unsigned long val)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	write_sysreg(val, pmxevcntr_el0);
+	isb();
+}
+
+/* Read PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */
+static inline unsigned long read_sel_evtyper(int sel)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	return read_sysreg(pmxevtyper_el0);
+}
+
+/* Write PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */
+static inline void write_sel_evtyper(int sel, unsigned long val)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	write_sysreg(val, pmxevtyper_el0);
+	isb();
+}
+
+static inline void enable_counter(int idx)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(BIT(idx) | v, pmcntenset_el0);
+	isb();
+}
+
+static inline void disable_counter(int idx)
+{
+	/* Only the bits written as 1 take effect in PMCNTENCLR_EL0 */
+	write_sysreg(BIT(idx), pmcntenclr_el0);
+	isb();
+}
+
+/*
+ * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}<n>_EL0
+ * accessors that test cases will use. Each accessor either directly
+ * reads/writes PMEV{CNTR,TYPER}<n>_EL0
+ * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through
+ * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
+ *
+ * This is used to test that combinations of those accessors provide
+ * consistent behavior.
+ */
+struct pmc_accessor {
+	/* A function to be used to read PMEVCNTR<n>_EL0 */
+	unsigned long	(*read_cntr)(int idx);
+	/* A function to be used to write PMEVCNTR<n>_EL0 */
+	void		(*write_cntr)(int idx, unsigned long val);
+	/* A function to be used to read PMEVTYPER<n>_EL0 */
+	unsigned long	(*read_typer)(int idx);
+	/* A function to be used to write PMEVTYPER<n>_EL0 */
+	void		(*write_typer)(int idx, unsigned long val);
+};
+
+struct pmc_accessor pmc_accessors[] = {
+	/* test with all direct accesses */
+	{ read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern },
+	/* test with all indirect accesses */
+	{ read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper },
+	/* read with direct accesses, and write with indirect accesses */
+	{ read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper },
+	/* read with indirect accesses, and write with direct accesses */
+	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
+};
+
+static void pmu_disable_reset(void)
+{
+	uint64_t pmcr = read_sysreg(pmcr_el0);
+
+	/* Reset all counters, disabling them */
+	pmcr &= ~ARMV8_PMU_PMCR_E;
+	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
+	isb();
+}
+
+static void pmu_enable(void)
+{
+	uint64_t pmcr = read_sysreg(pmcr_el0);
+
+	/* Reset all counters and enable the PMU */
+	pmcr |= ARMV8_PMU_PMCR_E;
+	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
+	isb();
+}
+
+static bool pmu_event_is_supported(uint64_t event)
+{
+	GUEST_ASSERT_1(event < 32, event);	/* PMCEID0_EL0 covers events 0-31 */
+	return (read_sysreg(pmceid0_el0) & BIT(event));
+}
+
 static uint64_t pmcr_extract_n(uint64_t pmcr_val)
 {
 	return (pmcr_val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
 }
 
+#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected)		\
+{									\
+	uint64_t _tval = read_sysreg(regname);				\
+									\
+	if (set_expected)						\
+		GUEST_ASSERT_3((_tval & mask), _tval, mask, set_expected); \
+	else								   \
+		GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
+}
+
+/*
+ * Check if @mask bits in {PMCNTEN,PMOVS}{SET,CLR} registers
+ * are set or cleared as specified in @set_expected.
+ */
+static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected)
+{
+	GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected);
+}
+
+/*
+ * Check if the bit in {PMCNTEN,PMOVS}{SET,CLR} registers corresponding
+ * to the specified counter (@pmc_idx) can be read/written as expected.
+ * When @set_op is true, it tries to set the bit for the counter in
+ * those registers by writing the SET registers (the bit won't be set
+ * if the counter is not implemented though).
+ * Otherwise, it tries to clear the bits in the registers by writing
+ * the CLR registers.
+ * Then, it checks if the values indicated in the registers are as expected.
+ */
+static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
+{
+	uint64_t pmcr_n, test_bit = BIT(pmc_idx);
+	bool set_expected = false;
+
+	if (set_op) {
+		write_sysreg(test_bit, pmcntenset_el0);
+		write_sysreg(test_bit, pmovsset_el0);
+
+		/* The bit will be set only if the counter is implemented */
+		pmcr_n = pmcr_extract_n(read_sysreg(pmcr_el0));
+		set_expected = pmc_idx < pmcr_n;
+	} else {
+		write_sysreg(test_bit, pmcntenclr_el0);
+		write_sysreg(test_bit, pmovsclr_el0);
+	}
+	check_bitmap_pmu_regs(test_bit, set_expected);
+}
+
+/*
+ * Tests for reading/writing registers for the (implemented) event counter
+ * specified by @pmc_idx.
+ */
+static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
+{
+	uint64_t write_data, read_data, read_data_prev, test_bit;
+
+	/* Disable all PMCs and reset all PMCs to zero. */
+	pmu_disable_reset();
+
+
+	/*
+	 * Tests for reading/writing {PMCNTEN,PMOVS}{SET,CLR}_EL1.
+	 */
+
+	test_bit = 1ul << pmc_idx;
+	/* Make sure that the bit in those registers is initially 0 */
+	test_bitmap_pmu_regs(test_bit, false);
+	/* Test if setting the bit in those registers works */
+	test_bitmap_pmu_regs(test_bit, true);
+	/* Test if clearing the bit in those registers works */
+	test_bitmap_pmu_regs(test_bit, false);
+
+
+	/*
+	 * Tests for reading/writing the event type register.
+	 */
+
+	read_data = acc->read_typer(pmc_idx);
+	/*
+	 * Set the event type register to an arbitrary value just to test
+	 * reading/writing the register.
+	 * The Arm ARM says that for events 0x0000 to 0x003F,
+	 * the value indicated in the PMEVTYPER<n>_EL0.evtCount field is
+	 * the value written to the field even when the specified event
+	 * is not supported.
+	 */
+	write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+	acc->write_typer(pmc_idx, write_data);
+	read_data = acc->read_typer(pmc_idx);
+	GUEST_ASSERT_4(read_data == write_data,
+		       pmc_idx, acc, read_data, write_data);
+
+
+	/*
+	 * Tests for reading/writing the event count register.
+	 */
+
+	read_data = acc->read_cntr(pmc_idx);
+
+	/* The count value must be 0, as the counter was just reset */
+	GUEST_ASSERT_3(read_data == 0, pmc_idx, acc, read_data);
+
+	write_data = read_data + pmc_idx + 0x12345;
+	acc->write_cntr(pmc_idx, write_data);
+	read_data = acc->read_cntr(pmc_idx);
+	GUEST_ASSERT_4(read_data == write_data,
+		       pmc_idx, acc, read_data, write_data);
+
+
+	/* The following test requires the INST_RETIRED event support. */
+	if (!pmu_event_is_supported(ARMV8_PMUV3_PERFCTR_INST_RETIRED))
+		return;
+
+	pmu_enable();
+	acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+
+	/*
+	 * Make sure that the counter doesn't count the INST_RETIRED
+	 * event when disabled, and the counter counts the event when enabled.
+	 */
+	disable_counter(pmc_idx);
+	read_data_prev = acc->read_cntr(pmc_idx);
+	read_data = acc->read_cntr(pmc_idx);
+	GUEST_ASSERT_4(read_data == read_data_prev,
+		       pmc_idx, acc, read_data, read_data_prev);
+
+	enable_counter(pmc_idx);
+	read_data = acc->read_cntr(pmc_idx);
+
+	/*
+	 * The counter should have increased by at least 1, as at least
+	 * one instruction retires between enabling the counter and
+	 * reading it (the test assumes that no event counter is being
+	 * used by higher-priority host events).
+	 */
+	GUEST_ASSERT_4(read_data > read_data_prev,
+		       pmc_idx, acc, read_data, read_data_prev);
+}
+
 /*
  * The guest is configured with PMUv3 with @expected_pmcr_n number of
  * event counters.
- * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
+ * if reading/writing PMU registers for implemented counters can work
+ * as expected.
  */
 static void guest_code(uint64_t expected_pmcr_n)
 {
 	uint64_t pmcr, pmcr_n;
+	int i, pmc;
 
 	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
 
@@ -40,6 +372,15 @@ static void guest_code(uint64_t expected_pmcr_n)
 	/* Make sure that PMCR_EL0.N indicates the value userspace set */
 	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
 
+	/*
+	 * Tests for reading/writing PMU registers for implemented counters.
+	 * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		for (pmc = 0; pmc < pmcr_n; pmc++)
+			test_access_pmc_regs(&pmc_accessors[i], pmc);
+	}
+
 	GUEST_DONE();
 }
 
@@ -96,7 +437,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
 	vcpu_run(vcpu);
 	switch (get_ucall(vcpu, &uc)) {
 	case UCALL_ABORT:
-		REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
+		REPORT_GUEST_ASSERT_4(uc, "values:%#lx %#lx %#lx %#lx");
 		break;
 	case UCALL_DONE:
 		break;
-- 
2.39.0.314.g84b9a713c41-goog



* [PATCH v2 7/8] KVM: selftests: aarch64: vPMU register test for implemented counters
@ 2023-01-17  1:35   ` Reiji Watanabe
  0 siblings, 0 replies; 46+ messages in thread
From: Reiji Watanabe @ 2023-01-17  1:35 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Oliver Upton,
	Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add a new test case to the vpmu_counter_access test to check if PMU
registers or their bits for implemented counters on the vCPU are
readable/writable as expected, and can be programmed to count events.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 .../kvm/aarch64/vpmu_counter_access.c         | 347 +++++++++++++++++-
 1 file changed, 344 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index 704a2500b7e1..54b69c76c824 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -5,7 +5,8 @@
  * Copyright (c) 2022 Google LLC.
  *
  * This test checks if the guest can see the same number of the PMU event
- * counters (PMCR_EL1.N) that userspace sets.
+ * counters (PMCR_EL1.N) that userspace sets, and if the guest can access
+ * those counters.
  * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
  */
 #include <kvm_util.h>
@@ -18,19 +19,350 @@
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
 
+/*
+ * The macros and functions below for reading/writing PMEVT{CNTR,TYPER}<n>_EL0
+ * were basically copied from arch/arm64/kernel/perf_event.c.
+ */
+#define PMEVN_CASE(n, case_macro) \
+	case n: case_macro(n); break
+
+#define PMEVN_SWITCH(x, case_macro)				\
+	do {							\
+		switch (x) {					\
+		PMEVN_CASE(0,  case_macro);			\
+		PMEVN_CASE(1,  case_macro);			\
+		PMEVN_CASE(2,  case_macro);			\
+		PMEVN_CASE(3,  case_macro);			\
+		PMEVN_CASE(4,  case_macro);			\
+		PMEVN_CASE(5,  case_macro);			\
+		PMEVN_CASE(6,  case_macro);			\
+		PMEVN_CASE(7,  case_macro);			\
+		PMEVN_CASE(8,  case_macro);			\
+		PMEVN_CASE(9,  case_macro);			\
+		PMEVN_CASE(10, case_macro);			\
+		PMEVN_CASE(11, case_macro);			\
+		PMEVN_CASE(12, case_macro);			\
+		PMEVN_CASE(13, case_macro);			\
+		PMEVN_CASE(14, case_macro);			\
+		PMEVN_CASE(15, case_macro);			\
+		PMEVN_CASE(16, case_macro);			\
+		PMEVN_CASE(17, case_macro);			\
+		PMEVN_CASE(18, case_macro);			\
+		PMEVN_CASE(19, case_macro);			\
+		PMEVN_CASE(20, case_macro);			\
+		PMEVN_CASE(21, case_macro);			\
+		PMEVN_CASE(22, case_macro);			\
+		PMEVN_CASE(23, case_macro);			\
+		PMEVN_CASE(24, case_macro);			\
+		PMEVN_CASE(25, case_macro);			\
+		PMEVN_CASE(26, case_macro);			\
+		PMEVN_CASE(27, case_macro);			\
+		PMEVN_CASE(28, case_macro);			\
+		PMEVN_CASE(29, case_macro);			\
+		PMEVN_CASE(30, case_macro);			\
+		default:					\
+			GUEST_ASSERT_1(0, x);			\
+		}						\
+	} while (0)
+
+#define RETURN_READ_PMEVCNTRN(n) \
+	return read_sysreg(pmevcntr##n##_el0)
+static unsigned long read_pmevcntrn(int n)
+{
+	PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
+	return 0;
+}
+
+#define WRITE_PMEVCNTRN(n) \
+	write_sysreg(val, pmevcntr##n##_el0)
+static void write_pmevcntrn(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
+	isb();
+}
+
+#define READ_PMEVTYPERN(n) \
+	return read_sysreg(pmevtyper##n##_el0)
+static unsigned long read_pmevtypern(int n)
+{
+	PMEVN_SWITCH(n, READ_PMEVTYPERN);
+	return 0;
+}
+
+#define WRITE_PMEVTYPERN(n) \
+	write_sysreg(val, pmevtyper##n##_el0)
+static void write_pmevtypern(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
+	isb();
+}
+
+/* Read PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
+static inline unsigned long read_sel_evcntr(int sel)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	return read_sysreg(pmxevcntr_el0);
+}
+
+/* Write PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
+static inline void write_sel_evcntr(int sel, unsigned long val)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	write_sysreg(val, pmxevcntr_el0);
+	isb();
+}
+
+/* Read PMEVTTYPER<n>_EL0 through PMXEVTYPER_EL0 */
+static inline unsigned long read_sel_evtyper(int sel)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	return read_sysreg(pmxevtyper_el0);
+}
+
+/* Write PMEVTTYPER<n>_EL0 through PMXEVTYPER_EL0 */
+static inline void write_sel_evtyper(int sel, unsigned long val)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	write_sysreg(val, pmxevtyper_el0);
+	isb();
+}
+
+static inline void enable_counter(int idx)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(BIT(idx) | v, pmcntenset_el0);
+	isb();
+}
+
+static inline void disable_counter(int idx)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(BIT(idx) | v, pmcntenclr_el0);
+	isb();
+}
+
+/*
+ * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}<n>_EL0
+ * accessors that test cases will use. Each of the accessors will
+ * either directly reads/writes PMEVT{CNTR,TYPER}<n>_EL0
+ * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through
+ * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
+ *
+ * This is used to test that combinations of those accessors provide
+ * the consistent behavior.
+ */
+struct pmc_accessor {
+	/* A function to be used to read PMEVTCNTR<n>_EL0 */
+	unsigned long	(*read_cntr)(int idx);
+	/* A function to be used to write PMEVTCNTR<n>_EL0 */
+	void		(*write_cntr)(int idx, unsigned long val);
+	/* A function to be used to read PMEVTTYPER<n>_EL0 */
+	unsigned long	(*read_typer)(int idx);
+	/* A function to be used write PMEVTTYPER<n>_EL0 */
+	void		(*write_typer)(int idx, unsigned long val);
+};
+
+struct pmc_accessor pmc_accessors[] = {
+	/* test with all direct accesses */
+	{ read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern },
+	/* test with all indirect accesses */
+	{ read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper },
+	/* read with direct accesses, and write with indirect accesses */
+	{ read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper },
+	/* read with indirect accesses, and write with direct accesses */
+	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
+};
+
+static void pmu_disable_reset(void)
+{
+	uint64_t pmcr = read_sysreg(pmcr_el0);
+
+	/* Reset all counters, disabling them */
+	pmcr &= ~ARMV8_PMU_PMCR_E;
+	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
+	isb();
+}
+
+static void pmu_enable(void)
+{
+	uint64_t pmcr = read_sysreg(pmcr_el0);
+
+	/* Reset all counters, disabling them */
+	pmcr |= ARMV8_PMU_PMCR_E;
+	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
+	isb();
+}
+
+static bool pmu_event_is_supported(uint64_t event)
+{
+	GUEST_ASSERT_1(event < 64, event);
+	return (read_sysreg(pmceid0_el0) & BIT(event));
+}
+
 static uint64_t pmcr_extract_n(uint64_t pmcr_val)
 {
 	return (pmcr_val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
 }
 
+#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected)		\
+{									\
+	uint64_t _tval = read_sysreg(regname);				\
+									\
+	if (set_expected)						\
+		GUEST_ASSERT_3((_tval & mask), _tval, mask, set_expected); \
+	else								   \
+		GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
+}
+
+/*
+ * Check if @mask bits in {PMCNTEN,PMOVS}{SET,CLR} registers
+ * are set or cleared as specified in @set_expected.
+ */
+static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected)
+{
+	GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected);
+}
+
+/*
+ * Check if the bit in {PMCNTEN,PMOVS}{SET,CLR} registers corresponding
+ * to the specified counter (@pmc_idx) can be read/written as expected.
+ * When @set_op is true, it tries to set the bit for the counter in
+ * those registers by writing the SET registers (the bit won't be set
+ * if the counter is not implemented though).
+ * Otherwise, it tries to clear the bits in the registers by writing
+ * the CLR registers.
+ * Then, it checks if the values indicated in the registers are as expected.
+ */
+static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
+{
+	uint64_t pmcr_n, test_bit = BIT(pmc_idx);
+	bool set_expected = false;
+
+	if (set_op) {
+		write_sysreg(test_bit, pmcntenset_el0);
+		write_sysreg(test_bit, pmovsset_el0);
+
+		/* The bit will be set only if the counter is implemented */
+		pmcr_n = pmcr_extract_n(read_sysreg(pmcr_el0));
+		set_expected = (pmc_idx < pmcr_n) ? true : false;
+	} else {
+		write_sysreg(test_bit, pmcntenclr_el0);
+		write_sysreg(test_bit, pmovsclr_el0);
+	}
+	check_bitmap_pmu_regs(test_bit, set_expected);
+}
+
+/*
+ * Tests for reading/writing registers for the (implemented) event counter
+ * specified by @pmc_idx.
+ */
+static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
+{
+	uint64_t write_data, read_data, read_data_prev, test_bit;
+
+	/* Disable all PMCs and reset all PMCs to zero. */
+	pmu_disable_reset();
+
+
+	/*
+	 * Tests for reading/writing {PMCNTEN,PMOVS}{SET,CLR}_EL1.
+	 */
+
+	test_bit = 1ul << pmc_idx;
+	/* Make sure that the bit in those registers are set to 0 */
+	test_bitmap_pmu_regs(test_bit, false);
+	/* Test if setting the bit in those registers works */
+	test_bitmap_pmu_regs(test_bit, true);
+	/* Test if clearing the bit in those registers works */
+	test_bitmap_pmu_regs(test_bit, false);
+
+
+	/*
+	 * Tests for reading/writing the event type register.
+	 */
+
+	read_data = acc->read_typer(pmc_idx);
+	/*
+	 * Set the event type register to an arbitrary value just for testing
+	 * of reading/writing the register.
+	 * ArmARM says that for the event from 0x0000 to 0x003F,
+	 * the value indicated in the PMEVTYPER<n>_EL0.evtCount field is
+	 * the value written to the field even when the specified event
+	 * is not supported.
+	 */
+	write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+	acc->write_typer(pmc_idx, write_data);
+	read_data = acc->read_typer(pmc_idx);
+	GUEST_ASSERT_4(read_data == write_data,
+		       pmc_idx, acc, read_data, write_data);
+
+
+	/*
+	 * Tests for reading/writing the event count register.
+	 */
+
+	read_data = acc->read_cntr(pmc_idx);
+
+	/* The count value must be 0, as it is not used after the reset */
+	GUEST_ASSERT_3(read_data == 0, pmc_idx, acc, read_data);
+
+	write_data = read_data + pmc_idx + 0x12345;
+	acc->write_cntr(pmc_idx, write_data);
+	read_data = acc->read_cntr(pmc_idx);
+	GUEST_ASSERT_4(read_data == write_data,
+		       pmc_idx, acc, read_data, write_data);
+
+
+	/* The following test requires the INST_RETIRED event support. */
+	if (!pmu_event_is_supported(ARMV8_PMUV3_PERFCTR_INST_RETIRED))
+		return;
+
+	pmu_enable();
+	acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+
+	/*
+	 * Make sure that the counter doesn't count the INST_RETIRED
+	 * event when disabled, and the counter counts the event when enabled.
+	 */
+	disable_counter(pmc_idx);
+	read_data_prev = acc->read_cntr(pmc_idx);
+	read_data = acc->read_cntr(pmc_idx);
+	GUEST_ASSERT_4(read_data == read_data_prev,
+		       pmc_idx, acc, read_data, read_data_prev);
+
+	enable_counter(pmc_idx);
+	read_data = acc->read_cntr(pmc_idx);
+
+	/*
+	 * The counter should have increased by at least 1, as at least
+	 * one instruction executes between enabling the counter and
+	 * reading it (the test assumes that none of the event counters
+	 * are taken by the host's higher priority perf events).
+	 */
+	GUEST_ASSERT_4(read_data > read_data_prev,
+		       pmc_idx, acc, read_data, read_data_prev);
+}
+
 /*
  * The guest is configured with PMUv3 with @expected_pmcr_n number of
  * event counters.
- * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
+ * if reading/writing PMU registers for implemented counters can work
+ * as expected.
  */
 static void guest_code(uint64_t expected_pmcr_n)
 {
 	uint64_t pmcr, pmcr_n;
+	int i, pmc;
 
 	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
 
@@ -40,6 +372,15 @@ static void guest_code(uint64_t expected_pmcr_n)
 	/* Make sure that PMCR_EL0.N indicates the value userspace set */
 	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
 
+	/*
+	 * Tests for reading/writing PMU registers for implemented counters.
+	 * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		for (pmc = 0; pmc < pmcr_n; pmc++)
+			test_access_pmc_regs(&pmc_accessors[i], pmc);
+	}
+
 	GUEST_DONE();
 }
 
@@ -96,7 +437,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
 	vcpu_run(vcpu);
 	switch (get_ucall(vcpu, &uc)) {
 	case UCALL_ABORT:
-		REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
+		REPORT_GUEST_ASSERT_4(uc, "values:%#lx %#lx %#lx %#lx");
 		break;
 	case UCALL_DONE:
 		break;
-- 
2.39.0.314.g84b9a713c41-goog


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 46+ messages in thread
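For readers skimming the archive: the write-1-to-set / write-1-to-clear behaviour that test_bitmap_pmu_regs() exercises on the {PMCNTEN,PMOVS}{SET,CLR}_EL1 pairs can be modelled in a few lines. This is a hypothetical sketch of the architectural semantics, not code from the patch:

```python
# Model of a SET/CLR register pair backed by one bitmap, as for
# PMCNTENSET_EL0/PMCNTENCLR_EL0: writing 1s to the SET register sets
# those bits, writing 1s to the CLR register clears them, and bits not
# written to are unaffected. Reads of either register return the bitmap.
class BitmapRegPair:
    def __init__(self):
        self.value = 0  # shared state read back through either register

    def write_set(self, mask):
        self.value |= mask

    def write_clr(self, mask):
        self.value &= ~mask

pmcnten = BitmapRegPair()
test_bit = 1 << 3                 # counter index 3, as built in the test
pmcnten.write_set(test_bit)
assert pmcnten.value & test_bit   # the bit reads back as set
pmcnten.write_clr(test_bit)
assert not (pmcnten.value & test_bit)  # and clears again
```

The selftest performs the same set/clear round-trip per counter bit, reading the result back through both registers of each pair.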

* [PATCH v2 8/8] KVM: selftests: aarch64: vPMU register test for unimplemented counters
  2023-01-17  1:35 ` Reiji Watanabe
@ 2023-01-17  1:35   ` Reiji Watanabe
  -1 siblings, 0 replies; 46+ messages in thread
From: Reiji Watanabe @ 2023-01-17  1:35 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Oliver Upton,
	Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add a new test case to the vpmu_counter_access test to check
that PMU registers, or their bits, for unimplemented counters are
either not accessible or are RAZ, as expected.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 .../kvm/aarch64/vpmu_counter_access.c         | 103 +++++++++++++++++-
 .../selftests/kvm/include/aarch64/processor.h |   1 +
 2 files changed, 98 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index 54b69c76c824..a7e34d63808b 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -5,8 +5,8 @@
  * Copyright (c) 2022 Google LLC.
  *
  * This test checks if the guest can see the same number of the PMU event
- * counters (PMCR_EL1.N) that userspace sets, and if the guest can access
- * those counters.
+ * counters (PMCR_EL0.N) that userspace sets, if the guest can access
+ * those counters, and if the guest cannot access any other counters.
  * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
  */
 #include <kvm_util.h>
@@ -179,6 +179,51 @@ struct pmc_accessor pmc_accessors[] = {
 	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
 };
 
+#define INVALID_EC	(-1ul)
+uint64_t expected_ec = INVALID_EC;
+uint64_t op_end_addr;
+
+static void guest_sync_handler(struct ex_regs *regs)
+{
+	uint64_t esr, ec;
+
+	esr = read_sysreg(esr_el1);
+	ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;
+	GUEST_ASSERT_4(op_end_addr && (expected_ec == ec),
+		       regs->pc, esr, ec, expected_ec);
+
+	/* Will go back to op_end_addr after the handler exits */
+	regs->pc = op_end_addr;
+
+	/*
+	 * Clear op_end_addr, and set expected_ec to INVALID_EC,
+	 * as a sign that an exception has occurred.
+	 */
+	op_end_addr = 0;
+	expected_ec = INVALID_EC;
+}
+
+/*
+ * Run the given operation that should trigger an exception with the
+ * given exception class. The exception handler (guest_sync_handler)
+ * will reset op_end_addr to 0 and expected_ec to INVALID_EC, and
+ * will return to the instruction at @done_label.
+ * The @done_label must be a unique label in this test program.
+ */
+#define TEST_EXCEPTION(ec, ops, done_label)		\
+{							\
+	extern int done_label;				\
+							\
+	WRITE_ONCE(op_end_addr, (uint64_t)&done_label);	\
+	GUEST_ASSERT(ec != INVALID_EC);			\
+	WRITE_ONCE(expected_ec, ec);			\
+	dsb(ish);					\
+	ops;						\
+	asm volatile(#done_label":");			\
+	GUEST_ASSERT(!op_end_addr);			\
+	GUEST_ASSERT(expected_ec == INVALID_EC);	\
+}
+
 static void pmu_disable_reset(void)
 {
 	uint64_t pmcr = read_sysreg(pmcr_el0);
@@ -352,16 +397,38 @@ static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
 		       pmc_idx, acc, read_data, read_data_prev);
 }
 
+/*
+ * Tests for reading/writing registers for the unimplemented event counter
+ * specified by @pmc_idx (>= PMCR_EL0.N).
+ */
+static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
+{
+	/*
+	 * Reading/writing the event count/type registers should cause
+	 * an UNDEFINED exception.
+	 */
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_cntr(pmc_idx), inv_rd_cntr);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_cntr(pmc_idx, 0), inv_wr_cntr);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_typer(pmc_idx), inv_rd_typer);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_typer(pmc_idx, 0), inv_wr_typer);
+	/*
+	 * The bit corresponding to the (unimplemented) counter in
+	 * {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers should be RAZ.
+	 */
+	test_bitmap_pmu_regs(1ul << pmc_idx, 1);
+	test_bitmap_pmu_regs(1ul << pmc_idx, 0);
+}
+
 /*
  * The guest is configured with PMUv3 with @expected_pmcr_n number of
  * event counters.
  * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
- * if reading/writing PMU registers for implemented counters can work
- * as expected.
+ * if reading/writing PMU registers for implemented or unimplemented
+ * counters can work as expected.
  */
 static void guest_code(uint64_t expected_pmcr_n)
 {
-	uint64_t pmcr, pmcr_n;
+	uint64_t pmcr, pmcr_n, unimp_mask;
 	int i, pmc;
 
 	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
@@ -372,6 +439,14 @@ static void guest_code(uint64_t expected_pmcr_n)
 	/* Make sure that PMCR_EL0.N indicates the value userspace set */
 	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
 
+	/*
+	 * Make sure that (RAZ) bits corresponding to unimplemented event
+	 * counters in {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers are reset to zero.
+	 * (NOTE: bits for implemented event counters are reset to UNKNOWN)
+	 */
+	unimp_mask = GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n);
+	check_bitmap_pmu_regs(unimp_mask, false);
+
 	/*
 	 * Tests for reading/writing PMU registers for implemented counters.
 	 * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
@@ -381,6 +456,14 @@ static void guest_code(uint64_t expected_pmcr_n)
 			test_access_pmc_regs(&pmc_accessors[i], pmc);
 	}
 
+	/*
+	 * Tests for reading/writing PMU registers for unimplemented counters.
+	 * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++)
+			test_access_invalid_pmc_regs(&pmc_accessors[i], pmc);
+	}
 	GUEST_DONE();
 }
 
@@ -394,7 +477,7 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	struct kvm_vcpu_init init;
-	uint8_t pmuver;
+	uint8_t pmuver, ec;
 	uint64_t dfr0, irq = 23;
 	struct kvm_device_attr irq_attr = {
 		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
@@ -407,11 +490,18 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 	};
 
 	vm = vm_create(1);
+	vm_init_descriptor_tables(vm);
+	/* Catch exceptions for easier debugging */
+	for (ec = 0; ec < ESR_EC_NUM; ec++) {
+		vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, ec,
+					guest_sync_handler);
+	}
 
 	/* Create vCPU with PMUv3 */
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
 	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+	vcpu_init_descriptor_tables(vcpu);
 	*gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
 
 	/* Make sure that PMUv3 support is indicated in the ID register */
@@ -480,6 +570,7 @@ static void run_test(uint64_t pmcr_n)
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
 	aarch64_vcpu_setup(vcpu, &init);
+	vcpu_init_descriptor_tables(vcpu);
 	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
 	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
 
diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index 5f977528e09c..52d87809356c 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -104,6 +104,7 @@ enum {
 #define ESR_EC_SHIFT		26
 #define ESR_EC_MASK		(ESR_EC_NUM - 1)
 
+#define ESR_EC_UNKNOWN		0x0
 #define ESR_EC_SVC64		0x15
 #define ESR_EC_IABT		0x21
 #define ESR_EC_DABT		0x25
-- 
2.39.0.314.g84b9a713c41-goog


^ permalink raw reply related	[flat|nested] 46+ messages in thread
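The ESR_EL1 decode in guest_sync_handler() above boils down to a shift and a mask. A small model, mirroring the ESR_EC_SHIFT/ESR_EC_MASK values from the selftest's processor.h (the EC field is the 6-bit field in ESR bits [31:26]):

```python
# Extract the exception class (EC) from an ESR_EL1 value, as the guest
# handler does before comparing against expected_ec.
ESR_EC_SHIFT = 26
ESR_EC_MASK = 0x3f        # EC is 6 bits wide, i.e. ESR_EC_NUM - 1 for 64 classes
ESR_EC_UNKNOWN = 0x0      # "Unknown reason", reported for UNDEFINED traps
ESR_EC_SVC64 = 0x15

def esr_to_ec(esr):
    return (esr >> ESR_EC_SHIFT) & ESR_EC_MASK

# An SVC from AArch64 encodes EC 0x15 in the top bits of ESR:
assert esr_to_ec((ESR_EC_SVC64 << ESR_EC_SHIFT) | 0x1) == ESR_EC_SVC64
# An UNDEF taken for an inaccessible PMU register access reports EC 0x0:
assert esr_to_ec(0x0) == ESR_EC_UNKNOWN
```

This is why the test can use ESR_EC_UNKNOWN with TEST_EXCEPTION for accesses to unimplemented counters: those accesses are expected to be UNDEFINED.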

* Re: [PATCH v2 0/8] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU
  2023-01-17  1:35 ` Reiji Watanabe
@ 2023-01-17  7:25   ` Shaoqin Huang
  -1 siblings, 0 replies; 46+ messages in thread
From: Shaoqin Huang @ 2023-01-17  7:25 UTC (permalink / raw)
  To: Reiji Watanabe, Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Oliver Upton,
	Jing Zhang, Raghavendra Rao Anata

Hi Reiji,


I have tested this patch set on an Ampere machine, and everything works
fine.


Tested-by: Shaoqin Huang <shahuang@redhat.com>

On 1/17/23 09:35, Reiji Watanabe wrote:
> The goal of this series is to allow userspace to limit the number
> of PMU event counters on the vCPU. We need this to support migration
> across systems that implement different numbers of counters.
>
> The number of PMU event counters is indicated in PMCR_EL0.N.
> For a vCPU with PMUv3 configured, its value will be the same as
> the host value by default. Userspace can set PMCR_EL0.N for the
> vCPU to a lower value than the host value, using KVM_SET_ONE_REG.
> However, it is practically unsupported, as KVM resets PMCR_EL0.N
> to the host value on vCPU reset and some KVM code uses the host
> value to identify (un)implemented event counters on the vCPU.
>
> This series will ensure that the PMCR_EL0.N value is preserved
> on vCPU reset and that KVM doesn't use the host value
> to identify (un)implemented event counters on the vCPU.
> This allows userspace to limit the number of the PMU event
> counters on the vCPU.
>
> Patch 1 fixes reset_pmu_reg() to ensure that (RAZ) bits of
> {PMCNTEN,PMOVS}{SET,CLR}_EL1 corresponding to unimplemented event
> counters on the vCPU are reset to zero even when PMCR_EL0.N for
> the vCPU is different from the host.
>
> Patch 2 is a minor refactoring to use the default PMU register reset
> function (reset_pmu_reg()) for PMUSERENR_EL0 and PMCCFILTR_EL0.
> (With the Patch 1 change, reset_pmu_reg() can now be used for
> those registers)
>
> Patch 3 fixes reset_pmcr() to preserve PMCR_EL0.N for the vCPU on
> vCPU reset.
>
> Patch 4 adds the sys_reg's set_user() handler for the PMCR_EL0
> to disallow userspace to set PMCR_EL0.N for the vCPU to a value
> that is greater than the host value.
>
> Patches 5-8 add a selftest to verify reading and writing PMU registers
> for implemented or unimplemented PMU event counters on the vCPU.
>
> The series is based on v6.2-rc4.
>
> v2:
>   - Added the sys_reg's set_user() handler for the PMCR_EL0 to
>     disallow userspace to set PMCR_EL0.N for the vCPU to a value
>     that is greater than the host value (and added a new test
>     case for this behavior). [Oliver]
>   - Added to the commit log of the patch 2 that PMUSERENR_EL0 and
>     PMCCFILTR_EL0 have UNKNOWN reset values.
>
> v1: https://lore.kernel.org/all/20221230035928.3423990-1-reijiw@google.com/
>
> Reiji Watanabe (8):
>    KVM: arm64: PMU: Have reset_pmu_reg() to clear a register
>    KVM: arm64: PMU: Use reset_pmu_reg() for PMUSERENR_EL0 and
>      PMCCFILTR_EL0
>    KVM: arm64: PMU: Preserve vCPU's PMCR_EL0.N value on vCPU reset
>    KVM: arm64: PMU: Disallow userspace to set PMCR.N greater than the
>      host value
>    tools: arm64: Import perf_event.h
>    KVM: selftests: aarch64: Introduce vpmu_counter_access test
>    KVM: selftests: aarch64: vPMU register test for implemented counters
>    KVM: selftests: aarch64: vPMU register test for unimplemented counters
>
>   arch/arm64/kvm/pmu-emul.c                     |   6 +
>   arch/arm64/kvm/sys_regs.c                     |  57 +-
>   tools/arch/arm64/include/asm/perf_event.h     | 258 +++++++
>   tools/testing/selftests/kvm/Makefile          |   1 +
>   .../kvm/aarch64/vpmu_counter_access.c         | 644 ++++++++++++++++++
>   .../selftests/kvm/include/aarch64/processor.h |   1 +
>   6 files changed, 954 insertions(+), 13 deletions(-)
>   create mode 100644 tools/arch/arm64/include/asm/perf_event.h
>   create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
>
>
> base-commit: 5dc4c995db9eb45f6373a956eb1f69460e69e6d4

-- 
Regards,
Shaoqin


^ permalink raw reply	[flat|nested] 46+ messages in thread
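Two bit-level details the series leans on can be sketched for readers following along. PMCR_EL0.N occupies bits [15:11] (ARMV8_PMU_PMCR_N_SHIFT is 11 in the kernel headers), and the selftest builds its mask of unimplemented counters with GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n). A hedged model, with genmask_ull() standing in for the kernel's GENMASK_ULL() macro:

```python
# PMCR_EL0.N extraction plus the unimplemented-counter mask used in
# guest_code(); the constants mirror the kernel's perf_event.h values.
ARMV8_PMU_PMCR_N_SHIFT = 11
ARMV8_PMU_PMCR_N_MASK = 0x1f
ARMV8_PMU_MAX_GENERAL_COUNTERS = 31   # counter 31 is the cycle counter

def pmcr_n(pmcr):
    return (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK

def genmask_ull(high, low):
    return ((1 << (high - low + 1)) - 1) << low

# A PMCR value advertising 6 general-purpose event counters:
pmcr = 6 << ARMV8_PMU_PMCR_N_SHIFT
n = pmcr_n(pmcr)
assert n == 6
# Bits for counters 6..30 must read as zero on such a vCPU:
unimp_mask = genmask_ull(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, n)
assert unimp_mask == 0x7fffffc0
```

This is the mask check_bitmap_pmu_regs(unimp_mask, false) verifies against {PMCNTEN,PMOVS}{SET,CLR}_EL1 after vCPU reset.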

* Re: [PATCH v2 0/8] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU
  2023-01-17  7:25   ` Shaoqin Huang
@ 2023-01-18  5:53     ` Reiji Watanabe
  -1 siblings, 0 replies; 46+ messages in thread
From: Reiji Watanabe @ 2023-01-18  5:53 UTC (permalink / raw)
  To: Shaoqin Huang
  Cc: Marc Zyngier, kvmarm, kvm, linux-arm-kernel, James Morse,
	Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
	Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata

Hi Shaoqin,

> I have tested this patch set on an Ampere machine, and everything works
> fine.
>
>
> Tested-by: Shaoqin Huang <shahuang@redhat.com>

Thank you for testing the series!
Reiji


>
> On 1/17/23 09:35, Reiji Watanabe wrote:
> > The goal of this series is to allow userspace to limit the number
> > of PMU event counters on the vCPU. We need this to support migration
> > across systems that implement different numbers of counters.
> >
> > The number of PMU event counters is indicated in PMCR_EL0.N.
> > For a vCPU with PMUv3 configured, its value will be the same as
> > the host value by default. Userspace can set PMCR_EL0.N for the
> > vCPU to a lower value than the host value, using KVM_SET_ONE_REG.
> > However, it is practically unsupported, as KVM resets PMCR_EL0.N
> > to the host value on vCPU reset and some KVM code uses the host
> > value to identify (un)implemented event counters on the vCPU.
> >
> > This series will ensure that the PMCR_EL0.N value is preserved
> > on vCPU reset and that KVM doesn't use the host value
> > to identify (un)implemented event counters on the vCPU.
> > This allows userspace to limit the number of the PMU event
> > counters on the vCPU.
> >
> > Patch 1 fixes reset_pmu_reg() to ensure that (RAZ) bits of
> > {PMCNTEN,PMOVS}{SET,CLR}_EL1 corresponding to unimplemented event
> > counters on the vCPU are reset to zero even when PMCR_EL0.N for
> > the vCPU is different from the host.
> >
> > Patch 2 is a minor refactoring to use the default PMU register reset
> > function (reset_pmu_reg()) for PMUSERENR_EL0 and PMCCFILTR_EL0.
> > (With the Patch 1 change, reset_pmu_reg() can now be used for
> > those registers)
> >
> > Patch 3 fixes reset_pmcr() to preserve PMCR_EL0.N for the vCPU on
> > vCPU reset.
> >
> > Patch 4 adds the sys_reg's set_user() handler for the PMCR_EL0
> > to disallow userspace to set PMCR_EL0.N for the vCPU to a value
> > that is greater than the host value.
> >
> > Patches 5-8 add a selftest to verify reading and writing PMU registers
> > for implemented or unimplemented PMU event counters on the vCPU.
> >
> > The series is based on v6.2-rc4.
> >
> > v2:
> >   - Added the sys_reg's set_user() handler for the PMCR_EL0 to
> >     disallow userspace to set PMCR_EL0.N for the vCPU to a value
> >     that is greater than the host value (and added a new test
> >     case for this behavior). [Oliver]
> >   - Added to the commit log of the patch 2 that PMUSERENR_EL0 and
> >     PMCCFILTR_EL0 have UNKNOWN reset values.
> >
> > v1: https://lore.kernel.org/all/20221230035928.3423990-1-reijiw@google.com/
> >
> > Reiji Watanabe (8):
> >    KVM: arm64: PMU: Have reset_pmu_reg() to clear a register
> >    KVM: arm64: PMU: Use reset_pmu_reg() for PMUSERENR_EL0 and
> >      PMCCFILTR_EL0
> >    KVM: arm64: PMU: Preserve vCPU's PMCR_EL0.N value on vCPU reset
> >    KVM: arm64: PMU: Disallow userspace to set PMCR.N greater than the
> >      host value
> >    tools: arm64: Import perf_event.h
> >    KVM: selftests: aarch64: Introduce vpmu_counter_access test
> >    KVM: selftests: aarch64: vPMU register test for implemented counters
> >    KVM: selftests: aarch64: vPMU register test for unimplemented counters
> >
> >   arch/arm64/kvm/pmu-emul.c                     |   6 +
> >   arch/arm64/kvm/sys_regs.c                     |  57 +-
> >   tools/arch/arm64/include/asm/perf_event.h     | 258 +++++++
> >   tools/testing/selftests/kvm/Makefile          |   1 +
> >   .../kvm/aarch64/vpmu_counter_access.c         | 644 ++++++++++++++++++
> >   .../selftests/kvm/include/aarch64/processor.h |   1 +
> >   6 files changed, 954 insertions(+), 13 deletions(-)
> >   create mode 100644 tools/arch/arm64/include/asm/perf_event.h
> >   create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> >
> >
> > base-commit: 5dc4c995db9eb45f6373a956eb1f69460e69e6d4
>
> --
> Regards,
> Shaoqin
>

^ permalink raw reply	[flat|nested] 46+ messages in thread
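The userspace-facing rule that patch 4 adds, per the cover letter, is that a KVM_SET_ONE_REG write of PMCR_EL0 may lower N but never raise it above the host's value. A hedged sketch of that check (not the kernel implementation; set_pmcr_el0() and its return convention are illustrative):

```python
# Model of the set_user() validation described for PMCR_EL0: reject any
# userspace-supplied PMCR whose N field exceeds the host's PMCR_EL0.N.
EINVAL = 22
PMCR_N_SHIFT, PMCR_N_MASK = 11, 0x1f

def set_pmcr_el0(host_n, new_pmcr):
    """Return 0 on success, -EINVAL if the requested N exceeds host_n."""
    new_n = (new_pmcr >> PMCR_N_SHIFT) & PMCR_N_MASK
    if new_n > host_n:
        return -EINVAL
    return 0    # accepted; the value must then survive vCPU reset

# Lowering N below the host value is the supported migration case:
assert set_pmcr_el0(host_n=6, new_pmcr=4 << PMCR_N_SHIFT) == 0
# Raising N above the host value is rejected:
assert set_pmcr_el0(host_n=6, new_pmcr=8 << PMCR_N_SHIFT) == -EINVAL
```

Together with patch 3 (preserving PMCR_EL0.N across reset), this is what makes the limit usable for migration between hosts with different counter counts.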


* Re: [PATCH v2 7/8] KVM: selftests: aarch64: vPMU register test for implemented counters
  2023-01-17  1:35   ` Reiji Watanabe
@ 2023-01-18  7:47     ` Shaoqin Huang
  -1 siblings, 0 replies; 46+ messages in thread
From: Shaoqin Huang @ 2023-01-18  7:47 UTC (permalink / raw)
  To: Reiji Watanabe, Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Oliver Upton,
	Jing Zhang, Raghavendra Rao Anata

Hi Reiji,


I found some places that should be PMEVTYPER but are wrongly written as
PMEVTTYPE. Should we fix them?


I list some of them below, but have not covered every one.

On 1/17/23 09:35, Reiji Watanabe wrote:
> Add a new test case to the vpmu_counter_access test to check if PMU
> registers or their bits for implemented counters on the vCPU are
> readable/writable as expected, and can be programmed to count events.
>
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> ---
>   .../kvm/aarch64/vpmu_counter_access.c         | 347 +++++++++++++++++-
>   1 file changed, 344 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> index 704a2500b7e1..54b69c76c824 100644
> --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> @@ -5,7 +5,8 @@
>    * Copyright (c) 2022 Google LLC.
>    *
>    * This test checks if the guest can see the same number of the PMU event
> - * counters (PMCR_EL1.N) that userspace sets.
> + * counters (PMCR_EL1.N) that userspace sets, and if the guest can access
> + * those counters.
>    * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
>    */
>   #include <kvm_util.h>
> @@ -18,19 +19,350 @@
>   /* The max number of the PMU event counters (excluding the cycle counter) */
>   #define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
>   
> +/*
> + * The macros and functions below for reading/writing PMEVT{CNTR,TYPER}<n>_EL0
Here should be PMEV{CNTR, TYPER}.
> + * were basically copied from arch/arm64/kernel/perf_event.c.
> + */
> +#define PMEVN_CASE(n, case_macro) \
> +	case n: case_macro(n); break
> +
> +#define PMEVN_SWITCH(x, case_macro)				\
> +	do {							\
> +		switch (x) {					\
> +		PMEVN_CASE(0,  case_macro);			\
> +		PMEVN_CASE(1,  case_macro);			\
> +		PMEVN_CASE(2,  case_macro);			\
> +		PMEVN_CASE(3,  case_macro);			\
> +		PMEVN_CASE(4,  case_macro);			\
> +		PMEVN_CASE(5,  case_macro);			\
> +		PMEVN_CASE(6,  case_macro);			\
> +		PMEVN_CASE(7,  case_macro);			\
> +		PMEVN_CASE(8,  case_macro);			\
> +		PMEVN_CASE(9,  case_macro);			\
> +		PMEVN_CASE(10, case_macro);			\
> +		PMEVN_CASE(11, case_macro);			\
> +		PMEVN_CASE(12, case_macro);			\
> +		PMEVN_CASE(13, case_macro);			\
> +		PMEVN_CASE(14, case_macro);			\
> +		PMEVN_CASE(15, case_macro);			\
> +		PMEVN_CASE(16, case_macro);			\
> +		PMEVN_CASE(17, case_macro);			\
> +		PMEVN_CASE(18, case_macro);			\
> +		PMEVN_CASE(19, case_macro);			\
> +		PMEVN_CASE(20, case_macro);			\
> +		PMEVN_CASE(21, case_macro);			\
> +		PMEVN_CASE(22, case_macro);			\
> +		PMEVN_CASE(23, case_macro);			\
> +		PMEVN_CASE(24, case_macro);			\
> +		PMEVN_CASE(25, case_macro);			\
> +		PMEVN_CASE(26, case_macro);			\
> +		PMEVN_CASE(27, case_macro);			\
> +		PMEVN_CASE(28, case_macro);			\
> +		PMEVN_CASE(29, case_macro);			\
> +		PMEVN_CASE(30, case_macro);			\
> +		default:					\
> +			GUEST_ASSERT_1(0, x);			\
> +		}						\
> +	} while (0)
> +
> +#define RETURN_READ_PMEVCNTRN(n) \
> +	return read_sysreg(pmevcntr##n##_el0)
> +static unsigned long read_pmevcntrn(int n)
> +{
> +	PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
> +	return 0;
> +}
> +
> +#define WRITE_PMEVCNTRN(n) \
> +	write_sysreg(val, pmevcntr##n##_el0)
> +static void write_pmevcntrn(int n, unsigned long val)
> +{
> +	PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
> +	isb();
> +}
> +
> +#define READ_PMEVTYPERN(n) \
> +	return read_sysreg(pmevtyper##n##_el0)
> +static unsigned long read_pmevtypern(int n)
> +{
> +	PMEVN_SWITCH(n, READ_PMEVTYPERN);
> +	return 0;
> +}
> +
> +#define WRITE_PMEVTYPERN(n) \
> +	write_sysreg(val, pmevtyper##n##_el0)
> +static void write_pmevtypern(int n, unsigned long val)
> +{
> +	PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
> +	isb();
> +}
> +
> +/* Read PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
> +static inline unsigned long read_sel_evcntr(int sel)
> +{
> +	write_sysreg(sel, pmselr_el0);
> +	isb();
> +	return read_sysreg(pmxevcntr_el0);
> +}
> +
> +/* Write PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
> +static inline void write_sel_evcntr(int sel, unsigned long val)
> +{
> +	write_sysreg(sel, pmselr_el0);
> +	isb();
> +	write_sysreg(val, pmxevcntr_el0);
> +	isb();
> +}
> +
> +/* Read PMEVTTYPER<n>_EL0 through PMXEVTYPER_EL0 */
Here should be PMEVTYPER.
> +static inline unsigned long read_sel_evtyper(int sel)
> +{
> +	write_sysreg(sel, pmselr_el0);
> +	isb();
> +	return read_sysreg(pmxevtyper_el0);
> +}
> +
> +/* Write PMEVTTYPER<n>_EL0 through PMXEVTYPER_EL0 */
> +static inline void write_sel_evtyper(int sel, unsigned long val)
> +{
> +	write_sysreg(sel, pmselr_el0);
> +	isb();
> +	write_sysreg(val, pmxevtyper_el0);
> +	isb();
> +}
> +
> +static inline void enable_counter(int idx)
> +{
> +	uint64_t v = read_sysreg(pmcntenset_el0);
> +
> +	write_sysreg(BIT(idx) | v, pmcntenset_el0);
> +	isb();
> +}
> +
> +static inline void disable_counter(int idx)
> +{
> +	uint64_t v = read_sysreg(pmcntenset_el0);
> +
> +	write_sysreg(BIT(idx) | v, pmcntenclr_el0);
> +	isb();
> +}
> +
> +/*
> + * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}<n>_EL0
> + * accessors that test cases will use. Each of the accessors will
> + * either directly reads/writes PMEVT{CNTR,TYPER}<n>_EL0
> + * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through
> + * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
> + *
> + * This is used to test that combinations of those accessors provide
> + * the consistent behavior.
> + */
> +struct pmc_accessor {
> +	/* A function to be used to read PMEVTCNTR<n>_EL0 */
> +	unsigned long	(*read_cntr)(int idx);
> +	/* A function to be used to write PMEVTCNTR<n>_EL0 */
> +	void		(*write_cntr)(int idx, unsigned long val);
> +	/* A function to be used to read PMEVTTYPER<n>_EL0 */
> +	unsigned long	(*read_typer)(int idx);
> +	/* A function to be used write PMEVTTYPER<n>_EL0 */
> +	void		(*write_typer)(int idx, unsigned long val);
> +};
> +
> +struct pmc_accessor pmc_accessors[] = {
> +	/* test with all direct accesses */
> +	{ read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern },
> +	/* test with all indirect accesses */
> +	{ read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper },
> +	/* read with direct accesses, and write with indirect accesses */
> +	{ read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper },
> +	/* read with indirect accesses, and write with direct accesses */
> +	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
> +};
> +
> +static void pmu_disable_reset(void)
> +{
> +	uint64_t pmcr = read_sysreg(pmcr_el0);
> +
> +	/* Reset all counters, disabling them */
> +	pmcr &= ~ARMV8_PMU_PMCR_E;
> +	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
> +	isb();
> +}
> +
> +static void pmu_enable(void)
> +{
> +	uint64_t pmcr = read_sysreg(pmcr_el0);
> +
> > +	/* Enable the PMU (PMCR_EL0.E), resetting all counters */
> +	pmcr |= ARMV8_PMU_PMCR_E;
> +	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
> +	isb();
> +}
> +
> +static bool pmu_event_is_supported(uint64_t event)
> +{
> +	GUEST_ASSERT_1(event < 64, event);
> +	return (read_sysreg(pmceid0_el0) & BIT(event));
> +}
> +
>   static uint64_t pmcr_extract_n(uint64_t pmcr_val)
>   {
>   	return (pmcr_val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
>   }
>   
> +#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected)		\
> +{									\
> +	uint64_t _tval = read_sysreg(regname);				\
> +									\
> +	if (set_expected)						\
> +		GUEST_ASSERT_3((_tval & mask), _tval, mask, set_expected); \
> +	else								   \
> +		GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
> +}
> +
> +/*
> + * Check if @mask bits in {PMCNTEN,PMOVS}{SET,CLR} registers
> + * are set or cleared as specified in @set_expected.
> + */
> +static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected)
> +{
> +	GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected);
> +	GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected);
> +	GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected);
> +	GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected);
> +}
> +
> +/*
> + * Check if the bit in {PMCNTEN,PMOVS}{SET,CLR} registers corresponding
> + * to the specified counter (@pmc_idx) can be read/written as expected.
> + * When @set_op is true, it tries to set the bit for the counter in
> + * those registers by writing the SET registers (the bit won't be set
> + * if the counter is not implemented though).
> + * Otherwise, it tries to clear the bits in the registers by writing
> + * the CLR registers.
> + * Then, it checks if the values indicated in the registers are as expected.
> + */
> +static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
> +{
> +	uint64_t pmcr_n, test_bit = BIT(pmc_idx);
> +	bool set_expected = false;
> +
> +	if (set_op) {
> +		write_sysreg(test_bit, pmcntenset_el0);
> +		write_sysreg(test_bit, pmovsset_el0);
> +
> +		/* The bit will be set only if the counter is implemented */
> +		pmcr_n = pmcr_extract_n(read_sysreg(pmcr_el0));
> +		set_expected = (pmc_idx < pmcr_n) ? true : false;
> +	} else {
> +		write_sysreg(test_bit, pmcntenclr_el0);
> +		write_sysreg(test_bit, pmovsclr_el0);
> +	}
> +	check_bitmap_pmu_regs(test_bit, set_expected);
> +}
> +
> +/*
> + * Tests for reading/writing registers for the (implemented) event counter
> + * specified by @pmc_idx.
> + */
> +static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
> +{
> +	uint64_t write_data, read_data, read_data_prev, test_bit;
> +
> +	/* Disable all PMCs and reset all PMCs to zero. */
> +	pmu_disable_reset();
> +
> +
> +	/*
> +	 * Tests for reading/writing {PMCNTEN,PMOVS}{SET,CLR}_EL1.
> +	 */
> +
> +	test_bit = 1ul << pmc_idx;
> +	/* Make sure that the bit in those registers are set to 0 */
> +	test_bitmap_pmu_regs(test_bit, false);
> +	/* Test if setting the bit in those registers works */
> +	test_bitmap_pmu_regs(test_bit, true);
> +	/* Test if clearing the bit in those registers works */
> +	test_bitmap_pmu_regs(test_bit, false);
> +
> +
> +	/*
> +	 * Tests for reading/writing the event type register.
> +	 */
> +
> +	read_data = acc->read_typer(pmc_idx);
> +	/*
> +	 * Set the event type register to an arbitrary value just for testing
> +	 * of reading/writing the register.
> +	 * ArmARM says that for the event from 0x0000 to 0x003F,
> +	 * the value indicated in the PMEVTYPER<n>_EL0.evtCount field is
> +	 * the value written to the field even when the specified event
> +	 * is not supported.
> +	 */
> +	write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED);
> +	acc->write_typer(pmc_idx, write_data);
> +	read_data = acc->read_typer(pmc_idx);
> +	GUEST_ASSERT_4(read_data == write_data,
> +		       pmc_idx, acc, read_data, write_data);
> +
> +
> +	/*
> +	 * Tests for reading/writing the event count register.
> +	 */
> +
> +	read_data = acc->read_cntr(pmc_idx);
> +
> +	/* The count value must be 0, as it is not used after the reset */
> +	GUEST_ASSERT_3(read_data == 0, pmc_idx, acc, read_data);
> +
> +	write_data = read_data + pmc_idx + 0x12345;
> +	acc->write_cntr(pmc_idx, write_data);
> +	read_data = acc->read_cntr(pmc_idx);
> +	GUEST_ASSERT_4(read_data == write_data,
> +		       pmc_idx, acc, read_data, write_data);
> +
> +
> +	/* The following test requires the INST_RETIRED event support. */
> +	if (!pmu_event_is_supported(ARMV8_PMUV3_PERFCTR_INST_RETIRED))
> +		return;
> +
> +	pmu_enable();
> +	acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
> +
> +	/*
> +	 * Make sure that the counter doesn't count the INST_RETIRED
> +	 * event when disabled, and the counter counts the event when enabled.
> +	 */
> +	disable_counter(pmc_idx);
> +	read_data_prev = acc->read_cntr(pmc_idx);
> +	read_data = acc->read_cntr(pmc_idx);
> +	GUEST_ASSERT_4(read_data == read_data_prev,
> +		       pmc_idx, acc, read_data, read_data_prev);
> +
> +	enable_counter(pmc_idx);
> +	read_data = acc->read_cntr(pmc_idx);
> +
> +	/*
> +	 * The counter should be increased by at least 1, as there is at
> +	 * least one instruction between enabling the counter and reading
> +	 * the counter (the test assumes that all event counters are not
> +	 * being used by the host's higher priority events).
> +	 */
> +	GUEST_ASSERT_4(read_data > read_data_prev,
> +		       pmc_idx, acc, read_data, read_data_prev);
> +}
> +
>   /*
>    * The guest is configured with PMUv3 with @expected_pmcr_n number of
>    * event counters.
> - * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
> + * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
> + * if reading/writing PMU registers for implemented counters can work
> + * as expected.
>    */
>   static void guest_code(uint64_t expected_pmcr_n)
>   {
>   	uint64_t pmcr, pmcr_n;
> +	int i, pmc;
>   
>   	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
>   
> @@ -40,6 +372,15 @@ static void guest_code(uint64_t expected_pmcr_n)
>   	/* Make sure that PMCR_EL0.N indicates the value userspace set */
>   	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
>   
> +	/*
> +	 * Tests for reading/writing PMU registers for implemented counters.
> +	 * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
> +	 */
> +	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
> +		for (pmc = 0; pmc < pmcr_n; pmc++)
> +			test_access_pmc_regs(&pmc_accessors[i], pmc);
> +	}
> +
>   	GUEST_DONE();
>   }
>   
> @@ -96,7 +437,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
>   	vcpu_run(vcpu);
>   	switch (get_ucall(vcpu, &uc)) {
>   	case UCALL_ABORT:
> -		REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
> +		REPORT_GUEST_ASSERT_4(uc, "values:%#lx %#lx %#lx %#lx");
>   		break;
>   	case UCALL_DONE:
>   		break;

-- 
Regards,
Shaoqin


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v2 7/8] KVM: selftests: aarch64: vPMU register test for implemented counters
@ 2023-01-18  7:47     ` Shaoqin Huang
  0 siblings, 0 replies; 46+ messages in thread
From: Shaoqin Huang @ 2023-01-18  7:47 UTC (permalink / raw)
  To: Reiji Watanabe, Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Oliver Upton,
	Jing Zhang, Raghavendra Rao Anata

Hi Reiji,


I found some place should be PMEVTYPER, but wrongly written to 
PMEVTTYPE. Should we fix them?


I list some of them, but not covered every one.

On 1/17/23 09:35, Reiji Watanabe wrote:
> Add a new test case to the vpmu_counter_access test to check if PMU
> registers or their bits for implemented counters on the vCPU are
> readable/writable as expected, and can be programmed to count events.
>
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> ---
>   .../kvm/aarch64/vpmu_counter_access.c         | 347 +++++++++++++++++-
>   1 file changed, 344 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> index 704a2500b7e1..54b69c76c824 100644
> --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> @@ -5,7 +5,8 @@
>    * Copyright (c) 2022 Google LLC.
>    *
>    * This test checks if the guest can see the same number of the PMU event
> - * counters (PMCR_EL1.N) that userspace sets.
> + * counters (PMCR_EL1.N) that userspace sets, and if the guest can access
> + * those counters.
>    * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
>    */
>   #include <kvm_util.h>
> @@ -18,19 +19,350 @@
>   /* The max number of the PMU event counters (excluding the cycle counter) */
>   #define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
>   
> +/*
> + * The macros and functions below for reading/writing PMEVT{CNTR,TYPER}<n>_EL0
Here should be PMEV{CNTR, TYPER}.
> + * were basically copied from arch/arm64/kernel/perf_event.c.
> + */
> +#define PMEVN_CASE(n, case_macro) \
> +	case n: case_macro(n); break
> +
> +#define PMEVN_SWITCH(x, case_macro)				\
> +	do {							\
> +		switch (x) {					\
> +		PMEVN_CASE(0,  case_macro);			\
> +		PMEVN_CASE(1,  case_macro);			\
> +		PMEVN_CASE(2,  case_macro);			\
> +		PMEVN_CASE(3,  case_macro);			\
> +		PMEVN_CASE(4,  case_macro);			\
> +		PMEVN_CASE(5,  case_macro);			\
> +		PMEVN_CASE(6,  case_macro);			\
> +		PMEVN_CASE(7,  case_macro);			\
> +		PMEVN_CASE(8,  case_macro);			\
> +		PMEVN_CASE(9,  case_macro);			\
> +		PMEVN_CASE(10, case_macro);			\
> +		PMEVN_CASE(11, case_macro);			\
> +		PMEVN_CASE(12, case_macro);			\
> +		PMEVN_CASE(13, case_macro);			\
> +		PMEVN_CASE(14, case_macro);			\
> +		PMEVN_CASE(15, case_macro);			\
> +		PMEVN_CASE(16, case_macro);			\
> +		PMEVN_CASE(17, case_macro);			\
> +		PMEVN_CASE(18, case_macro);			\
> +		PMEVN_CASE(19, case_macro);			\
> +		PMEVN_CASE(20, case_macro);			\
> +		PMEVN_CASE(21, case_macro);			\
> +		PMEVN_CASE(22, case_macro);			\
> +		PMEVN_CASE(23, case_macro);			\
> +		PMEVN_CASE(24, case_macro);			\
> +		PMEVN_CASE(25, case_macro);			\
> +		PMEVN_CASE(26, case_macro);			\
> +		PMEVN_CASE(27, case_macro);			\
> +		PMEVN_CASE(28, case_macro);			\
> +		PMEVN_CASE(29, case_macro);			\
> +		PMEVN_CASE(30, case_macro);			\
> +		default:					\
> +			GUEST_ASSERT_1(0, x);			\
> +		}						\
> +	} while (0)
> +
> +#define RETURN_READ_PMEVCNTRN(n) \
> +	return read_sysreg(pmevcntr##n##_el0)
> +static unsigned long read_pmevcntrn(int n)
> +{
> +	PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
> +	return 0;
> +}
> +
> +#define WRITE_PMEVCNTRN(n) \
> +	write_sysreg(val, pmevcntr##n##_el0)
> +static void write_pmevcntrn(int n, unsigned long val)
> +{
> +	PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
> +	isb();
> +}
> +
> +#define READ_PMEVTYPERN(n) \
> +	return read_sysreg(pmevtyper##n##_el0)
> +static unsigned long read_pmevtypern(int n)
> +{
> +	PMEVN_SWITCH(n, READ_PMEVTYPERN);
> +	return 0;
> +}
> +
> +#define WRITE_PMEVTYPERN(n) \
> +	write_sysreg(val, pmevtyper##n##_el0)
> +static void write_pmevtypern(int n, unsigned long val)
> +{
> +	PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
> +	isb();
> +}
> +
> +/* Read PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
> +static inline unsigned long read_sel_evcntr(int sel)
> +{
> +	write_sysreg(sel, pmselr_el0);
> +	isb();
> +	return read_sysreg(pmxevcntr_el0);
> +}
> +
> +/* Write PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
> +static inline void write_sel_evcntr(int sel, unsigned long val)
> +{
> +	write_sysreg(sel, pmselr_el0);
> +	isb();
> +	write_sysreg(val, pmxevcntr_el0);
> +	isb();
> +}
> +
> +/* Read PMEVTTYPER<n>_EL0 through PMXEVTYPER_EL0 */
Here should be PMEVTYPER.
> +static inline unsigned long read_sel_evtyper(int sel)
> +{
> +	write_sysreg(sel, pmselr_el0);
> +	isb();
> +	return read_sysreg(pmxevtyper_el0);
> +}
> +
> +/* Write PMEVTTYPER<n>_EL0 through PMXEVTYPER_EL0 */
> +static inline void write_sel_evtyper(int sel, unsigned long val)
> +{
> +	write_sysreg(sel, pmselr_el0);
> +	isb();
> +	write_sysreg(val, pmxevtyper_el0);
> +	isb();
> +}
> +
> +static inline void enable_counter(int idx)
> +{
> +	uint64_t v = read_sysreg(pmcntenset_el0);
> +
> +	write_sysreg(BIT(idx) | v, pmcntenset_el0);
> +	isb();
> +}
> +
> +static inline void disable_counter(int idx)
> +{
> +	uint64_t v = read_sysreg(pmcntenset_el0);
> +
> +	write_sysreg(BIT(idx) | v, pmcntenclr_el0);
> +	isb();
> +}
> +
> +/*
> + * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}<n>_EL0
> + * accessors that test cases will use. Each of the accessors will
> + * either directly reads/writes PMEVT{CNTR,TYPER}<n>_EL0
> + * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through
> + * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
> + *
> + * This is used to test that combinations of those accessors provide
> + * the consistent behavior.
> + */
> +struct pmc_accessor {
> +	/* A function to be used to read PMEVTCNTR<n>_EL0 */
> +	unsigned long	(*read_cntr)(int idx);
> +	/* A function to be used to write PMEVTCNTR<n>_EL0 */
> +	void		(*write_cntr)(int idx, unsigned long val);
> +	/* A function to be used to read PMEVTTYPER<n>_EL0 */
> +	unsigned long	(*read_typer)(int idx);
> +	/* A function to be used write PMEVTTYPER<n>_EL0 */
> +	void		(*write_typer)(int idx, unsigned long val);
> +};
> +
> +struct pmc_accessor pmc_accessors[] = {
> +	/* test with all direct accesses */
> +	{ read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern },
> +	/* test with all indirect accesses */
> +	{ read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper },
> +	/* read with direct accesses, and write with indirect accesses */
> +	{ read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper },
> +	/* read with indirect accesses, and write with direct accesses */
> +	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
> +};
> +
> +static void pmu_disable_reset(void)
> +{
> +	uint64_t pmcr = read_sysreg(pmcr_el0);
> +
> +	/* Reset all counters, disabling them */
> +	pmcr &= ~ARMV8_PMU_PMCR_E;
> +	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
> +	isb();
> +}
> +
> +static void pmu_enable(void)
> +{
> +	uint64_t pmcr = read_sysreg(pmcr_el0);
> +
> +	/* Reset all counters, disabling them */
> +	pmcr |= ARMV8_PMU_PMCR_E;
> +	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
> +	isb();
> +}
> +
> +static bool pmu_event_is_supported(uint64_t event)
> +{
> +	GUEST_ASSERT_1(event < 64, event);
> +	return (read_sysreg(pmceid0_el0) & BIT(event));
> +}
> +
>   static uint64_t pmcr_extract_n(uint64_t pmcr_val)
>   {
>   	return (pmcr_val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
>   }
>   
> +#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected)		\
> +{									\
> +	uint64_t _tval = read_sysreg(regname);				\
> +									\
> +	if (set_expected)						\
> +		GUEST_ASSERT_3((_tval & mask), _tval, mask, set_expected); \
> +	else								   \
> +		GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
> +}
> +
> +/*
> + * Check if @mask bits in {PMCNTEN,PMOVS}{SET,CLR} registers
> + * are set or cleared as specified in @set_expected.
> + */
> +static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected)
> +{
> +	GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected);
> +	GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected);
> +	GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected);
> +	GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected);
> +}
> +
> +/*
> + * Check if the bit in {PMCNTEN,PMOVS}{SET,CLR} registers corresponding
> + * to the specified counter (@pmc_idx) can be read/written as expected.
> + * When @set_op is true, it tries to set the bit for the counter in
> + * those registers by writing the SET registers (the bit won't be set
> + * if the counter is not implemented though).
> + * Otherwise, it tries to clear the bits in the registers by writing
> + * the CLR registers.
> + * Then, it checks if the values indicated in the registers are as expected.
> + */
> +static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
> +{
> +	uint64_t pmcr_n, test_bit = BIT(pmc_idx);
> +	bool set_expected = false;
> +
> +	if (set_op) {
> +		write_sysreg(test_bit, pmcntenset_el0);
> +		write_sysreg(test_bit, pmovsset_el0);
> +
> +		/* The bit will be set only if the counter is implemented */
> +		pmcr_n = pmcr_extract_n(read_sysreg(pmcr_el0));
> +		set_expected = (pmc_idx < pmcr_n) ? true : false;
> +	} else {
> +		write_sysreg(test_bit, pmcntenclr_el0);
> +		write_sysreg(test_bit, pmovsclr_el0);
> +	}
> +	check_bitmap_pmu_regs(test_bit, set_expected);
> +}
> +
> +/*
> + * Tests for reading/writing registers for the (implemented) event counter
> + * specified by @pmc_idx.
> + */
> +static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
> +{
> +	uint64_t write_data, read_data, read_data_prev, test_bit;
> +
> +	/* Disable all PMCs and reset all PMCs to zero. */
> +	pmu_disable_reset();
> +
> +
> +	/*
> +	 * Tests for reading/writing {PMCNTEN,PMOVS}{SET,CLR}_EL1.
> +	 */
> +
> +	test_bit = 1ul << pmc_idx;
> +	/* Make sure that the bit in those registers are set to 0 */
> +	test_bitmap_pmu_regs(test_bit, false);
> +	/* Test if setting the bit in those registers works */
> +	test_bitmap_pmu_regs(test_bit, true);
> +	/* Test if clearing the bit in those registers works */
> +	test_bitmap_pmu_regs(test_bit, false);
> +
> +
> +	/*
> +	 * Tests for reading/writing the event type register.
> +	 */
> +
> +	read_data = acc->read_typer(pmc_idx);
> +	/*
> +	 * Set the event type register to an arbitrary value just for testing
> +	 * of reading/writing the register.
> +	 * ArmARM says that for the event from 0x0000 to 0x003F,
> +	 * the value indicated in the PMEVTYPER<n>_EL0.evtCount field is
> +	 * the value written to the field even when the specified event
> +	 * is not supported.
> +	 */
> +	write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED);
> +	acc->write_typer(pmc_idx, write_data);
> +	read_data = acc->read_typer(pmc_idx);
> +	GUEST_ASSERT_4(read_data == write_data,
> +		       pmc_idx, acc, read_data, write_data);
> +
> +
> +	/*
> +	 * Tests for reading/writing the event count register.
> +	 */
> +
> +	read_data = acc->read_cntr(pmc_idx);
> +
> +	/* The count value must be 0, as it is not used after the reset */
> +	GUEST_ASSERT_3(read_data == 0, pmc_idx, acc, read_data);
> +
> +	write_data = read_data + pmc_idx + 0x12345;
> +	acc->write_cntr(pmc_idx, write_data);
> +	read_data = acc->read_cntr(pmc_idx);
> +	GUEST_ASSERT_4(read_data == write_data,
> +		       pmc_idx, acc, read_data, write_data);
> +
> +
> +	/* The following test requires the INST_RETIRED event support. */
> +	if (!pmu_event_is_supported(ARMV8_PMUV3_PERFCTR_INST_RETIRED))
> +		return;
> +
> +	pmu_enable();
> +	acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
> +
> +	/*
> +	 * Make sure that the counter doesn't count the INST_RETIRED
> +	 * event when disabled, and the counter counts the event when enabled.
> +	 */
> +	disable_counter(pmc_idx);
> +	read_data_prev = acc->read_cntr(pmc_idx);
> +	read_data = acc->read_cntr(pmc_idx);
> +	GUEST_ASSERT_4(read_data == read_data_prev,
> +		       pmc_idx, acc, read_data, read_data_prev);
> +
> +	enable_counter(pmc_idx);
> +	read_data = acc->read_cntr(pmc_idx);
> +
> +	/*
> +	 * The counter should be increased by at least 1, as there is at
> +	 * least one instruction between enabling the counter and reading
> +	 * the counter (the test assumes that all event counters are not
> +	 * being used by the host's higher priority events).
> +	 */
> +	GUEST_ASSERT_4(read_data > read_data_prev,
> +		       pmc_idx, acc, read_data, read_data_prev);
> +}
> +
>   /*
>    * The guest is configured with PMUv3 with @expected_pmcr_n number of
>    * event counters.
> - * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
> + * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
> + * if reading/writing PMU registers for implemented counters can work
> + * as expected.
>    */
>   static void guest_code(uint64_t expected_pmcr_n)
>   {
>   	uint64_t pmcr, pmcr_n;
> +	int i, pmc;
>   
>   	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
>   
> @@ -40,6 +372,15 @@ static void guest_code(uint64_t expected_pmcr_n)
>   	/* Make sure that PMCR_EL0.N indicates the value userspace set */
>   	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
>   
> +	/*
> +	 * Tests for reading/writing PMU registers for implemented counters.
> +	 * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
> +	 */
> +	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
> +		for (pmc = 0; pmc < pmcr_n; pmc++)
> +			test_access_pmc_regs(&pmc_accessors[i], pmc);
> +	}
> +
>   	GUEST_DONE();
>   }
>   
> @@ -96,7 +437,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
>   	vcpu_run(vcpu);
>   	switch (get_ucall(vcpu, &uc)) {
>   	case UCALL_ABORT:
> -		REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
> +		REPORT_GUEST_ASSERT_4(uc, "values:%#lx %#lx %#lx %#lx");
>   		break;
>   	case UCALL_DONE:
>   		break;

-- 
Regards,
Shaoqin


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* Re: [PATCH v2 8/8] KVM: selftests: aarch64: vPMU register test for unimplemented counters
  2023-01-17  1:35   ` Reiji Watanabe
@ 2023-01-18  7:49     ` Shaoqin Huang
  -1 siblings, 0 replies; 46+ messages in thread
From: Shaoqin Huang @ 2023-01-18  7:49 UTC (permalink / raw)
  To: Reiji Watanabe, Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Oliver Upton,
	Jing Zhang, Raghavendra Rao Anata

Hi Reiji,

On 1/17/23 09:35, Reiji Watanabe wrote:
> Add a new test case to the vpmu_counter_access test to check
> that PMU registers or their bits for unimplemented counters are
> not accessible or are RAZ, as expected.
>
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> ---
>   .../kvm/aarch64/vpmu_counter_access.c         | 103 +++++++++++++++++-
>   .../selftests/kvm/include/aarch64/processor.h |   1 +
>   2 files changed, 98 insertions(+), 6 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> index 54b69c76c824..a7e34d63808b 100644
> --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> @@ -5,8 +5,8 @@
>    * Copyright (c) 2022 Google LLC.
>    *
>    * This test checks if the guest can see the same number of the PMU event
> - * counters (PMCR_EL1.N) that userspace sets, and if the guest can access
> - * those counters.
> + * counters (PMCR_EL1.N) that userspace sets, if the guest can access
> + * those counters, and if the guest cannot access any other counters.
>    * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
>    */
>   #include <kvm_util.h>
> @@ -179,6 +179,51 @@ struct pmc_accessor pmc_accessors[] = {
>   	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
>   };
>   
> +#define INVALID_EC	(-1ul)
> +uint64_t expected_ec = INVALID_EC;
> +uint64_t op_end_addr;
> +
> +static void guest_sync_handler(struct ex_regs *regs)
> +{
> +	uint64_t esr, ec;
> +
> +	esr = read_sysreg(esr_el1);
> +	ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;
> +	GUEST_ASSERT_4(op_end_addr && (expected_ec == ec),
> +		       regs->pc, esr, ec, expected_ec);
> +
> +	/* Will go back to op_end_addr after the handler exits */
> +	regs->pc = op_end_addr;
> +
> +	/*
> +	 * Clear op_end_addr, and set expected_ec to INVALID_EC
> +	 * as a sign that an exception has occurred.
> +	 */
> +	op_end_addr = 0;
> +	expected_ec = INVALID_EC;
> +}
> +
> +/*
> + * Run the given operation that should trigger an exception with the
> + * given exception class. The exception handler (guest_sync_handler)
> + * will reset op_end_addr to 0, and expected_ec to INVALID_EC, and
> + * will come back to the instruction at the @done_label.
> + * The @done_label must be a unique label in this test program.
> + */
> +#define TEST_EXCEPTION(ec, ops, done_label)		\
> +{							\
> +	extern int done_label;				\
> +							\
> +	WRITE_ONCE(op_end_addr, (uint64_t)&done_label);	\
> +	GUEST_ASSERT(ec != INVALID_EC);			\
> +	WRITE_ONCE(expected_ec, ec);			\
> +	dsb(ish);					\
> +	ops;						\
> +	asm volatile(#done_label":");			\
> +	GUEST_ASSERT(!op_end_addr);			\
> +	GUEST_ASSERT(expected_ec == INVALID_EC);	\
> +}
> +
>   static void pmu_disable_reset(void)
>   {
>   	uint64_t pmcr = read_sysreg(pmcr_el0);
> @@ -352,16 +397,38 @@ static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
>   		       pmc_idx, acc, read_data, read_data_prev);
>   }
>   
> +/*
> + * Tests for reading/writing registers for the unimplemented event counter
> + * specified by @pmc_idx (>= PMCR_EL1.N).
> + */
> +static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
> +{
> +	/*
> +	 * Reading/writing the event count/type registers should cause
> +	 * an UNDEFINED exception.
> +	 */
> +	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_cntr(pmc_idx), inv_rd_cntr);
> +	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_cntr(pmc_idx, 0), inv_wr_cntr);
> +	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_typer(pmc_idx), inv_rd_typer);
> +	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_typer(pmc_idx, 0), inv_wr_typer);
> +	/*
> +	 * The bit corresponding to the (unimplemented) counter in
> +	 * {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers should be RAZ.
> +	 */
> +	test_bitmap_pmu_regs(pmc_idx, 1);
> +	test_bitmap_pmu_regs(pmc_idx, 0);
> +}
> +
>   /*
>    * The guest is configured with PMUv3 with @expected_pmcr_n number of
>    * event counters.
>    * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
> - * if reading/writing PMU registers for implemented counters can work
> - * as expected.
> + * if reading/writing PMU registers for implemented or unimplemented
> + * counters can work as expected.
>    */
>   static void guest_code(uint64_t expected_pmcr_n)
>   {
> -	uint64_t pmcr, pmcr_n;
> +	uint64_t pmcr, pmcr_n, unimp_mask;
>   	int i, pmc;
>   
>   	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
> @@ -372,6 +439,14 @@ static void guest_code(uint64_t expected_pmcr_n)
>   	/* Make sure that PMCR_EL0.N indicates the value userspace set */
>   	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
>   
> +	/*
> +	 * Make sure that (RAZ) bits corresponding to unimplemented event
> +	 * counters in {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers are reset to zero.
> +	 * (NOTE: bits for implemented event counters are reset to UNKNOWN)
> +	 */
> +	unimp_mask = GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n);
> +	check_bitmap_pmu_regs(unimp_mask, false);
> +
>   	/*
>   	 * Tests for reading/writing PMU registers for implemented counters.
>   	 * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
> @@ -381,6 +456,14 @@ static void guest_code(uint64_t expected_pmcr_n)
>   			test_access_pmc_regs(&pmc_accessors[i], pmc);
>   	}
>   
> +	/*
> +	 * Tests for reading/writing PMU registers for unimplemented counters.
> +	 * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
Here should be PMEV{CNTR, TYPER}<n>.
> +	 */
> +	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
> +		for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++)
> +			test_access_invalid_pmc_regs(&pmc_accessors[i], pmc);
> +	}
>   	GUEST_DONE();
>   }
>   
> @@ -394,7 +477,7 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
>   	struct kvm_vm *vm;
>   	struct kvm_vcpu *vcpu;
>   	struct kvm_vcpu_init init;
> -	uint8_t pmuver;
> +	uint8_t pmuver, ec;
>   	uint64_t dfr0, irq = 23;
>   	struct kvm_device_attr irq_attr = {
>   		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
> @@ -407,11 +490,18 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
>   	};
>   
>   	vm = vm_create(1);
> +	vm_init_descriptor_tables(vm);
> +	/* Catch exceptions for easier debugging */
> +	for (ec = 0; ec < ESR_EC_NUM; ec++) {
> +		vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, ec,
> +					guest_sync_handler);
> +	}
>   
>   	/* Create vCPU with PMUv3 */
>   	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
>   	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
>   	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
> +	vcpu_init_descriptor_tables(vcpu);
>   	*gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
>   
>   	/* Make sure that PMUv3 support is indicated in the ID register */
> @@ -480,6 +570,7 @@ static void run_test(uint64_t pmcr_n)
>   	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
>   	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
>   	aarch64_vcpu_setup(vcpu, &init);
> +	vcpu_init_descriptor_tables(vcpu);
>   	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
>   	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
>   
> diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
> index 5f977528e09c..52d87809356c 100644
> --- a/tools/testing/selftests/kvm/include/aarch64/processor.h
> +++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
> @@ -104,6 +104,7 @@ enum {
>   #define ESR_EC_SHIFT		26
>   #define ESR_EC_MASK		(ESR_EC_NUM - 1)
>   
> +#define ESR_EC_UNKNOWN		0x0
>   #define ESR_EC_SVC64		0x15
>   #define ESR_EC_IABT		0x21
>   #define ESR_EC_DABT		0x25

-- 
Regards,
Shaoqin




* Re: [PATCH v2 7/8] KVM: selftests: aarch64: vPMU register test for implemented counters
  2023-01-18  7:47     ` Shaoqin Huang
@ 2023-01-19  3:02       ` Reiji Watanabe
  -1 siblings, 0 replies; 46+ messages in thread
From: Reiji Watanabe @ 2023-01-19  3:02 UTC (permalink / raw)
  To: Shaoqin Huang
  Cc: Marc Zyngier, kvmarm, kvm, linux-arm-kernel, James Morse,
	Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
	Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata

Hi Shaoqin,

> I found some places that should be PMEVTYPER but are wrongly
> written as PMEVTTYPE. Should we fix them?

Thank you for catching them!
I will review the patch, and fix them all in v3.

Thank you,
Reiji

On Tue, Jan 17, 2023 at 11:47 PM Shaoqin Huang <shahuang@redhat.com> wrote:
>
> Hi Reiji,
>
>
> I found some places that should be PMEVTYPER but are wrongly
> written as PMEVTTYPE. Should we fix them?
>
>
> I list some of them, but not covered every one.
>
> On 1/17/23 09:35, Reiji Watanabe wrote:
> > Add a new test case to the vpmu_counter_access test to check if PMU
> > registers or their bits for implemented counters on the vCPU are
> > readable/writable as expected, and can be programmed to count events.
> >
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > ---
> >   .../kvm/aarch64/vpmu_counter_access.c         | 347 +++++++++++++++++-
> >   1 file changed, 344 insertions(+), 3 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > index 704a2500b7e1..54b69c76c824 100644
> > --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > @@ -5,7 +5,8 @@
> >    * Copyright (c) 2022 Google LLC.
> >    *
> >    * This test checks if the guest can see the same number of the PMU event
> > - * counters (PMCR_EL1.N) that userspace sets.
> > + * counters (PMCR_EL1.N) that userspace sets, and if the guest can access
> > + * those counters.
> >    * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
> >    */
> >   #include <kvm_util.h>
> > @@ -18,19 +19,350 @@
> >   /* The max number of the PMU event counters (excluding the cycle counter) */
> >   #define ARMV8_PMU_MAX_GENERAL_COUNTERS      (ARMV8_PMU_MAX_COUNTERS - 1)
> >
> > +/*
> > + * The macros and functions below for reading/writing PMEVT{CNTR,TYPER}<n>_EL0
> Here should be PMEV{CNTR, TYPER}.
> > + * were basically copied from arch/arm64/kernel/perf_event.c.
> > + */
> > +#define PMEVN_CASE(n, case_macro) \
> > +     case n: case_macro(n); break
> > +
> > +#define PMEVN_SWITCH(x, case_macro)                          \
> > +     do {                                                    \
> > +             switch (x) {                                    \
> > +             PMEVN_CASE(0,  case_macro);                     \
> > +             PMEVN_CASE(1,  case_macro);                     \
> > +             PMEVN_CASE(2,  case_macro);                     \
> > +             PMEVN_CASE(3,  case_macro);                     \
> > +             PMEVN_CASE(4,  case_macro);                     \
> > +             PMEVN_CASE(5,  case_macro);                     \
> > +             PMEVN_CASE(6,  case_macro);                     \
> > +             PMEVN_CASE(7,  case_macro);                     \
> > +             PMEVN_CASE(8,  case_macro);                     \
> > +             PMEVN_CASE(9,  case_macro);                     \
> > +             PMEVN_CASE(10, case_macro);                     \
> > +             PMEVN_CASE(11, case_macro);                     \
> > +             PMEVN_CASE(12, case_macro);                     \
> > +             PMEVN_CASE(13, case_macro);                     \
> > +             PMEVN_CASE(14, case_macro);                     \
> > +             PMEVN_CASE(15, case_macro);                     \
> > +             PMEVN_CASE(16, case_macro);                     \
> > +             PMEVN_CASE(17, case_macro);                     \
> > +             PMEVN_CASE(18, case_macro);                     \
> > +             PMEVN_CASE(19, case_macro);                     \
> > +             PMEVN_CASE(20, case_macro);                     \
> > +             PMEVN_CASE(21, case_macro);                     \
> > +             PMEVN_CASE(22, case_macro);                     \
> > +             PMEVN_CASE(23, case_macro);                     \
> > +             PMEVN_CASE(24, case_macro);                     \
> > +             PMEVN_CASE(25, case_macro);                     \
> > +             PMEVN_CASE(26, case_macro);                     \
> > +             PMEVN_CASE(27, case_macro);                     \
> > +             PMEVN_CASE(28, case_macro);                     \
> > +             PMEVN_CASE(29, case_macro);                     \
> > +             PMEVN_CASE(30, case_macro);                     \
> > +             default:                                        \
> > +                     GUEST_ASSERT_1(0, x);                   \
> > +             }                                               \
> > +     } while (0)
> > +
> > +#define RETURN_READ_PMEVCNTRN(n) \
> > +     return read_sysreg(pmevcntr##n##_el0)
> > +static unsigned long read_pmevcntrn(int n)
> > +{
> > +     PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
> > +     return 0;
> > +}
> > +
> > +#define WRITE_PMEVCNTRN(n) \
> > +     write_sysreg(val, pmevcntr##n##_el0)
> > +static void write_pmevcntrn(int n, unsigned long val)
> > +{
> > +     PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
> > +     isb();
> > +}
> > +
> > +#define READ_PMEVTYPERN(n) \
> > +     return read_sysreg(pmevtyper##n##_el0)
> > +static unsigned long read_pmevtypern(int n)
> > +{
> > +     PMEVN_SWITCH(n, READ_PMEVTYPERN);
> > +     return 0;
> > +}
> > +
> > +#define WRITE_PMEVTYPERN(n) \
> > +     write_sysreg(val, pmevtyper##n##_el0)
> > +static void write_pmevtypern(int n, unsigned long val)
> > +{
> > +     PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
> > +     isb();
> > +}
> > +
> > +/* Read PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
> > +static inline unsigned long read_sel_evcntr(int sel)
> > +{
> > +     write_sysreg(sel, pmselr_el0);
> > +     isb();
> > +     return read_sysreg(pmxevcntr_el0);
> > +}
> > +
> > +/* Write PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
> > +static inline void write_sel_evcntr(int sel, unsigned long val)
> > +{
> > +     write_sysreg(sel, pmselr_el0);
> > +     isb();
> > +     write_sysreg(val, pmxevcntr_el0);
> > +     isb();
> > +}
> > +
> > +/* Read PMEVTTYPER<n>_EL0 through PMXEVTYPER_EL0 */
> Here should be PMEVTYPER.
> > +static inline unsigned long read_sel_evtyper(int sel)
> > +{
> > +     write_sysreg(sel, pmselr_el0);
> > +     isb();
> > +     return read_sysreg(pmxevtyper_el0);
> > +}
> > +
> > +/* Write PMEVTTYPER<n>_EL0 through PMXEVTYPER_EL0 */
> > +static inline void write_sel_evtyper(int sel, unsigned long val)
> > +{
> > +     write_sysreg(sel, pmselr_el0);
> > +     isb();
> > +     write_sysreg(val, pmxevtyper_el0);
> > +     isb();
> > +}
> > +
> > +static inline void enable_counter(int idx)
> > +{
> > +     uint64_t v = read_sysreg(pmcntenset_el0);
> > +
> > +     write_sysreg(BIT(idx) | v, pmcntenset_el0);
> > +     isb();
> > +}
> > +
> > +static inline void disable_counter(int idx)
> > +{
> > +     uint64_t v = read_sysreg(pmcntenset_el0);
> > +
> > +     write_sysreg(BIT(idx) | v, pmcntenclr_el0);
> > +     isb();
> > +}
> > +
> > +/*
> > + * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}<n>_EL0
> > + * accessors that test cases will use. Each of the accessors will
> > + * either directly reads/writes PMEVT{CNTR,TYPER}<n>_EL0
> > + * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through
> > + * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
> > + *
> > + * This is used to test that combinations of those accessors provide
> > + * consistent behavior.
> > + */
> > +struct pmc_accessor {
> > +     /* A function to be used to read PMEVTCNTR<n>_EL0 */
> > +     unsigned long   (*read_cntr)(int idx);
> > +     /* A function to be used to write PMEVTCNTR<n>_EL0 */
> > +     void            (*write_cntr)(int idx, unsigned long val);
> > +     /* A function to be used to read PMEVTTYPER<n>_EL0 */
> > +     unsigned long   (*read_typer)(int idx);
> > +     /* A function to be used write PMEVTTYPER<n>_EL0 */
> > +     void            (*write_typer)(int idx, unsigned long val);
> > +};
> > +
> > +struct pmc_accessor pmc_accessors[] = {
> > +     /* test with all direct accesses */
> > +     { read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern },
> > +     /* test with all indirect accesses */
> > +     { read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper },
> > +     /* read with direct accesses, and write with indirect accesses */
> > +     { read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper },
> > +     /* read with indirect accesses, and write with direct accesses */
> > +     { read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
> > +};
> > +
> > +static void pmu_disable_reset(void)
> > +{
> > +     uint64_t pmcr = read_sysreg(pmcr_el0);
> > +
> > +     /* Reset all counters, disabling them */
> > +     pmcr &= ~ARMV8_PMU_PMCR_E;
> > +     write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
> > +     isb();
> > +}
> > +
> > +static void pmu_enable(void)
> > +{
> > +     uint64_t pmcr = read_sysreg(pmcr_el0);
> > +
> > +     /* Reset all counters, and enable the PMU */
> > +     pmcr |= ARMV8_PMU_PMCR_E;
> > +     write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
> > +     isb();
> > +}
> > +
> > +static bool pmu_event_is_supported(uint64_t event)
> > +{
> > +     GUEST_ASSERT_1(event < 64, event);
> > +     return (read_sysreg(pmceid0_el0) & BIT(event));
> > +}
> > +
> >   static uint64_t pmcr_extract_n(uint64_t pmcr_val)
> >   {
> >       return (pmcr_val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
> >   }
> >
> > +#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected)         \
> > +{                                                                    \
> > +     uint64_t _tval = read_sysreg(regname);                          \
> > +                                                                     \
> > +     if (set_expected)                                               \
> > +             GUEST_ASSERT_3((_tval & mask), _tval, mask, set_expected); \
> > +     else                                                               \
> > +             GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
> > +}
> > +
> > +/*
> > + * Check if @mask bits in {PMCNTEN,PMOVS}{SET,CLR} registers
> > + * are set or cleared as specified in @set_expected.
> > + */
> > +static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected)
> > +{
> > +     GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected);
> > +     GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected);
> > +     GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected);
> > +     GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected);
> > +}
> > +
> > +/*
> > + * Check if the bit in {PMCNTEN,PMOVS}{SET,CLR} registers corresponding
> > + * to the specified counter (@pmc_idx) can be read/written as expected.
> > + * When @set_op is true, it tries to set the bit for the counter in
> > + * those registers by writing the SET registers (the bit won't be set
> > + * if the counter is not implemented though).
> > + * Otherwise, it tries to clear the bits in the registers by writing
> > + * the CLR registers.
> > + * Then, it checks if the values indicated in the registers are as expected.
> > + */
> > +static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
> > +{
> > +     uint64_t pmcr_n, test_bit = BIT(pmc_idx);
> > +     bool set_expected = false;
> > +
> > +     if (set_op) {
> > +             write_sysreg(test_bit, pmcntenset_el0);
> > +             write_sysreg(test_bit, pmovsset_el0);
> > +
> > +             /* The bit will be set only if the counter is implemented */
> > +             pmcr_n = pmcr_extract_n(read_sysreg(pmcr_el0));
> > +             set_expected = (pmc_idx < pmcr_n) ? true : false;
> > +     } else {
> > +             write_sysreg(test_bit, pmcntenclr_el0);
> > +             write_sysreg(test_bit, pmovsclr_el0);
> > +     }
> > +     check_bitmap_pmu_regs(test_bit, set_expected);
> > +}
> > +
> > +/*
> > + * Tests for reading/writing registers for the (implemented) event counter
> > + * specified by @pmc_idx.
> > + */
> > +static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
> > +{
> > +     uint64_t write_data, read_data, read_data_prev, test_bit;
> > +
> > +     /* Disable all PMCs and reset all PMCs to zero. */
> > +     pmu_disable_reset();
> > +
> > +
> > +     /*
> > +      * Tests for reading/writing {PMCNTEN,PMOVS}{SET,CLR}_EL1.
> > +      */
> > +
> > +     test_bit = 1ul << pmc_idx;
> > +     /* Make sure that the bits in those registers are set to 0 */
> > +     test_bitmap_pmu_regs(test_bit, false);
> > +     /* Test if setting the bit in those registers works */
> > +     test_bitmap_pmu_regs(test_bit, true);
> > +     /* Test if clearing the bit in those registers works */
> > +     test_bitmap_pmu_regs(test_bit, false);
> > +
> > +
> > +     /*
> > +      * Tests for reading/writing the event type register.
> > +      */
> > +
> > +     read_data = acc->read_typer(pmc_idx);
> > +     /*
> > +      * Set the event type register to an arbitrary value just for testing
> > +      * of reading/writing the register.
> > +      * ArmARM says that for the event from 0x0000 to 0x003F,
> > +      * the value indicated in the PMEVTYPER<n>_EL0.evtCount field is
> > +      * the value written to the field even when the specified event
> > +      * is not supported.
> > +      */
> > +     write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED);
> > +     acc->write_typer(pmc_idx, write_data);
> > +     read_data = acc->read_typer(pmc_idx);
> > +     GUEST_ASSERT_4(read_data == write_data,
> > +                    pmc_idx, acc, read_data, write_data);
> > +
> > +
> > +     /*
> > +      * Tests for reading/writing the event count register.
> > +      */
> > +
> > +     read_data = acc->read_cntr(pmc_idx);
> > +
> > +     /* The count value must be 0, as it is not used after the reset */
> > +     GUEST_ASSERT_3(read_data == 0, pmc_idx, acc, read_data);
> > +
> > +     write_data = read_data + pmc_idx + 0x12345;
> > +     acc->write_cntr(pmc_idx, write_data);
> > +     read_data = acc->read_cntr(pmc_idx);
> > +     GUEST_ASSERT_4(read_data == write_data,
> > +                    pmc_idx, acc, read_data, write_data);
> > +
> > +
> > +     /* The following test requires the INST_RETIRED event support. */
> > +     if (!pmu_event_is_supported(ARMV8_PMUV3_PERFCTR_INST_RETIRED))
> > +             return;
> > +
> > +     pmu_enable();
> > +     acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
> > +
> > +     /*
> > +      * Make sure that the counter doesn't count the INST_RETIRED
> > +      * event when disabled, and the counter counts the event when enabled.
> > +      */
> > +     disable_counter(pmc_idx);
> > +     read_data_prev = acc->read_cntr(pmc_idx);
> > +     read_data = acc->read_cntr(pmc_idx);
> > +     GUEST_ASSERT_4(read_data == read_data_prev,
> > +                    pmc_idx, acc, read_data, read_data_prev);
> > +
> > +     enable_counter(pmc_idx);
> > +     read_data = acc->read_cntr(pmc_idx);
> > +
> > +     /*
> > +      * The counter should be increased by at least 1, as there is at
> > +      * least one instruction between enabling the counter and reading
> > +      * the counter (the test assumes that all event counters are not
> > +      * being used by the host's higher priority events).
> > +      */
> > +     GUEST_ASSERT_4(read_data > read_data_prev,
> > +                    pmc_idx, acc, read_data, read_data_prev);
> > +}
> > +
> >   /*
> >    * The guest is configured with PMUv3 with @expected_pmcr_n number of
> >    * event counters.
> > - * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
> > + * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
> > + * if reading/writing PMU registers for implemented counters can work
> > + * as expected.
> >    */
> >   static void guest_code(uint64_t expected_pmcr_n)
> >   {
> >       uint64_t pmcr, pmcr_n;
> > +     int i, pmc;
> >
> >       GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
> >
> > @@ -40,6 +372,15 @@ static void guest_code(uint64_t expected_pmcr_n)
> >       /* Make sure that PMCR_EL0.N indicates the value userspace set */
> >       GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
> >
> > +     /*
> > +      * Tests for reading/writing PMU registers for implemented counters.
> > +      * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
> > +      */
> > +     for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
> > +             for (pmc = 0; pmc < pmcr_n; pmc++)
> > +                     test_access_pmc_regs(&pmc_accessors[i], pmc);
> > +     }
> > +
> >       GUEST_DONE();
> >   }
> >
> > @@ -96,7 +437,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
> >       vcpu_run(vcpu);
> >       switch (get_ucall(vcpu, &uc)) {
> >       case UCALL_ABORT:
> > -             REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
> > +             REPORT_GUEST_ASSERT_4(uc, "values:%#lx %#lx %#lx %#lx");
> >               break;
> >       case UCALL_DONE:
> >               break;
>
> --
> Regards,
> Shaoqin
>

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v2 7/8] KVM: selftests: aarch64: vPMU register test for implemented counters
@ 2023-01-19  3:02       ` Reiji Watanabe
  0 siblings, 0 replies; 46+ messages in thread
From: Reiji Watanabe @ 2023-01-19  3:02 UTC (permalink / raw)
  To: Shaoqin Huang
  Cc: Marc Zyngier, kvmarm, kvm, linux-arm-kernel, James Morse,
	Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
	Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata

Hi Shaoqin,

> I found some places that should be PMEVTYPER but are wrongly written
> as PMEVTTYPE. Should we fix them?

Thank you for catching them!
I will review the patch, and fix them all in v3.

Thank you,
Reiji

On Tue, Jan 17, 2023 at 11:47 PM Shaoqin Huang <shahuang@redhat.com> wrote:
>
> Hi Reiji,
>
>
> I found some places that should be PMEVTYPER but are wrongly written
> as PMEVTTYPE. Should we fix them?
>
>
> I list some of them below, but have not covered every one.
>
> On 1/17/23 09:35, Reiji Watanabe wrote:
> > Add a new test case to the vpmu_counter_access test to check if PMU
> > registers or their bits for implemented counters on the vCPU are
> > readable/writable as expected, and can be programmed to count events.
> >
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > ---
> >   .../kvm/aarch64/vpmu_counter_access.c         | 347 +++++++++++++++++-
> >   1 file changed, 344 insertions(+), 3 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > index 704a2500b7e1..54b69c76c824 100644
> > --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > @@ -5,7 +5,8 @@
> >    * Copyright (c) 2022 Google LLC.
> >    *
> >    * This test checks if the guest can see the same number of the PMU event
> > - * counters (PMCR_EL1.N) that userspace sets.
> > + * counters (PMCR_EL1.N) that userspace sets, and if the guest can access
> > + * those counters.
> >    * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
> >    */
> >   #include <kvm_util.h>
> > @@ -18,19 +19,350 @@
> >   /* The max number of the PMU event counters (excluding the cycle counter) */
> >   #define ARMV8_PMU_MAX_GENERAL_COUNTERS      (ARMV8_PMU_MAX_COUNTERS - 1)
> >
> > +/*
> > + * The macros and functions below for reading/writing PMEVT{CNTR,TYPER}<n>_EL0
> Here should be PMEV{CNTR, TYPER}.
> > + * were basically copied from arch/arm64/kernel/perf_event.c.
> > + */
> > +#define PMEVN_CASE(n, case_macro) \
> > +     case n: case_macro(n); break
> > +
> > +#define PMEVN_SWITCH(x, case_macro)                          \
> > +     do {                                                    \
> > +             switch (x) {                                    \
> > +             PMEVN_CASE(0,  case_macro);                     \
> > +             PMEVN_CASE(1,  case_macro);                     \
> > +             PMEVN_CASE(2,  case_macro);                     \
> > +             PMEVN_CASE(3,  case_macro);                     \
> > +             PMEVN_CASE(4,  case_macro);                     \
> > +             PMEVN_CASE(5,  case_macro);                     \
> > +             PMEVN_CASE(6,  case_macro);                     \
> > +             PMEVN_CASE(7,  case_macro);                     \
> > +             PMEVN_CASE(8,  case_macro);                     \
> > +             PMEVN_CASE(9,  case_macro);                     \
> > +             PMEVN_CASE(10, case_macro);                     \
> > +             PMEVN_CASE(11, case_macro);                     \
> > +             PMEVN_CASE(12, case_macro);                     \
> > +             PMEVN_CASE(13, case_macro);                     \
> > +             PMEVN_CASE(14, case_macro);                     \
> > +             PMEVN_CASE(15, case_macro);                     \
> > +             PMEVN_CASE(16, case_macro);                     \
> > +             PMEVN_CASE(17, case_macro);                     \
> > +             PMEVN_CASE(18, case_macro);                     \
> > +             PMEVN_CASE(19, case_macro);                     \
> > +             PMEVN_CASE(20, case_macro);                     \
> > +             PMEVN_CASE(21, case_macro);                     \
> > +             PMEVN_CASE(22, case_macro);                     \
> > +             PMEVN_CASE(23, case_macro);                     \
> > +             PMEVN_CASE(24, case_macro);                     \
> > +             PMEVN_CASE(25, case_macro);                     \
> > +             PMEVN_CASE(26, case_macro);                     \
> > +             PMEVN_CASE(27, case_macro);                     \
> > +             PMEVN_CASE(28, case_macro);                     \
> > +             PMEVN_CASE(29, case_macro);                     \
> > +             PMEVN_CASE(30, case_macro);                     \
> > +             default:                                        \
> > +                     GUEST_ASSERT_1(0, x);                   \
> > +             }                                               \
> > +     } while (0)
> > +
> > +#define RETURN_READ_PMEVCNTRN(n) \
> > +     return read_sysreg(pmevcntr##n##_el0)
> > +static unsigned long read_pmevcntrn(int n)
> > +{
> > +     PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
> > +     return 0;
> > +}
> > +
> > +#define WRITE_PMEVCNTRN(n) \
> > +     write_sysreg(val, pmevcntr##n##_el0)
> > +static void write_pmevcntrn(int n, unsigned long val)
> > +{
> > +     PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
> > +     isb();
> > +}
> > +
> > +#define READ_PMEVTYPERN(n) \
> > +     return read_sysreg(pmevtyper##n##_el0)
> > +static unsigned long read_pmevtypern(int n)
> > +{
> > +     PMEVN_SWITCH(n, READ_PMEVTYPERN);
> > +     return 0;
> > +}
> > +
> > +#define WRITE_PMEVTYPERN(n) \
> > +     write_sysreg(val, pmevtyper##n##_el0)
> > +static void write_pmevtypern(int n, unsigned long val)
> > +{
> > +     PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
> > +     isb();
> > +}
> > +
> > +/* Read PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
> > +static inline unsigned long read_sel_evcntr(int sel)
> > +{
> > +     write_sysreg(sel, pmselr_el0);
> > +     isb();
> > +     return read_sysreg(pmxevcntr_el0);
> > +}
> > +
> > +/* Write PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
> > +static inline void write_sel_evcntr(int sel, unsigned long val)
> > +{
> > +     write_sysreg(sel, pmselr_el0);
> > +     isb();
> > +     write_sysreg(val, pmxevcntr_el0);
> > +     isb();
> > +}
> > +
> > +/* Read PMEVTTYPER<n>_EL0 through PMXEVTYPER_EL0 */
> Here should be PMEVTYPER.
> > +static inline unsigned long read_sel_evtyper(int sel)
> > +{
> > +     write_sysreg(sel, pmselr_el0);
> > +     isb();
> > +     return read_sysreg(pmxevtyper_el0);
> > +}
> > +
> > +/* Write PMEVTTYPER<n>_EL0 through PMXEVTYPER_EL0 */
> > +static inline void write_sel_evtyper(int sel, unsigned long val)
> > +{
> > +     write_sysreg(sel, pmselr_el0);
> > +     isb();
> > +     write_sysreg(val, pmxevtyper_el0);
> > +     isb();
> > +}
> > +
> > +static inline void enable_counter(int idx)
> > +{
> > +     uint64_t v = read_sysreg(pmcntenset_el0);
> > +
> > +     write_sysreg(BIT(idx) | v, pmcntenset_el0);
> > +     isb();
> > +}
> > +
> > +static inline void disable_counter(int idx)
> > +{
> > +     /* The CLR register is write-1-to-clear; clear only this counter */
> > +     write_sysreg(BIT(idx), pmcntenclr_el0);
> > +     isb();
> > +}
> > +
> > +/*
> > + * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}<n>_EL0
> > + * accessors that test cases will use. Each of the accessors will
> > + * either directly reads/writes PMEVT{CNTR,TYPER}<n>_EL0
> > + * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through
> > + * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
> > + *
> > + * This is used to test that combinations of those accessors provide
> > + * consistent behavior.
> > + */
> > +struct pmc_accessor {
> > +     /* A function to be used to read PMEVTCNTR<n>_EL0 */
> > +     unsigned long   (*read_cntr)(int idx);
> > +     /* A function to be used to write PMEVTCNTR<n>_EL0 */
> > +     void            (*write_cntr)(int idx, unsigned long val);
> > +     /* A function to be used to read PMEVTTYPER<n>_EL0 */
> > +     unsigned long   (*read_typer)(int idx);
> > +     /* A function to be used to write PMEVTTYPER<n>_EL0 */
> > +     void            (*write_typer)(int idx, unsigned long val);
> > +};
> > +
> > +struct pmc_accessor pmc_accessors[] = {
> > +     /* test with all direct accesses */
> > +     { read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern },
> > +     /* test with all indirect accesses */
> > +     { read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper },
> > +     /* read with direct accesses, and write with indirect accesses */
> > +     { read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper },
> > +     /* read with indirect accesses, and write with direct accesses */
> > +     { read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
> > +};
> > +
> > +static void pmu_disable_reset(void)
> > +{
> > +     uint64_t pmcr = read_sysreg(pmcr_el0);
> > +
> > +     /* Reset all counters, disabling them */
> > +     pmcr &= ~ARMV8_PMU_PMCR_E;
> > +     write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
> > +     isb();
> > +}
> > +
> > +static void pmu_enable(void)
> > +{
> > +     uint64_t pmcr = read_sysreg(pmcr_el0);
> > +
> > +     /* Reset all counters and enable the PMU */
> > +     pmcr |= ARMV8_PMU_PMCR_E;
> > +     write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
> > +     isb();
> > +}
> > +
> > +static bool pmu_event_is_supported(uint64_t event)
> > +{
> > +     GUEST_ASSERT_1(event < 64, event);
> > +     return (read_sysreg(pmceid0_el0) & BIT(event));
> > +}
> > +
> >   static uint64_t pmcr_extract_n(uint64_t pmcr_val)
> >   {
> >       return (pmcr_val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
> >   }
> >
> > +#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected)         \
> > +{                                                                    \
> > +     uint64_t _tval = read_sysreg(regname);                          \
> > +                                                                     \
> > +     if (set_expected)                                               \
> > +             GUEST_ASSERT_3((_tval & mask), _tval, mask, set_expected); \
> > +     else                                                               \
> > +             GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
> > +}
> > +
> > +/*
> > + * Check if @mask bits in {PMCNTEN,PMOVS}{SET,CLR} registers
> > + * are set or cleared as specified in @set_expected.
> > + */
> > +static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected)
> > +{
> > +     GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected);
> > +     GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected);
> > +     GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected);
> > +     GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected);
> > +}
> > +
> > +/*
> > + * Check if the bit in {PMCNTEN,PMOVS}{SET,CLR} registers corresponding
> > + * to the specified counter (@pmc_idx) can be read/written as expected.
> > + * When @set_op is true, it tries to set the bit for the counter in
> > + * those registers by writing the SET registers (the bit won't be set
> > + * if the counter is not implemented though).
> > + * Otherwise, it tries to clear the bits in the registers by writing
> > + * the CLR registers.
> > + * Then, it checks if the values indicated in the registers are as expected.
> > + */
> > +static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
> > +{
> > +     uint64_t pmcr_n, test_bit = BIT(pmc_idx);
> > +     bool set_expected = false;
> > +
> > +     if (set_op) {
> > +             write_sysreg(test_bit, pmcntenset_el0);
> > +             write_sysreg(test_bit, pmovsset_el0);
> > +
> > +             /* The bit will be set only if the counter is implemented */
> > +             pmcr_n = pmcr_extract_n(read_sysreg(pmcr_el0));
> > +             set_expected = (pmc_idx < pmcr_n);
> > +     } else {
> > +             write_sysreg(test_bit, pmcntenclr_el0);
> > +             write_sysreg(test_bit, pmovsclr_el0);
> > +     }
> > +     check_bitmap_pmu_regs(test_bit, set_expected);
> > +}
> > +
> > +/*
> > + * Tests for reading/writing registers for the (implemented) event counter
> > + * specified by @pmc_idx.
> > + */
> > +static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
> > +{
> > +     uint64_t write_data, read_data, read_data_prev;
> > +
> > +     /* Disable all PMCs and reset all PMCs to zero. */
> > +     pmu_disable_reset();
> > +
> > +
> > +     /*
> > +      * Tests for reading/writing {PMCNTEN,PMOVS}{SET,CLR}_EL1.
> > +      */
> > +
> > +     /* Make sure that the bit in those registers is set to 0 */
> > +     test_bitmap_pmu_regs(pmc_idx, false);
> > +     /* Test if setting the bit in those registers works */
> > +     test_bitmap_pmu_regs(pmc_idx, true);
> > +     /* Test if clearing the bit in those registers works */
> > +     test_bitmap_pmu_regs(pmc_idx, false);
> > +
> > +
> > +     /*
> > +      * Tests for reading/writing the event type register.
> > +      */
> > +
> > +     read_data = acc->read_typer(pmc_idx);
> > +     /*
> > +      * Set the event type register to an arbitrary value just to test
> > +      * reading/writing the register.
> > +      * The Arm ARM says that for events 0x0000 to 0x003F, the value
> > +      * read back from the PMEVTYPER<n>_EL0.evtCount field is the value
> > +      * that was written to it, even when the specified event is not
> > +      * supported.
> > +      */
> > +     write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED);
> > +     acc->write_typer(pmc_idx, write_data);
> > +     read_data = acc->read_typer(pmc_idx);
> > +     GUEST_ASSERT_4(read_data == write_data,
> > +                    pmc_idx, acc, read_data, write_data);
> > +
> > +
> > +     /*
> > +      * Tests for reading/writing the event count register.
> > +      */
> > +
> > +     read_data = acc->read_cntr(pmc_idx);
> > +
> > +     /* The count value must be 0, as the counter has not been used since reset */
> > +     GUEST_ASSERT_3(read_data == 0, pmc_idx, acc, read_data);
> > +
> > +     write_data = read_data + pmc_idx + 0x12345;
> > +     acc->write_cntr(pmc_idx, write_data);
> > +     read_data = acc->read_cntr(pmc_idx);
> > +     GUEST_ASSERT_4(read_data == write_data,
> > +                    pmc_idx, acc, read_data, write_data);
> > +
> > +
> > +     /* The following test requires the INST_RETIRED event support. */
> > +     if (!pmu_event_is_supported(ARMV8_PMUV3_PERFCTR_INST_RETIRED))
> > +             return;
> > +
> > +     pmu_enable();
> > +     acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
> > +
> > +     /*
> > +      * Make sure that the counter doesn't count the INST_RETIRED
> > +      * event when disabled, and the counter counts the event when enabled.
> > +      */
> > +     disable_counter(pmc_idx);
> > +     read_data_prev = acc->read_cntr(pmc_idx);
> > +     read_data = acc->read_cntr(pmc_idx);
> > +     GUEST_ASSERT_4(read_data == read_data_prev,
> > +                    pmc_idx, acc, read_data, read_data_prev);
> > +
> > +     enable_counter(pmc_idx);
> > +     read_data = acc->read_cntr(pmc_idx);
> > +
> > +     /*
> > +      * The counter should have incremented by at least 1, as there is at
> > +      * least one instruction between enabling the counter and reading
> > +      * it (the test assumes that none of the event counters are taken
> > +      * over by higher-priority host events).
> > +      */
> > +     GUEST_ASSERT_4(read_data > read_data_prev,
> > +                    pmc_idx, acc, read_data, read_data_prev);
> > +}
> > +
> >   /*
> >    * The guest is configured with PMUv3 with @expected_pmcr_n number of
> >    * event counters.
> > - * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
> > + * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
> > + * if reading/writing PMU registers for implemented counters can work
> > + * as expected.
> >    */
> >   static void guest_code(uint64_t expected_pmcr_n)
> >   {
> >       uint64_t pmcr, pmcr_n;
> > +     int i, pmc;
> >
> >       GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
> >
> > @@ -40,6 +372,15 @@ static void guest_code(uint64_t expected_pmcr_n)
> >       /* Make sure that PMCR_EL0.N indicates the value userspace set */
> >       GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
> >
> > +     /*
> > +      * Tests for reading/writing PMU registers for implemented counters.
> > +      * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
> > +      */
> > +     for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
> > +             for (pmc = 0; pmc < pmcr_n; pmc++)
> > +                     test_access_pmc_regs(&pmc_accessors[i], pmc);
> > +     }
> > +
> >       GUEST_DONE();
> >   }
> >
> > @@ -96,7 +437,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
> >       vcpu_run(vcpu);
> >       switch (get_ucall(vcpu, &uc)) {
> >       case UCALL_ABORT:
> > -             REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
> > +             REPORT_GUEST_ASSERT_4(uc, "values:%#lx %#lx %#lx %#lx");
> >               break;
> >       case UCALL_DONE:
> >               break;
>
> --
> Regards,
> Shaoqin
>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v2 8/8] KVM: selftests: aarch64: vPMU register test for unimplemented counters
  2023-01-18  7:49     ` Shaoqin Huang
@ 2023-01-19  3:04       ` Reiji Watanabe
  -1 siblings, 0 replies; 46+ messages in thread
From: Reiji Watanabe @ 2023-01-19  3:04 UTC (permalink / raw)
  To: Shaoqin Huang
  Cc: Marc Zyngier, kvmarm, kvm, linux-arm-kernel, James Morse,
	Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
	Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata

Hi Shaoqin,

On Tue, Jan 17, 2023 at 11:50 PM Shaoqin Huang <shahuang@redhat.com> wrote:
>
> Hi Reiji,
>
> On 1/17/23 09:35, Reiji Watanabe wrote:
> > Add a new test case to the vpmu_counter_access test to check
> > if PMU registers or their bits for unimplemented counters are not
> > accessible or are RAZ, as expected.
> >
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > ---
> >   .../kvm/aarch64/vpmu_counter_access.c         | 103 +++++++++++++++++-
> >   .../selftests/kvm/include/aarch64/processor.h |   1 +
> >   2 files changed, 98 insertions(+), 6 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > index 54b69c76c824..a7e34d63808b 100644
> > --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > @@ -5,8 +5,8 @@
> >    * Copyright (c) 2022 Google LLC.
> >    *
> >    * This test checks if the guest can see the same number of the PMU event
> > - * counters (PMCR_EL1.N) that userspace sets, and if the guest can access
> > - * those counters.
> > + * counters (PMCR_EL1.N) that userspace sets, if the guest can access
> > + * those counters, and if the guest cannot access any other counters.
> >    * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
> >    */
> >   #include <kvm_util.h>
> > @@ -179,6 +179,51 @@ struct pmc_accessor pmc_accessors[] = {
> >       { read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
> >   };
> >
> > +#define INVALID_EC   (-1ul)
> > +uint64_t expected_ec = INVALID_EC;
> > +uint64_t op_end_addr;
> > +
> > +static void guest_sync_handler(struct ex_regs *regs)
> > +{
> > +     uint64_t esr, ec;
> > +
> > +     esr = read_sysreg(esr_el1);
> > +     ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;
> > +     GUEST_ASSERT_4(op_end_addr && (expected_ec == ec),
> > +                    regs->pc, esr, ec, expected_ec);
> > +
> > +     /* Will go back to op_end_addr after the handler exits */
> > +     regs->pc = op_end_addr;
> > +
> > +     /*
> > +      * Clear op_end_addr, and set expected_ec to INVALID_EC,
> > +      * as a sign that an exception has occurred.
> > +      */
> > +     op_end_addr = 0;
> > +     expected_ec = INVALID_EC;
> > +}
> > +
> > +/*
> > + * Run the given operation that should trigger an exception with the
> > + * given exception class. The exception handler (guest_sync_handler)
> > + * will reset op_end_addr to 0 and expected_ec to INVALID_EC, and
> > + * will return to the instruction at @done_label.
> > + * The @done_label must be a unique label in this test program.
> > + */
> > +#define TEST_EXCEPTION(ec, ops, done_label)          \
> > +{                                                    \
> > +     extern int done_label;                          \
> > +                                                     \
> > +     WRITE_ONCE(op_end_addr, (uint64_t)&done_label); \
> > +     GUEST_ASSERT(ec != INVALID_EC);                 \
> > +     WRITE_ONCE(expected_ec, ec);                    \
> > +     dsb(ish);                                       \
> > +     ops;                                            \
> > +     asm volatile(#done_label":");                   \
> > +     GUEST_ASSERT(!op_end_addr);                     \
> > +     GUEST_ASSERT(expected_ec == INVALID_EC);        \
> > +}
> > +
> >   static void pmu_disable_reset(void)
> >   {
> >       uint64_t pmcr = read_sysreg(pmcr_el0);
> > @@ -352,16 +397,38 @@ static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
> >                      pmc_idx, acc, read_data, read_data_prev);
> >   }
> >
> > +/*
> > + * Tests for reading/writing registers for the unimplemented event counter
> > + * specified by @pmc_idx (>= PMCR_EL1.N).
> > + */
> > +static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
> > +{
> > +     /*
> > +      * Reading/writing the event count/type registers should cause
> > +      * an UNDEFINED exception.
> > +      */
> > +     TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_cntr(pmc_idx), inv_rd_cntr);
> > +     TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_cntr(pmc_idx, 0), inv_wr_cntr);
> > +     TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_typer(pmc_idx), inv_rd_typer);
> > +     TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_typer(pmc_idx, 0), inv_wr_typer);
> > +     /*
> > +      * The bit corresponding to the (unimplemented) counter in
> > +      * {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers should be RAZ.
> > +      */
> > +     test_bitmap_pmu_regs(pmc_idx, 1);
> > +     test_bitmap_pmu_regs(pmc_idx, 0);
> > +}
> > +
> >   /*
> >    * The guest is configured with PMUv3 with @expected_pmcr_n number of
> >    * event counters.
> >    * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
> > - * if reading/writing PMU registers for implemented counters can work
> > - * as expected.
> > + * if reading/writing PMU registers for implemented or unimplemented
> > + * counters can work as expected.
> >    */
> >   static void guest_code(uint64_t expected_pmcr_n)
> >   {
> > -     uint64_t pmcr, pmcr_n;
> > +     uint64_t pmcr, pmcr_n, unimp_mask;
> >       int i, pmc;
> >
> >       GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
> > @@ -372,6 +439,14 @@ static void guest_code(uint64_t expected_pmcr_n)
> >       /* Make sure that PMCR_EL0.N indicates the value userspace set */
> >       GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
> >
> > +     /*
> > +      * Make sure that (RAZ) bits corresponding to unimplemented event
> > +      * counters in {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers are reset to zero.
> > +      * (NOTE: bits for implemented event counters are reset to UNKNOWN)
> > +      */
> > +     unimp_mask = GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n);
> > +     check_bitmap_pmu_regs(unimp_mask, false);
> > +
> >       /*
> >        * Tests for reading/writing PMU registers for implemented counters.
> >        * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
> > @@ -381,6 +456,14 @@ static void guest_code(uint64_t expected_pmcr_n)
> >                       test_access_pmc_regs(&pmc_accessors[i], pmc);
> >       }
> >
> > +     /*
> > +      * Tests for reading/writing PMU registers for unimplemented counters.
> > +      * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
> Here should be PMEV{CNTR, TYPER}<n>.

Thank you for catching this. I will fix this.

Thank you,
Reiji

^ permalink raw reply	[flat|nested] 46+ messages in thread

> > + * if reading/writing PMU registers for implemented or unimplemented
> > + * counters can work as expected.
> >    */
> >   static void guest_code(uint64_t expected_pmcr_n)
> >   {
> > -     uint64_t pmcr, pmcr_n;
> > +     uint64_t pmcr, pmcr_n, unimp_mask;
> >       int i, pmc;
> >
> >       GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
> > @@ -372,6 +439,14 @@ static void guest_code(uint64_t expected_pmcr_n)
> >       /* Make sure that PMCR_EL0.N indicates the value userspace set */
> >       GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
> >
> > +     /*
> > +      * Make sure that (RAZ) bits corresponding to unimplemented event
> > +      * counters in {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers are reset to zero.
> > +      * (NOTE: bits for implemented event counters are reset to UNKNOWN)
> > +      */
> > +     unimp_mask = GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n);
> > +     check_bitmap_pmu_regs(unimp_mask, false);
> > +
> >       /*
> >        * Tests for reading/writing PMU registers for implemented counters.
> >        * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
> > @@ -381,6 +456,14 @@ static void guest_code(uint64_t expected_pmcr_n)
> >                       test_access_pmc_regs(&pmc_accessors[i], pmc);
> >       }
> >
> > +     /*
> > +      * Tests for reading/writing PMU registers for unimplemented counters.
> > +      * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
> This should be PMEV{CNTR,TYPER}<n>.

Thank you for catching this. I will fix this.

Thank you,
Reiji

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* Re: [PATCH v2 3/8] KVM: arm64: PMU: Preserve vCPU's PMCR_EL0.N value on vCPU reset
  2023-01-17  1:35   ` Reiji Watanabe
@ 2023-01-20  0:30     ` Oliver Upton
  -1 siblings, 0 replies; 46+ messages in thread
From: Oliver Upton @ 2023-01-20  0:30 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: Marc Zyngier, kvmarm, kvm, linux-arm-kernel, James Morse,
	Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
	Ricardo Koller, Jing Zhang, Raghavendra Rao Anata

On Mon, Jan 16, 2023 at 05:35:37PM -0800, Reiji Watanabe wrote:
> The number of PMU event counters is indicated in PMCR_EL0.N.
> For a vCPU with PMUv3 configured, its value will be the same as
> the host value by default. Userspace can set PMCR_EL0.N for the
> vCPU to a lower value than the host value using KVM_SET_ONE_REG.
> However, it is practically unsupported, as reset_pmcr() resets
> PMCR_EL0.N to the host value on vCPU reset.
> 
> Change reset_pmcr() to preserve the vCPU's PMCR_EL0.N value on
> vCPU reset so that userspace can limit the number of the PMU
> event counters on the vCPU.
> 
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> ---
>  arch/arm64/kvm/pmu-emul.c | 6 ++++++
>  arch/arm64/kvm/sys_regs.c | 4 +++-
>  2 files changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index 24908400e190..937a272b00a5 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -213,6 +213,12 @@ void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu)
>  
>  	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
>  		pmu->pmc[i].idx = i;
> +
> +	/*
> +	 * Initialize PMCR_EL0 for the vCPU with the host value so that
> +	 * the value is available at the very first vCPU reset.
> +	 */
> +	__vcpu_sys_reg(vcpu, PMCR_EL0) = read_sysreg(pmcr_el0);

I think we need to derive a sanitised value for PMCR_EL0.N, as I believe
nothing in the architecture prevents implementers from gluing together
cores with varying numbers of PMCs. We probably haven't noticed it yet
since it would appear all Arm designs have had 6 PMCs.

>  }
>  
>  /**
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 4959658b502c..67c1bd39b478 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -637,8 +637,10 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>  	if (!kvm_arm_support_pmu_v3())
>  		return;
>  
> +	/* PMCR_EL0 for the vCPU is set to the host value at vCPU creation. */
> +

nit: I think we can do without the floating comment here.

--
Thanks,
Oliver

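Oliver's point above — that PMCR_EL0.N may need sanitising because the architecture allows cores with different PMC counts — can be sketched in plain C. This is an illustrative user-space model, not KVM code; the helper names and the take-the-minimum policy are assumptions for the sketch:

```c
#include <stdint.h>

/* PMCR_EL0.N lives in bits [15:11]; values mirror the kernel's
 * ARMV8_PMU_PMCR_N_SHIFT / ARMV8_PMU_PMCR_N_MASK definitions. */
#define PMCR_N_SHIFT 11
#define PMCR_N_MASK  0x1fULL

/* Extract the N field from a raw PMCR_EL0 value. */
static uint64_t pmcr_n(uint64_t pmcr)
{
	return (pmcr >> PMCR_N_SHIFT) & PMCR_N_MASK;
}

/*
 * Hypothetical sanitisation: take the smallest N observed across
 * all CPUs, so a guest never relies on counters that some core
 * (including a late-onlined one) might not implement.
 */
static uint64_t sanitised_pmcr_n(const uint64_t *per_cpu_pmcr, int ncpus)
{
	uint64_t n = PMCR_N_MASK;
	int i;

	for (i = 0; i < ncpus; i++) {
		if (pmcr_n(per_cpu_pmcr[i]) < n)
			n = pmcr_n(per_cpu_pmcr[i]);
	}
	return n;
}
```

Taking the minimum is the conservative choice, in the spirit of how other ID-register fields are sanitised to the safest value across CPUs; as Marc notes below, a system with differing counter counts likely has different PMUs altogether, which is a separate problem.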

* Re: [PATCH v2 3/8] KVM: arm64: PMU: Preserve vCPU's PMCR_EL0.N value on vCPU reset
  2023-01-20  0:30     ` Oliver Upton
@ 2023-01-20 12:12       ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2023-01-20 12:12 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Reiji Watanabe, kvmarm, kvm, linux-arm-kernel, James Morse,
	Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
	Ricardo Koller, Jing Zhang, Raghavendra Rao Anata

On Fri, 20 Jan 2023 00:30:33 +0000,
Oliver Upton <oliver.upton@linux.dev> wrote:
> 
> On Mon, Jan 16, 2023 at 05:35:37PM -0800, Reiji Watanabe wrote:
> > The number of PMU event counters is indicated in PMCR_EL0.N.
> > For a vCPU with PMUv3 configured, its value will be the same as
> > the host value by default. Userspace can set PMCR_EL0.N for the
> > vCPU to a lower value than the host value using KVM_SET_ONE_REG.
> > However, it is practically unsupported, as reset_pmcr() resets
> > PMCR_EL0.N to the host value on vCPU reset.
> > 
> > Change reset_pmcr() to preserve the vCPU's PMCR_EL0.N value on
> > vCPU reset so that userspace can limit the number of the PMU
> > event counters on the vCPU.
> > 
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > ---
> >  arch/arm64/kvm/pmu-emul.c | 6 ++++++
> >  arch/arm64/kvm/sys_regs.c | 4 +++-
> >  2 files changed, 9 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> > index 24908400e190..937a272b00a5 100644
> > --- a/arch/arm64/kvm/pmu-emul.c
> > +++ b/arch/arm64/kvm/pmu-emul.c
> > @@ -213,6 +213,12 @@ void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu)
> >  
> >  	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
> >  		pmu->pmc[i].idx = i;
> > +
> > +	/*
> > +	 * Initialize PMCR_EL0 for the vCPU with the host value so that
> > +	 * the value is available at the very first vCPU reset.
> > +	 */
> > +	__vcpu_sys_reg(vcpu, PMCR_EL0) = read_sysreg(pmcr_el0);
> 
> I think we need to derive a sanitised value for PMCR_EL0.N, as I believe
> nothing in the architecture prevents implementers from gluing together
> cores with varying numbers of PMCs. We probably haven't noticed it yet
> since it would appear all Arm designs have had 6 PMCs.

This brings back the question of late onlining. How do you cope
with the onlining of such a CPU that has a smaller set of counters
than its online counterparts? This is at odds with the way the PMU
code works.

If you have a different set of counters, you are likely to have a
different PMU altogether:

[    1.192606] hw perfevents: enabled with armv8_cortex_a57 PMU driver, 7 counters available
[    1.201254] hw perfevents: enabled with armv8_cortex_a53 PMU driver, 7 counters available

This isn't a broken system, but it has two sets of cores which are
massively different, and two PMUs.

This really should tie back to the PMU type we're counting on, and to
the set of CPUs that implements it. We already have some
infrastructure to check for the affinity of the PMU vs the CPU we're
running on, and this is already visible to userspace.

Can't we just leave this responsibility to userspace?

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH v2 1/8] KVM: arm64: PMU: Have reset_pmu_reg() to clear a register
  2023-01-17  1:35   ` Reiji Watanabe
@ 2023-01-20 14:04     ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2023-01-20 14:04 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Zenghui Yu, Suzuki K Poulose, Paolo Bonzini, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata

On Tue, 17 Jan 2023 01:35:35 +0000,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> On vCPU reset, PMCNTEN{SET,CLR}_EL0, PMINTEN{SET,CLR}_EL1, and
> PMOVS{SET,CLR}_EL1 for a vCPU are reset by reset_pmu_reg().
> This function clears RAZ bits of those registers corresponding
> to unimplemented event counters on the vCPU, and sets bits
> corresponding to implemented event counters to a predefined
> pseudo UNKNOWN value (some bits are set to 1).
> 
> The function identifies (un)implemented event counters on the
> vCPU based on the PMCR_EL1.N value on the host. Using the host
> value for this would be problematic when KVM supports letting
> userspace set PMCR_EL1.N to a value different from the host value
> (some of the RAZ bits of those registers could end up being set to 1).
> 
> Fix reset_pmu_reg() to clear the registers so that it can ensure
> that all the RAZ bits are cleared even when the PMCR_EL1.N value
> for the vCPU is different from the host value.
> 
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> ---
>  arch/arm64/kvm/sys_regs.c | 10 +---------
>  1 file changed, 1 insertion(+), 9 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index c6cbfe6b854b..ec4bdaf71a15 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -604,19 +604,11 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
>  
>  static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>  {
> -	u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
> -
>  	/* No PMU available, any PMU reg may UNDEF... */
>  	if (!kvm_arm_support_pmu_v3())
>  		return;

Is this still true? We remove the PMCR_EL0 access just below.

>  
> -	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
> -	n &= ARMV8_PMU_PMCR_N_MASK;
> -	if (n)
> -		mask |= GENMASK(n - 1, 0);
> -
> -	reset_unknown(vcpu, r);
> -	__vcpu_sys_reg(vcpu, r->reg) &= mask;
> +	__vcpu_sys_reg(vcpu, r->reg) = 0;
>  }

At the end of the day, this function has no dependency on the host at
all, and only writes 0 to the per-vcpu register.

So why not get rid of it altogether and have:

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c6cbfe6b854b..1d1514b89d75 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -976,7 +976,7 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	  trap_wcr, reset_wcr, 0, 0,  get_wcr, set_wcr }
 
 #define PMU_SYS_REG(r)						\
-	SYS_DESC(r), .reset = reset_pmu_reg, .visibility = pmu_visibility
+	SYS_DESC(r), .visibility = pmu_visibility
 
 /* Macro to expand the PMEVCNTRn_EL0 register */
 #define PMU_PMEVCNTR_EL0(n)						\

which would fall back to the specified reset value (zero by default)?

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH v2 1/8] KVM: arm64: PMU: Have reset_pmu_reg() to clear a register
  2023-01-20 14:04     ` Marc Zyngier
@ 2023-01-20 14:11       ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2023-01-20 14:11 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Zenghui Yu, Suzuki K Poulose, Paolo Bonzini, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata

On Fri, 20 Jan 2023 14:04:12 +0000,
Marc Zyngier <maz@kernel.org> wrote:
> 
> On Tue, 17 Jan 2023 01:35:35 +0000,
> Reiji Watanabe <reijiw@google.com> wrote:
> > 
> > On vCPU reset, PMCNTEN{SET,CLR}_EL0, PMINTEN{SET,CLR}_EL1, and
> > PMOVS{SET,CLR}_EL1 for a vCPU are reset by reset_pmu_reg().
> > This function clears RAZ bits of those registers corresponding
> > to unimplemented event counters on the vCPU, and sets bits
> > corresponding to implemented event counters to a predefined
> > pseudo UNKNOWN value (some bits are set to 1).
> > 
> > The function identifies (un)implemented event counters on the
> > vCPU based on the PMCR_EL1.N value on the host. Using the host
> > value for this would be problematic when KVM supports letting
> > userspace set PMCR_EL1.N to a value different from the host value
> > (some of the RAZ bits of those registers could end up being set to 1).
> > 
> > Fix reset_pmu_reg() to clear the registers so that it can ensure
> > that all the RAZ bits are cleared even when the PMCR_EL1.N value
> > for the vCPU is different from the host value.
> > 
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > ---
> >  arch/arm64/kvm/sys_regs.c | 10 +---------
> >  1 file changed, 1 insertion(+), 9 deletions(-)
> > 
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index c6cbfe6b854b..ec4bdaf71a15 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -604,19 +604,11 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
> >  
> >  static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> >  {
> > -	u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
> > -
> >  	/* No PMU available, any PMU reg may UNDEF... */
> >  	if (!kvm_arm_support_pmu_v3())
> >  		return;
> 
> Is this still true? We remove the PMCR_EL0 access just below.
> 
> >  
> > -	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
> > -	n &= ARMV8_PMU_PMCR_N_MASK;
> > -	if (n)
> > -		mask |= GENMASK(n - 1, 0);
> > -
> > -	reset_unknown(vcpu, r);
> > -	__vcpu_sys_reg(vcpu, r->reg) &= mask;
> > +	__vcpu_sys_reg(vcpu, r->reg) = 0;
> >  }
> 
> At the end of the day, this function has no dependency on the host at
> all, and only writes 0 to the per-vcpu register.
> 
> So why not get rid of it altogether and have:
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index c6cbfe6b854b..1d1514b89d75 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -976,7 +976,7 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	  trap_wcr, reset_wcr, 0, 0,  get_wcr, set_wcr }
>  
>  #define PMU_SYS_REG(r)						\
> -	SYS_DESC(r), .reset = reset_pmu_reg, .visibility = pmu_visibility
> +	SYS_DESC(r), .visibility = pmu_visibility
>  
>  /* Macro to expand the PMEVCNTRn_EL0 register */
>  #define PMU_PMEVCNTR_EL0(n)						\
> 
> which would fall-back the specified reset value (zero by default)?

Scratch that, we need:

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c6cbfe6b854b..6f6a928c92ec 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -976,7 +976,7 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	  trap_wcr, reset_wcr, 0, 0,  get_wcr, set_wcr }
 
 #define PMU_SYS_REG(r)						\
-	SYS_DESC(r), .reset = reset_pmu_reg, .visibility = pmu_visibility
+	SYS_DESC(r), .reset = reset_val, .visibility = pmu_visibility
 
 /* Macro to expand the PMEVCNTRn_EL0 register */
 #define PMU_PMEVCNTR_EL0(n)						\

But otherwise, this should be enough.

	M.

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH v2 4/8] KVM: arm64: PMU: Disallow userspace to set PMCR.N greater than the host value
  2023-01-17  1:35   ` Reiji Watanabe
@ 2023-01-20 14:18     ` Marc Zyngier
  -1 siblings, 0 replies; 46+ messages in thread
From: Marc Zyngier @ 2023-01-20 14:18 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Zenghui Yu, Suzuki K Poulose, Paolo Bonzini, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata

On Tue, 17 Jan 2023 01:35:38 +0000,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> Currently, KVM allows userspace to set PMCR_EL0 to any values
> with KVM_SET_ONE_REG for a vCPU with PMUv3 configured.
> 
> Disallow userspace to set PMCR_EL0.N to a value that is greater
> than the host value (KVM_SET_ONE_REG will fail), as KVM doesn't
> support more event counters than the host HW implements.
> Although this is an ABI change, this change only affects
> userspace setting PMCR_EL0.N to a larger value than the host.
> As accesses to unadvertised event counter indices are CONSTRAINED
> UNPREDICTABLE, and PMCR_EL0.N was reset to the host value
> on every vCPU reset before this series, I can't think of any
> use case where userspace would do that.
> 
> Also, ignore writes to read-only bits that are cleared on vCPU reset,
> and RES{0,1} bits (including writable bits that KVM doesn't support
> yet), as those bits shouldn't be modified (at least with
> the current KVM).
> 
> Signed-off-by: Reiji Watanabe <reijiw@google.com>

Reviewed-by: Marc Zyngier <maz@kernel.org>

	M.

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH v2 3/8] KVM: arm64: PMU: Preserve vCPU's PMCR_EL0.N value on vCPU reset
  2023-01-20 12:12       ` Marc Zyngier
@ 2023-01-20 18:04         ` Oliver Upton
  -1 siblings, 0 replies; 46+ messages in thread
From: Oliver Upton @ 2023-01-20 18:04 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Reiji Watanabe, kvmarm, kvm, linux-arm-kernel, James Morse,
	Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
	Ricardo Koller, Jing Zhang, Raghavendra Rao Anata

Hey Marc,

On Fri, Jan 20, 2023 at 12:12:32PM +0000, Marc Zyngier wrote:
> On Fri, 20 Jan 2023 00:30:33 +0000, Oliver Upton <oliver.upton@linux.dev> wrote:
> > I think we need to derive a sanitised value for PMCR_EL0.N, as I believe
> > nothing in the architecture prevents implementers from gluing together
> > cores with varying numbers of PMCs. We probably haven't noticed it yet
> > since it would appear all Arm designs have had 6 PMCs.
> 
> This brings back the question of late onlining. How do you cope with
> the onlining of such a CPU that has a smaller set of counters
> than its online counterparts? This is at odds with the way the PMU
> code works.

You're absolutely right, any illusion we derived from the online set of
CPUs could fall apart with a late onlining of a different core.

> If you have a different set of counters, you are likely to have a
> different PMU altogether:
> 
> [    1.192606] hw perfevents: enabled with armv8_cortex_a57 PMU driver, 7 counters available
> [    1.201254] hw perfevents: enabled with armv8_cortex_a53 PMU driver, 7 counters available
> 
> This isn't a broken system, but it has two sets of cores which are
> massively different, and two PMUs.
> 
> This really should tie back to the PMU type we're counting on, and to
> the set of CPUs that implements it. We already have some
> infrastructure to check for the affinity of the PMU vs the CPU we're
> running on, and this is already visible to userspace.
> 
> Can't we just leave this responsibility to userspace?

Believe me, I'm always a fan of offloading things to userspace :)

If the VMM is privy to the details of the system it is on then the
differing PMUs can be passed through to the guest w/ pinned vCPU
threads. I just worry about the case of a naive VMM that assumes a
homogeneous system. I don't think I could entirely blame the VMM in this
case either as we've gone to lengths to sanitise the feature set
exposed to userspace.

What happens when a vCPU gets scheduled on a core where the vPMU
doesn't match? Ignoring other incongruences, it is not possible to
virtualize more counters than are supported by the vPMU of the core.

Stopping short of any major hacks in the kernel to fudge around the
problem, I believe we may need to provide better documentation of how
heterogeneous CPUs are handled in KVM and what userspace can do about
it.

--
Thanks,
Oliver


* Re: [PATCH v2 3/8] KVM: arm64: PMU: Preserve vCPU's PMCR_EL0.N value on vCPU reset
  2023-01-20 18:04         ` Oliver Upton
@ 2023-01-20 18:53           ` Reiji Watanabe
  -1 siblings, 0 replies; 46+ messages in thread
From: Reiji Watanabe @ 2023-01-20 18:53 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Marc Zyngier, kvmarm, kvm, linux-arm-kernel, James Morse,
	Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
	Ricardo Koller, Jing Zhang, Raghavendra Rao Anata

Hi Oliver, Marc,

Thank you for the review!

On Fri, Jan 20, 2023 at 10:05 AM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Hey Marc,
>
> On Fri, Jan 20, 2023 at 12:12:32PM +0000, Marc Zyngier wrote:
> > On Fri, 20 Jan 2023 00:30:33 +0000, Oliver Upton <oliver.upton@linux.dev> wrote:
> > > I think we need to derive a sanitised value for PMCR_EL0.N, as I believe
> > > nothing in the architecture prevents implementers from gluing together
> > > cores with varying numbers of PMCs. We probably haven't noticed it yet
> > > since it would appear all Arm designs have had 6 PMCs.
> >
> > This brings back the question of late onlining. How do you cope with
> > the onlining of such a CPU that has a smaller set of counters
> > than its online counterparts? This is at odds with the way the PMU
> > code works.
>
> You're absolutely right, any illusion we derived from the online set of
> CPUs could fall apart with a late onlining of a different core.
>
> > If you have a different set of counters, you are likely to have a
> > different PMU altogether:
> >
> > [    1.192606] hw perfevents: enabled with armv8_cortex_a57 PMU driver, 7 counters available
> > [    1.201254] hw perfevents: enabled with armv8_cortex_a53 PMU driver, 7 counters available
> >
> > This isn't a broken system, but it has two sets of cores which are
> > massively different, and two PMUs.
> >
> > This really should tie back to the PMU type we're counting on, and to
> > the set of CPUs that implements it. We already have some
> > infrastructure to check for the affinity of the PMU vs the CPU we're
> > running on, and this is already visible to userspace.
> >
> > Can't we just leave this responsibility to userspace?
>
> Believe me, I'm always a fan of offloading things to userspace :)
>
> If the VMM is privy to the details of the system it is on then the
> differing PMUs can be passed through to the guest w/ pinned vCPU
> threads. I just worry about the case of a naive VMM that assumes a
> homogeneous system. I don't think I could entirely blame the VMM in this
> case either as we've gone to lengths to sanitise the feature set
> exposed to userspace.
>
> What happens when a vCPU gets scheduled on a core where the vPMU
> doesn't match? Ignoring other incongruences, it is not possible to
> virtualize more counters than are supported by the vPMU of the core.

I believe KVM_RUN will fail with KVM_EXIT_FAIL_ENTRY (Please see
the code that handles ON_UNSUPPORTED_CPU).
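As an illustrative sketch of the failure mode being described (not code from this thread), a VMM's run loop can recognise the entry failure by checking the exit reason. Only the `KVM_EXIT_FAIL_ENTRY` constant mirrors the real value in `<linux/kvm.h>`; the struct here is a simplified stand-in for `struct kvm_run`:

```c
/* Sketch only: how a VMM might recognise the vCPU-on-unsupported-CPU
 * failure described above. KVM_EXIT_FAIL_ENTRY matches the value in
 * uapi/linux/kvm.h; the struct layout is a simplified stand-in for
 * struct kvm_run, not the real shared-page layout. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define KVM_EXIT_FAIL_ENTRY 9 /* from <linux/kvm.h> */

struct fake_kvm_run {
	uint32_t exit_reason;
	uint64_t hardware_entry_failure_reason; /* set by KVM on fail_entry */
};

/* Returns true when KVM could not even enter the guest, e.g. because
 * the vCPU ran on a core whose PMU does not match its vPMU. */
static bool entry_failed(const struct fake_kvm_run *run)
{
	return run->exit_reason == KVM_EXIT_FAIL_ENTRY;
}
```

A VMM seeing this exit would typically report the failure (including `hardware_entry_failure_reason`) and stop the vCPU rather than retry.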

> Stopping short of any major hacks in the kernel to fudge around the
> problem, I believe we may need to provide better documentation of how
> heterogeneous CPUs are handled in KVM and what userspace can do about
> it.

Documentation/virt/kvm/devices/vcpu.rst
for KVM_ARM_VCPU_PMU_V3_SET_PMU
has some description for the current behavior at least.
(perhaps we may need to update documents for this though)

Now I'm a bit worried about the validation code for PMCR_EL0.N
as well, as setting (restoring) PMCR_EL0 could be done on any
pCPUs (even before using KVM_ARM_VCPU_PMU_V3_SET_PMU).

What I am currently looking at is something like this:
 - Set the sanitised (min) value of PMCR_EL0.N among all PMUs
   for vCPUs by default.
 - Validate the PMCR_EL0.N value that userspace tries to set
   against the max value on the system (this is to ensure that
   restoring PMCR_EL0 for a vCPU works on any pCPUs)
 - Make KVM_RUN fail when PMCR_EL0.N for the vCPU indicates
   more counters than the PMU that is set for the vCPU.
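The three rules above can be made concrete with a toy model; all names and types here are invented for illustration (the real checks would operate on struct kvm_vcpu in arch/arm64/kvm):

```c
/* Toy model of the proposed PMCR_EL0.N policy. Names are invented;
 * this only demonstrates the policy, not the kernel implementation. */
#include <stdbool.h>
#include <stdint.h>

struct host_pmus {
	uint8_t min_n;  /* smallest PMCR_EL0.N across all host PMUs */
	uint8_t max_n;  /* largest  PMCR_EL0.N across all host PMUs */
};

/* Rule 1: the default vCPU value is the sanitised (minimum) N. */
static uint8_t default_vcpu_pmcr_n(const struct host_pmus *h)
{
	return h->min_n;
}

/* Rule 2: userspace may set any N up to the system-wide maximum, so
 * restoring PMCR_EL0 succeeds regardless of which pCPU it happens on. */
static bool set_vcpu_pmcr_n_ok(const struct host_pmus *h, uint8_t n)
{
	return n <= h->max_n;
}

/* Rule 3: KVM_RUN fails if the vCPU advertises more counters than the
 * PMU actually bound to it implements. */
static bool kvm_run_ok(uint8_t vcpu_n, uint8_t bound_pmu_n)
{
	return vcpu_n <= bound_pmu_n;
}
```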

What do you think ?

Thank you,
Reiji


* Re: [PATCH v2 1/8] KVM: arm64: PMU: Have reset_pmu_reg() to clear a register
  2023-01-20 14:11       ` Marc Zyngier
@ 2023-01-21  5:18         ` Reiji Watanabe
  -1 siblings, 0 replies; 46+ messages in thread
From: Reiji Watanabe @ 2023-01-21  5:18 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Zenghui Yu, Suzuki K Poulose, Paolo Bonzini, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata

Hi Marc,

On Fri, Jan 20, 2023 at 6:11 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Fri, 20 Jan 2023 14:04:12 +0000,
> Marc Zyngier <maz@kernel.org> wrote:
> >
> > On Tue, 17 Jan 2023 01:35:35 +0000,
> > Reiji Watanabe <reijiw@google.com> wrote:
> > >
> > > On vCPU reset, PMCNTEN{SET,CLR}_EL0, PMINTEN{SET,CLR}_EL1, and
> > > PMOVS{SET,CLR}_EL1 for a vCPU are reset by reset_pmu_reg().
> > > This function clears RAZ bits of those registers corresponding
> > > to unimplemented event counters on the vCPU, and sets bits
> > > corresponding to implemented event counters to a predefined
> > > pseudo UNKNOWN value (some bits are set to 1).
> > >
> > > The function identifies (un)implemented event counters on the
> > > vCPU based on the PMCR_EL1.N value on the host. Using the host
> > > value for this would be problematic when KVM supports letting
> > > userspace set PMCR_EL1.N to a value different from the host value
> > > (some of the RAZ bits of those registers could end up being set to 1).
> > >
> > > Fix reset_pmu_reg() to clear the registers so that it can ensure
> > > that all the RAZ bits are cleared even when the PMCR_EL1.N value
> > > for the vCPU is different from the host value.
> > >
> > > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > > ---
> > >  arch/arm64/kvm/sys_regs.c | 10 +---------
> > >  1 file changed, 1 insertion(+), 9 deletions(-)
> > >
> > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > index c6cbfe6b854b..ec4bdaf71a15 100644
> > > --- a/arch/arm64/kvm/sys_regs.c
> > > +++ b/arch/arm64/kvm/sys_regs.c
> > > @@ -604,19 +604,11 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
> > >
> > >  static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > >  {
> > > -   u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
> > > -
> > >     /* No PMU available, any PMU reg may UNDEF... */
> > >     if (!kvm_arm_support_pmu_v3())
> > >             return;
> >
> > Is this still true? We remove the PMCR_EL0 access just below.
> >
> > >
> > > -   n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
> > > -   n &= ARMV8_PMU_PMCR_N_MASK;
> > > -   if (n)
> > > -           mask |= GENMASK(n - 1, 0);
> > > -
> > > -   reset_unknown(vcpu, r);
> > > -   __vcpu_sys_reg(vcpu, r->reg) &= mask;
> > > +   __vcpu_sys_reg(vcpu, r->reg) = 0;
> > >  }
> >
> > At the end of the day, this function has no dependency on the host at
> > all, and only writes 0 to the per-vcpu register.
> >
> > So why not get rid of it altogether and have:
> >
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index c6cbfe6b854b..1d1514b89d75 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -976,7 +976,7 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> >         trap_wcr, reset_wcr, 0, 0,  get_wcr, set_wcr }
> >
> >  #define PMU_SYS_REG(r)                                               \
> > -     SYS_DESC(r), .reset = reset_pmu_reg, .visibility = pmu_visibility
> > +     SYS_DESC(r), .visibility = pmu_visibility
> >
> >  /* Macro to expand the PMEVCNTRn_EL0 register */
> >  #define PMU_PMEVCNTR_EL0(n)                                          \
> >
> > which would fall back to the specified reset value (zero by default)?
>
> Scratch that, we need:
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index c6cbfe6b854b..6f6a928c92ec 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -976,7 +976,7 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>           trap_wcr, reset_wcr, 0, 0,  get_wcr, set_wcr }
>
>  #define PMU_SYS_REG(r)                                         \
> -       SYS_DESC(r), .reset = reset_pmu_reg, .visibility = pmu_visibility
> +       SYS_DESC(r), .reset = reset_val, .visibility = pmu_visibility
>
>  /* Macro to expand the PMEVCNTRn_EL0 register */
>  #define PMU_PMEVCNTR_EL0(n)                                            \
>
> But otherwise, this should be enough.

Yes, that's true.  I will fix that in v3.

Thank you!
Reiji


end of thread, other threads:[~2023-01-21  5:20 UTC | newest]

Thread overview: 46+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-01-17  1:35 [PATCH v2 0/8] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Reiji Watanabe
2023-01-17  1:35 ` [PATCH v2 1/8] KVM: arm64: PMU: Have reset_pmu_reg() to clear a register Reiji Watanabe
2023-01-20 14:04   ` Marc Zyngier
2023-01-20 14:11     ` Marc Zyngier
2023-01-21  5:18       ` Reiji Watanabe
2023-01-17  1:35 ` [PATCH v2 2/8] KVM: arm64: PMU: Use reset_pmu_reg() for PMUSERENR_EL0 and PMCCFILTR_EL0 Reiji Watanabe
2023-01-17  1:35 ` [PATCH v2 3/8] KVM: arm64: PMU: Preserve vCPU's PMCR_EL0.N value on vCPU reset Reiji Watanabe
2023-01-20  0:30   ` Oliver Upton
2023-01-20 12:12     ` Marc Zyngier
2023-01-20 18:04       ` Oliver Upton
2023-01-20 18:53         ` Reiji Watanabe
2023-01-17  1:35 ` [PATCH v2 4/8] KVM: arm64: PMU: Disallow userspace to set PMCR.N greater than the host value Reiji Watanabe
2023-01-20 14:18   ` Marc Zyngier
2023-01-17  1:35 ` [PATCH v2 5/8] tools: arm64: Import perf_event.h Reiji Watanabe
2023-01-17  1:35 ` [PATCH v2 6/8] KVM: selftests: aarch64: Introduce vpmu_counter_access test Reiji Watanabe
2023-01-17  1:35 ` [PATCH v2 7/8] KVM: selftests: aarch64: vPMU register test for implemented counters Reiji Watanabe
2023-01-18  7:47   ` Shaoqin Huang
2023-01-19  3:02     ` Reiji Watanabe
2023-01-17  1:35 ` [PATCH v2 8/8] KVM: selftests: aarch64: vPMU register test for unimplemented counters Reiji Watanabe
2023-01-18  7:49   ` Shaoqin Huang
2023-01-19  3:04     ` Reiji Watanabe
2023-01-17  7:25 ` [PATCH v2 0/8] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Shaoqin Huang
2023-01-18  5:53   ` Reiji Watanabe
