linux-kernel.vger.kernel.org archive mirror
* [PATCH v7 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU
@ 2023-10-09 23:08 Raghavendra Rao Ananta
  2023-10-09 23:08 ` [PATCH v7 01/12] KVM: arm64: PMU: Introduce helpers to set the guest's PMU Raghavendra Rao Ananta
                   ` (11 more replies)
  0 siblings, 12 replies; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-09 23:08 UTC (permalink / raw)
  To: Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

Hello,

With permission from Reiji Watanabe <reijiw@google.com>, the original
author of the series, I'm posting v7 with the necessary alterations.

The goal of this series is to allow userspace to limit the number
of PMU event counters on the vCPU.  We need this to support migration
across systems that implement different numbers of counters.

The number of PMU event counters is indicated in PMCR_EL0.N.
For a vCPU with PMUv3 configured, its value will be the same as
the current PE by default.  Userspace can already set PMCR_EL0.N
for the vCPU to any value using KVM_SET_ONE_REG.  However, this is
effectively unsupported, as KVM resets PMCR_EL0.N to the host value
on vCPU reset, and some KVM code uses the host value to identify
(un)implemented event counters on the vCPU.

This series will ensure that the PMCR_EL0.N value is preserved
on vCPU reset and that KVM doesn't use the host value
to identify (un)implemented event counters on the vCPU.
This allows userspace to limit the number of PMU event
counters on the vCPU.

The series is based on kvmarm/next @7e6587baafc0 to include the
vCPU reset and feature flags cleanup/fixes series [1].

Patch 1 adds helper functions to set a PMU for the guest. These
helpers will make it easier for the following patches to modify
the relevant code.

Patch 2 sets the default PMU for the guest before the first
vCPU reset.

Patch 3 fixes reset_pmu_reg() to ensure that (RAZ) bits of
PMCNTEN{SET,CLR}_EL0, PMINTEN{SET,CLR}_EL1, and
PMOVS{SET,CLR}_EL1 corresponding to unimplemented event
counters on the vCPU are reset to zero.

Patch 4 is a minor refactoring to use the default PMU register reset
function for PMUSERENR_EL0 and PMCCFILTR_EL0.

Patches 5 and 6 add helpers to read the vCPU's PMCR_EL0 and the
number of counters, respectively.

Patch 7 changes the code to use the guest's PMCR_EL0.N, instead
of the PE's PMCR_EL0.N.

Patch 8 adds support for userspace to modify PMCR_EL0.N.

Patches 9-12 add a selftest to verify reading and writing PMU
registers for implemented and unimplemented PMU event counters
on the vCPU.

v7: Thanks to Oliver for the suggestions
- Rebase the series onto kvmarm/next.
- Move the logic to set the default PMU for the guest from
  kvm_reset_vcpu() to __kvm_vcpu_set_target() to deal with the
  error returned.
- Add a helper, kvm_arm_get_num_counters(), to read the number
  of general-purpose counters.
- Use this helper to fix the error reported by kernel test robot [2].

v6: Thanks to Oliver and Shaoqin for the suggestions
- Split the previously defined kvm_arm_set_vm_pmu() into separate
  functions: default arm_pmu and a caller requested arm_pmu.
- Send -EINVAL from kvm_reset_vcpu(), instead of -ENODEV for the
  case where KVM fails to set a default arm_pmu, to remain consistent
  with the existing behavior.
- Drop the v5 patch-5/12 that removes ARMV8_PMU_PMCR_N_MASK and adds
  ARMV8_PMU_PMCR_N. Make corresponding changes to v5 patch-6/12.
- Disregard introducing 'pmcr_n_limit' in kvm->arch as a member to
  be accessed later in 'set_pmcr()'. Instead, directly obtain the
  value by accessing the saved 'arm_pmu'.
- 'set_pmcr()' ignores the error when userspace tries to set PMCR.N
  greater than the hardware limit to keep the existing API behavior.
- 'set_pmcr()' ignores modifications to the register after the VM has
  started and returns a success to userspace.
- Introduce [get|set]_pmcr_n() helpers in the selftest to make
  modifications to the field easier.
- Define the 'vpmu_vm' globally in the selftest, instead of allocating
  it every time a VM is created.
- Use the new printf style __GUEST_ASSERT()s in the selftest. 

v5:
https://lore.kernel.org/all/20230817003029.3073210-1-rananta@google.com/
 - Drop the patches (v4 3,4) related to PMU version fixes as it's
   now being handled in a separate series [3].
 - Switch to config_lock, instead of kvm->lock, while configuring
   the guest PMU.
 - Instead of continuing after a WARN_ON() for the return value of
   kvm_arm_set_vm_pmu() in kvm_arm_pmu_v3_set_pmu(), patch-1 now
   returns from the function immediately with the error code.
 - Fix WARN_ON() logic in kvm_host_pmu_init() (patch v4 9/14).
 - Instead of returning 0, return -ENODEV from the
   kvm_arm_set_vm_pmu() stub function.
 - Do not define the PMEVN_CASE() and PMEVN_SWITCH() macros in
   the selftest code as they are now included in the imported
   arm_pmuv3.h header.
 - Since the (initial) purpose of the selftest is to test the
   accessibility of the counter registers, remove the functional
   test at the end of test_access_pmc_regs(). It'll be added
   later in a separate series.
 - Introduce additional helper functions (destroy_vpmu_vm(),
   PMC_ACC_TO_IDX()) in the selftest for ease of maintenance
   and debugging.
   
v4:
https://lore.kernel.org/all/20230211031506.4159098-1-reijiw@google.com/
 - Fix the selftest bug in patch 13: have test_access_pmc_regs()
   specify the pmc index for test_bitmap_pmu_regs() instead of a
   bit-shifted value. (Thank you, Raghavendra, for reporting the issue!)

v3:
https://lore.kernel.org/all/20230203040242.1792453-1-reijiw@google.com/
 - Remove reset_pmu_reg(), and use reset_val() instead. [Marc]
 - Fixed the initial value of PMCR_EL0.N on heterogeneous
   PMU systems. [Oliver]
 - Fixed PMUVer issues on heterogeneous PMU systems.
 - Fixed typos [Shaoqin]

v2:
https://lore.kernel.org/all/20230117013542.371944-1-reijiw@google.com/
 - Added the sys_reg's set_user() handler for the PMCR_EL0 to
   disallow userspace to set PMCR_EL0.N for the vCPU to a value
   that is greater than the host value (and added a new test
   case for this behavior). [Oliver]
 - Added to the commit log of the patch 2 that PMUSERENR_EL0 and
   PMCCFILTR_EL0 have UNKNOWN reset values.

v1:
https://lore.kernel.org/all/20221230035928.3423990-1-reijiw@google.com/

Thank you.
Raghavendra

[1]:
https://lore.kernel.org/all/20230920195036.1169791-1-oliver.upton@linux.dev/
[2]: https://lore.kernel.org/all/202309290607.Qgg05wKw-lkp@intel.com/
[3]:
https://lore.kernel.org/all/20230728181907.1759513-1-reijiw@google.com/

Raghavendra Rao Ananta (2):
  KVM: arm64: PMU: Add a helper to read the number of counters
  tools: Import arm_pmuv3.h

Reiji Watanabe (10):
  KVM: arm64: PMU: Introduce helpers to set the guest's PMU
  KVM: arm64: PMU: Set the default PMU for the guest before vCPU reset
  KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on vCPU
    reset
  KVM: arm64: PMU: Don't define the sysreg reset() for
    PM{USERENR,CCFILTR}_EL0
  KVM: arm64: PMU: Add a helper to read a vCPU's PMCR_EL0
  KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU
  KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
  KVM: selftests: aarch64: Introduce vpmu_counter_access test
  KVM: selftests: aarch64: vPMU register test for implemented counters
  KVM: selftests: aarch64: vPMU register test for unimplemented counters

 arch/arm64/include/asm/kvm_host.h             |   3 +
 arch/arm64/kvm/arm.c                          |  23 +-
 arch/arm64/kvm/pmu-emul.c                     | 102 ++-
 arch/arm64/kvm/sys_regs.c                     | 101 ++-
 include/kvm/arm_pmu.h                         |  18 +
 tools/include/perf/arm_pmuv3.h                | 308 +++++++++
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../kvm/aarch64/vpmu_counter_access.c         | 590 ++++++++++++++++++
 .../selftests/kvm/include/aarch64/processor.h |   1 +
 9 files changed, 1087 insertions(+), 60 deletions(-)
 create mode 100644 tools/include/perf/arm_pmuv3.h
 create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c


base-commit: 7e6587baafc0054bd32d9ca5f72af36e36ff1d05
-- 
2.42.0.609.gbb76f46606-goog



* [PATCH v7 01/12] KVM: arm64: PMU: Introduce helpers to set the guest's PMU
  2023-10-09 23:08 [PATCH v7 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
@ 2023-10-09 23:08 ` Raghavendra Rao Ananta
  2023-10-16 19:45   ` Eric Auger
  2023-10-09 23:08 ` [PATCH v7 02/12] KVM: arm64: PMU: Set the default PMU for the guest before vCPU reset Raghavendra Rao Ananta
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-09 23:08 UTC (permalink / raw)
  To: Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

From: Reiji Watanabe <reijiw@google.com>

Introduce new helper functions to set the guest's PMU
(kvm->arch.arm_pmu) either to a default probed instance or to a
caller requested one, and use it when the guest's PMU needs to
be set. These helpers will make it easier for the following
patches to modify the relevant code.

No functional change intended.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 arch/arm64/kvm/pmu-emul.c | 50 +++++++++++++++++++++++++++------------
 1 file changed, 35 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 3afb281ed8d2..eb5dcb12dafe 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -874,6 +874,36 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 	return true;
 }
 
+static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+{
+	lockdep_assert_held(&kvm->arch.config_lock);
+
+	kvm->arch.arm_pmu = arm_pmu;
+}
+
+/**
+ * kvm_arm_set_default_pmu - No PMU set, get the default one.
+ * @kvm: The kvm pointer
+ *
+ * The observant among you will notice that the supported_cpus
+ * mask does not get updated for the default PMU even though it
+ * is quite possible the selected instance supports only a
+ * subset of cores in the system. This is intentional, and
+ * upholds the preexisting behavior on heterogeneous systems
+ * where vCPUs can be scheduled on any core but the guest
+ * counters could stop working.
+ */
+static int kvm_arm_set_default_pmu(struct kvm *kvm)
+{
+	struct arm_pmu *arm_pmu = kvm_pmu_probe_armpmu();
+
+	if (!arm_pmu)
+		return -ENODEV;
+
+	kvm_arm_set_pmu(kvm, arm_pmu);
+	return 0;
+}
+
 static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -893,7 +923,7 @@ static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
 				break;
 			}
 
-			kvm->arch.arm_pmu = arm_pmu;
+			kvm_arm_set_pmu(kvm, arm_pmu);
 			cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus);
 			ret = 0;
 			break;
@@ -917,20 +947,10 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 		return -EBUSY;
 
 	if (!kvm->arch.arm_pmu) {
-		/*
-		 * No PMU set, get the default one.
-		 *
-		 * The observant among you will notice that the supported_cpus
-		 * mask does not get updated for the default PMU even though it
-		 * is quite possible the selected instance supports only a
-		 * subset of cores in the system. This is intentional, and
-		 * upholds the preexisting behavior on heterogeneous systems
-		 * where vCPUs can be scheduled on any core but the guest
-		 * counters could stop working.
-		 */
-		kvm->arch.arm_pmu = kvm_pmu_probe_armpmu();
-		if (!kvm->arch.arm_pmu)
-			return -ENODEV;
+		int ret = kvm_arm_set_default_pmu(kvm);
+
+		if (ret)
+			return ret;
 	}
 
 	switch (attr->attr) {
-- 
2.42.0.609.gbb76f46606-goog



* [PATCH v7 02/12] KVM: arm64: PMU: Set the default PMU for the guest before vCPU reset
  2023-10-09 23:08 [PATCH v7 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
  2023-10-09 23:08 ` [PATCH v7 01/12] KVM: arm64: PMU: Introduce helpers to set the guest's PMU Raghavendra Rao Ananta
@ 2023-10-09 23:08 ` Raghavendra Rao Ananta
  2023-10-10 22:25   ` Oliver Upton
  2023-10-09 23:08 ` [PATCH v7 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on " Raghavendra Rao Ananta
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-09 23:08 UTC (permalink / raw)
  To: Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

From: Reiji Watanabe <reijiw@google.com>

The following patches will use the number of counters from the
arm_pmu to set PMCR.N for the guest during vCPU reset. However,
since the guest is not associated with any arm_pmu until userspace
configures the vPMU device attributes, and a reset can happen
before that, assign a default PMU to the guest just before doing
the reset.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 arch/arm64/kvm/arm.c      | 20 ++++++++++++++++++++
 arch/arm64/kvm/pmu-emul.c | 12 ++----------
 include/kvm/arm_pmu.h     |  6 ++++++
 3 files changed, 28 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 78b0970eb8e6..708a53b70a7b 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1313,6 +1313,23 @@ static bool kvm_vcpu_init_changed(struct kvm_vcpu *vcpu,
 			     KVM_VCPU_MAX_FEATURES);
 }
 
+static int kvm_vcpu_set_pmu(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+
+	if (!kvm_arm_support_pmu_v3())
+		return -EINVAL;
+
+	/*
+	 * When the vCPU has a PMU, but no PMU is set for the guest
+	 * yet, set the default one.
+	 */
+	if (unlikely(!kvm->arch.arm_pmu))
+		return kvm_arm_set_default_pmu(kvm);
+
+	return 0;
+}
+
 static int __kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 				 const struct kvm_vcpu_init *init)
 {
@@ -1328,6 +1345,9 @@ static int __kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 
 	bitmap_copy(kvm->arch.vcpu_features, &features, KVM_VCPU_MAX_FEATURES);
 
+	if (kvm_vcpu_has_pmu(vcpu) && kvm_vcpu_set_pmu(vcpu))
+		goto out_unlock;
+
 	/* Now we know what it is, we can reset it. */
 	kvm_reset_vcpu(vcpu);
 
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index eb5dcb12dafe..cc30c246c010 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -717,8 +717,7 @@ static struct arm_pmu *kvm_pmu_probe_armpmu(void)
 	 * It is still necessary to get a valid cpu, though, to probe for the
 	 * default PMU instance as userspace is not required to specify a PMU
 	 * type. In order to uphold the preexisting behavior KVM selects the
-	 * PMU instance for the core where the first call to the
-	 * KVM_ARM_VCPU_PMU_V3_CTRL attribute group occurs. A dependent use case
+	 * PMU instance for the core during the vcpu reset. A dependent use case
 	 * would be a user with disdain of all things big.LITTLE that affines
 	 * the VMM to a particular cluster of cores.
 	 *
@@ -893,7 +892,7 @@ static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
  * where vCPUs can be scheduled on any core but the guest
  * counters could stop working.
  */
-static int kvm_arm_set_default_pmu(struct kvm *kvm)
+int kvm_arm_set_default_pmu(struct kvm *kvm)
 {
 	struct arm_pmu *arm_pmu = kvm_pmu_probe_armpmu();
 
@@ -946,13 +945,6 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 	if (vcpu->arch.pmu.created)
 		return -EBUSY;
 
-	if (!kvm->arch.arm_pmu) {
-		int ret = kvm_arm_set_default_pmu(kvm);
-
-		if (ret)
-			return ret;
-	}
-
 	switch (attr->attr) {
 	case KVM_ARM_VCPU_PMU_V3_IRQ: {
 		int __user *uaddr = (int __user *)(long)attr->addr;
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 3546ebc469ad..858ed9ce828a 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -101,6 +101,7 @@ void kvm_vcpu_pmu_resync_el0(void);
 })
 
 u8 kvm_arm_pmu_get_pmuver_limit(void);
+int kvm_arm_set_default_pmu(struct kvm *kvm);
 
 #else
 struct kvm_pmu {
@@ -174,6 +175,11 @@ static inline u8 kvm_arm_pmu_get_pmuver_limit(void)
 }
 static inline void kvm_vcpu_pmu_resync_el0(void) {}
 
+static inline int kvm_arm_set_default_pmu(struct kvm *kvm)
+{
+	return -ENODEV;
+}
+
 #endif
 
 #endif
-- 
2.42.0.609.gbb76f46606-goog



* [PATCH v7 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on vCPU reset
  2023-10-09 23:08 [PATCH v7 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
  2023-10-09 23:08 ` [PATCH v7 01/12] KVM: arm64: PMU: Introduce helpers to set the guest's PMU Raghavendra Rao Ananta
  2023-10-09 23:08 ` [PATCH v7 02/12] KVM: arm64: PMU: Set the default PMU for the guest before vCPU reset Raghavendra Rao Ananta
@ 2023-10-09 23:08 ` Raghavendra Rao Ananta
  2023-10-16 19:44   ` Eric Auger
  2023-10-09 23:08 ` [PATCH v7 04/12] KVM: arm64: PMU: Don't define the sysreg reset() for PM{USERENR,CCFILTR}_EL0 Raghavendra Rao Ananta
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-09 23:08 UTC (permalink / raw)
  To: Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

From: Reiji Watanabe <reijiw@google.com>

On vCPU reset, PMCNTEN{SET,CLR}_EL0, PMINTEN{SET,CLR}_EL1, and
PMOVS{SET,CLR}_EL1 for a vCPU are reset by reset_pmu_reg().
This function clears RAZ bits of those registers corresponding
to unimplemented event counters on the vCPU, and sets bits
corresponding to implemented event counters to a predefined
pseudo UNKNOWN value (some bits are set to 1).

The function identifies (un)implemented event counters on the
vCPU based on the PMCR_EL0.N value on the host. Using the host
value for this would be problematic when KVM supports letting
userspace set PMCR_EL0.N to a value different from the host value
(some of the RAZ bits of those registers could end up being set to 1).

Fix this by simply clearing the registers, which ensures that all
the RAZ bits are cleared even when the PMCR_EL0.N value for the
vCPU differs from the host value. Use reset_val() to do this
instead of fixing reset_pmu_reg(), and remove reset_pmu_reg(),
as it is no longer used.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 arch/arm64/kvm/sys_regs.c | 21 +--------------------
 1 file changed, 1 insertion(+), 20 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 818a52e257ed..3dbb7d276b0e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -717,25 +717,6 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
 	return REG_HIDDEN;
 }
 
-static u64 reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
-{
-	u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
-
-	/* No PMU available, any PMU reg may UNDEF... */
-	if (!kvm_arm_support_pmu_v3())
-		return 0;
-
-	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
-	n &= ARMV8_PMU_PMCR_N_MASK;
-	if (n)
-		mask |= GENMASK(n - 1, 0);
-
-	reset_unknown(vcpu, r);
-	__vcpu_sys_reg(vcpu, r->reg) &= mask;
-
-	return __vcpu_sys_reg(vcpu, r->reg);
-}
-
 static u64 reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	reset_unknown(vcpu, r);
@@ -1115,7 +1096,7 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	  trap_wcr, reset_wcr, 0, 0,  get_wcr, set_wcr }
 
 #define PMU_SYS_REG(name)						\
-	SYS_DESC(SYS_##name), .reset = reset_pmu_reg,			\
+	SYS_DESC(SYS_##name), .reset = reset_val,			\
 	.visibility = pmu_visibility
 
 /* Macro to expand the PMEVCNTRn_EL0 register */
-- 
2.42.0.609.gbb76f46606-goog



* [PATCH v7 04/12] KVM: arm64: PMU: Don't define the sysreg reset() for PM{USERENR,CCFILTR}_EL0
  2023-10-09 23:08 [PATCH v7 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
                   ` (2 preceding siblings ...)
  2023-10-09 23:08 ` [PATCH v7 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on " Raghavendra Rao Ananta
@ 2023-10-09 23:08 ` Raghavendra Rao Ananta
  2023-10-16 19:47   ` Eric Auger
  2023-10-09 23:08 ` [PATCH v7 05/12] KVM: arm64: PMU: Add a helper to read a vCPU's PMCR_EL0 Raghavendra Rao Ananta
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-09 23:08 UTC (permalink / raw)
  To: Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

From: Reiji Watanabe <reijiw@google.com>

The default reset function for PMU registers (defined by PMU_SYS_REG)
now simply clears a specified register. Use the default one for
PMUSERENR_EL0 and PMCCFILTR_EL0, as KVM currently clears those
registers on vCPU reset (NOTE: All non-RES0 fields of those
registers have UNKNOWN reset values, and the same fields of
their AArch32 registers have 0 reset values).

No functional change intended.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 arch/arm64/kvm/sys_regs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 3dbb7d276b0e..08af7824e9d8 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2180,7 +2180,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	 * in 32bit mode. Here we choose to reset it as zero for consistency.
 	 */
 	{ PMU_SYS_REG(PMUSERENR_EL0), .access = access_pmuserenr,
-	  .reset = reset_val, .reg = PMUSERENR_EL0, .val = 0 },
+	  .reg = PMUSERENR_EL0, },
 	{ PMU_SYS_REG(PMOVSSET_EL0),
 	  .access = access_pmovs, .reg = PMOVSSET_EL0 },
 
@@ -2338,7 +2338,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	 * in 32bit mode. Here we choose to reset it as zero for consistency.
 	 */
 	{ PMU_SYS_REG(PMCCFILTR_EL0), .access = access_pmu_evtyper,
-	  .reset = reset_val, .reg = PMCCFILTR_EL0, .val = 0 },
+	  .reg = PMCCFILTR_EL0, },
 
 	EL2_REG(VPIDR_EL2, access_rw, reset_unknown, 0),
 	EL2_REG(VMPIDR_EL2, access_rw, reset_unknown, 0),
-- 
2.42.0.609.gbb76f46606-goog



* [PATCH v7 05/12] KVM: arm64: PMU: Add a helper to read a vCPU's PMCR_EL0
  2023-10-09 23:08 [PATCH v7 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
                   ` (3 preceding siblings ...)
  2023-10-09 23:08 ` [PATCH v7 04/12] KVM: arm64: PMU: Don't define the sysreg reset() for PM{USERENR,CCFILTR}_EL0 Raghavendra Rao Ananta
@ 2023-10-09 23:08 ` Raghavendra Rao Ananta
  2023-10-16 20:02   ` Eric Auger
  2023-10-09 23:08 ` [PATCH v7 06/12] KVM: arm64: PMU: Add a helper to read the number of counters Raghavendra Rao Ananta
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-09 23:08 UTC (permalink / raw)
  To: Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

From: Reiji Watanabe <reijiw@google.com>

Add a helper to read a vCPU's PMCR_EL0, and use it when KVM
reads a vCPU's PMCR_EL0.

The PMCR_EL0 value is tracked in a sysreg file per vCPU.
The following patches will make (only) PMCR_EL0.N tracked per
guest. The new helper will be useful to combine the PMCR_EL0.N
field (tracked per guest) with the other fields (tracked per
vCPU) to provide the value of PMCR_EL0.

No functional change intended.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 arch/arm64/kvm/arm.c      |  3 +--
 arch/arm64/kvm/pmu-emul.c | 21 +++++++++++++++------
 arch/arm64/kvm/sys_regs.c |  6 +++---
 include/kvm/arm_pmu.h     |  6 ++++++
 4 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 708a53b70a7b..0af4d6bbe3d3 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -854,8 +854,7 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
 		}
 
 		if (kvm_check_request(KVM_REQ_RELOAD_PMU, vcpu))
-			kvm_pmu_handle_pmcr(vcpu,
-					    __vcpu_sys_reg(vcpu, PMCR_EL0));
+			kvm_pmu_handle_pmcr(vcpu, kvm_vcpu_read_pmcr(vcpu));
 
 		if (kvm_check_request(KVM_REQ_RESYNC_PMU_EL0, vcpu))
 			kvm_vcpu_pmu_restore_guest(vcpu);
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index cc30c246c010..a161d6266a5c 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -72,7 +72,7 @@ static bool kvm_pmc_is_64bit(struct kvm_pmc *pmc)
 
 static bool kvm_pmc_has_64bit_overflow(struct kvm_pmc *pmc)
 {
-	u64 val = __vcpu_sys_reg(kvm_pmc_to_vcpu(pmc), PMCR_EL0);
+	u64 val = kvm_vcpu_read_pmcr(kvm_pmc_to_vcpu(pmc));
 
 	return (pmc->idx < ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LP)) ||
 	       (pmc->idx == ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LC));
@@ -250,7 +250,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 {
-	u64 val = __vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
+	u64 val = kvm_vcpu_read_pmcr(vcpu) >> ARMV8_PMU_PMCR_N_SHIFT;
 
 	val &= ARMV8_PMU_PMCR_N_MASK;
 	if (val == 0)
@@ -272,7 +272,7 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 	if (!kvm_vcpu_has_pmu(vcpu))
 		return;
 
-	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
+	if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) || !val)
 		return;
 
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
@@ -324,7 +324,7 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
 {
 	u64 reg = 0;
 
-	if ((__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)) {
+	if ((kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E)) {
 		reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
 		reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
 		reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
@@ -426,7 +426,7 @@ static void kvm_pmu_counter_increment(struct kvm_vcpu *vcpu,
 {
 	int i;
 
-	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
+	if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E))
 		return;
 
 	/* Weed out disabled counters */
@@ -569,7 +569,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc)
 {
 	struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
-	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
+	return (kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) &&
 	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(pmc->idx));
 }
 
@@ -1084,3 +1084,12 @@ u8 kvm_arm_pmu_get_pmuver_limit(void)
 					      ID_AA64DFR0_EL1_PMUVer_V3P5);
 	return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), tmp);
 }
+
+/**
+ * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU
+ * @vcpu: The vcpu pointer
+ */
+u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+	return __vcpu_sys_reg(vcpu, PMCR_EL0);
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 08af7824e9d8..ff0f7095eaca 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -803,7 +803,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		 * Only update writeable bits of PMCR (continuing into
 		 * kvm_pmu_handle_pmcr() as well)
 		 */
-		val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+		val = kvm_vcpu_read_pmcr(vcpu);
 		val &= ~ARMV8_PMU_PMCR_MASK;
 		val |= p->regval & ARMV8_PMU_PMCR_MASK;
 		if (!kvm_supports_32bit_el0())
@@ -811,7 +811,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		kvm_pmu_handle_pmcr(vcpu, val);
 	} else {
 		/* PMCR.P & PMCR.C are RAZ */
-		val = __vcpu_sys_reg(vcpu, PMCR_EL0)
+		val = kvm_vcpu_read_pmcr(vcpu)
 		      & ~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C);
 		p->regval = val;
 	}
@@ -860,7 +860,7 @@ static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
 {
 	u64 pmcr, val;
 
-	pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
+	pmcr = kvm_vcpu_read_pmcr(vcpu);
 	val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
 	if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
 		kvm_inject_undefined(vcpu);
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 858ed9ce828a..cd980d78b86b 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -103,6 +103,7 @@ void kvm_vcpu_pmu_resync_el0(void);
 u8 kvm_arm_pmu_get_pmuver_limit(void);
 int kvm_arm_set_default_pmu(struct kvm *kvm);
 
+u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu);
 #else
 struct kvm_pmu {
 };
@@ -180,6 +181,11 @@ static inline int kvm_arm_set_default_pmu(struct kvm *kvm)
 	return -ENODEV;
 }
 
+static inline u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+
 #endif
 
 #endif
-- 
2.42.0.609.gbb76f46606-goog



* [PATCH v7 06/12] KVM: arm64: PMU: Add a helper to read the number of counters
  2023-10-09 23:08 [PATCH v7 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
                   ` (4 preceding siblings ...)
  2023-10-09 23:08 ` [PATCH v7 05/12] KVM: arm64: PMU: Add a helper to read a vCPU's PMCR_EL0 Raghavendra Rao Ananta
@ 2023-10-09 23:08 ` Raghavendra Rao Ananta
  2023-10-10 22:30   ` Oliver Upton
  2023-10-09 23:08 ` [PATCH v7 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU Raghavendra Rao Ananta
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-09 23:08 UTC (permalink / raw)
  To: Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

Add a helper, kvm_arm_get_num_counters(), to read the number
of counters from the arm_pmu associated with the VM. Make the
function global, as upcoming patches will need the value when
setting the guest's PMCR.N from userspace.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 arch/arm64/kvm/pmu-emul.c | 17 +++++++++++++++++
 include/kvm/arm_pmu.h     |  6 ++++++
 2 files changed, 23 insertions(+)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index a161d6266a5c..84aa8efd9163 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -873,6 +873,23 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 	return true;
 }
 
+/**
+ * kvm_arm_get_num_counters - Get the number of general-purpose PMU counters.
+ * @kvm: The kvm pointer
+ */
+int kvm_arm_get_num_counters(struct kvm *kvm)
+{
+	struct arm_pmu *arm_pmu = kvm->arch.arm_pmu;
+
+	lockdep_assert_held(&kvm->arch.config_lock);
+
+	/*
+	 * The arm_pmu->num_events considers the cycle counter as well.
+	 * Ignore that and return only the general-purpose counters.
+	 */
+	return arm_pmu->num_events - 1;
+}
+
 static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
 {
 	lockdep_assert_held(&kvm->arch.config_lock);
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index cd980d78b86b..672f3e9d7eea 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -102,6 +102,7 @@ void kvm_vcpu_pmu_resync_el0(void);
 
 u8 kvm_arm_pmu_get_pmuver_limit(void);
 int kvm_arm_set_default_pmu(struct kvm *kvm);
+int kvm_arm_get_num_counters(struct kvm *kvm);
 
 u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu);
 #else
@@ -181,6 +182,11 @@ static inline int kvm_arm_set_default_pmu(struct kvm *kvm)
 	return -ENODEV;
 }
 
+static inline int kvm_arm_get_num_counters(struct kvm *kvm)
+{
+	return -ENODEV;
+}
+
 static inline u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
 {
 	return 0;
-- 
2.42.0.609.gbb76f46606-goog


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH v7 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU
  2023-10-09 23:08 [PATCH v7 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
                   ` (5 preceding siblings ...)
  2023-10-09 23:08 ` [PATCH v7 06/12] KVM: arm64: PMU: Add a helper to read the number of counters Raghavendra Rao Ananta
@ 2023-10-09 23:08 ` Raghavendra Rao Ananta
  2023-10-16 13:35   ` Sebastian Ott
  2023-10-09 23:08 ` [PATCH v7 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest Raghavendra Rao Ananta
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-09 23:08 UTC (permalink / raw)
  To: Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

From: Reiji Watanabe <reijiw@google.com>

The number of PMU event counters is indicated in PMCR_EL0.N.
For a vCPU with PMUv3 configured, the value is set to the same
value as the current PE on every vCPU reset.  Unless the vCPU is
pinned to PEs that have the PMU associated with the guest from
the initial vCPU reset, the value might differ from the PMU's
PMCR_EL0.N on heterogeneous PMU systems.

Fix this by setting the vCPU's PMCR_EL0.N to the PMU's PMCR_EL0.N
value. Track the PMCR_EL0.N per guest, as only one PMU can be set
for the guest (PMCR_EL0.N must be the same for all vCPUs of the
guest), and it is convenient for updating the value.

KVM does not yet support userspace modifying PMCR_EL0.N.
The following patch will add support for that.
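The PMCR.N splice done in kvm_vcpu_read_pmcr() below can be sketched in isolation: clear the hardware-provided N field (bits [15:11], per the ARMV8_PMU_PMCR_N_SHIFT/MASK definitions) and substitute the per-VM value. This is a simplified model, not the kernel code itself:

```c
#include <stdint.h>

#define PMCR_N_SHIFT	11	/* matches ARMV8_PMU_PMCR_N_SHIFT */
#define PMCR_N_MASK	0x1fULL	/* matches ARMV8_PMU_PMCR_N_MASK */

/* Replace the N field of a raw PMCR value with the per-VM tracked value */
static uint64_t splice_pmcr_n(uint64_t pmcr, uint8_t vm_pmcr_n)
{
	pmcr &= ~(PMCR_N_MASK << PMCR_N_SHIFT);		/* drop the HW N */
	return pmcr | ((uint64_t)vm_pmcr_n << PMCR_N_SHIFT);
}
```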

Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 arch/arm64/include/asm/kvm_host.h |  3 +++
 arch/arm64/kvm/pmu-emul.c         | 14 +++++++++++++-
 arch/arm64/kvm/sys_regs.c         | 15 +++++++++------
 3 files changed, 25 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f7e5132c0a23..a7f326a85077 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -283,6 +283,9 @@ struct kvm_arch {
 
 	cpumask_var_t supported_cpus;
 
+	/* PMCR_EL0.N value for the guest */
+	u8 pmcr_n;
+
 	/* Hypercall features firmware registers' descriptor */
 	struct kvm_smccc_features smccc_feat;
 	struct maple_tree smccc_filter;
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 84aa8efd9163..4daa9f6b170a 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -690,6 +690,9 @@ void kvm_host_pmu_init(struct arm_pmu *pmu)
 	if (!entry)
 		goto out_unlock;
 
+	WARN_ON((pmu->num_events <= 0) ||
+		(pmu->num_events > ARMV8_PMU_MAX_COUNTERS));
+
 	entry->arm_pmu = pmu;
 	list_add_tail(&entry->entry, &arm_pmus);
 
@@ -895,6 +898,7 @@ static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
 	lockdep_assert_held(&kvm->arch.config_lock);
 
 	kvm->arch.arm_pmu = arm_pmu;
+	kvm->arch.pmcr_n = kvm_arm_get_num_counters(kvm);
 }
 
 /**
@@ -1105,8 +1109,16 @@ u8 kvm_arm_pmu_get_pmuver_limit(void)
 /**
  * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU
  * @vcpu: The vcpu pointer
+ *
+ * The function returns the value of PMCR.N based on the per-VM tracked
+ * value (kvm->arch.pmcr_n). This is to ensure that the register field
+ * remains consistent for the VM, even on heterogeneous systems where
+ * the value may vary when read from different CPUs (during vCPU reset).
  */
 u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
 {
-	return __vcpu_sys_reg(vcpu, PMCR_EL0);
+	u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0) &
+			~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
+
+	return pmcr | ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
 }
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index ff0f7095eaca..c750722fbe4a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -745,12 +745,8 @@ static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	u64 pmcr;
 
-	/* No PMU available, PMCR_EL0 may UNDEF... */
-	if (!kvm_arm_support_pmu_v3())
-		return 0;
-
 	/* Only preserve PMCR_EL0.N, and reset the rest to 0 */
-	pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
+	pmcr = kvm_vcpu_read_pmcr(vcpu) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
 	if (!kvm_supports_32bit_el0())
 		pmcr |= ARMV8_PMU_PMCR_LC;
 
@@ -1084,6 +1080,13 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	return true;
 }
 
+static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
+		    u64 *val)
+{
+	*val = kvm_vcpu_read_pmcr(vcpu);
+	return 0;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	{ SYS_DESC(SYS_DBGBVRn_EL1(n)),					\
@@ -2148,7 +2151,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_SVCR), undef_access },
 
 	{ PMU_SYS_REG(PMCR_EL0), .access = access_pmcr,
-	  .reset = reset_pmcr, .reg = PMCR_EL0 },
+	  .reset = reset_pmcr, .reg = PMCR_EL0, .get_user = get_pmcr },
 	{ PMU_SYS_REG(PMCNTENSET_EL0),
 	  .access = access_pmcnten, .reg = PMCNTENSET_EL0 },
 	{ PMU_SYS_REG(PMCNTENCLR_EL0),
-- 
2.42.0.609.gbb76f46606-goog


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH v7 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
  2023-10-09 23:08 [PATCH v7 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
                   ` (6 preceding siblings ...)
  2023-10-09 23:08 ` [PATCH v7 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU Raghavendra Rao Ananta
@ 2023-10-09 23:08 ` Raghavendra Rao Ananta
  2023-10-17 15:52   ` Sebastian Ott
  2023-10-09 23:08 ` [PATCH v7 09/12] tools: Import arm_pmuv3.h Raghavendra Rao Ananta
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-09 23:08 UTC (permalink / raw)
  To: Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

From: Reiji Watanabe <reijiw@google.com>

KVM does not yet support userspace modifying PMCR_EL0.N (with
the previous patch, KVM ignores what is written by userspace).
Add support for userspace to limit PMCR_EL0.N.

Disallow userspace from setting PMCR_EL0.N to a value greater
than the host value, as KVM doesn't support more event counters
than what the host HW implements. Also, make this register
immutable after the VM has started running. To maintain the
existing expectations, KVM returns success instead of an error
for these two cases.

Finally, ignore writes to read-only bits that are cleared on
vCPU reset, and RES{0,1} bits (including writable bits that
KVM doesn't support yet), as those bits shouldn't be modified
(at least with the current KVM).
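The write filtering described above can be sketched as follows. This is a simplified model of what set_pmcr() does, using the mask values from the imported header (PMCR bits [7:0] plus the N field are mutable; everything else is preserved, and LC is forced when AArch32 is unsupported):

```c
#include <stdint.h>

#define PMCR_MASK	0xffULL			/* writable bits [7:0] */
#define PMCR_N_FIELD	(0x1fULL << 11)		/* PMCR.N, bits [15:11] */
#define PMCR_LC		(1ULL << 6)		/* RES1 without AArch32 */

/* Accept only mutable bits from userspace; keep the rest of 'cur' */
static uint64_t filter_pmcr_write(uint64_t cur, uint64_t val,
				  int have_aarch32)
{
	uint64_t mutable_mask = PMCR_MASK | PMCR_N_FIELD;

	val &= mutable_mask;		/* drop writes to immutable bits */
	val |= cur & ~mutable_mask;	/* preserve the immutable bits */
	if (!have_aarch32)
		val |= PMCR_LC;		/* LC is RES1 in this case */
	return val;
}
```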

Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 arch/arm64/kvm/sys_regs.c | 57 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 55 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c750722fbe4a..0c8d337b0370 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1087,6 +1087,59 @@ static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
 	return 0;
 }
 
+static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
+		    u64 val)
+{
+	struct kvm *kvm = vcpu->kvm;
+	u64 new_n, mutable_mask;
+
+	mutex_lock(&kvm->arch.config_lock);
+
+	/*
+	 * Make PMCR immutable once the VM has started running, but do
+	 * not return an error (-EBUSY) to meet the existing expectations.
+	 */
+	if (kvm_vm_has_ran_once(vcpu->kvm)) {
+		mutex_unlock(&kvm->arch.config_lock);
+		return 0;
+	}
+
+	new_n = (val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
+	if (new_n != kvm->arch.pmcr_n) {
+		u8 pmcr_n_limit = kvm_arm_get_num_counters(kvm);
+
+		/*
+		 * The vCPU can't have more counters than the PMU hardware
+		 * implements. Ignore this error to maintain compatibility
+		 * with the existing KVM behavior.
+		 */
+		if (new_n <= pmcr_n_limit)
+			kvm->arch.pmcr_n = new_n;
+	}
+	mutex_unlock(&kvm->arch.config_lock);
+
+	/*
+	 * Ignore writes to RES0 bits, read only bits that are cleared on
+	 * vCPU reset, and writable bits that KVM doesn't support yet.
+	 * (i.e. only PMCR.N and bits [7:0] are mutable from userspace)
+	 * The LP bit is RES0 when FEAT_PMUv3p5 is not supported on the vCPU.
+	 * But, we leave the bit as it is here, as the vCPU's PMUver might
+	 * be changed later (NOTE: the bit will be cleared on first vCPU run
+	 * if necessary).
+	 */
+	mutable_mask = (ARMV8_PMU_PMCR_MASK |
+			(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT));
+	val &= mutable_mask;
+	val |= (__vcpu_sys_reg(vcpu, r->reg) & ~mutable_mask);
+
+	/* The LC bit is RES1 when AArch32 is not supported */
+	if (!kvm_supports_32bit_el0())
+		val |= ARMV8_PMU_PMCR_LC;
+
+	__vcpu_sys_reg(vcpu, r->reg) = val;
+	return 0;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	{ SYS_DESC(SYS_DBGBVRn_EL1(n)),					\
@@ -2150,8 +2203,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_CTR_EL0), access_ctr },
 	{ SYS_DESC(SYS_SVCR), undef_access },
 
-	{ PMU_SYS_REG(PMCR_EL0), .access = access_pmcr,
-	  .reset = reset_pmcr, .reg = PMCR_EL0, .get_user = get_pmcr },
+	{ PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr,
+	  .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr },
 	{ PMU_SYS_REG(PMCNTENSET_EL0),
 	  .access = access_pmcnten, .reg = PMCNTENSET_EL0 },
 	{ PMU_SYS_REG(PMCNTENCLR_EL0),
-- 
2.42.0.609.gbb76f46606-goog


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH v7 09/12] tools: Import arm_pmuv3.h
  2023-10-09 23:08 [PATCH v7 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
                   ` (7 preceding siblings ...)
  2023-10-09 23:08 ` [PATCH v7 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest Raghavendra Rao Ananta
@ 2023-10-09 23:08 ` Raghavendra Rao Ananta
  2023-10-09 23:08 ` [PATCH v7 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test Raghavendra Rao Ananta
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-09 23:08 UTC (permalink / raw)
  To: Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

Import kernel's include/linux/perf/arm_pmuv3.h, with the
definition of PMEVN_SWITCH() additionally including an assert()
for the 'default' case. The following patches will use macros
defined in this header.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 tools/include/perf/arm_pmuv3.h | 308 +++++++++++++++++++++++++++++++++
 1 file changed, 308 insertions(+)
 create mode 100644 tools/include/perf/arm_pmuv3.h

diff --git a/tools/include/perf/arm_pmuv3.h b/tools/include/perf/arm_pmuv3.h
new file mode 100644
index 000000000000..e822d49fb5b8
--- /dev/null
+++ b/tools/include/perf/arm_pmuv3.h
@@ -0,0 +1,308 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#ifndef __PERF_ARM_PMUV3_H
+#define __PERF_ARM_PMUV3_H
+
+#include <assert.h>
+#include <asm/bug.h>
+
+#define ARMV8_PMU_MAX_COUNTERS	32
+#define ARMV8_PMU_COUNTER_MASK	(ARMV8_PMU_MAX_COUNTERS - 1)
+
+/*
+ * Common architectural and microarchitectural event numbers.
+ */
+#define ARMV8_PMUV3_PERFCTR_SW_INCR				0x0000
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL			0x0001
+#define ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL			0x0002
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL			0x0003
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE				0x0004
+#define ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL			0x0005
+#define ARMV8_PMUV3_PERFCTR_LD_RETIRED				0x0006
+#define ARMV8_PMUV3_PERFCTR_ST_RETIRED				0x0007
+#define ARMV8_PMUV3_PERFCTR_INST_RETIRED			0x0008
+#define ARMV8_PMUV3_PERFCTR_EXC_TAKEN				0x0009
+#define ARMV8_PMUV3_PERFCTR_EXC_RETURN				0x000A
+#define ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED			0x000B
+#define ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED			0x000C
+#define ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED			0x000D
+#define ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED			0x000E
+#define ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED		0x000F
+#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED				0x0010
+#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES				0x0011
+#define ARMV8_PMUV3_PERFCTR_BR_PRED				0x0012
+#define ARMV8_PMUV3_PERFCTR_MEM_ACCESS				0x0013
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE				0x0014
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB			0x0015
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE				0x0016
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL			0x0017
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB			0x0018
+#define ARMV8_PMUV3_PERFCTR_BUS_ACCESS				0x0019
+#define ARMV8_PMUV3_PERFCTR_MEMORY_ERROR			0x001A
+#define ARMV8_PMUV3_PERFCTR_INST_SPEC				0x001B
+#define ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED			0x001C
+#define ARMV8_PMUV3_PERFCTR_BUS_CYCLES				0x001D
+#define ARMV8_PMUV3_PERFCTR_CHAIN				0x001E
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE			0x001F
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE			0x0020
+#define ARMV8_PMUV3_PERFCTR_BR_RETIRED				0x0021
+#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED_RETIRED			0x0022
+#define ARMV8_PMUV3_PERFCTR_STALL_FRONTEND			0x0023
+#define ARMV8_PMUV3_PERFCTR_STALL_BACKEND			0x0024
+#define ARMV8_PMUV3_PERFCTR_L1D_TLB				0x0025
+#define ARMV8_PMUV3_PERFCTR_L1I_TLB				0x0026
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE				0x0027
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_REFILL			0x0028
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_ALLOCATE			0x0029
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL			0x002A
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE				0x002B
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB			0x002C
+#define ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL			0x002D
+#define ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL			0x002E
+#define ARMV8_PMUV3_PERFCTR_L2D_TLB				0x002F
+#define ARMV8_PMUV3_PERFCTR_L2I_TLB				0x0030
+#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS			0x0031
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE				0x0032
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS			0x0033
+#define ARMV8_PMUV3_PERFCTR_DTLB_WALK				0x0034
+#define ARMV8_PMUV3_PERFCTR_ITLB_WALK				0x0035
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_RD				0x0036
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS_RD			0x0037
+#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS_RD			0x0038
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_LMISS_RD			0x0039
+#define ARMV8_PMUV3_PERFCTR_OP_RETIRED				0x003A
+#define ARMV8_PMUV3_PERFCTR_OP_SPEC				0x003B
+#define ARMV8_PMUV3_PERFCTR_STALL				0x003C
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND			0x003D
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND			0x003E
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT				0x003F
+
+/* Statistical profiling extension microarchitectural events */
+#define ARMV8_SPE_PERFCTR_SAMPLE_POP				0x4000
+#define ARMV8_SPE_PERFCTR_SAMPLE_FEED				0x4001
+#define ARMV8_SPE_PERFCTR_SAMPLE_FILTRATE			0x4002
+#define ARMV8_SPE_PERFCTR_SAMPLE_COLLISION			0x4003
+
+/* AMUv1 architecture events */
+#define ARMV8_AMU_PERFCTR_CNT_CYCLES				0x4004
+#define ARMV8_AMU_PERFCTR_STALL_BACKEND_MEM			0x4005
+
+/* long-latency read miss events */
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_LMISS			0x4006
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_LMISS_RD			0x4009
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_LMISS			0x400A
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_LMISS_RD			0x400B
+
+/* Trace buffer events */
+#define ARMV8_PMUV3_PERFCTR_TRB_WRAP				0x400C
+#define ARMV8_PMUV3_PERFCTR_TRB_TRIG				0x400E
+
+/* Trace unit events */
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT0				0x4010
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT1				0x4011
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT2				0x4012
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT3				0x4013
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT4			0x4018
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT5			0x4019
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT6			0x401A
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT7			0x401B
+
+/* additional latency from alignment events */
+#define ARMV8_PMUV3_PERFCTR_LDST_ALIGN_LAT			0x4020
+#define ARMV8_PMUV3_PERFCTR_LD_ALIGN_LAT			0x4021
+#define ARMV8_PMUV3_PERFCTR_ST_ALIGN_LAT			0x4022
+
+/* Armv8.5 Memory Tagging Extension events */
+#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED			0x4024
+#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_RD			0x4025
+#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_WR			0x4026
+
+/* ARMv8 recommended implementation defined event types */
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD			0x0040
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR			0x0041
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD		0x0042
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR		0x0043
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_INNER		0x0044
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_OUTER		0x0045
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_VICTIM		0x0046
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_CLEAN			0x0047
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_INVAL			0x0048
+
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD			0x004C
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR			0x004D
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD				0x004E
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR				0x004F
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_RD			0x0050
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WR			0x0051
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_RD		0x0052
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_WR		0x0053
+
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_VICTIM		0x0056
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_CLEAN			0x0057
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_INVAL			0x0058
+
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_RD			0x005C
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_WR			0x005D
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_RD				0x005E
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_WR				0x005F
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD			0x0060
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR			0x0061
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_SHARED			0x0062
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NOT_SHARED		0x0063
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NORMAL			0x0064
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_PERIPH			0x0065
+#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_RD			0x0066
+#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_WR			0x0067
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LD_SPEC			0x0068
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_ST_SPEC			0x0069
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LDST_SPEC		0x006A
+
+#define ARMV8_IMPDEF_PERFCTR_LDREX_SPEC				0x006C
+#define ARMV8_IMPDEF_PERFCTR_STREX_PASS_SPEC			0x006D
+#define ARMV8_IMPDEF_PERFCTR_STREX_FAIL_SPEC			0x006E
+#define ARMV8_IMPDEF_PERFCTR_STREX_SPEC				0x006F
+#define ARMV8_IMPDEF_PERFCTR_LD_SPEC				0x0070
+#define ARMV8_IMPDEF_PERFCTR_ST_SPEC				0x0071
+#define ARMV8_IMPDEF_PERFCTR_LDST_SPEC				0x0072
+#define ARMV8_IMPDEF_PERFCTR_DP_SPEC				0x0073
+#define ARMV8_IMPDEF_PERFCTR_ASE_SPEC				0x0074
+#define ARMV8_IMPDEF_PERFCTR_VFP_SPEC				0x0075
+#define ARMV8_IMPDEF_PERFCTR_PC_WRITE_SPEC			0x0076
+#define ARMV8_IMPDEF_PERFCTR_CRYPTO_SPEC			0x0077
+#define ARMV8_IMPDEF_PERFCTR_BR_IMMED_SPEC			0x0078
+#define ARMV8_IMPDEF_PERFCTR_BR_RETURN_SPEC			0x0079
+#define ARMV8_IMPDEF_PERFCTR_BR_INDIRECT_SPEC			0x007A
+
+#define ARMV8_IMPDEF_PERFCTR_ISB_SPEC				0x007C
+#define ARMV8_IMPDEF_PERFCTR_DSB_SPEC				0x007D
+#define ARMV8_IMPDEF_PERFCTR_DMB_SPEC				0x007E
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_UNDEF				0x0081
+#define ARMV8_IMPDEF_PERFCTR_EXC_SVC				0x0082
+#define ARMV8_IMPDEF_PERFCTR_EXC_PABORT				0x0083
+#define ARMV8_IMPDEF_PERFCTR_EXC_DABORT				0x0084
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_IRQ				0x0086
+#define ARMV8_IMPDEF_PERFCTR_EXC_FIQ				0x0087
+#define ARMV8_IMPDEF_PERFCTR_EXC_SMC				0x0088
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_HVC				0x008A
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_PABORT			0x008B
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_DABORT			0x008C
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_OTHER			0x008D
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_IRQ			0x008E
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_FIQ			0x008F
+#define ARMV8_IMPDEF_PERFCTR_RC_LD_SPEC				0x0090
+#define ARMV8_IMPDEF_PERFCTR_RC_ST_SPEC				0x0091
+
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_RD			0x00A0
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WR			0x00A1
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_RD		0x00A2
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_WR		0x00A3
+
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_VICTIM		0x00A6
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_CLEAN			0x00A7
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_INVAL			0x00A8
+
+/*
+ * Per-CPU PMCR: config reg
+ */
+#define ARMV8_PMU_PMCR_E	(1 << 0) /* Enable all counters */
+#define ARMV8_PMU_PMCR_P	(1 << 1) /* Reset all counters */
+#define ARMV8_PMU_PMCR_C	(1 << 2) /* Cycle counter reset */
+#define ARMV8_PMU_PMCR_D	(1 << 3) /* CCNT counts every 64th cpu cycle */
+#define ARMV8_PMU_PMCR_X	(1 << 4) /* Export to ETM */
+#define ARMV8_PMU_PMCR_DP	(1 << 5) /* Disable CCNT if non-invasive debug*/
+#define ARMV8_PMU_PMCR_LC	(1 << 6) /* Overflow on 64 bit cycle counter */
+#define ARMV8_PMU_PMCR_LP	(1 << 7) /* Long event counter enable */
+#define ARMV8_PMU_PMCR_N_SHIFT	11  /* Number of counters supported */
+#define ARMV8_PMU_PMCR_N_MASK	0x1f
+#define ARMV8_PMU_PMCR_MASK	0xff    /* Mask for writable bits */
+
+/*
+ * PMOVSR: counters overflow flag status reg
+ */
+#define ARMV8_PMU_OVSR_MASK		0xffffffff	/* Mask for writable bits */
+#define ARMV8_PMU_OVERFLOWED_MASK	ARMV8_PMU_OVSR_MASK
+
+/*
+ * PMXEVTYPER: Event selection reg
+ */
+#define ARMV8_PMU_EVTYPE_MASK	0xc800ffff	/* Mask for writable bits */
+#define ARMV8_PMU_EVTYPE_EVENT	0xffff		/* Mask for EVENT bits */
+
+/*
+ * Event filters for PMUv3
+ */
+#define ARMV8_PMU_EXCLUDE_EL1	(1U << 31)
+#define ARMV8_PMU_EXCLUDE_EL0	(1U << 30)
+#define ARMV8_PMU_INCLUDE_EL2	(1U << 27)
+
+/*
+ * PMUSERENR: user enable reg
+ */
+#define ARMV8_PMU_USERENR_MASK	0xf		/* Mask for writable bits */
+#define ARMV8_PMU_USERENR_EN	(1 << 0) /* PMU regs can be accessed at EL0 */
+#define ARMV8_PMU_USERENR_SW	(1 << 1) /* PMSWINC can be written at EL0 */
+#define ARMV8_PMU_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
+#define ARMV8_PMU_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */
+
+/* PMMIR_EL1.SLOTS mask */
+#define ARMV8_PMU_SLOTS_MASK	0xff
+
+#define ARMV8_PMU_BUS_SLOTS_SHIFT 8
+#define ARMV8_PMU_BUS_SLOTS_MASK 0xff
+#define ARMV8_PMU_BUS_WIDTH_SHIFT 16
+#define ARMV8_PMU_BUS_WIDTH_MASK 0xf
+
+/*
+ * This code is really good
+ */
+
+#define PMEVN_CASE(n, case_macro) \
+	case n: case_macro(n); break
+
+#define PMEVN_SWITCH(x, case_macro)				\
+	do {							\
+		switch (x) {					\
+		PMEVN_CASE(0,  case_macro);			\
+		PMEVN_CASE(1,  case_macro);			\
+		PMEVN_CASE(2,  case_macro);			\
+		PMEVN_CASE(3,  case_macro);			\
+		PMEVN_CASE(4,  case_macro);			\
+		PMEVN_CASE(5,  case_macro);			\
+		PMEVN_CASE(6,  case_macro);			\
+		PMEVN_CASE(7,  case_macro);			\
+		PMEVN_CASE(8,  case_macro);			\
+		PMEVN_CASE(9,  case_macro);			\
+		PMEVN_CASE(10, case_macro);			\
+		PMEVN_CASE(11, case_macro);			\
+		PMEVN_CASE(12, case_macro);			\
+		PMEVN_CASE(13, case_macro);			\
+		PMEVN_CASE(14, case_macro);			\
+		PMEVN_CASE(15, case_macro);			\
+		PMEVN_CASE(16, case_macro);			\
+		PMEVN_CASE(17, case_macro);			\
+		PMEVN_CASE(18, case_macro);			\
+		PMEVN_CASE(19, case_macro);			\
+		PMEVN_CASE(20, case_macro);			\
+		PMEVN_CASE(21, case_macro);			\
+		PMEVN_CASE(22, case_macro);			\
+		PMEVN_CASE(23, case_macro);			\
+		PMEVN_CASE(24, case_macro);			\
+		PMEVN_CASE(25, case_macro);			\
+		PMEVN_CASE(26, case_macro);			\
+		PMEVN_CASE(27, case_macro);			\
+		PMEVN_CASE(28, case_macro);			\
+		PMEVN_CASE(29, case_macro);			\
+		PMEVN_CASE(30, case_macro);			\
+		default:					\
+			WARN(1, "Invalid PMEV* index\n");	\
+			assert(0);				\
+		}						\
+	} while (0)
+
+#endif
-- 
2.42.0.609.gbb76f46606-goog


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH v7 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test
  2023-10-09 23:08 [PATCH v7 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
                   ` (8 preceding siblings ...)
  2023-10-09 23:08 ` [PATCH v7 09/12] tools: Import arm_pmuv3.h Raghavendra Rao Ananta
@ 2023-10-09 23:08 ` Raghavendra Rao Ananta
  2023-10-12 11:24   ` Sebastian Ott
                     ` (2 more replies)
  2023-10-09 23:08 ` [PATCH v7 11/12] KVM: selftests: aarch64: vPMU register test for implemented counters Raghavendra Rao Ananta
  2023-10-09 23:08 ` [PATCH v7 12/12] KVM: selftests: aarch64: vPMU register test for unimplemented counters Raghavendra Rao Ananta
  11 siblings, 3 replies; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-09 23:08 UTC (permalink / raw)
  To: Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

From: Reiji Watanabe <reijiw@google.com>

Introduce the vpmu_counter_access test for arm64 platforms.
The test configures PMUv3 for a vCPU, sets PMCR_EL0.N for the
vCPU, and checks whether the guest consistently sees the same
number of PMU event counters (PMCR_EL0.N) that userspace set.
The test is run with each of the PMCR_EL0.N values from 0 to 31
(with PMCR_EL0.N values greater than the host value, the test
expects KVM_SET_ONE_REG for PMCR_EL0 to fail).

Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../kvm/aarch64/vpmu_counter_access.c         | 247 ++++++++++++++++++
 2 files changed, 248 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index a3bb36fb3cfc..416700aa196c 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -149,6 +149,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/smccc_filter
 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
+TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
 TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
new file mode 100644
index 000000000000..58949b17d76e
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -0,0 +1,247 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * vpmu_counter_access - Test vPMU event counter access
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
+ * This test checks if the guest can see the same number of the PMU event
+ * counters (PMCR_EL0.N) that userspace sets.
+ * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
+ */
+#include <kvm_util.h>
+#include <processor.h>
+#include <test_util.h>
+#include <vgic.h>
+#include <perf/arm_pmuv3.h>
+#include <linux/bitfield.h>
+
+/* The max number of the PMU event counters (excluding the cycle counter) */
+#define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
+
+struct vpmu_vm {
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+};
+
+static struct vpmu_vm vpmu_vm;
+
+static uint64_t get_pmcr_n(uint64_t pmcr)
+{
+	return (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
+}
+
+static void set_pmcr_n(uint64_t *pmcr, uint64_t pmcr_n)
+{
+	*pmcr = *pmcr & ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
+	*pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
+}
+
+static void guest_sync_handler(struct ex_regs *regs)
+{
+	uint64_t esr, ec;
+
+	esr = read_sysreg(esr_el1);
+	ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;
+	__GUEST_ASSERT(0, "PC: 0x%lx; ESR: 0x%lx; EC: 0x%lx", regs->pc, esr, ec);
+}
+
+/*
+ * The guest is configured with PMUv3 with @expected_pmcr_n number of
+ * event counters.
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
+ */
+static void guest_code(uint64_t expected_pmcr_n)
+{
+	uint64_t pmcr, pmcr_n;
+
+	__GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS,
+			"Expected PMCR.N: 0x%lx; ARMv8 general counters: 0x%lx",
+			expected_pmcr_n, ARMV8_PMU_MAX_GENERAL_COUNTERS);
+
+	pmcr = read_sysreg(pmcr_el0);
+	pmcr_n = get_pmcr_n(pmcr);
+
+	/* Make sure that PMCR_EL0.N indicates the value userspace set */
+	__GUEST_ASSERT(pmcr_n == expected_pmcr_n,
+			"Expected PMCR.N: 0x%lx, PMCR.N: 0x%lx",
+			expected_pmcr_n, pmcr_n);
+
+	GUEST_DONE();
+}
+
+#define GICD_BASE_GPA	0x8000000ULL
+#define GICR_BASE_GPA	0x80A0000ULL
+
+/* Create a VM that has one vCPU with PMUv3 configured. */
+static void create_vpmu_vm(void *guest_code)
+{
+	struct kvm_vcpu_init init;
+	uint8_t pmuver, ec;
+	uint64_t dfr0, irq = 23;
+	struct kvm_device_attr irq_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
+		.addr = (uint64_t)&irq,
+	};
+	struct kvm_device_attr init_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
+	};
+
+	/* The test creates the vpmu_vm multiple times. Ensure a clean state */
+	memset(&vpmu_vm, 0, sizeof(vpmu_vm));
+
+	vpmu_vm.vm = vm_create(1);
+	vm_init_descriptor_tables(vpmu_vm.vm);
+	for (ec = 0; ec < ESR_EC_NUM; ec++) {
+		vm_install_sync_handler(vpmu_vm.vm, VECTOR_SYNC_CURRENT, ec,
+					guest_sync_handler);
+	}
+
+	/* Create vCPU with PMUv3 */
+	vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	vpmu_vm.vcpu = aarch64_vcpu_add(vpmu_vm.vm, 0, &init, guest_code);
+	vcpu_init_descriptor_tables(vpmu_vm.vcpu);
+	vpmu_vm.gic_fd = vgic_v3_setup(vpmu_vm.vm, 1, 64,
+					GICD_BASE_GPA, GICR_BASE_GPA);
+
+	/* Make sure that PMUv3 support is indicated in the ID register */
+	vcpu_get_reg(vpmu_vm.vcpu,
+		     KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
+	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
+	TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
+		    pmuver >= ID_AA64DFR0_PMUVER_8_0,
+		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
+
+	/* Initialize vPMU */
+	vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
+	vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+}
+
+static void destroy_vpmu_vm(void)
+{
+	close(vpmu_vm.gic_fd);
+	kvm_vm_free(vpmu_vm.vm);
+}
+
+static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
+{
+	struct ucall uc;
+
+	vcpu_args_set(vcpu, 1, pmcr_n);
+	vcpu_run(vcpu);
+	switch (get_ucall(vcpu, &uc)) {
+	case UCALL_ABORT:
+		REPORT_GUEST_ASSERT(uc);
+		break;
+	case UCALL_DONE:
+		break;
+	default:
+		TEST_FAIL("Unknown ucall %lu", uc.cmd);
+		break;
+	}
+}
+
+/*
+ * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
+ * and run the test.
+ */
+static void run_test(uint64_t pmcr_n)
+{
+	struct kvm_vcpu *vcpu;
+	uint64_t sp, pmcr;
+	struct kvm_vcpu_init init;
+
+	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
+	create_vpmu_vm(guest_code);
+
+	vcpu = vpmu_vm.vcpu;
+
+	/* Save the initial sp to restore it later when running the guest again */
+	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
+
+	/* Update the PMCR_EL0.N with @pmcr_n */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	set_pmcr_n(&pmcr, pmcr_n);
+	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
+
+	run_vcpu(vcpu, pmcr_n);
+
+	/*
+	 * Reset and re-initialize the vCPU, and run the guest code again to
+	 * check if PMCR_EL0.N is preserved.
+	 */
+	vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	aarch64_vcpu_setup(vcpu, &init);
+	vcpu_init_descriptor_tables(vcpu);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
+
+	run_vcpu(vcpu, pmcr_n);
+
+	destroy_vpmu_vm();
+}
+
+/*
+ * Create a guest with one vCPU, and attempt to set the PMCR_EL0.N for
+ * the vCPU to @pmcr_n, which is larger than the host value.
+ * KVM is expected to ignore the write: the ioctl succeeds, but
+ * PMCR_EL0.N is left unmodified, as @pmcr_n is too big for the vCPU.
+ */
+static void run_error_test(uint64_t pmcr_n)
+{
+	struct kvm_vcpu *vcpu;
+	uint64_t pmcr, pmcr_orig;
+
+	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
+	create_vpmu_vm(guest_code);
+	vcpu = vpmu_vm.vcpu;
+
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
+	pmcr = pmcr_orig;
+
+	/*
+	 * Setting a larger value of PMCR.N should not modify the field, and
+	 * should return success.
+	 */
+	set_pmcr_n(&pmcr, pmcr_n);
+	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	TEST_ASSERT(pmcr_orig == pmcr,
+		    "PMCR.N modified by KVM to a larger value (PMCR: 0x%lx) for pmcr_n: 0x%lx",
+		    pmcr, pmcr_n);
+
+	destroy_vpmu_vm();
+}
+
+/*
+ * Return the default number of implemented PMU event counters excluding
+ * the cycle counter (i.e. PMCR_EL0.N value) for the guest.
+ */
+static uint64_t get_pmcr_n_limit(void)
+{
+	uint64_t pmcr;
+
+	create_vpmu_vm(guest_code);
+	vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	destroy_vpmu_vm();
+	return get_pmcr_n(pmcr);
+}
+
+int main(void)
+{
+	uint64_t i, pmcr_n;
+
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+
+	pmcr_n = get_pmcr_n_limit();
+	for (i = 0; i <= pmcr_n; i++)
+		run_test(i);
+
+	for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
+		run_error_test(i);
+
+	return 0;
+}
-- 
2.42.0.609.gbb76f46606-goog


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH v7 11/12] KVM: selftests: aarch64: vPMU register test for implemented counters
  2023-10-09 23:08 [PATCH v7 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
                   ` (9 preceding siblings ...)
  2023-10-09 23:08 ` [PATCH v7 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test Raghavendra Rao Ananta
@ 2023-10-09 23:08 ` Raghavendra Rao Ananta
  2023-10-17 18:54   ` Eric Auger
  2023-10-09 23:08 ` [PATCH v7 12/12] KVM: selftests: aarch64: vPMU register test for unimplemented counters Raghavendra Rao Ananta
  11 siblings, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-09 23:08 UTC (permalink / raw)
  To: Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

From: Reiji Watanabe <reijiw@google.com>

Add a new test case to the vpmu_counter_access test to check if PMU
registers or their bits for implemented counters on the vCPU are
readable/writable as expected, and can be programmed to count events.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../kvm/aarch64/vpmu_counter_access.c         | 270 +++++++++++++++++-
 1 file changed, 268 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index 58949b17d76e..e92af3c0db03 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -5,7 +5,8 @@
  * Copyright (c) 2022 Google LLC.
  *
  * This test checks if the guest can see the same number of the PMU event
- * counters (PMCR_EL0.N) that userspace sets.
+ * counters (PMCR_EL0.N) that userspace sets, and if the guest can access
+ * those counters.
  * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
  */
 #include <kvm_util.h>
@@ -37,6 +38,259 @@ static void set_pmcr_n(uint64_t *pmcr, uint64_t pmcr_n)
 	*pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
 }
 
+/* Read PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
+static inline unsigned long read_sel_evcntr(int sel)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	return read_sysreg(pmxevcntr_el0);
+}
+
+/* Write PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
+static inline void write_sel_evcntr(int sel, unsigned long val)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	write_sysreg(val, pmxevcntr_el0);
+	isb();
+}
+
+/* Read PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */
+static inline unsigned long read_sel_evtyper(int sel)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	return read_sysreg(pmxevtyper_el0);
+}
+
+/* Write PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */
+static inline void write_sel_evtyper(int sel, unsigned long val)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	write_sysreg(val, pmxevtyper_el0);
+	isb();
+}
+
+static inline void enable_counter(int idx)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(BIT(idx) | v, pmcntenset_el0);
+	isb();
+}
+
+static inline void disable_counter(int idx)
+{
+	/*
+	 * PMCNTENCLR_EL0 has write-1-to-clear semantics; write only the
+	 * bit for @idx so the other counters are left enabled.
+	 */
+	write_sysreg(BIT(idx), pmcntenclr_el0);
+	isb();
+}
+
+static void pmu_disable_reset(void)
+{
+	uint64_t pmcr = read_sysreg(pmcr_el0);
+
+	/* Reset all counters, disabling them */
+	pmcr &= ~ARMV8_PMU_PMCR_E;
+	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
+	isb();
+}
+
+#define RETURN_READ_PMEVCNTRN(n) \
+	return read_sysreg(pmevcntr##n##_el0)
+static unsigned long read_pmevcntrn(int n)
+{
+	PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
+	return 0;
+}
+
+#define WRITE_PMEVCNTRN(n) \
+	write_sysreg(val, pmevcntr##n##_el0)
+static void write_pmevcntrn(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
+	isb();
+}
+
+#define READ_PMEVTYPERN(n) \
+	return read_sysreg(pmevtyper##n##_el0)
+static unsigned long read_pmevtypern(int n)
+{
+	PMEVN_SWITCH(n, READ_PMEVTYPERN);
+	return 0;
+}
+
+#define WRITE_PMEVTYPERN(n) \
+	write_sysreg(val, pmevtyper##n##_el0)
+static void write_pmevtypern(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
+	isb();
+}
+
+/*
+ * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}<n>_EL0
+ * accessors that test cases will use. Each of the accessors will
+ * either directly reads/writes PMEVT{CNTR,TYPER}<n>_EL0
+ * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through
+ * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
+ *
+ * This is used to test that combinations of those accessors provide
+ * consistent behavior.
+ */
+struct pmc_accessor {
+	/* A function to be used to read PMEVTCNTR<n>_EL0 */
+	unsigned long	(*read_cntr)(int idx);
+	/* A function to be used to write PMEVTCNTR<n>_EL0 */
+	void		(*write_cntr)(int idx, unsigned long val);
+	/* A function to be used to read PMEVTYPER<n>_EL0 */
+	unsigned long	(*read_typer)(int idx);
+	/* A function to be used to write PMEVTYPER<n>_EL0 */
+	void		(*write_typer)(int idx, unsigned long val);
+};
+
+struct pmc_accessor pmc_accessors[] = {
+	/* test with all direct accesses */
+	{ read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern },
+	/* test with all indirect accesses */
+	{ read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper },
+	/* read with direct accesses, and write with indirect accesses */
+	{ read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper },
+	/* read with indirect accesses, and write with direct accesses */
+	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
+};
+
+/*
+ * Convert a pointer of pmc_accessor to an index in pmc_accessors[],
+ * assuming that the pointer is one of the entries in pmc_accessors[].
+ */
+#define PMC_ACC_TO_IDX(acc)	(acc - &pmc_accessors[0])
+
+#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected)			 \
+{										 \
+	uint64_t _tval = read_sysreg(regname);					 \
+										 \
+	if (set_expected)							 \
+		__GUEST_ASSERT((_tval & mask),					 \
+				"tval: 0x%lx; mask: 0x%lx; set_expected: 0x%lx", \
+				_tval, mask, set_expected);			 \
+	else									 \
+		__GUEST_ASSERT(!(_tval & mask),					 \
+				"tval: 0x%lx; mask: 0x%lx; set_expected: 0x%lx", \
+				_tval, mask, set_expected);			 \
+}
+
+/*
+ * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
+ * are set or cleared as specified in @set_expected.
+ */
+static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected)
+{
+	GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmintenset_el1, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmintenclr_el1, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected);
+}
+
+/*
+ * Check if the bit in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers corresponding
+ * to the specified counter (@pmc_idx) can be read/written as expected.
+ * When @set_op is true, it tries to set the bit for the counter in
+ * those registers by writing the SET registers (the bit won't be set
+ * if the counter is not implemented though).
+ * Otherwise, it tries to clear the bits in the registers by writing
+ * the CLR registers.
+ * Then, it checks if the values indicated in the registers are as expected.
+ */
+static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
+{
+	uint64_t pmcr_n, test_bit = BIT(pmc_idx);
+	bool set_expected = false;
+
+	if (set_op) {
+		write_sysreg(test_bit, pmcntenset_el0);
+		write_sysreg(test_bit, pmintenset_el1);
+		write_sysreg(test_bit, pmovsset_el0);
+
+		/* The bit will be set only if the counter is implemented */
+		pmcr_n = get_pmcr_n(read_sysreg(pmcr_el0));
+		set_expected = pmc_idx < pmcr_n;
+	} else {
+		write_sysreg(test_bit, pmcntenclr_el0);
+		write_sysreg(test_bit, pmintenclr_el1);
+		write_sysreg(test_bit, pmovsclr_el0);
+	}
+	check_bitmap_pmu_regs(test_bit, set_expected);
+}
+
+/*
+ * Tests for reading/writing registers for the (implemented) event counter
+ * specified by @pmc_idx.
+ */
+static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
+{
+	uint64_t write_data, read_data;
+
+	/* Disable all PMCs and reset all PMCs to zero. */
+	pmu_disable_reset();
+
+
+	/*
+	 * Tests for reading/writing {PMCNTEN,PMINTEN,PMOVS}{SET,CLR}_EL1.
+	 */
+
+	/* Make sure that the bits in those registers are set to 0 */
+	test_bitmap_pmu_regs(pmc_idx, false);
+	/* Test if setting the bit in those registers works */
+	test_bitmap_pmu_regs(pmc_idx, true);
+	/* Test if clearing the bit in those registers works */
+	test_bitmap_pmu_regs(pmc_idx, false);
+
+
+	/*
+	 * Tests for reading/writing the event type register.
+	 */
+
+	read_data = acc->read_typer(pmc_idx);
+	/*
+	 * Set the event type register to an arbitrary value just for testing
+	 * of reading/writing the register.
+	 * The Arm ARM says that for events 0x0000 to 0x003F, the
+	 * PMEVTYPER<n>_EL0.evtCount field holds the value that was
+	 * written to it, even when the specified event is not supported.
+	 */
+	write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+	acc->write_typer(pmc_idx, write_data);
+	read_data = acc->read_typer(pmc_idx);
+	__GUEST_ASSERT(read_data == write_data,
+		       "pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx; write_data: 0x%lx",
+		       pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data);
+
+
+	/*
+	 * Tests for reading/writing the event count register.
+	 */
+
+	read_data = acc->read_cntr(pmc_idx);
+
+	/* The counter value must be 0, as it has not been used since the reset */
+	__GUEST_ASSERT(read_data == 0,
+		       "pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx",
+		       pmc_idx, PMC_ACC_TO_IDX(acc), read_data);
+
+	write_data = read_data + pmc_idx + 0x12345;
+	acc->write_cntr(pmc_idx, write_data);
+	read_data = acc->read_cntr(pmc_idx);
+	__GUEST_ASSERT(read_data == write_data,
+		       "pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx; write_data: 0x%lx",
+		       pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data);
+}
+
 static void guest_sync_handler(struct ex_regs *regs)
 {
 	uint64_t esr, ec;
@@ -49,11 +303,14 @@ static void guest_sync_handler(struct ex_regs *regs)
 /*
  * The guest is configured with PMUv3 with @expected_pmcr_n number of
  * event counters.
- * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
+ * if reading/writing PMU registers for implemented counters can work
+ * as expected.
  */
 static void guest_code(uint64_t expected_pmcr_n)
 {
 	uint64_t pmcr, pmcr_n;
+	int i, pmc;
 
 	__GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS,
 			"Expected PMCR.N: 0x%lx; ARMv8 general counters: 0x%lx",
@@ -67,6 +324,15 @@ static void guest_code(uint64_t expected_pmcr_n)
 			"Expected PMCR.N: 0x%lx, PMCR.N: 0x%lx",
 			expected_pmcr_n, pmcr_n);
 
+	/*
+	 * Tests for reading/writing PMU registers for implemented counters.
+	 * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		for (pmc = 0; pmc < pmcr_n; pmc++)
+			test_access_pmc_regs(&pmc_accessors[i], pmc);
+	}
+
 	GUEST_DONE();
 }
 
-- 
2.42.0.609.gbb76f46606-goog


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH v7 12/12] KVM: selftests: aarch64: vPMU register test for unimplemented counters
  2023-10-09 23:08 [PATCH v7 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
                   ` (10 preceding siblings ...)
  2023-10-09 23:08 ` [PATCH v7 11/12] KVM: selftests: aarch64: vPMU register test for implemented counters Raghavendra Rao Ananta
@ 2023-10-09 23:08 ` Raghavendra Rao Ananta
  2023-10-18  6:54   ` Eric Auger
  11 siblings, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-09 23:08 UTC (permalink / raw)
  To: Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

From: Reiji Watanabe <reijiw@google.com>

Add a new test case to the vpmu_counter_access test to check
if PMU registers or their bits for unimplemented counters are not
accessible or are RAZ, as expected.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../kvm/aarch64/vpmu_counter_access.c         | 95 +++++++++++++++++--
 .../selftests/kvm/include/aarch64/processor.h |  1 +
 2 files changed, 87 insertions(+), 9 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index e92af3c0db03..788386ac0894 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -5,8 +5,8 @@
  * Copyright (c) 2022 Google LLC.
  *
  * This test checks if the guest can see the same number of the PMU event
- * counters (PMCR_EL0.N) that userspace sets, and if the guest can access
- * those counters.
+ * counters (PMCR_EL0.N) that userspace sets, if the guest can access
+ * those counters, and if the guest cannot access any other counters.
  * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
  */
 #include <kvm_util.h>
@@ -131,9 +131,9 @@ static void write_pmevtypern(int n, unsigned long val)
 }
 
 /*
- * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}<n>_EL0
+ * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}<n>_EL0
  * accessors that test cases will use. Each of the accessors will
- * either directly reads/writes PMEVT{CNTR,TYPER}<n>_EL0
+ * either directly reads/writes PMEV{CNTR,TYPER}<n>_EL0
  * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through
  * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
  *
@@ -291,25 +291,85 @@ static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
 		       pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data);
 }
 
+#define INVALID_EC	(-1ul)
+uint64_t expected_ec = INVALID_EC;
+uint64_t op_end_addr;
+
 static void guest_sync_handler(struct ex_regs *regs)
 {
 	uint64_t esr, ec;
 
 	esr = read_sysreg(esr_el1);
 	ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;
-	__GUEST_ASSERT(0, "PC: 0x%lx; ESR: 0x%lx; EC: 0x%lx", regs->pc, esr, ec);
+
+	__GUEST_ASSERT(op_end_addr && (expected_ec == ec),
+			"PC: 0x%lx; ESR: 0x%lx; EC: 0x%lx; EC expected: 0x%lx",
+			regs->pc, esr, ec, expected_ec);
+
+	/* Will go back to op_end_addr after the handler exits */
+	regs->pc = op_end_addr;
+
+	/*
+	 * Clear op_end_addr, and set expected_ec to INVALID_EC, as a sign
+	 * that an exception has occurred.
+	 */
+	op_end_addr = 0;
+	expected_ec = INVALID_EC;
+}
+
+/*
+ * Run the given operation that should trigger an exception with the
+ * given exception class. The exception handler (guest_sync_handler)
+ * will reset op_end_addr to 0, and expected_ec to INVALID_EC, and
+ * will come back to the instruction at the @done_label.
+ * The @done_label must be a unique label in this test program.
+ */
+#define TEST_EXCEPTION(ec, ops, done_label)		\
+{							\
+	extern int done_label;				\
+							\
+	WRITE_ONCE(op_end_addr, (uint64_t)&done_label);	\
+	GUEST_ASSERT(ec != INVALID_EC);			\
+	WRITE_ONCE(expected_ec, ec);			\
+	dsb(ish);					\
+	ops;						\
+	asm volatile(#done_label":");			\
+	GUEST_ASSERT(!op_end_addr);			\
+	GUEST_ASSERT(expected_ec == INVALID_EC);	\
+}
+
+/*
+ * Tests for reading/writing registers for the unimplemented event counter
+ * specified by @pmc_idx (>= PMCR_EL0.N).
+ */
+static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
+{
+	/*
+	 * Reading/writing the event count/type registers should cause
+	 * an UNDEFINED exception.
+	 */
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_cntr(pmc_idx), inv_rd_cntr);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_cntr(pmc_idx, 0), inv_wr_cntr);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_typer(pmc_idx), inv_rd_typer);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_typer(pmc_idx, 0), inv_wr_typer);
+	/*
+	 * The bit corresponding to the (unimplemented) counter in
+	 * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers should be RAZ.
+	 */
+	test_bitmap_pmu_regs(pmc_idx, true);
+	test_bitmap_pmu_regs(pmc_idx, false);
 }
 
 /*
  * The guest is configured with PMUv3 with @expected_pmcr_n number of
  * event counters.
  * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
- * if reading/writing PMU registers for implemented counters can work
- * as expected.
+ * if reading/writing PMU registers for implemented or unimplemented
+ * counters can work as expected.
  */
 static void guest_code(uint64_t expected_pmcr_n)
 {
-	uint64_t pmcr, pmcr_n;
+	uint64_t pmcr, pmcr_n, unimp_mask;
 	int i, pmc;
 
 	__GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS,
@@ -324,15 +384,32 @@ static void guest_code(uint64_t expected_pmcr_n)
 			"Expected PMCR.N: 0x%lx, PMCR.N: 0x%lx",
 			expected_pmcr_n, pmcr_n);
 
+	/*
+	 * Make sure that (RAZ) bits corresponding to unimplemented event
+	 * counters in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers are reset to zero.
+	 * (NOTE: bits for implemented event counters are reset to UNKNOWN)
+	 */
+	unimp_mask = GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n);
+	check_bitmap_pmu_regs(unimp_mask, false);
+
 	/*
 	 * Tests for reading/writing PMU registers for implemented counters.
-	 * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
+	 * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
 	 */
 	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
 		for (pmc = 0; pmc < pmcr_n; pmc++)
 			test_access_pmc_regs(&pmc_accessors[i], pmc);
 	}
 
+	/*
+	 * Tests for reading/writing PMU registers for unimplemented counters.
+	 * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++)
+			test_access_invalid_pmc_regs(&pmc_accessors[i], pmc);
+	}
+
 	GUEST_DONE();
 }
 
diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index cb537253a6b9..c42d683102c7 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -104,6 +104,7 @@ enum {
 #define ESR_EC_SHIFT		26
 #define ESR_EC_MASK		(ESR_EC_NUM - 1)
 
+#define ESR_EC_UNKNOWN		0x0
 #define ESR_EC_SVC64		0x15
 #define ESR_EC_IABT		0x21
 #define ESR_EC_DABT		0x25
-- 
2.42.0.609.gbb76f46606-goog


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* Re: [PATCH v7 02/12] KVM: arm64: PMU: Set the default PMU for the guest before vCPU reset
  2023-10-09 23:08 ` [PATCH v7 02/12] KVM: arm64: PMU: Set the default PMU for the guest before vCPU reset Raghavendra Rao Ananta
@ 2023-10-10 22:25   ` Oliver Upton
  2023-10-13 20:27     ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 60+ messages in thread
From: Oliver Upton @ 2023-10-10 22:25 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
	Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
	Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

Hi Raghu,

On Mon, Oct 09, 2023 at 11:08:48PM +0000, Raghavendra Rao Ananta wrote:
> From: Reiji Watanabe <reijiw@google.com>
> 
> The following patches will use the number of counters information
> from the arm_pmu and use this to set the PMCR.N for the guest
> during vCPU reset. However, since the guest is not associated
> with any arm_pmu until userspace configures the vPMU device
> attributes, and a reset can happen before this event, assign a
> default PMU to the guest just before doing the reset.
> 
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  arch/arm64/kvm/arm.c      | 20 ++++++++++++++++++++
>  arch/arm64/kvm/pmu-emul.c | 12 ++----------
>  include/kvm/arm_pmu.h     |  6 ++++++
>  3 files changed, 28 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 78b0970eb8e6..708a53b70a7b 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -1313,6 +1313,23 @@ static bool kvm_vcpu_init_changed(struct kvm_vcpu *vcpu,
>  			     KVM_VCPU_MAX_FEATURES);
>  }
>  
> +static int kvm_vcpu_set_pmu(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm *kvm = vcpu->kvm;
> +
> +	if (!kvm_arm_support_pmu_v3())
> +		return -EINVAL;

This check is pointless; the vCPU feature flags have been sanitised at
this point, and a requirement of having PMUv3 is that this predicate is
true.

> +	/*
> +	 * When the vCPU has a PMU, but no PMU is set for the guest
> +	 * yet, set the default one.
> +	 */
> +	if (unlikely(!kvm->arch.arm_pmu))
> +		return kvm_arm_set_default_pmu(kvm);
> +
> +	return 0;
> +}
> +

Apologies, I believe I was unclear last time around as to what I was
wanting here. Let's call this thing kvm_setup_vcpu() such that we can
add other one-time setup activities to it in the future.

Something like:

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 96641e442039..4896a44108e0 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1265,19 +1265,17 @@ static bool kvm_vcpu_init_changed(struct kvm_vcpu *vcpu,
 			     KVM_VCPU_MAX_FEATURES);
 }
 
-static int kvm_vcpu_set_pmu(struct kvm_vcpu *vcpu)
+static int kvm_setup_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = vcpu->kvm;
 
-	if (!kvm_arm_support_pmu_v3())
-		return -EINVAL;
-
 	/*
 	 * When the vCPU has a PMU, but no PMU is set for the guest
 	 * yet, set the default one.
 	 */
-	if (unlikely(!kvm->arch.arm_pmu))
-		return kvm_arm_set_default_pmu(kvm);
+	if (kvm_vcpu_has_pmu(vcpu) && !kvm->arch.arm_pmu &&
+	    kvm_arm_set_default_pmu(kvm))
+		return -EINVAL;
 
 	return 0;
 }
@@ -1297,7 +1295,8 @@ static int __kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 
 	bitmap_copy(kvm->arch.vcpu_features, &features, KVM_VCPU_MAX_FEATURES);
 
-	if (kvm_vcpu_has_pmu(vcpu) && kvm_vcpu_set_pmu(vcpu))
+	ret = kvm_setup_vcpu(vcpu);
+	if (ret)
 		goto out_unlock;
 
 	/* Now we know what it is, we can reset it. */

-- 
Thanks,
Oliver

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* Re: [PATCH v7 06/12] KVM: arm64: PMU: Add a helper to read the number of counters
  2023-10-09 23:08 ` [PATCH v7 06/12] KVM: arm64: PMU: Add a helper to read the number of counters Raghavendra Rao Ananta
@ 2023-10-10 22:30   ` Oliver Upton
  2023-10-13  5:43     ` Oliver Upton
  0 siblings, 1 reply; 60+ messages in thread
From: Oliver Upton @ 2023-10-10 22:30 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
	Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
	Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

On Mon, Oct 09, 2023 at 11:08:52PM +0000, Raghavendra Rao Ananta wrote:
> Add a helper, kvm_arm_get_num_counters(), to read the number
> of counters from the arm_pmu associated to the VM. Make the
> function global as upcoming patches will be interested to
> know the value while setting the PMCR.N of the guest from
> userspace.
> 
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  arch/arm64/kvm/pmu-emul.c | 17 +++++++++++++++++
>  include/kvm/arm_pmu.h     |  6 ++++++
>  2 files changed, 23 insertions(+)
> 
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index a161d6266a5c..84aa8efd9163 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -873,6 +873,23 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
>  	return true;
>  }
>  
> +/**
> + * kvm_arm_get_num_counters - Get the number of general-purpose PMU counters.
> + * @kvm: The kvm pointer
> + */
> +int kvm_arm_get_num_counters(struct kvm *kvm)

nit: the naming suggests this returns the configured number of PMCs, not
the limit.

Maybe kvm_arm_pmu_get_max_counters()?

> +{
> +	struct arm_pmu *arm_pmu = kvm->arch.arm_pmu;
> +
> +	lockdep_assert_held(&kvm->arch.config_lock);
> +
> +	/*
> +	 * The arm_pmu->num_events considers the cycle counter as well.
> +	 * Ignore that and return only the general-purpose counters.
> +	 */
> +	return arm_pmu->num_events - 1;
> +}
> +
>  static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
>  {
>  	lockdep_assert_held(&kvm->arch.config_lock);
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index cd980d78b86b..672f3e9d7eea 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -102,6 +102,7 @@ void kvm_vcpu_pmu_resync_el0(void);
>  
>  u8 kvm_arm_pmu_get_pmuver_limit(void);
>  int kvm_arm_set_default_pmu(struct kvm *kvm);
> +int kvm_arm_get_num_counters(struct kvm *kvm);
>  
>  u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu);
>  #else
> @@ -181,6 +182,11 @@ static inline int kvm_arm_set_default_pmu(struct kvm *kvm)
>  	return -ENODEV;
>  }
>  
> +static inline int kvm_arm_get_num_counters(struct kvm *kvm)
> +{
> +	return -ENODEV;
> +}
> +
>  static inline u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
>  {
>  	return 0;
> -- 
> 2.42.0.609.gbb76f46606-goog
> 

-- 
Thanks,
Oliver

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v7 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test
  2023-10-09 23:08 ` [PATCH v7 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test Raghavendra Rao Ananta
@ 2023-10-12 11:24   ` Sebastian Ott
  2023-10-12 15:01     ` Sebastian Ott
  2023-10-17 14:51   ` Eric Auger
  2023-10-17 15:48   ` Sebastian Ott
  2 siblings, 1 reply; 60+ messages in thread
From: Sebastian Ott @ 2023-10-12 11:24 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

[-- Attachment #1: Type: text/plain, Size: 2146 bytes --]

On Mon, 9 Oct 2023, Raghavendra Rao Ananta wrote:
> +/* Create a VM that has one vCPU with PMUv3 configured. */
> +static void create_vpmu_vm(void *guest_code)
> +{
> +	struct kvm_vcpu_init init;
> +	uint8_t pmuver, ec;
> +	uint64_t dfr0, irq = 23;
> +	struct kvm_device_attr irq_attr = {
> +		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
> +		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
> +		.addr = (uint64_t)&irq,
> +	};
> +	struct kvm_device_attr init_attr = {
> +		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
> +		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
> +	};
> +
> +	/* The test creates the vpmu_vm multiple times. Ensure a clean state */
> +	memset(&vpmu_vm, 0, sizeof(vpmu_vm));
> +
> +	vpmu_vm.vm = vm_create(1);
> +	vm_init_descriptor_tables(vpmu_vm.vm);
> +	for (ec = 0; ec < ESR_EC_NUM; ec++) {
> +		vm_install_sync_handler(vpmu_vm.vm, VECTOR_SYNC_CURRENT, ec,
> +					guest_sync_handler);
> +	}
> +
> +	/* Create vCPU with PMUv3 */
> +	vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
> +	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
> +	vpmu_vm.vcpu = aarch64_vcpu_add(vpmu_vm.vm, 0, &init, guest_code);
> +	vcpu_init_descriptor_tables(vpmu_vm.vcpu);
> +	vpmu_vm.gic_fd = vgic_v3_setup(vpmu_vm.vm, 1, 64,
> +					GICD_BASE_GPA, GICR_BASE_GPA);
> +
> +	/* Make sure that PMUv3 support is indicated in the ID register */
> +	vcpu_get_reg(vpmu_vm.vcpu,
> +		     KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
> +	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
> +	TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
> +		    pmuver >= ID_AA64DFR0_PMUVER_8_0,
> +		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
> +
> +	/* Initialize vPMU */
> +	vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
> +	vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
> +}

This one fails to build for me:
aarch64/vpmu_counter_access.c: In function ‘create_vpmu_vm’:
aarch64/vpmu_counter_access.c:456:47: error: ‘ID_AA64DFR0_PMUVER_MASK’ undeclared (first use in this function); did you mean ‘ID_AA64DFR0_EL1_PMUVer_MASK’?
   456 |         pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);

Regards,
Sebastian

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v7 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test
  2023-10-12 11:24   ` Sebastian Ott
@ 2023-10-12 15:01     ` Sebastian Ott
  2023-10-13 21:05       ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 60+ messages in thread
From: Sebastian Ott @ 2023-10-12 15:01 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm


On Thu, 12 Oct 2023, Sebastian Ott wrote:
> On Mon, 9 Oct 2023, Raghavendra Rao Ananta wrote:
>>  +/* Create a VM that has one vCPU with PMUv3 configured. */
>>  +static void create_vpmu_vm(void *guest_code)
>>  +{
>>  +	struct kvm_vcpu_init init;
>>  +	uint8_t pmuver, ec;
>>  +	uint64_t dfr0, irq = 23;
>>  +	struct kvm_device_attr irq_attr = {
>>  +		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
>>  +		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
>>  +		.addr = (uint64_t)&irq,
>>  +	};
>>  +	struct kvm_device_attr init_attr = {
>>  +		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
>>  +		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
>>  +	};
>>  +
>>  +	/* The test creates the vpmu_vm multiple times. Ensure a clean state
>>  */
>>  +	memset(&vpmu_vm, 0, sizeof(vpmu_vm));
>>  +
>>  +	vpmu_vm.vm = vm_create(1);
>>  +	vm_init_descriptor_tables(vpmu_vm.vm);
>>  +	for (ec = 0; ec < ESR_EC_NUM; ec++) {
>>  +		vm_install_sync_handler(vpmu_vm.vm, VECTOR_SYNC_CURRENT, ec,
>>  +					guest_sync_handler);
>>  +	}
>>  +
>>  +	/* Create vCPU with PMUv3 */
>>  +	vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
>>  +	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
>>  +	vpmu_vm.vcpu = aarch64_vcpu_add(vpmu_vm.vm, 0, &init, guest_code);
>>  +	vcpu_init_descriptor_tables(vpmu_vm.vcpu);
>>  +	vpmu_vm.gic_fd = vgic_v3_setup(vpmu_vm.vm, 1, 64,
>>  +					GICD_BASE_GPA, GICR_BASE_GPA);
>>  +
>>  +	/* Make sure that PMUv3 support is indicated in the ID register */
>>  +	vcpu_get_reg(vpmu_vm.vcpu,
>>  +		     KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
>>  +	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
>>  +	TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
>>  +		    pmuver >= ID_AA64DFR0_PMUVER_8_0,
>>  +		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3",
>>  pmuver);
>>  +
>>  +	/* Initialize vPMU */
>>  +	vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
>>  +	vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
>>  +}
>
> This one fails to build for me:
> aarch64/vpmu_counter_access.c: In function ‘create_vpmu_vm’:
> aarch64/vpmu_counter_access.c:456:47: error: ‘ID_AA64DFR0_PMUVER_MASK’ 
> undeclared (first use in this function); did you mean 
> ‘ID_AA64DFR0_EL1_PMUVer_MASK’?
>   456 |         pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER),
>   dfr0);

Looks like there's a clash with
"KVM: arm64: selftests: Import automatic generation of sysreg defs"
from:
 	https://lore.kernel.org/r/20231003230408.3405722-12-oliver.upton@linux.dev


* Re: [PATCH v7 06/12] KVM: arm64: PMU: Add a helper to read the number of counters
  2023-10-10 22:30   ` Oliver Upton
@ 2023-10-13  5:43     ` Oliver Upton
  2023-10-13 20:24       ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 60+ messages in thread
From: Oliver Upton @ 2023-10-13  5:43 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
	Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
	Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

On Tue, Oct 10, 2023 at 10:30:31PM +0000, Oliver Upton wrote:
> On Mon, Oct 09, 2023 at 11:08:52PM +0000, Raghavendra Rao Ananta wrote:
> > Add a helper, kvm_arm_get_num_counters(), to read the number
> > of counters from the arm_pmu associated to the VM. Make the
> > function global as upcoming patches will be interested to
> > know the value while setting the PMCR.N of the guest from
> > userspace.
> > 
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> >  arch/arm64/kvm/pmu-emul.c | 17 +++++++++++++++++
> >  include/kvm/arm_pmu.h     |  6 ++++++
> >  2 files changed, 23 insertions(+)
> > 
> > diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> > index a161d6266a5c..84aa8efd9163 100644
> > --- a/arch/arm64/kvm/pmu-emul.c
> > +++ b/arch/arm64/kvm/pmu-emul.c
> > @@ -873,6 +873,23 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
> >  	return true;
> >  }
> >  
> > +/**
> > + * kvm_arm_get_num_counters - Get the number of general-purpose PMU counters.
> > + * @kvm: The kvm pointer
> > + */
> > +int kvm_arm_get_num_counters(struct kvm *kvm)
> 
> nit: the naming suggests this returns the configured number of PMCs, not
> the limit.
> 
> Maybe kvm_arm_pmu_get_max_counters()?

Following up on the matter -- please try to avoid sending patches that
add helpers without any users. Lifting *existing* logic into a helper
and updating the callsites is itself worthy of a separate patch. But
adding a new function called by nobody doesn't do much, and can easily
be squashed into the patch that consumes the new logic.

-- 
Thanks,
Oliver


* Re: [PATCH v7 06/12] KVM: arm64: PMU: Add a helper to read the number of counters
  2023-10-13  5:43     ` Oliver Upton
@ 2023-10-13 20:24       ` Raghavendra Rao Ananta
  0 siblings, 0 replies; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-13 20:24 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
	Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
	Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

Hi Oliver,

On Thu, Oct 12, 2023 at 10:43 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Tue, Oct 10, 2023 at 10:30:31PM +0000, Oliver Upton wrote:
> > On Mon, Oct 09, 2023 at 11:08:52PM +0000, Raghavendra Rao Ananta wrote:
> > > Add a helper, kvm_arm_get_num_counters(), to read the number
> > > of counters from the arm_pmu associated to the VM. Make the
> > > function global as upcoming patches will be interested to
> > > know the value while setting the PMCR.N of the guest from
> > > userspace.
> > >
> > > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > > ---
> > >  arch/arm64/kvm/pmu-emul.c | 17 +++++++++++++++++
> > >  include/kvm/arm_pmu.h     |  6 ++++++
> > >  2 files changed, 23 insertions(+)
> > >
> > > diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> > > index a161d6266a5c..84aa8efd9163 100644
> > > --- a/arch/arm64/kvm/pmu-emul.c
> > > +++ b/arch/arm64/kvm/pmu-emul.c
> > > @@ -873,6 +873,23 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
> > >     return true;
> > >  }
> > >
> > > +/**
> > > + * kvm_arm_get_num_counters - Get the number of general-purpose PMU counters.
> > > + * @kvm: The kvm pointer
> > > + */
> > > +int kvm_arm_get_num_counters(struct kvm *kvm)
> >
> > nit: the naming suggests this returns the configured number of PMCs, not
> > the limit.
> >
> > Maybe kvm_arm_pmu_get_max_counters()?
>
Sure, kvm_arm_pmu_get_max_counters() it is!

> Following up on the matter -- please try to avoid sending patches that
> add helpers without any users. Lifting *existing* logic into a helper
> and updating the callsites is itself worthy of a separate patch. But
> adding a new function called by nobody doesn't do much, and can easily
> be squashed into the patch that consumes the new logic.
>
Sounds good. I'll squash patches of this type into the caller patches.

Thank you.
Raghavendra
> --
> Thanks,
> Oliver


* Re: [PATCH v7 02/12] KVM: arm64: PMU: Set the default PMU for the guest before vCPU reset
  2023-10-10 22:25   ` Oliver Upton
@ 2023-10-13 20:27     ` Raghavendra Rao Ananta
  0 siblings, 0 replies; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-13 20:27 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
	Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
	Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

On Tue, Oct 10, 2023 at 3:25 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Hi Raghu,
>
> On Mon, Oct 09, 2023 at 11:08:48PM +0000, Raghavendra Rao Ananta wrote:
> > From: Reiji Watanabe <reijiw@google.com>
> >
> > The following patches will use the number of counters information
> > from the arm_pmu and use this to set the PMCR.N for the guest
> > during vCPU reset. However, since the guest is not associated
> > with any arm_pmu until userspace configures the vPMU device
> > attributes, and a reset can happen before this event, assign a
> > default PMU to the guest just before doing the reset.
> >
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> >  arch/arm64/kvm/arm.c      | 20 ++++++++++++++++++++
> >  arch/arm64/kvm/pmu-emul.c | 12 ++----------
> >  include/kvm/arm_pmu.h     |  6 ++++++
> >  3 files changed, 28 insertions(+), 10 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 78b0970eb8e6..708a53b70a7b 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -1313,6 +1313,23 @@ static bool kvm_vcpu_init_changed(struct kvm_vcpu *vcpu,
> >                            KVM_VCPU_MAX_FEATURES);
> >  }
> >
> > +static int kvm_vcpu_set_pmu(struct kvm_vcpu *vcpu)
> > +{
> > +     struct kvm *kvm = vcpu->kvm;
> > +
> > +     if (!kvm_arm_support_pmu_v3())
> > +             return -EINVAL;
>
> This check is pointless; the vCPU feature flags have been sanitised at
> this point, and a requirement of having PMUv3 is that this predicate is
> true.
>
Oh yes. I'll avoid this in v8.

> > +     /*
> > +      * When the vCPU has a PMU, but no PMU is set for the guest
> > +      * yet, set the default one.
> > +      */
> > +     if (unlikely(!kvm->arch.arm_pmu))
> > +             return kvm_arm_set_default_pmu(kvm);
> > +
> > +     return 0;
> > +}
> > +
>
> Apologies, I believe I was unclear last time around as to what I was
> wanting here. Let's call this thing kvm_setup_vcpu() such that we can
> add other one-time setup activities to it in the future.
>
> Something like:
>
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 96641e442039..4896a44108e0 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -1265,19 +1265,17 @@ static bool kvm_vcpu_init_changed(struct kvm_vcpu *vcpu,
>                              KVM_VCPU_MAX_FEATURES);
>  }
>
> -static int kvm_vcpu_set_pmu(struct kvm_vcpu *vcpu)
> +static int kvm_setup_vcpu(struct kvm_vcpu *vcpu)
>  {
>         struct kvm *kvm = vcpu->kvm;
>
> -       if (!kvm_arm_support_pmu_v3())
> -               return -EINVAL;
> -
>         /*
>          * When the vCPU has a PMU, but no PMU is set for the guest
>          * yet, set the default one.
>          */
> -       if (unlikely(!kvm->arch.arm_pmu))
> -               return kvm_arm_set_default_pmu(kvm);
> +       if (kvm_vcpu_has_pmu(vcpu) && !kvm->arch.arm_pmu &&
> +           kvm_arm_set_default_pmu(kvm))
> +               return -EINVAL;
>
>         return 0;
>  }
> @@ -1297,7 +1295,8 @@ static int __kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
>
>         bitmap_copy(kvm->arch.vcpu_features, &features, KVM_VCPU_MAX_FEATURES);
>
> -       if (kvm_vcpu_has_pmu(vcpu) && kvm_vcpu_set_pmu(vcpu))
> +       ret = kvm_setup_vcpu(vcpu);
> +       if (ret)
>                 goto out_unlock;
>
>         /* Now we know what it is, we can reset it. */
>
Introducing kvm_setup_vcpu() seems better than directly calling
kvm_vcpu_set_pmu(), which feels like it's crashing a party.

Thank you.
Raghavendra
> --
> Thanks,
> Oliver


* Re: [PATCH v7 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test
  2023-10-12 15:01     ` Sebastian Ott
@ 2023-10-13 21:05       ` Raghavendra Rao Ananta
  2023-10-16 10:01         ` Sebastian Ott
  2023-10-16 18:56         ` Oliver Upton
  0 siblings, 2 replies; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-13 21:05 UTC (permalink / raw)
  To: Sebastian Ott, Oliver Upton
  Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
	Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
	Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
	linux-kernel, kvm

On Thu, Oct 12, 2023 at 8:02 AM Sebastian Ott <sebott@redhat.com> wrote:
>
> On Thu, 12 Oct 2023, Sebastian Ott wrote:
> > On Mon, 9 Oct 2023, Raghavendra Rao Ananta wrote:
> >>  +/* Create a VM that has one vCPU with PMUv3 configured. */
> >>  +static void create_vpmu_vm(void *guest_code)
> >>  +{
> >>  +   struct kvm_vcpu_init init;
> >>  +   uint8_t pmuver, ec;
> >>  +   uint64_t dfr0, irq = 23;
> >>  +   struct kvm_device_attr irq_attr = {
> >>  +           .group = KVM_ARM_VCPU_PMU_V3_CTRL,
> >>  +           .attr = KVM_ARM_VCPU_PMU_V3_IRQ,
> >>  +           .addr = (uint64_t)&irq,
> >>  +   };
> >>  +   struct kvm_device_attr init_attr = {
> >>  +           .group = KVM_ARM_VCPU_PMU_V3_CTRL,
> >>  +           .attr = KVM_ARM_VCPU_PMU_V3_INIT,
> >>  +   };
> >>  +
> >>  +   /* The test creates the vpmu_vm multiple times. Ensure a clean state
> >>  */
> >>  +   memset(&vpmu_vm, 0, sizeof(vpmu_vm));
> >>  +
> >>  +   vpmu_vm.vm = vm_create(1);
> >>  +   vm_init_descriptor_tables(vpmu_vm.vm);
> >>  +   for (ec = 0; ec < ESR_EC_NUM; ec++) {
> >>  +           vm_install_sync_handler(vpmu_vm.vm, VECTOR_SYNC_CURRENT, ec,
> >>  +                                   guest_sync_handler);
> >>  +   }
> >>  +
> >>  +   /* Create vCPU with PMUv3 */
> >>  +   vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
> >>  +   init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
> >>  +   vpmu_vm.vcpu = aarch64_vcpu_add(vpmu_vm.vm, 0, &init, guest_code);
> >>  +   vcpu_init_descriptor_tables(vpmu_vm.vcpu);
> >>  +   vpmu_vm.gic_fd = vgic_v3_setup(vpmu_vm.vm, 1, 64,
> >>  +                                   GICD_BASE_GPA, GICR_BASE_GPA);
> >>  +
> >>  +   /* Make sure that PMUv3 support is indicated in the ID register */
> >>  +   vcpu_get_reg(vpmu_vm.vcpu,
> >>  +                KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
> >>  +   pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
> >>  +   TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
> >>  +               pmuver >= ID_AA64DFR0_PMUVER_8_0,
> >>  +               "Unexpected PMUVER (0x%x) on the vCPU with PMUv3",
> >>  pmuver);
> >>  +
> >>  +   /* Initialize vPMU */
> >>  +   vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
> >>  +   vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
> >>  +}
> >
> > This one fails to build for me:
> > aarch64/vpmu_counter_access.c: In function ‘create_vpmu_vm’:
> > aarch64/vpmu_counter_access.c:456:47: error: ‘ID_AA64DFR0_PMUVER_MASK’
> > undeclared (first use in this function); did you mean
> > ‘ID_AA64DFR0_EL1_PMUVer_MASK’?
> >   456 |         pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER),
> >   dfr0);
>
> Looks like there's a clash with
> "KVM: arm64: selftests: Import automatic generation of sysreg defs"
> from:
>         https://lore.kernel.org/r/20231003230408.3405722-12-oliver.upton@linux.dev
Thanks for the pointer, Sebastian! Surprisingly, I don't see the patch
when I sync to kvmarm/next.

Oliver,

Aren't the selftest patches from the 'Enable writable ID regs' series
[1] merged into kvmarm/next? Looking at the log, I couldn't find them
and the last patch that went from the series was [2]. Am I missing
something?

Thank you.
Raghavendra

[1]: https://lore.kernel.org/all/169644154288.3677537.15121340860793882283.b4-ty@linux.dev/
[2]: https://lore.kernel.org/all/20231003230408.3405722-11-oliver.upton@linux.dev/


* Re: [PATCH v7 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test
  2023-10-13 21:05       ` Raghavendra Rao Ananta
@ 2023-10-16 10:01         ` Sebastian Ott
  2023-10-16 18:56         ` Oliver Upton
  1 sibling, 0 replies; 60+ messages in thread
From: Sebastian Ott @ 2023-10-16 10:01 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm


On Fri, 13 Oct 2023, Raghavendra Rao Ananta wrote:
> On Thu, Oct 12, 2023 at 8:02 AM Sebastian Ott <sebott@redhat.com> wrote:
>>
>> On Thu, 12 Oct 2023, Sebastian Ott wrote:
>>> On Mon, 9 Oct 2023, Raghavendra Rao Ananta wrote:
>>>>  +/* Create a VM that has one vCPU with PMUv3 configured. */
>>>>  +static void create_vpmu_vm(void *guest_code)
>>>>  +{
>>>>  +   struct kvm_vcpu_init init;
>>>>  +   uint8_t pmuver, ec;
>>>>  +   uint64_t dfr0, irq = 23;
>>>>  +   struct kvm_device_attr irq_attr = {
>>>>  +           .group = KVM_ARM_VCPU_PMU_V3_CTRL,
>>>>  +           .attr = KVM_ARM_VCPU_PMU_V3_IRQ,
>>>>  +           .addr = (uint64_t)&irq,
>>>>  +   };
>>>>  +   struct kvm_device_attr init_attr = {
>>>>  +           .group = KVM_ARM_VCPU_PMU_V3_CTRL,
>>>>  +           .attr = KVM_ARM_VCPU_PMU_V3_INIT,
>>>>  +   };
>>>>  +
>>>>  +   /* The test creates the vpmu_vm multiple times. Ensure a clean state
>>>>  */
>>>>  +   memset(&vpmu_vm, 0, sizeof(vpmu_vm));
>>>>  +
>>>>  +   vpmu_vm.vm = vm_create(1);
>>>>  +   vm_init_descriptor_tables(vpmu_vm.vm);
>>>>  +   for (ec = 0; ec < ESR_EC_NUM; ec++) {
>>>>  +           vm_install_sync_handler(vpmu_vm.vm, VECTOR_SYNC_CURRENT, ec,
>>>>  +                                   guest_sync_handler);
>>>>  +   }
>>>>  +
>>>>  +   /* Create vCPU with PMUv3 */
>>>>  +   vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
>>>>  +   init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
>>>>  +   vpmu_vm.vcpu = aarch64_vcpu_add(vpmu_vm.vm, 0, &init, guest_code);
>>>>  +   vcpu_init_descriptor_tables(vpmu_vm.vcpu);
>>>>  +   vpmu_vm.gic_fd = vgic_v3_setup(vpmu_vm.vm, 1, 64,
>>>>  +                                   GICD_BASE_GPA, GICR_BASE_GPA);
>>>>  +
>>>>  +   /* Make sure that PMUv3 support is indicated in the ID register */
>>>>  +   vcpu_get_reg(vpmu_vm.vcpu,
>>>>  +                KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
>>>>  +   pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
>>>>  +   TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
>>>>  +               pmuver >= ID_AA64DFR0_PMUVER_8_0,
>>>>  +               "Unexpected PMUVER (0x%x) on the vCPU with PMUv3",
>>>>  pmuver);
>>>>  +
>>>>  +   /* Initialize vPMU */
>>>>  +   vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
>>>>  +   vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
>>>>  +}
>>>
>>> This one fails to build for me:
>>> aarch64/vpmu_counter_access.c: In function ‘create_vpmu_vm’:
>>> aarch64/vpmu_counter_access.c:456:47: error: ‘ID_AA64DFR0_PMUVER_MASK’
>>> undeclared (first use in this function); did you mean
>>> ‘ID_AA64DFR0_EL1_PMUVer_MASK’?
>>>   456 |         pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER),
>>>   dfr0);
>>
>> Looks like there's a clash with
>> "KVM: arm64: selftests: Import automatic generation of sysreg defs"
>> from:
>>         https://lore.kernel.org/r/20231003230408.3405722-12-oliver.upton@linux.dev
> Thanks for the pointer, Sebastian! Surprisingly, I don't see the patch
> when I sync to kvmarm/next.
>

Yeah, sorry - I had both of these series applied locally.

Sebastian


* Re: [PATCH v7 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU
  2023-10-09 23:08 ` [PATCH v7 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU Raghavendra Rao Ananta
@ 2023-10-16 13:35   ` Sebastian Ott
  2023-10-16 19:02     ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 60+ messages in thread
From: Sebastian Ott @ 2023-10-16 13:35 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Mon, 9 Oct 2023, Raghavendra Rao Ananta wrote:
> u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
> {
> -	return __vcpu_sys_reg(vcpu, PMCR_EL0);
> +	u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0) &
> +			~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
> +
> +	return pmcr | ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
> }
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index ff0f7095eaca..c750722fbe4a 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -745,12 +745,8 @@ static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> {
> 	u64 pmcr;
>
> -	/* No PMU available, PMCR_EL0 may UNDEF... */
> -	if (!kvm_arm_support_pmu_v3())
> -		return 0;
> -
> 	/* Only preserve PMCR_EL0.N, and reset the rest to 0 */
> -	pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
> +	pmcr = kvm_vcpu_read_pmcr(vcpu) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);

pmcr = ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
Would that make it clearer what is being done here?



* Re: [PATCH v7 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test
  2023-10-13 21:05       ` Raghavendra Rao Ananta
  2023-10-16 10:01         ` Sebastian Ott
@ 2023-10-16 18:56         ` Oliver Upton
  2023-10-16 19:05           ` Raghavendra Rao Ananta
  1 sibling, 1 reply; 60+ messages in thread
From: Oliver Upton @ 2023-10-16 18:56 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Sebastian Ott, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Fri, Oct 13, 2023 at 02:05:29PM -0700, Raghavendra Rao Ananta wrote:
> Oliver,
> 
> Aren't the selftest patches from the 'Enable writable ID regs' series
> [1] merged into kvmarm/next? Looking at the log, I couldn't find them
> and the last patch that went from the series was [2]. Am I missing
> something?
> 
> Thank you.
> Raghavendra
> 
> [1]: https://lore.kernel.org/all/169644154288.3677537.15121340860793882283.b4-ty@linux.dev/
> [2]: https://lore.kernel.org/all/20231003230408.3405722-11-oliver.upton@linux.dev/

This is intentional, updating the tools headers as it was done in the
original series broke the perftool build. I backed out the selftest
patches, but took the rest of the kernel changes into kvmarm/next so
they could soak while we sort out the selftests mess. Hopefully we can
get the fix reviewed in time [*]...

[*] https://lore.kernel.org/kvmarm/20231011195740.3349631-1-oliver.upton@linux.dev/

-- 
Thanks,
Oliver


* Re: [PATCH v7 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU
  2023-10-16 13:35   ` Sebastian Ott
@ 2023-10-16 19:02     ` Raghavendra Rao Ananta
  2023-10-16 19:15       ` Oliver Upton
  0 siblings, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-16 19:02 UTC (permalink / raw)
  To: Sebastian Ott
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Mon, Oct 16, 2023 at 6:35 AM Sebastian Ott <sebott@redhat.com> wrote:
>
> On Mon, 9 Oct 2023, Raghavendra Rao Ananta wrote:
> > u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
> > {
> > -     return __vcpu_sys_reg(vcpu, PMCR_EL0);
> > +     u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0) &
> > +                     ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
> > +
> > +     return pmcr | ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
> > }
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index ff0f7095eaca..c750722fbe4a 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -745,12 +745,8 @@ static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > {
> >       u64 pmcr;
> >
> > -     /* No PMU available, PMCR_EL0 may UNDEF... */
> > -     if (!kvm_arm_support_pmu_v3())
> > -             return 0;
> > -
> >       /* Only preserve PMCR_EL0.N, and reset the rest to 0 */
> > -     pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
> > +     pmcr = kvm_vcpu_read_pmcr(vcpu) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
>
> pmcr = ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
> Would that maybe make it more clear what is done here?
>
Since we require the entire PMCR register, and not just the PMCR.N
field, I think using kvm_vcpu_read_pmcr() would be technically
correct, don't you think?

Thank you.
Raghavendra


* Re: [PATCH v7 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test
  2023-10-16 18:56         ` Oliver Upton
@ 2023-10-16 19:05           ` Raghavendra Rao Ananta
  2023-10-16 19:07             ` Oliver Upton
  0 siblings, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-16 19:05 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Sebastian Ott, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Mon, Oct 16, 2023 at 11:56 AM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Fri, Oct 13, 2023 at 02:05:29PM -0700, Raghavendra Rao Ananta wrote:
> > Oliver,
> >
> > Aren't the selftest patches from the 'Enable writable ID regs' series
> > [1] merged into kvmarm/next? Looking at the log, I couldn't find them
> > and the last patch that went from the series was [2]. Am I missing
> > something?
> >
> > Thank you.
> > Raghavendra
> >
> > [1]: https://lore.kernel.org/all/169644154288.3677537.15121340860793882283.b4-ty@linux.dev/
> > [2]: https://lore.kernel.org/all/20231003230408.3405722-11-oliver.upton@linux.dev/
>
> This is intentional, updating the tools headers as it was done in the
> original series broke the perftool build. I backed out the selftest
> patches, but took the rest of the kernel changes into kvmarm/next so
> they could soak while we sort out the selftests mess. Hopefully we can
> get the fix reviewed in time [*]...
>
> [*] https://lore.kernel.org/kvmarm/20231011195740.3349631-1-oliver.upton@linux.dev/
>
> --
Ah, I see. In that case, since it impacts this series, do you want me
to rebase my series on top of your selftests series for v8?

Thank you.
Raghavendra
> Thanks,
> Oliver


* Re: [PATCH v7 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test
  2023-10-16 19:05           ` Raghavendra Rao Ananta
@ 2023-10-16 19:07             ` Oliver Upton
  0 siblings, 0 replies; 60+ messages in thread
From: Oliver Upton @ 2023-10-16 19:07 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Sebastian Ott, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Mon, Oct 16, 2023 at 12:05:16PM -0700, Raghavendra Rao Ananta wrote:
> On Mon, Oct 16, 2023 at 11:56 AM Oliver Upton <oliver.upton@linux.dev> wrote:
> >
> > On Fri, Oct 13, 2023 at 02:05:29PM -0700, Raghavendra Rao Ananta wrote:
> > > Oliver,
> > >
> > > Aren't the selftest patches from the 'Enable writable ID regs' series
> > > [1] merged into kvmarm/next? Looking at the log, I couldn't find them
> > > and the last patch that went from the series was [2]. Am I missing
> > > something?
> > >
> > > Thank you.
> > > Raghavendra
> > >
> > > [1]: https://lore.kernel.org/all/169644154288.3677537.15121340860793882283.b4-ty@linux.dev/
> > > [2]: https://lore.kernel.org/all/20231003230408.3405722-11-oliver.upton@linux.dev/
> >
> > This is intentional, updating the tools headers as it was done in the
> > original series broke the perftool build. I backed out the selftest
> > patches, but took the rest of the kernel changes into kvmarm/next so
> > they could soak while we sort out the selftests mess. Hopefully we can
> > get the fix reviewed in time [*]...
> >
> > [*] https://lore.kernel.org/kvmarm/20231011195740.3349631-1-oliver.upton@linux.dev/
> >
> > --
> Ah, I see. In that case, since it impacts this series, do you want me
> to rebase my series on top of your selftests series for v8?

No, please keep the two independent for now. I can fix it up when
applying the series.

-- 
Thanks,
Oliver


* Re: [PATCH v7 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU
  2023-10-16 19:02     ` Raghavendra Rao Ananta
@ 2023-10-16 19:15       ` Oliver Upton
  2023-10-16 21:35         ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 60+ messages in thread
From: Oliver Upton @ 2023-10-16 19:15 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Sebastian Ott, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Mon, Oct 16, 2023 at 12:02:27PM -0700, Raghavendra Rao Ananta wrote:
> On Mon, Oct 16, 2023 at 6:35 AM Sebastian Ott <sebott@redhat.com> wrote:
> >
> > On Mon, 9 Oct 2023, Raghavendra Rao Ananta wrote:
> > > u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
> > > {
> > > -     return __vcpu_sys_reg(vcpu, PMCR_EL0);
> > > +     u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0) &
> > > +                     ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
> > > +
> > > +     return pmcr | ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
> > > }
> > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > index ff0f7095eaca..c750722fbe4a 100644
> > > --- a/arch/arm64/kvm/sys_regs.c
> > > +++ b/arch/arm64/kvm/sys_regs.c
> > > @@ -745,12 +745,8 @@ static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > > {
> > >       u64 pmcr;
> > >
> > > -     /* No PMU available, PMCR_EL0 may UNDEF... */
> > > -     if (!kvm_arm_support_pmu_v3())
> > > -             return 0;
> > > -
> > >       /* Only preserve PMCR_EL0.N, and reset the rest to 0 */
> > > -     pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
> > > +     pmcr = kvm_vcpu_read_pmcr(vcpu) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
> >
> > pmcr = ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
> > Would that maybe make it more clear what is done here?
> >
> Since we require the entire PMCR register, and not just the PMCR.N
> field, I think using kvm_vcpu_read_pmcr() would be technically
> correct, don't you think?

No, this isn't using the entire PMCR value, it is just grabbing
PMCR_EL0.N.

What's the point of doing this in the first place? The implementation of
kvm_vcpu_read_pmcr() is populating PMCR_EL0.N using the VM-scoped value.

-- 
Thanks,
Oliver

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v7 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on vCPU reset
  2023-10-09 23:08 ` [PATCH v7 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on " Raghavendra Rao Ananta
@ 2023-10-16 19:44   ` Eric Auger
  2023-10-16 21:28     ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 60+ messages in thread
From: Eric Auger @ 2023-10-16 19:44 UTC (permalink / raw)
  To: Raghavendra Rao Ananta, Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Raghavendra,

On 10/10/23 01:08, Raghavendra Rao Ananta wrote:
> From: Reiji Watanabe <reijiw@google.com>
> 
> On vCPU reset, PMCNTEN{SET,CLR}_EL0, PMINTEN{SET,CLR}_EL1, and
> PMOVS{SET,CLR}_EL1 for a vCPU are reset by reset_pmu_reg().
PMOVS{SET,CLR}_EL0?
> This function clears RAZ bits of those registers corresponding
> to unimplemented event counters on the vCPU, and sets bits
> corresponding to implemented event counters to a predefined
> pseudo UNKNOWN value (some bits are set to 1).
> 
> The function identifies (un)implemented event counters on the
> vCPU based on the PMCR_EL0.N value on the host. Using the host
> value for this would be problematic when KVM supports letting
> userspace set PMCR_EL0.N to a value different from the host value
> (some of the RAZ bits of those registers could end up being set to 1).
> 
> Fix this by clearing the registers so that it can ensure
> that all the RAZ bits are cleared even when the PMCR_EL0.N value
> for the vCPU is different from the host value. Use reset_val() to
> do this instead of fixing reset_pmu_reg(), and remove
> reset_pmu_reg(), as it is no longer used.
do you intend to restore the 'unknown' behavior at some point?

Thanks

Eric
> 
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  arch/arm64/kvm/sys_regs.c | 21 +--------------------
>  1 file changed, 1 insertion(+), 20 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 818a52e257ed..3dbb7d276b0e 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -717,25 +717,6 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
>  	return REG_HIDDEN;
>  }
>  
> -static u64 reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> -{
> -	u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
> -
> -	/* No PMU available, any PMU reg may UNDEF... */
> -	if (!kvm_arm_support_pmu_v3())
> -		return 0;
> -
> -	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
> -	n &= ARMV8_PMU_PMCR_N_MASK;
> -	if (n)
> -		mask |= GENMASK(n - 1, 0);
> -
> -	reset_unknown(vcpu, r);
> -	__vcpu_sys_reg(vcpu, r->reg) &= mask;
> -
> -	return __vcpu_sys_reg(vcpu, r->reg);
> -}
> -
>  static u64 reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>  {
>  	reset_unknown(vcpu, r);
> @@ -1115,7 +1096,7 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	  trap_wcr, reset_wcr, 0, 0,  get_wcr, set_wcr }
>  
>  #define PMU_SYS_REG(name)						\
> -	SYS_DESC(SYS_##name), .reset = reset_pmu_reg,			\
> +	SYS_DESC(SYS_##name), .reset = reset_val,			\
>  	.visibility = pmu_visibility
>  
>  /* Macro to expand the PMEVCNTRn_EL0 register */



* Re: [PATCH v7 01/12] KVM: arm64: PMU: Introduce helpers to set the guest's PMU
  2023-10-09 23:08 ` [PATCH v7 01/12] KVM: arm64: PMU: Introduce helpers to set the guest's PMU Raghavendra Rao Ananta
@ 2023-10-16 19:45   ` Eric Auger
  0 siblings, 0 replies; 60+ messages in thread
From: Eric Auger @ 2023-10-16 19:45 UTC (permalink / raw)
  To: Raghavendra Rao Ananta, Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Reiji,

On 10/10/23 01:08, Raghavendra Rao Ananta wrote:
> From: Reiji Watanabe <reijiw@google.com>
> 
> Introduce new helper functions to set the guest's PMU
> (kvm->arch.arm_pmu) either to a default probed instance or to a
> caller requested one, and use it when the guest's PMU needs to
> be set. These helpers will make it easier for the following
> patches to modify the relevant code.
> 
> No functional change intended.
> 
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>

Eric
> ---
>  arch/arm64/kvm/pmu-emul.c | 50 +++++++++++++++++++++++++++------------
>  1 file changed, 35 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index 3afb281ed8d2..eb5dcb12dafe 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -874,6 +874,36 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
>  	return true;
>  }
>  
> +static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> +{
> +	lockdep_assert_held(&kvm->arch.config_lock);
> +
> +	kvm->arch.arm_pmu = arm_pmu;
> +}
> +
> +/**
> + * kvm_arm_set_default_pmu - No PMU set, get the default one.
> + * @kvm: The kvm pointer
> + *
> + * The observant among you will notice that the supported_cpus
> + * mask does not get updated for the default PMU even though it
> + * is quite possible the selected instance supports only a
> + * subset of cores in the system. This is intentional, and
> + * upholds the preexisting behavior on heterogeneous systems
> + * where vCPUs can be scheduled on any core but the guest
> + * counters could stop working.
> + */
> +static int kvm_arm_set_default_pmu(struct kvm *kvm)
> +{
> +	struct arm_pmu *arm_pmu = kvm_pmu_probe_armpmu();
> +
> +	if (!arm_pmu)
> +		return -ENODEV;
> +
> +	kvm_arm_set_pmu(kvm, arm_pmu);
> +	return 0;
> +}
> +
>  static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
>  {
>  	struct kvm *kvm = vcpu->kvm;
> @@ -893,7 +923,7 @@ static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
>  				break;
>  			}
>  
> -			kvm->arch.arm_pmu = arm_pmu;
> +			kvm_arm_set_pmu(kvm, arm_pmu);
>  			cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus);
>  			ret = 0;
>  			break;
> @@ -917,20 +947,10 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
>  		return -EBUSY;
>  
>  	if (!kvm->arch.arm_pmu) {
> -		/*
> -		 * No PMU set, get the default one.
> -		 *
> -		 * The observant among you will notice that the supported_cpus
> -		 * mask does not get updated for the default PMU even though it
> -		 * is quite possible the selected instance supports only a
> -		 * subset of cores in the system. This is intentional, and
> -		 * upholds the preexisting behavior on heterogeneous systems
> -		 * where vCPUs can be scheduled on any core but the guest
> -		 * counters could stop working.
> -		 */
> -		kvm->arch.arm_pmu = kvm_pmu_probe_armpmu();
> -		if (!kvm->arch.arm_pmu)
> -			return -ENODEV;
> +		int ret = kvm_arm_set_default_pmu(kvm);
> +
> +		if (ret)
> +			return ret;
>  	}
>  
>  	switch (attr->attr) {



* Re: [PATCH v7 04/12] KVM: arm64: PMU: Don't define the sysreg reset() for PM{USERENR,CCFILTR}_EL0
  2023-10-09 23:08 ` [PATCH v7 04/12] KVM: arm64: PMU: Don't define the sysreg reset() for PM{USERENR,CCFILTR}_EL0 Raghavendra Rao Ananta
@ 2023-10-16 19:47   ` Eric Auger
  0 siblings, 0 replies; 60+ messages in thread
From: Eric Auger @ 2023-10-16 19:47 UTC (permalink / raw)
  To: Raghavendra Rao Ananta, Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi,

On 10/10/23 01:08, Raghavendra Rao Ananta wrote:
> From: Reiji Watanabe <reijiw@google.com>
> 
> The default reset function for PMU registers (defined by PMU_SYS_REG)
> now simply clears a specified register. Use the default one for
> PMUSERENR_EL0 and PMCCFILTR_EL0, as KVM currently clears those
> registers on vCPU reset (NOTE: All non-RES0 fields of those
> registers have UNKNOWN reset values, and the same fields of
> their AArch32 registers have 0 reset values).
> 
> No functional change intended.
> 
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  arch/arm64/kvm/sys_regs.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 3dbb7d276b0e..08af7824e9d8 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -2180,7 +2180,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	 * in 32bit mode. Here we choose to reset it as zero for consistency.
>  	 */
>  	{ PMU_SYS_REG(PMUSERENR_EL0), .access = access_pmuserenr,
> -	  .reset = reset_val, .reg = PMUSERENR_EL0, .val = 0 },
> +	  .reg = PMUSERENR_EL0, },
>  	{ PMU_SYS_REG(PMOVSSET_EL0),
>  	  .access = access_pmovs, .reg = PMOVSSET_EL0 },
>  
> @@ -2338,7 +2338,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	 * in 32bit mode. Here we choose to reset it as zero for consistency.
>  	 */
>  	{ PMU_SYS_REG(PMCCFILTR_EL0), .access = access_pmu_evtyper,
> -	  .reset = reset_val, .reg = PMCCFILTR_EL0, .val = 0 },
> +	  .reg = PMCCFILTR_EL0, },
>  
>  	EL2_REG(VPIDR_EL2, access_rw, reset_unknown, 0),
>  	EL2_REG(VMPIDR_EL2, access_rw, reset_unknown, 0),

Reviewed-by: Eric Auger <eric.auger@redhat.com>

Thanks

Eric



* Re: [PATCH v7 05/12] KVM: arm64: PMU: Add a helper to read a vCPU's PMCR_EL0
  2023-10-09 23:08 ` [PATCH v7 05/12] KVM: arm64: PMU: Add a helper to read a vCPU's PMCR_EL0 Raghavendra Rao Ananta
@ 2023-10-16 20:02   ` Eric Auger
  0 siblings, 0 replies; 60+ messages in thread
From: Eric Auger @ 2023-10-16 20:02 UTC (permalink / raw)
  To: Raghavendra Rao Ananta, Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, linux-arm-kernel, kvmarm, linux-kernel, kvm

Raghavendra,

On 10/10/23 01:08, Raghavendra Rao Ananta wrote:
> From: Reiji Watanabe <reijiw@google.com>
> 
> Add a helper to read a vCPU's PMCR_EL0, and use it when KVM
> reads a vCPU's PMCR_EL0.
> 
> The PMCR_EL0 value is tracked by a sysreg file per each vCPU.
file?
> The following patches will make (only) PMCR_EL0.N track per guest.
> Having the new helper will be useful to combine the PMCR_EL0.N
> field (tracked per guest) and the other fields (tracked per vCPU)
> to provide the value of PMCR_EL0.
> 
> No functional change intended.
> 
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
Besides
Reviewed-by: Eric Auger <eric.auger@redhat.com>

Eric
> ---
>  arch/arm64/kvm/arm.c      |  3 +--
>  arch/arm64/kvm/pmu-emul.c | 21 +++++++++++++++------
>  arch/arm64/kvm/sys_regs.c |  6 +++---
>  include/kvm/arm_pmu.h     |  6 ++++++
>  4 files changed, 25 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 708a53b70a7b..0af4d6bbe3d3 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -854,8 +854,7 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
>  		}
>  
>  		if (kvm_check_request(KVM_REQ_RELOAD_PMU, vcpu))
> -			kvm_pmu_handle_pmcr(vcpu,
> -					    __vcpu_sys_reg(vcpu, PMCR_EL0));
> +			kvm_pmu_handle_pmcr(vcpu, kvm_vcpu_read_pmcr(vcpu));
>  
>  		if (kvm_check_request(KVM_REQ_RESYNC_PMU_EL0, vcpu))
>  			kvm_vcpu_pmu_restore_guest(vcpu);
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index cc30c246c010..a161d6266a5c 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -72,7 +72,7 @@ static bool kvm_pmc_is_64bit(struct kvm_pmc *pmc)
>  
>  static bool kvm_pmc_has_64bit_overflow(struct kvm_pmc *pmc)
>  {
> -	u64 val = __vcpu_sys_reg(kvm_pmc_to_vcpu(pmc), PMCR_EL0);
> +	u64 val = kvm_vcpu_read_pmcr(kvm_pmc_to_vcpu(pmc));
>  
>  	return (pmc->idx < ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LP)) ||
>  	       (pmc->idx == ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LC));
> @@ -250,7 +250,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
>  
>  u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
>  {
> -	u64 val = __vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
> +	u64 val = kvm_vcpu_read_pmcr(vcpu) >> ARMV8_PMU_PMCR_N_SHIFT;
>  
>  	val &= ARMV8_PMU_PMCR_N_MASK;
>  	if (val == 0)
> @@ -272,7 +272,7 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
>  	if (!kvm_vcpu_has_pmu(vcpu))
>  		return;
>  
> -	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
> +	if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) || !val)
>  		return;
>  
>  	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
> @@ -324,7 +324,7 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
>  {
>  	u64 reg = 0;
>  
> -	if ((__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)) {
> +	if ((kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E)) {
>  		reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
>  		reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
>  		reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
> @@ -426,7 +426,7 @@ static void kvm_pmu_counter_increment(struct kvm_vcpu *vcpu,
>  {
>  	int i;
>  
> -	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
> +	if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E))
>  		return;
>  
>  	/* Weed out disabled counters */
> @@ -569,7 +569,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
>  static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc)
>  {
>  	struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
> -	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
> +	return (kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) &&
>  	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(pmc->idx));
>  }
>  
> @@ -1084,3 +1084,12 @@ u8 kvm_arm_pmu_get_pmuver_limit(void)
>  					      ID_AA64DFR0_EL1_PMUVer_V3P5);
>  	return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), tmp);
>  }
> +
> +/**
> + * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU
> + * @vcpu: The vcpu pointer
> + */
> +u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
> +{
> +	return __vcpu_sys_reg(vcpu, PMCR_EL0);
> +}
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 08af7824e9d8..ff0f7095eaca 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -803,7 +803,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  		 * Only update writeable bits of PMCR (continuing into
>  		 * kvm_pmu_handle_pmcr() as well)
>  		 */
> -		val = __vcpu_sys_reg(vcpu, PMCR_EL0);
> +		val = kvm_vcpu_read_pmcr(vcpu);
>  		val &= ~ARMV8_PMU_PMCR_MASK;
>  		val |= p->regval & ARMV8_PMU_PMCR_MASK;
>  		if (!kvm_supports_32bit_el0())
> @@ -811,7 +811,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  		kvm_pmu_handle_pmcr(vcpu, val);
>  	} else {
>  		/* PMCR.P & PMCR.C are RAZ */
> -		val = __vcpu_sys_reg(vcpu, PMCR_EL0)
> +		val = kvm_vcpu_read_pmcr(vcpu)
>  		      & ~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C);
>  		p->regval = val;
>  	}
> @@ -860,7 +860,7 @@ static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
>  {
>  	u64 pmcr, val;
>  
> -	pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
> +	pmcr = kvm_vcpu_read_pmcr(vcpu);
>  	val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
>  	if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
>  		kvm_inject_undefined(vcpu);
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index 858ed9ce828a..cd980d78b86b 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -103,6 +103,7 @@ void kvm_vcpu_pmu_resync_el0(void);
>  u8 kvm_arm_pmu_get_pmuver_limit(void);
>  int kvm_arm_set_default_pmu(struct kvm *kvm);
>  
> +u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu);
>  #else
>  struct kvm_pmu {
>  };
> @@ -180,6 +181,11 @@ static inline int kvm_arm_set_default_pmu(struct kvm *kvm)
>  	return -ENODEV;
>  }
>  
> +static inline u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
> +{
> +	return 0;
> +}
> +
>  #endif
>  
>  #endif



* Re: [PATCH v7 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on vCPU reset
  2023-10-16 19:44   ` Eric Auger
@ 2023-10-16 21:28     ` Raghavendra Rao Ananta
  2023-10-17  9:23       ` Eric Auger
  0 siblings, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-16 21:28 UTC (permalink / raw)
  To: Eric Auger
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Mon, Oct 16, 2023 at 12:45 PM Eric Auger <eauger@redhat.com> wrote:
>
> Hi Raghavendra,
>
> On 10/10/23 01:08, Raghavendra Rao Ananta wrote:
> > From: Reiji Watanabe <reijiw@google.com>
> >
> > On vCPU reset, PMCNTEN{SET,CLR}_EL0, PMINTEN{SET,CLR}_EL1, and
> > PMOVS{SET,CLR}_EL1 for a vCPU are reset by reset_pmu_reg().
> PMOVS{SET,CLR}_EL0?
Ah, yes. It should be PMOVS{SET,CLR}_EL0.

> > This function clears RAZ bits of those registers corresponding
> > to unimplemented event counters on the vCPU, and sets bits
> > corresponding to implemented event counters to a predefined
> > pseudo UNKNOWN value (some bits are set to 1).
> >
> > The function identifies (un)implemented event counters on the
> > vCPU based on the PMCR_EL0.N value on the host. Using the host
> > value for this would be problematic when KVM supports letting
> > userspace set PMCR_EL0.N to a value different from the host value
> > (some of the RAZ bits of those registers could end up being set to 1).
> >
> > Fix this by clearing the registers so that it can ensure
> > that all the RAZ bits are cleared even when the PMCR_EL0.N value
> > for the vCPU is different from the host value. Use reset_val() to
> > do this instead of fixing reset_pmu_reg(), and remove
> > reset_pmu_reg(), as it is no longer used.
> do you intend to restore the 'unknown' behavior at some point?
>
I believe Reiji's (original author) intention was to keep them
cleared, which would still imply an 'unknown' behavior. Do you think
there's an issue with this?

Thank you.
Raghavendra
> Thanks
>
> Eric
> >
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> >  arch/arm64/kvm/sys_regs.c | 21 +--------------------
> >  1 file changed, 1 insertion(+), 20 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 818a52e257ed..3dbb7d276b0e 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -717,25 +717,6 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
> >       return REG_HIDDEN;
> >  }
> >
> > -static u64 reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > -{
> > -     u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
> > -
> > -     /* No PMU available, any PMU reg may UNDEF... */
> > -     if (!kvm_arm_support_pmu_v3())
> > -             return 0;
> > -
> > -     n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
> > -     n &= ARMV8_PMU_PMCR_N_MASK;
> > -     if (n)
> > -             mask |= GENMASK(n - 1, 0);
> > -
> > -     reset_unknown(vcpu, r);
> > -     __vcpu_sys_reg(vcpu, r->reg) &= mask;
> > -
> > -     return __vcpu_sys_reg(vcpu, r->reg);
> > -}
> > -
> >  static u64 reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> >  {
> >       reset_unknown(vcpu, r);
> > @@ -1115,7 +1096,7 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> >         trap_wcr, reset_wcr, 0, 0,  get_wcr, set_wcr }
> >
> >  #define PMU_SYS_REG(name)                                            \
> > -     SYS_DESC(SYS_##name), .reset = reset_pmu_reg,                   \
> > +     SYS_DESC(SYS_##name), .reset = reset_val,                       \
> >       .visibility = pmu_visibility
> >
> >  /* Macro to expand the PMEVCNTRn_EL0 register */
>


* Re: [PATCH v7 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU
  2023-10-16 19:15       ` Oliver Upton
@ 2023-10-16 21:35         ` Raghavendra Rao Ananta
  2023-10-17  5:52           ` Oliver Upton
  0 siblings, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-16 21:35 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Sebastian Ott, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Mon, Oct 16, 2023 at 12:16 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Mon, Oct 16, 2023 at 12:02:27PM -0700, Raghavendra Rao Ananta wrote:
> > On Mon, Oct 16, 2023 at 6:35 AM Sebastian Ott <sebott@redhat.com> wrote:
> > >
> > > On Mon, 9 Oct 2023, Raghavendra Rao Ananta wrote:
> > > > u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
> > > > {
> > > > -     return __vcpu_sys_reg(vcpu, PMCR_EL0);
> > > > +     u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0) &
> > > > +                     ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
> > > > +
> > > > +     return pmcr | ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
> > > > }
> > > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > > index ff0f7095eaca..c750722fbe4a 100644
> > > > --- a/arch/arm64/kvm/sys_regs.c
> > > > +++ b/arch/arm64/kvm/sys_regs.c
> > > > @@ -745,12 +745,8 @@ static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > > > {
> > > >       u64 pmcr;
> > > >
> > > > -     /* No PMU available, PMCR_EL0 may UNDEF... */
> > > > -     if (!kvm_arm_support_pmu_v3())
> > > > -             return 0;
> > > > -
> > > >       /* Only preserve PMCR_EL0.N, and reset the rest to 0 */
> > > > -     pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
> > > > +     pmcr = kvm_vcpu_read_pmcr(vcpu) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
> > >
> > > pmcr = ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
> > > Would that maybe make it more clear what is done here?
> > >
> > Since we require the entire PMCR register, and not just the PMCR.N
> > field, I think using kvm_vcpu_read_pmcr() would be technically
> > correct, don't you think?
>
> No, this isn't using the entire PMCR value, it is just grabbing
> PMCR_EL0.N.
>
Oh sorry, my bad.
> What's the point of doing this in the first place? The implementation of
> kvm_vcpu_read_pmcr() is populating PMCR_EL0.N using the VM-scoped value.
>
I guess originally the change replaced read_sysreg(pmcr_el0) with
kvm_vcpu_read_pmcr(vcpu) to maintain consistency with others.
But if you and Sebastian feel that it's an overkill and directly
getting the value via vcpu->kvm->arch.pmcr_n is more readable, I'm
happy to make the change.

Thank you.
Raghavendra
> --
> Thanks,
> Oliver


* Re: [PATCH v7 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU
  2023-10-16 21:35         ` Raghavendra Rao Ananta
@ 2023-10-17  5:52           ` Oliver Upton
  2023-10-17  5:55             ` Oliver Upton
  2023-10-17 16:58             ` Raghavendra Rao Ananta
  0 siblings, 2 replies; 60+ messages in thread
From: Oliver Upton @ 2023-10-17  5:52 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Sebastian Ott, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Mon, Oct 16, 2023 at 02:35:52PM -0700, Raghavendra Rao Ananta wrote:

[...]

> > What's the point of doing this in the first place? The implementation of
> > kvm_vcpu_read_pmcr() is populating PMCR_EL0.N using the VM-scoped value.
> >
> I guess originally the change replaced read_sysreg(pmcr_el0) with
> kvm_vcpu_read_pmcr(vcpu) to maintain consistency with others.
> But if you and Sebastian feel that it's an overkill and directly
> getting the value via vcpu->kvm->arch.pmcr_n is more readable, I'm
> happy to make the change.

No, I'd rather you delete the line where PMCR_EL0.N altogether.
reset_pmcr() tries to initialize the field, but your
kvm_vcpu_read_pmcr() winds up replacing it with pmcr_n.

> @@ -1105,8 +1109,16 @@ u8 kvm_arm_pmu_get_pmuver_limit(void)
>  /**
>   * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU
>   * @vcpu: The vcpu pointer
> + *
> + * The function returns the value of PMCR.N based on the per-VM tracked
> + * value (kvm->arch.pmcr_n). This is to ensure that the register field
> + * remains consistent for the VM, even on heterogeneous systems where
> + * the value may vary when read from different CPUs (during vCPU reset).

Since I'm looking at this again, I don't believe the comment is adding
much. KVM doesn't read pmcr_el0 directly anymore, and kvm_arch is
clearly VM-scoped context.

>   */
>  u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
>  {
> -	return __vcpu_sys_reg(vcpu, PMCR_EL0);
> +	u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0) &
> +			~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
> +
> +	return pmcr | ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
>  }


-- 
Thanks,
Oliver


* Re: [PATCH v7 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU
  2023-10-17  5:52           ` Oliver Upton
@ 2023-10-17  5:55             ` Oliver Upton
  2023-10-17 16:58             ` Raghavendra Rao Ananta
  1 sibling, 0 replies; 60+ messages in thread
From: Oliver Upton @ 2023-10-17  5:55 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Sebastian Ott, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Tue, Oct 17, 2023 at 05:52:24AM +0000, Oliver Upton wrote:
> On Mon, Oct 16, 2023 at 02:35:52PM -0700, Raghavendra Rao Ananta wrote:
> 
> [...]
> 
> > > What's the point of doing this in the first place? The implementation of
> > > kvm_vcpu_read_pmcr() is populating PMCR_EL0.N using the VM-scoped value.
> > >
> > I guess originally the change replaced read_sysreg(pmcr_el0) with
> > kvm_vcpu_read_pmcr(vcpu) to maintain consistency with others.
> > But if you and Sebastian feel that it's an overkill and directly
> > getting the value via vcpu->kvm->arch.pmcr_n is more readable, I'm
> > happy to make the change.
> 
> No, I'd rather you delete the line where PMCR_EL0.N altogether.

... where we set up ...

> reset_pmcr() tries to initialize the field, but your
> kvm_vcpu_read_pmcr() winds up replacing it with pmcr_n.

-- 
Thanks,
Oliver


* Re: [PATCH v7 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on vCPU reset
  2023-10-16 21:28     ` Raghavendra Rao Ananta
@ 2023-10-17  9:23       ` Eric Auger
  2023-10-17 16:59         ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 60+ messages in thread
From: Eric Auger @ 2023-10-17  9:23 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

Hi,
On 10/16/23 23:28, Raghavendra Rao Ananta wrote:
> On Mon, Oct 16, 2023 at 12:45 PM Eric Auger <eauger@redhat.com> wrote:
>>
>> Hi Raghavendra,
>>
>> On 10/10/23 01:08, Raghavendra Rao Ananta wrote:
>>> From: Reiji Watanabe <reijiw@google.com>
>>>
>>> On vCPU reset, PMCNTEN{SET,CLR}_EL0, PMINTEN{SET,CLR}_EL1, and
>>> PMOVS{SET,CLR}_EL1 for a vCPU are reset by reset_pmu_reg().
>> PMOVS{SET,CLR}_EL0?
> Ah, yes. It should be PMOVS{SET,CLR}_EL0.
> 
>>> This function clears RAZ bits of those registers corresponding
>>> to unimplemented event counters on the vCPU, and sets bits
>>> corresponding to implemented event counters to a predefined
>>> pseudo UNKNOWN value (some bits are set to 1).
>>>
>>> The function identifies (un)implemented event counters on the
>>> vCPU based on the PMCR_EL0.N value on the host. Using the host
>>> value for this would be problematic when KVM supports letting
>>> userspace set PMCR_EL0.N to a value different from the host value
>>> (some of the RAZ bits of those registers could end up being set to 1).
>>>
>>> Fix this by clearing the registers so that it can ensure
>>> that all the RAZ bits are cleared even when the PMCR_EL0.N value
>>> for the vCPU is different from the host value. Use reset_val() to
>>> do this instead of fixing reset_pmu_reg(), and remove
>>> reset_pmu_reg(), as it is no longer used.
>> do you intend to restore the 'unknown' behavior at some point?
>>
> I believe Reiji's (original author) intention was to keep them
> cleared, which would still imply an 'unknown' behavior. Do you think
> there's an issue with this?
Then why do we bother using reset_unknown in the other places if
clearing the bits is enough here?

Thanks

Eric
> 
> Thank you.
> Raghavendra
>> Thanks
>>
>> Eric
>>>
>>> Signed-off-by: Reiji Watanabe <reijiw@google.com>
>>> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
>>> ---
>>>  arch/arm64/kvm/sys_regs.c | 21 +--------------------
>>>  1 file changed, 1 insertion(+), 20 deletions(-)
>>>
>>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>>> index 818a52e257ed..3dbb7d276b0e 100644
>>> --- a/arch/arm64/kvm/sys_regs.c
>>> +++ b/arch/arm64/kvm/sys_regs.c
>>> @@ -717,25 +717,6 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
>>>       return REG_HIDDEN;
>>>  }
>>>
>>> -static u64 reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>>> -{
>>> -     u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
>>> -
>>> -     /* No PMU available, any PMU reg may UNDEF... */
>>> -     if (!kvm_arm_support_pmu_v3())
>>> -             return 0;
>>> -
>>> -     n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
>>> -     n &= ARMV8_PMU_PMCR_N_MASK;
>>> -     if (n)
>>> -             mask |= GENMASK(n - 1, 0);
>>> -
>>> -     reset_unknown(vcpu, r);
>>> -     __vcpu_sys_reg(vcpu, r->reg) &= mask;
>>> -
>>> -     return __vcpu_sys_reg(vcpu, r->reg);
>>> -}
>>> -
>>>  static u64 reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>>>  {
>>>       reset_unknown(vcpu, r);
>>> @@ -1115,7 +1096,7 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>>>         trap_wcr, reset_wcr, 0, 0,  get_wcr, set_wcr }
>>>
>>>  #define PMU_SYS_REG(name)                                            \
>>> -     SYS_DESC(SYS_##name), .reset = reset_pmu_reg,                   \
>>> +     SYS_DESC(SYS_##name), .reset = reset_val,                       \
>>>       .visibility = pmu_visibility
>>>
>>>  /* Macro to expand the PMEVCNTRn_EL0 register */
>>
> 



* Re: [PATCH v7 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test
  2023-10-09 23:08 ` [PATCH v7 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test Raghavendra Rao Ananta
  2023-10-12 11:24   ` Sebastian Ott
@ 2023-10-17 14:51   ` Eric Auger
  2023-10-17 17:07     ` Raghavendra Rao Ananta
  2023-10-17 15:48   ` Sebastian Ott
  2 siblings, 1 reply; 60+ messages in thread
From: Eric Auger @ 2023-10-17 14:51 UTC (permalink / raw)
  To: Raghavendra Rao Ananta, Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Raghavendra,
On 10/10/23 01:08, Raghavendra Rao Ananta wrote:
> From: Reiji Watanabe <reijiw@google.com>
> 
> Introduce vpmu_counter_access test for arm64 platforms.
> The test configures PMUv3 for a vCPU, sets PMCR_EL0.N for the vCPU,
> and checks whether the guest can consistently see the same number of
> PMU event counters (PMCR_EL0.N) that userspace sets.
> This test case is done with each of the PMCR_EL0.N values from
> 0 to 31 (With the PMCR_EL0.N values greater than the host value,
> the test expects KVM_SET_ONE_REG for the PMCR_EL0 to fail).
> 
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  tools/testing/selftests/kvm/Makefile          |   1 +
>  .../kvm/aarch64/vpmu_counter_access.c         | 247 ++++++++++++++++++
>  2 files changed, 248 insertions(+)
>  create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> 
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index a3bb36fb3cfc..416700aa196c 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -149,6 +149,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/smccc_filter
>  TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
>  TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
>  TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
> +TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
>  TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
>  TEST_GEN_PROGS_aarch64 += demand_paging_test
>  TEST_GEN_PROGS_aarch64 += dirty_log_test
> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> new file mode 100644
> index 000000000000..58949b17d76e
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> @@ -0,0 +1,247 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * vpmu_counter_access - Test vPMU event counter access
> + *
> + * Copyright (c) 2022 Google LLC.
2023 ;-)
> + *
> + * This test checks if the guest can see the same number of the PMU event
> + * counters (PMCR_EL0.N) that userspace sets.
> + * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
> + */
> +#include <kvm_util.h>
> +#include <processor.h>
> +#include <test_util.h>
> +#include <vgic.h>
> +#include <perf/arm_pmuv3.h>
> +#include <linux/bitfield.h>
> +
> +/* The max number of the PMU event counters (excluding the cycle counter) */
> +#define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
> +
> +struct vpmu_vm {
> +	struct kvm_vm *vm;
> +	struct kvm_vcpu *vcpu;
> +	int gic_fd;
> +};
> +
> +static struct vpmu_vm vpmu_vm;
> +
> +static uint64_t get_pmcr_n(uint64_t pmcr)
> +{
> +	return (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
> +}
> +
> +static void set_pmcr_n(uint64_t *pmcr, uint64_t pmcr_n)
> +{
> +	*pmcr = *pmcr & ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
> +	*pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
> +}
> +
> +static void guest_sync_handler(struct ex_regs *regs)
> +{
> +	uint64_t esr, ec;
> +
> +	esr = read_sysreg(esr_el1);
> +	ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;
> +	__GUEST_ASSERT(0, "PC: 0x%lx; ESR: 0x%lx; EC: 0x%lx", regs->pc, esr, ec);
> +}
> +
> +/*
> + * The guest is configured with PMUv3 with @expected_pmcr_n number of
> + * event counters.
> + * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
> + */
> +static void guest_code(uint64_t expected_pmcr_n)
> +{
> +	uint64_t pmcr, pmcr_n;
> +
> +	__GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS,
> +			"Expected PMCR.N: 0x%lx; ARMv8 general counters: 0x%lx",
> +			expected_pmcr_n, ARMV8_PMU_MAX_GENERAL_COUNTERS);
> +
> +	pmcr = read_sysreg(pmcr_el0);
> +	pmcr_n = get_pmcr_n(pmcr);
> +
> +	/* Make sure that PMCR_EL0.N indicates the value userspace set */
> +	__GUEST_ASSERT(pmcr_n == expected_pmcr_n,
> +			"Expected PMCR.N: 0x%lx, PMCR.N: 0x%lx",
> +			pmcr_n, expected_pmcr_n);
> +
> +	GUEST_DONE();
> +}
> +
> +#define GICD_BASE_GPA	0x8000000ULL
> +#define GICR_BASE_GPA	0x80A0000ULL
> +
> +/* Create a VM that has one vCPU with PMUv3 configured. */
> +static void create_vpmu_vm(void *guest_code)
> +{
> +	struct kvm_vcpu_init init;
> +	uint8_t pmuver, ec;
> +	uint64_t dfr0, irq = 23;
> +	struct kvm_device_attr irq_attr = {
> +		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
> +		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
> +		.addr = (uint64_t)&irq,
> +	};
> +	struct kvm_device_attr init_attr = {
> +		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
> +		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
> +	};
> +
> +	/* The test creates the vpmu_vm multiple times. Ensure a clean state */
> +	memset(&vpmu_vm, 0, sizeof(vpmu_vm));
> +
> +	vpmu_vm.vm = vm_create(1);
> +	vm_init_descriptor_tables(vpmu_vm.vm);
> +	for (ec = 0; ec < ESR_EC_NUM; ec++) {
> +		vm_install_sync_handler(vpmu_vm.vm, VECTOR_SYNC_CURRENT, ec,
> +					guest_sync_handler);
> +	}
> +
> +	/* Create vCPU with PMUv3 */
> +	vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
> +	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
> +	vpmu_vm.vcpu = aarch64_vcpu_add(vpmu_vm.vm, 0, &init, guest_code);
> +	vcpu_init_descriptor_tables(vpmu_vm.vcpu);
> +	vpmu_vm.gic_fd = vgic_v3_setup(vpmu_vm.vm, 1, 64,
> +					GICD_BASE_GPA, GICR_BASE_GPA);
__TEST_REQUIRE(vpmu_vm.gic_fd >= 0, "Failed to create vgic-v3, skipping");
as done in some other tests

> +
> +	/* Make sure that PMUv3 support is indicated in the ID register */
> +	vcpu_get_reg(vpmu_vm.vcpu,
> +		     KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
> +	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
> +	TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
> +		    pmuver >= ID_AA64DFR0_PMUVER_8_0,
> +		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
> +
> +	/* Initialize vPMU */
> +	vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
> +	vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
> +}
> +
> +static void destroy_vpmu_vm(void)
> +{
> +	close(vpmu_vm.gic_fd);
> +	kvm_vm_free(vpmu_vm.vm);
> +}
> +
> +static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
> +{
> +	struct ucall uc;
> +
> +	vcpu_args_set(vcpu, 1, pmcr_n);
> +	vcpu_run(vcpu);
> +	switch (get_ucall(vcpu, &uc)) {
> +	case UCALL_ABORT:
> +		REPORT_GUEST_ASSERT(uc);
> +		break;
> +	case UCALL_DONE:
> +		break;
> +	default:
> +		TEST_FAIL("Unknown ucall %lu", uc.cmd);
> +		break;
> +	}
> +}
> +
> +/*
> + * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
> + * and run the test.
> + */
> +static void run_test(uint64_t pmcr_n)
> +{
> +	struct kvm_vcpu *vcpu;
> +	uint64_t sp, pmcr;
> +	struct kvm_vcpu_init init;
> +
> +	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
> +	create_vpmu_vm(guest_code);
> +
> +	vcpu = vpmu_vm.vcpu;
> +
> +	/* Save the initial sp to restore them later to run the guest again */
> +	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
> +
> +	/* Update the PMCR_EL0.N with @pmcr_n */
> +	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
> +	set_pmcr_n(&pmcr, pmcr_n);
> +	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
> +
> +	run_vcpu(vcpu, pmcr_n);
> +
> +	/*
> +	 * Reset and re-initialize the vCPU, and run the guest code again to
> +	 * check if PMCR_EL0.N is preserved.
> +	 */
> +	vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
> +	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
> +	aarch64_vcpu_setup(vcpu, &init);
> +	vcpu_init_descriptor_tables(vcpu);
> +	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
> +	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
> +
> +	run_vcpu(vcpu, pmcr_n);
> +
> +	destroy_vpmu_vm();
> +}
> +
> +/*
> + * Create a guest with one vCPU, and attempt to set the PMCR_EL0.N for
> + * the vCPU to @pmcr_n, which is larger than the host value.
> + * The attempt should fail as @pmcr_n is too big to set for the vCPU.
> + */
> +static void run_error_test(uint64_t pmcr_n)
> +{
> +	struct kvm_vcpu *vcpu;
> +	uint64_t pmcr, pmcr_orig;
> +
> +	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
> +	create_vpmu_vm(guest_code);
> +	vcpu = vpmu_vm.vcpu;
> +
> +	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
> +	pmcr = pmcr_orig;
> +
> +	/*
> +	 * Setting a larger value of PMCR.N should not modify the field, and
> +	 * return a success.
> +	 */
> +	set_pmcr_n(&pmcr, pmcr_n);
> +	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
> +	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
> +	TEST_ASSERT(pmcr_orig == pmcr,
> +		    "PMCR.N modified by KVM to a larger value (PMCR: 0x%lx) for pmcr_n: 0x%lx\n",
> +		    pmcr, pmcr_n);
nit: you could introduce a set_pmcr_n() routine which creates the
vpmu_vm, sets PMCR.N, and checks whether the setting is applied. An
arg could tell the helper whether this is supposed to fail. This could
be used in both run_error_test and run_test, which mostly share the
same code.
> +
> +	destroy_vpmu_vm();
> +}
> +
> +/*
> + * Return the default number of implemented PMU event counters excluding
> + * the cycle counter (i.e. PMCR_EL0.N value) for the guest.
> + */
> +static uint64_t get_pmcr_n_limit(void)
> +{
> +	uint64_t pmcr;
> +
> +	create_vpmu_vm(guest_code);
> +	vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
> +	destroy_vpmu_vm();
> +	return get_pmcr_n(pmcr);
> +}
> +
> +int main(void)
> +{
> +	uint64_t i, pmcr_n;
> +
> +	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
> +
> +	pmcr_n = get_pmcr_n_limit();
> +	for (i = 0; i <= pmcr_n; i++)
> +		run_test(i);
> +
> +	for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
> +		run_error_test(i);
> +
> +	return 0;
> +}

Besides this looks good to me.

Thanks

Eric


^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v7 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test
  2023-10-09 23:08 ` [PATCH v7 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test Raghavendra Rao Ananta
  2023-10-12 11:24   ` Sebastian Ott
  2023-10-17 14:51   ` Eric Auger
@ 2023-10-17 15:48   ` Sebastian Ott
  2023-10-17 17:10     ` Raghavendra Rao Ananta
  2 siblings, 1 reply; 60+ messages in thread
From: Sebastian Ott @ 2023-10-17 15:48 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Mon, 9 Oct 2023, Raghavendra Rao Ananta wrote:
> +static void guest_code(uint64_t expected_pmcr_n)
> +{
> +	uint64_t pmcr, pmcr_n;
> +
> +	__GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS,
> +			"Expected PMCR.N: 0x%lx; ARMv8 general counters: 0x%lx",
> +			expected_pmcr_n, ARMV8_PMU_MAX_GENERAL_COUNTERS);
> +
> +	pmcr = read_sysreg(pmcr_el0);
> +	pmcr_n = get_pmcr_n(pmcr);
> +
> +	/* Make sure that PMCR_EL0.N indicates the value userspace set */
> +	__GUEST_ASSERT(pmcr_n == expected_pmcr_n,
> +			"Expected PMCR.N: 0x%lx, PMCR.N: 0x%lx",
> +			pmcr_n, expected_pmcr_n);

Expected vs read value is swapped.


Also, since the kernel has special handling for this, should we add a
test like below?

+static void immutable_test(void)
+{
+	struct kvm_vcpu *vcpu;
+	uint64_t sp, pmcr, pmcr_n;
+	struct kvm_vcpu_init init;
+
+	create_vpmu_vm(guest_code);
+
+	vcpu = vpmu_vm.vcpu;
+
+	/* Save the initial sp to restore them later to run the guest again */
+	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
+
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	pmcr_n = get_pmcr_n(pmcr);
+
+	run_vcpu(vcpu, pmcr_n);
+
+	vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	aarch64_vcpu_setup(vcpu, &init);
+	vcpu_init_descriptor_tables(vcpu);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
+
+	/* Update the PMCR_EL0.N after the VM ran once */
+	set_pmcr_n(&pmcr, 0);
+	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
+
+	/* Verify that the guest still gets the unmodified value */
+	run_vcpu(vcpu, pmcr_n);
+
+	destroy_vpmu_vm();
+}


^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v7 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
  2023-10-09 23:08 ` [PATCH v7 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest Raghavendra Rao Ananta
@ 2023-10-17 15:52   ` Sebastian Ott
  2023-10-17 16:49     ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 60+ messages in thread
From: Sebastian Ott @ 2023-10-17 15:52 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Mon, 9 Oct 2023, Raghavendra Rao Ananta wrote:
> +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> +		    u64 val)
> +{
> +	struct kvm *kvm = vcpu->kvm;
> +	u64 new_n, mutable_mask;
> +
> +	mutex_lock(&kvm->arch.config_lock);
> +
> +	/*
> +	 * Make PMCR immutable once the VM has started running, but do
> +	 * not return an error (-EBUSY) to meet the existing expectations.
> +	 */

Why should we mention which error we're _not_ returning?


> +	if (kvm_vm_has_ran_once(vcpu->kvm)) {
> +		mutex_unlock(&kvm->arch.config_lock);
> +		return 0;
> +	}


^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v7 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
  2023-10-17 15:52   ` Sebastian Ott
@ 2023-10-17 16:49     ` Raghavendra Rao Ananta
  2023-10-19 10:45       ` Sebastian Ott
  0 siblings, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-17 16:49 UTC (permalink / raw)
  To: Sebastian Ott
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Tue, Oct 17, 2023 at 8:52 AM Sebastian Ott <sebott@redhat.com> wrote:
>
> On Mon, 9 Oct 2023, Raghavendra Rao Ananta wrote:
> > +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> > +                 u64 val)
> > +{
> > +     struct kvm *kvm = vcpu->kvm;
> > +     u64 new_n, mutable_mask;
> > +
> > +     mutex_lock(&kvm->arch.config_lock);
> > +
> > +     /*
> > +      * Make PMCR immutable once the VM has started running, but do
> > +      * not return an error (-EBUSY) to meet the existing expectations.
> > +      */
>
> Why should we mention which error we're _not_ returning?
>
Oh, it's so we don't break existing userspace expectations. Before this
series, any 'write' from userspace would succeed. Suddenly returning
-EBUSY might violate that expectation.

Thank you.
Raghavendra
>
> > +     if (kvm_vm_has_ran_once(vcpu->kvm)) {
> > +             mutex_unlock(&kvm->arch.config_lock);
> > +             return 0;
> > +     }
>

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v7 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU
  2023-10-17  5:52           ` Oliver Upton
  2023-10-17  5:55             ` Oliver Upton
@ 2023-10-17 16:58             ` Raghavendra Rao Ananta
  2023-10-17 17:09               ` Oliver Upton
  1 sibling, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-17 16:58 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Sebastian Ott, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Mon, Oct 16, 2023 at 10:52 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Mon, Oct 16, 2023 at 02:35:52PM -0700, Raghavendra Rao Ananta wrote:
>
> [...]
>
> > > What's the point of doing this in the first place? The implementation of
> > > kvm_vcpu_read_pmcr() is populating PMCR_EL0.N using the VM-scoped value.
> > >
> > I guess originally the change replaced read_sysreg(pmcr_el0) with
> > kvm_vcpu_read_pmcr(vcpu) to maintain consistency with others.
> > But if you and Sebastian feel that it's an overkill and directly
> > getting the value via vcpu->kvm->arch.pmcr_n is more readable, I'm
> > happy to make the change.
>
> No, I'd rather you delete the line that sets PMCR_EL0.N altogether.
> reset_pmcr() tries to initialize the field, but your
> kvm_vcpu_read_pmcr() winds up replacing it with pmcr_n.
>
I didn't get this comment. We still initialize PMCR, but using the
PMCR.N read via kvm_vcpu_read_pmcr() instead of the actual system
register.

Thank you.
Raghavendra
> > @@ -1105,8 +1109,16 @@ u8 kvm_arm_pmu_get_pmuver_limit(void)
> >  /**
> >   * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU
> >   * @vcpu: The vcpu pointer
> > + *
> > + * The function returns the value of PMCR.N based on the per-VM tracked
> > + * value (kvm->arch.pmcr_n). This is to ensure that the register field
> > + * remains consistent for the VM, even on heterogeneous systems where
> > + * the value may vary when read from different CPUs (during vCPU reset).
>
> Since I'm looking at this again, I don't believe the comment is adding
> much. KVM doesn't read pmcr_el0 directly anymore, and kvm_arch is
> clearly VM-scoped context.
>
> >   */
> >  u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
> >  {
> > -     return __vcpu_sys_reg(vcpu, PMCR_EL0);
> > +     u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0) &
> > +                     ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
> > +
> > +     return pmcr | ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
> >  }
>
>
> --
> Thanks,
> Oliver

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v7 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on vCPU reset
  2023-10-17  9:23       ` Eric Auger
@ 2023-10-17 16:59         ` Raghavendra Rao Ananta
  2023-10-18 21:16           ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-17 16:59 UTC (permalink / raw)
  To: Eric Auger
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

Hi Eric,
On Tue, Oct 17, 2023 at 2:23 AM Eric Auger <eauger@redhat.com> wrote:
>
> Hi,
> On 10/16/23 23:28, Raghavendra Rao Ananta wrote:
> > On Mon, Oct 16, 2023 at 12:45 PM Eric Auger <eauger@redhat.com> wrote:
> >>
> >> Hi Raghavendra,
> >>
> >> On 10/10/23 01:08, Raghavendra Rao Ananta wrote:
> >>> From: Reiji Watanabe <reijiw@google.com>
> >>>
> >>> On vCPU reset, PMCNTEN{SET,CLR}_EL0, PMINTEN{SET,CLR}_EL1, and
> >>> PMOVS{SET,CLR}_EL1 for a vCPU are reset by reset_pmu_reg().
> >> PMOVS{SET,CLR}_EL0?
> > Ah, yes. It should be PMOVS{SET,CLR}_EL0.
> >
> >>> This function clears RAZ bits of those registers corresponding
> >>> to unimplemented event counters on the vCPU, and sets bits
> >>> corresponding to implemented event counters to a predefined
> >>> pseudo UNKNOWN value (some bits are set to 1).
> >>>
> >>> The function identifies (un)implemented event counters on the
> >>> vCPU based on the PMCR_EL0.N value on the host. Using the host
> >>> value for this would be problematic when KVM supports letting
> >>> userspace set PMCR_EL0.N to a value different from the host value
> >>> (some of the RAZ bits of those registers could end up being set to 1).
> >>>
> >>> Fix this by clearing the registers so that it can ensure
> >>> that all the RAZ bits are cleared even when the PMCR_EL0.N value
> >>> for the vCPU is different from the host value. Use reset_val() to
> >>> do this instead of fixing reset_pmu_reg(), and remove
> >>> reset_pmu_reg(), as it is no longer used.
> >> do you intend to restore the 'unknown' behavior at some point?
> >>
> > I believe Reiji's (the original author's) intention was to keep them
> > cleared, which would still count as UNKNOWN behavior. Do you think
> > there's an issue with this?
> Then why do we bother using reset_unknown in the other places if
> clearing the bits is enough here?
>
Hmm. Good point. I can bring back reset_unknown to keep the original behavior.

Thank you.
Raghavendra
> Thanks
>
> Eric
> >
> > Thank you.
> > Raghavendra
> >> Thanks
> >>
> >> Eric
> >>>
> >>> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> >>> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> >>> ---
> >>>  arch/arm64/kvm/sys_regs.c | 21 +--------------------
> >>>  1 file changed, 1 insertion(+), 20 deletions(-)
> >>>
> >>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> >>> index 818a52e257ed..3dbb7d276b0e 100644
> >>> --- a/arch/arm64/kvm/sys_regs.c
> >>> +++ b/arch/arm64/kvm/sys_regs.c
> >>> @@ -717,25 +717,6 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
> >>>       return REG_HIDDEN;
> >>>  }
> >>>
> >>> -static u64 reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> >>> -{
> >>> -     u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
> >>> -
> >>> -     /* No PMU available, any PMU reg may UNDEF... */
> >>> -     if (!kvm_arm_support_pmu_v3())
> >>> -             return 0;
> >>> -
> >>> -     n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
> >>> -     n &= ARMV8_PMU_PMCR_N_MASK;
> >>> -     if (n)
> >>> -             mask |= GENMASK(n - 1, 0);
> >>> -
> >>> -     reset_unknown(vcpu, r);
> >>> -     __vcpu_sys_reg(vcpu, r->reg) &= mask;
> >>> -
> >>> -     return __vcpu_sys_reg(vcpu, r->reg);
> >>> -}
> >>> -
> >>>  static u64 reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> >>>  {
> >>>       reset_unknown(vcpu, r);
> >>> @@ -1115,7 +1096,7 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> >>>         trap_wcr, reset_wcr, 0, 0,  get_wcr, set_wcr }
> >>>
> >>>  #define PMU_SYS_REG(name)                                            \
> >>> -     SYS_DESC(SYS_##name), .reset = reset_pmu_reg,                   \
> >>> +     SYS_DESC(SYS_##name), .reset = reset_val,                       \
> >>>       .visibility = pmu_visibility
> >>>
> >>>  /* Macro to expand the PMEVCNTRn_EL0 register */
> >>
> >
>

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v7 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test
  2023-10-17 14:51   ` Eric Auger
@ 2023-10-17 17:07     ` Raghavendra Rao Ananta
  0 siblings, 0 replies; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-17 17:07 UTC (permalink / raw)
  To: Eric Auger
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

Hi Eric,

On Tue, Oct 17, 2023 at 7:51 AM Eric Auger <eauger@redhat.com> wrote:
>
> Hi Raghavendra,
> On 10/10/23 01:08, Raghavendra Rao Ananta wrote:
> > From: Reiji Watanabe <reijiw@google.com>
> >
> > Introduce vpmu_counter_access test for arm64 platforms.
> > The test configures PMUv3 for a vCPU, sets PMCR_EL0.N for the vCPU,
> > and checks whether the guest can consistently see the same number of
> > PMU event counters (PMCR_EL0.N) that userspace sets.
> > This test case is done with each of the PMCR_EL0.N values from
> > 0 to 31 (With the PMCR_EL0.N values greater than the host value,
> > the test expects KVM_SET_ONE_REG for the PMCR_EL0 to fail).
> >
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> >  tools/testing/selftests/kvm/Makefile          |   1 +
> >  .../kvm/aarch64/vpmu_counter_access.c         | 247 ++++++++++++++++++
> >  2 files changed, 248 insertions(+)
> >  create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> >
> > diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> > index a3bb36fb3cfc..416700aa196c 100644
> > --- a/tools/testing/selftests/kvm/Makefile
> > +++ b/tools/testing/selftests/kvm/Makefile
> > @@ -149,6 +149,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/smccc_filter
> >  TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
> >  TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
> >  TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
> > +TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
> >  TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
> >  TEST_GEN_PROGS_aarch64 += demand_paging_test
> >  TEST_GEN_PROGS_aarch64 += dirty_log_test
> > diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > new file mode 100644
> > index 000000000000..58949b17d76e
> > --- /dev/null
> > +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > @@ -0,0 +1,247 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/*
> > + * vpmu_counter_access - Test vPMU event counter access
> > + *
> > + * Copyright (c) 2022 Google LLC.
> 2023 ;-)
Will fix in v8.
> > + *
> > + * This test checks if the guest can see the same number of the PMU event
> > + * counters (PMCR_EL0.N) that userspace sets.
> > + * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
> > + */
> > +#include <kvm_util.h>
> > +#include <processor.h>
> > +#include <test_util.h>
> > +#include <vgic.h>
> > +#include <perf/arm_pmuv3.h>
> > +#include <linux/bitfield.h>
> > +
> > +/* The max number of the PMU event counters (excluding the cycle counter) */
> > +#define ARMV8_PMU_MAX_GENERAL_COUNTERS       (ARMV8_PMU_MAX_COUNTERS - 1)
> > +
> > +struct vpmu_vm {
> > +     struct kvm_vm *vm;
> > +     struct kvm_vcpu *vcpu;
> > +     int gic_fd;
> > +};
> > +
> > +static struct vpmu_vm vpmu_vm;
> > +
> > +static uint64_t get_pmcr_n(uint64_t pmcr)
> > +{
> > +     return (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
> > +}
> > +
> > +static void set_pmcr_n(uint64_t *pmcr, uint64_t pmcr_n)
> > +{
> > +     *pmcr = *pmcr & ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
> > +     *pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
> > +}
> > +
> > +static void guest_sync_handler(struct ex_regs *regs)
> > +{
> > +     uint64_t esr, ec;
> > +
> > +     esr = read_sysreg(esr_el1);
> > +     ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;
> > +     __GUEST_ASSERT(0, "PC: 0x%lx; ESR: 0x%lx; EC: 0x%lx", regs->pc, esr, ec);
> > +}
> > +
> > +/*
> > + * The guest is configured with PMUv3 with @expected_pmcr_n number of
> > + * event counters.
> > + * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
> > + */
> > +static void guest_code(uint64_t expected_pmcr_n)
> > +{
> > +     uint64_t pmcr, pmcr_n;
> > +
> > +     __GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS,
> > +                     "Expected PMCR.N: 0x%lx; ARMv8 general counters: 0x%lx",
> > +                     expected_pmcr_n, ARMV8_PMU_MAX_GENERAL_COUNTERS);
> > +
> > +     pmcr = read_sysreg(pmcr_el0);
> > +     pmcr_n = get_pmcr_n(pmcr);
> > +
> > +     /* Make sure that PMCR_EL0.N indicates the value userspace set */
> > +     __GUEST_ASSERT(pmcr_n == expected_pmcr_n,
> > +                     "Expected PMCR.N: 0x%lx, PMCR.N: 0x%lx",
> > +                     pmcr_n, expected_pmcr_n);
> > +
> > +     GUEST_DONE();
> > +}
> > +
> > +#define GICD_BASE_GPA        0x8000000ULL
> > +#define GICR_BASE_GPA        0x80A0000ULL
> > +
> > +/* Create a VM that has one vCPU with PMUv3 configured. */
> > +static void create_vpmu_vm(void *guest_code)
> > +{
> > +     struct kvm_vcpu_init init;
> > +     uint8_t pmuver, ec;
> > +     uint64_t dfr0, irq = 23;
> > +     struct kvm_device_attr irq_attr = {
> > +             .group = KVM_ARM_VCPU_PMU_V3_CTRL,
> > +             .attr = KVM_ARM_VCPU_PMU_V3_IRQ,
> > +             .addr = (uint64_t)&irq,
> > +     };
> > +     struct kvm_device_attr init_attr = {
> > +             .group = KVM_ARM_VCPU_PMU_V3_CTRL,
> > +             .attr = KVM_ARM_VCPU_PMU_V3_INIT,
> > +     };
> > +
> > +     /* The test creates the vpmu_vm multiple times. Ensure a clean state */
> > +     memset(&vpmu_vm, 0, sizeof(vpmu_vm));
> > +
> > +     vpmu_vm.vm = vm_create(1);
> > +     vm_init_descriptor_tables(vpmu_vm.vm);
> > +     for (ec = 0; ec < ESR_EC_NUM; ec++) {
> > +             vm_install_sync_handler(vpmu_vm.vm, VECTOR_SYNC_CURRENT, ec,
> > +                                     guest_sync_handler);
> > +     }
> > +
> > +     /* Create vCPU with PMUv3 */
> > +     vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
> > +     init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
> > +     vpmu_vm.vcpu = aarch64_vcpu_add(vpmu_vm.vm, 0, &init, guest_code);
> > +     vcpu_init_descriptor_tables(vpmu_vm.vcpu);
> > +     vpmu_vm.gic_fd = vgic_v3_setup(vpmu_vm.vm, 1, 64,
> > +                                     GICD_BASE_GPA, GICR_BASE_GPA);
> __TEST_REQUIRE(vpmu_vm.gic_fd >= 0, "Failed to create vgic-v3, skipping");
> as done in some other tests
>
I'll add this in v8.
> > +
> > +     /* Make sure that PMUv3 support is indicated in the ID register */
> > +     vcpu_get_reg(vpmu_vm.vcpu,
> > +                  KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
> > +     pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
> > +     TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
> > +                 pmuver >= ID_AA64DFR0_PMUVER_8_0,
> > +                 "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
> > +
> > +     /* Initialize vPMU */
> > +     vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
> > +     vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
> > +}
> > +
> > +static void destroy_vpmu_vm(void)
> > +{
> > +     close(vpmu_vm.gic_fd);
> > +     kvm_vm_free(vpmu_vm.vm);
> > +}
> > +
> > +static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
> > +{
> > +     struct ucall uc;
> > +
> > +     vcpu_args_set(vcpu, 1, pmcr_n);
> > +     vcpu_run(vcpu);
> > +     switch (get_ucall(vcpu, &uc)) {
> > +     case UCALL_ABORT:
> > +             REPORT_GUEST_ASSERT(uc);
> > +             break;
> > +     case UCALL_DONE:
> > +             break;
> > +     default:
> > +             TEST_FAIL("Unknown ucall %lu", uc.cmd);
> > +             break;
> > +     }
> > +}
> > +
> > +/*
> > + * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
> > + * and run the test.
> > + */
> > +static void run_test(uint64_t pmcr_n)
> > +{
> > +     struct kvm_vcpu *vcpu;
> > +     uint64_t sp, pmcr;
> > +     struct kvm_vcpu_init init;
> > +
> > +     pr_debug("Test with pmcr_n %lu\n", pmcr_n);
> > +     create_vpmu_vm(guest_code);
> > +
> > +     vcpu = vpmu_vm.vcpu;
> > +
> > +     /* Save the initial sp to restore them later to run the guest again */
> > +     vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
> > +
> > +     /* Update the PMCR_EL0.N with @pmcr_n */
> > +     vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
> > +     set_pmcr_n(&pmcr, pmcr_n);
> > +     vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
> > +
> > +     run_vcpu(vcpu, pmcr_n);
> > +
> > +     /*
> > +      * Reset and re-initialize the vCPU, and run the guest code again to
> > +      * check if PMCR_EL0.N is preserved.
> > +      */
> > +     vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
> > +     init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
> > +     aarch64_vcpu_setup(vcpu, &init);
> > +     vcpu_init_descriptor_tables(vcpu);
> > +     vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
> > +     vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
> > +
> > +     run_vcpu(vcpu, pmcr_n);
> > +
> > +     destroy_vpmu_vm();
> > +}
> > +
> > +/*
> > + * Create a guest with one vCPU, and attempt to set the PMCR_EL0.N for
> > + * the vCPU to @pmcr_n, which is larger than the host value.
> > + * The write should succeed, but must not modify PMCR_EL0.N, as
> > + * @pmcr_n is too big for the vCPU.
> > + */
> > +static void run_error_test(uint64_t pmcr_n)
> > +{
> > +     struct kvm_vcpu *vcpu;
> > +     uint64_t pmcr, pmcr_orig;
> > +
> > +     pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
> > +     create_vpmu_vm(guest_code);
> > +     vcpu = vpmu_vm.vcpu;
> > +
> > +     vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
> > +     pmcr = pmcr_orig;
> > +
> > +     /*
> > +      * Setting a larger value of PMCR.N should not modify the field, and
> > +      * should return success.
> > +      */
> > +     set_pmcr_n(&pmcr, pmcr_n);
> > +     vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
> > +     vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
> > +     TEST_ASSERT(pmcr_orig == pmcr,
> > +                 "PMCR.N modified by KVM to a larger value (PMCR: 0x%lx) for pmcr_n: 0x%lx\n",
> > +                 pmcr, pmcr_n);
> nit: you could introduce a set_pmcr_n() routine  which creates the
> vpmu_vm and set the PMCR.N and check whether the setting is applied. An
> arg could tell the helper whether this is supposed to fail. This could
> be used in both run_error_test and run_test which both mostly use the
> same code.
Good idea. I'll think about it.

Thank you.
Raghavendra
> > +
> > +     destroy_vpmu_vm();
> > +}
> > +
> > +/*
> > + * Return the default number of implemented PMU event counters excluding
> > + * the cycle counter (i.e. PMCR_EL0.N value) for the guest.
> > + */
> > +static uint64_t get_pmcr_n_limit(void)
> > +{
> > +     uint64_t pmcr;
> > +
> > +     create_vpmu_vm(guest_code);
> > +     vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
> > +     destroy_vpmu_vm();
> > +     return get_pmcr_n(pmcr);
> > +}
> > +
> > +int main(void)
> > +{
> > +     uint64_t i, pmcr_n;
> > +
> > +     TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
> > +
> > +     pmcr_n = get_pmcr_n_limit();
> > +     for (i = 0; i <= pmcr_n; i++)
> > +             run_test(i);
> > +
> > +     for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
> > +             run_error_test(i);
> > +
> > +     return 0;
> > +}
>
> Besides this looks good to me.
>
> Thanks
>
> Eric
>

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v7 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU
  2023-10-17 16:58             ` Raghavendra Rao Ananta
@ 2023-10-17 17:09               ` Oliver Upton
  2023-10-17 17:25                 ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 60+ messages in thread
From: Oliver Upton @ 2023-10-17 17:09 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Sebastian Ott, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Tue, Oct 17, 2023 at 09:58:08AM -0700, Raghavendra Rao Ananta wrote:
> On Mon, Oct 16, 2023 at 10:52 PM Oliver Upton <oliver.upton@linux.dev> wrote:
> >
> > On Mon, Oct 16, 2023 at 02:35:52PM -0700, Raghavendra Rao Ananta wrote:
> >
> > [...]
> >
> > > > What's the point of doing this in the first place? The implementation of
> > > > kvm_vcpu_read_pmcr() is populating PMCR_EL0.N using the VM-scoped value.
> > > >
> > > I guess originally the change replaced read_sysreg(pmcr_el0) with
> > > kvm_vcpu_read_pmcr(vcpu) to maintain consistency with others.
> > > But if you and Sebastian feel that it's an overkill and directly
> > > getting the value via vcpu->kvm->arch.pmcr_n is more readable, I'm
> > > happy to make the change.
> >
> > No, I'd rather you delete the line initializing PMCR_EL0.N altogether.
> > reset_pmcr() tries to initialize the field, but your
> > kvm_vcpu_read_pmcr() winds up replacing it with pmcr_n.
> >
> I didn't get this comment. We still do initialize pmcr, but using the
> pmcr.n read via kvm_vcpu_read_pmcr() instead of the actual system
> register.

You have two bits of code trying to do the exact same thing:

 1) reset_pmcr() initializes __vcpu_sys_reg(vcpu, PMCR_EL0) with the N
    field set up.

 2) kvm_vcpu_read_pmcr() takes whatever is in __vcpu_sys_reg(vcpu, PMCR_EL0),
    *masks out* the N field and re-initializes it with vcpu->kvm->arch.pmcr_n

Why do you need (1) if you do (2)?

-- 
Thanks,
Oliver


* Re: [PATCH v7 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test
  2023-10-17 15:48   ` Sebastian Ott
@ 2023-10-17 17:10     ` Raghavendra Rao Ananta
  0 siblings, 0 replies; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-17 17:10 UTC (permalink / raw)
  To: Sebastian Ott
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Tue, Oct 17, 2023 at 8:48 AM Sebastian Ott <sebott@redhat.com> wrote:
>
> On Mon, 9 Oct 2023, Raghavendra Rao Ananta wrote:
> > +static void guest_code(uint64_t expected_pmcr_n)
> > +{
> > +     uint64_t pmcr, pmcr_n;
> > +
> > +     __GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS,
> > +                     "Expected PMCR.N: 0x%lx; ARMv8 general counters: 0x%lx",
> > +                     expected_pmcr_n, ARMV8_PMU_MAX_GENERAL_COUNTERS);
> > +
> > +     pmcr = read_sysreg(pmcr_el0);
> > +     pmcr_n = get_pmcr_n(pmcr);
> > +
> > +     /* Make sure that PMCR_EL0.N indicates the value userspace set */
> > +     __GUEST_ASSERT(pmcr_n == expected_pmcr_n,
> > +                     "Expected PMCR.N: 0x%lx, PMCR.N: 0x%lx",
> > +                     pmcr_n, expected_pmcr_n);
>
> Expected vs read value is swapped.
>
Good catch! I'll fix this in v8.
>
> Also, since the kernel has special handling for this, should we add a
> test like below?
>
> +static void immutable_test(void)
> +{
> +       struct kvm_vcpu *vcpu;
> +       uint64_t sp, pmcr, pmcr_n;
> +       struct kvm_vcpu_init init;
> +
> +       create_vpmu_vm(guest_code);
> +
> +       vcpu = vpmu_vm.vcpu;
> +
> +       /* Save the initial sp to restore it later to run the guest again */
> +       vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
> +
> +       vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
> +       pmcr_n = get_pmcr_n(pmcr);
> +
> +       run_vcpu(vcpu, pmcr_n);
> +
> +       vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
> +       init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
> +       aarch64_vcpu_setup(vcpu, &init);
> +       vcpu_init_descriptor_tables(vcpu);
> +       vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
> +       vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
> +
> +       /* Update the PMCR_EL0.N after the VM ran once */
> +       set_pmcr_n(&pmcr, 0);
> +       vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
> +
> +       /* Verify that the guest still gets the unmodified value */
> +       run_vcpu(vcpu, pmcr_n);
> +
> +       destroy_vpmu_vm();
> +}
Thanks for the suggestion! I'll add this test case in v8.

- Raghavendra
>


* Re: [PATCH v7 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU
  2023-10-17 17:09               ` Oliver Upton
@ 2023-10-17 17:25                 ` Raghavendra Rao Ananta
  2023-10-17 18:10                   ` Oliver Upton
  0 siblings, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-17 17:25 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Sebastian Ott, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Tue, Oct 17, 2023 at 10:09 AM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Tue, Oct 17, 2023 at 09:58:08AM -0700, Raghavendra Rao Ananta wrote:
> > On Mon, Oct 16, 2023 at 10:52 PM Oliver Upton <oliver.upton@linux.dev> wrote:
> > >
> > > On Mon, Oct 16, 2023 at 02:35:52PM -0700, Raghavendra Rao Ananta wrote:
> > >
> > > [...]
> > >
> > > > > What's the point of doing this in the first place? The implementation of
> > > > > kvm_vcpu_read_pmcr() is populating PMCR_EL0.N using the VM-scoped value.
> > > > >
> > > > I guess originally the change replaced read_sysreg(pmcr_el0) with
> > > > kvm_vcpu_read_pmcr(vcpu) to maintain consistency with others.
> > > > But if you and Sebastian feel that it's an overkill and directly
> > > > getting the value via vcpu->kvm->arch.pmcr_n is more readable, I'm
> > > > happy to make the change.
> > >
> > > No, I'd rather you delete the line initializing PMCR_EL0.N altogether.
> > > reset_pmcr() tries to initialize the field, but your
> > > kvm_vcpu_read_pmcr() winds up replacing it with pmcr_n.
> > >
> > I didn't get this comment. We still do initialize pmcr, but using the
> > pmcr.n read via kvm_vcpu_read_pmcr() instead of the actual system
> > register.
>
> You have two bits of code trying to do the exact same thing:
>
>  1) reset_pmcr() initializes __vcpu_sys_reg(vcpu, PMCR_EL0) with the N
>     field set up.
>
>  2) kvm_vcpu_read_pmcr() takes whatever is in __vcpu_sys_reg(vcpu, PMCR_EL0),
>     *masks out* the N field and re-initializes it with vcpu->kvm->arch.pmcr_n
>
> Why do you need (1) if you do (2)?
>
Okay, I see what you mean now. In that case, let reset_pmcr():
> - Initialize 'pmcr' using vcpu->kvm->arch.pmcr_n
- Set ARMV8_PMU_PMCR_LC as appropriate in 'pmcr'
- Write 'pmcr' to the vcpu reg

From here on out, kvm_vcpu_read_pmcr() would read off of this
initialized value, unless of course, userspace updates the pmcr.n.
Is this the flow that you were suggesting?

Thank you.
Raghavendra

> --
> Thanks,
> Oliver


* Re: [PATCH v7 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU
  2023-10-17 17:25                 ` Raghavendra Rao Ananta
@ 2023-10-17 18:10                   ` Oliver Upton
  2023-10-17 18:45                     ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 60+ messages in thread
From: Oliver Upton @ 2023-10-17 18:10 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Sebastian Ott, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Tue, Oct 17, 2023 at 10:25:50AM -0700, Raghavendra Rao Ananta wrote:
> On Tue, Oct 17, 2023 at 10:09 AM Oliver Upton <oliver.upton@linux.dev> wrote:
> >
> > On Tue, Oct 17, 2023 at 09:58:08AM -0700, Raghavendra Rao Ananta wrote:
> > > On Mon, Oct 16, 2023 at 10:52 PM Oliver Upton <oliver.upton@linux.dev> wrote:
> > > >
> > > > On Mon, Oct 16, 2023 at 02:35:52PM -0700, Raghavendra Rao Ananta wrote:
> > > >
> > > > [...]
> > > >
> > > > > > What's the point of doing this in the first place? The implementation of
> > > > > > kvm_vcpu_read_pmcr() is populating PMCR_EL0.N using the VM-scoped value.
> > > > > >
> > > > > I guess originally the change replaced read_sysreg(pmcr_el0) with
> > > > > kvm_vcpu_read_pmcr(vcpu) to maintain consistency with others.
> > > > > But if you and Sebastian feel that it's an overkill and directly
> > > > > getting the value via vcpu->kvm->arch.pmcr_n is more readable, I'm
> > > > > happy to make the change.
> > > >
> > > > No, I'd rather you delete the line initializing PMCR_EL0.N altogether.
> > > > reset_pmcr() tries to initialize the field, but your
> > > > kvm_vcpu_read_pmcr() winds up replacing it with pmcr_n.
> > > >
> > > I didn't get this comment. We still do initialize pmcr, but using the
> > > pmcr.n read via kvm_vcpu_read_pmcr() instead of the actual system
> > > register.
> >
> > You have two bits of code trying to do the exact same thing:
> >
> >  1) reset_pmcr() initializes __vcpu_sys_reg(vcpu, PMCR_EL0) with the N
> >     field set up.
> >
> >  2) kvm_vcpu_read_pmcr() takes whatever is in __vcpu_sys_reg(vcpu, PMCR_EL0),
> >     *masks out* the N field and re-initializes it with vcpu->kvm->arch.pmcr_n
> >
> > Why do you need (1) if you do (2)?
> >
> Okay, I see what you mean now. In that case, let reset_pmcr():
> - Initialize 'pmcr' using  vcpu->kvm->arch.pmcr_n
> - Set ARMV8_PMU_PMCR_LC as appropriate in 'pmcr'
> - Write 'pmcr' to the vcpu reg
> 
> From here on out, kvm_vcpu_read_pmcr() would read off of this
> initialized value, unless of course, userspace updates the pmcr.n.
> Is this the flow that you were suggesting?

Just squash this in:

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index d1db1f292645..7b54c7843bef 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -743,10 +743,8 @@ static u64 reset_pmselr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 
 static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
-	u64 pmcr;
+	u64 pmcr = 0;
 
-	/* Only preserve PMCR_EL0.N, and reset the rest to 0 */
-	pmcr = kvm_vcpu_read_pmcr(vcpu) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
 	if (!kvm_supports_32bit_el0())
 		pmcr |= ARMV8_PMU_PMCR_LC;
 

-- 
Thanks,
Oliver


* Re: [PATCH v7 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU
  2023-10-17 18:10                   ` Oliver Upton
@ 2023-10-17 18:45                     ` Raghavendra Rao Ananta
  0 siblings, 0 replies; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-17 18:45 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Sebastian Ott, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Tue, Oct 17, 2023 at 11:11 AM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Tue, Oct 17, 2023 at 10:25:50AM -0700, Raghavendra Rao Ananta wrote:
> > On Tue, Oct 17, 2023 at 10:09 AM Oliver Upton <oliver.upton@linux.dev> wrote:
> > >
> > > On Tue, Oct 17, 2023 at 09:58:08AM -0700, Raghavendra Rao Ananta wrote:
> > > > On Mon, Oct 16, 2023 at 10:52 PM Oliver Upton <oliver.upton@linux.dev> wrote:
> > > > >
> > > > > On Mon, Oct 16, 2023 at 02:35:52PM -0700, Raghavendra Rao Ananta wrote:
> > > > >
> > > > > [...]
> > > > >
> > > > > > > What's the point of doing this in the first place? The implementation of
> > > > > > > kvm_vcpu_read_pmcr() is populating PMCR_EL0.N using the VM-scoped value.
> > > > > > >
> > > > > > I guess originally the change replaced read_sysreg(pmcr_el0) with
> > > > > > kvm_vcpu_read_pmcr(vcpu) to maintain consistency with others.
> > > > > > But if you and Sebastian feel that it's an overkill and directly
> > > > > > getting the value via vcpu->kvm->arch.pmcr_n is more readable, I'm
> > > > > > happy to make the change.
> > > > >
> > > > > No, I'd rather you delete the line initializing PMCR_EL0.N altogether.
> > > > > reset_pmcr() tries to initialize the field, but your
> > > > > kvm_vcpu_read_pmcr() winds up replacing it with pmcr_n.
> > > > >
> > > > I didn't get this comment. We still do initialize pmcr, but using the
> > > > pmcr.n read via kvm_vcpu_read_pmcr() instead of the actual system
> > > > register.
> > >
> > > You have two bits of code trying to do the exact same thing:
> > >
> > >  1) reset_pmcr() initializes __vcpu_sys_reg(vcpu, PMCR_EL0) with the N
> > >     field set up.
> > >
> > >  2) kvm_vcpu_read_pmcr() takes whatever is in __vcpu_sys_reg(vcpu, PMCR_EL0),
> > >     *masks out* the N field and re-initializes it with vcpu->kvm->arch.pmcr_n
> > >
> > > Why do you need (1) if you do (2)?
> > >
> > Okay, I see what you mean now. In that case, let reset_pmcr():
> > - Initialize 'pmcr' using  vcpu->kvm->arch.pmcr_n
> > - Set ARMV8_PMU_PMCR_LC as appropriate in 'pmcr'
> > - Write 'pmcr' to the vcpu reg
> >
> > From here on out, kvm_vcpu_read_pmcr() would read off of this
> > initialized value, unless of course, userspace updates the pmcr.n.
> > Is this the flow that you were suggesting?
>
> Just squash this in:
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index d1db1f292645..7b54c7843bef 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -743,10 +743,8 @@ static u64 reset_pmselr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>
>  static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>  {
> -       u64 pmcr;
> +       u64 pmcr = 0;
>
> -       /* Only preserve PMCR_EL0.N, and reset the rest to 0 */
> -       pmcr = kvm_vcpu_read_pmcr(vcpu) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
>         if (!kvm_supports_32bit_el0())
>                 pmcr |= ARMV8_PMU_PMCR_LC;
>
>
Oh, I get the redundancy that you were suggesting to get rid of!
Thanks for the diff. It helped.

- Raghavendra
> --
> Thanks,
> Oliver


* Re: [PATCH v7 11/12] KVM: selftests: aarch64: vPMU register test for implemented counters
  2023-10-09 23:08 ` [PATCH v7 11/12] KVM: selftests: aarch64: vPMU register test for implemented counters Raghavendra Rao Ananta
@ 2023-10-17 18:54   ` Eric Auger
  2023-10-17 21:42     ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 60+ messages in thread
From: Eric Auger @ 2023-10-17 18:54 UTC (permalink / raw)
  To: Raghavendra Rao Ananta, Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Raghavendra,

On 10/10/23 01:08, Raghavendra Rao Ananta wrote:
> From: Reiji Watanabe <reijiw@google.com>
> 
> Add a new test case to the vpmu_counter_access test to check if PMU
> registers or their bits for implemented counters on the vCPU are
> readable/writable as expected, and can be programmed to count events.
>
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  .../kvm/aarch64/vpmu_counter_access.c         | 270 +++++++++++++++++-
>  1 file changed, 268 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> index 58949b17d76e..e92af3c0db03 100644
> --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> @@ -5,7 +5,8 @@
>   * Copyright (c) 2022 Google LLC.
>   *
>   * This test checks if the guest can see the same number of the PMU event
> - * counters (PMCR_EL0.N) that userspace sets.
> + * counters (PMCR_EL0.N) that userspace sets, and if the guest can access
> + * those counters.
>   * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
>   */
>  #include <kvm_util.h>
> @@ -37,6 +38,259 @@ static void set_pmcr_n(uint64_t *pmcr, uint64_t pmcr_n)
>  	*pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
>  }
>  
> +/* Read PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
> +static inline unsigned long read_sel_evcntr(int sel)
> +{
> +	write_sysreg(sel, pmselr_el0);
> +	isb();
> +	return read_sysreg(pmxevcntr_el0);
> +}> +
> +/* Write PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
> +static inline void write_sel_evcntr(int sel, unsigned long val)
> +{
> +	write_sysreg(sel, pmselr_el0);
> +	isb();
> +	write_sysreg(val, pmxevcntr_el0);
> +	isb();
> +}
> +
> +/* Read PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */
> +static inline unsigned long read_sel_evtyper(int sel)
> +{
> +	write_sysreg(sel, pmselr_el0);
> +	isb();
> +	return read_sysreg(pmxevtyper_el0);
> +}
> +
> +/* Write PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */
> +static inline void write_sel_evtyper(int sel, unsigned long val)
> +{
> +	write_sysreg(sel, pmselr_el0);
> +	isb();
> +	write_sysreg(val, pmxevtyper_el0);
> +	isb();
> +}
> +
> +static inline void enable_counter(int idx)
> +{
> +	uint64_t v = read_sysreg(pmcntenset_el0);
> +
> +	write_sysreg(BIT(idx) | v, pmcntenset_el0);
> +	isb();
> +}
> +
> +static inline void disable_counter(int idx)
> +{
> > +     /* Writes of 1 to PMCNTENCLR clear that bit; 0s are ignored */
> > +     write_sysreg(BIT(idx), pmcntenclr_el0);
> +	isb();
> +}
> +
> +static void pmu_disable_reset(void)
> +{
> +	uint64_t pmcr = read_sysreg(pmcr_el0);
> +
> +	/* Reset all counters, disabling them */
> +	pmcr &= ~ARMV8_PMU_PMCR_E;
> +	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
> +	isb();
> +> +
> +#define RETURN_READ_PMEVCNTRN(n) \
> +	return read_sysreg(pmevcntr##n##_el0)
> +static unsigned long read_pmevcntrn(int n)
> +{
> +	PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
> +	return 0;
> +}
> +
> +#define WRITE_PMEVCNTRN(n) \
> +	write_sysreg(val, pmevcntr##n##_el0)
> +static void write_pmevcntrn(int n, unsigned long val)
> +{
> +	PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
> +	isb();
> +}
> +
> +#define READ_PMEVTYPERN(n) \
> +	return read_sysreg(pmevtyper##n##_el0)
> +static unsigned long read_pmevtypern(int n)
> +{
> +	PMEVN_SWITCH(n, READ_PMEVTYPERN);
> +	return 0;
> +}
> +
> +#define WRITE_PMEVTYPERN(n) \
> +	write_sysreg(val, pmevtyper##n##_el0)
> +static void write_pmevtypern(int n, unsigned long val)
> +{
> +	PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
> +	isb();
> +}
> +
> +/*
> + * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}<n>_EL0
> + * accessors that test cases will use. Each of the accessors will
> + * either directly reads/writes PMEVT{CNTR,TYPER}<n>_EL0
> + * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through
> + * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
> + *
> + * This is used to test that combinations of those accessors provide
> > + * consistent behavior.
> + */
> +struct pmc_accessor {
> +	/* A function to be used to read PMEVTCNTR<n>_EL0 */
> +	unsigned long	(*read_cntr)(int idx);
> +	/* A function to be used to write PMEVTCNTR<n>_EL0 */
> +	void		(*write_cntr)(int idx, unsigned long val);
> +	/* A function to be used to read PMEVTYPER<n>_EL0 */
> +	unsigned long	(*read_typer)(int idx);
> +	/* A function to be used to write PMEVTYPER<n>_EL0 */
> +	void		(*write_typer)(int idx, unsigned long val);
> +};
> +
> +struct pmc_accessor pmc_accessors[] = {
> +	/* test with all direct accesses */
> +	{ read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern },
> +	/* test with all indirect accesses */
> +	{ read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper },
> +	/* read with direct accesses, and write with indirect accesses */
> +	{ read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper },
> +	/* read with indirect accesses, and write with direct accesses */
> +	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
> +};
What is the rationale behind testing both direct and indirect accesses
and their combinations? I think this would deserve some
comments/justification.
> +
> +/*
> + * Convert a pointer of pmc_accessor to an index in pmc_accessors[],
> + * assuming that the pointer is one of the entries in pmc_accessors[].
> + */
> +#define PMC_ACC_TO_IDX(acc)	(acc - &pmc_accessors[0])
> +
> +#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected)			 \
> +{										 \
> +	uint64_t _tval = read_sysreg(regname);					 \
> +										 \
> +	if (set_expected)							 \
> +		__GUEST_ASSERT((_tval & mask),					 \
> +				"tval: 0x%lx; mask: 0x%lx; set_expected: 0x%lx", \
> +				_tval, mask, set_expected);			 \
> +	else									 \
> +		__GUEST_ASSERT(!(_tval & mask),					 \
> +				"tval: 0x%lx; mask: 0x%lx; set_expected: 0x%lx", \
> +				_tval, mask, set_expected);			 \
> +}
> +
> +/*
> + * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
> + * are set or cleared as specified in @set_expected.
> + */
> +static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected)
> +{
> +	GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected);
> +	GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected);
> +	GUEST_ASSERT_BITMAP_REG(pmintenset_el1, mask, set_expected);
> +	GUEST_ASSERT_BITMAP_REG(pmintenclr_el1, mask, set_expected);
> +	GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected);
> +	GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected);
> +}
> +
> +/*
> + * Check if the bit in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers corresponding
> + * to the specified counter (@pmc_idx) can be read/written as expected.
> + * When @set_op is true, it tries to set the bit for the counter in
> + * those registers by writing the SET registers (the bit won't be set
> + * if the counter is not implemented though).
> + * Otherwise, it tries to clear the bits in the registers by writing
> + * the CLR registers.
> + * Then, it checks if the values indicated in the registers are as expected.
> + */
> +static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
> +{
> +	uint64_t pmcr_n, test_bit = BIT(pmc_idx);
> +	bool set_expected = false;
> +
> +	if (set_op) {
> +		write_sysreg(test_bit, pmcntenset_el0);
> +		write_sysreg(test_bit, pmintenset_el1);
> +		write_sysreg(test_bit, pmovsset_el0);
> +
> +		/* The bit will be set only if the counter is implemented */
> +		pmcr_n = get_pmcr_n(read_sysreg(pmcr_el0));
> +		set_expected = (pmc_idx < pmcr_n) ? true : false;
> +	} else {
> +		write_sysreg(test_bit, pmcntenclr_el0);
> +		write_sysreg(test_bit, pmintenclr_el1);
> +		write_sysreg(test_bit, pmovsclr_el0);
> +	}
> +	check_bitmap_pmu_regs(test_bit, set_expected);
> +}
> +
> +/*
> + * Tests for reading/writing registers for the (implemented) event counter
> + * specified by @pmc_idx.
> + */
> +static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
> +{
> +	uint64_t write_data, read_data;
> +
> +	/* Disable all PMCs and reset all PMCs to zero. */
> +	pmu_disable_reset();
> +
> +
nit: double empty line
> +	/*
> +	 * Tests for reading/writing {PMCNTEN,PMINTEN,PMOVS}{SET,CLR}_EL1.
> +	 */
> +
> +	/* Make sure that the bit in those registers are set to 0 */
> +	test_bitmap_pmu_regs(pmc_idx, false);
> +	/* Test if setting the bit in those registers works */
> +	test_bitmap_pmu_regs(pmc_idx, true);
> +	/* Test if clearing the bit in those registers works */
> +	test_bitmap_pmu_regs(pmc_idx, false);
> +
> +
same here
> +	/*
> +	 * Tests for reading/writing the event type register.
> +	 */
> +
> +	read_data = acc->read_typer(pmc_idx);
not needed I think
> +	/*
> +	 * Set the event type register to an arbitrary value just for testing
> +	 * of reading/writing the register.
> +	 * ArmARM says that for the event from 0x0000 to 0x003F,
nit s/ArmARM/Arm ARM
> +	 * the value indicated in the PMEVTYPER<n>_EL0.evtCount field is
> +	 * the value written to the field even when the specified event
> +	 * is not supported.
> +	 */
> +	write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED);
> +	acc->write_typer(pmc_idx, write_data);
> +	read_data = acc->read_typer(pmc_idx);
> +	__GUEST_ASSERT(read_data == write_data,
> +		       "pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx; write_data: 0x%lx",
> +		       pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data);
> +
> +
> +	/*
> +	 * Tests for reading/writing the event count register.
> +	 */
> +
> +	read_data = acc->read_cntr(pmc_idx);
> +
> +	/* The count value must be 0, as it is not used after the reset */
s/not used/disabled and reset?
> +	__GUEST_ASSERT(read_data == 0,
> +		       "pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx",
> +		       pmc_idx, PMC_ACC_TO_IDX(acc), read_data);
> +
> +	write_data = read_data + pmc_idx + 0x12345;
> +	acc->write_cntr(pmc_idx, write_data);
> +	read_data = acc->read_cntr(pmc_idx);
> +	__GUEST_ASSERT(read_data == write_data,
> +		       "pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx; write_data: 0x%lx",
> +		       pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data);
> +}
> +
>  static void guest_sync_handler(struct ex_regs *regs)
>  {
>  	uint64_t esr, ec;
> @@ -49,11 +303,14 @@ static void guest_sync_handler(struct ex_regs *regs)
>  /*
>   * The guest is configured with PMUv3 with @expected_pmcr_n number of
>   * event counters.
> - * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
> + * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
> + * if reading/writing PMU registers for implemented counters can work
s/can work/works
> + * as expected.
>   */
>  static void guest_code(uint64_t expected_pmcr_n)
>  {
>  	uint64_t pmcr, pmcr_n;
> +	int i, pmc;
>  
>  	__GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS,
>  			"Expected PMCR.N: 0x%lx; ARMv8 general counters: 0x%lx",
> @@ -67,6 +324,15 @@ static void guest_code(uint64_t expected_pmcr_n)
>  			"Expected PMCR.N: 0x%lx, PMCR.N: 0x%lx",
>  			pmcr_n, expected_pmcr_n);
>  
> +	/*
> +	 * Tests for reading/writing PMU registers for implemented counters.
> +	 * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
> +	 */
> +	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
> +		for (pmc = 0; pmc < pmcr_n; pmc++)
> +			test_access_pmc_regs(&pmc_accessors[i], pmc);
> +	}
> +
>  	GUEST_DONE();
>  }
>  
Thanks

Eric



* Re: [PATCH v7 11/12] KVM: selftests: aarch64: vPMU register test for implemented counters
  2023-10-17 18:54   ` Eric Auger
@ 2023-10-17 21:42     ` Raghavendra Rao Ananta
  0 siblings, 0 replies; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-17 21:42 UTC (permalink / raw)
  To: Eric Auger
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

Hi Eric,

On Tue, Oct 17, 2023 at 11:54 AM Eric Auger <eauger@redhat.com> wrote:
>
> Hi Raghavendra,
>
> On 10/10/23 01:08, Raghavendra Rao Ananta wrote:
> > From: Reiji Watanabe <reijiw@google.com>
> >
> > Add a new test case to the vpmu_counter_access test to check if PMU
> > registers or their bits for implemented counters on the vCPU are
> > readable/writable as expected, and can be programmed to count events.
> >
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> >  .../kvm/aarch64/vpmu_counter_access.c         | 270 +++++++++++++++++-
> >  1 file changed, 268 insertions(+), 2 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > index 58949b17d76e..e92af3c0db03 100644
> > --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > @@ -5,7 +5,8 @@
> >   * Copyright (c) 2022 Google LLC.
> >   *
> >   * This test checks if the guest can see the same number of the PMU event
> > - * counters (PMCR_EL0.N) that userspace sets.
> > + * counters (PMCR_EL0.N) that userspace sets, and if the guest can access
> > + * those counters.
> >   * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
> >   */
> >  #include <kvm_util.h>
> > @@ -37,6 +38,259 @@ static void set_pmcr_n(uint64_t *pmcr, uint64_t pmcr_n)
> >       *pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
> >  }
> >
> > +/* Read PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
> > +static inline unsigned long read_sel_evcntr(int sel)
> > +{
> > +     write_sysreg(sel, pmselr_el0);
> > +     isb();
> > +     return read_sysreg(pmxevcntr_el0);
> > +}
> > +
> > +/* Write PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
> > +static inline void write_sel_evcntr(int sel, unsigned long val)
> > +{
> > +     write_sysreg(sel, pmselr_el0);
> > +     isb();
> > +     write_sysreg(val, pmxevcntr_el0);
> > +     isb();
> > +}
> > +
> > +/* Read PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */
> > +static inline unsigned long read_sel_evtyper(int sel)
> > +{
> > +     write_sysreg(sel, pmselr_el0);
> > +     isb();
> > +     return read_sysreg(pmxevtyper_el0);
> > +}
> > +
> > +/* Write PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */
> > +static inline void write_sel_evtyper(int sel, unsigned long val)
> > +{
> > +     write_sysreg(sel, pmselr_el0);
> > +     isb();
> > +     write_sysreg(val, pmxevtyper_el0);
> > +     isb();
> > +}
> > +
> > +static inline void enable_counter(int idx)
> > +{
> > +     uint64_t v = read_sysreg(pmcntenset_el0);
> > +
> > +     write_sysreg(BIT(idx) | v, pmcntenset_el0);
> > +     isb();
> > +}
> > +
> > +static inline void disable_counter(int idx)
> > +{
> > +     uint64_t v = read_sysreg(pmcntenset_el0);
> > +
> > +     write_sysreg(BIT(idx) | v, pmcntenclr_el0);
> > +     isb();
> > +}
> > +
> > +static void pmu_disable_reset(void)
> > +{
> > +     uint64_t pmcr = read_sysreg(pmcr_el0);
> > +
> > +     /* Reset all counters, disabling them */
> > +     pmcr &= ~ARMV8_PMU_PMCR_E;
> > +     write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
> > +     isb();
> > +}
> > +
> > +#define RETURN_READ_PMEVCNTRN(n) \
> > +     return read_sysreg(pmevcntr##n##_el0)
> > +static unsigned long read_pmevcntrn(int n)
> > +{
> > +     PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
> > +     return 0;
> > +}
> > +
> > +#define WRITE_PMEVCNTRN(n) \
> > +     write_sysreg(val, pmevcntr##n##_el0)
> > +static void write_pmevcntrn(int n, unsigned long val)
> > +{
> > +     PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
> > +     isb();
> > +}
> > +
> > +#define READ_PMEVTYPERN(n) \
> > +     return read_sysreg(pmevtyper##n##_el0)
> > +static unsigned long read_pmevtypern(int n)
> > +{
> > +     PMEVN_SWITCH(n, READ_PMEVTYPERN);
> > +     return 0;
> > +}
> > +
> > +#define WRITE_PMEVTYPERN(n) \
> > +     write_sysreg(val, pmevtyper##n##_el0)
> > +static void write_pmevtypern(int n, unsigned long val)
> > +{
> > +     PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
> > +     isb();
> > +}
> > +
> > +/*
> > + * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}<n>_EL0
> > + * accessors that test cases will use. Each of the accessors will
> > + * either directly reads/writes PMEVT{CNTR,TYPER}<n>_EL0
> > + * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through
> > + * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
> > + *
> > + * This is used to test that combinations of those accessors provide
> > + * the consistent behavior.
> > + */
> > +struct pmc_accessor {
> > +     /* A function to be used to read PMEVTCNTR<n>_EL0 */
> > +     unsigned long   (*read_cntr)(int idx);
> > +     /* A function to be used to write PMEVTCNTR<n>_EL0 */
> > +     void            (*write_cntr)(int idx, unsigned long val);
> > +     /* A function to be used to read PMEVTYPER<n>_EL0 */
> > +     unsigned long   (*read_typer)(int idx);
> > +     /* A function to be used to write PMEVTYPER<n>_EL0 */
> > +     void            (*write_typer)(int idx, unsigned long val);
> > +};
> > +
> > +struct pmc_accessor pmc_accessors[] = {
> > +     /* test with all direct accesses */
> > +     { read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern },
> > +     /* test with all indirect accesses */
> > +     { read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper },
> > +     /* read with direct accesses, and write with indirect accesses */
> > +     { read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper },
> > +     /* read with indirect accesses, and write with direct accesses */
> > +     { read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
> > +};
> what is the rationale behind testing both direct and indirect accesses
> and any combinations? I think this would deserve some
> comments/justification.
Basically, the idea is to test whether, with PMCR.N being constantly
updated, we are able to access the registers successfully. At this
point, testing all these combinations may not be fully justified
(just one direct set and one indirect set should do). However, I do
have some more test patches which add more functional testing of the
vPMU.
I can add a comment about this.
> > +
> > +/*
> > + * Convert a pointer of pmc_accessor to an index in pmc_accessors[],
> > + * assuming that the pointer is one of the entries in pmc_accessors[].
> > + */
> > +#define PMC_ACC_TO_IDX(acc)  (acc - &pmc_accessors[0])
> > +
> > +#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected)                  \
> > +{                                                                             \
> > +     uint64_t _tval = read_sysreg(regname);                                   \
> > +                                                                              \
> > +     if (set_expected)                                                        \
> > +             __GUEST_ASSERT((_tval & mask),                                   \
> > +                             "tval: 0x%lx; mask: 0x%lx; set_expected: 0x%lx", \
> > +                             _tval, mask, set_expected);                      \
> > +     else                                                                     \
> > +             __GUEST_ASSERT(!(_tval & mask),                                  \
> > +                             "tval: 0x%lx; mask: 0x%lx; set_expected: 0x%lx", \
> > +                             _tval, mask, set_expected);                      \
> > +}
> > +
> > +/*
> > + * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
> > + * are set or cleared as specified in @set_expected.
> > + */
> > +static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected)
> > +{
> > +     GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected);
> > +     GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected);
> > +     GUEST_ASSERT_BITMAP_REG(pmintenset_el1, mask, set_expected);
> > +     GUEST_ASSERT_BITMAP_REG(pmintenclr_el1, mask, set_expected);
> > +     GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected);
> > +     GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected);
> > +}
> > +
> > +/*
> > + * Check if the bit in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers corresponding
> > + * to the specified counter (@pmc_idx) can be read/written as expected.
> > + * When @set_op is true, it tries to set the bit for the counter in
> > + * those registers by writing the SET registers (the bit won't be set
> > + * if the counter is not implemented though).
> > + * Otherwise, it tries to clear the bits in the registers by writing
> > + * the CLR registers.
> > + * Then, it checks if the values indicated in the registers are as expected.
> > + */
> > +static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
> > +{
> > +     uint64_t pmcr_n, test_bit = BIT(pmc_idx);
> > +     bool set_expected = false;
> > +
> > +     if (set_op) {
> > +             write_sysreg(test_bit, pmcntenset_el0);
> > +             write_sysreg(test_bit, pmintenset_el1);
> > +             write_sysreg(test_bit, pmovsset_el0);
> > +
> > +             /* The bit will be set only if the counter is implemented */
> > +             pmcr_n = get_pmcr_n(read_sysreg(pmcr_el0));
> > +             set_expected = (pmc_idx < pmcr_n) ? true : false;
> > +     } else {
> > +             write_sysreg(test_bit, pmcntenclr_el0);
> > +             write_sysreg(test_bit, pmintenclr_el1);
> > +             write_sysreg(test_bit, pmovsclr_el0);
> > +     }
> > +     check_bitmap_pmu_regs(test_bit, set_expected);
> > +}
> > +
> > +/*
> > + * Tests for reading/writing registers for the (implemented) event counter
> > + * specified by @pmc_idx.
> > + */
> > +static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
> > +{
> > +     uint64_t write_data, read_data;
> > +
> > +     /* Disable all PMCs and reset all PMCs to zero. */
> > +     pmu_disable_reset();
> > +
> > +
> nit: double empty line
The double-empty lines were introduced to separate out the test
'phases'. But if it feels too much, I can remove one.
> > +     /*
> > +      * Tests for reading/writing {PMCNTEN,PMINTEN,PMOVS}{SET,CLR}_EL1.
> > +      */
> > +
> > +     /* Make sure that the bit in those registers are set to 0 */
> > +     test_bitmap_pmu_regs(pmc_idx, false);
> > +     /* Test if setting the bit in those registers works */
> > +     test_bitmap_pmu_regs(pmc_idx, true);
> > +     /* Test if clearing the bit in those registers works */
> > +     test_bitmap_pmu_regs(pmc_idx, false);
> > +
> > +
> same here
> > +     /*
> > +      * Tests for reading/writing the event type register.
> > +      */
> > +
> > +     read_data = acc->read_typer(pmc_idx);
> not needed I think
You are right. It's redundant. I'll remove it.
> > +     /*
> > +      * Set the event type register to an arbitrary value just for testing
> > +      * of reading/writing the register.
> > +      * ArmARM says that for the event from 0x0000 to 0x003F,
> nit s/ArmARM/Arm ARM
> > +      * the value indicated in the PMEVTYPER<n>_EL0.evtCount field is
> > +      * the value written to the field even when the specified event
> > +      * is not supported.
> > +      */
> > +     write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED);
> > +     acc->write_typer(pmc_idx, write_data);
> > +     read_data = acc->read_typer(pmc_idx);
> > +     __GUEST_ASSERT(read_data == write_data,
> > +                    "pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx; write_data: 0x%lx",
> > +                    pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data);
> > +
> > +
> > +     /*
> > +      * Tests for reading/writing the event count register.
> > +      */
> > +
> > +     read_data = acc->read_cntr(pmc_idx);
> > +
> > +     /* The count value must be 0, as it is not used after the reset */
> s/not used/disabled and reset?
> > +     __GUEST_ASSERT(read_data == 0,
> > +                    "pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx",
> > +                    pmc_idx, PMC_ACC_TO_IDX(acc), read_data);
> > +
> > +     write_data = read_data + pmc_idx + 0x12345;
> > +     acc->write_cntr(pmc_idx, write_data);
> > +     read_data = acc->read_cntr(pmc_idx);
> > +     __GUEST_ASSERT(read_data == write_data,
> > +                    "pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx; write_data: 0x%lx",
> > +                    pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data);
> > +}
> > +
> >  static void guest_sync_handler(struct ex_regs *regs)
> >  {
> >       uint64_t esr, ec;
> > @@ -49,11 +303,14 @@ static void guest_sync_handler(struct ex_regs *regs)
> >  /*
> >   * The guest is configured with PMUv3 with @expected_pmcr_n number of
> >   * event counters.
> > - * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
> > + * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
> > + * if reading/writing PMU registers for implemented counters can work
> s/can work/works
> > + * as expected.
> >   */
> >  static void guest_code(uint64_t expected_pmcr_n)
> >  {
> >       uint64_t pmcr, pmcr_n;
> > +     int i, pmc;
> >
> >       __GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS,
> >                       "Expected PMCR.N: 0x%lx; ARMv8 general counters: 0x%lx",
> > @@ -67,6 +324,15 @@ static void guest_code(uint64_t expected_pmcr_n)
> >                       "Expected PMCR.N: 0x%lx, PMCR.N: 0x%lx",
> >                       pmcr_n, expected_pmcr_n);
> >
> > +     /*
> > +      * Tests for reading/writing PMU registers for implemented counters.
> > +      * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
> > +      */
> > +     for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
> > +             for (pmc = 0; pmc < pmcr_n; pmc++)
> > +                     test_access_pmc_regs(&pmc_accessors[i], pmc);
> > +     }
> > +
> >       GUEST_DONE();
> >  }
> >
I'll address all the other nits.

Thank you.
Raghavendra
> Thanks
>
> Eric
>

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v7 12/12] KVM: selftests: aarch64: vPMU register test for unimplemented counters
  2023-10-09 23:08 ` [PATCH v7 12/12] KVM: selftests: aarch64: vPMU register test for unimplemented counters Raghavendra Rao Ananta
@ 2023-10-18  6:54   ` Eric Auger
  2023-10-19 18:09     ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 60+ messages in thread
From: Eric Auger @ 2023-10-18  6:54 UTC (permalink / raw)
  To: Raghavendra Rao Ananta, Oliver Upton, Marc Zyngier
  Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
	Colton Lewis, linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Raghavendra,

On 10/10/23 01:08, Raghavendra Rao Ananta wrote:
> From: Reiji Watanabe <reijiw@google.com>
> 
> Add a new test case to the vpmu_counter_access test to check
> if PMU registers or their bits for unimplemented counters are not
> accessible or are RAZ, as expected.
> 
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  .../kvm/aarch64/vpmu_counter_access.c         | 95 +++++++++++++++++--
>  .../selftests/kvm/include/aarch64/processor.h |  1 +
>  2 files changed, 87 insertions(+), 9 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> index e92af3c0db03..788386ac0894 100644
> --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> @@ -5,8 +5,8 @@
>   * Copyright (c) 2022 Google LLC.
>   *
>   * This test checks if the guest can see the same number of the PMU event
> - * counters (PMCR_EL0.N) that userspace sets, and if the guest can access
> - * those counters.
> + * counters (PMCR_EL0.N) that userspace sets, if the guest can access
> + * those counters, and if the guest cannot access any other counters.
I would suggest: if the guest is prevented from accessing any other counters
>   * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
>   */
>  #include <kvm_util.h>
> @@ -131,9 +131,9 @@ static void write_pmevtypern(int n, unsigned long val)
>  }
>  
>  /*
> - * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}<n>_EL0
> + * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}<n>_EL0
>   * accessors that test cases will use. Each of the accessors will
> - * either directly reads/writes PMEVT{CNTR,TYPER}<n>_EL0
> + * either directly reads/writes PMEV{CNTR,TYPER}<n>_EL0
I guess this should belong to the previous patch?
>   * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through
>   * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
>   *
> @@ -291,25 +291,85 @@ static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
>  		       pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data);
>  }
>  
> +#define INVALID_EC	(-1ul)
> +uint64_t expected_ec = INVALID_EC;
> +uint64_t op_end_addr;
> +
>  static void guest_sync_handler(struct ex_regs *regs)
>  {
>  	uint64_t esr, ec;
>  
>  	esr = read_sysreg(esr_el1);
>  	ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;
> -	__GUEST_ASSERT(0, "PC: 0x%lx; ESR: 0x%lx; EC: 0x%lx", regs->pc, esr, ec);
> +
> +	__GUEST_ASSERT(op_end_addr && (expected_ec == ec),
> +			"PC: 0x%lx; ESR: 0x%lx; EC: 0x%lx; EC expected: 0x%lx",
> +			regs->pc, esr, ec, expected_ec);
> +
> +	/* Will go back to op_end_addr after the handler exits */
> +	regs->pc = op_end_addr;
> +
> +	/*
> +	 * Clear op_end_addr, and setting expected_ec to INVALID_EC
and set
> +	 * as a sign that an exception has occurred.
> +	 */
> +	op_end_addr = 0;
> +	expected_ec = INVALID_EC;
> +}
> +
> +/*
> + * Run the given operation that should trigger an exception with the
> + * given exception class. The exception handler (guest_sync_handler)
> + * will reset op_end_addr to 0, and expected_ec to INVALID_EC, and
> + * will come back to the instruction at the @done_label.
> + * The @done_label must be a unique label in this test program.
> + */
> +#define TEST_EXCEPTION(ec, ops, done_label)		\
> +{							\
> +	extern int done_label;				\
> +							\
> +	WRITE_ONCE(op_end_addr, (uint64_t)&done_label);	\
> +	GUEST_ASSERT(ec != INVALID_EC);			\
> +	WRITE_ONCE(expected_ec, ec);			\
> +	dsb(ish);					\
> +	ops;						\
> +	asm volatile(#done_label":");			\
> +	GUEST_ASSERT(!op_end_addr);			\
> +	GUEST_ASSERT(expected_ec == INVALID_EC);	\
> +}
> +
> +/*
> + * Tests for reading/writing registers for the unimplemented event counter
> + * specified by @pmc_idx (>= PMCR_EL0.N).
> + */
> +static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
> +{
> +	/*
> +	 * Reading/writing the event count/type registers should cause
> +	 * an UNDEFINED exception.
> +	 */
> +	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_cntr(pmc_idx), inv_rd_cntr);
> +	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_cntr(pmc_idx, 0), inv_wr_cntr);
> +	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_typer(pmc_idx), inv_rd_typer);
> +	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_typer(pmc_idx, 0), inv_wr_typer);
> +	/*
> +	 * The bit corresponding to the (unimplemented) counter in
> +	 * {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers should be RAZ.
{PMCNTEN,PMINTEN,PMOVS}{SET,CLR}
> +	 */
> +	test_bitmap_pmu_regs(pmc_idx, 1);
> +	test_bitmap_pmu_regs(pmc_idx, 0);
>  }
>  
>  /*
>   * The guest is configured with PMUv3 with @expected_pmcr_n number of
>   * event counters.
>   * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
> - * if reading/writing PMU registers for implemented counters can work
> - * as expected.
> + * if reading/writing PMU registers for implemented or unimplemented
> + * counters can work as expected.
>   */
>  static void guest_code(uint64_t expected_pmcr_n)
>  {
> -	uint64_t pmcr, pmcr_n;
> +	uint64_t pmcr, pmcr_n, unimp_mask;
>  	int i, pmc;
>  
>  	__GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS,
> @@ -324,15 +384,32 @@ static void guest_code(uint64_t expected_pmcr_n)
>  			"Expected PMCR.N: 0x%lx, PMCR.N: 0x%lx",
>  			pmcr_n, expected_pmcr_n);
>  
> +	/*
> +	 * Make sure that (RAZ) bits corresponding to unimplemented event
> +	 * counters in {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers are reset to zero.
> +	 * (NOTE: bits for implemented event counters are reset to UNKNOWN)
> +	 */
> +	unimp_mask = GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n);
> +	check_bitmap_pmu_regs(unimp_mask, false);
wrt above comment, this also checks pmintenset|clr_el1.
> +
>  	/*
>  	 * Tests for reading/writing PMU registers for implemented counters.
> -	 * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
> +	 * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
>  	 */
>  	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
>  		for (pmc = 0; pmc < pmcr_n; pmc++)
>  			test_access_pmc_regs(&pmc_accessors[i], pmc);
>  	}
>  
> +	/*
> +	 * Tests for reading/writing PMU registers for unimplemented counters.
> +	 * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
> +	 */
> +	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
> +		for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++)
> +			test_access_invalid_pmc_regs(&pmc_accessors[i], pmc);
> +	}
> +
>  	GUEST_DONE();
>  }
>  
> diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
> index cb537253a6b9..c42d683102c7 100644
> --- a/tools/testing/selftests/kvm/include/aarch64/processor.h
> +++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
> @@ -104,6 +104,7 @@ enum {
>  #define ESR_EC_SHIFT		26
>  #define ESR_EC_MASK		(ESR_EC_NUM - 1)
>  
> +#define ESR_EC_UNKNOWN		0x0
>  #define ESR_EC_SVC64		0x15
>  #define ESR_EC_IABT		0x21
>  #define ESR_EC_DABT		0x25

Thanks

Eric



* Re: [PATCH v7 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on vCPU reset
  2023-10-17 16:59         ` Raghavendra Rao Ananta
@ 2023-10-18 21:16           ` Raghavendra Rao Ananta
  2023-10-18 22:17             ` Oliver Upton
  2023-10-19 18:46             ` Raghavendra Rao Ananta
  0 siblings, 2 replies; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-18 21:16 UTC (permalink / raw)
  To: Eric Auger
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Tue, Oct 17, 2023 at 9:59 AM Raghavendra Rao Ananta
<rananta@google.com> wrote:
>
> Hi Eric,
> On Tue, Oct 17, 2023 at 2:23 AM Eric Auger <eauger@redhat.com> wrote:
> >
> > Hi,
> > On 10/16/23 23:28, Raghavendra Rao Ananta wrote:
> > > On Mon, Oct 16, 2023 at 12:45 PM Eric Auger <eauger@redhat.com> wrote:
> > >>
> > >> Hi Raghavendra,
> > >>
> > >> On 10/10/23 01:08, Raghavendra Rao Ananta wrote:
> > >>> From: Reiji Watanabe <reijiw@google.com>
> > >>>
> > >>> On vCPU reset, PMCNTEN{SET,CLR}_EL0, PMINTEN{SET,CLR}_EL1, and
> > >>> PMOVS{SET,CLR}_EL1 for a vCPU are reset by reset_pmu_reg().
> > >> PMOVS{SET,CLR}_EL0?
> > > Ah, yes. It should be PMOVS{SET,CLR}_EL0.
> > >
> > >>> This function clears RAZ bits of those registers corresponding
> > >>> to unimplemented event counters on the vCPU, and sets bits
> > >>> corresponding to implemented event counters to a predefined
> > >>> pseudo UNKNOWN value (some bits are set to 1).
> > >>>
> > >>> The function identifies (un)implemented event counters on the
> > >>> vCPU based on the PMCR_EL0.N value on the host. Using the host
> > >>> value for this would be problematic when KVM supports letting
> > >>> userspace set PMCR_EL0.N to a value different from the host value
> > >>> (some of the RAZ bits of those registers could end up being set to 1).
> > >>>
> > >>> Fix this by clearing the registers so that it can ensure
> > >>> that all the RAZ bits are cleared even when the PMCR_EL0.N value
> > >>> for the vCPU is different from the host value. Use reset_val() to
> > >>> do this instead of fixing reset_pmu_reg(), and remove
> > >>> reset_pmu_reg(), as it is no longer used.
> > >> do you intend to restore the 'unknown' behavior at some point?
> > >>
> > > I believe Reiji's (original author) intention was to keep them
> > > cleared, which would still imply an 'unknown' behavior. Do you think
> > > there's an issue with this?
> > Then why do we bother using reset_unknown in the other places if
> > clearing the bits is enough here?
> >
> Hmm. Good point. I can bring back reset_unknown to keep the original behavior.
>
I had a brief discussion about this with Oliver, and it looks like we
might need a couple of additional changes for these register accesses:
- For the userspace accesses, we have to implement explicit get_user
and set_user callbacks to filter out the unimplemented counters
using kvm_pmu_valid_counter_mask().
- For the guest accesses to be correct, we might have to apply the
same mask while serving KVM_REQ_RELOAD_PMU.

Thank you.
Raghavendra

> Thank you.
> Raghavendra
> > Thanks
> >
> > Eric
> > >
> > > Thank you.
> > > Raghavendra
> > >> Thanks
> > >>
> > >> Eric
> > >>>
> > >>> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > >>> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > >>> ---
> > >>>  arch/arm64/kvm/sys_regs.c | 21 +--------------------
> > >>>  1 file changed, 1 insertion(+), 20 deletions(-)
> > >>>
> > >>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > >>> index 818a52e257ed..3dbb7d276b0e 100644
> > >>> --- a/arch/arm64/kvm/sys_regs.c
> > >>> +++ b/arch/arm64/kvm/sys_regs.c
> > >>> @@ -717,25 +717,6 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
> > >>>       return REG_HIDDEN;
> > >>>  }
> > >>>
> > >>> -static u64 reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > >>> -{
> > >>> -     u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
> > >>> -
> > >>> -     /* No PMU available, any PMU reg may UNDEF... */
> > >>> -     if (!kvm_arm_support_pmu_v3())
> > >>> -             return 0;
> > >>> -
> > >>> -     n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
> > >>> -     n &= ARMV8_PMU_PMCR_N_MASK;
> > >>> -     if (n)
> > >>> -             mask |= GENMASK(n - 1, 0);
> > >>> -
> > >>> -     reset_unknown(vcpu, r);
> > >>> -     __vcpu_sys_reg(vcpu, r->reg) &= mask;
> > >>> -
> > >>> -     return __vcpu_sys_reg(vcpu, r->reg);
> > >>> -}
> > >>> -
> > >>>  static u64 reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > >>>  {
> > >>>       reset_unknown(vcpu, r);
> > >>> @@ -1115,7 +1096,7 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> > >>>         trap_wcr, reset_wcr, 0, 0,  get_wcr, set_wcr }
> > >>>
> > >>>  #define PMU_SYS_REG(name)                                            \
> > >>> -     SYS_DESC(SYS_##name), .reset = reset_pmu_reg,                   \
> > >>> +     SYS_DESC(SYS_##name), .reset = reset_val,                       \
> > >>>       .visibility = pmu_visibility
> > >>>
> > >>>  /* Macro to expand the PMEVCNTRn_EL0 register */
> > >>
> > >
> >


* Re: [PATCH v7 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on vCPU reset
  2023-10-18 21:16           ` Raghavendra Rao Ananta
@ 2023-10-18 22:17             ` Oliver Upton
  2023-10-19 18:46             ` Raghavendra Rao Ananta
  1 sibling, 0 replies; 60+ messages in thread
From: Oliver Upton @ 2023-10-18 22:17 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Eric Auger, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Wed, Oct 18, 2023 at 02:16:36PM -0700, Raghavendra Rao Ananta wrote:

[...]

> I had a brief discussion about this with Oliver, and it looks like we
> might need a couple of additional changes for these register accesses:
> - For the userspace accesses, we have to implement explicit get_user
> and set_user callbacks to filter out the unimplemented counters
> using kvm_pmu_valid_counter_mask().
> - For the guest accesses to be correct, we might have to apply the
> same mask while serving KVM_REQ_RELOAD_PMU.

To be precise, the second issue is that we want to make sure KVM's PMU
emulation never uses an invalid value for the configuration, like
enabling a PMC at an index inaccessible to the guest.

-- 
Thanks,
Oliver


* Re: [PATCH v7 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
  2023-10-17 16:49     ` Raghavendra Rao Ananta
@ 2023-10-19 10:45       ` Sebastian Ott
  2023-10-19 18:05         ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 60+ messages in thread
From: Sebastian Ott @ 2023-10-19 10:45 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

[-- Attachment #1: Type: text/plain, Size: 1197 bytes --]

On Tue, 17 Oct 2023, Raghavendra Rao Ananta wrote:
> On Tue, Oct 17, 2023 at 8:52 AM Sebastian Ott <sebott@redhat.com> wrote:
>>
>> On Mon, 9 Oct 2023, Raghavendra Rao Ananta wrote:
>>> +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
>>> +                 u64 val)
>>> +{
>>> +     struct kvm *kvm = vcpu->kvm;
>>> +     u64 new_n, mutable_mask;
>>> +
>>> +     mutex_lock(&kvm->arch.config_lock);
>>> +
>>> +     /*
>>> +      * Make PMCR immutable once the VM has started running, but do
>>> +      * not return an error (-EBUSY) to meet the existing expectations.
>>> +      */
>>
>> Why should we mention which error we're _not_ returning?
>>
> Oh, it's so as not to break the existing userspace expectations. Before this
> series, any 'write' from userspace was possible. Returning -EBUSY all
> of a sudden might break this expectation.

Yes, I get that part. What I meant was: why specifically mention -EBUSY?
You're also not returning -EFAULT nor -EINVAL.

/*
  * Make PMCR immutable once the VM has started running, but do
  * not return an error to meet the existing expectations.
  */
IMHO provides the same info to the reader and is less confusing

Sebastian


* Re: [PATCH v7 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
  2023-10-19 10:45       ` Sebastian Ott
@ 2023-10-19 18:05         ` Raghavendra Rao Ananta
  0 siblings, 0 replies; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-19 18:05 UTC (permalink / raw)
  To: Sebastian Ott
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Thu, Oct 19, 2023 at 3:45 AM Sebastian Ott <sebott@redhat.com> wrote:
>
> On Tue, 17 Oct 2023, Raghavendra Rao Ananta wrote:
> > On Tue, Oct 17, 2023 at 8:52 AM Sebastian Ott <sebott@redhat.com> wrote:
> >>
> >> On Mon, 9 Oct 2023, Raghavendra Rao Ananta wrote:
> >>> +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> >>> +                 u64 val)
> >>> +{
> >>> +     struct kvm *kvm = vcpu->kvm;
> >>> +     u64 new_n, mutable_mask;
> >>> +
> >>> +     mutex_lock(&kvm->arch.config_lock);
> >>> +
> >>> +     /*
> >>> +      * Make PMCR immutable once the VM has started running, but do
> >>> +      * not return an error (-EBUSY) to meet the existing expectations.
> >>> +      */
> >>
> >> Why should we mention which error we're _not_ returning?
> >>
> > Oh, it's so as not to break the existing userspace expectations. Before this
> > series, any 'write' from userspace was possible. Returning -EBUSY all
> > of a sudden might break this expectation.
>
> Yes, I get that part. What I meant was: why specifically mention -EBUSY?
> You're also not returning -EFAULT nor -EINVAL.
>
> /*
>   * Make PMCR immutable once the VM has started running, but do
>   * not return an error to meet the existing expectations.
>   */
> IMHO provides the same info to the reader and is less confusing
>
Sounds good. I'll apply this.

Thank you.
Raghavendra
> Sebastian


* Re: [PATCH v7 12/12] KVM: selftests: aarch64: vPMU register test for unimplemented counters
  2023-10-18  6:54   ` Eric Auger
@ 2023-10-19 18:09     ` Raghavendra Rao Ananta
  0 siblings, 0 replies; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-19 18:09 UTC (permalink / raw)
  To: Eric Auger
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Tue, Oct 17, 2023 at 11:54 PM Eric Auger <eauger@redhat.com> wrote:
>
> Hi Raghavendra,
>
> On 10/10/23 01:08, Raghavendra Rao Ananta wrote:
> > From: Reiji Watanabe <reijiw@google.com>
> >
> > Add a new test case to the vpmu_counter_access test to check
> > if PMU registers or their bits for unimplemented counters are not
> > accessible or are RAZ, as expected.
> >
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> >  .../kvm/aarch64/vpmu_counter_access.c         | 95 +++++++++++++++++--
> >  .../selftests/kvm/include/aarch64/processor.h |  1 +
> >  2 files changed, 87 insertions(+), 9 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > index e92af3c0db03..788386ac0894 100644
> > --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > @@ -5,8 +5,8 @@
> >   * Copyright (c) 2022 Google LLC.
> >   *
> >   * This test checks if the guest can see the same number of the PMU event
> > - * counters (PMCR_EL0.N) that userspace sets, and if the guest can access
> > - * those counters.
> > + * counters (PMCR_EL0.N) that userspace sets, if the guest can access
> > + * those counters, and if the guest cannot access any other counters.
> I would suggest: if the guest is prevented from accessing any other counters
> >   * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
> >   */
> >  #include <kvm_util.h>
> > @@ -131,9 +131,9 @@ static void write_pmevtypern(int n, unsigned long val)
> >  }
> >
> >  /*
> > - * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}<n>_EL0
> > + * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}<n>_EL0
> >   * accessors that test cases will use. Each of the accessors will
> > - * either directly reads/writes PMEVT{CNTR,TYPER}<n>_EL0
> > + * either directly reads/writes PMEV{CNTR,TYPER}<n>_EL0
> I guess this should belong to the previous patch?
> >   * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through
> >   * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
> >   *
> > @@ -291,25 +291,85 @@ static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
> >                      pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data);
> >  }
> >
> > +#define INVALID_EC   (-1ul)
> > +uint64_t expected_ec = INVALID_EC;
> > +uint64_t op_end_addr;
> > +
> >  static void guest_sync_handler(struct ex_regs *regs)
> >  {
> >       uint64_t esr, ec;
> >
> >       esr = read_sysreg(esr_el1);
> >       ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;
> > -     __GUEST_ASSERT(0, "PC: 0x%lx; ESR: 0x%lx; EC: 0x%lx", regs->pc, esr, ec);
> > +
> > +     __GUEST_ASSERT(op_end_addr && (expected_ec == ec),
> > +                     "PC: 0x%lx; ESR: 0x%lx; EC: 0x%lx; EC expected: 0x%lx",
> > +                     regs->pc, esr, ec, expected_ec);
> > +
> > +     /* Will go back to op_end_addr after the handler exits */
> > +     regs->pc = op_end_addr;
> > +
> > +     /*
> > +      * Clear op_end_addr, and setting expected_ec to INVALID_EC
> and set
> > +      * as a sign that an exception has occurred.
> > +      */
> > +     op_end_addr = 0;
> > +     expected_ec = INVALID_EC;
> > +}
> > +
> > +/*
> > + * Run the given operation that should trigger an exception with the
> > + * given exception class. The exception handler (guest_sync_handler)
> > + * will reset op_end_addr to 0, and expected_ec to INVALID_EC, and
> > + * will come back to the instruction at the @done_label.
> > + * The @done_label must be a unique label in this test program.
> > + */
> > +#define TEST_EXCEPTION(ec, ops, done_label)          \
> > +{                                                    \
> > +     extern int done_label;                          \
> > +                                                     \
> > +     WRITE_ONCE(op_end_addr, (uint64_t)&done_label); \
> > +     GUEST_ASSERT(ec != INVALID_EC);                 \
> > +     WRITE_ONCE(expected_ec, ec);                    \
> > +     dsb(ish);                                       \
> > +     ops;                                            \
> > +     asm volatile(#done_label":");                   \
> > +     GUEST_ASSERT(!op_end_addr);                     \
> > +     GUEST_ASSERT(expected_ec == INVALID_EC);        \
> > +}
> > +
> > +/*
> > + * Tests for reading/writing registers for the unimplemented event counter
> > + * specified by @pmc_idx (>= PMCR_EL0.N).
> > + */
> > +static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
> > +{
> > +     /*
> > +      * Reading/writing the event count/type registers should cause
> > +      * an UNDEFINED exception.
> > +      */
> > +     TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_cntr(pmc_idx), inv_rd_cntr);
> > +     TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_cntr(pmc_idx, 0), inv_wr_cntr);
> > +     TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_typer(pmc_idx), inv_rd_typer);
> > +     TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_typer(pmc_idx, 0), inv_wr_typer);
> > +     /*
> > +      * The bit corresponding to the (unimplemented) counter in
> > +      * {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers should be RAZ.
> {PMCNTEN,PMINTEN,PMOVS}{SET,CLR}
> > +      */
> > +     test_bitmap_pmu_regs(pmc_idx, 1);
> > +     test_bitmap_pmu_regs(pmc_idx, 0);
> >  }
> >
> >  /*
> >   * The guest is configured with PMUv3 with @expected_pmcr_n number of
> >   * event counters.
> >   * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
> > - * if reading/writing PMU registers for implemented counters can work
> > - * as expected.
> > + * if reading/writing PMU registers for implemented or unimplemented
> > + * counters can work as expected.
> >   */
> >  static void guest_code(uint64_t expected_pmcr_n)
> >  {
> > -     uint64_t pmcr, pmcr_n;
> > +     uint64_t pmcr, pmcr_n, unimp_mask;
> >       int i, pmc;
> >
> >       __GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS,
> > @@ -324,15 +384,32 @@ static void guest_code(uint64_t expected_pmcr_n)
> >                       "Expected PMCR.N: 0x%lx, PMCR.N: 0x%lx",
> >                       pmcr_n, expected_pmcr_n);
> >
> > +     /*
> > +      * Make sure that (RAZ) bits corresponding to unimplemented event
> > +      * counters in {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers are reset to zero.
> > +      * (NOTE: bits for implemented event counters are reset to UNKNOWN)
> > +      */
> > +     unimp_mask = GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n);
> > +     check_bitmap_pmu_regs(unimp_mask, false);
> wrt above comment, this also checks pmintenset|clr_el1.
> > +
> >       /*
> >        * Tests for reading/writing PMU registers for implemented counters.
> > -      * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
> > +      * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
> >        */
> >       for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
> >               for (pmc = 0; pmc < pmcr_n; pmc++)
> >                       test_access_pmc_regs(&pmc_accessors[i], pmc);
> >       }
> >
> > +     /*
> > +      * Tests for reading/writing PMU registers for unimplemented counters.
> > +      * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
> > +      */
> > +     for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
> > +             for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++)
> > +                     test_access_invalid_pmc_regs(&pmc_accessors[i], pmc);
> > +     }
> > +
> >       GUEST_DONE();
> >  }
> >
> > diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
> > index cb537253a6b9..c42d683102c7 100644
> > --- a/tools/testing/selftests/kvm/include/aarch64/processor.h
> > +++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
> > @@ -104,6 +104,7 @@ enum {
> >  #define ESR_EC_SHIFT         26
> >  #define ESR_EC_MASK          (ESR_EC_NUM - 1)
> >
> > +#define ESR_EC_UNKNOWN               0x0
> >  #define ESR_EC_SVC64         0x15
> >  #define ESR_EC_IABT          0x21
> >  #define ESR_EC_DABT          0x25
>
> Thanks
>
> Eric
>
Thanks for the comments, Eric. I'll fix these.

- Raghavendra


* Re: [PATCH v7 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on vCPU reset
  2023-10-18 21:16           ` Raghavendra Rao Ananta
  2023-10-18 22:17             ` Oliver Upton
@ 2023-10-19 18:46             ` Raghavendra Rao Ananta
  2023-10-19 19:05               ` Oliver Upton
  1 sibling, 1 reply; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-19 18:46 UTC (permalink / raw)
  To: Eric Auger
  Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Wed, Oct 18, 2023 at 2:16 PM Raghavendra Rao Ananta
<rananta@google.com> wrote:
>
> On Tue, Oct 17, 2023 at 9:59 AM Raghavendra Rao Ananta
> <rananta@google.com> wrote:
> >
> > Hi Eric,
> > On Tue, Oct 17, 2023 at 2:23 AM Eric Auger <eauger@redhat.com> wrote:
> > >
> > > Hi,
> > > On 10/16/23 23:28, Raghavendra Rao Ananta wrote:
> > > > On Mon, Oct 16, 2023 at 12:45 PM Eric Auger <eauger@redhat.com> wrote:
> > > >>
> > > >> Hi Raghavendra,
> > > >>
> > > >> On 10/10/23 01:08, Raghavendra Rao Ananta wrote:
> > > >>> From: Reiji Watanabe <reijiw@google.com>
> > > >>>
> > > >>> On vCPU reset, PMCNTEN{SET,CLR}_EL0, PMINTEN{SET,CLR}_EL1, and
> > > >>> PMOVS{SET,CLR}_EL1 for a vCPU are reset by reset_pmu_reg().
> > > >> PMOVS{SET,CLR}_EL0?
> > > > Ah, yes. It should be PMOVS{SET,CLR}_EL0.
> > > >
> > > >>> This function clears RAZ bits of those registers corresponding
> > > >>> to unimplemented event counters on the vCPU, and sets bits
> > > >>> corresponding to implemented event counters to a predefined
> > > >>> pseudo UNKNOWN value (some bits are set to 1).
> > > >>>
> > > >>> The function identifies (un)implemented event counters on the
> > > >>> vCPU based on the PMCR_EL0.N value on the host. Using the host
> > > >>> value for this would be problematic when KVM supports letting
> > > >>> userspace set PMCR_EL0.N to a value different from the host value
> > > >>> (some of the RAZ bits of those registers could end up being set to 1).
> > > >>>
> > > >>> Fix this by clearing the registers so that it can ensure
> > > >>> that all the RAZ bits are cleared even when the PMCR_EL0.N value
> > > >>> for the vCPU is different from the host value. Use reset_val() to
> > > >>> do this instead of fixing reset_pmu_reg(), and remove
> > > >>> reset_pmu_reg(), as it is no longer used.
> > > >> do you intend to restore the 'unknown' behavior at some point?
> > > >>
> > > > I believe Reiji's (original author) intention was to keep them
> > > > cleared, which would still imply an 'unknown' behavior. Do you think
> > > > there's an issue with this?
> > > Then why do we bother using reset_unknown in the other places if
> > > clearing the bits is enough here?
> > >
> > Hmm. Good point. I can bring back reset_unknown to keep the original behavior.
> >
> I had a brief discussion about this with Oliver, and it looks like we
> might need a couple of additional changes for these register accesses:
> - For the userspace accesses, we have to implement explicit get_user
> and set_user callbacks to filter out the unimplemented counters
> using kvm_pmu_valid_counter_mask().
Re-thinking the first case: since these registers go through a reset
(reset_pmu_reg()) during initialization, where the valid counter mask
is applied, and since we sanitize the registers with the mask before
running the guest (the case below), would implementing {get,set}_user()
add any value, apart from keeping userspace in sync with every update
of PMCR.N?
> - For the guest accesses to be correct, we might have to apply the
> same mask while serving KVM_REQ_RELOAD_PMU.
>
> Thank you.
> Raghavendra
>
> > Thank you.
> > Raghavendra
> > > Thanks
> > >
> > > Eric
> > > >
> > > > Thank you.
> > > > Raghavendra
> > > >> Thanks
> > > >>
> > > >> Eric
> > > >>>
> > > >>> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > > >>> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > > >>> ---
> > > >>>  arch/arm64/kvm/sys_regs.c | 21 +--------------------
> > > >>>  1 file changed, 1 insertion(+), 20 deletions(-)
> > > >>>
> > > >>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > >>> index 818a52e257ed..3dbb7d276b0e 100644
> > > >>> --- a/arch/arm64/kvm/sys_regs.c
> > > >>> +++ b/arch/arm64/kvm/sys_regs.c
> > > >>> @@ -717,25 +717,6 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
> > > >>>       return REG_HIDDEN;
> > > >>>  }
> > > >>>
> > > >>> -static u64 reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > > >>> -{
> > > >>> -     u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
> > > >>> -
> > > >>> -     /* No PMU available, any PMU reg may UNDEF... */
> > > >>> -     if (!kvm_arm_support_pmu_v3())
> > > >>> -             return 0;
> > > >>> -
> > > >>> -     n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
> > > >>> -     n &= ARMV8_PMU_PMCR_N_MASK;
> > > >>> -     if (n)
> > > >>> -             mask |= GENMASK(n - 1, 0);
> > > >>> -
> > > >>> -     reset_unknown(vcpu, r);
> > > >>> -     __vcpu_sys_reg(vcpu, r->reg) &= mask;
> > > >>> -
> > > >>> -     return __vcpu_sys_reg(vcpu, r->reg);
> > > >>> -}
> > > >>> -
> > > >>>  static u64 reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > > >>>  {
> > > >>>       reset_unknown(vcpu, r);
> > > >>> @@ -1115,7 +1096,7 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> > > >>>         trap_wcr, reset_wcr, 0, 0,  get_wcr, set_wcr }
> > > >>>
> > > >>>  #define PMU_SYS_REG(name)                                            \
> > > >>> -     SYS_DESC(SYS_##name), .reset = reset_pmu_reg,                   \
> > > >>> +     SYS_DESC(SYS_##name), .reset = reset_val,                       \
> > > >>>       .visibility = pmu_visibility
> > > >>>
> > > >>>  /* Macro to expand the PMEVCNTRn_EL0 register */
> > > >>
> > > >
> > >


* Re: [PATCH v7 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on vCPU reset
  2023-10-19 18:46             ` Raghavendra Rao Ananta
@ 2023-10-19 19:05               ` Oliver Upton
  2023-10-19 20:17                 ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 60+ messages in thread
From: Oliver Upton @ 2023-10-19 19:05 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Eric Auger, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

Hi Raghu,

Can you please make sure you include leading and trailing whitespace for
your inline replies? The message gets extremely dense and is difficult
to read.

Also -- delete any unrelated context from your replies. If there's a
localized conversation about a particular detail there's no reason to
keep the entire thread in the body.

On Thu, Oct 19, 2023 at 11:46:22AM -0700, Raghavendra Rao Ananta wrote:
> On Wed, Oct 18, 2023 at 2:16 PM Raghavendra Rao Ananta
> <rananta@google.com> wrote:
> > I had a brief discussion about this with Oliver, and it looks like we
> > might need a couple of additional changes for these register accesses:
> > - For the userspace accesses, we have to implement explicit get_user
> > and set_user callbacks to filter out the unimplemented counters
> > using kvm_pmu_valid_counter_mask().
> Re-thinking the first case: since these registers go through a reset
> (reset_pmu_reg()) during initialization, where the valid counter mask
> is applied, and since we sanitize the registers with the mask before
> running the guest (the case below), would implementing {get,set}_user()
> add any value, apart from keeping userspace in sync with every update
> of PMCR.N?

KVM's sysreg emulation (as seen from userspace) fails to uphold the RES0
bits of these registers. That's a bug.

-- 
Thanks,
Oliver


* Re: [PATCH v7 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on vCPU reset
  2023-10-19 19:05               ` Oliver Upton
@ 2023-10-19 20:17                 ` Raghavendra Rao Ananta
  0 siblings, 0 replies; 60+ messages in thread
From: Raghavendra Rao Ananta @ 2023-10-19 20:17 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Eric Auger, Marc Zyngier, Alexandru Elisei, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
	Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Thu, Oct 19, 2023 at 12:06 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Hi Raghu,
>
> Can you please make sure you include leading and trailing whitespace for
> your inline replies? The message gets extremely dense and is difficult
> to read.
>
> Also -- delete any unrelated context from your replies. If there's a
> localized conversation about a particular detail there's no reason to
> keep the entire thread in the body.
>
Sorry about that. I'll try to keep it clean.

> On Thu, Oct 19, 2023 at 11:46:22AM -0700, Raghavendra Rao Ananta wrote:
> > On Wed, Oct 18, 2023 at 2:16 PM Raghavendra Rao Ananta
> > <rananta@google.com> wrote:
> > > I had a brief discussion about this with Oliver, and it looks like we
> > > might need a couple of additional changes for these register accesses:
> > > - For the userspace accesses, we have to implement explicit get_user
> > > and set_user callbacks to filter out the unimplemented counters
> > > using kvm_pmu_valid_counter_mask().
> > Re-thinking the first case: since these registers go through a reset
> > (reset_pmu_reg()) during initialization, where the valid counter mask
> > is applied, and since we sanitize the registers with the mask before
> > running the guest (the case below), would implementing {get,set}_user()
> > add any value, apart from keeping userspace in sync with every update
> > of PMCR.N?
>
> KVM's sysreg emulation (as seen from userspace) fails to uphold the RES0
> bits of these registers. That's a bug.
>
Got it. Thanks for the confirmation. I'll implement these as originally planned.

Thank you.
Raghavendra

> --
> Thanks,
> Oliver


end of thread, other threads:[~2023-10-19 20:18 UTC | newest]

Thread overview: 60+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-10-09 23:08 [PATCH v7 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
2023-10-09 23:08 ` [PATCH v7 01/12] KVM: arm64: PMU: Introduce helpers to set the guest's PMU Raghavendra Rao Ananta
2023-10-16 19:45   ` Eric Auger
2023-10-09 23:08 ` [PATCH v7 02/12] KVM: arm64: PMU: Set the default PMU for the guest before vCPU reset Raghavendra Rao Ananta
2023-10-10 22:25   ` Oliver Upton
2023-10-13 20:27     ` Raghavendra Rao Ananta
2023-10-09 23:08 ` [PATCH v7 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on " Raghavendra Rao Ananta
2023-10-16 19:44   ` Eric Auger
2023-10-16 21:28     ` Raghavendra Rao Ananta
2023-10-17  9:23       ` Eric Auger
2023-10-17 16:59         ` Raghavendra Rao Ananta
2023-10-18 21:16           ` Raghavendra Rao Ananta
2023-10-18 22:17             ` Oliver Upton
2023-10-19 18:46             ` Raghavendra Rao Ananta
2023-10-19 19:05               ` Oliver Upton
2023-10-19 20:17                 ` Raghavendra Rao Ananta
2023-10-09 23:08 ` [PATCH v7 04/12] KVM: arm64: PMU: Don't define the sysreg reset() for PM{USERENR,CCFILTR}_EL0 Raghavendra Rao Ananta
2023-10-16 19:47   ` Eric Auger
2023-10-09 23:08 ` [PATCH v7 05/12] KVM: arm64: PMU: Add a helper to read a vCPU's PMCR_EL0 Raghavendra Rao Ananta
2023-10-16 20:02   ` Eric Auger
2023-10-09 23:08 ` [PATCH v7 06/12] KVM: arm64: PMU: Add a helper to read the number of counters Raghavendra Rao Ananta
2023-10-10 22:30   ` Oliver Upton
2023-10-13  5:43     ` Oliver Upton
2023-10-13 20:24       ` Raghavendra Rao Ananta
2023-10-09 23:08 ` [PATCH v7 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU Raghavendra Rao Ananta
2023-10-16 13:35   ` Sebastian Ott
2023-10-16 19:02     ` Raghavendra Rao Ananta
2023-10-16 19:15       ` Oliver Upton
2023-10-16 21:35         ` Raghavendra Rao Ananta
2023-10-17  5:52           ` Oliver Upton
2023-10-17  5:55             ` Oliver Upton
2023-10-17 16:58             ` Raghavendra Rao Ananta
2023-10-17 17:09               ` Oliver Upton
2023-10-17 17:25                 ` Raghavendra Rao Ananta
2023-10-17 18:10                   ` Oliver Upton
2023-10-17 18:45                     ` Raghavendra Rao Ananta
2023-10-09 23:08 ` [PATCH v7 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest Raghavendra Rao Ananta
2023-10-17 15:52   ` Sebastian Ott
2023-10-17 16:49     ` Raghavendra Rao Ananta
2023-10-19 10:45       ` Sebastian Ott
2023-10-19 18:05         ` Raghavendra Rao Ananta
2023-10-09 23:08 ` [PATCH v7 09/12] tools: Import arm_pmuv3.h Raghavendra Rao Ananta
2023-10-09 23:08 ` [PATCH v7 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test Raghavendra Rao Ananta
2023-10-12 11:24   ` Sebastian Ott
2023-10-12 15:01     ` Sebastian Ott
2023-10-13 21:05       ` Raghavendra Rao Ananta
2023-10-16 10:01         ` Sebastian Ott
2023-10-16 18:56         ` Oliver Upton
2023-10-16 19:05           ` Raghavendra Rao Ananta
2023-10-16 19:07             ` Oliver Upton
2023-10-17 14:51   ` Eric Auger
2023-10-17 17:07     ` Raghavendra Rao Ananta
2023-10-17 15:48   ` Sebastian Ott
2023-10-17 17:10     ` Raghavendra Rao Ananta
2023-10-09 23:08 ` [PATCH v7 11/12] KVM: selftests: aarch64: vPMU register test for implemented counters Raghavendra Rao Ananta
2023-10-17 18:54   ` Eric Auger
2023-10-17 21:42     ` Raghavendra Rao Ananta
2023-10-09 23:08 ` [PATCH v7 12/12] KVM: selftests: aarch64: vPMU register test for unimplemented counters Raghavendra Rao Ananta
2023-10-18  6:54   ` Eric Auger
2023-10-19 18:09     ` Raghavendra Rao Ananta
