* [PATCH v5 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU
@ 2023-08-17 0:30 Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 01/12] KVM: arm64: PMU: Introduce a helper to set the guest's PMU Raghavendra Rao Ananta
` (11 more replies)
0 siblings, 12 replies; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-08-17 0:30 UTC (permalink / raw)
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
linux-kernel, kvm
Hello,
With permission from Reiji Watanabe <reijiw@google.com>, the original
author of the series, I'm posting v5 with the necessary alterations.
The goal of this series is to allow userspace to limit the number
of PMU event counters on the vCPU. We need this to support migration
across systems that implement different numbers of counters.
The number of PMU event counters is indicated in PMCR_EL0.N.
For a vCPU with PMUv3 configured, its value defaults to that of
the current PE. Even with the current KVM, userspace can set the
vCPU's PMCR_EL0.N to any value using KVM_SET_ONE_REG.
However, this is practically unsupported, as KVM resets PMCR_EL0.N
to the host value on vCPU reset, and some KVM code uses the host
value to identify (un)implemented event counters on the vCPU.
This series will ensure that the PMCR_EL0.N value is preserved
on vCPU reset and that KVM doesn't use the host value
to identify (un)implemented event counters on the vCPU.
This allows userspace to limit the number of PMU event
counters on the vCPU.
The series is based on v6.5-rc6.
Patch 1 adds a helper to set a PMU for the guest. This helper will
make it easier for the following patches to modify the code
involved in that process.
Patch 2 makes KVM set the default PMU for the guest on the first
vCPU reset.
Patch 3 fixes the vCPU reset code to ensure that the RAZ bits of
PMCNTEN{SET,CLR}_EL0, PMINTEN{SET,CLR}_EL1, and
PMOVS{SET,CLR}_EL1 corresponding to unimplemented event
counters on the vCPU are reset to zero.
Patch 4 is a minor refactoring to use the default PMU register reset
function for PMUSERENR_EL0 and PMCCFILTR_EL0.
Patch 5 simplifies the existing code that extracts PMCR_EL0.N by
using FIELD_GET().
Patch 6 adds a helper to read vCPU's PMCR_EL0.
Patch 7 changes the code to use the guest's PMCR_EL0.N, instead
of the PE's PMCR_EL0.N.
Patch 8 adds support for userspace to modify PMCR_EL0.N.
Patches 9-12 add a selftest to verify reading and writing PMU registers
for implemented and unimplemented PMU event counters on the vCPU.
v5:
- Drop the patches (v4 3,4) related to PMU version fixes, as they are
now being handled in a separate series [1].
- Switch to config_lock, instead of kvm->lock, while configuring
the guest PMU.
- Instead of continuing after a WARN_ON() for the return value of
kvm_arm_set_vm_pmu() in kvm_arm_pmu_v3_set_pmu(), patch 1 now
returns from the function immediately with the error code.
- Fix WARN_ON() logic in kvm_host_pmu_init() (patch v4 9/14).
- Instead of returning 0, return -ENODEV from the
kvm_arm_set_vm_pmu() stub function.
- Do not define the PMEVN_CASE() and PMEVN_SWITCH() macros in
the selftest code as they are now included in the imported
arm_pmuv3.h header.
- Since the (initial) purpose of the selftest is to test the
accessibility of the counter registers, remove the functional
test at the end of test_access_pmc_regs(). It'll be added
later in a separate series.
- Introduce additional helper functions (destroy_vpmu_vm(),
PMC_ACC_TO_IDX()) in the selftest for ease of maintenance
and debugging.
v4:
https://lore.kernel.org/all/20230211031506.4159098-1-reijiw@google.com/
- Fix the selftest bug in patch 13: have test_access_pmc_regs()
specify the pmc index for test_bitmap_pmu_regs() instead of a
bit-shifted value. (Thank you, Raghavendra, for reporting the issue!)
v3:
https://lore.kernel.org/all/20230203040242.1792453-1-reijiw@google.com/
- Remove reset_pmu_reg(), and use reset_val() instead. [Marc]
- Fixed the initial value of PMCR_EL0.N on heterogeneous
PMU systems. [Oliver]
- Fixed PMUVer issues on heterogeneous PMU systems.
- Fixed typos [Shaoqin]
v2:
https://lore.kernel.org/all/20230117013542.371944-1-reijiw@google.com/
- Added a set_user() sysreg handler for PMCR_EL0 to disallow
userspace from setting the vCPU's PMCR_EL0.N to a value
that is greater than the host value (and added a new test
case for this behavior). [Oliver]
- Noted in the commit log of patch 2 that PMUSERENR_EL0 and
PMCCFILTR_EL0 have UNKNOWN reset values.
v1:
https://lore.kernel.org/all/20221230035928.3423990-1-reijiw@google.com/
Thank you.
Raghavendra
[1]:
https://lore.kernel.org/all/20230728181907.1759513-1-reijiw@google.com/
Raghavendra Rao Ananta (1):
tools: Import arm_pmuv3.h
Reiji Watanabe (11):
KVM: arm64: PMU: Introduce a helper to set the guest's PMU
KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset
KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on vCPU
reset
KVM: arm64: PMU: Don't define the sysreg reset() for
PM{USERENR,CCFILTR}_EL0
KVM: arm64: PMU: Simplify extracting PMCR_EL0.N
KVM: arm64: PMU: Add a helper to read a vCPU's PMCR_EL0
KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU
KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
KVM: selftests: aarch64: Introduce vpmu_counter_access test
KVM: selftests: aarch64: vPMU register test for implemented counters
KVM: selftests: aarch64: vPMU register test for unimplemented counters
arch/arm64/include/asm/kvm_host.h | 6 +
arch/arm64/kvm/arm.c | 3 +-
arch/arm64/kvm/pmu-emul.c | 82 ++-
arch/arm64/kvm/reset.c | 18 +-
arch/arm64/kvm/sys_regs.c | 96 +--
drivers/perf/arm_pmuv3.c | 3 +-
include/kvm/arm_pmu.h | 12 +
include/linux/perf/arm_pmuv3.h | 2 +-
tools/include/perf/arm_pmuv3.h | 306 ++++++++++
tools/testing/selftests/kvm/Makefile | 1 +
.../kvm/aarch64/vpmu_counter_access.c | 568 ++++++++++++++++++
.../selftests/kvm/include/aarch64/processor.h | 1 +
12 files changed, 1028 insertions(+), 70 deletions(-)
create mode 100644 tools/include/perf/arm_pmuv3.h
create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
base-commit: 2ccdd1b13c591d306f0401d98dedc4bdcd02b421
--
2.41.0.694.ge786442a9b-goog
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH v5 01/12] KVM: arm64: PMU: Introduce a helper to set the guest's PMU
2023-08-17 0:30 [PATCH v5 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
@ 2023-08-17 0:30 ` Raghavendra Rao Ananta
2023-09-15 19:22 ` Oliver Upton
2023-08-17 0:30 ` [PATCH v5 02/12] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset Raghavendra Rao Ananta
` (10 subsequent siblings)
11 siblings, 1 reply; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-08-17 0:30 UTC (permalink / raw)
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
linux-kernel, kvm
From: Reiji Watanabe <reijiw@google.com>
Introduce a new helper function to set the guest's PMU
(kvm->arch.arm_pmu), and use it when the guest's PMU needs
to be set. This helper will make it easier for the following
patches to modify the relevant code.
No functional change intended.
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
arch/arm64/kvm/pmu-emul.c | 52 +++++++++++++++++++++++++++------------
1 file changed, 36 insertions(+), 16 deletions(-)
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 5606509724787..0ffd1efa90c07 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -865,6 +865,32 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
return true;
}
+static int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+{
+ lockdep_assert_held(&kvm->arch.config_lock);
+
+ if (!arm_pmu) {
+ /*
+ * No PMU set, get the default one.
+ *
+ * The observant among you will notice that the supported_cpus
+ * mask does not get updated for the default PMU even though it
+ * is quite possible the selected instance supports only a
+ * subset of cores in the system. This is intentional, and
+ * upholds the preexisting behavior on heterogeneous systems
+ * where vCPUs can be scheduled on any core but the guest
+ * counters could stop working.
+ */
+ arm_pmu = kvm_pmu_probe_armpmu();
+ if (!arm_pmu)
+ return -ENODEV;
+ }
+
+ kvm->arch.arm_pmu = arm_pmu;
+
+ return 0;
+}
+
static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
{
struct kvm *kvm = vcpu->kvm;
@@ -884,9 +910,13 @@ static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
break;
}
- kvm->arch.arm_pmu = arm_pmu;
+ ret = kvm_arm_set_vm_pmu(kvm, arm_pmu);
+ if (ret) {
+ WARN_ON(ret);
+ break;
+ }
+
cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus);
- ret = 0;
break;
}
}
@@ -908,20 +938,10 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
return -EBUSY;
if (!kvm->arch.arm_pmu) {
- /*
- * No PMU set, get the default one.
- *
- * The observant among you will notice that the supported_cpus
- * mask does not get updated for the default PMU even though it
- * is quite possible the selected instance supports only a
- * subset of cores in the system. This is intentional, and
- * upholds the preexisting behavior on heterogeneous systems
- * where vCPUs can be scheduled on any core but the guest
- * counters could stop working.
- */
- kvm->arch.arm_pmu = kvm_pmu_probe_armpmu();
- if (!kvm->arch.arm_pmu)
- return -ENODEV;
+ int ret = kvm_arm_set_vm_pmu(kvm, NULL);
+
+ if (ret)
+ return ret;
}
switch (attr->attr) {
--
2.41.0.694.ge786442a9b-goog
* [PATCH v5 02/12] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset
2023-08-17 0:30 [PATCH v5 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 01/12] KVM: arm64: PMU: Introduce a helper to set the guest's PMU Raghavendra Rao Ananta
@ 2023-08-17 0:30 ` Raghavendra Rao Ananta
2023-08-17 5:03 ` kernel test robot
` (2 more replies)
2023-08-17 0:30 ` [PATCH v5 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} " Raghavendra Rao Ananta
` (9 subsequent siblings)
11 siblings, 3 replies; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-08-17 0:30 UTC (permalink / raw)
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
linux-kernel, kvm
From: Reiji Watanabe <reijiw@google.com>
The following patches will use the number of counters from the
arm_pmu to set PMCR_EL0.N for the guest during vCPU reset.
However, the guest is not associated with any arm_pmu until
userspace configures the vPMU device attributes, and a reset can
happen before that; so call kvm_arm_support_pmu_v3() and set the
default PMU just before doing the reset.
No functional change intended.
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
arch/arm64/kvm/pmu-emul.c | 9 +--------
arch/arm64/kvm/reset.c | 18 +++++++++++++-----
include/kvm/arm_pmu.h | 6 ++++++
3 files changed, 20 insertions(+), 13 deletions(-)
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 0ffd1efa90c07..b87822024828a 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -865,7 +865,7 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
return true;
}
-static int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
{
lockdep_assert_held(&kvm->arch.config_lock);
@@ -937,13 +937,6 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
if (vcpu->arch.pmu.created)
return -EBUSY;
- if (!kvm->arch.arm_pmu) {
- int ret = kvm_arm_set_vm_pmu(kvm, NULL);
-
- if (ret)
- return ret;
- }
-
switch (attr->attr) {
case KVM_ARM_VCPU_PMU_V3_IRQ: {
int __user *uaddr = (int __user *)(long)attr->addr;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index bc8556b6f4590..4c20f1ccd0789 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -206,6 +206,7 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
*/
int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
{
+ struct kvm *kvm = vcpu->kvm;
struct vcpu_reset_state reset_state;
int ret;
bool loaded;
@@ -216,6 +217,18 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
vcpu->arch.reset_state.reset = false;
spin_unlock(&vcpu->arch.mp_state_lock);
+ /*
+ * When the vCPU has a PMU, but no PMU is set for the guest
+ * yet, set the default one.
+ */
+ if (kvm_vcpu_has_pmu(vcpu) && unlikely(!kvm->arch.arm_pmu)) {
+ ret = -EINVAL;
+ if (kvm_arm_support_pmu_v3())
+ ret = kvm_arm_set_vm_pmu(kvm, NULL);
+ if (ret)
+ return ret;
+ }
+
/* Reset PMU outside of the non-preemptible section */
kvm_pmu_vcpu_reset(vcpu);
@@ -257,11 +270,6 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
} else {
pstate = VCPU_RESET_PSTATE_EL1;
}
-
- if (kvm_vcpu_has_pmu(vcpu) && !kvm_arm_support_pmu_v3()) {
- ret = -EINVAL;
- goto out;
- }
break;
}
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 847da6fc27139..66a2f8477641e 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -100,6 +100,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
})
u8 kvm_arm_pmu_get_pmuver_limit(void);
+int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu);
#else
struct kvm_pmu {
@@ -172,6 +173,11 @@ static inline u8 kvm_arm_pmu_get_pmuver_limit(void)
return 0;
}
+static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+{
+ return -ENODEV;
+}
+
#endif
#endif
--
2.41.0.694.ge786442a9b-goog
* [PATCH v5 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on vCPU reset
2023-08-17 0:30 [PATCH v5 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 01/12] KVM: arm64: PMU: Introduce a helper to set the guest's PMU Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 02/12] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset Raghavendra Rao Ananta
@ 2023-08-17 0:30 ` Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 04/12] KVM: arm64: PMU: Don't define the sysreg reset() for PM{USERENR,CCFILTR}_EL0 Raghavendra Rao Ananta
` (8 subsequent siblings)
11 siblings, 0 replies; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-08-17 0:30 UTC (permalink / raw)
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
linux-kernel, kvm
From: Reiji Watanabe <reijiw@google.com>
On vCPU reset, PMCNTEN{SET,CLR}_EL0, PMINTEN{SET,CLR}_EL1, and
PMOVS{SET,CLR}_EL1 for a vCPU are reset by reset_pmu_reg().
This function clears RAZ bits of those registers corresponding
to unimplemented event counters on the vCPU, and sets bits
corresponding to implemented event counters to a predefined
pseudo UNKNOWN value (some bits are set to 1).
The function identifies (un)implemented event counters on the
vCPU based on the PMCR_EL0.N value on the host. Using the host
value for this would be problematic when KVM supports letting
userspace set PMCR_EL0.N to a value different from the host value
(some of the RAZ bits of those registers could end up being set to 1).
Fix this by simply clearing the registers, which guarantees
that all the RAZ bits are cleared even when the PMCR_EL0.N value
for the vCPU differs from the host value. Do this with reset_val()
instead of fixing reset_pmu_reg(), and remove reset_pmu_reg(),
as it is no longer used.
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
arch/arm64/kvm/sys_regs.c | 21 +--------------------
1 file changed, 1 insertion(+), 20 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2ca2973abe66f..9d3d44d165eed 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -717,25 +717,6 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
return REG_HIDDEN;
}
-static u64 reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
-{
- u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
-
- /* No PMU available, any PMU reg may UNDEF... */
- if (!kvm_arm_support_pmu_v3())
- return 0;
-
- n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
- n &= ARMV8_PMU_PMCR_N_MASK;
- if (n)
- mask |= GENMASK(n - 1, 0);
-
- reset_unknown(vcpu, r);
- __vcpu_sys_reg(vcpu, r->reg) &= mask;
-
- return __vcpu_sys_reg(vcpu, r->reg);
-}
-
static u64 reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
reset_unknown(vcpu, r);
@@ -1115,7 +1096,7 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
trap_wcr, reset_wcr, 0, 0, get_wcr, set_wcr }
#define PMU_SYS_REG(name) \
- SYS_DESC(SYS_##name), .reset = reset_pmu_reg, \
+ SYS_DESC(SYS_##name), .reset = reset_val, \
.visibility = pmu_visibility
/* Macro to expand the PMEVCNTRn_EL0 register */
--
2.41.0.694.ge786442a9b-goog
* [PATCH v5 04/12] KVM: arm64: PMU: Don't define the sysreg reset() for PM{USERENR,CCFILTR}_EL0
2023-08-17 0:30 [PATCH v5 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
` (2 preceding siblings ...)
2023-08-17 0:30 ` [PATCH v5 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} " Raghavendra Rao Ananta
@ 2023-08-17 0:30 ` Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 05/12] KVM: arm64: PMU: Simplify extracting PMCR_EL0.N Raghavendra Rao Ananta
` (7 subsequent siblings)
11 siblings, 0 replies; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-08-17 0:30 UTC (permalink / raw)
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
linux-kernel, kvm
From: Reiji Watanabe <reijiw@google.com>
The default reset function for PMU registers (defined by PMU_SYS_REG)
now simply clears a specified register. Use the default one for
PMUSERENR_EL0 and PMCCFILTR_EL0, as KVM currently clears those
registers on vCPU reset (NOTE: All non-RES0 fields of those
registers have UNKNOWN reset values, and the same fields of
their AArch32 registers have 0 reset values).
No functional change intended.
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
arch/arm64/kvm/sys_regs.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 9d3d44d165eed..39e9248c935e7 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2178,7 +2178,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
* in 32bit mode. Here we choose to reset it as zero for consistency.
*/
{ PMU_SYS_REG(PMUSERENR_EL0), .access = access_pmuserenr,
- .reset = reset_val, .reg = PMUSERENR_EL0, .val = 0 },
+ .reg = PMUSERENR_EL0, },
{ PMU_SYS_REG(PMOVSSET_EL0),
.access = access_pmovs, .reg = PMOVSSET_EL0 },
@@ -2336,7 +2336,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
* in 32bit mode. Here we choose to reset it as zero for consistency.
*/
{ PMU_SYS_REG(PMCCFILTR_EL0), .access = access_pmu_evtyper,
- .reset = reset_val, .reg = PMCCFILTR_EL0, .val = 0 },
+ .reg = PMCCFILTR_EL0, },
EL2_REG(VPIDR_EL2, access_rw, reset_unknown, 0),
EL2_REG(VMPIDR_EL2, access_rw, reset_unknown, 0),
--
2.41.0.694.ge786442a9b-goog
* [PATCH v5 05/12] KVM: arm64: PMU: Simplify extracting PMCR_EL0.N
2023-08-17 0:30 [PATCH v5 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
` (3 preceding siblings ...)
2023-08-17 0:30 ` [PATCH v5 04/12] KVM: arm64: PMU: Don't define the sysreg reset() for PM{USERENR,CCFILTR}_EL0 Raghavendra Rao Ananta
@ 2023-08-17 0:30 ` Raghavendra Rao Ananta
2023-08-17 6:38 ` kernel test robot
2023-09-15 19:56 ` Oliver Upton
2023-08-17 0:30 ` [PATCH v5 06/12] KVM: arm64: PMU: Add a helper to read a vCPU's PMCR_EL0 Raghavendra Rao Ananta
` (6 subsequent siblings)
11 siblings, 2 replies; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-08-17 0:30 UTC (permalink / raw)
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
linux-kernel, kvm
From: Reiji Watanabe <reijiw@google.com>
Some code extracts PMCR_EL0.N using ARMV8_PMU_PMCR_N_SHIFT and
ARMV8_PMU_PMCR_N_MASK. Define ARMV8_PMU_PMCR_N (0x1f << 11),
and simplify that code using FIELD_GET() and/or ARMV8_PMU_PMCR_N.
The following patches will also use these macros to extract PMCR_EL0.N.
No functional change intended.
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
arch/arm64/kvm/pmu-emul.c | 3 +--
arch/arm64/kvm/sys_regs.c | 7 +++----
drivers/perf/arm_pmuv3.c | 3 +--
include/linux/perf/arm_pmuv3.h | 2 +-
4 files changed, 6 insertions(+), 9 deletions(-)
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index b87822024828a..f7b5fa16341ad 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -245,9 +245,8 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
{
- u64 val = __vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
+ u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
- val &= ARMV8_PMU_PMCR_N_MASK;
if (val == 0)
return BIT(ARMV8_PMU_CYCLE_IDX);
else
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 39e9248c935e7..30108f09e088b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -750,7 +750,7 @@ static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
return 0;
/* Only preserve PMCR_EL0.N, and reset the rest to 0 */
- pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
+ pmcr = read_sysreg(pmcr_el0) & ARMV8_PMU_PMCR_N;
if (!kvm_supports_32bit_el0())
pmcr |= ARMV8_PMU_PMCR_LC;
@@ -858,10 +858,9 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
{
- u64 pmcr, val;
+ u64 val;
- pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
- val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
+ val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
kvm_inject_undefined(vcpu);
return false;
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 08b3a1bf0ef62..7618b0adc0b8c 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -1128,8 +1128,7 @@ static void __armv8pmu_probe_pmu(void *info)
probe->present = true;
/* Read the nb of CNTx counters supported from PMNC */
- cpu_pmu->num_events = (armv8pmu_pmcr_read() >> ARMV8_PMU_PMCR_N_SHIFT)
- & ARMV8_PMU_PMCR_N_MASK;
+ cpu_pmu->num_events = FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read());
/* Add the CPU cycles counter */
cpu_pmu->num_events += 1;
diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h
index e3899bd77f5cc..ecbcf3f93560c 100644
--- a/include/linux/perf/arm_pmuv3.h
+++ b/include/linux/perf/arm_pmuv3.h
@@ -216,7 +216,7 @@
#define ARMV8_PMU_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */
#define ARMV8_PMU_PMCR_LP (1 << 7) /* Long event counter enable */
#define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */
-#define ARMV8_PMU_PMCR_N_MASK 0x1f
+#define ARMV8_PMU_PMCR_N (0x1f << ARMV8_PMU_PMCR_N_SHIFT)
#define ARMV8_PMU_PMCR_MASK 0xff /* Mask for writable bits */
/*
--
2.41.0.694.ge786442a9b-goog
* [PATCH v5 06/12] KVM: arm64: PMU: Add a helper to read a vCPU's PMCR_EL0
2023-08-17 0:30 [PATCH v5 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
` (4 preceding siblings ...)
2023-08-17 0:30 ` [PATCH v5 05/12] KVM: arm64: PMU: Simplify extracting PMCR_EL0.N Raghavendra Rao Ananta
@ 2023-08-17 0:30 ` Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU Raghavendra Rao Ananta
` (5 subsequent siblings)
11 siblings, 0 replies; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-08-17 0:30 UTC (permalink / raw)
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
linux-kernel, kvm
From: Reiji Watanabe <reijiw@google.com>
Add a helper to read a vCPU's PMCR_EL0, and use it when KVM
reads a vCPU's PMCR_EL0.
The PMCR_EL0 value is tracked in the sysreg file of each vCPU.
The following patches will make (only) PMCR_EL0.N tracked per guest.
The new helper will be useful for combining the PMCR_EL0.N
field (tracked per guest) with the other fields (tracked per vCPU)
to provide the value of PMCR_EL0.
No functional change intended.
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
arch/arm64/kvm/arm.c | 3 +--
arch/arm64/kvm/pmu-emul.c | 17 +++++++++++------
arch/arm64/kvm/sys_regs.c | 6 +++---
include/kvm/arm_pmu.h | 6 ++++++
4 files changed, 21 insertions(+), 11 deletions(-)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index d1cb298a58a08..7bd438c181f76 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -800,8 +800,7 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
}
if (kvm_check_request(KVM_REQ_RELOAD_PMU, vcpu))
- kvm_pmu_handle_pmcr(vcpu,
- __vcpu_sys_reg(vcpu, PMCR_EL0));
+ kvm_pmu_handle_pmcr(vcpu, kvm_vcpu_read_pmcr(vcpu));
if (kvm_check_request(KVM_REQ_SUSPEND, vcpu))
return kvm_vcpu_suspend(vcpu);
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index f7b5fa16341ad..42b88b1a901f9 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -67,7 +67,7 @@ static bool kvm_pmc_is_64bit(struct kvm_pmc *pmc)
static bool kvm_pmc_has_64bit_overflow(struct kvm_pmc *pmc)
{
- u64 val = __vcpu_sys_reg(kvm_pmc_to_vcpu(pmc), PMCR_EL0);
+ u64 val = kvm_vcpu_read_pmcr(kvm_pmc_to_vcpu(pmc));
return (pmc->idx < ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LP)) ||
(pmc->idx == ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LC));
@@ -245,7 +245,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
{
- u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
+ u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu));
if (val == 0)
return BIT(ARMV8_PMU_CYCLE_IDX);
@@ -266,7 +266,7 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
if (!kvm_vcpu_has_pmu(vcpu))
return;
- if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
+ if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) || !val)
return;
for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
@@ -318,7 +318,7 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
{
u64 reg = 0;
- if ((__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)) {
+ if ((kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E)) {
reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
@@ -420,7 +420,7 @@ static void kvm_pmu_counter_increment(struct kvm_vcpu *vcpu,
{
int i;
- if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
+ if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E))
return;
/* Weed out disabled counters */
@@ -563,7 +563,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc)
{
struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
- return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
+ return (kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) &&
(__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(pmc->idx));
}
@@ -1069,3 +1069,8 @@ u8 kvm_arm_pmu_get_pmuver_limit(void)
ID_AA64DFR0_EL1_PMUVer_V3P5);
return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), tmp);
}
+
+u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+ return __vcpu_sys_reg(vcpu, PMCR_EL0);
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 30108f09e088b..cf4981e2c153b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -803,7 +803,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
* Only update writeable bits of PMCR (continuing into
* kvm_pmu_handle_pmcr() as well)
*/
- val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+ val = kvm_vcpu_read_pmcr(vcpu);
val &= ~ARMV8_PMU_PMCR_MASK;
val |= p->regval & ARMV8_PMU_PMCR_MASK;
if (!kvm_supports_32bit_el0())
@@ -811,7 +811,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
kvm_pmu_handle_pmcr(vcpu, val);
} else {
/* PMCR.P & PMCR.C are RAZ */
- val = __vcpu_sys_reg(vcpu, PMCR_EL0)
+ val = kvm_vcpu_read_pmcr(vcpu)
& ~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C);
p->regval = val;
}
@@ -860,7 +860,7 @@ static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
{
u64 val;
- val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
+ val = FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu));
if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
kvm_inject_undefined(vcpu);
return false;
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 66a2f8477641e..99fe64c81ca8b 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -102,6 +102,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
u8 kvm_arm_pmu_get_pmuver_limit(void);
int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu);
+u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu);
#else
struct kvm_pmu {
};
@@ -178,6 +179,11 @@ static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
return -ENODEV;
}
+static inline u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+ return 0;
+}
+
#endif
#endif
--
2.41.0.694.ge786442a9b-goog
* [PATCH v5 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU
2023-08-17 0:30 [PATCH v5 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
` (5 preceding siblings ...)
2023-08-17 0:30 ` [PATCH v5 06/12] KVM: arm64: PMU: Add a helper to read a vCPU's PMCR_EL0 Raghavendra Rao Ananta
@ 2023-08-17 0:30 ` Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest Raghavendra Rao Ananta
` (4 subsequent siblings)
11 siblings, 0 replies; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-08-17 0:30 UTC (permalink / raw)
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
linux-kernel, kvm
From: Reiji Watanabe <reijiw@google.com>
The number of PMU event counters is indicated in PMCR_EL0.N.
For a vCPU with PMUv3 configured, the value is set to the same
value as the current PE's on every vCPU reset. Unless the vCPU is
pinned to PEs that have the PMU associated with the guest from the
initial vCPU reset, the value might differ from the PMU's
PMCR_EL0.N on heterogeneous PMU systems.
Fix this by setting the vCPU's PMCR_EL0.N to the PMU's PMCR_EL0.N
value. Track the PMCR_EL0.N per guest, as only one PMU can be set
for the guest (PMCR_EL0.N must be the same for all vCPUs of the
guest), and it is convenient for updating the value.
KVM does not yet support userspace modifying PMCR_EL0.N.
The following patch will add support for that.
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
arch/arm64/include/asm/kvm_host.h | 3 +++
arch/arm64/kvm/pmu-emul.c | 14 +++++++++++++-
arch/arm64/kvm/sys_regs.c | 15 +++++++++------
3 files changed, 25 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index d3dd05bbfe23f..0f2dbbe8f6a7e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -256,6 +256,9 @@ struct kvm_arch {
cpumask_var_t supported_cpus;
+ /* PMCR_EL0.N value for the guest */
+ u8 pmcr_n;
+
/* Hypercall features firmware registers' descriptor */
struct kvm_smccc_features smccc_feat;
struct maple_tree smccc_filter;
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 42b88b1a901f9..ce7de6bbdc967 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -681,6 +681,9 @@ void kvm_host_pmu_init(struct arm_pmu *pmu)
if (!entry)
goto out_unlock;
+ WARN_ON((pmu->num_events <= 0) ||
+ (pmu->num_events > ARMV8_PMU_MAX_COUNTERS));
+
entry->arm_pmu = pmu;
list_add_tail(&entry->entry, &arm_pmus);
@@ -887,6 +890,13 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
kvm->arch.arm_pmu = arm_pmu;
+ /*
+ * Both num_events and PMCR_EL0.N indicate the number of
+ * PMU event counters, but the former includes the cycle counter
+ * while the latter does not.
+ */
+ kvm->arch.pmcr_n = arm_pmu->num_events - 1;
+
return 0;
}
@@ -1072,5 +1082,7 @@ u8 kvm_arm_pmu_get_pmuver_limit(void)
u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
{
- return __vcpu_sys_reg(vcpu, PMCR_EL0);
+ u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0) & ~ARMV8_PMU_PMCR_N;
+
+ return pmcr | ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index cf4981e2c153b..2075901356c5b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -745,12 +745,8 @@ static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
u64 pmcr;
- /* No PMU available, PMCR_EL0 may UNDEF... */
- if (!kvm_arm_support_pmu_v3())
- return 0;
-
/* Only preserve PMCR_EL0.N, and reset the rest to 0 */
- pmcr = read_sysreg(pmcr_el0) & ARMV8_PMU_PMCR_N;
+ pmcr = kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_N;
if (!kvm_supports_32bit_el0())
pmcr |= ARMV8_PMU_PMCR_LC;
@@ -1083,6 +1079,13 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
return true;
}
+static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
+ u64 *val)
+{
+ *val = kvm_vcpu_read_pmcr(vcpu);
+ return 0;
+}
+
/* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
#define DBG_BCR_BVR_WCR_WVR_EL1(n) \
{ SYS_DESC(SYS_DBGBVRn_EL1(n)), \
@@ -2145,7 +2148,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_SVCR), undef_access },
{ PMU_SYS_REG(PMCR_EL0), .access = access_pmcr,
- .reset = reset_pmcr, .reg = PMCR_EL0 },
+ .reset = reset_pmcr, .reg = PMCR_EL0, .get_user = get_pmcr },
{ PMU_SYS_REG(PMCNTENSET_EL0),
.access = access_pmcnten, .reg = PMCNTENSET_EL0 },
{ PMU_SYS_REG(PMCNTENCLR_EL0),
--
2.41.0.694.ge786442a9b-goog
* [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
2023-08-17 0:30 [PATCH v5 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
` (6 preceding siblings ...)
2023-08-17 0:30 ` [PATCH v5 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU Raghavendra Rao Ananta
@ 2023-08-17 0:30 ` Raghavendra Rao Ananta
2023-08-21 12:12 ` Shaoqin Huang
` (2 more replies)
2023-08-17 0:30 ` [PATCH v5 09/12] tools: Import arm_pmuv3.h Raghavendra Rao Ananta
` (3 subsequent siblings)
11 siblings, 3 replies; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-08-17 0:30 UTC (permalink / raw)
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
linux-kernel, kvm
From: Reiji Watanabe <reijiw@google.com>
KVM does not yet support userspace modifying PMCR_EL0.N (with
the previous patch, KVM ignores what is written by userspace).
Add support for userspace to limit PMCR_EL0.N.
Disallow userspace to set PMCR_EL0.N to a value that is greater
than the host value (KVM_SET_ONE_REG will fail), as KVM doesn't
support more event counters than the host HW implements.
Although this is an ABI change, this change only affects
userspace setting PMCR_EL0.N to a larger value than the host.
As accesses to unadvertised event counter indices are CONSTRAINED
UNPREDICTABLE behavior, and PMCR_EL0.N was reset to the host value
on every vCPU reset before this series, I can't think of any
use case where userspace would do that.
Also, ignore writes to read-only bits that are cleared on vCPU reset,
and RES{0,1} bits (including writable bits that KVM doesn't support
yet), as those bits shouldn't be modified (at least with
the current KVM).
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
arch/arm64/include/asm/kvm_host.h | 3 ++
arch/arm64/kvm/pmu-emul.c | 1 +
arch/arm64/kvm/sys_regs.c | 49 +++++++++++++++++++++++++++++--
3 files changed, 51 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 0f2dbbe8f6a7e..c15ec365283d1 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -259,6 +259,9 @@ struct kvm_arch {
/* PMCR_EL0.N value for the guest */
u8 pmcr_n;
+ /* Limit value of PMCR_EL0.N for the guest */
+ u8 pmcr_n_limit;
+
/* Hypercall features firmware registers' descriptor */
struct kvm_smccc_features smccc_feat;
struct maple_tree smccc_filter;
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index ce7de6bbdc967..39ad56a71ad20 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -896,6 +896,7 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
* while the latter does not.
*/
kvm->arch.pmcr_n = arm_pmu->num_events - 1;
+ kvm->arch.pmcr_n_limit = arm_pmu->num_events - 1;
return 0;
}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2075901356c5b..c01d62afa7db4 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1086,6 +1086,51 @@ static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
return 0;
}
+static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
+ u64 val)
+{
+ struct kvm *kvm = vcpu->kvm;
+ u64 new_n, mutable_mask;
+ int ret = 0;
+
+ new_n = FIELD_GET(ARMV8_PMU_PMCR_N, val);
+
+ mutex_lock(&kvm->arch.config_lock);
+ if (unlikely(new_n != kvm->arch.pmcr_n)) {
+ /*
+ * The vCPU can't have more counters than the PMU
+ * hardware implements.
+ */
+ if (new_n <= kvm->arch.pmcr_n_limit)
+ kvm->arch.pmcr_n = new_n;
+ else
+ ret = -EINVAL;
+ }
+ mutex_unlock(&kvm->arch.config_lock);
+ if (ret)
+ return ret;
+
+ /*
+ * Ignore writes to RES0 bits, read-only bits that are cleared on
+ * vCPU reset, and writable bits that KVM doesn't support yet.
+ * (i.e. only PMCR.N and bits [7:0] are mutable from userspace)
+ * The LP bit is RES0 when FEAT_PMUv3p5 is not supported on the vCPU.
+ * But, we leave the bit as it is here, as the vCPU's PMUver might
+ * be changed later (NOTE: the bit will be cleared on first vCPU run
+ * if necessary).
+ */
+ mutable_mask = (ARMV8_PMU_PMCR_MASK | ARMV8_PMU_PMCR_N);
+ val &= mutable_mask;
+ val |= (__vcpu_sys_reg(vcpu, r->reg) & ~mutable_mask);
+
+ /* The LC bit is RES1 when AArch32 is not supported */
+ if (!kvm_supports_32bit_el0())
+ val |= ARMV8_PMU_PMCR_LC;
+
+ __vcpu_sys_reg(vcpu, r->reg) = val;
+ return 0;
+}
+
/* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
#define DBG_BCR_BVR_WCR_WVR_EL1(n) \
{ SYS_DESC(SYS_DBGBVRn_EL1(n)), \
@@ -2147,8 +2192,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_CTR_EL0), access_ctr },
{ SYS_DESC(SYS_SVCR), undef_access },
- { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr,
- .reset = reset_pmcr, .reg = PMCR_EL0, .get_user = get_pmcr },
+ { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr,
+ .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr },
{ PMU_SYS_REG(PMCNTENSET_EL0),
.access = access_pmcnten, .reg = PMCNTENSET_EL0 },
{ PMU_SYS_REG(PMCNTENCLR_EL0),
--
2.41.0.694.ge786442a9b-goog
* [PATCH v5 09/12] tools: Import arm_pmuv3.h
2023-08-17 0:30 [PATCH v5 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
` (7 preceding siblings ...)
2023-08-17 0:30 ` [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest Raghavendra Rao Ananta
@ 2023-08-17 0:30 ` Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test Raghavendra Rao Ananta
` (2 subsequent siblings)
11 siblings, 0 replies; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-08-17 0:30 UTC (permalink / raw)
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
linux-kernel, kvm
Import the kernel's include/linux/perf/arm_pmuv3.h, with the
definition of PMEVN_SWITCH() additionally including an assert()
for the 'default' case. The following patches will use macros
defined in this header.
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
tools/include/perf/arm_pmuv3.h | 308 +++++++++++++++++++++++++++++++++
1 file changed, 308 insertions(+)
create mode 100644 tools/include/perf/arm_pmuv3.h
diff --git a/tools/include/perf/arm_pmuv3.h b/tools/include/perf/arm_pmuv3.h
new file mode 100644
index 0000000000000..1567a772abe27
--- /dev/null
+++ b/tools/include/perf/arm_pmuv3.h
@@ -0,0 +1,308 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#ifndef __PERF_ARM_PMUV3_H
+#define __PERF_ARM_PMUV3_H
+
+#include <assert.h>
+#include <asm/bug.h>
+
+#define ARMV8_PMU_MAX_COUNTERS 32
+#define ARMV8_PMU_COUNTER_MASK (ARMV8_PMU_MAX_COUNTERS - 1)
+
+/*
+ * Common architectural and microarchitectural event numbers.
+ */
+#define ARMV8_PMUV3_PERFCTR_SW_INCR 0x0000
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL 0x0001
+#define ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL 0x0002
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL 0x0003
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE 0x0004
+#define ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL 0x0005
+#define ARMV8_PMUV3_PERFCTR_LD_RETIRED 0x0006
+#define ARMV8_PMUV3_PERFCTR_ST_RETIRED 0x0007
+#define ARMV8_PMUV3_PERFCTR_INST_RETIRED 0x0008
+#define ARMV8_PMUV3_PERFCTR_EXC_TAKEN 0x0009
+#define ARMV8_PMUV3_PERFCTR_EXC_RETURN 0x000A
+#define ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED 0x000B
+#define ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED 0x000C
+#define ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED 0x000D
+#define ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED 0x000E
+#define ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED 0x000F
+#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED 0x0010
+#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES 0x0011
+#define ARMV8_PMUV3_PERFCTR_BR_PRED 0x0012
+#define ARMV8_PMUV3_PERFCTR_MEM_ACCESS 0x0013
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE 0x0014
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB 0x0015
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE 0x0016
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL 0x0017
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB 0x0018
+#define ARMV8_PMUV3_PERFCTR_BUS_ACCESS 0x0019
+#define ARMV8_PMUV3_PERFCTR_MEMORY_ERROR 0x001A
+#define ARMV8_PMUV3_PERFCTR_INST_SPEC 0x001B
+#define ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED 0x001C
+#define ARMV8_PMUV3_PERFCTR_BUS_CYCLES 0x001D
+#define ARMV8_PMUV3_PERFCTR_CHAIN 0x001E
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE 0x001F
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE 0x0020
+#define ARMV8_PMUV3_PERFCTR_BR_RETIRED 0x0021
+#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED_RETIRED 0x0022
+#define ARMV8_PMUV3_PERFCTR_STALL_FRONTEND 0x0023
+#define ARMV8_PMUV3_PERFCTR_STALL_BACKEND 0x0024
+#define ARMV8_PMUV3_PERFCTR_L1D_TLB 0x0025
+#define ARMV8_PMUV3_PERFCTR_L1I_TLB 0x0026
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE 0x0027
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_REFILL 0x0028
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_ALLOCATE 0x0029
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL 0x002A
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE 0x002B
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB 0x002C
+#define ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL 0x002D
+#define ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL 0x002E
+#define ARMV8_PMUV3_PERFCTR_L2D_TLB 0x002F
+#define ARMV8_PMUV3_PERFCTR_L2I_TLB 0x0030
+#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS 0x0031
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE 0x0032
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS 0x0033
+#define ARMV8_PMUV3_PERFCTR_DTLB_WALK 0x0034
+#define ARMV8_PMUV3_PERFCTR_ITLB_WALK 0x0035
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_RD 0x0036
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS_RD 0x0037
+#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS_RD 0x0038
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_LMISS_RD 0x0039
+#define ARMV8_PMUV3_PERFCTR_OP_RETIRED 0x003A
+#define ARMV8_PMUV3_PERFCTR_OP_SPEC 0x003B
+#define ARMV8_PMUV3_PERFCTR_STALL 0x003C
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND 0x003D
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND 0x003E
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT 0x003F
+
+/* Statistical profiling extension microarchitectural events */
+#define ARMV8_SPE_PERFCTR_SAMPLE_POP 0x4000
+#define ARMV8_SPE_PERFCTR_SAMPLE_FEED 0x4001
+#define ARMV8_SPE_PERFCTR_SAMPLE_FILTRATE 0x4002
+#define ARMV8_SPE_PERFCTR_SAMPLE_COLLISION 0x4003
+
+/* AMUv1 architecture events */
+#define ARMV8_AMU_PERFCTR_CNT_CYCLES 0x4004
+#define ARMV8_AMU_PERFCTR_STALL_BACKEND_MEM 0x4005
+
+/* long-latency read miss events */
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_LMISS 0x4006
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_LMISS_RD 0x4009
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_LMISS 0x400A
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_LMISS_RD 0x400B
+
+/* Trace buffer events */
+#define ARMV8_PMUV3_PERFCTR_TRB_WRAP 0x400C
+#define ARMV8_PMUV3_PERFCTR_TRB_TRIG 0x400E
+
+/* Trace unit events */
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT0 0x4010
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT1 0x4011
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT2 0x4012
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT3 0x4013
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT4 0x4018
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT5 0x4019
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT6 0x401A
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT7 0x401B
+
+/* additional latency from alignment events */
+#define ARMV8_PMUV3_PERFCTR_LDST_ALIGN_LAT 0x4020
+#define ARMV8_PMUV3_PERFCTR_LD_ALIGN_LAT 0x4021
+#define ARMV8_PMUV3_PERFCTR_ST_ALIGN_LAT 0x4022
+
+/* Armv8.5 Memory Tagging Extension events */
+#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED 0x4024
+#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_RD 0x4025
+#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_WR 0x4026
+
+/* ARMv8 recommended implementation defined event types */
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD 0x0040
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR 0x0041
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD 0x0042
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR 0x0043
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_INNER 0x0044
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_OUTER 0x0045
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_VICTIM 0x0046
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_CLEAN 0x0047
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_INVAL 0x0048
+
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD 0x004C
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR 0x004D
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD 0x004E
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR 0x004F
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_RD 0x0050
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WR 0x0051
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_RD 0x0052
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_WR 0x0053
+
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_VICTIM 0x0056
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_CLEAN 0x0057
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_INVAL 0x0058
+
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_RD 0x005C
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_WR 0x005D
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_RD 0x005E
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_WR 0x005F
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD 0x0060
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR 0x0061
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_SHARED 0x0062
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NOT_SHARED 0x0063
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NORMAL 0x0064
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_PERIPH 0x0065
+#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_RD 0x0066
+#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_WR 0x0067
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LD_SPEC 0x0068
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_ST_SPEC 0x0069
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LDST_SPEC 0x006A
+
+#define ARMV8_IMPDEF_PERFCTR_LDREX_SPEC 0x006C
+#define ARMV8_IMPDEF_PERFCTR_STREX_PASS_SPEC 0x006D
+#define ARMV8_IMPDEF_PERFCTR_STREX_FAIL_SPEC 0x006E
+#define ARMV8_IMPDEF_PERFCTR_STREX_SPEC 0x006F
+#define ARMV8_IMPDEF_PERFCTR_LD_SPEC 0x0070
+#define ARMV8_IMPDEF_PERFCTR_ST_SPEC 0x0071
+#define ARMV8_IMPDEF_PERFCTR_LDST_SPEC 0x0072
+#define ARMV8_IMPDEF_PERFCTR_DP_SPEC 0x0073
+#define ARMV8_IMPDEF_PERFCTR_ASE_SPEC 0x0074
+#define ARMV8_IMPDEF_PERFCTR_VFP_SPEC 0x0075
+#define ARMV8_IMPDEF_PERFCTR_PC_WRITE_SPEC 0x0076
+#define ARMV8_IMPDEF_PERFCTR_CRYPTO_SPEC 0x0077
+#define ARMV8_IMPDEF_PERFCTR_BR_IMMED_SPEC 0x0078
+#define ARMV8_IMPDEF_PERFCTR_BR_RETURN_SPEC 0x0079
+#define ARMV8_IMPDEF_PERFCTR_BR_INDIRECT_SPEC 0x007A
+
+#define ARMV8_IMPDEF_PERFCTR_ISB_SPEC 0x007C
+#define ARMV8_IMPDEF_PERFCTR_DSB_SPEC 0x007D
+#define ARMV8_IMPDEF_PERFCTR_DMB_SPEC 0x007E
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_UNDEF 0x0081
+#define ARMV8_IMPDEF_PERFCTR_EXC_SVC 0x0082
+#define ARMV8_IMPDEF_PERFCTR_EXC_PABORT 0x0083
+#define ARMV8_IMPDEF_PERFCTR_EXC_DABORT 0x0084
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_IRQ 0x0086
+#define ARMV8_IMPDEF_PERFCTR_EXC_FIQ 0x0087
+#define ARMV8_IMPDEF_PERFCTR_EXC_SMC 0x0088
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_HVC 0x008A
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_PABORT 0x008B
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_DABORT 0x008C
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_OTHER 0x008D
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_IRQ 0x008E
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_FIQ 0x008F
+#define ARMV8_IMPDEF_PERFCTR_RC_LD_SPEC 0x0090
+#define ARMV8_IMPDEF_PERFCTR_RC_ST_SPEC 0x0091
+
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_RD 0x00A0
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WR 0x00A1
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_RD 0x00A2
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_WR 0x00A3
+
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_VICTIM 0x00A6
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_CLEAN 0x00A7
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_INVAL 0x00A8
+
+/*
+ * Per-CPU PMCR: config reg
+ */
+#define ARMV8_PMU_PMCR_E (1 << 0) /* Enable all counters */
+#define ARMV8_PMU_PMCR_P (1 << 1) /* Reset all counters */
+#define ARMV8_PMU_PMCR_C (1 << 2) /* Cycle counter reset */
+#define ARMV8_PMU_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */
+#define ARMV8_PMU_PMCR_X (1 << 4) /* Export to ETM */
+#define ARMV8_PMU_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/
+#define ARMV8_PMU_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */
+#define ARMV8_PMU_PMCR_LP (1 << 7) /* Long event counter enable */
+#define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */
+#define ARMV8_PMU_PMCR_N (0x1f << ARMV8_PMU_PMCR_N_SHIFT)
+#define ARMV8_PMU_PMCR_MASK 0xff /* Mask for writable bits */
+
+/*
+ * PMOVSR: counters overflow flag status reg
+ */
+#define ARMV8_PMU_OVSR_MASK 0xffffffff /* Mask for writable bits */
+#define ARMV8_PMU_OVERFLOWED_MASK ARMV8_PMU_OVSR_MASK
+
+/*
+ * PMXEVTYPER: Event selection reg
+ */
+#define ARMV8_PMU_EVTYPE_MASK 0xc800ffff /* Mask for writable bits */
+#define ARMV8_PMU_EVTYPE_EVENT 0xffff /* Mask for EVENT bits */
+
+/*
+ * Event filters for PMUv3
+ */
+#define ARMV8_PMU_EXCLUDE_EL1 (1U << 31)
+#define ARMV8_PMU_EXCLUDE_EL0 (1U << 30)
+#define ARMV8_PMU_INCLUDE_EL2 (1U << 27)
+
+/*
+ * PMUSERENR: user enable reg
+ */
+#define ARMV8_PMU_USERENR_MASK 0xf /* Mask for writable bits */
+#define ARMV8_PMU_USERENR_EN (1 << 0) /* PMU regs can be accessed at EL0 */
+#define ARMV8_PMU_USERENR_SW (1 << 1) /* PMSWINC can be written at EL0 */
+#define ARMV8_PMU_USERENR_CR (1 << 2) /* Cycle counter can be read at EL0 */
+#define ARMV8_PMU_USERENR_ER (1 << 3) /* Event counter can be read at EL0 */
+
+/* PMMIR_EL1.SLOTS mask */
+#define ARMV8_PMU_SLOTS_MASK 0xff
+
+#define ARMV8_PMU_BUS_SLOTS_SHIFT 8
+#define ARMV8_PMU_BUS_SLOTS_MASK 0xff
+#define ARMV8_PMU_BUS_WIDTH_SHIFT 16
+#define ARMV8_PMU_BUS_WIDTH_MASK 0xf
+
+/*
+ * This code is really good
+ */
+
+#define PMEVN_CASE(n, case_macro) \
+ case n: case_macro(n); break
+
+#define PMEVN_SWITCH(x, case_macro) \
+ do { \
+ switch (x) { \
+ PMEVN_CASE(0, case_macro); \
+ PMEVN_CASE(1, case_macro); \
+ PMEVN_CASE(2, case_macro); \
+ PMEVN_CASE(3, case_macro); \
+ PMEVN_CASE(4, case_macro); \
+ PMEVN_CASE(5, case_macro); \
+ PMEVN_CASE(6, case_macro); \
+ PMEVN_CASE(7, case_macro); \
+ PMEVN_CASE(8, case_macro); \
+ PMEVN_CASE(9, case_macro); \
+ PMEVN_CASE(10, case_macro); \
+ PMEVN_CASE(11, case_macro); \
+ PMEVN_CASE(12, case_macro); \
+ PMEVN_CASE(13, case_macro); \
+ PMEVN_CASE(14, case_macro); \
+ PMEVN_CASE(15, case_macro); \
+ PMEVN_CASE(16, case_macro); \
+ PMEVN_CASE(17, case_macro); \
+ PMEVN_CASE(18, case_macro); \
+ PMEVN_CASE(19, case_macro); \
+ PMEVN_CASE(20, case_macro); \
+ PMEVN_CASE(21, case_macro); \
+ PMEVN_CASE(22, case_macro); \
+ PMEVN_CASE(23, case_macro); \
+ PMEVN_CASE(24, case_macro); \
+ PMEVN_CASE(25, case_macro); \
+ PMEVN_CASE(26, case_macro); \
+ PMEVN_CASE(27, case_macro); \
+ PMEVN_CASE(28, case_macro); \
+ PMEVN_CASE(29, case_macro); \
+ PMEVN_CASE(30, case_macro); \
+ default: \
+ WARN(1, "Invalid PMEV* index\n"); \
+ assert(0); \
+ } \
+ } while (0)
+
+#endif
--
2.41.0.694.ge786442a9b-goog
* [PATCH v5 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test
2023-08-17 0:30 [PATCH v5 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
` (8 preceding siblings ...)
2023-08-17 0:30 ` [PATCH v5 09/12] tools: Import arm_pmuv3.h Raghavendra Rao Ananta
@ 2023-08-17 0:30 ` Raghavendra Rao Ananta
2023-09-15 21:00 ` Oliver Upton
2023-08-17 0:30 ` [PATCH v5 11/12] KVM: selftests: aarch64: vPMU register test for implemented counters Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 12/12] KVM: selftests: aarch64: vPMU register test for unimplemented counters Raghavendra Rao Ananta
11 siblings, 1 reply; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-08-17 0:30 UTC (permalink / raw)
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
linux-kernel, kvm
From: Reiji Watanabe <reijiw@google.com>
Introduce vpmu_counter_access test for arm64 platforms.
The test configures PMUv3 for a vCPU, sets PMCR_EL0.N for the vCPU,
and checks whether the guest can consistently see the same number of
PMU event counters (PMCR_EL0.N) that userspace sets.
This test case is run with each of the PMCR_EL0.N values from
0 to 31 (for PMCR_EL0.N values greater than the host value,
the test expects KVM_SET_ONE_REG for PMCR_EL0 to fail).
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
tools/testing/selftests/kvm/Makefile | 1 +
.../kvm/aarch64/vpmu_counter_access.c | 235 ++++++++++++++++++
2 files changed, 236 insertions(+)
create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index c692cc86e7da8..a1599e2b82e38 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -148,6 +148,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/smccc_filter
TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
+TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
TEST_GEN_PROGS_aarch64 += demand_paging_test
TEST_GEN_PROGS_aarch64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
new file mode 100644
index 0000000000000..d0afec07948ef
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -0,0 +1,235 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * vpmu_counter_access - Test vPMU event counter access
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
+ * This test checks if the guest can see the same number of the PMU event
+ * counters (PMCR_EL0.N) that userspace sets.
+ * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
+ */
+#include <kvm_util.h>
+#include <processor.h>
+#include <test_util.h>
+#include <vgic.h>
+#include <perf/arm_pmuv3.h>
+#include <linux/bitfield.h>
+
+/* The max number of the PMU event counters (excluding the cycle counter) */
+#define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
+
+struct vpmu_vm {
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+ int gic_fd;
+};
+
+static void guest_sync_handler(struct ex_regs *regs)
+{
+ uint64_t esr, ec;
+
+ esr = read_sysreg(esr_el1);
+ ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;
+ GUEST_ASSERT_3(0, regs->pc, esr, ec);
+}
+
+/*
+ * The guest is configured with PMUv3 with @expected_pmcr_n number of
+ * event counters.
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
+ */
+static void guest_code(uint64_t expected_pmcr_n)
+{
+ uint64_t pmcr, pmcr_n;
+
+ GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
+
+ pmcr = read_sysreg(pmcr_el0);
+ pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
+
+ /* Make sure that PMCR_EL0.N indicates the value userspace set */
+ GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
+
+ GUEST_DONE();
+}
+
+#define GICD_BASE_GPA 0x8000000ULL
+#define GICR_BASE_GPA 0x80A0000ULL
+
+/* Create a VM that has one vCPU with PMUv3 configured. */
+static struct vpmu_vm *create_vpmu_vm(void *guest_code)
+{
+ struct kvm_vcpu_init init;
+ uint8_t pmuver, ec;
+ uint64_t dfr0, irq = 23;
+ struct vpmu_vm *vpmu_vm;
+ struct kvm_device_attr irq_attr = {
+ .group = KVM_ARM_VCPU_PMU_V3_CTRL,
+ .attr = KVM_ARM_VCPU_PMU_V3_IRQ,
+ .addr = (uint64_t)&irq,
+ };
+ struct kvm_device_attr init_attr = {
+ .group = KVM_ARM_VCPU_PMU_V3_CTRL,
+ .attr = KVM_ARM_VCPU_PMU_V3_INIT,
+ };
+
+ vpmu_vm = calloc(1, sizeof(*vpmu_vm));
+ TEST_ASSERT(vpmu_vm, "Failed to allocate vpmu_vm");
+
+ vpmu_vm->vm = vm_create(1);
+ vm_init_descriptor_tables(vpmu_vm->vm);
+ for (ec = 0; ec < ESR_EC_NUM; ec++) {
+ vm_install_sync_handler(vpmu_vm->vm, VECTOR_SYNC_CURRENT, ec,
+ guest_sync_handler);
+ }
+
+ /* Create vCPU with PMUv3 */
+ vm_ioctl(vpmu_vm->vm, KVM_ARM_PREFERRED_TARGET, &init);
+ init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+ vpmu_vm->vcpu = aarch64_vcpu_add(vpmu_vm->vm, 0, &init, guest_code);
+ vcpu_init_descriptor_tables(vpmu_vm->vcpu);
+ vpmu_vm->gic_fd = vgic_v3_setup(vpmu_vm->vm, 1, 64,
+ GICD_BASE_GPA, GICR_BASE_GPA);
+
+ /* Make sure that PMUv3 support is indicated in the ID register */
+ vcpu_get_reg(vpmu_vm->vcpu,
+ KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
+ pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
+ TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
+ pmuver >= ID_AA64DFR0_PMUVER_8_0,
+ "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
+
+ /* Initialize vPMU */
+ vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
+ vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+
+ return vpmu_vm;
+}
+
+static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
+{
+ close(vpmu_vm->gic_fd);
+ kvm_vm_free(vpmu_vm->vm);
+ free(vpmu_vm);
+}
+
+static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
+{
+ struct ucall uc;
+
+ vcpu_args_set(vcpu, 1, pmcr_n);
+ vcpu_run(vcpu);
+ switch (get_ucall(vcpu, &uc)) {
+ case UCALL_ABORT:
+ REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
+ break;
+ case UCALL_DONE:
+ break;
+ default:
+ TEST_FAIL("Unknown ucall %lu", uc.cmd);
+ break;
+ }
+}
+
+/*
+ * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
+ * and run the test.
+ */
+static void run_test(uint64_t pmcr_n)
+{
+ struct vpmu_vm *vpmu_vm;
+ struct kvm_vcpu *vcpu;
+ uint64_t sp, pmcr, pmcr_orig;
+ struct kvm_vcpu_init init;
+
+ pr_debug("Test with pmcr_n %lu\n", pmcr_n);
+ vpmu_vm = create_vpmu_vm(guest_code);
+
+ vcpu = vpmu_vm->vcpu;
+
+ /* Save the initial sp to restore it later, to run the guest again */
+ vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
+
+ /* Update the PMCR_EL0.N with @pmcr_n */
+ vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
+ pmcr = pmcr_orig & ~ARMV8_PMU_PMCR_N;
+ pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
+ vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
+
+ run_vcpu(vcpu, pmcr_n);
+
+ /*
+ * Reset and re-initialize the vCPU, and run the guest code again to
+ * check if PMCR_EL0.N is preserved.
+ */
+ vm_ioctl(vpmu_vm->vm, KVM_ARM_PREFERRED_TARGET, &init);
+ init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+ aarch64_vcpu_setup(vcpu, &init);
+ vcpu_init_descriptor_tables(vcpu);
+ vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
+ vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
+
+ run_vcpu(vcpu, pmcr_n);
+
+ destroy_vpmu_vm(vpmu_vm);
+}
+
+/*
+ * Create a guest with one vCPU, and attempt to set the PMCR_EL0.N for
+ * the vCPU to @pmcr_n, which is larger than the host value.
+ * The attempt should fail as @pmcr_n is too big to set for the vCPU.
+ */
+static void run_error_test(uint64_t pmcr_n)
+{
+ struct vpmu_vm *vpmu_vm;
+ struct kvm_vcpu *vcpu;
+ int ret;
+ uint64_t pmcr, pmcr_orig;
+
+ pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
+ vpmu_vm = create_vpmu_vm(guest_code);
+ vcpu = vpmu_vm->vcpu;
+
+ /* Update the PMCR_EL0.N with @pmcr_n */
+ vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
+ pmcr = pmcr_orig & ~ARMV8_PMU_PMCR_N;
+ pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
+
+ /* This should fail as @pmcr_n is too big to set for the vCPU */
+ ret = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
+ TEST_ASSERT(ret, "Setting PMCR to 0x%lx (orig PMCR 0x%lx) didn't fail",
+ pmcr, pmcr_orig);
+
+ destroy_vpmu_vm(vpmu_vm);
+}
+
+/*
+ * Return the default number of implemented PMU event counters excluding
+ * the cycle counter (i.e. PMCR_EL0.N value) for the guest.
+ */
+static uint64_t get_pmcr_n_limit(void)
+{
+ struct vpmu_vm *vpmu_vm;
+ uint64_t pmcr;
+
+ vpmu_vm = create_vpmu_vm(guest_code);
+ vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+ destroy_vpmu_vm(vpmu_vm);
+ return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
+}
+
+int main(void)
+{
+ uint64_t i, pmcr_n;
+
+ TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+
+ pmcr_n = get_pmcr_n_limit();
+ for (i = 0; i <= pmcr_n; i++)
+ run_test(i);
+
+ for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
+ run_error_test(i);
+
+ return 0;
+}
--
2.41.0.694.ge786442a9b-goog
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v5 11/12] KVM: selftests: aarch64: vPMU register test for implemented counters
2023-08-17 0:30 [PATCH v5 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
` (9 preceding siblings ...)
2023-08-17 0:30 ` [PATCH v5 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test Raghavendra Rao Ananta
@ 2023-08-17 0:30 ` Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 12/12] KVM: selftests: aarch64: vPMU register test for unimplemented counters Raghavendra Rao Ananta
11 siblings, 0 replies; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-08-17 0:30 UTC (permalink / raw)
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
linux-kernel, kvm
From: Reiji Watanabe <reijiw@google.com>
Add a new test case to the vpmu_counter_access test to check if PMU
registers or their bits for implemented counters on the vCPU are
readable/writable as expected, and can be programmed to count events.
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
.../kvm/aarch64/vpmu_counter_access.c | 264 +++++++++++++++++-
1 file changed, 261 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index d0afec07948ef..3a2cf38bb415d 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -5,7 +5,8 @@
* Copyright (c) 2022 Google LLC.
*
* This test checks if the guest can see the same number of the PMU event
- * counters (PMCR_EL0.N) that userspace sets.
+ * counters (PMCR_EL0.N) that userspace sets, and if the guest can access
+ * those counters.
* This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
*/
#include <kvm_util.h>
@@ -24,6 +25,251 @@ struct vpmu_vm {
int gic_fd;
};
+/* Read PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
+static inline unsigned long read_sel_evcntr(int sel)
+{
+ write_sysreg(sel, pmselr_el0);
+ isb();
+ return read_sysreg(pmxevcntr_el0);
+}
+
+/* Write PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
+static inline void write_sel_evcntr(int sel, unsigned long val)
+{
+ write_sysreg(sel, pmselr_el0);
+ isb();
+ write_sysreg(val, pmxevcntr_el0);
+ isb();
+}
+
+/* Read PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */
+static inline unsigned long read_sel_evtyper(int sel)
+{
+ write_sysreg(sel, pmselr_el0);
+ isb();
+ return read_sysreg(pmxevtyper_el0);
+}
+
+/* Write PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */
+static inline void write_sel_evtyper(int sel, unsigned long val)
+{
+ write_sysreg(sel, pmselr_el0);
+ isb();
+ write_sysreg(val, pmxevtyper_el0);
+ isb();
+}
+
+static inline void enable_counter(int idx)
+{
+ uint64_t v = read_sysreg(pmcntenset_el0);
+
+ write_sysreg(BIT(idx) | v, pmcntenset_el0);
+ isb();
+}
+
+static inline void disable_counter(int idx)
+{
+	/*
+	 * PMCNTENCLR_EL0 is write-1-to-clear: write only the target bit so
+	 * the enable bits of the other counters are left untouched.
+	 */
+	write_sysreg(BIT(idx), pmcntenclr_el0);
+	isb();
+}
+
+static void pmu_disable_reset(void)
+{
+ uint64_t pmcr = read_sysreg(pmcr_el0);
+
+ /* Reset all counters, disabling them */
+ pmcr &= ~ARMV8_PMU_PMCR_E;
+ write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
+ isb();
+}
+
+#define RETURN_READ_PMEVCNTRN(n) \
+ return read_sysreg(pmevcntr##n##_el0)
+static unsigned long read_pmevcntrn(int n)
+{
+ PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
+ return 0;
+}
+
+#define WRITE_PMEVCNTRN(n) \
+ write_sysreg(val, pmevcntr##n##_el0)
+static void write_pmevcntrn(int n, unsigned long val)
+{
+ PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
+ isb();
+}
+
+#define READ_PMEVTYPERN(n) \
+ return read_sysreg(pmevtyper##n##_el0)
+static unsigned long read_pmevtypern(int n)
+{
+ PMEVN_SWITCH(n, READ_PMEVTYPERN);
+ return 0;
+}
+
+#define WRITE_PMEVTYPERN(n) \
+ write_sysreg(val, pmevtyper##n##_el0)
+static void write_pmevtypern(int n, unsigned long val)
+{
+ PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
+ isb();
+}
+
+/*
+ * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}<n>_EL0
+ * accessors that test cases will use. Each of the accessors will
+ * either directly reads/writes PMEVT{CNTR,TYPER}<n>_EL0
+ * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through
+ * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
+ *
+ * This is used to test that combinations of those accessors provide
+ * consistent behavior.
+ */
+struct pmc_accessor {
+ /* A function to be used to read PMEVTCNTR<n>_EL0 */
+ unsigned long (*read_cntr)(int idx);
+ /* A function to be used to write PMEVTCNTR<n>_EL0 */
+ void (*write_cntr)(int idx, unsigned long val);
+ /* A function to be used to read PMEVTYPER<n>_EL0 */
+ unsigned long (*read_typer)(int idx);
+ /* A function to be used to write PMEVTYPER<n>_EL0 */
+ void (*write_typer)(int idx, unsigned long val);
+};
+
+struct pmc_accessor pmc_accessors[] = {
+ /* test with all direct accesses */
+ { read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern },
+ /* test with all indirect accesses */
+ { read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper },
+ /* read with direct accesses, and write with indirect accesses */
+ { read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper },
+ /* read with indirect accesses, and write with direct accesses */
+ { read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
+};
+
+/*
+ * Convert a pointer of pmc_accessor to an index in pmc_accessors[],
+ * assuming that the pointer is one of the entries in pmc_accessors[].
+ */
+#define PMC_ACC_TO_IDX(acc) (acc - &pmc_accessors[0])
+
+#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected) \
+{ \
+ uint64_t _tval = read_sysreg(regname); \
+ \
+ if (set_expected) \
+ GUEST_ASSERT_3((_tval & mask), _tval, mask, set_expected); \
+ else \
+ GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
+}
+
+/*
+ * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
+ * are set or cleared as specified in @set_expected.
+ */
+static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected)
+{
+ GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected);
+ GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected);
+ GUEST_ASSERT_BITMAP_REG(pmintenset_el1, mask, set_expected);
+ GUEST_ASSERT_BITMAP_REG(pmintenclr_el1, mask, set_expected);
+ GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected);
+ GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected);
+}
+
+/*
+ * Check if the bit in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers corresponding
+ * to the specified counter (@pmc_idx) can be read/written as expected.
+ * When @set_op is true, it tries to set the bit for the counter in
+ * those registers by writing the SET registers (the bit won't be set
+ * if the counter is not implemented though).
+ * Otherwise, it tries to clear the bits in the registers by writing
+ * the CLR registers.
+ * Then, it checks if the values indicated in the registers are as expected.
+ */
+static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
+{
+ uint64_t pmcr_n, test_bit = BIT(pmc_idx);
+ bool set_expected = false;
+
+ if (set_op) {
+ write_sysreg(test_bit, pmcntenset_el0);
+ write_sysreg(test_bit, pmintenset_el1);
+ write_sysreg(test_bit, pmovsset_el0);
+
+ /* The bit will be set only if the counter is implemented */
+ pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
+ set_expected = pmc_idx < pmcr_n;
+ } else {
+ write_sysreg(test_bit, pmcntenclr_el0);
+ write_sysreg(test_bit, pmintenclr_el1);
+ write_sysreg(test_bit, pmovsclr_el0);
+ }
+ check_bitmap_pmu_regs(test_bit, set_expected);
+}
+
+/*
+ * Tests for reading/writing registers for the (implemented) event counter
+ * specified by @pmc_idx.
+ */
+static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
+{
+ uint64_t write_data, read_data;
+
+ /* Disable all PMCs and reset all PMCs to zero. */
+ pmu_disable_reset();
+
+ /*
+ * Tests for reading/writing {PMCNTEN,PMINTEN,PMOVS}{SET,CLR}_EL1.
+ */
+
+ /* Make sure that the bits in those registers are initially cleared */
+ test_bitmap_pmu_regs(pmc_idx, false);
+ /* Test if setting the bit in those registers works */
+ test_bitmap_pmu_regs(pmc_idx, true);
+ /* Test if clearing the bit in those registers works */
+ test_bitmap_pmu_regs(pmc_idx, false);
+
+ /*
+ * Tests for reading/writing the event type register.
+ */
+
+ read_data = acc->read_typer(pmc_idx);
+ /*
+ * Set the event type register to an arbitrary value just for testing
+ * of reading/writing the register.
+ * The Arm ARM says that for events 0x0000 to 0x003F, the value read
+ * back from the PMEVTYPER<n>_EL0.evtCount field is the value written
+ * to the field, even when the specified event is not supported.
+ */
+ write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+ acc->write_typer(pmc_idx, write_data);
+ read_data = acc->read_typer(pmc_idx);
+ GUEST_ASSERT_4(read_data == write_data,
+ pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data);
+
+ /*
+ * Tests for reading/writing the event count register.
+ */
+
+ read_data = acc->read_cntr(pmc_idx);
+
+ /* The count value must be 0, as the counter has not been used since reset */
+ GUEST_ASSERT_3(read_data == 0, pmc_idx, PMC_ACC_TO_IDX(acc), read_data);
+
+ write_data = read_data + pmc_idx + 0x12345;
+ acc->write_cntr(pmc_idx, write_data);
+ read_data = acc->read_cntr(pmc_idx);
+ GUEST_ASSERT_4(read_data == write_data,
+ pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data);
+}
+
static void guest_sync_handler(struct ex_regs *regs)
{
uint64_t esr, ec;
@@ -36,11 +282,14 @@ static void guest_sync_handler(struct ex_regs *regs)
/*
* The guest is configured with PMUv3 with @expected_pmcr_n number of
* event counters.
- * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
+ * if reading/writing PMU registers for implemented counters can work
+ * as expected.
*/
static void guest_code(uint64_t expected_pmcr_n)
{
uint64_t pmcr, pmcr_n;
+ int i, pmc;
GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
@@ -50,6 +299,15 @@ static void guest_code(uint64_t expected_pmcr_n)
/* Make sure that PMCR_EL0.N indicates the value userspace set */
GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
+ /*
+ * Tests for reading/writing PMU registers for implemented counters.
+ * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
+ */
+ for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+ for (pmc = 0; pmc < pmcr_n; pmc++)
+ test_access_pmc_regs(&pmc_accessors[i], pmc);
+ }
+
GUEST_DONE();
}
@@ -121,7 +379,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
vcpu_run(vcpu);
switch (get_ucall(vcpu, &uc)) {
case UCALL_ABORT:
- REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
+ REPORT_GUEST_ASSERT_4(uc, "values:%#lx %#lx %#lx %#lx");
break;
case UCALL_DONE:
break;
--
2.41.0.694.ge786442a9b-goog
* [PATCH v5 12/12] KVM: selftests: aarch64: vPMU register test for unimplemented counters
2023-08-17 0:30 [PATCH v5 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
` (10 preceding siblings ...)
2023-08-17 0:30 ` [PATCH v5 11/12] KVM: selftests: aarch64: vPMU register test for implemented counters Raghavendra Rao Ananta
@ 2023-08-17 0:30 ` Raghavendra Rao Ananta
11 siblings, 0 replies; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-08-17 0:30 UTC (permalink / raw)
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe,
Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel, kvmarm,
linux-kernel, kvm
From: Reiji Watanabe <reijiw@google.com>
Add a new test case to the vpmu_counter_access test to check
if PMU registers or their bits for unimplemented counters are not
accessible or are RAZ, as expected.
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
.../kvm/aarch64/vpmu_counter_access.c | 93 +++++++++++++++++--
.../selftests/kvm/include/aarch64/processor.h | 1 +
2 files changed, 85 insertions(+), 9 deletions(-)
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index 3a2cf38bb415d..61fd1420e3cc1 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -5,8 +5,8 @@
* Copyright (c) 2022 Google LLC.
*
* This test checks if the guest can see the same number of the PMU event
- * counters (PMCR_EL0.N) that userspace sets, and if the guest can access
- * those counters.
+ * counters (PMCR_EL0.N) that userspace sets, if the guest can access
+ * those counters, and if the guest cannot access any other counters.
* This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
*/
#include <kvm_util.h>
@@ -118,9 +118,9 @@ static void write_pmevtypern(int n, unsigned long val)
}
/*
- * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}<n>_EL0
+ * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}<n>_EL0
* accessors that test cases will use. Each of the accessors will
- * either directly reads/writes PMEVT{CNTR,TYPER}<n>_EL0
+ * either directly reads/writes PMEV{CNTR,TYPER}<n>_EL0
* (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through
* PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
*
@@ -270,25 +270,83 @@ static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data);
}
+#define INVALID_EC (-1ul)
+uint64_t expected_ec = INVALID_EC;
+uint64_t op_end_addr;
+
static void guest_sync_handler(struct ex_regs *regs)
{
uint64_t esr, ec;
esr = read_sysreg(esr_el1);
ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;
- GUEST_ASSERT_3(0, regs->pc, esr, ec);
+ GUEST_ASSERT_4(op_end_addr && (expected_ec == ec),
+ regs->pc, esr, ec, expected_ec);
+
+ /* Will go back to op_end_addr after the handler exits */
+ regs->pc = op_end_addr;
+
+ /*
+ * Clear op_end_addr, and set expected_ec to INVALID_EC, as a sign
+ * that an exception has occurred.
+ */
+ op_end_addr = 0;
+ expected_ec = INVALID_EC;
+}
+
+/*
+ * Run the given operation that should trigger an exception with the
+ * given exception class. The exception handler (guest_sync_handler)
+ * will reset op_end_addr to 0 and expected_ec to INVALID_EC, and
+ * execution will resume at the instruction at @done_label.
+ * The @done_label must be a unique label in this test program.
+ */
+#define TEST_EXCEPTION(ec, ops, done_label) \
+{ \
+ extern int done_label; \
+ \
+ WRITE_ONCE(op_end_addr, (uint64_t)&done_label); \
+ GUEST_ASSERT(ec != INVALID_EC); \
+ WRITE_ONCE(expected_ec, ec); \
+ dsb(ish); \
+ ops; \
+ asm volatile(#done_label":"); \
+ GUEST_ASSERT(!op_end_addr); \
+ GUEST_ASSERT(expected_ec == INVALID_EC); \
+}
+
+/*
+ * Tests for reading/writing registers for the unimplemented event counter
+ * specified by @pmc_idx (>= PMCR_EL0.N).
+ */
+static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
+{
+ /*
+ * Reading/writing the event count/type registers should cause
+ * an UNDEFINED exception.
+ */
+ TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_cntr(pmc_idx), inv_rd_cntr);
+ TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_cntr(pmc_idx, 0), inv_wr_cntr);
+ TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_typer(pmc_idx), inv_rd_typer);
+ TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_typer(pmc_idx, 0), inv_wr_typer);
+ /*
+ * The bit corresponding to the (unimplemented) counter in
+ * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers should be RAZ.
+ */
+ test_bitmap_pmu_regs(pmc_idx, true);
+ test_bitmap_pmu_regs(pmc_idx, false);
}
/*
* The guest is configured with PMUv3 with @expected_pmcr_n number of
* event counters.
* Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
- * if reading/writing PMU registers for implemented counters can work
- * as expected.
+ * if reading/writing PMU registers for implemented or unimplemented
+ * counters can work as expected.
*/
static void guest_code(uint64_t expected_pmcr_n)
{
- uint64_t pmcr, pmcr_n;
+ uint64_t pmcr, pmcr_n, unimp_mask;
int i, pmc;
GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
@@ -299,15 +357,32 @@ static void guest_code(uint64_t expected_pmcr_n)
/* Make sure that PMCR_EL0.N indicates the value userspace set */
GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
+ /*
+ * Make sure that (RAZ) bits corresponding to unimplemented event
+ * counters in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers are reset
+ * to zero.
+ * (NOTE: bits for implemented event counters are reset to UNKNOWN)
+ */
+ unimp_mask = GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n);
+ check_bitmap_pmu_regs(unimp_mask, false);
+
/*
* Tests for reading/writing PMU registers for implemented counters.
- * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
+ * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
*/
for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
for (pmc = 0; pmc < pmcr_n; pmc++)
test_access_pmc_regs(&pmc_accessors[i], pmc);
}
+ /*
+ * Tests for reading/writing PMU registers for unimplemented counters.
+ * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
+ */
+ for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+ for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++)
+ test_access_invalid_pmc_regs(&pmc_accessors[i], pmc);
+ }
+
GUEST_DONE();
}
diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index cb537253a6b9c..c42d683102c7a 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -104,6 +104,7 @@ enum {
#define ESR_EC_SHIFT 26
#define ESR_EC_MASK (ESR_EC_NUM - 1)
+#define ESR_EC_UNKNOWN 0x0
#define ESR_EC_SVC64 0x15
#define ESR_EC_IABT 0x21
#define ESR_EC_DABT 0x25
--
2.41.0.694.ge786442a9b-goog
* Re: [PATCH v5 02/12] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset
2023-08-17 0:30 ` [PATCH v5 02/12] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset Raghavendra Rao Ananta
@ 2023-08-17 5:03 ` kernel test robot
2023-08-17 7:54 ` kernel test robot
2023-09-15 19:33 ` Oliver Upton
2 siblings, 0 replies; 41+ messages in thread
From: kernel test robot @ 2023-08-17 5:03 UTC (permalink / raw)
To: Raghavendra Rao Ananta, Oliver Upton, Marc Zyngier
Cc: llvm, oe-kbuild-all, Alexandru Elisei, James Morse,
Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
Jing Zhang, Reiji Watanabe, Colton Lewis, Raghavendra Rao Anata,
linux-arm-kernel, kvmarm, linux-kernel, kvm
Hi Raghavendra,
kernel test robot noticed the following build warnings:
[auto build test WARNING on 2ccdd1b13c591d306f0401d98dedc4bdcd02b421]
url: https://github.com/intel-lab-lkp/linux/commits/Raghavendra-Rao-Ananta/KVM-arm64-PMU-Introduce-a-helper-to-set-the-guest-s-PMU/20230817-083353
base: 2ccdd1b13c591d306f0401d98dedc4bdcd02b421
patch link: https://lore.kernel.org/r/20230817003029.3073210-3-rananta%40google.com
patch subject: [PATCH v5 02/12] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset
config: arm64-randconfig-r032-20230817 (https://download.01.org/0day-ci/archive/20230817/202308171212.KW8LnRRC-lkp@intel.com/config)
compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project.git 4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
reproduce: (https://download.01.org/0day-ci/archive/20230817/202308171212.KW8LnRRC-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202308171212.KW8LnRRC-lkp@intel.com/
All warnings (new ones prefixed by >>):
In file included from arch/arm64/kernel/asm-offsets.c:16:
In file included from include/linux/kvm_host.h:45:
In file included from arch/arm64/include/asm/kvm_host.h:37:
>> include/kvm/arm_pmu.h:176:62: warning: declaration of 'struct arm_pmu' will not be visible outside of this function [-Wvisibility]
176 | static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
| ^
1 warning generated.
--
In file included from arch/arm64/kernel/asm-offsets.c:16:
In file included from include/linux/kvm_host.h:45:
In file included from arch/arm64/include/asm/kvm_host.h:37:
>> include/kvm/arm_pmu.h:176:62: warning: declaration of 'struct arm_pmu' will not be visible outside of this function [-Wvisibility]
176 | static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
| ^
1 warning generated.
vim +176 include/kvm/arm_pmu.h
175
> 176 static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
177 {
178 return -ENODEV;
179 }
180
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v5 05/12] KVM: arm64: PMU: Simplify extracting PMCR_EL0.N
2023-08-17 0:30 ` [PATCH v5 05/12] KVM: arm64: PMU: Simplify extracting PMCR_EL0.N Raghavendra Rao Ananta
@ 2023-08-17 6:38 ` kernel test robot
2023-09-15 19:56 ` Oliver Upton
1 sibling, 0 replies; 41+ messages in thread
From: kernel test robot @ 2023-08-17 6:38 UTC (permalink / raw)
To: Raghavendra Rao Ananta, Oliver Upton, Marc Zyngier
Cc: llvm, oe-kbuild-all, Alexandru Elisei, James Morse,
Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang,
Jing Zhang, Reiji Watanabe, Colton Lewis, Raghavendra Rao Anata,
linux-arm-kernel, kvmarm, linux-kernel, kvm
Hi Raghavendra,
kernel test robot noticed the following build errors:
[auto build test ERROR on 2ccdd1b13c591d306f0401d98dedc4bdcd02b421]
url: https://github.com/intel-lab-lkp/linux/commits/Raghavendra-Rao-Ananta/KVM-arm64-PMU-Introduce-a-helper-to-set-the-guest-s-PMU/20230817-083353
base: 2ccdd1b13c591d306f0401d98dedc4bdcd02b421
patch link: https://lore.kernel.org/r/20230817003029.3073210-6-rananta%40google.com
patch subject: [PATCH v5 05/12] KVM: arm64: PMU: Simplify extracting PMCR_EL0.N
config: arm-randconfig-r046-20230817 (https://download.01.org/0day-ci/archive/20230817/202308171444.Q5rfJubF-lkp@intel.com/config)
compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project.git 4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
reproduce: (https://download.01.org/0day-ci/archive/20230817/202308171444.Q5rfJubF-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202308171444.Q5rfJubF-lkp@intel.com/
All errors (new ones prefixed by >>):
| ^~~~~~
drivers/perf/arm_pmuv3.c:141:2: note: previous initialization is here
141 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:45:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
45 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:37:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
37 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
drivers/perf/arm_pmuv3.c:148:44: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
148 | [C(DTLB)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:133:44: note: expanded from macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD'
133 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD 0x004E
| ^~~~~~
drivers/perf/arm_pmuv3.c:141:2: note: previous initialization is here
141 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:45:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
45 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:37:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
37 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
drivers/perf/arm_pmuv3.c:149:45: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
149 | [C(DTLB)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:134:44: note: expanded from macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR'
134 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR 0x004F
| ^~~~~~
drivers/perf/arm_pmuv3.c:141:2: note: previous initialization is here
141 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:45:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
45 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:37:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
37 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
drivers/perf/arm_pmuv3.c:150:42: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
150 | [C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:131:50: note: expanded from macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD'
131 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD 0x004C
| ^~~~~~
drivers/perf/arm_pmuv3.c:141:2: note: previous initialization is here
141 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:45:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
45 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:37:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
37 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
drivers/perf/arm_pmuv3.c:151:43: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
151 | [C(DTLB)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:132:50: note: expanded from macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR'
132 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR 0x004D
| ^~~~~~
drivers/perf/arm_pmuv3.c:141:2: note: previous initialization is here
141 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:45:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
45 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:37:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
37 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
drivers/perf/arm_pmuv3.c:153:44: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
153 | [C(NODE)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:148:46: note: expanded from macro 'ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD'
148 | #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD 0x0060
| ^~~~~~
drivers/perf/arm_pmuv3.c:141:2: note: previous initialization is here
141 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:45:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
45 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:37:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
37 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
drivers/perf/arm_pmuv3.c:154:45: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
154 | [C(NODE)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:149:46: note: expanded from macro 'ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR'
149 | #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR 0x0061
| ^~~~~~
drivers/perf/arm_pmuv3.c:141:2: note: previous initialization is here
141 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:45:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
45 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:37:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
37 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
>> drivers/perf/arm_pmuv3.c:1131:24: error: call to undeclared function 'FIELD_GET'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
1131 | cpu_pmu->num_events = FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read());
| ^
55 warnings and 1 error generated.
vim +/FIELD_GET +1131 drivers/perf/arm_pmuv3.c
1114
1115 static void __armv8pmu_probe_pmu(void *info)
1116 {
1117 struct armv8pmu_probe_info *probe = info;
1118 struct arm_pmu *cpu_pmu = probe->pmu;
1119 u64 pmceid_raw[2];
1120 u32 pmceid[2];
1121 int pmuver;
1122
1123 pmuver = read_pmuver();
1124 if (!pmuv3_implemented(pmuver))
1125 return;
1126
1127 cpu_pmu->pmuver = pmuver;
1128 probe->present = true;
1129
1130 /* Read the nb of CNTx counters supported from PMNC */
> 1131 cpu_pmu->num_events = FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read());
1132
1133 /* Add the CPU cycles counter */
1134 cpu_pmu->num_events += 1;
1135
1136 pmceid[0] = pmceid_raw[0] = read_pmceid0();
1137 pmceid[1] = pmceid_raw[1] = read_pmceid1();
1138
1139 bitmap_from_arr32(cpu_pmu->pmceid_bitmap,
1140 pmceid, ARMV8_PMUV3_MAX_COMMON_EVENTS);
1141
1142 pmceid[0] = pmceid_raw[0] >> 32;
1143 pmceid[1] = pmceid_raw[1] >> 32;
1144
1145 bitmap_from_arr32(cpu_pmu->pmceid_ext_bitmap,
1146 pmceid, ARMV8_PMUV3_MAX_COMMON_EVENTS);
1147
1148 /* store PMMIR register for sysfs */
1149 if (is_pmuv3p4(pmuver) && (pmceid_raw[1] & BIT(31)))
1150 cpu_pmu->reg_pmmir = read_pmmir();
1151 else
1152 cpu_pmu->reg_pmmir = 0;
1153 }
1154
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v5 02/12] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset
2023-08-17 0:30 ` [PATCH v5 02/12] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset Raghavendra Rao Ananta
2023-08-17 5:03 ` kernel test robot
@ 2023-08-17 7:54 ` kernel test robot
2023-09-15 19:33 ` Oliver Upton
2 siblings, 0 replies; 41+ messages in thread
From: kernel test robot @ 2023-08-17 7:54 UTC (permalink / raw)
To: Raghavendra Rao Ananta, Oliver Upton, Marc Zyngier
Cc: oe-kbuild-all, Alexandru Elisei, James Morse, Suzuki K Poulose,
Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
Reiji Watanabe, Colton Lewis, Raghavendra Rao Anata,
linux-arm-kernel, kvmarm, linux-kernel, kvm
Hi Raghavendra,
kernel test robot noticed the following build warnings:
[auto build test WARNING on 2ccdd1b13c591d306f0401d98dedc4bdcd02b421]
url: https://github.com/intel-lab-lkp/linux/commits/Raghavendra-Rao-Ananta/KVM-arm64-PMU-Introduce-a-helper-to-set-the-guest-s-PMU/20230817-083353
base: 2ccdd1b13c591d306f0401d98dedc4bdcd02b421
patch link: https://lore.kernel.org/r/20230817003029.3073210-3-rananta%40google.com
patch subject: [PATCH v5 02/12] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset
config: arm64-randconfig-r024-20230817 (https://download.01.org/0day-ci/archive/20230817/202308171559.K5QeXXZk-lkp@intel.com/config)
compiler: aarch64-linux-gcc (GCC) 12.3.0
reproduce: (https://download.01.org/0day-ci/archive/20230817/202308171559.K5QeXXZk-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202308171559.K5QeXXZk-lkp@intel.com/
All warnings (new ones prefixed by >>):
In file included from arch/arm64/include/asm/kvm_host.h:37,
from include/linux/kvm_host.h:45,
from arch/arm64/kernel/asm-offsets.c:16:
>> include/kvm/arm_pmu.h:176:62: warning: 'struct arm_pmu' declared inside parameter list will not be visible outside of this definition or declaration
176 | static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
| ^~~~~~~
--
In file included from arch/arm64/include/asm/kvm_host.h:37,
from include/linux/kvm_host.h:45,
from arch/arm64/kernel/asm-offsets.c:16:
>> include/kvm/arm_pmu.h:176:62: warning: 'struct arm_pmu' declared inside parameter list will not be visible outside of this definition or declaration
176 | static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
| ^~~~~~~
vim +176 include/kvm/arm_pmu.h
175
> 176 static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
177 {
178 return -ENODEV;
179 }
180
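The usual fix for this class of warning is to give `struct arm_pmu` a file-scope forward declaration before the stub, so the pointer parameter refers to a single well-known type rather than one scoped to the prototype. A hedged standalone sketch (the stub mirrors the kernel code; the rest is scaffolding for illustration):

```c
#include <assert.h>
#include <errno.h>

struct kvm;      /* forward declarations put the pointer types at */
struct arm_pmu;  /* file scope, silencing the GCC warning above   */

/* Stub used when KVM's PMU support is compiled out; returns -ENODEV
 * as in the !CONFIG_HW_PERF_EVENTS branch of include/kvm/arm_pmu.h. */
static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
{
	return -ENODEV;
}
```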
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
2023-08-17 0:30 ` [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest Raghavendra Rao Ananta
@ 2023-08-21 12:12 ` Shaoqin Huang
2023-08-21 23:28 ` Raghavendra Rao Ananta
2023-08-22 10:05 ` Shaoqin Huang
2023-09-15 20:53 ` Oliver Upton
2 siblings, 1 reply; 41+ messages in thread
From: Shaoqin Huang @ 2023-08-21 12:12 UTC (permalink / raw)
To: Raghavendra Rao Ananta, Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
Zenghui Yu, Jing Zhang, Reiji Watanabe, Colton Lewis,
linux-arm-kernel, kvmarm, linux-kernel, kvm
Hi Raghavendra,
On 8/17/23 08:30, Raghavendra Rao Ananta wrote:
> From: Reiji Watanabe <reijiw@google.com>
>
> KVM does not yet support userspace modifying PMCR_EL0.N (with
> the previous patch, KVM ignores what is written by userspace).
> Add support for userspace limiting PMCR_EL0.N.
>
> Disallow userspace from setting PMCR_EL0.N to a value greater
> than the host value (KVM_SET_ONE_REG will fail), as KVM doesn't
> support more event counters than the host HW implements.
> Although this is an ABI change, it only affects userspace
> setting PMCR_EL0.N to a larger value than the host's.
> As accesses to unadvertised event counter indices are CONSTRAINED
> UNPREDICTABLE behavior, and PMCR_EL0.N was reset to the host value
> on every vCPU reset before this series, I can't think of any
> use case where userspace would do that.
>
> Also, ignore writes to read-only bits that are cleared on vCPU reset,
> and RES{0,1} bits (including writable bits that KVM doesn't support
> yet), as those bits shouldn't be modified (at least with
> the current KVM).
>
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
> arch/arm64/include/asm/kvm_host.h | 3 ++
> arch/arm64/kvm/pmu-emul.c | 1 +
> arch/arm64/kvm/sys_regs.c | 49 +++++++++++++++++++++++++++++--
> 3 files changed, 51 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 0f2dbbe8f6a7e..c15ec365283d1 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -259,6 +259,9 @@ struct kvm_arch {
> /* PMCR_EL0.N value for the guest */
> u8 pmcr_n;
>
> + /* Limit value of PMCR_EL0.N for the guest */
> + u8 pmcr_n_limit;
> +
> /* Hypercall features firmware registers' descriptor */
> struct kvm_smccc_features smccc_feat;
> struct maple_tree smccc_filter;
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index ce7de6bbdc967..39ad56a71ad20 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -896,6 +896,7 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> * while the latter does not.
> */
> kvm->arch.pmcr_n = arm_pmu->num_events - 1;
> + kvm->arch.pmcr_n_limit = arm_pmu->num_events - 1;
>
> return 0;
> }
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 2075901356c5b..c01d62afa7db4 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1086,6 +1086,51 @@ static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> return 0;
> }
>
> +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> + u64 val)
> +{
> + struct kvm *kvm = vcpu->kvm;
> + u64 new_n, mutable_mask;
> + int ret = 0;
> +
> + new_n = FIELD_GET(ARMV8_PMU_PMCR_N, val);
> +
> + mutex_lock(&kvm->arch.config_lock);
> + if (unlikely(new_n != kvm->arch.pmcr_n)) {
> + /*
> + * The vCPU can't have more counters than the PMU
> + * hardware implements.
> + */
> + if (new_n <= kvm->arch.pmcr_n_limit)
> + kvm->arch.pmcr_n = new_n;
> + else
> + ret = -EINVAL;
> + }
Since we have already set the default value of pmcr_n, if we want to set a
new pmcr_n, shouldn't it be a different value?
So how about changing the check to:
if (likely(new_n <= kvm->arch.pmcr_n_limit))
	kvm->arch.pmcr_n = new_n;
else
	ret = -EINVAL;
What do you think?
> + mutex_unlock(&kvm->arch.config_lock);
> + if (ret)
> + return ret;
> +
> + /*
> + * Ignore writes to RES0 bits, read only bits that are cleared on
> + * vCPU reset, and writable bits that KVM doesn't support yet.
> + * (i.e. only PMCR.N and bits [7:0] are mutable from userspace)
> + * The LP bit is RES0 when FEAT_PMUv3p5 is not supported on the vCPU.
> + * But, we leave the bit as it is here, as the vCPU's PMUver might
> + * be changed later (NOTE: the bit will be cleared on first vCPU run
> + * if necessary).
> + */
> + mutable_mask = (ARMV8_PMU_PMCR_MASK | ARMV8_PMU_PMCR_N);
> + val &= mutable_mask;
> + val |= (__vcpu_sys_reg(vcpu, r->reg) & ~mutable_mask);
> +
> + /* The LC bit is RES1 when AArch32 is not supported */
> + if (!kvm_supports_32bit_el0())
> + val |= ARMV8_PMU_PMCR_LC;
> +
> + __vcpu_sys_reg(vcpu, r->reg) = val;
> + return 0;
> +}
> +
> /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
> #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
> { SYS_DESC(SYS_DBGBVRn_EL1(n)), \
> @@ -2147,8 +2192,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> { SYS_DESC(SYS_CTR_EL0), access_ctr },
> { SYS_DESC(SYS_SVCR), undef_access },
>
> - { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr,
> - .reset = reset_pmcr, .reg = PMCR_EL0, .get_user = get_pmcr },
> + { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr,
> + .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr },
A little confusing: since PMU_SYS_REG() sets the default visibility to
pmu_visibility(), which can return REG_HIDDEN, the set_user call for PMCR
would be blocked, so how can it be set?
Maybe I'm missing some details.
Thanks,
Shaoqin
> { PMU_SYS_REG(PMCNTENSET_EL0),
> .access = access_pmcnten, .reg = PMCNTENSET_EL0 },
> { PMU_SYS_REG(PMCNTENCLR_EL0),
--
Shaoqin
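The read-modify-write in set_pmcr() above uses a standard bit-preservation idiom: keep only the userspace-mutable bits of the written value and carry everything else over from the current register. A standalone sketch of just that idiom (mask values in the test are illustrative, not the ARMV8_PMU_PMCR_* definitions):

```c
#include <assert.h>
#include <stdint.h>

/* The two statements "val &= mutable_mask; val |= (cur & ~mutable_mask);"
 * from set_pmcr() folded into one helper: mutable bits come from the
 * written value, all other bits are preserved from the current register. */
static uint64_t apply_mutable_mask(uint64_t cur, uint64_t val, uint64_t mutable_mask)
{
	return (val & mutable_mask) | (cur & ~mutable_mask);
}
```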
* Re: [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
2023-08-21 12:12 ` Shaoqin Huang
@ 2023-08-21 23:28 ` Raghavendra Rao Ananta
2023-08-22 3:26 ` Shaoqin Huang
0 siblings, 1 reply; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-08-21 23:28 UTC (permalink / raw)
To: Shaoqin Huang
Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
Hi Shaoqin,
On Mon, Aug 21, 2023 at 5:12 AM Shaoqin Huang <shahuang@redhat.com> wrote:
>
> Hi Raghavendra,
>
> On 8/17/23 08:30, Raghavendra Rao Ananta wrote:
> > From: Reiji Watanabe <reijiw@google.com>
> >
> > KVM does not yet support userspace modifying PMCR_EL0.N (with
> > the previous patch, KVM ignores what is written by userspace).
> > Add support for userspace limiting PMCR_EL0.N.
> >
> > Disallow userspace from setting PMCR_EL0.N to a value greater
> > than the host value (KVM_SET_ONE_REG will fail), as KVM doesn't
> > support more event counters than the host HW implements.
> > Although this is an ABI change, it only affects userspace
> > setting PMCR_EL0.N to a larger value than the host's.
> > As accesses to unadvertised event counter indices are CONSTRAINED
> > UNPREDICTABLE behavior, and PMCR_EL0.N was reset to the host value
> > on every vCPU reset before this series, I can't think of any
> > use case where userspace would do that.
> >
> > Also, ignore writes to read-only bits that are cleared on vCPU reset,
> > and RES{0,1} bits (including writable bits that KVM doesn't support
> > yet), as those bits shouldn't be modified (at least with
> > the current KVM).
> >
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> > arch/arm64/include/asm/kvm_host.h | 3 ++
> > arch/arm64/kvm/pmu-emul.c | 1 +
> > arch/arm64/kvm/sys_regs.c | 49 +++++++++++++++++++++++++++++--
> > 3 files changed, 51 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 0f2dbbe8f6a7e..c15ec365283d1 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -259,6 +259,9 @@ struct kvm_arch {
> > /* PMCR_EL0.N value for the guest */
> > u8 pmcr_n;
> >
> > + /* Limit value of PMCR_EL0.N for the guest */
> > + u8 pmcr_n_limit;
> > +
> > /* Hypercall features firmware registers' descriptor */
> > struct kvm_smccc_features smccc_feat;
> > struct maple_tree smccc_filter;
> > diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> > index ce7de6bbdc967..39ad56a71ad20 100644
> > --- a/arch/arm64/kvm/pmu-emul.c
> > +++ b/arch/arm64/kvm/pmu-emul.c
> > @@ -896,6 +896,7 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> > * while the latter does not.
> > */
> > kvm->arch.pmcr_n = arm_pmu->num_events - 1;
> > + kvm->arch.pmcr_n_limit = arm_pmu->num_events - 1;
> >
> > return 0;
> > }
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 2075901356c5b..c01d62afa7db4 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -1086,6 +1086,51 @@ static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> > return 0;
> > }
> >
> > +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> > + u64 val)
> > +{
> > + struct kvm *kvm = vcpu->kvm;
> > + u64 new_n, mutable_mask;
> > + int ret = 0;
> > +
> > + new_n = FIELD_GET(ARMV8_PMU_PMCR_N, val);
> > +
> > + mutex_lock(&kvm->arch.config_lock);
> > + if (unlikely(new_n != kvm->arch.pmcr_n)) {
> > + /*
> > + * The vCPU can't have more counters than the PMU
> > + * hardware implements.
> > + */
> > + if (new_n <= kvm->arch.pmcr_n_limit)
> > + kvm->arch.pmcr_n = new_n;
> > + else
> > + ret = -EINVAL;
> > + }
>
> Since we have already set the default value of pmcr_n, if we want to set a
> new pmcr_n, shouldn't it be a different value?
>
> So how about changing the check to:
>
> if (likely(new_n <= kvm->arch.pmcr_n_limit))
> 	kvm->arch.pmcr_n = new_n;
> else
> 	ret = -EINVAL;
>
> What do you think?
>
Sorry, I guess I didn't fully understand your suggestion. Are you
saying that it's 'likely' that userspace would configure the correct
value?
> > + mutex_unlock(&kvm->arch.config_lock);
> > + if (ret)
> > + return ret;
> > +
> > + /*
> > + * Ignore writes to RES0 bits, read only bits that are cleared on
> > + * vCPU reset, and writable bits that KVM doesn't support yet.
> > + * (i.e. only PMCR.N and bits [7:0] are mutable from userspace)
> > + * The LP bit is RES0 when FEAT_PMUv3p5 is not supported on the vCPU.
> > + * But, we leave the bit as it is here, as the vCPU's PMUver might
> > + * be changed later (NOTE: the bit will be cleared on first vCPU run
> > + * if necessary).
> > + */
> > + mutable_mask = (ARMV8_PMU_PMCR_MASK | ARMV8_PMU_PMCR_N);
> > + val &= mutable_mask;
> > + val |= (__vcpu_sys_reg(vcpu, r->reg) & ~mutable_mask);
> > +
> > + /* The LC bit is RES1 when AArch32 is not supported */
> > + if (!kvm_supports_32bit_el0())
> > + val |= ARMV8_PMU_PMCR_LC;
> > +
> > + __vcpu_sys_reg(vcpu, r->reg) = val;
> > + return 0;
> > +}
> > +
> > /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
> > #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
> > { SYS_DESC(SYS_DBGBVRn_EL1(n)), \
> > @@ -2147,8 +2192,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> > { SYS_DESC(SYS_CTR_EL0), access_ctr },
> > { SYS_DESC(SYS_SVCR), undef_access },
> >
> > - { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr,
> > - .reset = reset_pmcr, .reg = PMCR_EL0, .get_user = get_pmcr },
> > + { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr,
> > + .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr },
>
> A little confusing: since PMU_SYS_REG() sets the default visibility to
> pmu_visibility(), which can return REG_HIDDEN, the set_user call for PMCR
> would be blocked, so how can it be set?
>
> Maybe I'm missing some details.
>
pmu_visibility() returns REG_HIDDEN only if userspace has not enabled
support for PMUv3 via the KVM_ARM_PREFERRED_TARGET ioctl. Otherwise, it
returns 0 and grants access.
Thank you.
Raghavendra
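A minimal sketch of the visibility gating described above (the REG_HIDDEN value and the has_pmu flag are illustrative stand-ins for the kernel's definitions):

```c
#include <assert.h>

#define REG_HIDDEN (1u << 0)  /* illustrative flag, not the kernel's value */

/* pmu_visibility() sketch: PMU registers are hidden from userspace
 * accessors unless the vCPU was created with PMUv3 support enabled. */
static unsigned int pmu_visibility_sketch(int vcpu_has_pmu)
{
	return vcpu_has_pmu ? 0 : REG_HIDDEN;
}
```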
> Thanks,
> Shaoqin
>
> > { PMU_SYS_REG(PMCNTENSET_EL0),
> > .access = access_pmcnten, .reg = PMCNTENSET_EL0 },
> > { PMU_SYS_REG(PMCNTENCLR_EL0),
>
> --
> Shaoqin
>
* Re: [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
2023-08-21 23:28 ` Raghavendra Rao Ananta
@ 2023-08-22 3:26 ` Shaoqin Huang
2023-09-15 20:36 ` Oliver Upton
0 siblings, 1 reply; 41+ messages in thread
From: Shaoqin Huang @ 2023-08-22 3:26 UTC (permalink / raw)
To: Raghavendra Rao Ananta
Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
Hi Raghavendra,
On 8/22/23 07:28, Raghavendra Rao Ananta wrote:
> Hi Shaoqin,
>
> On Mon, Aug 21, 2023 at 5:12 AM Shaoqin Huang <shahuang@redhat.com> wrote:
>>
>> Hi Raghavendra,
>>
>> On 8/17/23 08:30, Raghavendra Rao Ananta wrote:
>>> From: Reiji Watanabe <reijiw@google.com>
>>>
>>> KVM does not yet support userspace modifying PMCR_EL0.N (with
>>> the previous patch, KVM ignores what is written by userspace).
>>> Add support for userspace limiting PMCR_EL0.N.
>>>
>>> Disallow userspace from setting PMCR_EL0.N to a value greater
>>> than the host value (KVM_SET_ONE_REG will fail), as KVM doesn't
>>> support more event counters than the host HW implements.
>>> Although this is an ABI change, it only affects userspace
>>> setting PMCR_EL0.N to a larger value than the host's.
>>> As accesses to unadvertised event counter indices are CONSTRAINED
>>> UNPREDICTABLE behavior, and PMCR_EL0.N was reset to the host value
>>> on every vCPU reset before this series, I can't think of any
>>> use case where userspace would do that.
>>>
>>> Also, ignore writes to read-only bits that are cleared on vCPU reset,
>>> and RES{0,1} bits (including writable bits that KVM doesn't support
>>> yet), as those bits shouldn't be modified (at least with
>>> the current KVM).
>>>
>>> Signed-off-by: Reiji Watanabe <reijiw@google.com>
>>> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
>>> ---
>>> arch/arm64/include/asm/kvm_host.h | 3 ++
>>> arch/arm64/kvm/pmu-emul.c | 1 +
>>> arch/arm64/kvm/sys_regs.c | 49 +++++++++++++++++++++++++++++--
>>> 3 files changed, 51 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>> index 0f2dbbe8f6a7e..c15ec365283d1 100644
>>> --- a/arch/arm64/include/asm/kvm_host.h
>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>> @@ -259,6 +259,9 @@ struct kvm_arch {
>>> /* PMCR_EL0.N value for the guest */
>>> u8 pmcr_n;
>>>
>>> + /* Limit value of PMCR_EL0.N for the guest */
>>> + u8 pmcr_n_limit;
>>> +
>>> /* Hypercall features firmware registers' descriptor */
>>> struct kvm_smccc_features smccc_feat;
>>> struct maple_tree smccc_filter;
>>> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
>>> index ce7de6bbdc967..39ad56a71ad20 100644
>>> --- a/arch/arm64/kvm/pmu-emul.c
>>> +++ b/arch/arm64/kvm/pmu-emul.c
>>> @@ -896,6 +896,7 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
>>> * while the latter does not.
>>> */
>>> kvm->arch.pmcr_n = arm_pmu->num_events - 1;
>>> + kvm->arch.pmcr_n_limit = arm_pmu->num_events - 1;
>>>
>>> return 0;
>>> }
>>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>>> index 2075901356c5b..c01d62afa7db4 100644
>>> --- a/arch/arm64/kvm/sys_regs.c
>>> +++ b/arch/arm64/kvm/sys_regs.c
>>> @@ -1086,6 +1086,51 @@ static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
>>> return 0;
>>> }
>>>
>>> +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
>>> + u64 val)
>>> +{
>>> + struct kvm *kvm = vcpu->kvm;
>>> + u64 new_n, mutable_mask;
>>> + int ret = 0;
>>> +
>>> + new_n = FIELD_GET(ARMV8_PMU_PMCR_N, val);
>>> +
>>> + mutex_lock(&kvm->arch.config_lock);
>>> + if (unlikely(new_n != kvm->arch.pmcr_n)) {
>>> + /*
>>> + * The vCPU can't have more counters than the PMU
>>> + * hardware implements.
>>> + */
>>> + if (new_n <= kvm->arch.pmcr_n_limit)
>>> + kvm->arch.pmcr_n = new_n;
>>> + else
>>> + ret = -EINVAL;
>>> + }
>>
>> Since we have already set the default value of pmcr_n, if we want to set a
>> new pmcr_n, shouldn't it be a different value?
>>
>> So how about changing the check to:
>>
>> if (likely(new_n <= kvm->arch.pmcr_n_limit))
>> 	kvm->arch.pmcr_n = new_n;
>> else
>> 	ret = -EINVAL;
>>
>> What do you think?
>>
> Sorry, I guess I didn't fully understand your suggestion. Are you
> saying that it's 'likely' that userspace would configure the correct
> value?
>
It depends on how userspace uses this API to limit the number of counters.
I think what you mean in the code is that userspace needs to set every
vCPU's PMCR to the same value, so the `unlikely` here is right: only one
vCPU actually changes kvm->arch.pmcr_n, which saves CPU cycles.
My suggestion above might be wrong. I was thinking that when userspace
wants to limit the number of counters, it may set new_n on just one vCPU;
since kvm->arch.pmcr_n is a VM-wide value visible to every vCPU, it is
`likely` that (new_n <= kvm->arch.pmcr_n_limit), which would save one
comparison.
Thanks,
Shaoqin
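The likely()/unlikely() annotations being debated are only branch-layout hints to the compiler; both forms of the check are functionally identical. A hedged standalone sketch of the bounds check with the hint macros expanded (-1 stands in for -EINVAL; names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Kernel-style branch hints: they expand to __builtin_expect and only
 * steer code layout; they never change the result of the condition. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Sketch of the check under discussion: accept a new PMCR_EL0.N only if
 * it does not exceed what the host PMU implements. */
static int set_pmcr_n_sketch(uint8_t *pmcr_n, uint8_t limit, uint8_t new_n)
{
	if (unlikely(new_n != *pmcr_n)) {  /* most writes keep the default */
		if (new_n <= limit)
			*pmcr_n = new_n;
		else
			return -1;         /* -EINVAL in the kernel code */
	}
	return 0;
}
```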
>>> + mutex_unlock(&kvm->arch.config_lock);
>>> + if (ret)
>>> + return ret;
>>> +
>>> + /*
>>> + * Ignore writes to RES0 bits, read only bits that are cleared on
>>> + * vCPU reset, and writable bits that KVM doesn't support yet.
>>> + * (i.e. only PMCR.N and bits [7:0] are mutable from userspace)
>>> + * The LP bit is RES0 when FEAT_PMUv3p5 is not supported on the vCPU.
>>> + * But, we leave the bit as it is here, as the vCPU's PMUver might
>>> + * be changed later (NOTE: the bit will be cleared on first vCPU run
>>> + * if necessary).
>>> + */
>>> + mutable_mask = (ARMV8_PMU_PMCR_MASK | ARMV8_PMU_PMCR_N);
>>> + val &= mutable_mask;
>>> + val |= (__vcpu_sys_reg(vcpu, r->reg) & ~mutable_mask);
>>> +
>>> + /* The LC bit is RES1 when AArch32 is not supported */
>>> + if (!kvm_supports_32bit_el0())
>>> + val |= ARMV8_PMU_PMCR_LC;
>>> +
>>> + __vcpu_sys_reg(vcpu, r->reg) = val;
>>> + return 0;
>>> +}
>>> +
>>> /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
>>> #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
>>> { SYS_DESC(SYS_DBGBVRn_EL1(n)), \
>>> @@ -2147,8 +2192,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>>> { SYS_DESC(SYS_CTR_EL0), access_ctr },
>>> { SYS_DESC(SYS_SVCR), undef_access },
>>>
>>> - { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr,
>>> - .reset = reset_pmcr, .reg = PMCR_EL0, .get_user = get_pmcr },
>>> + { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr,
>>> + .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr },
>>
>> A little confusing: since PMU_SYS_REG() sets the default visibility to
>> pmu_visibility(), which can return REG_HIDDEN, the set_user call for PMCR
>> would be blocked, so how can it be set?
>>
>> Maybe I'm missing some details.
>>
> pmu_visibility() returns REG_HIDDEN only if userspace has not enabled
> support for PMUv3 via the KVM_ARM_PREFERRED_TARGET ioctl. Otherwise, it
> returns 0 and grants access.
>
Got it. Thanks.
> Thank you.
> Raghavendra
>
>> Thanks,
>> Shaoqin
>>
>>> { PMU_SYS_REG(PMCNTENSET_EL0),
>>> .access = access_pmcnten, .reg = PMCNTENSET_EL0 },
>>> { PMU_SYS_REG(PMCNTENCLR_EL0),
>>
>> --
>> Shaoqin
>>
>
--
Shaoqin
* Re: [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
2023-08-17 0:30 ` [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest Raghavendra Rao Ananta
2023-08-21 12:12 ` Shaoqin Huang
@ 2023-08-22 10:05 ` Shaoqin Huang
2023-08-23 16:06 ` Raghavendra Rao Ananta
2023-09-15 20:53 ` Oliver Upton
2 siblings, 1 reply; 41+ messages in thread
From: Shaoqin Huang @ 2023-08-22 10:05 UTC (permalink / raw)
To: Raghavendra Rao Ananta, Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
Zenghui Yu, Jing Zhang, Reiji Watanabe, Colton Lewis,
linux-arm-kernel, kvmarm, linux-kernel, kvm
Hi Raghavendra,
On 8/17/23 08:30, Raghavendra Rao Ananta wrote:
> From: Reiji Watanabe <reijiw@google.com>
>
> KVM does not yet support userspace modifying PMCR_EL0.N (with
> the previous patch, KVM ignores what is written by userspace).
> Add support for userspace limiting PMCR_EL0.N.
>
> Disallow userspace from setting PMCR_EL0.N to a value greater
> than the host value (KVM_SET_ONE_REG will fail), as KVM doesn't
> support more event counters than the host HW implements.
> Although this is an ABI change, it only affects userspace
> setting PMCR_EL0.N to a larger value than the host's.
> As accesses to unadvertised event counter indices are CONSTRAINED
> UNPREDICTABLE behavior, and PMCR_EL0.N was reset to the host value
> on every vCPU reset before this series, I can't think of any
> use case where userspace would do that.
>
> Also, ignore writes to read-only bits that are cleared on vCPU reset,
> and RES{0,1} bits (including writable bits that KVM doesn't support
> yet), as those bits shouldn't be modified (at least with
> the current KVM).
>
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
> arch/arm64/include/asm/kvm_host.h | 3 ++
> arch/arm64/kvm/pmu-emul.c | 1 +
> arch/arm64/kvm/sys_regs.c | 49 +++++++++++++++++++++++++++++--
> 3 files changed, 51 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 0f2dbbe8f6a7e..c15ec365283d1 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -259,6 +259,9 @@ struct kvm_arch {
> /* PMCR_EL0.N value for the guest */
> u8 pmcr_n;
>
> + /* Limit value of PMCR_EL0.N for the guest */
> + u8 pmcr_n_limit;
> +
> /* Hypercall features firmware registers' descriptor */
> struct kvm_smccc_features smccc_feat;
> struct maple_tree smccc_filter;
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index ce7de6bbdc967..39ad56a71ad20 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -896,6 +896,7 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> * while the latter does not.
> */
> kvm->arch.pmcr_n = arm_pmu->num_events - 1;
> + kvm->arch.pmcr_n_limit = arm_pmu->num_events - 1;
>
> return 0;
> }
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 2075901356c5b..c01d62afa7db4 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1086,6 +1086,51 @@ static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> return 0;
> }
>
> +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> + u64 val)
> +{
> + struct kvm *kvm = vcpu->kvm;
> + u64 new_n, mutable_mask;
> + int ret = 0;
> +
> + new_n = FIELD_GET(ARMV8_PMU_PMCR_N, val);
> +
> + mutex_lock(&kvm->arch.config_lock);
> + if (unlikely(new_n != kvm->arch.pmcr_n)) {
> + /*
> + * The vCPU can't have more counters than the PMU
> + * hardware implements.
> + */
> + if (new_n <= kvm->arch.pmcr_n_limit)
> + kvm->arch.pmcr_n = new_n;
> + else
> + ret = -EINVAL;
> + }
> + mutex_unlock(&kvm->arch.config_lock);
Another thing I'm wondering: should we block any modification to pmcr_n
after the VM starts to run? For example, by adding a kvm_vm_has_ran_once()
check at the beginning of set_pmcr().
Thanks,
Shaoqin
> + if (ret)
> + return ret;
> +
> + /*
> + * Ignore writes to RES0 bits, read only bits that are cleared on
> + * vCPU reset, and writable bits that KVM doesn't support yet.
> + * (i.e. only PMCR.N and bits [7:0] are mutable from userspace)
> + * The LP bit is RES0 when FEAT_PMUv3p5 is not supported on the vCPU.
> + * But, we leave the bit as it is here, as the vCPU's PMUver might
> + * be changed later (NOTE: the bit will be cleared on first vCPU run
> + * if necessary).
> + */
> + mutable_mask = (ARMV8_PMU_PMCR_MASK | ARMV8_PMU_PMCR_N);
> + val &= mutable_mask;
> + val |= (__vcpu_sys_reg(vcpu, r->reg) & ~mutable_mask);
> +
> + /* The LC bit is RES1 when AArch32 is not supported */
> + if (!kvm_supports_32bit_el0())
> + val |= ARMV8_PMU_PMCR_LC;
> +
> + __vcpu_sys_reg(vcpu, r->reg) = val;
> + return 0;
> +}
> +
> /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
> #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
> { SYS_DESC(SYS_DBGBVRn_EL1(n)), \
> @@ -2147,8 +2192,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> { SYS_DESC(SYS_CTR_EL0), access_ctr },
> { SYS_DESC(SYS_SVCR), undef_access },
>
> - { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr,
> - .reset = reset_pmcr, .reg = PMCR_EL0, .get_user = get_pmcr },
> + { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr,
> + .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr },
> { PMU_SYS_REG(PMCNTENSET_EL0),
> .access = access_pmcnten, .reg = PMCNTENSET_EL0 },
> { PMU_SYS_REG(PMCNTENCLR_EL0),
--
Shaoqin
* Re: [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
2023-08-22 10:05 ` Shaoqin Huang
@ 2023-08-23 16:06 ` Raghavendra Rao Ananta
2023-08-24 8:50 ` Shaoqin Huang
0 siblings, 1 reply; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-08-23 16:06 UTC (permalink / raw)
To: Shaoqin Huang
Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
On Tue, Aug 22, 2023 at 3:06 AM Shaoqin Huang <shahuang@redhat.com> wrote:
>
> Hi Raghavendra,
>
> On 8/17/23 08:30, Raghavendra Rao Ananta wrote:
> > From: Reiji Watanabe <reijiw@google.com>
> >
> > KVM does not yet support userspace modifying PMCR_EL0.N (with
> > the previous patch, KVM ignores what is written by userspace).
> > Add support for userspace limiting PMCR_EL0.N.
> >
> > Disallow userspace from setting PMCR_EL0.N to a value greater
> > than the host value (KVM_SET_ONE_REG will fail), as KVM doesn't
> > support more event counters than the host HW implements.
> > Although this is an ABI change, it only affects userspace
> > setting PMCR_EL0.N to a larger value than the host's.
> > As accesses to unadvertised event counter indices are CONSTRAINED
> > UNPREDICTABLE behavior, and PMCR_EL0.N was reset to the host value
> > on every vCPU reset before this series, I can't think of any
> > use case where userspace would do that.
> >
> > Also, ignore writes to read-only bits that are cleared on vCPU reset,
> > and RES{0,1} bits (including writable bits that KVM doesn't support
> > yet), as those bits shouldn't be modified (at least with
> > the current KVM).
> >
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> > arch/arm64/include/asm/kvm_host.h | 3 ++
> > arch/arm64/kvm/pmu-emul.c | 1 +
> > arch/arm64/kvm/sys_regs.c | 49 +++++++++++++++++++++++++++++--
> > 3 files changed, 51 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 0f2dbbe8f6a7e..c15ec365283d1 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -259,6 +259,9 @@ struct kvm_arch {
> > /* PMCR_EL0.N value for the guest */
> > u8 pmcr_n;
> >
> > + /* Limit value of PMCR_EL0.N for the guest */
> > + u8 pmcr_n_limit;
> > +
> > /* Hypercall features firmware registers' descriptor */
> > struct kvm_smccc_features smccc_feat;
> > struct maple_tree smccc_filter;
> > diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> > index ce7de6bbdc967..39ad56a71ad20 100644
> > --- a/arch/arm64/kvm/pmu-emul.c
> > +++ b/arch/arm64/kvm/pmu-emul.c
> > @@ -896,6 +896,7 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> > * while the latter does not.
> > */
> > kvm->arch.pmcr_n = arm_pmu->num_events - 1;
> > + kvm->arch.pmcr_n_limit = arm_pmu->num_events - 1;
> >
> > return 0;
> > }
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 2075901356c5b..c01d62afa7db4 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -1086,6 +1086,51 @@ static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> > return 0;
> > }
> >
> > +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> > + u64 val)
> > +{
> > + struct kvm *kvm = vcpu->kvm;
> > + u64 new_n, mutable_mask;
> > + int ret = 0;
> > +
> > + new_n = FIELD_GET(ARMV8_PMU_PMCR_N, val);
> > +
> > + mutex_lock(&kvm->arch.config_lock);
> > + if (unlikely(new_n != kvm->arch.pmcr_n)) {
> > + /*
> > + * The vCPU can't have more counters than the PMU
> > + * hardware implements.
> > + */
> > + if (new_n <= kvm->arch.pmcr_n_limit)
> > + kvm->arch.pmcr_n = new_n;
> > + else
> > + ret = -EINVAL;
> > + }
> > + mutex_unlock(&kvm->arch.config_lock);
>
> Another thing I'm wondering: should we block any modification to pmcr_n
> after the VM starts to run? For example, by adding a kvm_vm_has_ran_once()
> check at the beginning of set_pmcr().
>
Thanks for bringing it up. Reiji and I discussed this. Checking
kvm_vm_has_ran_once() would be a good move; however, it would go
against the ABI expectations for setting PMCR. I'd like others to
weigh in on this as well. What do you think?
Thank you.
Raghavendra
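If such a guard were adopted, it would amount to something like the following sketch (kvm_vm_has_ran_once() is a real KVM helper, but the surrounding scaffolding and the -EBUSY choice here are illustrative, not a decided design):

```c
#include <assert.h>

/* Sketch of the proposed guard: once the VM has run, reject further
 * changes to PMCR_EL0.N. ran_once stands in for kvm_vm_has_ran_once();
 * -2/-1 stand in for -EBUSY/-EINVAL. */
static int set_pmcr_n_guarded(int ran_once, int *pmcr_n, int new_n, int limit)
{
	if (ran_once)
		return -2;   /* configuration is frozen after first vCPU run */
	if (new_n > limit)
		return -1;   /* can't exceed what the host PMU implements */
	*pmcr_n = new_n;
	return 0;
}
```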
> Thanks,
> Shaoqin
>
> > + if (ret)
> > + return ret;
> > +
> > + /*
> > + * Ignore writes to RES0 bits, read only bits that are cleared on
> > + * vCPU reset, and writable bits that KVM doesn't support yet.
> > + * (i.e. only PMCR.N and bits [7:0] are mutable from userspace)
> > + * The LP bit is RES0 when FEAT_PMUv3p5 is not supported on the vCPU.
> > + * But, we leave the bit as it is here, as the vCPU's PMUver might
> > + * be changed later (NOTE: the bit will be cleared on first vCPU run
> > + * if necessary).
> > + */
> > + mutable_mask = (ARMV8_PMU_PMCR_MASK | ARMV8_PMU_PMCR_N);
> > + val &= mutable_mask;
> > + val |= (__vcpu_sys_reg(vcpu, r->reg) & ~mutable_mask);
> > +
> > + /* The LC bit is RES1 when AArch32 is not supported */
> > + if (!kvm_supports_32bit_el0())
> > + val |= ARMV8_PMU_PMCR_LC;
> > +
> > + __vcpu_sys_reg(vcpu, r->reg) = val;
> > + return 0;
> > +}
> > +
> > /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
> > #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
> > { SYS_DESC(SYS_DBGBVRn_EL1(n)), \
> > @@ -2147,8 +2192,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> > { SYS_DESC(SYS_CTR_EL0), access_ctr },
> > { SYS_DESC(SYS_SVCR), undef_access },
> >
> > - { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr,
> > - .reset = reset_pmcr, .reg = PMCR_EL0, .get_user = get_pmcr },
> > + { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr,
> > + .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr },
> > { PMU_SYS_REG(PMCNTENSET_EL0),
> > .access = access_pmcnten, .reg = PMCNTENSET_EL0 },
> > { PMU_SYS_REG(PMCNTENCLR_EL0),
>
> --
> Shaoqin
>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
2023-08-23 16:06 ` Raghavendra Rao Ananta
@ 2023-08-24 8:50 ` Shaoqin Huang
2023-08-25 22:34 ` Raghavendra Rao Ananta
0 siblings, 1 reply; 41+ messages in thread
From: Shaoqin Huang @ 2023-08-24 8:50 UTC (permalink / raw)
To: Raghavendra Rao Ananta
Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
On 8/24/23 00:06, Raghavendra Rao Ananta wrote:
> On Tue, Aug 22, 2023 at 3:06 AM Shaoqin Huang <shahuang@redhat.com> wrote:
>>
>> Hi Raghavendra,
>>
>> On 8/17/23 08:30, Raghavendra Rao Ananta wrote:
>>> From: Reiji Watanabe <reijiw@google.com>
>>>
>>> KVM does not yet support userspace modifying PMCR_EL0.N (with
>>> the previous patch, KVM ignores what is written by userspace).
>>> Add support for userspace limiting PMCR_EL0.N.
>>>
>>> Disallow userspace from setting PMCR_EL0.N to a value that is greater
>>> than the host value (KVM_SET_ONE_REG will fail), as KVM doesn't
>>> support more event counters than the host HW implements.
>>> Although this is an ABI change, it only affects userspace
>>> setting PMCR_EL0.N to a larger value than the host's.
>>> As accesses to unadvertised event counter indices are CONSTRAINED
>>> UNPREDICTABLE behavior, and PMCR_EL0.N was reset to the host value
>>> on every vCPU reset before this series, I can't think of any
>>> use case where userspace would do that.
>>>
>>> Also, ignore writes to read-only bits that are cleared on vCPU reset,
>>> and RES{0,1} bits (including writable bits that KVM doesn't support
>>> yet), as those bits shouldn't be modified (at least with
>>> the current KVM).
>>>
>>> Signed-off-by: Reiji Watanabe <reijiw@google.com>
>>> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
>>> ---
>>> arch/arm64/include/asm/kvm_host.h | 3 ++
>>> arch/arm64/kvm/pmu-emul.c | 1 +
>>> arch/arm64/kvm/sys_regs.c | 49 +++++++++++++++++++++++++++++--
>>> 3 files changed, 51 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>> index 0f2dbbe8f6a7e..c15ec365283d1 100644
>>> --- a/arch/arm64/include/asm/kvm_host.h
>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>> @@ -259,6 +259,9 @@ struct kvm_arch {
>>> /* PMCR_EL0.N value for the guest */
>>> u8 pmcr_n;
>>>
>>> + /* Limit value of PMCR_EL0.N for the guest */
>>> + u8 pmcr_n_limit;
>>> +
>>> /* Hypercall features firmware registers' descriptor */
>>> struct kvm_smccc_features smccc_feat;
>>> struct maple_tree smccc_filter;
>>> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
>>> index ce7de6bbdc967..39ad56a71ad20 100644
>>> --- a/arch/arm64/kvm/pmu-emul.c
>>> +++ b/arch/arm64/kvm/pmu-emul.c
>>> @@ -896,6 +896,7 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
>>> * while the latter does not.
>>> */
>>> kvm->arch.pmcr_n = arm_pmu->num_events - 1;
>>> + kvm->arch.pmcr_n_limit = arm_pmu->num_events - 1;
>>>
>>> return 0;
>>> }
>>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>>> index 2075901356c5b..c01d62afa7db4 100644
>>> --- a/arch/arm64/kvm/sys_regs.c
>>> +++ b/arch/arm64/kvm/sys_regs.c
>>> @@ -1086,6 +1086,51 @@ static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
>>> return 0;
>>> }
>>>
>>> +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
>>> + u64 val)
>>> +{
>>> + struct kvm *kvm = vcpu->kvm;
>>> + u64 new_n, mutable_mask;
>>> + int ret = 0;
>>> +
>>> + new_n = FIELD_GET(ARMV8_PMU_PMCR_N, val);
>>> +
>>> + mutex_lock(&kvm->arch.config_lock);
>>> + if (unlikely(new_n != kvm->arch.pmcr_n)) {
>>> + /*
>>> + * The vCPU can't have more counters than the PMU
>>> + * hardware implements.
>>> + */
>>> + if (new_n <= kvm->arch.pmcr_n_limit)
>>> + kvm->arch.pmcr_n = new_n;
>>> + else
>>> + ret = -EINVAL;
>>> + }
>>> + mutex_unlock(&kvm->arch.config_lock);
>>
>> Another thing I am wondering is whether we should block any modification
>> to pmcr_n after the VM starts to run, e.g. by adding a
>> kvm_vm_has_ran_once() check at the beginning of set_pmcr().
>>
> Thanks for bringing it up. Reiji and I discussed this. Checking
> for kvm_vm_has_ran_once() would be a good move; however, it would go
> against the ABI expectations for setting PMCR. I'd like others to
> weigh in on this as well. What do you think?
>
> Thank you.
> Raghavendra
Before this change, KVM did not allow userspace to change pmcr_n, but
it did allow changes to the lower ARMV8_PMU_PMCR_MASK bits. Now that
we allow changing pmcr_n, we should not block changes to the
ARMV8_PMU_PMCR_MASK bits after the VM starts running, but how about
blocking only the change to ARMV8_PMU_PMCR_N after the VM starts running?
Thanks,
Shaoqin
>> Thanks,
>> Shaoqin
>>
>>> + if (ret)
>>> + return ret;
>>> +
>>> + /*
>>> + * Ignore writes to RES0 bits, read only bits that are cleared on
>>> + * vCPU reset, and writable bits that KVM doesn't support yet.
>>> + * (i.e. only PMCR.N and bits [7:0] are mutable from userspace)
>>> + * The LP bit is RES0 when FEAT_PMUv3p5 is not supported on the vCPU.
>>> + * But, we leave the bit as it is here, as the vCPU's PMUver might
>>> + * be changed later (NOTE: the bit will be cleared on first vCPU run
>>> + * if necessary).
>>> + */
>>> + mutable_mask = (ARMV8_PMU_PMCR_MASK | ARMV8_PMU_PMCR_N);
>>> + val &= mutable_mask;
>>> + val |= (__vcpu_sys_reg(vcpu, r->reg) & ~mutable_mask);
>>> +
>>> + /* The LC bit is RES1 when AArch32 is not supported */
>>> + if (!kvm_supports_32bit_el0())
>>> + val |= ARMV8_PMU_PMCR_LC;
>>> +
>>> + __vcpu_sys_reg(vcpu, r->reg) = val;
>>> + return 0;
>>> +}
>>> +
>>> /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
>>> #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
>>> { SYS_DESC(SYS_DBGBVRn_EL1(n)), \
>>> @@ -2147,8 +2192,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>>> { SYS_DESC(SYS_CTR_EL0), access_ctr },
>>> { SYS_DESC(SYS_SVCR), undef_access },
>>>
>>> - { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr,
>>> - .reset = reset_pmcr, .reg = PMCR_EL0, .get_user = get_pmcr },
>>> + { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr,
>>> + .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr },
>>> { PMU_SYS_REG(PMCNTENSET_EL0),
>>> .access = access_pmcnten, .reg = PMCNTENSET_EL0 },
>>> { PMU_SYS_REG(PMCNTENCLR_EL0),
>>
>> --
>> Shaoqin
>>
>
--
Shaoqin
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
2023-08-24 8:50 ` Shaoqin Huang
@ 2023-08-25 22:34 ` Raghavendra Rao Ananta
2023-08-26 2:40 ` Shaoqin Huang
0 siblings, 1 reply; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-08-25 22:34 UTC (permalink / raw)
To: Shaoqin Huang
Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
On Thu, Aug 24, 2023 at 1:50 AM Shaoqin Huang <shahuang@redhat.com> wrote:
>
>
>
> On 8/24/23 00:06, Raghavendra Rao Ananta wrote:
> > On Tue, Aug 22, 2023 at 3:06 AM Shaoqin Huang <shahuang@redhat.com> wrote:
> >>
> >> Hi Raghavendra,
> >>
> >> On 8/17/23 08:30, Raghavendra Rao Ananta wrote:
> >>> From: Reiji Watanabe <reijiw@google.com>
> >>>
> >>> KVM does not yet support userspace modifying PMCR_EL0.N (with
> >>> the previous patch, KVM ignores what is written by userspace).
> >>> Add support for userspace limiting PMCR_EL0.N.
> >>>
> >>> Disallow userspace from setting PMCR_EL0.N to a value that is greater
> >>> than the host value (KVM_SET_ONE_REG will fail), as KVM doesn't
> >>> support more event counters than the host HW implements.
> >>> Although this is an ABI change, it only affects userspace
> >>> setting PMCR_EL0.N to a larger value than the host's.
> >>> As accesses to unadvertised event counter indices are CONSTRAINED
> >>> UNPREDICTABLE behavior, and PMCR_EL0.N was reset to the host value
> >>> on every vCPU reset before this series, I can't think of any
> >>> use case where userspace would do that.
> >>>
> >>> Also, ignore writes to read-only bits that are cleared on vCPU reset,
> >>> and RES{0,1} bits (including writable bits that KVM doesn't support
> >>> yet), as those bits shouldn't be modified (at least with
> >>> the current KVM).
> >>>
> >>> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> >>> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> >>> ---
> >>> arch/arm64/include/asm/kvm_host.h | 3 ++
> >>> arch/arm64/kvm/pmu-emul.c | 1 +
> >>> arch/arm64/kvm/sys_regs.c | 49 +++++++++++++++++++++++++++++--
> >>> 3 files changed, 51 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> >>> index 0f2dbbe8f6a7e..c15ec365283d1 100644
> >>> --- a/arch/arm64/include/asm/kvm_host.h
> >>> +++ b/arch/arm64/include/asm/kvm_host.h
> >>> @@ -259,6 +259,9 @@ struct kvm_arch {
> >>> /* PMCR_EL0.N value for the guest */
> >>> u8 pmcr_n;
> >>>
> >>> + /* Limit value of PMCR_EL0.N for the guest */
> >>> + u8 pmcr_n_limit;
> >>> +
> >>> /* Hypercall features firmware registers' descriptor */
> >>> struct kvm_smccc_features smccc_feat;
> >>> struct maple_tree smccc_filter;
> >>> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> >>> index ce7de6bbdc967..39ad56a71ad20 100644
> >>> --- a/arch/arm64/kvm/pmu-emul.c
> >>> +++ b/arch/arm64/kvm/pmu-emul.c
> >>> @@ -896,6 +896,7 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> >>> * while the latter does not.
> >>> */
> >>> kvm->arch.pmcr_n = arm_pmu->num_events - 1;
> >>> + kvm->arch.pmcr_n_limit = arm_pmu->num_events - 1;
> >>>
> >>> return 0;
> >>> }
> >>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> >>> index 2075901356c5b..c01d62afa7db4 100644
> >>> --- a/arch/arm64/kvm/sys_regs.c
> >>> +++ b/arch/arm64/kvm/sys_regs.c
> >>> @@ -1086,6 +1086,51 @@ static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> >>> return 0;
> >>> }
> >>>
> >>> +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> >>> + u64 val)
> >>> +{
> >>> + struct kvm *kvm = vcpu->kvm;
> >>> + u64 new_n, mutable_mask;
> >>> + int ret = 0;
> >>> +
> >>> + new_n = FIELD_GET(ARMV8_PMU_PMCR_N, val);
> >>> +
> >>> + mutex_lock(&kvm->arch.config_lock);
> >>> + if (unlikely(new_n != kvm->arch.pmcr_n)) {
> >>> + /*
> >>> + * The vCPU can't have more counters than the PMU
> >>> + * hardware implements.
> >>> + */
> >>> + if (new_n <= kvm->arch.pmcr_n_limit)
> >>> + kvm->arch.pmcr_n = new_n;
> >>> + else
> >>> + ret = -EINVAL;
> >>> + }
> >>> + mutex_unlock(&kvm->arch.config_lock);
> >>
> >> Another thing I am wondering is whether we should block any modification
> >> to pmcr_n after the VM starts to run, e.g. by adding a
> >> kvm_vm_has_ran_once() check at the beginning of set_pmcr().
> >>
> > Thanks for bringing it up. Reiji and I discussed this. Checking
> > for kvm_vm_has_ran_once() would be a good move; however, it would go
> > against the ABI expectations for setting PMCR. I'd like others to
> > weigh in on this as well. What do you think?
> >
> > Thank you.
> > Raghavendra
>
> Before this change, KVM did not allow userspace to change pmcr_n, but
> it did allow changes to the lower ARMV8_PMU_PMCR_MASK bits. Now that
> we allow changing pmcr_n, we should not block changes to the
> ARMV8_PMU_PMCR_MASK bits after the VM starts running, but how about
> blocking only the change to ARMV8_PMU_PMCR_N after the VM starts running?
>
I believe you are referring to the guest's trapped accesses
(access_pmcr()). This patch focuses on userspace access to PMCR
via the KVM_SET_ONE_REG ioctl. Before this patch, KVM did not control
writes to the register, and userspace was free to write any bits at
any time.
Thank you.
Raghavendra
> Thanks,
> Shaoqin
>
> >> Thanks,
> >> Shaoqin
> >>
> >>> + if (ret)
> >>> + return ret;
> >>> +
> >>> + /*
> >>> + * Ignore writes to RES0 bits, read only bits that are cleared on
> >>> + * vCPU reset, and writable bits that KVM doesn't support yet.
> >>> + * (i.e. only PMCR.N and bits [7:0] are mutable from userspace)
> >>> + * The LP bit is RES0 when FEAT_PMUv3p5 is not supported on the vCPU.
> >>> + * But, we leave the bit as it is here, as the vCPU's PMUver might
> >>> + * be changed later (NOTE: the bit will be cleared on first vCPU run
> >>> + * if necessary).
> >>> + */
> >>> + mutable_mask = (ARMV8_PMU_PMCR_MASK | ARMV8_PMU_PMCR_N);
> >>> + val &= mutable_mask;
> >>> + val |= (__vcpu_sys_reg(vcpu, r->reg) & ~mutable_mask);
> >>> +
> >>> + /* The LC bit is RES1 when AArch32 is not supported */
> >>> + if (!kvm_supports_32bit_el0())
> >>> + val |= ARMV8_PMU_PMCR_LC;
> >>> +
> >>> + __vcpu_sys_reg(vcpu, r->reg) = val;
> >>> + return 0;
> >>> +}
> >>> +
> >>> /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
> >>> #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
> >>> { SYS_DESC(SYS_DBGBVRn_EL1(n)), \
> >>> @@ -2147,8 +2192,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> >>> { SYS_DESC(SYS_CTR_EL0), access_ctr },
> >>> { SYS_DESC(SYS_SVCR), undef_access },
> >>>
> >>> - { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr,
> >>> - .reset = reset_pmcr, .reg = PMCR_EL0, .get_user = get_pmcr },
> >>> + { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr,
> >>> + .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr },
> >>> { PMU_SYS_REG(PMCNTENSET_EL0),
> >>> .access = access_pmcnten, .reg = PMCNTENSET_EL0 },
> >>> { PMU_SYS_REG(PMCNTENCLR_EL0),
> >>
> >> --
> >> Shaoqin
> >>
> >
>
> --
> Shaoqin
>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
2023-08-25 22:34 ` Raghavendra Rao Ananta
@ 2023-08-26 2:40 ` Shaoqin Huang
0 siblings, 0 replies; 41+ messages in thread
From: Shaoqin Huang @ 2023-08-26 2:40 UTC (permalink / raw)
To: Raghavendra Rao Ananta
Cc: Oliver Upton, Marc Zyngier, Alexandru Elisei, James Morse,
Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
On 8/26/23 06:34, Raghavendra Rao Ananta wrote:
> On Thu, Aug 24, 2023 at 1:50 AM Shaoqin Huang <shahuang@redhat.com> wrote:
>>
>>
>>
>> On 8/24/23 00:06, Raghavendra Rao Ananta wrote:
>>> On Tue, Aug 22, 2023 at 3:06 AM Shaoqin Huang <shahuang@redhat.com> wrote:
>>>>
>>>> Hi Raghavendra,
>>>>
>>>> On 8/17/23 08:30, Raghavendra Rao Ananta wrote:
>>>>> From: Reiji Watanabe <reijiw@google.com>
>>>>>
> >>>>> KVM does not yet support userspace modifying PMCR_EL0.N (with
> >>>>> the previous patch, KVM ignores what is written by userspace).
> >>>>> Add support for userspace limiting PMCR_EL0.N.
> >>>>>
> >>>>> Disallow userspace from setting PMCR_EL0.N to a value that is greater
> >>>>> than the host value (KVM_SET_ONE_REG will fail), as KVM doesn't
> >>>>> support more event counters than the host HW implements.
> >>>>> Although this is an ABI change, it only affects userspace
> >>>>> setting PMCR_EL0.N to a larger value than the host's.
> >>>>> As accesses to unadvertised event counter indices are CONSTRAINED
> >>>>> UNPREDICTABLE behavior, and PMCR_EL0.N was reset to the host value
> >>>>> on every vCPU reset before this series, I can't think of any
> >>>>> use case where userspace would do that.
>>>>>
>>>>> Also, ignore writes to read-only bits that are cleared on vCPU reset,
>>>>> and RES{0,1} bits (including writable bits that KVM doesn't support
>>>>> yet), as those bits shouldn't be modified (at least with
>>>>> the current KVM).
>>>>>
>>>>> Signed-off-by: Reiji Watanabe <reijiw@google.com>
>>>>> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
>>>>> ---
>>>>> arch/arm64/include/asm/kvm_host.h | 3 ++
>>>>> arch/arm64/kvm/pmu-emul.c | 1 +
>>>>> arch/arm64/kvm/sys_regs.c | 49 +++++++++++++++++++++++++++++--
>>>>> 3 files changed, 51 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>>>> index 0f2dbbe8f6a7e..c15ec365283d1 100644
>>>>> --- a/arch/arm64/include/asm/kvm_host.h
>>>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>>>> @@ -259,6 +259,9 @@ struct kvm_arch {
>>>>> /* PMCR_EL0.N value for the guest */
>>>>> u8 pmcr_n;
>>>>>
>>>>> + /* Limit value of PMCR_EL0.N for the guest */
>>>>> + u8 pmcr_n_limit;
>>>>> +
>>>>> /* Hypercall features firmware registers' descriptor */
>>>>> struct kvm_smccc_features smccc_feat;
>>>>> struct maple_tree smccc_filter;
>>>>> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
>>>>> index ce7de6bbdc967..39ad56a71ad20 100644
>>>>> --- a/arch/arm64/kvm/pmu-emul.c
>>>>> +++ b/arch/arm64/kvm/pmu-emul.c
>>>>> @@ -896,6 +896,7 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
>>>>> * while the latter does not.
>>>>> */
>>>>> kvm->arch.pmcr_n = arm_pmu->num_events - 1;
>>>>> + kvm->arch.pmcr_n_limit = arm_pmu->num_events - 1;
>>>>>
>>>>> return 0;
>>>>> }
>>>>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>>>>> index 2075901356c5b..c01d62afa7db4 100644
>>>>> --- a/arch/arm64/kvm/sys_regs.c
>>>>> +++ b/arch/arm64/kvm/sys_regs.c
>>>>> @@ -1086,6 +1086,51 @@ static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
>>>>> return 0;
>>>>> }
>>>>>
>>>>> +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
>>>>> + u64 val)
>>>>> +{
>>>>> + struct kvm *kvm = vcpu->kvm;
>>>>> + u64 new_n, mutable_mask;
>>>>> + int ret = 0;
>>>>> +
>>>>> + new_n = FIELD_GET(ARMV8_PMU_PMCR_N, val);
>>>>> +
>>>>> + mutex_lock(&kvm->arch.config_lock);
>>>>> + if (unlikely(new_n != kvm->arch.pmcr_n)) {
>>>>> + /*
>>>>> + * The vCPU can't have more counters than the PMU
>>>>> + * hardware implements.
>>>>> + */
>>>>> + if (new_n <= kvm->arch.pmcr_n_limit)
>>>>> + kvm->arch.pmcr_n = new_n;
>>>>> + else
>>>>> + ret = -EINVAL;
>>>>> + }
>>>>> + mutex_unlock(&kvm->arch.config_lock);
>>>>
>>>> Another thing I am wondering is whether we should block any modification
>>>> to pmcr_n after the VM starts to run, e.g. by adding a
>>>> kvm_vm_has_ran_once() check at the beginning of set_pmcr().
>>>>
>>> Thanks for bringing it up. Reiji and I discussed this. Checking
>>> for kvm_vm_has_ran_once() would be a good move; however, it would go
>>> against the ABI expectations for setting PMCR. I'd like others to
>>> weigh in on this as well. What do you think?
>>>
>>> Thank you.
>>> Raghavendra
>>
>> Before this change, KVM did not allow userspace to change pmcr_n, but
>> it did allow changes to the lower ARMV8_PMU_PMCR_MASK bits. Now that
>> we allow changing pmcr_n, we should not block changes to the
>> ARMV8_PMU_PMCR_MASK bits after the VM starts running, but how about
>> blocking only the change to ARMV8_PMU_PMCR_N after the VM starts running?
>>
> I believe you are referring to the guest's trapped accesses
> (access_pmcr()). This patch focuses on userspace access to PMCR
> via the KVM_SET_ONE_REG ioctl. Before this patch, KVM did not control
> writes to the register, and userspace was free to write any bits at
> any time.
>
Oh yeah. Thanks for your explanation. My head sometimes breaks down.
Thanks,
Shaoqin
> Thank you.
> Raghavendra
>> Thanks,
>> Shaoqin
>>
>>>> Thanks,
>>>> Shaoqin
>>>>
>>>>> + if (ret)
>>>>> + return ret;
>>>>> +
>>>>> + /*
>>>>> + * Ignore writes to RES0 bits, read only bits that are cleared on
>>>>> + * vCPU reset, and writable bits that KVM doesn't support yet.
>>>>> + * (i.e. only PMCR.N and bits [7:0] are mutable from userspace)
>>>>> + * The LP bit is RES0 when FEAT_PMUv3p5 is not supported on the vCPU.
>>>>> + * But, we leave the bit as it is here, as the vCPU's PMUver might
>>>>> + * be changed later (NOTE: the bit will be cleared on first vCPU run
>>>>> + * if necessary).
>>>>> + */
>>>>> + mutable_mask = (ARMV8_PMU_PMCR_MASK | ARMV8_PMU_PMCR_N);
>>>>> + val &= mutable_mask;
>>>>> + val |= (__vcpu_sys_reg(vcpu, r->reg) & ~mutable_mask);
>>>>> +
>>>>> + /* The LC bit is RES1 when AArch32 is not supported */
>>>>> + if (!kvm_supports_32bit_el0())
>>>>> + val |= ARMV8_PMU_PMCR_LC;
>>>>> +
>>>>> + __vcpu_sys_reg(vcpu, r->reg) = val;
>>>>> + return 0;
>>>>> +}
>>>>> +
>>>>> /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
>>>>> #define DBG_BCR_BVR_WCR_WVR_EL1(n) \
>>>>> { SYS_DESC(SYS_DBGBVRn_EL1(n)), \
>>>>> @@ -2147,8 +2192,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>>>>> { SYS_DESC(SYS_CTR_EL0), access_ctr },
>>>>> { SYS_DESC(SYS_SVCR), undef_access },
>>>>>
>>>>> - { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr,
>>>>> - .reset = reset_pmcr, .reg = PMCR_EL0, .get_user = get_pmcr },
>>>>> + { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr,
>>>>> + .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr },
>>>>> { PMU_SYS_REG(PMCNTENSET_EL0),
>>>>> .access = access_pmcnten, .reg = PMCNTENSET_EL0 },
>>>>> { PMU_SYS_REG(PMCNTENCLR_EL0),
>>>>
>>>> --
>>>> Shaoqin
>>>>
>>>
>>
>> --
>> Shaoqin
>>
>
--
Shaoqin
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v5 01/12] KVM: arm64: PMU: Introduce a helper to set the guest's PMU
2023-08-17 0:30 ` [PATCH v5 01/12] KVM: arm64: PMU: Introduce a helper to set the guest's PMU Raghavendra Rao Ananta
@ 2023-09-15 19:22 ` Oliver Upton
2023-09-18 17:24 ` Raghavendra Rao Ananta
0 siblings, 1 reply; 41+ messages in thread
From: Oliver Upton @ 2023-09-15 19:22 UTC (permalink / raw)
To: Raghavendra Rao Ananta
Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
Hi Raghu,
On Thu, Aug 17, 2023 at 12:30:18AM +0000, Raghavendra Rao Ananta wrote:
> From: Reiji Watanabe <reijiw@google.com>
>
> Introduce a new helper function to set the guest's PMU
> (kvm->arch.arm_pmu), and use it when the guest's PMU needs
> to be set. This helper will make it easier for the following
> patches to modify the relevant code.
>
> No functional change intended.
>
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
> arch/arm64/kvm/pmu-emul.c | 52 +++++++++++++++++++++++++++------------
> 1 file changed, 36 insertions(+), 16 deletions(-)
>
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index 5606509724787..0ffd1efa90c07 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -865,6 +865,32 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
> return true;
> }
>
> +static int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> +{
> + lockdep_assert_held(&kvm->arch.config_lock);
> +
> + if (!arm_pmu) {
> + /*
> + * No PMU set, get the default one.
> + *
> + * The observant among you will notice that the supported_cpus
> + * mask does not get updated for the default PMU even though it
> + * is quite possible the selected instance supports only a
> + * subset of cores in the system. This is intentional, and
> + * upholds the preexisting behavior on heterogeneous systems
> + * where vCPUs can be scheduled on any core but the guest
> + * counters could stop working.
> + */
> + arm_pmu = kvm_pmu_probe_armpmu();
> + if (!arm_pmu)
> + return -ENODEV;
> + }
> +
> + kvm->arch.arm_pmu = arm_pmu;
> +
> + return 0;
> +}
> +
I'm not too big of a fan of adding the 'default' path to this helper.
I'd prefer it if kvm_arm_set_vm_pmu() does all the necessary
initialization for a valid pmu instance. You then avoid introducing
unexpected error handling where it didn't exist before.
static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
{
lockdep_assert_held(&kvm->arch.config_lock);
kvm->arch.arm_pmu = arm_pmu;
}
/*
* Blurb about default PMUs I'm too lazy to copy/paste
*/
static int kvm_arm_set_default_pmu(struct kvm *kvm)
{
struct arm_pmu *arm_pmu = kvm_pmu_probe_armpmu();
if (!arm_pmu)
return -ENODEV;
kvm_arm_set_pmu(kvm, arm_pmu);
return 0;
}
--
Thanks,
Oliver
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v5 02/12] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset
2023-08-17 0:30 ` [PATCH v5 02/12] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset Raghavendra Rao Ananta
2023-08-17 5:03 ` kernel test robot
2023-08-17 7:54 ` kernel test robot
@ 2023-09-15 19:33 ` Oliver Upton
2023-09-18 16:41 ` Raghavendra Rao Ananta
2 siblings, 1 reply; 41+ messages in thread
From: Oliver Upton @ 2023-09-15 19:33 UTC (permalink / raw)
To: Raghavendra Rao Ananta
Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
On Thu, Aug 17, 2023 at 12:30:19AM +0000, Raghavendra Rao Ananta wrote:
> From: Reiji Watanabe <reijiw@google.com>
>
> The following patches will use the number of counters from the
> arm_pmu to set PMCR_EL0.N for the guest during vCPU reset. However,
> since the guest is not associated with any arm_pmu until userspace
> configures the vPMU device attributes, and a reset can happen before
> this event, call kvm_arm_support_pmu_v3() just before doing the reset.
>
> No functional change intended.
But there absolutely is a functional change here, and user visible at
that. KVM_ARM_VCPU_INIT ioctls can now fail with -ENODEV, which is not
part of the documented errors for the interface.
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
> arch/arm64/kvm/pmu-emul.c | 9 +--------
> arch/arm64/kvm/reset.c | 18 +++++++++++++-----
> include/kvm/arm_pmu.h | 6 ++++++
> 3 files changed, 20 insertions(+), 13 deletions(-)
>
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index 0ffd1efa90c07..b87822024828a 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -865,7 +865,7 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
> return true;
> }
>
> -static int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> +int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> {
> lockdep_assert_held(&kvm->arch.config_lock);
>
> @@ -937,13 +937,6 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
> if (vcpu->arch.pmu.created)
> return -EBUSY;
>
> - if (!kvm->arch.arm_pmu) {
> - int ret = kvm_arm_set_vm_pmu(kvm, NULL);
> -
> - if (ret)
> - return ret;
> - }
> -
> switch (attr->attr) {
> case KVM_ARM_VCPU_PMU_V3_IRQ: {
> int __user *uaddr = (int __user *)(long)attr->addr;
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index bc8556b6f4590..4c20f1ccd0789 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -206,6 +206,7 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
> */
> int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> {
> + struct kvm *kvm = vcpu->kvm;
> struct vcpu_reset_state reset_state;
> int ret;
> bool loaded;
> @@ -216,6 +217,18 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> vcpu->arch.reset_state.reset = false;
> spin_unlock(&vcpu->arch.mp_state_lock);
>
> + /*
> + * When the vCPU has a PMU, but no PMU is set for the guest
> + * yet, set the default one.
> + */
> + if (kvm_vcpu_has_pmu(vcpu) && unlikely(!kvm->arch.arm_pmu)) {
> + ret = -EINVAL;
> + if (kvm_arm_support_pmu_v3())
> + ret = kvm_arm_set_vm_pmu(kvm, NULL);
> + if (ret)
> + return ret;
> + }
> +
On top of my prior suggestion w.r.t. the default PMU helper, I'd rather
see this block look like:
if (kvm_vcpu_has_pmu(vcpu)) {
if (!kvm_arm_support_pmu_v3())
return -EINVAL;
/*
* When the vCPU has a PMU but no PMU is set for the
* guest yet, set the default one.
*/
if (unlikely(!kvm->arch.arm_pmu) && kvm_set_default_pmu(kvm))
return -EINVAL;
}
This would eliminate the possibility of returning ENODEV to userspace
where we shouldn't.
--
Thanks,
Oliver
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v5 05/12] KVM: arm64: PMU: Simplify extracting PMCR_EL0.N
2023-08-17 0:30 ` [PATCH v5 05/12] KVM: arm64: PMU: Simplify extracting PMCR_EL0.N Raghavendra Rao Ananta
2023-08-17 6:38 ` kernel test robot
@ 2023-09-15 19:56 ` Oliver Upton
2023-09-18 16:53 ` Raghavendra Rao Ananta
1 sibling, 1 reply; 41+ messages in thread
From: Oliver Upton @ 2023-09-15 19:56 UTC (permalink / raw)
To: Raghavendra Rao Ananta
Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm, Will Deacon, Mark Rutland
+cc will, rutland
Hi Raghu,
Please make sure you cc the right folks for changes that poke multiple
subsystems.
The diff looks OK, but I'm somewhat dubious of the need for this change
in the context of what you're trying to accomplish for KVM. I'd prefer
we either leave the existing definition/usage intact or rework *all* of
the PMUv3 masks to be of the shifted variety.
On Thu, Aug 17, 2023 at 12:30:22AM +0000, Raghavendra Rao Ananta wrote:
> From: Reiji Watanabe <reijiw@google.com>
>
> Some code extracts PMCR_EL0.N using ARMV8_PMU_PMCR_N_SHIFT and
> ARMV8_PMU_PMCR_N_MASK. Define ARMV8_PMU_PMCR_N (0x1f << 11), and
> simplify that code using FIELD_GET() and/or ARMV8_PMU_PMCR_N.
> The following patches will also use these macros to extract PMCR_EL0.N.
Changelog is a bit wordy:
Define a shifted mask for accessing PMCR_EL0.N amenable to the use of
bitfield accessors and convert the existing, open-coded mask shifts to
the new definition.
> No functional change intended.
>
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
> arch/arm64/kvm/pmu-emul.c | 3 +--
> arch/arm64/kvm/sys_regs.c | 7 +++----
> drivers/perf/arm_pmuv3.c | 3 +--
> include/linux/perf/arm_pmuv3.h | 2 +-
> 4 files changed, 6 insertions(+), 9 deletions(-)
>
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index b87822024828a..f7b5fa16341ad 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -245,9 +245,8 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
>
> u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
> {
> - u64 val = __vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
> + u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
>
> - val &= ARMV8_PMU_PMCR_N_MASK;
> if (val == 0)
> return BIT(ARMV8_PMU_CYCLE_IDX);
> else
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 39e9248c935e7..30108f09e088b 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -750,7 +750,7 @@ static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> return 0;
>
> /* Only preserve PMCR_EL0.N, and reset the rest to 0 */
> - pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
> + pmcr = read_sysreg(pmcr_el0) & ARMV8_PMU_PMCR_N;
> if (!kvm_supports_32bit_el0())
> pmcr |= ARMV8_PMU_PMCR_LC;
>
> @@ -858,10 +858,9 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>
> static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
> {
> - u64 pmcr, val;
> + u64 val;
>
> - pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
> - val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
> + val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
> if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
> kvm_inject_undefined(vcpu);
> return false;
> diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
> index 08b3a1bf0ef62..7618b0adc0b8c 100644
> --- a/drivers/perf/arm_pmuv3.c
> +++ b/drivers/perf/arm_pmuv3.c
> @@ -1128,8 +1128,7 @@ static void __armv8pmu_probe_pmu(void *info)
> probe->present = true;
>
> /* Read the nb of CNTx counters supported from PMNC */
> - cpu_pmu->num_events = (armv8pmu_pmcr_read() >> ARMV8_PMU_PMCR_N_SHIFT)
> - & ARMV8_PMU_PMCR_N_MASK;
> + cpu_pmu->num_events = FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read());
>
> /* Add the CPU cycles counter */
> cpu_pmu->num_events += 1;
> diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h
> index e3899bd77f5cc..ecbcf3f93560c 100644
> --- a/include/linux/perf/arm_pmuv3.h
> +++ b/include/linux/perf/arm_pmuv3.h
> @@ -216,7 +216,7 @@
> #define ARMV8_PMU_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */
> #define ARMV8_PMU_PMCR_LP (1 << 7) /* Long event counter enable */
> #define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */
> -#define ARMV8_PMU_PMCR_N_MASK 0x1f
> +#define ARMV8_PMU_PMCR_N (0x1f << ARMV8_PMU_PMCR_N_SHIFT)
> #define ARMV8_PMU_PMCR_MASK 0xff /* Mask for writable bits */
>
> /*
> --
> 2.41.0.694.ge786442a9b-goog
>
>
--
Thanks,
Oliver
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
2023-08-22 3:26 ` Shaoqin Huang
@ 2023-09-15 20:36 ` Oliver Upton
2023-09-18 17:02 ` Raghavendra Rao Ananta
0 siblings, 1 reply; 41+ messages in thread
From: Oliver Upton @ 2023-09-15 20:36 UTC (permalink / raw)
To: Shaoqin Huang
Cc: Raghavendra Rao Ananta, Marc Zyngier, Alexandru Elisei,
James Morse, Suzuki K Poulose, Paolo Bonzini, Zenghui Yu,
Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel,
kvmarm, linux-kernel, kvm
On Tue, Aug 22, 2023 at 11:26:23AM +0800, Shaoqin Huang wrote:
[...]
> > > > +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> > > > + u64 val)
> > > > +{
> > > > + struct kvm *kvm = vcpu->kvm;
> > > > + u64 new_n, mutable_mask;
> > > > + int ret = 0;
> > > > +
> > > > + new_n = FIELD_GET(ARMV8_PMU_PMCR_N, val);
> > > > +
> > > > + mutex_lock(&kvm->arch.config_lock);
> > > > + if (unlikely(new_n != kvm->arch.pmcr_n)) {
> > > > + /*
> > > > + * The vCPU can't have more counters than the PMU
> > > > + * hardware implements.
> > > > + */
> > > > + if (new_n <= kvm->arch.pmcr_n_limit)
> > > > + kvm->arch.pmcr_n = new_n;
> > > > + else
> > > > + ret = -EINVAL;
> > > > + }
> > >
> > > Since we have set the default value of pmcr_n, if we want to set a new
> > > pmcr_n, shouldn't it be a different value?
> > >
> > > So how about change the checking to:
> > >
> > > if (likely(new_n <= kvm->arch.pmcr_n_limit)
> > > kvm->arch.pmcr_n = new_n;
> > > else
> > > ret = -EINVAL;
> > >
> > > what do you think?
> > >
> > Sorry, I guess I didn't fully understand your suggestion. Are you
> > saying that it's 'likely' that userspace would configure the correct
> > value?
> >
> It depends on how userspace uses this API to limit the number of counters.
> I think what you mean in the code is that userspace needs to set every
> vCPU's PMCR to the same value, so the `unlikely` here is right: only one
> vCPU actually changes kvm->arch.pmcr_n, which saves CPU cycles.
>
> What I suggested above might be wrong. When userspace wants to limit the
> number of counters, it may just set the new_n on one vCPU; since
> kvm->arch.pmcr_n is a VM-local value that every vCPU can see, it's
> `likely` that (new_n <= kvm->arch.pmcr_n_limit), which would drop one
> check.
How about we just do away with branch hints in the first place? This is
_not_ a hot path.
--
Thanks,
Oliver
* Re: [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
2023-08-17 0:30 ` [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest Raghavendra Rao Ananta
2023-08-21 12:12 ` Shaoqin Huang
2023-08-22 10:05 ` Shaoqin Huang
@ 2023-09-15 20:53 ` Oliver Upton
2023-09-15 21:54 ` Oliver Upton
2023-09-18 17:07 ` Raghavendra Rao Ananta
2 siblings, 2 replies; 41+ messages in thread
From: Oliver Upton @ 2023-09-15 20:53 UTC (permalink / raw)
To: Raghavendra Rao Ananta
Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
Hi Raghu,
On Thu, Aug 17, 2023 at 12:30:25AM +0000, Raghavendra Rao Ananta wrote:
> From: Reiji Watanabe <reijiw@google.com>
>
> KVM does not yet support userspace modifying PMCR_EL0.N (With
> the previous patch, KVM ignores what is written by upserspace).
typo: userspace
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index ce7de6bbdc967..39ad56a71ad20 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -896,6 +896,7 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> * while the latter does not.
> */
> kvm->arch.pmcr_n = arm_pmu->num_events - 1;
> + kvm->arch.pmcr_n_limit = arm_pmu->num_events - 1;
Can't we just get at this through the arm_pmu instance rather than
copying it into kvm_arch?
> return 0;
> }
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 2075901356c5b..c01d62afa7db4 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1086,6 +1086,51 @@ static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> return 0;
> }
>
> +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> + u64 val)
> +{
> + struct kvm *kvm = vcpu->kvm;
> + u64 new_n, mutable_mask;
> + int ret = 0;
> +
> + new_n = FIELD_GET(ARMV8_PMU_PMCR_N, val);
> +
> + mutex_lock(&kvm->arch.config_lock);
> + if (unlikely(new_n != kvm->arch.pmcr_n)) {
> + /*
> + * The vCPU can't have more counters than the PMU
> + * hardware implements.
> + */
> + if (new_n <= kvm->arch.pmcr_n_limit)
> + kvm->arch.pmcr_n = new_n;
> + else
> + ret = -EINVAL;
> + }
Hmm, I'm not so sure about returning an error here. ABI has it that
userspace can write any value to PMCR_EL0 successfully. Can we just
ignore writes that attempt to set PMCR_EL0.N to something higher than
supported by hardware? Our general stance should be that system register
fields responsible for feature identification are immutable after the VM
has started.
--
Thanks,
Oliver
* Re: [PATCH v5 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test
2023-08-17 0:30 ` [PATCH v5 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test Raghavendra Rao Ananta
@ 2023-09-15 21:00 ` Oliver Upton
2023-09-18 17:20 ` Raghavendra Rao Ananta
0 siblings, 1 reply; 41+ messages in thread
From: Oliver Upton @ 2023-09-15 21:00 UTC (permalink / raw)
To: Raghavendra Rao Ananta
Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
Hi Raghu,
On Thu, Aug 17, 2023 at 12:30:27AM +0000, Raghavendra Rao Ananta wrote:
> From: Reiji Watanabe <reijiw@google.com>
>
> Introduce vpmu_counter_access test for arm64 platforms.
> The test configures PMUv3 for a vCPU, sets PMCR_EL0.N for the vCPU,
> and check if the guest can consistently see the same number of the
> PMU event counters (PMCR_EL0.N) that userspace sets.
> This test case is done with each of the PMCR_EL0.N values from
> 0 to 31 (With the PMCR_EL0.N values greater than the host value,
> the test expects KVM_SET_ONE_REG for the PMCR_EL0 to fail).
>
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
> tools/testing/selftests/kvm/Makefile | 1 +
> .../kvm/aarch64/vpmu_counter_access.c | 235 ++++++++++++++++++
> 2 files changed, 236 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
>
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index c692cc86e7da8..a1599e2b82e38 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -148,6 +148,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/smccc_filter
> TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
> TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
> TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
> +TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
> TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
> TEST_GEN_PROGS_aarch64 += demand_paging_test
> TEST_GEN_PROGS_aarch64 += dirty_log_test
> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> new file mode 100644
> index 0000000000000..d0afec07948ef
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> @@ -0,0 +1,235 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * vpmu_counter_access - Test vPMU event counter access
> + *
> + * Copyright (c) 2022 Google LLC.
> + *
> + * This test checks if the guest can see the same number of the PMU event
> + * counters (PMCR_EL0.N) that userspace sets.
> + * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
> + */
> +#include <kvm_util.h>
> +#include <processor.h>
> +#include <test_util.h>
> +#include <vgic.h>
> +#include <perf/arm_pmuv3.h>
> +#include <linux/bitfield.h>
> +
> +/* The max number of the PMU event counters (excluding the cycle counter) */
> +#define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
> +
> +struct vpmu_vm {
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> + int gic_fd;
> +};
> +
nit: this test is single threaded, so there will only ever be a single
instance of a VM at a time. Dynamically allocating a backing structure
doesn't add any value, IMO.
You can just get away with using globals.
> +/*
> + * Create a guest with one vCPU, and attempt to set the PMCR_EL0.N for
> + * the vCPU to @pmcr_n, which is larger than the host value.
> + * The attempt should fail as @pmcr_n is too big to set for the vCPU.
> + */
> +static void run_error_test(uint64_t pmcr_n)
> +{
> + struct vpmu_vm *vpmu_vm;
> + struct kvm_vcpu *vcpu;
> + int ret;
> + uint64_t pmcr, pmcr_orig;
> +
> + pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
> + vpmu_vm = create_vpmu_vm(guest_code);
> + vcpu = vpmu_vm->vcpu;
> +
> + /* Update the PMCR_EL0.N with @pmcr_n */
> + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
> + pmcr = pmcr_orig & ~ARMV8_PMU_PMCR_N;
> + pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
> +
> + /* This should fail as @pmcr_n is too big to set for the vCPU */
> + ret = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
> + TEST_ASSERT(ret, "Setting PMCR to 0x%lx (orig PMCR 0x%lx) didn't fail",
> + pmcr, pmcr_orig);
The failure pattern for this should now be that the write to PMCR_EL0.N
had no effect.
--
Thanks,
Oliver
* Re: [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
2023-09-15 20:53 ` Oliver Upton
@ 2023-09-15 21:54 ` Oliver Upton
2023-09-18 17:11 ` Raghavendra Rao Ananta
2023-09-18 17:07 ` Raghavendra Rao Ananta
1 sibling, 1 reply; 41+ messages in thread
From: Oliver Upton @ 2023-09-15 21:54 UTC (permalink / raw)
To: Raghavendra Rao Ananta
Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
On Fri, Sep 15, 2023 at 08:53:16PM +0000, Oliver Upton wrote:
> Hi Raghu,
>
> On Thu, Aug 17, 2023 at 12:30:25AM +0000, Raghavendra Rao Ananta wrote:
> > From: Reiji Watanabe <reijiw@google.com>
> >
> > KVM does not yet support userspace modifying PMCR_EL0.N (With
> > the previous patch, KVM ignores what is written by upserspace).
>
> typo: userspace
>
> > diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> > index ce7de6bbdc967..39ad56a71ad20 100644
> > --- a/arch/arm64/kvm/pmu-emul.c
> > +++ b/arch/arm64/kvm/pmu-emul.c
> > @@ -896,6 +896,7 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> > * while the latter does not.
> > */
> > kvm->arch.pmcr_n = arm_pmu->num_events - 1;
> > + kvm->arch.pmcr_n_limit = arm_pmu->num_events - 1;
>
> Can't we just get at this through the arm_pmu instance rather than
> copying it into kvm_arch?
>
> > return 0;
> > }
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 2075901356c5b..c01d62afa7db4 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -1086,6 +1086,51 @@ static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> > return 0;
> > }
> >
> > +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> > + u64 val)
> > +{
> > + struct kvm *kvm = vcpu->kvm;
> > + u64 new_n, mutable_mask;
> > + int ret = 0;
> > +
> > + new_n = FIELD_GET(ARMV8_PMU_PMCR_N, val);
> > +
> > + mutex_lock(&kvm->arch.config_lock);
> > + if (unlikely(new_n != kvm->arch.pmcr_n)) {
> > + /*
> > + * The vCPU can't have more counters than the PMU
> > + * hardware implements.
> > + */
> > + if (new_n <= kvm->arch.pmcr_n_limit)
> > + kvm->arch.pmcr_n = new_n;
> > + else
> > + ret = -EINVAL;
> > + }
>
> Hmm, I'm not so sure about returning an error here. ABI has it that
> userspace can write any value to PMCR_EL0 successfully. Can we just
> ignore writes that attempt to set PMCR_EL0.N to something higher than
> supported by hardware? Our general stance should be that system register
> fields responsible for feature identification are immutable after the VM
> has started.
I hacked up my reply and dropped some context; this doesn't read right.
Shaoqin made the point about preventing changes to PMCR_EL0.N after the
VM has started and I firmly agree. The behavior should be:
- Writes to PMCR always succeed
- PMCR_EL0.N values greater than what's supported by hardware are
ignored
- Changes to N after the VM has started are ignored.
--
Thanks,
Oliver
* Re: [PATCH v5 02/12] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset
2023-09-15 19:33 ` Oliver Upton
@ 2023-09-18 16:41 ` Raghavendra Rao Ananta
2023-09-18 16:47 ` Oliver Upton
0 siblings, 1 reply; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-09-18 16:41 UTC (permalink / raw)
To: Oliver Upton
Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
On Fri, Sep 15, 2023 at 12:33 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Thu, Aug 17, 2023 at 12:30:19AM +0000, Raghavendra Rao Ananta wrote:
> > From: Reiji Watanabe <reijiw@google.com>
> >
> > The following patches will use the number of counters information
> > from the arm_pmu and use this to set the PMCR.N for the guest
> > during vCPU reset. However, since the guest is not associated
> > with any arm_pmu until userspace configures the vPMU device
> > attributes, and a reset can happen before this event, call
> > kvm_arm_support_pmu_v3() just before doing the reset.
> >
> > No functional change intended.
>
> But there absolutely is a functional change here, and user visible at
> that. KVM_ARM_VCPU_INIT ioctls can now fail with -ENODEV, which is not
> part of the documented errors for the interface.
>
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> > arch/arm64/kvm/pmu-emul.c | 9 +--------
> > arch/arm64/kvm/reset.c | 18 +++++++++++++-----
> > include/kvm/arm_pmu.h | 6 ++++++
> > 3 files changed, 20 insertions(+), 13 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> > index 0ffd1efa90c07..b87822024828a 100644
> > --- a/arch/arm64/kvm/pmu-emul.c
> > +++ b/arch/arm64/kvm/pmu-emul.c
> > @@ -865,7 +865,7 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
> > return true;
> > }
> >
> > -static int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> > +int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> > {
> > lockdep_assert_held(&kvm->arch.config_lock);
> >
> > @@ -937,13 +937,6 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
> > if (vcpu->arch.pmu.created)
> > return -EBUSY;
> >
> > - if (!kvm->arch.arm_pmu) {
> > - int ret = kvm_arm_set_vm_pmu(kvm, NULL);
> > -
> > - if (ret)
> > - return ret;
> > - }
> > -
> > switch (attr->attr) {
> > case KVM_ARM_VCPU_PMU_V3_IRQ: {
> > int __user *uaddr = (int __user *)(long)attr->addr;
> > diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> > index bc8556b6f4590..4c20f1ccd0789 100644
> > --- a/arch/arm64/kvm/reset.c
> > +++ b/arch/arm64/kvm/reset.c
> > @@ -206,6 +206,7 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
> > */
> > int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> > {
> > + struct kvm *kvm = vcpu->kvm;
> > struct vcpu_reset_state reset_state;
> > int ret;
> > bool loaded;
> > @@ -216,6 +217,18 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> > vcpu->arch.reset_state.reset = false;
> > spin_unlock(&vcpu->arch.mp_state_lock);
> >
> > + /*
> > + * When the vCPU has a PMU, but no PMU is set for the guest
> > + * yet, set the default one.
> > + */
> > + if (kvm_vcpu_has_pmu(vcpu) && unlikely(!kvm->arch.arm_pmu)) {
> > + ret = -EINVAL;
> > + if (kvm_arm_support_pmu_v3())
> > + ret = kvm_arm_set_vm_pmu(kvm, NULL);
> > + if (ret)
> > + return ret;
> > + }
> > +
>
> On top of my prior suggestion w.r.t. the default PMU helper, I'd rather
> see this block look like:
>
> if (kvm_vcpu_has_pmu(vcpu)) {
> if (!kvm_arm_support_pmu_v3())
> return -EINVAL;
> /*
> * When the vCPU has a PMU but no PMU is set for the
> * guest yet, set the default one.
> */
> if (unlikely(!kvm->arch.arm_pmu) && kvm_set_default_pmu(kvm))
> return -EINVAL;
> }
>
> This would eliminate the possibility of returning ENODEV to userspace
> where we shouldn't.
>
I understand that we'll be breaking the API contract and userspace may
have to adapt to this change, but is it not acceptable to document and
return ENODEV, since ENODEV may offer more clarity to userspace as to
why the ioctl failed? In general, do we never extend the APIs?
Thank you.
Raghavendra
> --
> Thanks,
> Oliver
* Re: [PATCH v5 02/12] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset
2023-09-18 16:41 ` Raghavendra Rao Ananta
@ 2023-09-18 16:47 ` Oliver Upton
2023-09-18 16:58 ` Raghavendra Rao Ananta
0 siblings, 1 reply; 41+ messages in thread
From: Oliver Upton @ 2023-09-18 16:47 UTC (permalink / raw)
To: Raghavendra Rao Ananta
Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
On Mon, Sep 18, 2023 at 09:41:02AM -0700, Raghavendra Rao Ananta wrote:
> On Fri, Sep 15, 2023 at 12:33 PM Oliver Upton <oliver.upton@linux.dev> wrote:
[...]
> > This would eliminate the possibility of returning ENODEV to userspace
> > where we shouldn't.
> >
> I understand that we'll be breaking the API contract and userspace may
> have to adapt to this change, but is it not acceptable to document and
> return ENODEV, since ENODEV may offer more clarity to userspace as to
> why the ioctl failed? In general, do we never extend the APIs?
Yes, we extend the existing interfaces all the time, but we almost
always require user opt-in for user-visible changes in behavior. Look at
the way arm64_check_features() is handled -- we hide the 'detailed'
error and return EINVAL due to UAPI.
--
Thanks,
Oliver
* Re: [PATCH v5 05/12] KVM: arm64: PMU: Simplify extracting PMCR_EL0.N
2023-09-15 19:56 ` Oliver Upton
@ 2023-09-18 16:53 ` Raghavendra Rao Ananta
0 siblings, 0 replies; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-09-18 16:53 UTC (permalink / raw)
To: Oliver Upton
Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm, Will Deacon, Mark Rutland
Hi Oliver,
On Fri, Sep 15, 2023 at 12:56 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> +cc will, rutland
>
> Hi Raghu,
>
> Please make sure you cc the right folks for changes that poke multiple
> subsystems.
>
> The diff looks OK, but I'm somewhat dubious of the need for this change
> in the context of what you're trying to accomplish for KVM. I'd prefer
> we either leave the existing definition/usage intact or rework *all* of
> the PMUv3 masks to be of the shifted variety.
>
I believe the original intention was to make accessing the PMCR.N
field simple. However, if you feel it's redundant and unnecessary
for this series, I'm happy to drop the patch.
Thank you.
Raghavendra
* Re: [PATCH v5 02/12] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset
2023-09-18 16:47 ` Oliver Upton
@ 2023-09-18 16:58 ` Raghavendra Rao Ananta
0 siblings, 0 replies; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-09-18 16:58 UTC (permalink / raw)
To: Oliver Upton
Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
On Mon, Sep 18, 2023 at 9:47 AM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Mon, Sep 18, 2023 at 09:41:02AM -0700, Raghavendra Rao Ananta wrote:
> > On Fri, Sep 15, 2023 at 12:33 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> [...]
>
> > > This would eliminate the possibility of returning ENODEV to userspace
> > > where we shouldn't.
> > >
> > I understand that we'll be breaking the API contract and userspace may
> > have to adapt to this change, but is it not acceptable to document and
> > return ENODEV, since ENODEV may offer more clarity to userspace as to
> > why the ioctl failed? In general, do we never extend the APIs?
>
> Yes, we extend the existing interfaces all the time, but we almost
> always require user opt in for user-visible changes in behavior. Look at
> the way arm64_check_features() is handled -- we hide the 'detailed'
> error and return EINVAL due to UAPI.
>
Got it. Let's return EINVAL then. Thanks!
- Raghavendra
> --
> Thanks,
> Oliver
* Re: [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
2023-09-15 20:36 ` Oliver Upton
@ 2023-09-18 17:02 ` Raghavendra Rao Ananta
0 siblings, 0 replies; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-09-18 17:02 UTC (permalink / raw)
To: Oliver Upton
Cc: Shaoqin Huang, Marc Zyngier, Alexandru Elisei, James Morse,
Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
On Fri, Sep 15, 2023 at 1:36 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Tue, Aug 22, 2023 at 11:26:23AM +0800, Shaoqin Huang wrote:
>
> [...]
>
> > > > > +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> > > > > + u64 val)
> > > > > +{
> > > > > + struct kvm *kvm = vcpu->kvm;
> > > > > + u64 new_n, mutable_mask;
> > > > > + int ret = 0;
> > > > > +
> > > > > + new_n = FIELD_GET(ARMV8_PMU_PMCR_N, val);
> > > > > +
> > > > > + mutex_lock(&kvm->arch.config_lock);
> > > > > + if (unlikely(new_n != kvm->arch.pmcr_n)) {
> > > > > + /*
> > > > > + * The vCPU can't have more counters than the PMU
> > > > > + * hardware implements.
> > > > > + */
> > > > > + if (new_n <= kvm->arch.pmcr_n_limit)
> > > > > + kvm->arch.pmcr_n = new_n;
> > > > > + else
> > > > > + ret = -EINVAL;
> > > > > + }
> > > >
> > > > Since we have set the default value of pmcr_n, if we want to set a new
> > > > pmcr_n, shouldn't it be a different value?
> > > >
> > > > So how about change the checking to:
> > > >
> > > > if (likely(new_n <= kvm->arch.pmcr_n_limit)
> > > > kvm->arch.pmcr_n = new_n;
> > > > else
> > > > ret = -EINVAL;
> > > >
> > > > what do you think?
> > > >
> > > Sorry, I guess I didn't fully understand your suggestion. Are you
> > > saying that it's 'likely' that userspace would configure the correct
> > > value?
> > >
> > It depends on how userspace uses this API to limit the number of counters.
> > I think what you mean in the code is that userspace needs to set every
> > vCPU's PMCR to the same value, so the `unlikely` here is right: only one
> > vCPU actually changes kvm->arch.pmcr_n, which saves CPU cycles.
> >
> > What I suggested above might be wrong. When userspace wants to limit the
> > number of counters, it may just set the new_n on one vCPU; since
> > kvm->arch.pmcr_n is a VM-local value that every vCPU can see, it's
> > `likely` that (new_n <= kvm->arch.pmcr_n_limit), which would drop one
> > check.
>
> How about we just do away with branch hints in the first place? This is
> _not_ a hot path.
>
Sounds good to me.
Thank you.
Raghavendra
> --
> Thanks,
> Oliver
* Re: [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
2023-09-15 20:53 ` Oliver Upton
2023-09-15 21:54 ` Oliver Upton
@ 2023-09-18 17:07 ` Raghavendra Rao Ananta
1 sibling, 0 replies; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-09-18 17:07 UTC (permalink / raw)
To: Oliver Upton
Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
On Fri, Sep 15, 2023 at 1:53 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Hi Raghu,
>
> On Thu, Aug 17, 2023 at 12:30:25AM +0000, Raghavendra Rao Ananta wrote:
> > From: Reiji Watanabe <reijiw@google.com>
> >
> > KVM does not yet support userspace modifying PMCR_EL0.N (With
> > the previous patch, KVM ignores what is written by upserspace).
>
> typo: userspace
>
Noted.
> > diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> > index ce7de6bbdc967..39ad56a71ad20 100644
> > --- a/arch/arm64/kvm/pmu-emul.c
> > +++ b/arch/arm64/kvm/pmu-emul.c
> > @@ -896,6 +896,7 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> > * while the latter does not.
> > */
> > kvm->arch.pmcr_n = arm_pmu->num_events - 1;
> > + kvm->arch.pmcr_n_limit = arm_pmu->num_events - 1;
>
> Can't we just get at this through the arm_pmu instance rather than
> copying it into kvm_arch?
>
Yeah, I suppose we can directly access it in set_pmcr().
Thank you.
Raghavendra
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
2023-09-15 21:54 ` Oliver Upton
@ 2023-09-18 17:11 ` Raghavendra Rao Ananta
2023-09-18 17:22 ` Raghavendra Rao Ananta
0 siblings, 1 reply; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-09-18 17:11 UTC (permalink / raw)
To: Oliver Upton
Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
On Fri, Sep 15, 2023 at 2:54 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Fri, Sep 15, 2023 at 08:53:16PM +0000, Oliver Upton wrote:
> > Hi Raghu,
> >
> > On Thu, Aug 17, 2023 at 12:30:25AM +0000, Raghavendra Rao Ananta wrote:
> > > From: Reiji Watanabe <reijiw@google.com>
> > >
> > > KVM does not yet support userspace modifying PMCR_EL0.N (With
> > > the previous patch, KVM ignores what is written by upserspace).
> >
> > typo: userspace
> >
> > > diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> > > index ce7de6bbdc967..39ad56a71ad20 100644
> > > --- a/arch/arm64/kvm/pmu-emul.c
> > > +++ b/arch/arm64/kvm/pmu-emul.c
> > > @@ -896,6 +896,7 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> > > * while the latter does not.
> > > */
> > > kvm->arch.pmcr_n = arm_pmu->num_events - 1;
> > > + kvm->arch.pmcr_n_limit = arm_pmu->num_events - 1;
> >
> > Can't we just get at this through the arm_pmu instance rather than
> > copying it into kvm_arch?
> >
> > > return 0;
> > > }
> > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > index 2075901356c5b..c01d62afa7db4 100644
> > > --- a/arch/arm64/kvm/sys_regs.c
> > > +++ b/arch/arm64/kvm/sys_regs.c
> > > @@ -1086,6 +1086,51 @@ static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> > > return 0;
> > > }
> > >
> > > +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> > > + u64 val)
> > > +{
> > > + struct kvm *kvm = vcpu->kvm;
> > > + u64 new_n, mutable_mask;
> > > + int ret = 0;
> > > +
> > > + new_n = FIELD_GET(ARMV8_PMU_PMCR_N, val);
> > > +
> > > + mutex_lock(&kvm->arch.config_lock);
> > > + if (unlikely(new_n != kvm->arch.pmcr_n)) {
> > > + /*
> > > + * The vCPU can't have more counters than the PMU
> > > + * hardware implements.
> > > + */
> > > + if (new_n <= kvm->arch.pmcr_n_limit)
> > > + kvm->arch.pmcr_n = new_n;
> > > + else
> > > + ret = -EINVAL;
> > > + }
> >
> > Hmm, I'm not so sure about returning an error here. ABI has it that
> > userspace can write any value to PMCR_EL0 successfully. Can we just
> > ignore writes that attempt to set PMCR_EL0.N to something higher than
> > supported by hardware? Our general stance should be that system register
> > fields responsible for feature identification are immutable after the VM
> > has started.
>
> I hacked up my reply and dropped some context; this doesn't read right.
> Shaoqin made the point about preventing changes to PMCR_EL0.N after the
> VM has started and I firmly agree. The behavior should be:
>
> - Writes to PMCR always succeed
>
> - PMCR_EL0.N values greater than what's supported by hardware are
> ignored
>
> - Changes to N after the VM has started are ignored.
>
Reiji and I were wondering whether we should proceed with this, since it
would change userspace's expectations. BTW, when you said "ignored", do you
mean we silently return success to userspace, or return EBUSY (changing
the expectations)?
Thank you.
Raghavendra
> --
> Thanks,
> Oliver
^ permalink raw reply [flat|nested] 41+ messages in thread
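The ignore semantics Oliver outlines above can be sketched as a small userspace simulation. This is a hedged sketch, not the actual KVM code: the struct, the `ran_once` flag, and the field names are stand-ins for the real `kvm_arch` state discussed in the thread.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define ARMV8_PMU_PMCR_N_SHIFT 11
#define ARMV8_PMU_PMCR_N_MASK  0x1fULL

/* Stand-ins for the kvm_arch fields discussed in the thread. */
struct kvm_arch_sim {
	uint64_t pmcr_n;       /* current guest-visible PMCR_EL0.N */
	uint64_t pmcr_n_limit; /* max N supported by the backing PMU */
	bool ran_once;         /* has the VM started? */
};

/*
 * Proposed behavior: writes always "succeed" (return 0); an N above
 * the hardware limit, or any change after the VM has started, is
 * silently ignored rather than failing with -EINVAL.
 */
static int set_pmcr_sim(struct kvm_arch_sim *arch, uint64_t val)
{
	uint64_t new_n = (val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;

	if (arch->ran_once)
		return 0;		/* N is immutable after VM start */

	if (new_n <= arch->pmcr_n_limit)
		arch->pmcr_n = new_n;	/* otherwise: silently ignored */

	return 0;
}
```

This mirrors the stance stated above: feature-identification register fields behave like immutable ID registers once the VM runs, while the write itself never breaks the existing "PMCR writes succeed" ABI.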
* Re: [PATCH v5 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test
2023-09-15 21:00 ` Oliver Upton
@ 2023-09-18 17:20 ` Raghavendra Rao Ananta
0 siblings, 0 replies; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-09-18 17:20 UTC (permalink / raw)
To: Oliver Upton
Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
Hi Oliver,
On Fri, Sep 15, 2023 at 2:00 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Hi Raghu,
>
> On Thu, Aug 17, 2023 at 12:30:27AM +0000, Raghavendra Rao Ananta wrote:
> > From: Reiji Watanabe <reijiw@google.com>
> >
> > Introduce vpmu_counter_access test for arm64 platforms.
> > The test configures PMUv3 for a vCPU, sets PMCR_EL0.N for the vCPU,
> > and check if the guest can consistently see the same number of the
> > PMU event counters (PMCR_EL0.N) that userspace sets.
> > This test case is done with each of the PMCR_EL0.N values from
> > 0 to 31 (With the PMCR_EL0.N values greater than the host value,
> > the test expects KVM_SET_ONE_REG for the PMCR_EL0 to fail).
> >
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> > tools/testing/selftests/kvm/Makefile | 1 +
> > .../kvm/aarch64/vpmu_counter_access.c | 235 ++++++++++++++++++
> > 2 files changed, 236 insertions(+)
> > create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> >
> > diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> > index c692cc86e7da8..a1599e2b82e38 100644
> > --- a/tools/testing/selftests/kvm/Makefile
> > +++ b/tools/testing/selftests/kvm/Makefile
> > @@ -148,6 +148,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/smccc_filter
> > TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
> > TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
> > TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
> > +TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
> > TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
> > TEST_GEN_PROGS_aarch64 += demand_paging_test
> > TEST_GEN_PROGS_aarch64 += dirty_log_test
> > diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > new file mode 100644
> > index 0000000000000..d0afec07948ef
> > --- /dev/null
> > +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> > @@ -0,0 +1,235 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/*
> > + * vpmu_counter_access - Test vPMU event counter access
> > + *
> > + * Copyright (c) 2022 Google LLC.
> > + *
> > + * This test checks if the guest can see the same number of the PMU event
> > + * counters (PMCR_EL0.N) that userspace sets.
> > + * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
> > + */
> > +#include <kvm_util.h>
> > +#include <processor.h>
> > +#include <test_util.h>
> > +#include <vgic.h>
> > +#include <perf/arm_pmuv3.h>
> > +#include <linux/bitfield.h>
> > +
> > +/* The max number of the PMU event counters (excluding the cycle counter) */
> > +#define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
> > +
> > +struct vpmu_vm {
> > + struct kvm_vm *vm;
> > + struct kvm_vcpu *vcpu;
> > + int gic_fd;
> > +};
> > +
>
> nit: this test is single threaded, so there will only ever be a single
> instance of a VM at a time. Dynamically allocating a backing structure
> doesn't add any value, IMO.
>
> You can just get away with using globals.
>
Probably. I can try to have a single global.
> > +/*
> > + * Create a guest with one vCPU, and attempt to set the PMCR_EL0.N for
> > + * the vCPU to @pmcr_n, which is larger than the host value.
> > + * The attempt should fail as @pmcr_n is too big to set for the vCPU.
> > + */
> > +static void run_error_test(uint64_t pmcr_n)
> > +{
> > + struct vpmu_vm *vpmu_vm;
> > + struct kvm_vcpu *vcpu;
> > + int ret;
> > + uint64_t pmcr, pmcr_orig;
> > +
> > + pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
> > + vpmu_vm = create_vpmu_vm(guest_code);
> > + vcpu = vpmu_vm->vcpu;
> > +
> > + /* Update the PMCR_EL0.N with @pmcr_n */
> > + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
> > + pmcr = pmcr_orig & ~ARMV8_PMU_PMCR_N;
> > + pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
> > +
> > + /* This should fail as @pmcr_n is too big to set for the vCPU */
> > + ret = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
> > + TEST_ASSERT(ret, "Setting PMCR to 0x%lx (orig PMCR 0x%lx) didn't fail",
> > + pmcr, pmcr_orig);
>
> The failure pattern for this should now be the write to PMCR_EL0.N had
> no effect.
>
Right. I'll make the change.
Thank you.
Raghavendra
> --
> Thanks,
> Oliver
^ permalink raw reply [flat|nested] 41+ messages in thread
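The revised failure pattern Oliver asks for (the write succeeds, but PMCR_EL0.N is unaffected) could be checked along these lines. This is a sketch with stand-in accessors (`fake_set_pmcr`, `fake_pmcr`) replacing the real selftest register API, under the assumption that out-of-range writes are silently ignored:

```c
#include <assert.h>
#include <stdint.h>

#define ARMV8_PMU_PMCR_N_SHIFT 11
#define ARMV8_PMU_PMCR_N       (0x1fULL << ARMV8_PMU_PMCR_N_SHIFT)

/* Stand-in for the vCPU's guest-visible PMCR_EL0; host N is 6 here. */
static uint64_t fake_pmcr = 6ULL << ARMV8_PMU_PMCR_N_SHIFT;

/*
 * Stand-in for __vcpu_set_reg() under the proposed ABI: the write
 * "succeeds", but an out-of-range N is silently dropped.
 */
static int fake_set_pmcr(uint64_t val)
{
	(void)val;
	return 0;
}

/* Revised check: success + unchanged N, instead of expecting failure. */
static void run_error_test_sim(uint64_t pmcr_n)
{
	uint64_t pmcr, pmcr_orig;

	pmcr_orig = fake_pmcr;
	pmcr = (pmcr_orig & ~ARMV8_PMU_PMCR_N) |
	       (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);

	assert(fake_set_pmcr(pmcr) == 0);	/* write must succeed */
	assert(fake_pmcr == pmcr_orig);		/* ...but N is unchanged */
}
```

The assertion pair is the inverse of the original test above: `TEST_ASSERT(ret, ...)` on a failing write becomes a success check plus a read-back confirming N was not modified.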
* Re: [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
2023-09-18 17:11 ` Raghavendra Rao Ananta
@ 2023-09-18 17:22 ` Raghavendra Rao Ananta
0 siblings, 0 replies; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-09-18 17:22 UTC (permalink / raw)
To: Oliver Upton
Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
Hi Oliver,
On Mon, Sep 18, 2023 at 10:11 AM Raghavendra Rao Ananta
<rananta@google.com> wrote:
>
> On Fri, Sep 15, 2023 at 2:54 PM Oliver Upton <oliver.upton@linux.dev> wrote:
> >
> > On Fri, Sep 15, 2023 at 08:53:16PM +0000, Oliver Upton wrote:
> > > Hi Raghu,
> > >
> > > On Thu, Aug 17, 2023 at 12:30:25AM +0000, Raghavendra Rao Ananta wrote:
> > > > From: Reiji Watanabe <reijiw@google.com>
> > > >
> > > > KVM does not yet support userspace modifying PMCR_EL0.N (With
> > > > the previous patch, KVM ignores what is written by upserspace).
> > >
> > > typo: userspace
> > >
> > > > diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> > > > index ce7de6bbdc967..39ad56a71ad20 100644
> > > > --- a/arch/arm64/kvm/pmu-emul.c
> > > > +++ b/arch/arm64/kvm/pmu-emul.c
> > > > @@ -896,6 +896,7 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> > > > * while the latter does not.
> > > > */
> > > > kvm->arch.pmcr_n = arm_pmu->num_events - 1;
> > > > + kvm->arch.pmcr_n_limit = arm_pmu->num_events - 1;
> > >
> > > Can't we just get at this through the arm_pmu instance rather than
> > > copying it into kvm_arch?
> > >
> > > > return 0;
> > > > }
> > > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > > index 2075901356c5b..c01d62afa7db4 100644
> > > > --- a/arch/arm64/kvm/sys_regs.c
> > > > +++ b/arch/arm64/kvm/sys_regs.c
> > > > @@ -1086,6 +1086,51 @@ static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> > > > return 0;
> > > > }
> > > >
> > > > +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> > > > + u64 val)
> > > > +{
> > > > + struct kvm *kvm = vcpu->kvm;
> > > > + u64 new_n, mutable_mask;
> > > > + int ret = 0;
> > > > +
> > > > + new_n = FIELD_GET(ARMV8_PMU_PMCR_N, val);
> > > > +
> > > > + mutex_lock(&kvm->arch.config_lock);
> > > > + if (unlikely(new_n != kvm->arch.pmcr_n)) {
> > > > + /*
> > > > + * The vCPU can't have more counters than the PMU
> > > > + * hardware implements.
> > > > + */
> > > > + if (new_n <= kvm->arch.pmcr_n_limit)
> > > > + kvm->arch.pmcr_n = new_n;
> > > > + else
> > > > + ret = -EINVAL;
> > > > + }
> > >
> > > Hmm, I'm not so sure about returning an error here. ABI has it that
> > > userspace can write any value to PMCR_EL0 successfully. Can we just
> > > ignore writes that attempt to set PMCR_EL0.N to something higher than
> > > supported by hardware? Our general stance should be that system register
> > > fields responsible for feature identification are immutable after the VM
> > > has started.
> >
> > I hacked up my reply and dropped some context; this doesn't read right.
> > Shaoqin made the point about preventing changes to PMCR_EL0.N after the
> > VM has started and I firmly agree. The behavior should be:
> >
> > - Writes to PMCR always succeed
> >
> > - PMCR_EL0.N values greater than what's supported by hardware are
> > ignored
> >
> > - Changes to N after the VM has started are ignored.
> >
> Reiji and I were wondering whether we should proceed with this, since it
> would change userspace's expectations. BTW, when you said "ignored", do you
> mean we silently return success to userspace, or return EBUSY (changing
> the expectations)?
>
Sorry, I just read your earlier comment (the one before you detailed the
behavior), from which I gather that "ignore" means simply disregarding the
change and returning success to userspace. But wouldn't silently dropping
the write make this harder to debug from userspace?
Thank you.
Raghavendra
> Thank you.
> Raghavendra
> > --
> > Thanks,
> > Oliver
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v5 01/12] KVM: arm64: PMU: Introduce a helper to set the guest's PMU
2023-09-15 19:22 ` Oliver Upton
@ 2023-09-18 17:24 ` Raghavendra Rao Ananta
0 siblings, 0 replies; 41+ messages in thread
From: Raghavendra Rao Ananta @ 2023-09-18 17:24 UTC (permalink / raw)
To: Oliver Upton
Cc: Marc Zyngier, Alexandru Elisei, James Morse, Suzuki K Poulose,
Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang,
Reiji Watanabe, Colton Lewis, linux-arm-kernel, kvmarm,
linux-kernel, kvm
On Fri, Sep 15, 2023 at 12:22 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Hi Raghu,
>
> On Thu, Aug 17, 2023 at 12:30:18AM +0000, Raghavendra Rao Ananta wrote:
> > From: Reiji Watanabe <reijiw@google.com>
> >
> > Introduce a new helper function to set the guest's PMU
> > (kvm->arch.arm_pmu), and use it when the guest's PMU needs
> > to be set. This helper will make it easier for the following
> > patches to modify the relevant code.
> >
> > No functional change intended.
> >
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> > arch/arm64/kvm/pmu-emul.c | 52 +++++++++++++++++++++++++++------------
> > 1 file changed, 36 insertions(+), 16 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> > index 5606509724787..0ffd1efa90c07 100644
> > --- a/arch/arm64/kvm/pmu-emul.c
> > +++ b/arch/arm64/kvm/pmu-emul.c
> > @@ -865,6 +865,32 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
> > return true;
> > }
> >
> > +static int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> > +{
> > + lockdep_assert_held(&kvm->arch.config_lock);
> > +
> > + if (!arm_pmu) {
> > + /*
> > + * No PMU set, get the default one.
> > + *
> > + * The observant among you will notice that the supported_cpus
> > + * mask does not get updated for the default PMU even though it
> > + * is quite possible the selected instance supports only a
> > + * subset of cores in the system. This is intentional, and
> > + * upholds the preexisting behavior on heterogeneous systems
> > + * where vCPUs can be scheduled on any core but the guest
> > + * counters could stop working.
> > + */
> > + arm_pmu = kvm_pmu_probe_armpmu();
> > + if (!arm_pmu)
> > + return -ENODEV;
> > + }
> > +
> > + kvm->arch.arm_pmu = arm_pmu;
> > +
> > + return 0;
> > +}
> > +
>
> I'm not too big of a fan of adding the 'default' path to this helper.
> I'd prefer it if kvm_arm_set_vm_pmu() does all the necessary
> initialization for a valid pmu instance. You then avoid introducing
> unexpected error handling where it didn't exist before.
>
> static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> {
> lockdep_assert_held(&kvm->arch.config_lock);
>
> kvm->arch.arm_pmu = arm_pmu;
> }
>
> /*
> * Blurb about default PMUs I'm too lazy to copy/paste
> */
> static int kvm_arm_set_default_pmu(struct kvm *kvm)
> {
> struct arm_pmu *arm_pmu = kvm_pmu_probe_armpmu();
>
> if (!arm_pmu)
> return -ENODEV;
>
> kvm_arm_set_pmu(kvm, arm_pmu);
> return 0;
> }
>
Sounds good. We can adapt to your suggestion.
Thank you.
Raghavendra
> --
> Thanks,
> Oliver
^ permalink raw reply [flat|nested] 41+ messages in thread
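Oliver's suggested split above (an infallible setter plus a default path that owns the only error case) can be exercised as a small userspace simulation. The types and the fake probe are stand-ins, not the real KVM implementation:

```c
#include <assert.h>
#include <stddef.h>

struct arm_pmu_sim { int num_events; };
struct kvm_sim    { struct arm_pmu_sim *arm_pmu; };

/* Fake probe standing in for kvm_pmu_probe_armpmu(). */
static struct arm_pmu_sim host_pmu = { .num_events = 7 };

static struct arm_pmu_sim *fake_probe(void)
{
	return &host_pmu;
}

/* The setter never fails for a valid pmu instance... */
static void kvm_arm_set_pmu_sim(struct kvm_sim *kvm, struct arm_pmu_sim *arm_pmu)
{
	kvm->arm_pmu = arm_pmu;
}

/* ...while the default path owns the only error case: probe failure. */
static int kvm_arm_set_default_pmu_sim(struct kvm_sim *kvm)
{
	struct arm_pmu_sim *arm_pmu = fake_probe();

	if (!arm_pmu)
		return -1;	/* -ENODEV in the real code */

	kvm_arm_set_pmu_sim(kvm, arm_pmu);
	return 0;
}
```

The design point being illustrated is the one Oliver makes: callers that already hold a valid `arm_pmu` never see an error path, so no unexpected error handling is introduced where none existed before.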
end of thread, other threads:[~2023-09-18 17:25 UTC | newest]
Thread overview: 41+ messages
2023-08-17 0:30 [PATCH v5 00/12] KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 01/12] KVM: arm64: PMU: Introduce a helper to set the guest's PMU Raghavendra Rao Ananta
2023-09-15 19:22 ` Oliver Upton
2023-09-18 17:24 ` Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 02/12] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset Raghavendra Rao Ananta
2023-08-17 5:03 ` kernel test robot
2023-08-17 7:54 ` kernel test robot
2023-09-15 19:33 ` Oliver Upton
2023-09-18 16:41 ` Raghavendra Rao Ananta
2023-09-18 16:47 ` Oliver Upton
2023-09-18 16:58 ` Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} " Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 04/12] KVM: arm64: PMU: Don't define the sysreg reset() for PM{USERENR,CCFILTR}_EL0 Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 05/12] KVM: arm64: PMU: Simplify extracting PMCR_EL0.N Raghavendra Rao Ananta
2023-08-17 6:38 ` kernel test robot
2023-09-15 19:56 ` Oliver Upton
2023-09-18 16:53 ` Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 06/12] KVM: arm64: PMU: Add a helper to read a vCPU's PMCR_EL0 Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest Raghavendra Rao Ananta
2023-08-21 12:12 ` Shaoqin Huang
2023-08-21 23:28 ` Raghavendra Rao Ananta
2023-08-22 3:26 ` Shaoqin Huang
2023-09-15 20:36 ` Oliver Upton
2023-09-18 17:02 ` Raghavendra Rao Ananta
2023-08-22 10:05 ` Shaoqin Huang
2023-08-23 16:06 ` Raghavendra Rao Ananta
2023-08-24 8:50 ` Shaoqin Huang
2023-08-25 22:34 ` Raghavendra Rao Ananta
2023-08-26 2:40 ` Shaoqin Huang
2023-09-15 20:53 ` Oliver Upton
2023-09-15 21:54 ` Oliver Upton
2023-09-18 17:11 ` Raghavendra Rao Ananta
2023-09-18 17:22 ` Raghavendra Rao Ananta
2023-09-18 17:07 ` Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 09/12] tools: Import arm_pmuv3.h Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test Raghavendra Rao Ananta
2023-09-15 21:00 ` Oliver Upton
2023-09-18 17:20 ` Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 11/12] KVM: selftests: aarch64: vPMU register test for implemented counters Raghavendra Rao Ananta
2023-08-17 0:30 ` [PATCH v5 12/12] KVM: selftests: aarch64: vPMU register test for unimplemented counters Raghavendra Rao Ananta