* [PATCH v2 00/14] KVM: arm64: PMU: Fixing chained events, and PMUv3p5 support
From: Marc Zyngier @ 2022-10-28 10:53 UTC
  To: linux-arm-kernel, kvmarm, kvmarm, kvm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Ricardo Koller, Reiji Watanabe

Ricardo reported[0] that our PMU emulation was busted when it comes to
chained events, as we cannot expose the overflow on a 32bit boundary
(which the architecture requires).

This series aims at fixing this (by deleting a lot of code), and as a
bonus adds support for PMUv3p5, which requires us to fix a few more
things.

Tested on A53 (PMUv3) and QEMU (PMUv3p5).

* From v1 [1]:
  - Rebased on 6.1-rc2
  - New patch advertising that we always support the CHAIN event
  - Plenty of bug fixes (idreg handling, AArch32, overflow narrowing)
  - Tons of cleanups
  - All kudos to Oliver and Reiji for spending the time to review this
    mess, and Ricardo for finding more bugs!

[0] https://lore.kernel.org/r/20220805004139.990531-1-ricarkol@google.com
[1] https://lore.kernel.org/r/20220805135813.2102034-1-maz@kernel.org

Marc Zyngier (14):
  arm64: Add ID_DFR0_EL1.PerfMon values for PMUv3p7 and IMP_DEF
  KVM: arm64: PMU: Align chained counter implementation with
    architecture pseudocode
  KVM: arm64: PMU: Always advertise the CHAIN event
  KVM: arm64: PMU: Distinguish between 64bit counter and 64bit overflow
  KVM: arm64: PMU: Narrow the overflow checking when required
  KVM: arm64: PMU: Only narrow counters that are not 64bit wide
  KVM: arm64: PMU: Add counter_index_to_*reg() helpers
  KVM: arm64: PMU: Simplify setting a counter to a specific value
  KVM: arm64: PMU: Do not let AArch32 change the counters' top 32 bits
  KVM: arm64: PMU: Move the ID_AA64DFR0_EL1.PMUver limit to VM creation
  KVM: arm64: PMU: Allow ID_AA64DFR0_EL1.PMUver to be set from userspace
  KVM: arm64: PMU: Allow ID_DFR0_EL1.PerfMon to be set from userspace
  KVM: arm64: PMU: Implement PMUv3p5 long counter support
  KVM: arm64: PMU: Allow PMUv3p5 to be exposed to the guest

 arch/arm64/include/asm/kvm_host.h |   1 +
 arch/arm64/include/asm/sysreg.h   |   2 +
 arch/arm64/kvm/arm.c              |   6 +
 arch/arm64/kvm/pmu-emul.c         | 408 ++++++++++++------------------
 arch/arm64/kvm/sys_regs.c         | 135 +++++++++-
 include/kvm/arm_pmu.h             |  15 +-
 6 files changed, 307 insertions(+), 260 deletions(-)
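
For reference, the "Allow ID_AA64DFR0_EL1.PMUver to be set from
userspace" patch lets a VMM limit (or pin) the PMU version a guest
sees. A rough, hypothetical userspace sketch follows; the PMUVER_*
constants and the helper name are illustrative and not part of this
series, only KVM_{GET,SET}_ONE_REG and ARM64_SYS_REG are real:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define ID_AA64DFR0_EL1_ID	ARM64_SYS_REG(3, 0, 0, 5, 0)
#define PMUVER_SHIFT		8
#define PMUVER_MASK		(0xfULL << PMUVER_SHIFT)
#define PMUVER_V3P5		0x6

/* Cap the PMU version advertised to the guest at PMUv3p5 */
static int cap_pmuver_to_v3p5(int vcpu_fd)
{
	uint64_t val;
	struct kvm_one_reg reg = {
		.id	= ID_AA64DFR0_EL1_ID,
		.addr	= (uint64_t)(unsigned long)&val,
	};

	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
		return -1;

	val &= ~PMUVER_MASK;
	val |= (uint64_t)PMUVER_V3P5 << PMUVER_SHIFT;

	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}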

-- 
2.34.1


* [PATCH v2 01/14] arm64: Add ID_DFR0_EL1.PerfMon values for PMUv3p7 and IMP_DEF
From: Marc Zyngier @ 2022-10-28 10:53 UTC
  To: linux-arm-kernel, kvmarm, kvmarm, kvm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Ricardo Koller, Reiji Watanabe

Align the ID_DFR0_EL1.PerfMon values with ID_AA64DFR0_EL1.PMUver.
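
As an illustration only (the helper below is hypothetical and not part
of this patch), the new values allow an ID_DFR0_EL1.PerfMon field to be
checked against the same feature levels as ID_AA64DFR0_EL1.PMUver:

#include <asm/cpufeature.h>
#include <asm/sysreg.h>

static bool perfmon_is_at_least_v3p7(u64 dfr0)
{
	unsigned int perfmon;

	perfmon = cpuid_feature_extract_unsigned_field(dfr0,
						       ID_DFR0_PERFMON_SHIFT);

	/* 0xf is IMPLEMENTATION DEFINED, not a higher architected version */
	return perfmon >= ID_DFR0_PERFMON_8_7 &&
	       perfmon != ID_DFR0_PERFMON_IMP_DEF;
}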

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/sysreg.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 7d301700d1a9..84f59ce1dc6d 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -698,6 +698,8 @@
 #define ID_DFR0_PERFMON_8_1		0x4
 #define ID_DFR0_PERFMON_8_4		0x5
 #define ID_DFR0_PERFMON_8_5		0x6
+#define ID_DFR0_PERFMON_8_7		0x7
+#define ID_DFR0_PERFMON_IMP_DEF		0xf
 
 #define ID_ISAR4_SWP_FRAC_SHIFT		28
 #define ID_ISAR4_PSR_M_SHIFT		24
-- 
2.34.1


* [PATCH v2 02/14] KVM: arm64: PMU: Align chained counter implementation with architecture pseudocode
From: Marc Zyngier @ 2022-10-28 10:53 UTC
  To: linux-arm-kernel, kvmarm, kvmarm, kvm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Ricardo Koller, Reiji Watanabe

Ricardo recently pointed out that the PMU chained counter emulation
in KVM wasn't quite behaving like the one on actual hardware: the
architecture exposes an overflow on both halves of a chained counter,
while KVM would only expose the overflow on the top half.

The difference is subtle, but significant. What does the architecture
say (DDI0487 H.a):

- Up to PMUv3p4, all counters but the cycle counter are 32bit

- A 32bit counter that overflows generates a CHAIN event on the
  adjacent counter after exposing its own overflow status

- The CHAIN event is accounted if the counter is correctly
  configured (CHAIN event selected and counter enabled)

This all means that our current implementation (which uses 64bit
perf events) prevents us from emulating this overflow on the lower half.

How to fix this? By implementing the above, to the letter.
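
Schematically, the behaviour boils down to the following. This is a
self-contained sketch with made-up names (pmu_state, increment_counter,
NR_COUNTERS), not the actual KVM code added below:

#include <stdint.h>

#define NR_COUNTERS	6
#define CHAIN_EVENT	0x1e		/* ARMV8_PMUV3_PERFCTR_CHAIN */

struct pmu_state {
	uint32_t counter[NR_COUNTERS];	/* 32bit event counters */
	uint16_t evtype[NR_COUNTERS];	/* event selected by each counter */
	uint64_t enabled;		/* PMCNTENSET-like bitmap */
	uint64_t overflow;		/* PMOVSSET-like bitmap */
};

static void increment_counter(struct pmu_state *pmu, int i)
{
	if (!(pmu->enabled & (1ULL << i)))
		return;

	if (++pmu->counter[i])
		return;			/* no 32bit overflow, nothing to do */

	/* Expose our own overflow status... */
	pmu->overflow |= 1ULL << i;

	/* ...then generate a CHAIN event on the adjacent odd counter */
	if (!(i & 1) && i + 1 < NR_COUNTERS &&
	    pmu->evtype[i + 1] == CHAIN_EVENT)
		increment_counter(pmu, i + 1);
}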

This largely results in code deletion, removing the notions of
"counter pair", "chained counters", and "canonical counter".
The code is further restructured to make the CHAIN handling similar
to SWINC, as the two are now extremely similar in behaviour.

Reported-by: Ricardo Koller <ricarkol@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/pmu-emul.c | 312 ++++++++++----------------------------
 include/kvm/arm_pmu.h     |   2 -
 2 files changed, 83 insertions(+), 231 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 0003c7d37533..a38b3127f649 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -15,16 +15,14 @@
 #include <kvm/arm_pmu.h>
 #include <kvm/arm_vgic.h>
 
+#define PERF_ATTR_CFG1_COUNTER_64BIT	BIT(0)
+
 DEFINE_STATIC_KEY_FALSE(kvm_arm_pmu_available);
 
 static LIST_HEAD(arm_pmus);
 static DEFINE_MUTEX(arm_pmus_lock);
 
 static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
-static void kvm_pmu_update_pmc_chained(struct kvm_vcpu *vcpu, u64 select_idx);
-static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
-
-#define PERF_ATTR_CFG1_KVM_PMU_CHAINED 0x1
 
 static u32 kvm_pmu_event_mask(struct kvm *kvm)
 {
@@ -57,6 +55,11 @@ static bool kvm_pmu_idx_is_64bit(struct kvm_vcpu *vcpu, u64 select_idx)
 		__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_LC);
 }
 
+static bool kvm_pmu_counter_can_chain(struct kvm_vcpu *vcpu, u64 idx)
+{
+	return (!(idx & 1) && (idx + 1) < ARMV8_PMU_CYCLE_IDX);
+}
+
 static struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu;
@@ -69,91 +72,22 @@ static struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
 }
 
 /**
- * kvm_pmu_pmc_is_chained - determine if the pmc is chained
- * @pmc: The PMU counter pointer
- */
-static bool kvm_pmu_pmc_is_chained(struct kvm_pmc *pmc)
-{
-	struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
-
-	return test_bit(pmc->idx >> 1, vcpu->arch.pmu.chained);
-}
-
-/**
- * kvm_pmu_idx_is_high_counter - determine if select_idx is a high/low counter
- * @select_idx: The counter index
- */
-static bool kvm_pmu_idx_is_high_counter(u64 select_idx)
-{
-	return select_idx & 0x1;
-}
-
-/**
- * kvm_pmu_get_canonical_pmc - obtain the canonical pmc
- * @pmc: The PMU counter pointer
- *
- * When a pair of PMCs are chained together we use the low counter (canonical)
- * to hold the underlying perf event.
- */
-static struct kvm_pmc *kvm_pmu_get_canonical_pmc(struct kvm_pmc *pmc)
-{
-	if (kvm_pmu_pmc_is_chained(pmc) &&
-	    kvm_pmu_idx_is_high_counter(pmc->idx))
-		return pmc - 1;
-
-	return pmc;
-}
-static struct kvm_pmc *kvm_pmu_get_alternate_pmc(struct kvm_pmc *pmc)
-{
-	if (kvm_pmu_idx_is_high_counter(pmc->idx))
-		return pmc - 1;
-	else
-		return pmc + 1;
-}
-
-/**
- * kvm_pmu_idx_has_chain_evtype - determine if the event type is chain
+ * kvm_pmu_get_counter_value - get PMU counter value
  * @vcpu: The vcpu pointer
  * @select_idx: The counter index
  */
-static bool kvm_pmu_idx_has_chain_evtype(struct kvm_vcpu *vcpu, u64 select_idx)
-{
-	u64 eventsel, reg;
-
-	select_idx |= 0x1;
-
-	if (select_idx == ARMV8_PMU_CYCLE_IDX)
-		return false;
-
-	reg = PMEVTYPER0_EL0 + select_idx;
-	eventsel = __vcpu_sys_reg(vcpu, reg) & kvm_pmu_event_mask(vcpu->kvm);
-
-	return eventsel == ARMV8_PMUV3_PERFCTR_CHAIN;
-}
-
-/**
- * kvm_pmu_get_pair_counter_value - get PMU counter value
- * @vcpu: The vcpu pointer
- * @pmc: The PMU counter pointer
- */
-static u64 kvm_pmu_get_pair_counter_value(struct kvm_vcpu *vcpu,
-					  struct kvm_pmc *pmc)
+u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
 {
-	u64 counter, counter_high, reg, enabled, running;
-
-	if (kvm_pmu_pmc_is_chained(pmc)) {
-		pmc = kvm_pmu_get_canonical_pmc(pmc);
-		reg = PMEVCNTR0_EL0 + pmc->idx;
+	u64 counter, reg, enabled, running;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
 
-		counter = __vcpu_sys_reg(vcpu, reg);
-		counter_high = __vcpu_sys_reg(vcpu, reg + 1);
+	if (!kvm_vcpu_has_pmu(vcpu))
+		return 0;
 
-		counter = lower_32_bits(counter) | (counter_high << 32);
-	} else {
-		reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
-		      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
-		counter = __vcpu_sys_reg(vcpu, reg);
-	}
+	reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
+		? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
+	counter = __vcpu_sys_reg(vcpu, reg);
 
 	/*
 	 * The real counter value is equal to the value of counter register plus
@@ -163,29 +97,7 @@ static u64 kvm_pmu_get_pair_counter_value(struct kvm_vcpu *vcpu,
 		counter += perf_event_read_value(pmc->perf_event, &enabled,
 						 &running);
 
-	return counter;
-}
-
-/**
- * kvm_pmu_get_counter_value - get PMU counter value
- * @vcpu: The vcpu pointer
- * @select_idx: The counter index
- */
-u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
-{
-	u64 counter;
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
-
-	if (!kvm_vcpu_has_pmu(vcpu))
-		return 0;
-
-	counter = kvm_pmu_get_pair_counter_value(vcpu, pmc);
-
-	if (kvm_pmu_pmc_is_chained(pmc) &&
-	    kvm_pmu_idx_is_high_counter(select_idx))
-		counter = upper_32_bits(counter);
-	else if (select_idx != ARMV8_PMU_CYCLE_IDX)
+	if (select_idx != ARMV8_PMU_CYCLE_IDX)
 		counter = lower_32_bits(counter);
 
 	return counter;
@@ -218,7 +130,6 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
  */
 static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
 {
-	pmc = kvm_pmu_get_canonical_pmc(pmc);
 	if (pmc->perf_event) {
 		perf_event_disable(pmc->perf_event);
 		perf_event_release_kernel(pmc->perf_event);
@@ -236,11 +147,10 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
 {
 	u64 counter, reg, val;
 
-	pmc = kvm_pmu_get_canonical_pmc(pmc);
 	if (!pmc->perf_event)
 		return;
 
-	counter = kvm_pmu_get_pair_counter_value(vcpu, pmc);
+	counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
 
 	if (pmc->idx == ARMV8_PMU_CYCLE_IDX) {
 		reg = PMCCNTR_EL0;
@@ -252,9 +162,6 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
 
 	__vcpu_sys_reg(vcpu, reg) = val;
 
-	if (kvm_pmu_pmc_is_chained(pmc))
-		__vcpu_sys_reg(vcpu, reg + 1) = upper_32_bits(counter);
-
 	kvm_pmu_release_perf_event(pmc);
 }
 
@@ -285,8 +192,6 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
 
 	for_each_set_bit(i, &mask, 32)
 		kvm_pmu_stop_counter(vcpu, &pmu->pmc[i]);
-
-	bitmap_zero(vcpu->arch.pmu.chained, ARMV8_PMU_MAX_COUNTER_PAIRS);
 }
 
 /**
@@ -340,11 +245,8 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 
 		pmc = &pmu->pmc[i];
 
-		/* A change in the enable state may affect the chain state */
-		kvm_pmu_update_pmc_chained(vcpu, i);
 		kvm_pmu_create_perf_event(vcpu, i);
 
-		/* At this point, pmc must be the canonical */
 		if (pmc->perf_event) {
 			perf_event_enable(pmc->perf_event);
 			if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
@@ -375,11 +277,8 @@ void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 
 		pmc = &pmu->pmc[i];
 
-		/* A change in the enable state may affect the chain state */
-		kvm_pmu_update_pmc_chained(vcpu, i);
 		kvm_pmu_create_perf_event(vcpu, i);
 
-		/* At this point, pmc must be the canonical */
 		if (pmc->perf_event)
 			perf_event_disable(pmc->perf_event);
 	}
@@ -484,6 +383,48 @@ static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
 	kvm_vcpu_kick(vcpu);
 }
 
+/*
+ * Perform an increment on any of the counters described in @mask,
+ * generating the overflow if required, and propagate it as a chained
+ * event if possible.
+ */
+static void kvm_pmu_counter_increment(struct kvm_vcpu *vcpu,
+				      unsigned long mask, u32 event)
+{
+	int i;
+
+	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
+		return;
+
+	/* Weed out disabled counters */
+	mask &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+
+	for_each_set_bit(i, &mask, ARMV8_PMU_CYCLE_IDX) {
+		u64 type, reg;
+
+		/* Filter on event type */
+		type = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
+		type &= kvm_pmu_event_mask(vcpu->kvm);
+		if (type != event)
+			continue;
+
+		/* Increment this counter */
+		reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
+		reg = lower_32_bits(reg);
+		__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
+
+		if (reg) /* No overflow? move on */
+			continue;
+
+		/* Mark overflow */
+		__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i);
+
+		if (kvm_pmu_counter_can_chain(vcpu, i))
+			kvm_pmu_counter_increment(vcpu, BIT(i + 1),
+						  ARMV8_PMUV3_PERFCTR_CHAIN);
+	}
+}
+
 /**
  * When the perf event overflows, set the overflow status and inform the vcpu.
  */
@@ -514,6 +455,10 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
 
 	__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx);
 
+	if (kvm_pmu_counter_can_chain(vcpu, idx))
+		kvm_pmu_counter_increment(vcpu, BIT(idx + 1),
+					  ARMV8_PMUV3_PERFCTR_CHAIN);
+
 	if (kvm_pmu_overflow_status(vcpu)) {
 		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
 
@@ -533,50 +478,7 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
  */
 void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
 {
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	int i;
-
-	if (!kvm_vcpu_has_pmu(vcpu))
-		return;
-
-	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
-		return;
-
-	/* Weed out disabled counters */
-	val &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
-
-	for (i = 0; i < ARMV8_PMU_CYCLE_IDX; i++) {
-		u64 type, reg;
-
-		if (!(val & BIT(i)))
-			continue;
-
-		/* PMSWINC only applies to ... SW_INC! */
-		type = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
-		type &= kvm_pmu_event_mask(vcpu->kvm);
-		if (type != ARMV8_PMUV3_PERFCTR_SW_INCR)
-			continue;
-
-		/* increment this even SW_INC counter */
-		reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
-		reg = lower_32_bits(reg);
-		__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
-
-		if (reg) /* no overflow on the low part */
-			continue;
-
-		if (kvm_pmu_pmc_is_chained(&pmu->pmc[i])) {
-			/* increment the high counter */
-			reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) + 1;
-			reg = lower_32_bits(reg);
-			__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) = reg;
-			if (!reg) /* mark overflow on the high counter */
-				__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i + 1);
-		} else {
-			/* mark overflow on low counter */
-			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i);
-		}
-	}
+	kvm_pmu_counter_increment(vcpu, val, ARMV8_PMUV3_PERFCTR_SW_INCR);
 }
 
 /**
@@ -625,18 +527,11 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 {
 	struct arm_pmu *arm_pmu = vcpu->kvm->arch.arm_pmu;
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
 	struct perf_event *event;
 	struct perf_event_attr attr;
 	u64 eventsel, counter, reg, data;
 
-	/*
-	 * For chained counters the event type and filtering attributes are
-	 * obtained from the low/even counter. We also use this counter to
-	 * determine if the event is enabled/disabled.
-	 */
-	pmc = kvm_pmu_get_canonical_pmc(&pmu->pmc[select_idx]);
-
 	reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + pmc->idx;
 	data = __vcpu_sys_reg(vcpu, reg);
@@ -647,8 +542,12 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 	else
 		eventsel = data & kvm_pmu_event_mask(vcpu->kvm);
 
-	/* Software increment event doesn't need to be backed by a perf event */
-	if (eventsel == ARMV8_PMUV3_PERFCTR_SW_INCR)
+	/*
+	 * Neither SW increment nor chained events need to be backed
+	 * by a perf event.
+	 */
+	if (eventsel == ARMV8_PMUV3_PERFCTR_SW_INCR ||
+	    eventsel == ARMV8_PMUV3_PERFCTR_CHAIN)
 		return;
 
 	/*
@@ -670,30 +569,21 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 	attr.exclude_host = 1; /* Don't count host events */
 	attr.config = eventsel;
 
-	counter = kvm_pmu_get_pair_counter_value(vcpu, pmc);
+	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
 
-	if (kvm_pmu_pmc_is_chained(pmc)) {
-		/**
-		 * The initial sample period (overflow count) of an event. For
-		 * chained counters we only support overflow interrupts on the
-		 * high counter.
-		 */
+	/*
+	 * If counting with a 64bit counter, advertise it to the perf
+	 * code, carefully dealing with the initial sample period.
+	 */
+	if (kvm_pmu_idx_is_64bit(vcpu, select_idx)) {
+		attr.config1 |= PERF_ATTR_CFG1_COUNTER_64BIT;
 		attr.sample_period = (-counter) & GENMASK(63, 0);
-		attr.config1 |= PERF_ATTR_CFG1_KVM_PMU_CHAINED;
-
-		event = perf_event_create_kernel_counter(&attr, -1, current,
-							 kvm_pmu_perf_overflow,
-							 pmc + 1);
 	} else {
-		/* The initial sample period (overflow count) of an event. */
-		if (kvm_pmu_idx_is_64bit(vcpu, pmc->idx))
-			attr.sample_period = (-counter) & GENMASK(63, 0);
-		else
-			attr.sample_period = (-counter) & GENMASK(31, 0);
+		attr.sample_period = (-counter) & GENMASK(31, 0);
+	}
 
-		event = perf_event_create_kernel_counter(&attr, -1, current,
+	event = perf_event_create_kernel_counter(&attr, -1, current,
 						 kvm_pmu_perf_overflow, pmc);
-	}
 
 	if (IS_ERR(event)) {
 		pr_err_once("kvm: pmu event creation failed %ld\n",
@@ -704,41 +594,6 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 	pmc->perf_event = event;
 }
 
-/**
- * kvm_pmu_update_pmc_chained - update chained bitmap
- * @vcpu: The vcpu pointer
- * @select_idx: The number of selected counter
- *
- * Update the chained bitmap based on the event type written in the
- * typer register and the enable state of the odd register.
- */
-static void kvm_pmu_update_pmc_chained(struct kvm_vcpu *vcpu, u64 select_idx)
-{
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc = &pmu->pmc[select_idx], *canonical_pmc;
-	bool new_state, old_state;
-
-	old_state = kvm_pmu_pmc_is_chained(pmc);
-	new_state = kvm_pmu_idx_has_chain_evtype(vcpu, pmc->idx) &&
-		    kvm_pmu_counter_is_enabled(vcpu, pmc->idx | 0x1);
-
-	if (old_state == new_state)
-		return;
-
-	canonical_pmc = kvm_pmu_get_canonical_pmc(pmc);
-	kvm_pmu_stop_counter(vcpu, canonical_pmc);
-	if (new_state) {
-		/*
-		 * During promotion from !chained to chained we must ensure
-		 * the adjacent counter is stopped and its event destroyed
-		 */
-		kvm_pmu_stop_counter(vcpu, kvm_pmu_get_alternate_pmc(pmc));
-		set_bit(pmc->idx >> 1, vcpu->arch.pmu.chained);
-		return;
-	}
-	clear_bit(pmc->idx >> 1, vcpu->arch.pmu.chained);
-}
-
 /**
  * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
  * @vcpu: The vcpu pointer
@@ -766,7 +621,6 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 
 	__vcpu_sys_reg(vcpu, reg) = data & mask;
 
-	kvm_pmu_update_pmc_chained(vcpu, select_idx);
 	kvm_pmu_create_perf_event(vcpu, select_idx);
 }
 
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index c0b868ce6a8f..96b192139a23 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -11,7 +11,6 @@
 #include <asm/perf_event.h>
 
 #define ARMV8_PMU_CYCLE_IDX		(ARMV8_PMU_MAX_COUNTERS - 1)
-#define ARMV8_PMU_MAX_COUNTER_PAIRS	((ARMV8_PMU_MAX_COUNTERS + 1) >> 1)
 
 #ifdef CONFIG_HW_PERF_EVENTS
 
@@ -29,7 +28,6 @@ struct kvm_pmu {
 	struct irq_work overflow_work;
 	struct kvm_pmu_events events;
 	struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
-	DECLARE_BITMAP(chained, ARMV8_PMU_MAX_COUNTER_PAIRS);
 	int irq_num;
 	bool created;
 	bool irq_level;
-- 
2.34.1


-	struct kvm_pmc *pmc = &pmu->pmc[select_idx], *canonical_pmc;
-	bool new_state, old_state;
-
-	old_state = kvm_pmu_pmc_is_chained(pmc);
-	new_state = kvm_pmu_idx_has_chain_evtype(vcpu, pmc->idx) &&
-		    kvm_pmu_counter_is_enabled(vcpu, pmc->idx | 0x1);
-
-	if (old_state == new_state)
-		return;
-
-	canonical_pmc = kvm_pmu_get_canonical_pmc(pmc);
-	kvm_pmu_stop_counter(vcpu, canonical_pmc);
-	if (new_state) {
-		/*
-		 * During promotion from !chained to chained we must ensure
-		 * the adjacent counter is stopped and its event destroyed
-		 */
-		kvm_pmu_stop_counter(vcpu, kvm_pmu_get_alternate_pmc(pmc));
-		set_bit(pmc->idx >> 1, vcpu->arch.pmu.chained);
-		return;
-	}
-	clear_bit(pmc->idx >> 1, vcpu->arch.pmu.chained);
-}
-
 /**
  * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
  * @vcpu: The vcpu pointer
@@ -766,7 +621,6 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 
 	__vcpu_sys_reg(vcpu, reg) = data & mask;
 
-	kvm_pmu_update_pmc_chained(vcpu, select_idx);
 	kvm_pmu_create_perf_event(vcpu, select_idx);
 }
 
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index c0b868ce6a8f..96b192139a23 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -11,7 +11,6 @@
 #include <asm/perf_event.h>
 
 #define ARMV8_PMU_CYCLE_IDX		(ARMV8_PMU_MAX_COUNTERS - 1)
-#define ARMV8_PMU_MAX_COUNTER_PAIRS	((ARMV8_PMU_MAX_COUNTERS + 1) >> 1)
 
 #ifdef CONFIG_HW_PERF_EVENTS
 
@@ -29,7 +28,6 @@ struct kvm_pmu {
 	struct irq_work overflow_work;
 	struct kvm_pmu_events events;
 	struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
-	DECLARE_BITMAP(chained, ARMV8_PMU_MAX_COUNTER_PAIRS);
 	int irq_num;
 	bool created;
 	bool irq_level;
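
As an illustration of how a chained increment now propagates (not part
of the diff; the counter values are made up, CHAIN being event 0x1e and
SW_INCR event 0x00 in the architecture):

	/*
	 * PMCR_EL0.E is set, both counters are enabled in
	 * PMCNTENSET_EL0; counter 0 counts SW_INCR (0x00), counter 1
	 * counts CHAIN (0x1e).
	 *
	 * PMEVCNTR0_EL0 = 0xffffffff, guest writes BIT(0) to PMSWINC_EL0:
	 *  -> kvm_pmu_counter_increment(vcpu, BIT(0), SW_INCR) wraps
	 *     counter 0 to 0 and sets bit 0 in PMOVSSET_EL0;
	 *  -> kvm_pmu_counter_can_chain(vcpu, 0) being true, it then
	 *     calls kvm_pmu_counter_increment(vcpu, BIT(1), CHAIN),
	 *     which bumps PMEVCNTR1_EL0 by one; counter 1 being odd,
	 *     the propagation stops there.
	 */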
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v2 03/14] KVM: arm64: PMU: Always advertise the CHAIN event
  2022-10-28 10:53 ` Marc Zyngier
@ 2022-10-28 10:53   ` Marc Zyngier
  0 siblings, 0 replies; 87+ messages in thread
From: Marc Zyngier @ 2022-10-28 10:53 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvmarm, kvm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Ricardo Koller, Reiji Watanabe

Even when the underlying HW doesn't offer the CHAIN event
(which happens with QEMU), we can always support it as we're
in control of the counter overflow.

Always advertise the event via PMCEID0_EL0.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/pmu-emul.c | 2 ++
 1 file changed, 2 insertions(+)
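
As a small sketch of the guest-visible effect (not part of the diff;
the guest-side access and the helper below are assumptions, CHAIN being
event 0x1e in the architecture):

	/* Guest side: PMCEID0_EL0 describes common events 0-31 */
	u64 pmceid0 = read_sysreg(pmceid0_el0);	/* trapped and emulated by KVM */
	if (pmceid0 & BIT(0x1e))		/* CHAIN, now always advertised */
		use_chain_event();		/* hypothetical guest helper */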

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index a38b3127f649..e63ed0c71a37 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -703,6 +703,8 @@ u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
 
 	if (!pmceid1) {
 		val = read_sysreg(pmceid0_el0);
+		/* always support CHAIN */
+		val |= BIT(ARMV8_PMUV3_PERFCTR_CHAIN);
 		base = 0;
 	} else {
 		val = read_sysreg(pmceid1_el0);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v2 04/14] KVM: arm64: PMU: Distinguish between 64bit counter and 64bit overflow
  2022-10-28 10:53 ` Marc Zyngier
@ 2022-10-28 10:53   ` Marc Zyngier
  0 siblings, 0 replies; 87+ messages in thread
From: Marc Zyngier @ 2022-10-28 10:53 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvmarm, kvm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Ricardo Koller, Reiji Watanabe

The PMU architecture draws a subtle distinction between a 64bit
counter and a counter that has a 64bit overflow. This is for example
the case of the cycle counter, which can generate an overflow on
a 32bit boundary if PMCR_EL0.LC==0, despite the accumulation being
done on 64 bits.

Use this distinction in the few cases where it matters in the code,
as we will reuse this with PMUv3p5 long counters.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/pmu-emul.c | 43 ++++++++++++++++++++++++++++-----------
 1 file changed, 31 insertions(+), 12 deletions(-)
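
As a worked example of the resulting sample periods (not part of the
diff; the counter values are made up):

	/*
	 * Event counter 3 (32bit wide), PMEVCNTR3_EL0 = 0xfffffff0:
	 *   period = (-0xfffffff0) & GENMASK(31, 0) = 0x10
	 *   -> the perf event fires after 16 more events, i.e. on the
	 *      32bit boundary.
	 *
	 * Cycle counter with PMCR_EL0.LC = 1,
	 * PMCCNTR_EL0 = 0xffffffff_fffffff0:
	 *   period = (-counter) & GENMASK(63, 0) = 0x10
	 *   -> the overflow is only reported on the 64bit boundary.
	 */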

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index e63ed0c71a37..3724acefc07b 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -50,6 +50,11 @@ static u32 kvm_pmu_event_mask(struct kvm *kvm)
  * @select_idx: The counter index
  */
 static bool kvm_pmu_idx_is_64bit(struct kvm_vcpu *vcpu, u64 select_idx)
+{
+	return (select_idx == ARMV8_PMU_CYCLE_IDX);
+}
+
+static bool kvm_pmu_idx_has_64bit_overflow(struct kvm_vcpu *vcpu, u64 select_idx)
 {
 	return (select_idx == ARMV8_PMU_CYCLE_IDX &&
 		__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_LC);
@@ -57,7 +62,8 @@ static bool kvm_pmu_idx_is_64bit(struct kvm_vcpu *vcpu, u64 select_idx)
 
 static bool kvm_pmu_counter_can_chain(struct kvm_vcpu *vcpu, u64 idx)
 {
-	return (!(idx & 1) && (idx + 1) < ARMV8_PMU_CYCLE_IDX);
+	return (!(idx & 1) && (idx + 1) < ARMV8_PMU_CYCLE_IDX &&
+		!kvm_pmu_idx_has_64bit_overflow(vcpu, idx));
 }
 
 static struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
@@ -97,7 +103,7 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
 		counter += perf_event_read_value(pmc->perf_event, &enabled,
 						 &running);
 
-	if (select_idx != ARMV8_PMU_CYCLE_IDX)
+	if (!kvm_pmu_idx_is_64bit(vcpu, select_idx))
 		counter = lower_32_bits(counter);
 
 	return counter;
@@ -425,6 +431,23 @@ static void kvm_pmu_counter_increment(struct kvm_vcpu *vcpu,
 	}
 }
 
+/* Compute the sample period for a given counter value */
+static u64 compute_period(struct kvm_vcpu *vcpu, u64 select_idx, u64 counter)
+{
+	u64 val;
+
+	if (kvm_pmu_idx_is_64bit(vcpu, select_idx)) {
+		if (!kvm_pmu_idx_has_64bit_overflow(vcpu, select_idx))
+			val = -(counter & GENMASK(31, 0));
+		else
+			val = (-counter) & GENMASK(63, 0);
+	} else {
+		val = (-counter) & GENMASK(31, 0);
+	}
+
+	return val;
+}
+
 /**
  * When the perf event overflows, set the overflow status and inform the vcpu.
  */
@@ -444,10 +467,7 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
 	 * Reset the sample period to the architectural limit,
 	 * i.e. the point where the counter overflows.
 	 */
-	period = -(local64_read(&perf_event->count));
-
-	if (!kvm_pmu_idx_is_64bit(vcpu, pmc->idx))
-		period &= GENMASK(31, 0);
+	period = compute_period(vcpu, idx, local64_read(&perf_event->count));
 
 	local64_set(&perf_event->hw.period_left, 0);
 	perf_event->attr.sample_period = period;
@@ -573,14 +593,13 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 
 	/*
 	 * If counting with a 64bit counter, advertise it to the perf
-	 * code, carefully dealing with the initial sample period.
+	 * code, carefully dealing with the initial sample period
+	 * which also depends on the overflow.
 	 */
-	if (kvm_pmu_idx_is_64bit(vcpu, select_idx)) {
+	if (kvm_pmu_idx_is_64bit(vcpu, select_idx))
 		attr.config1 |= PERF_ATTR_CFG1_COUNTER_64BIT;
-		attr.sample_period = (-counter) & GENMASK(63, 0);
-	} else {
-		attr.sample_period = (-counter) & GENMASK(31, 0);
-	}
+
+	attr.sample_period = compute_period(vcpu, select_idx, counter);
 
 	event = perf_event_create_kernel_counter(&attr, -1, current,
 						 kvm_pmu_perf_overflow, pmc);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v2 05/14] KVM: arm64: PMU: Narrow the overflow checking when required
  2022-10-28 10:53 ` Marc Zyngier
@ 2022-10-28 10:53   ` Marc Zyngier
  0 siblings, 0 replies; 87+ messages in thread
From: Marc Zyngier @ 2022-10-28 10:53 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvmarm, kvm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Ricardo Koller, Reiji Watanabe

For 64bit counters that overflow on a 32bit boundary, make
sure we only check the bottom 32 bits when deciding whether to
generate a CHAIN event.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/pmu-emul.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
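
As an illustration (not part of the diff; the value is made up, and the
situation only arises later in the series, once event counters can be
64bit wide with PMUv3p5):

	/*
	 * 64bit counter that overflows on a 32bit boundary: after the
	 * increment the counter reads 0x00000001_00000000.
	 *
	 * reg is non-zero, but lower_32_bits(reg) == 0, so the new
	 * check still flags the overflow and lets a CHAIN event be
	 * injected into the odd counter.
	 */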

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 3724acefc07b..39a04ae424d1 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -419,7 +419,8 @@ static void kvm_pmu_counter_increment(struct kvm_vcpu *vcpu,
 		reg = lower_32_bits(reg);
 		__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
 
-		if (reg) /* No overflow? move on */
+		/* No overflow? move on */
+		if (kvm_pmu_idx_has_64bit_overflow(vcpu, i) ? reg : lower_32_bits(reg))
 			continue;
 
 		/* Mark overflow */
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v2 06/14] KVM: arm64: PMU: Only narrow counters that are not 64bit wide
  2022-10-28 10:53 ` Marc Zyngier
@ 2022-10-28 10:53   ` Marc Zyngier
  0 siblings, 0 replies; 87+ messages in thread
From: Marc Zyngier @ 2022-10-28 10:53 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvmarm, kvm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Ricardo Koller, Reiji Watanabe

The current PMU emulation sometimes narrows counters to 32bit
if the counter isn't the cycle counter. As this is going to
change with PMUv3p5 where the counters are all 64bit, fix
the couple of cases where this happens unconditionally.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/pmu-emul.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)
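
As a sketch of why the unconditional narrowing would hurt later in the
series (not part of the diff; the value is made up and assumes the
PMUv3p5 support added by a subsequent patch):

	/*
	 * With PMUv3p5, an ordinary event counter is 64bit wide:
	 *
	 *   counter value at stop time  = 0x00000002_00001234
	 *
	 *   old kvm_pmu_stop_counter(): saves 0x00001234 (truncated)
	 *   new kvm_pmu_stop_counter(): saves 0x00000002_00001234
	 *
	 * For plain PMUv3 nothing changes, as an event counter can
	 * never accumulate more than 32 bits in the first place.
	 */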

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 39a04ae424d1..8f6462cbc408 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -151,20 +151,17 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
  */
 static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
 {
-	u64 counter, reg, val;
+	u64 reg, val;
 
 	if (!pmc->perf_event)
 		return;
 
-	counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
+	val = kvm_pmu_get_counter_value(vcpu, pmc->idx);
 
-	if (pmc->idx == ARMV8_PMU_CYCLE_IDX) {
+	if (pmc->idx == ARMV8_PMU_CYCLE_IDX)
 		reg = PMCCNTR_EL0;
-		val = counter;
-	} else {
+	else
 		reg = PMEVCNTR0_EL0 + pmc->idx;
-		val = lower_32_bits(counter);
-	}
 
 	__vcpu_sys_reg(vcpu, reg) = val;
 
@@ -416,7 +413,8 @@ static void kvm_pmu_counter_increment(struct kvm_vcpu *vcpu,
 
 		/* Increment this counter */
 		reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
-		reg = lower_32_bits(reg);
+		if (!kvm_pmu_idx_is_64bit(vcpu, i))
+			reg = lower_32_bits(reg);
 		__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
 
 		/* No overflow? move on */
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v2 07/14] KVM: arm64: PMU: Add counter_index_to_*reg() helpers
  2022-10-28 10:53 ` Marc Zyngier
@ 2022-10-28 10:53   ` Marc Zyngier
  0 siblings, 0 replies; 87+ messages in thread
From: Marc Zyngier @ 2022-10-28 10:53 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvmarm, kvm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Ricardo Koller, Reiji Watanabe

In order to reduce the boilerplate code, add two helpers that map
a counter index to the corresponding counter register index
(resp. the event type register index) in the vcpu register file.

Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/pmu-emul.c | 33 ++++++++++++++++++---------------
 1 file changed, 18 insertions(+), 15 deletions(-)
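
The mapping the two helpers perform, sketched (not part of the diff):

	/*
	 * counter_index_to_reg(2)                      -> PMEVCNTR2_EL0
	 * counter_index_to_reg(ARMV8_PMU_CYCLE_IDX)    -> PMCCNTR_EL0
	 * counter_index_to_evtreg(2)                   -> PMEVTYPER2_EL0
	 * counter_index_to_evtreg(ARMV8_PMU_CYCLE_IDX) -> PMCCFILTR_EL0
	 */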

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 8f6462cbc408..44ad0fdba4db 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -77,6 +77,16 @@ static struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
 	return container_of(vcpu_arch, struct kvm_vcpu, arch);
 }
 
+static u32 counter_index_to_reg(u64 idx)
+{
+	return (idx == ARMV8_PMU_CYCLE_IDX) ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + idx;
+}
+
+static u32 counter_index_to_evtreg(u64 idx)
+{
+	return (idx == ARMV8_PMU_CYCLE_IDX) ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + idx;
+}
+
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
  * @vcpu: The vcpu pointer
@@ -91,8 +101,7 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
 	if (!kvm_vcpu_has_pmu(vcpu))
 		return 0;
 
-	reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
-		? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
+	reg = counter_index_to_reg(select_idx);
 	counter = __vcpu_sys_reg(vcpu, reg);
 
 	/*
@@ -122,8 +131,7 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 	if (!kvm_vcpu_has_pmu(vcpu))
 		return;
 
-	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
-	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
+	reg = counter_index_to_reg(select_idx);
 	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
 
 	/* Recreate the perf event to reflect the updated sample_period */
@@ -158,10 +166,7 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
 
 	val = kvm_pmu_get_counter_value(vcpu, pmc->idx);
 
-	if (pmc->idx == ARMV8_PMU_CYCLE_IDX)
-		reg = PMCCNTR_EL0;
-	else
-		reg = PMEVCNTR0_EL0 + pmc->idx;
+	reg = counter_index_to_reg(pmc->idx);
 
 	__vcpu_sys_reg(vcpu, reg) = val;
 
@@ -406,16 +411,16 @@ static void kvm_pmu_counter_increment(struct kvm_vcpu *vcpu,
 		u64 type, reg;
 
 		/* Filter on event type */
-		type = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
+		type = __vcpu_sys_reg(vcpu, counter_index_to_evtreg(i));
 		type &= kvm_pmu_event_mask(vcpu->kvm);
 		if (type != event)
 			continue;
 
 		/* Increment this counter */
-		reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
+		reg = __vcpu_sys_reg(vcpu, counter_index_to_reg(i)) + 1;
 		if (!kvm_pmu_idx_is_64bit(vcpu, i))
 			reg = lower_32_bits(reg);
-		__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
+		__vcpu_sys_reg(vcpu, counter_index_to_reg(i)) = reg;
 
 		/* No overflow? move on */
 		if (kvm_pmu_idx_has_64bit_overflow(vcpu, i) ? reg : lower_32_bits(reg))
@@ -551,8 +556,7 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 	struct perf_event_attr attr;
 	u64 eventsel, counter, reg, data;
 
-	reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
-	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + pmc->idx;
+	reg = counter_index_to_evtreg(select_idx);
 	data = __vcpu_sys_reg(vcpu, reg);
 
 	kvm_pmu_stop_counter(vcpu, pmc);
@@ -634,8 +638,7 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 	mask &= ~ARMV8_PMU_EVTYPE_EVENT;
 	mask |= kvm_pmu_event_mask(vcpu->kvm);
 
-	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
-	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
+	reg = counter_index_to_evtreg(select_idx);
 
 	__vcpu_sys_reg(vcpu, reg) = data & mask;
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v2 08/14] KVM: arm64: PMU: Simplify setting a counter to a specific value
  2022-10-28 10:53 ` Marc Zyngier
@ 2022-10-28 10:53   ` Marc Zyngier
  0 siblings, 0 replies; 87+ messages in thread
From: Marc Zyngier @ 2022-10-28 10:53 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvmarm, kvm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Ricardo Koller, Reiji Watanabe

kvm_pmu_set_counter_value() is pretty odd, as it tries to update
the counter value while taking into account the value that is
currently held by the running perf counter.

This is not only complicated, this is quite wrong. Nowhere in
the architecture is it said that the counter would be offset
by something that is pending. The counter should be updated
with the value set by SW, and start counting from there if
required.

Remove the odd computation and just assign the provided value
after having released the perf event (which is then restarted).

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/pmu-emul.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
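
The resulting flow, sketched (not part of the diff; the counter index
and value are made up):

	/* Guest writes 100 to event counter 1 */
	kvm_pmu_release_perf_event(&vcpu->arch.pmu.pmc[1]);	/* drop any backing event */
	__vcpu_sys_reg(vcpu, PMEVCNTR1_EL0) = 100;		/* the SW value wins */
	kvm_pmu_create_perf_event(vcpu, 1);			/* restart, period derived from 100 */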

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 44ad0fdba4db..03b761a63f5f 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -23,6 +23,7 @@ static LIST_HEAD(arm_pmus);
 static DEFINE_MUTEX(arm_pmus_lock);
 
 static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
+static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc);
 
 static u32 kvm_pmu_event_mask(struct kvm *kvm)
 {
@@ -131,8 +132,10 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 	if (!kvm_vcpu_has_pmu(vcpu))
 		return;
 
+	kvm_pmu_release_perf_event(&vcpu->arch.pmu.pmc[select_idx]);
+
 	reg = counter_index_to_reg(select_idx);
-	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
+	__vcpu_sys_reg(vcpu, reg) = val;
 
 	/* Recreate the perf event to reflect the updated sample_period */
 	kvm_pmu_create_perf_event(vcpu, select_idx);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v2 09/14] KVM: arm64: PMU: Do not let AArch32 change the counters' top 32 bits
  2022-10-28 10:53 ` Marc Zyngier
@ 2022-10-28 10:53   ` Marc Zyngier
  0 siblings, 0 replies; 87+ messages in thread
From: Marc Zyngier @ 2022-10-28 10:53 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvmarm, kvm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Ricardo Koller, Reiji Watanabe

Even when using PMUv3p5 (which implies 64bit counters), there is
no way for AArch32 to write to the top 32 bits of the counters.
The only way to influence these bits (other than by counting
events) is by writing PMCR.P==1.

Make sure we obey the architecture and preserve the top 32 bits
on a counter update.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/pmu-emul.c | 35 +++++++++++++++++++++++++++--------
 1 file changed, 27 insertions(+), 8 deletions(-)
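
The intended behaviour, sketched (not part of the diff; values made up):

	/*
	 * PMEVCNTR1_EL0 = 0x00000005_deadbeef, AArch32 guest writes
	 * 0x1234 to the counter:
	 *   result: 0x00000005_00001234	(top half preserved)
	 *
	 * Guest writes PMCR with P == 1 (the 'force' path):
	 *   result: 0x00000000_00000000	(the only way AArch32 can
	 *					 affect the top half)
	 */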

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 03b761a63f5f..87585c12ca41 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -119,13 +119,8 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
 	return counter;
 }
 
-/**
- * kvm_pmu_set_counter_value - set PMU counter value
- * @vcpu: The vcpu pointer
- * @select_idx: The counter index
- * @val: The counter value
- */
-void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
+static void kvm_pmu_set_counter(struct kvm_vcpu *vcpu, u64 select_idx, u64 val,
+				bool force)
 {
 	u64 reg;
 
@@ -135,12 +130,36 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 	kvm_pmu_release_perf_event(&vcpu->arch.pmu.pmc[select_idx]);
 
 	reg = counter_index_to_reg(select_idx);
+
+	if (vcpu_mode_is_32bit(vcpu) && select_idx != ARMV8_PMU_CYCLE_IDX &&
+	    !force) {
+		/*
+		 * Even with PMUv3p5, AArch32 cannot write to the top
+		 * 32bit of the counters. The only possible course of
+		 * action is to use PMCR.P, which will reset them to
+		 * 0 (the only use of the 'force' parameter).
+		 */
+		val  = lower_32_bits(val);
+		val |= upper_32_bits(__vcpu_sys_reg(vcpu, reg)) << 32;
+	}
+
 	__vcpu_sys_reg(vcpu, reg) = val;
 
 	/* Recreate the perf event to reflect the updated sample_period */
 	kvm_pmu_create_perf_event(vcpu, select_idx);
 }
 
+/**
+ * kvm_pmu_set_counter_value - set PMU counter value
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ * @val: The counter value
+ */
+void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
+{
+	kvm_pmu_set_counter(vcpu, select_idx, val, false);
+}
+
 /**
  * kvm_pmu_release_perf_event - remove the perf event
  * @pmc: The PMU counter pointer
@@ -535,7 +554,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 		unsigned long mask = kvm_pmu_valid_counter_mask(vcpu);
 		mask &= ~BIT(ARMV8_PMU_CYCLE_IDX);
 		for_each_set_bit(i, &mask, 32)
-			kvm_pmu_set_counter_value(vcpu, i, 0);
+			kvm_pmu_set_counter(vcpu, i, 0, true);
 	}
 }
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v2 10/14] KVM: arm64: PMU: Move the ID_AA64DFR0_EL1.PMUver limit to VM creation
  2022-10-28 10:53 ` Marc Zyngier
  (?)
@ 2022-10-28 10:53   ` Marc Zyngier
  -1 siblings, 0 replies; 87+ messages in thread
From: Marc Zyngier @ 2022-10-28 10:53 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvmarm, kvm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Ricardo Koller, Reiji Watanabe

As further patches will enable the selection of a PMU revision
from userspace, sample the supported PMU revision at VM creation
time, rather than recomputing it each time the ID_AA64DFR0_EL1
register is accessed.

This shouldn't result in any change in behaviour.
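
A worked example of the default that is now latched at VM creation (a
sketch; the exact outcome depends on the host, and assumes the capping
helper treats an IMPDEF PMU as unimplemented):

    /*
     *   host sanitised ID_AA64DFR0_EL1.PMUVer    kvm->arch.dfr0_pmuver
     *   0b0001 (PMUv3)                           0b0001
     *   0b0110 (PMUv3p5)                         0b0101 (capped at PMUv3p4
     *                                                    until patch 14/14)
     *   0b1111 (IMPDEF)                          0b0000 (no PMU by default)
     */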

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/kvm/arm.c              |  6 +++++
 arch/arm64/kvm/pmu-emul.c         | 11 +++++++++
 arch/arm64/kvm/sys_regs.c         | 41 +++++++++++++++++++++++++------
 include/kvm/arm_pmu.h             |  6 +++++
 5 files changed, 57 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 45e2136322ba..90c9a2dd3f26 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -163,6 +163,7 @@ struct kvm_arch {
 
 	u8 pfr0_csv2;
 	u8 pfr0_csv3;
+	u8 dfr0_pmuver;
 
 	/* Hypercall features firmware registers' descriptor */
 	struct kvm_smccc_features smccc_feat;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 94d33e296e10..6b3ed524630d 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -164,6 +164,12 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	set_default_spectre(kvm);
 	kvm_arm_init_hypercalls(kvm);
 
+	/*
+	 * Initialise the default PMUver before there is a chance to
+	 * create an actual PMU.
+	 */
+	kvm->arch.dfr0_pmuver = kvm_arm_pmu_get_pmuver_limit();
+
 	return ret;
 out_free_stage2_pgd:
 	kvm_free_stage2_pgd(&kvm->arch.mmu);
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 87585c12ca41..f126b45aa6c6 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -1049,3 +1049,14 @@ int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 
 	return -ENXIO;
 }
+
+u8 kvm_arm_pmu_get_pmuver_limit(void)
+{
+	u64 tmp;
+
+	tmp = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
+	tmp = cpuid_feature_cap_perfmon_field(tmp,
+					      ID_AA64DFR0_EL1_PMUVer_SHIFT,
+					      ID_AA64DFR0_EL1_PMUVer_V3P4);
+	return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), tmp);
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f4a7c5abcbca..7a4cd644b9c0 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1062,6 +1062,32 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu)
+{
+	if (kvm_vcpu_has_pmu(vcpu))
+		return vcpu->kvm->arch.dfr0_pmuver;
+
+	/* Special case for IMPDEF PMUs that KVM has exposed in the past... */
+	if (vcpu->kvm->arch.dfr0_pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF)
+		return ID_AA64DFR0_EL1_PMUVer_IMP_DEF;
+
+	/* The real "no PMU" */
+	return 0;
+}
+
+static u8 pmuver_to_perfmon(u8 pmuver)
+{
+	switch (pmuver) {
+	case ID_AA64DFR0_EL1_PMUVer_IMP:
+		return ID_DFR0_PERFMON_8_0;
+	case ID_AA64DFR0_EL1_PMUVer_IMP_DEF:
+		return ID_DFR0_PERFMON_IMP_DEF;
+	default:
+		/* Anything ARMv8.1+ has the same value. For now. */
+		return pmuver;
+	}
+}
+
 /* Read a sanitised cpufeature ID register by sys_reg_desc */
 static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
 {
@@ -1111,18 +1137,17 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r
 		/* Limit debug to ARMv8.0 */
 		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer);
 		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6);
-		/* Limit guests to PMUv3 for ARMv8.4 */
-		val = cpuid_feature_cap_perfmon_field(val,
-						      ID_AA64DFR0_EL1_PMUVer_SHIFT,
-						      kvm_vcpu_has_pmu(vcpu) ? ID_AA64DFR0_EL1_PMUVer_V3P4 : 0);
+		/* Set PMUver to the required version */
+		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
+				  vcpu_pmuver(vcpu));
 		/* Hide SPE from guests */
 		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer);
 		break;
 	case SYS_ID_DFR0_EL1:
-		/* Limit guests to PMUv3 for ARMv8.4 */
-		val = cpuid_feature_cap_perfmon_field(val,
-						      ID_DFR0_PERFMON_SHIFT,
-						      kvm_vcpu_has_pmu(vcpu) ? ID_DFR0_PERFMON_8_4 : 0);
+		val &= ~ARM64_FEATURE_MASK(ID_DFR0_PERFMON);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_PERFMON),
+				  pmuver_to_perfmon(vcpu_pmuver(vcpu)));
 		break;
 	}
 
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 96b192139a23..812f729c9108 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -89,6 +89,8 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 			vcpu->arch.pmu.events = *kvm_get_pmu_events();	\
 	} while (0)
 
+u8 kvm_arm_pmu_get_pmuver_limit(void);
+
 #else
 struct kvm_pmu {
 };
@@ -154,6 +156,10 @@ static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
 static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}
+static inline u8 kvm_arm_pmu_get_pmuver_limit(void)
+{
+	return 0;
+}
 
 #endif
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v2 11/14] KVM: arm64: PMU: Allow ID_AA64DFR0_EL1.PMUver to be set from userspace
  2022-10-28 10:53 ` Marc Zyngier
  (?)
@ 2022-10-28 10:53   ` Marc Zyngier
  -1 siblings, 0 replies; 87+ messages in thread
From: Marc Zyngier @ 2022-10-28 10:53 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvmarm, kvm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Ricardo Koller, Reiji Watanabe

Allow userspace to write ID_AA64DFR0_EL1, on the condition that only
the PMUver field is altered, and that it is at most the value that was
initially computed for the guest.
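
A userspace usage sketch (not part of the patch; the helper name is
made up and error handling is trimmed): the VMM reads ID_AA64DFR0_EL1,
lowers PMUVer (here to the baseline PMUv3 encoding), and writes it
back before running the vcpu.

    #include <sys/ioctl.h>
    #include <linux/kvm.h>
    #include <asm/kvm.h>    /* ARM64_SYS_REG() */

    static int limit_guest_pmu_to_v3(int vcpu_fd)
    {
        struct kvm_one_reg reg;
        __u64 val;

        reg.id   = ARM64_SYS_REG(3, 0, 0, 5, 0);    /* ID_AA64DFR0_EL1 */
        reg.addr = (__u64)&val;

        if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
                return -1;

        val &= ~(0xfULL << 8);  /* PMUVer is bits [11:8] */
        val |= 0x1ULL << 8;     /* 0b0001: baseline PMUv3 */

        /* Only the PMUVer field may differ from what KVM computed */
        return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
    }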

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/sys_regs.c | 37 ++++++++++++++++++++++++++++++++++++-
 1 file changed, 36 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 7a4cd644b9c0..4fa14b4ae2a6 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1247,6 +1247,40 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
+			       const struct sys_reg_desc *rd,
+			       u64 val)
+{
+	u8 pmuver, host_pmuver;
+
+	host_pmuver = kvm_arm_pmu_get_pmuver_limit();
+
+	/*
+	 * Allow AA64DFR0_EL1.PMUver to be set from userspace as long
+	 * as it doesn't promise more than what the HW gives us. We
+	 * allow an IMPDEF PMU though, only if no PMU is supported
+	 * (KVM backward compatibility handling).
+	 */
+	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), val);
+	if (pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver)
+		return -EINVAL;
+
+	/* We already have a PMU, don't try to disable it... */
+	if (kvm_vcpu_has_pmu(vcpu) &&
+	    (pmuver == 0 || pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF))
+		return -EINVAL;
+
+	/* We can only differ with PMUver, and anything else is an error */
+	val ^= read_id_reg(vcpu, rd);
+	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
+	if (val)
+		return -EINVAL;
+
+	vcpu->kvm->arch.dfr0_pmuver = pmuver;
+
+	return 0;
+}
+
 /*
  * cpufeature ID register user accessors
  *
@@ -1508,7 +1542,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	ID_UNALLOCATED(4,7),
 
 	/* CRm=5 */
-	ID_SANITISED(ID_AA64DFR0_EL1),
+	{ SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = access_id_reg,
+	  .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, },
 	ID_SANITISED(ID_AA64DFR1_EL1),
 	ID_UNALLOCATED(5,2),
 	ID_UNALLOCATED(5,3),
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v2 12/14] KVM: arm64: PMU: Allow ID_DFR0_EL1.PerfMon to be set from userspace
  2022-10-28 10:53 ` Marc Zyngier
  (?)
@ 2022-10-28 10:54   ` Marc Zyngier
  -1 siblings, 0 replies; 87+ messages in thread
From: Marc Zyngier @ 2022-10-28 10:54 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvmarm, kvm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Ricardo Koller, Reiji Watanabe

Allow userspace to write ID_DFR0_EL1, on the condition that only
the PerfMon field is altered, and that it remains compatible with
what was computed for the AArch64 view of the guest.
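
For reference, the PerfMon/PMUVer correspondence the two conversion
helpers below rely on (a sketch of the ID register encodings; only the
baseline PMUv3 value differs between the two views):

    /*
     *   ID_DFR0_EL1.PerfMon            ID_AA64DFR0_EL1.PMUVer
     *   0b0011 (PMUv3)                 0b0001 (PMUv3)
     *   0b0100 (PMUv3 for Armv8.1)     0b0100
     *   0b0101 (PMUv3 for Armv8.4)     0b0101
     *   0b0110 (PMUv3 for Armv8.5)     0b0110
     *   0b1111 (IMPDEF)                0b1111 (IMPDEF)
     */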

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/sys_regs.c | 53 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 52 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 4fa14b4ae2a6..603d2b43e8ce 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1075,6 +1075,19 @@ static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+static u8 perfmon_to_pmuver(u8 perfmon)
+{
+	switch (perfmon) {
+	case ID_DFR0_PERFMON_8_0:
+		return ID_AA64DFR0_EL1_PMUVer_IMP;
+	case ID_DFR0_PERFMON_IMP_DEF:
+		return ID_AA64DFR0_EL1_PMUVer_IMP_DEF;
+	default:
+		/* Anything ARMv8.1+ has the same value. For now. */
+		return perfmon;
+	}
+}
+
 static u8 pmuver_to_perfmon(u8 pmuver)
 {
 	switch (pmuver) {
@@ -1281,6 +1294,42 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
+			   const struct sys_reg_desc *rd,
+			   u64 val)
+{
+	u8 perfmon, host_perfmon = 0;
+
+	if (system_supports_32bit_el0())
+		host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit());
+
+	/*
+	 * Allow DFR0_EL1.PerfMon to be set from userspace as long as
+	 * it doesn't promise more than what the HW gives us on the
+	 * AArch64 side (as everything is emulated with that), and
+	 * that this is a PMUv3.
+	 */
+	perfmon = FIELD_GET(ARM64_FEATURE_MASK(ID_DFR0_PERFMON), val);
+	if ((perfmon != ID_DFR0_PERFMON_IMP_DEF && perfmon > host_perfmon) ||
+	    (perfmon != 0 && perfmon < ID_DFR0_PERFMON_8_0))
+		return -EINVAL;
+
+	/* We already have a PMU, don't try to disable it... */
+	if (kvm_vcpu_has_pmu(vcpu) &&
+	    (perfmon == 0 || perfmon == ID_DFR0_PERFMON_IMP_DEF))
+		return -EINVAL;
+
+	/* We can only differ with PerfMon, and anything else is an error */
+	val ^= read_id_reg(vcpu, rd);
+	val &= ~ARM64_FEATURE_MASK(ID_DFR0_PERFMON);
+	if (val)
+		return -EINVAL;
+
+	vcpu->kvm->arch.dfr0_pmuver = perfmon_to_pmuver(perfmon);
+
+	return 0;
+}
+
 /*
  * cpufeature ID register user accessors
  *
@@ -1502,7 +1551,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	/* CRm=1 */
 	AA32_ID_SANITISED(ID_PFR0_EL1),
 	AA32_ID_SANITISED(ID_PFR1_EL1),
-	AA32_ID_SANITISED(ID_DFR0_EL1),
+	{ SYS_DESC(SYS_ID_DFR0_EL1), .access = access_id_reg,
+	  .get_user = get_id_reg, .set_user = set_id_dfr0_el1,
+	  .visibility = aa32_id_visibility, },
 	ID_HIDDEN(ID_AFR0_EL1),
 	AA32_ID_SANITISED(ID_MMFR0_EL1),
 	AA32_ID_SANITISED(ID_MMFR1_EL1),
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v2 13/14] KVM: arm64: PMU: Implement PMUv3p5 long counter support
  2022-10-28 10:53 ` Marc Zyngier
  (?)
@ 2022-10-28 10:54   ` Marc Zyngier
  -1 siblings, 0 replies; 87+ messages in thread
From: Marc Zyngier @ 2022-10-28 10:54 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvmarm, kvm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Ricardo Koller, Reiji Watanabe

PMUv3p5 (which is mandatory with ARMv8.5) comes with some extra
features:

- All counters are 64bit

- The overflow point is controlled by the PMCR_EL0.LP bit

Add the required checks in the helpers that control counter
width and overflow, as well as the sysreg handling for the LP
bit. A new kvm_pmu_is_3p5() helper makes it easy to spot the
PMUv3p5 specific handling.
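
The combinations handled by kvm_pmu_idx_is_64bit() and
kvm_pmu_idx_has_64bit_overflow() below can be summarised as follows
for an event counter (a sketch; the cycle counter keeps keying off
PMCR_EL0.LC instead of LP):

    /*
     *   emulated PMU    PMCR_EL0.LP    counter width    overflow point
     *   <= PMUv3p4      0 (RES0)       32bit            bit 31
     *   PMUv3p5         0              64bit            bit 31
     *   PMUv3p5         1              64bit            bit 63
     */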

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/pmu-emul.c | 8 +++++---
 arch/arm64/kvm/sys_regs.c | 4 ++++
 include/kvm/arm_pmu.h     | 7 +++++++
 3 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index f126b45aa6c6..dc163e1a1fcf 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -52,13 +52,15 @@ static u32 kvm_pmu_event_mask(struct kvm *kvm)
  */
 static bool kvm_pmu_idx_is_64bit(struct kvm_vcpu *vcpu, u64 select_idx)
 {
-	return (select_idx == ARMV8_PMU_CYCLE_IDX);
+	return (select_idx == ARMV8_PMU_CYCLE_IDX || kvm_pmu_is_3p5(vcpu));
 }
 
 static bool kvm_pmu_idx_has_64bit_overflow(struct kvm_vcpu *vcpu, u64 select_idx)
 {
-	return (select_idx == ARMV8_PMU_CYCLE_IDX &&
-		__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_LC);
+	u64 val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+
+	return (select_idx < ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LP)) ||
+	       (select_idx == ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LC));
 }
 
 static bool kvm_pmu_counter_can_chain(struct kvm_vcpu *vcpu, u64 idx)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 603d2b43e8ce..8f4412cd4bf6 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -654,6 +654,8 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	       | (ARMV8_PMU_PMCR_MASK & 0xdecafbad)) & (~ARMV8_PMU_PMCR_E);
 	if (!kvm_supports_32bit_el0())
 		val |= ARMV8_PMU_PMCR_LC;
+	if (!kvm_pmu_is_3p5(vcpu))
+		val &= ~ARMV8_PMU_PMCR_LP;
 	__vcpu_sys_reg(vcpu, r->reg) = val;
 }
 
@@ -703,6 +705,8 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		val |= p->regval & ARMV8_PMU_PMCR_MASK;
 		if (!kvm_supports_32bit_el0())
 			val |= ARMV8_PMU_PMCR_LC;
+		if (!kvm_pmu_is_3p5(vcpu))
+			val &= ~ARMV8_PMU_PMCR_LP;
 		__vcpu_sys_reg(vcpu, PMCR_EL0) = val;
 		kvm_pmu_handle_pmcr(vcpu, val);
 		kvm_vcpu_pmu_restore_guest(vcpu);
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 812f729c9108..3d526df9f3c5 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -89,6 +89,12 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 			vcpu->arch.pmu.events = *kvm_get_pmu_events();	\
 	} while (0)
 
+/*
+ * Evaluates as true when emulating PMUv3p5, and false otherwise.
+ */
+#define kvm_pmu_is_3p5(vcpu)						\
+	(vcpu->kvm->arch.dfr0_pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P5)
+
 u8 kvm_arm_pmu_get_pmuver_limit(void);
 
 #else
@@ -153,6 +159,7 @@ static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
 }
 
 #define kvm_vcpu_has_pmu(vcpu)		({ false; })
+#define kvm_pmu_is_3p5(vcpu)		({ false; })
 static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v2 14/14] KVM: arm64: PMU: Allow PMUv3p5 to be exposed to the guest
  2022-10-28 10:53 ` Marc Zyngier
  (?)
@ 2022-10-28 10:54   ` Marc Zyngier
  -1 siblings, 0 replies; 87+ messages in thread
From: Marc Zyngier @ 2022-10-28 10:54 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvmarm, kvm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Ricardo Koller, Reiji Watanabe

Now that the infrastructure is in place, bump the PMU support up
to PMUv3p5.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/pmu-emul.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index dc163e1a1fcf..26293f842b0f 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -1059,6 +1059,6 @@ u8 kvm_arm_pmu_get_pmuver_limit(void)
 	tmp = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
 	tmp = cpuid_feature_cap_perfmon_field(tmp,
 					      ID_AA64DFR0_EL1_PMUVer_SHIFT,
-					      ID_AA64DFR0_EL1_PMUVer_V3P4);
+					      ID_AA64DFR0_EL1_PMUVer_V3P5);
 	return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), tmp);
 }
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 87+ messages in thread

* Re: [PATCH v2 10/14] KVM: arm64: PMU: Move the ID_AA64DFR0_EL1.PMUver limit to VM creation
  2022-10-28 10:53   ` Marc Zyngier
  (?)
@ 2022-11-03  4:55     ` Reiji Watanabe
  -1 siblings, 0 replies; 87+ messages in thread
From: Reiji Watanabe @ 2022-11-03  4:55 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, kvmarm, kvmarm, kvm, James Morse,
	Suzuki K Poulose, Alexandru Elisei, Oliver Upton, Ricardo Koller

Hi Marc,

On Fri, Oct 28, 2022 at 4:16 AM Marc Zyngier <maz@kernel.org> wrote:
>
> As further patches will enable the selection of a PMU revision
> from userspace, sample the supported PMU revision at VM creation
> time, rather than building each time the ID_AA64DFR0_EL1 register
> is accessed.
>
> This shouldn't result in any change in behaviour.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_host.h |  1 +
>  arch/arm64/kvm/arm.c              |  6 +++++
>  arch/arm64/kvm/pmu-emul.c         | 11 +++++++++
>  arch/arm64/kvm/sys_regs.c         | 41 +++++++++++++++++++++++++------
>  include/kvm/arm_pmu.h             |  6 +++++
>  5 files changed, 57 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 45e2136322ba..90c9a2dd3f26 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -163,6 +163,7 @@ struct kvm_arch {
>
>         u8 pfr0_csv2;
>         u8 pfr0_csv3;
> +       u8 dfr0_pmuver;
>
>         /* Hypercall features firmware registers' descriptor */
>         struct kvm_smccc_features smccc_feat;
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 94d33e296e10..6b3ed524630d 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -164,6 +164,12 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>         set_default_spectre(kvm);
>         kvm_arm_init_hypercalls(kvm);
>
> +       /*
> +        * Initialise the default PMUver before there is a chance to
> +        * create an actual PMU.
> +        */
> +       kvm->arch.dfr0_pmuver = kvm_arm_pmu_get_pmuver_limit();
> +
>         return ret;
>  out_free_stage2_pgd:
>         kvm_free_stage2_pgd(&kvm->arch.mmu);
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index 87585c12ca41..f126b45aa6c6 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -1049,3 +1049,14 @@ int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
>
>         return -ENXIO;
>  }
> +
> +u8 kvm_arm_pmu_get_pmuver_limit(void)
> +{
> +       u64 tmp;
> +
> +       tmp = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
> +       tmp = cpuid_feature_cap_perfmon_field(tmp,
> +                                             ID_AA64DFR0_EL1_PMUVer_SHIFT,
> +                                             ID_AA64DFR0_EL1_PMUVer_V3P4);
> +       return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), tmp);
> +}
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index f4a7c5abcbca..7a4cd644b9c0 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1062,6 +1062,32 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
>         return true;
>  }
>
> +static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu)
> +{
> +       if (kvm_vcpu_has_pmu(vcpu))
> +               return vcpu->kvm->arch.dfr0_pmuver;
> +
> +       /* Special case for IMPDEF PMUs that KVM has exposed in the past... */
> +       if (vcpu->kvm->arch.dfr0_pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF)
> +               return ID_AA64DFR0_EL1_PMUVer_IMP_DEF;
> +
> +       /* The real "no PMU" */
> +       return 0;
> +}
> +
> +static u8 pmuver_to_perfmon(u8 pmuver)
> +{
> +       switch (pmuver) {
> +       case ID_AA64DFR0_EL1_PMUVer_IMP:
> +               return ID_DFR0_PERFMON_8_0;
> +       case ID_AA64DFR0_EL1_PMUVer_IMP_DEF:
> +               return ID_DFR0_PERFMON_IMP_DEF;
> +       default:
> +               /* Anything ARMv8.1+ has the same value. For now. */
> +               return pmuver;
> +       }
> +}
> +
>  /* Read a sanitised cpufeature ID register by sys_reg_desc */
>  static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
>  {
> @@ -1111,18 +1137,17 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r
>                 /* Limit debug to ARMv8.0 */
>                 val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer);
>                 val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6);
> -               /* Limit guests to PMUv3 for ARMv8.4 */
> -               val = cpuid_feature_cap_perfmon_field(val,
> -                                                     ID_AA64DFR0_EL1_PMUVer_SHIFT,
> -                                                     kvm_vcpu_has_pmu(vcpu) ? ID_AA64DFR0_EL1_PMUVer_V3P4 : 0);
> +               /* Set PMUver to the required version */
> +               val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
> +               val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
> +                                 vcpu_pmuver(vcpu));
>                 /* Hide SPE from guests */
>                 val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer);
>                 break;
>         case SYS_ID_DFR0_EL1:
> -               /* Limit guests to PMUv3 for ARMv8.4 */
> -               val = cpuid_feature_cap_perfmon_field(val,
> -                                                     ID_DFR0_PERFMON_SHIFT,
> -                                                     kvm_vcpu_has_pmu(vcpu) ? ID_DFR0_PERFMON_8_4 : 0);
> +               val &= ~ARM64_FEATURE_MASK(ID_DFR0_PERFMON);
> +               val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_PERFMON),
> +                                 pmuver_to_perfmon(vcpu_pmuver(vcpu)));

Shouldn't KVM expose the sanitized value as-is when AArch32 is not
supported at EL0? Since the register value is UNKNOWN when AArch32 is
not supported at EL0, I would think this code might change the PERFMON
field value on such systems (which could cause live migration to fail).
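
To illustrate the migration concern, here is a hypothetical VMM-side
sketch (not code from this series; the function and fd names are made
up). The restore fails if the destination's read_id_reg() now computes
a different value than the one saved on the source:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* ID_DFR0_EL1 is op0=3, op1=0, CRn=0, CRm=1, op2=2 */
#define REG_ID_DFR0_EL1		ARM64_SYS_REG(3, 0, 0, 1, 2)

static int migrate_id_dfr0(int src_vcpu_fd, int dst_vcpu_fd)
{
	uint64_t val;
	struct kvm_one_reg reg = {
		.id	= REG_ID_DFR0_EL1,
		.addr	= (uint64_t)&val,
	};

	/* Save the register on the source... */
	if (ioctl(src_vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
		return -1;

	/*
	 * ...and restore it on the destination. If the destination
	 * kernel derives a different PERFMON value for this vCPU, the
	 * write is rejected and the migration is aborted.
	 */
	return ioctl(dst_vcpu_fd, KVM_SET_ONE_REG, &reg);
}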

I should have noticed this with the previous version...

Thank you,
Reiji

>                 break;
>         }
>
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index 96b192139a23..812f729c9108 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -89,6 +89,8 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
>                         vcpu->arch.pmu.events = *kvm_get_pmu_events();  \
>         } while (0)
>
> +u8 kvm_arm_pmu_get_pmuver_limit(void);
> +
>  #else
>  struct kvm_pmu {
>  };
> @@ -154,6 +156,10 @@ static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
>  static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}
> +static inline u8 kvm_arm_pmu_get_pmuver_limit(void)
> +{
> +       return 0;
> +}
>
>  #endif
>
> --
> 2.34.1
>

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH v2 11/14] KVM: arm64: PMU: Allow ID_AA64DFR0_EL1.PMUver to be set from userspace
  2022-10-28 10:53   ` Marc Zyngier
  (?)
@ 2022-11-03  5:31     ` Reiji Watanabe
  -1 siblings, 0 replies; 87+ messages in thread
From: Reiji Watanabe @ 2022-11-03  5:31 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, kvmarm, kvmarm, kvm, James Morse,
	Suzuki K Poulose, Alexandru Elisei, Oliver Upton, Ricardo Koller

Hi Marc,

On Fri, Oct 28, 2022 at 4:16 AM Marc Zyngier <maz@kernel.org> wrote:
>
> Allow userspace to write ID_AA64DFR0_EL1, on the condition that only
> the PMUver field can be altered and be at most the one that was
> initially computed for the guest.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 37 ++++++++++++++++++++++++++++++++++++-
>  1 file changed, 36 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 7a4cd644b9c0..4fa14b4ae2a6 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1247,6 +1247,40 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
>         return 0;
>  }
>
> +static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
> +                              const struct sys_reg_desc *rd,
> +                              u64 val)
> +{
> +       u8 pmuver, host_pmuver;
> +
> +       host_pmuver = kvm_arm_pmu_get_pmuver_limit();
> +
> +       /*
> +        * Allow AA64DFR0_EL1.PMUver to be set from userspace as long
> +        * as it doesn't promise more than what the HW gives us. We
> +        * allow an IMPDEF PMU though, only if no PMU is supported
> +        * (KVM backward compatibility handling).
> +        */

It appears the patch allows userspace to set IMPDEF even
when host_pmuver == 0.  Shouldn't that be allowed only when
host_pmuver == IMPDEF (as before)?
It probably doesn't cause any real problems, though.
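
Concretely, I was thinking of something along these lines in
set_id_aa64dfr0_el1() (an untested sketch; it also assumes the limit
helper would report IMPDEF rather than zero on IMPDEF hosts):

	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), val);

	/*
	 * Only accept an IMPDEF PMUver from userspace if the host
	 * itself is IMPDEF; otherwise apply the usual "no more than
	 * what the HW gives us" rule.
	 */
	if (pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF) {
		if (host_pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF)
			return -EINVAL;
	} else if (pmuver > host_pmuver) {
		return -EINVAL;
	}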


> +       pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), val);
> +       if (pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver)
> +               return -EINVAL;
> +
> +       /* We already have a PMU, don't try to disable it... */
> +       if (kvm_vcpu_has_pmu(vcpu) &&
> +           (pmuver == 0 || pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF))
> +               return -EINVAL;

Nit: it might be useful to return different error codes for the two
(new) error cases above (I plan to use -E2BIG and -EPERM respectively
for those cases in my ID register series).
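
Something like this is what I have in mind (sketch only; the exact
codes are of course debatable):

	if (pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver)
		return -E2BIG;	/* asking for more than the host provides */

	/* We already have a PMU, don't try to disable it... */
	if (kvm_vcpu_has_pmu(vcpu) &&
	    (pmuver == 0 || pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF))
		return -EPERM;	/* inconsistent with the vcpu configuration */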

Thank you,
Reiji

> +
> +       /* We can only differ with PMUver, and anything else is an error */
> +       val ^= read_id_reg(vcpu, rd);
> +       val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
> +       if (val)
> +               return -EINVAL;
> +
> +       vcpu->kvm->arch.dfr0_pmuver = pmuver;
> +
> +       return 0;
> +}
> +
>  /*
>   * cpufeature ID register user accessors
>   *
> @@ -1508,7 +1542,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>         ID_UNALLOCATED(4,7),
>
>         /* CRm=5 */
> -       ID_SANITISED(ID_AA64DFR0_EL1),
> +       { SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = access_id_reg,
> +         .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, },
>         ID_SANITISED(ID_AA64DFR1_EL1),
>         ID_UNALLOCATED(5,2),
>         ID_UNALLOCATED(5,3),
> --
> 2.34.1
>
>

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH v2 10/14] KVM: arm64: PMU: Move the ID_AA64DFR0_EL1.PMUver limit to VM creation
  2022-11-03  4:55     ` Reiji Watanabe
  (?)
@ 2022-11-03  8:44       ` Marc Zyngier
  -1 siblings, 0 replies; 87+ messages in thread
From: Marc Zyngier @ 2022-11-03  8:44 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: linux-arm-kernel, kvmarm, kvmarm, kvm, James Morse,
	Suzuki K Poulose, Alexandru Elisei, Oliver Upton, Ricardo Koller

Hi Reiji,

On Thu, 03 Nov 2022 04:55:52 +0000,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> Hi Marc,
> 
> On Fri, Oct 28, 2022 at 4:16 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> >         case SYS_ID_DFR0_EL1:
> > -               /* Limit guests to PMUv3 for ARMv8.4 */
> > -               val = cpuid_feature_cap_perfmon_field(val,
> > -                                                     ID_DFR0_PERFMON_SHIFT,
> > -                                                     kvm_vcpu_has_pmu(vcpu) ? ID_DFR0_PERFMON_8_4 : 0);
> > +               val &= ~ARM64_FEATURE_MASK(ID_DFR0_PERFMON);
> > +               val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_PERFMON),
> > +                                 pmuver_to_perfmon(vcpu_pmuver(vcpu)));
> 
> Shouldn't KVM expose the sanitized value as it is when AArch32 is
> not supported at EL0 ? Since the register value is UNKNOWN when AArch32
> is not supported at EL0, I would think this code might change the PERFMON
> field value on such systems (could cause live migration to fail).

I'm not sure this would cause anything to fail as we now treat all
AArch32 idregs as RAZ/WI when AArch32 isn't supported (and the
visibility callback still applies here).

But it doesn't hurt to make pmuver_to_perfmon() return 0 when AArch32
isn't supported, and it will at least make the ID register consistent
from a guest perspective.

I plan to squash the following (untested) hack in:

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 8f4412cd4bf6..3b28ef48a525 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1094,6 +1094,10 @@ static u8 perfmon_to_pmuver(u8 perfmon)
 
 static u8 pmuver_to_perfmon(u8 pmuver)
 {
+       /* If no AArch32, make the field RAZ */
+       if (!kvm_supports_32bit_el0())
+               return 0;
+
        switch (pmuver) {
        case ID_AA64DFR0_EL1_PMUVer_IMP:
                return ID_DFR0_PERFMON_8_0;
@@ -1302,10 +1306,9 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
                           const struct sys_reg_desc *rd,
                           u64 val)
 {
-       u8 perfmon, host_perfmon = 0;
+       u8 perfmon, host_perfmon;
 
-       if (system_supports_32bit_el0())
-               host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit());
+       host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit());
 
        /*
         * Allow DFR0_EL1.PerfMon to be set from userspace as long as

> I should have noticed this with the previous version...

No worries, thanks a lot for having had a look!

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* Re: [PATCH v2 11/14] KVM: arm64: PMU: Allow ID_AA64DFR0_EL1.PMUver to be set from userspace
  2022-11-03  5:31     ` Reiji Watanabe
  (?)
@ 2022-11-03 10:24       ` Marc Zyngier
  -1 siblings, 0 replies; 87+ messages in thread
From: Marc Zyngier @ 2022-11-03 10:24 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: linux-arm-kernel, kvmarm, kvmarm, kvm, James Morse,
	Suzuki K Poulose, Alexandru Elisei, Oliver Upton, Ricardo Koller

On Thu, 03 Nov 2022 05:31:56 +0000,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> Hi Marc,
> 
> On Fri, Oct 28, 2022 at 4:16 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> > Allow userspace to write ID_AA64DFR0_EL1, on the condition that only
> > the PMUver field can be altered and be at most the one that was
> > initially computed for the guest.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/kvm/sys_regs.c | 37 ++++++++++++++++++++++++++++++++++++-
> >  1 file changed, 36 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 7a4cd644b9c0..4fa14b4ae2a6 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -1247,6 +1247,40 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
> >         return 0;
> >  }
> >
> > +static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
> > +                              const struct sys_reg_desc *rd,
> > +                              u64 val)
> > +{
> > +       u8 pmuver, host_pmuver;
> > +
> > +       host_pmuver = kvm_arm_pmu_get_pmuver_limit();
> > +
> > +       /*
> > +        * Allow AA64DFR0_EL1.PMUver to be set from userspace as long
> > +        * as it doesn't promise more than what the HW gives us. We
> > +        * allow an IMPDEF PMU though, only if no PMU is supported
> > +        * (KVM backward compatibility handling).
> > +        */
> 
> It appears the patch allows userspace to set IMPDEF even
> when host_pmuver == 0.  Shouldn't it be allowed only when
> host_pmuver == IMPDEF (as before)?
> Probably, it may not cause any real problems though.

Given that we don't treat the two cases any differently, I thought it
would be reasonable to relax this particular case, and I can't see any
reason why we shouldn't tolerate this sort of migration.

> 
> 
> > +       pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), val);
> > +       if (pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver)
> > +               return -EINVAL;
> > +
> > +       /* We already have a PMU, don't try to disable it... */
> > +       if (kvm_vcpu_has_pmu(vcpu) &&
> > +           (pmuver == 0 || pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF))
> > +               return -EINVAL;
> 
> Nit: Perhaps it might be useful to return a different error code for the
> above two (new) error cases (I plan to use -E2BIG and -EPERM
> respectively for those cases with my ID register series).

My worry in doing so is that we don't have an established practice for
these cases. I'm fine with introducing new error codes, but I'm not
sure there is an existing practice in userspace to actually interpret
them.

Even -EPERM has a slightly different meaning, and although there is
some language there saying that it is all nonsense, we should be very
careful.
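
For instance, this is roughly all a VMM does with such a failure today
(a made-up sketch, not lifted from any particular VMM; reg_id, val and
vcpu_fd are placeholders):

	struct kvm_one_reg reg = {
		.id	= reg_id,
		.addr	= (uint64_t)&val,
	};

	if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg) < 0) {
		/*
		 * The errno is rarely inspected beyond "the value was
		 * rejected"; what gets logged is the register, not the
		 * reason.
		 */
		fprintf(stderr, "failed to set reg %#llx: %s\n",
			(unsigned long long)reg_id, strerror(errno));
		return -1;
	}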

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH v2 10/14] KVM: arm64: PMU: Move the ID_AA64DFR0_EL1.PMUver limit to VM creation
  2022-11-03  8:44       ` Marc Zyngier
  (?)
@ 2022-11-03 14:52         ` Reiji Watanabe
  -1 siblings, 0 replies; 87+ messages in thread
From: Reiji Watanabe @ 2022-11-03 14:52 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, kvmarm, kvmarm, kvm, James Morse,
	Suzuki K Poulose, Alexandru Elisei, Oliver Upton, Ricardo Koller

Hi Marc,

On Thu, Nov 3, 2022 at 1:44 AM Marc Zyngier <maz@kernel.org> wrote:
>
> Hi Reiji,
>
> On Thu, 03 Nov 2022 04:55:52 +0000,
> Reiji Watanabe <reijiw@google.com> wrote:
> >
> > Hi Marc,
> >
> > On Fri, Oct 28, 2022 at 4:16 AM Marc Zyngier <maz@kernel.org> wrote:
> > >
> > >         case SYS_ID_DFR0_EL1:
> > > -               /* Limit guests to PMUv3 for ARMv8.4 */
> > > -               val = cpuid_feature_cap_perfmon_field(val,
> > > -                                                     ID_DFR0_PERFMON_SHIFT,
> > > -                                                     kvm_vcpu_has_pmu(vcpu) ? ID_DFR0_PERFMON_8_4 : 0);
> > > +               val &= ~ARM64_FEATURE_MASK(ID_DFR0_PERFMON);
> > > +               val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_PERFMON),
> > > +                                 pmuver_to_perfmon(vcpu_pmuver(vcpu)));
> >
> > Shouldn't KVM expose the sanitized value as it is when AArch32 is
> > not supported at EL0 ? Since the register value is UNKNOWN when AArch32
> > is not supported at EL0, I would think this code might change the PERFMON
> > field value on such systems (could cause live migration to fail).
>
> I'm not sure this would cause anything to fail as we now treat all
> AArch32 idregs as RAZ/WI when AArch32 isn't supported (and the
> visibility callback still applies here).

Oops, sorry I totally forgot about that change...

> But it doesn't hurt to make pmuver_to_perfmon() return 0 when AArch32
> isn't supported, and it will at least make the ID register consistent
> from a guest perspective.

I believe the register will be consistent (0) even from a guest
perspective with the current patch when AArch32 isn't supported,
because read_id_reg() checks for that with sysreg_visible_as_raz()
at the beginning.
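
That is, the ordering in read_id_reg() is roughly (abridged, for
illustration only):

static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
{
	u32 id = reg_to_encoding(r);
	u64 val;

	/* AArch32 ID regs read as zero when AArch32 isn't supported */
	if (sysreg_visible_as_raz(vcpu, r))
		return 0;

	val = read_sanitised_ftr_reg(id);

	switch (id) {
	/* ... per-register overrides, including the PERFMON rewrite ... */
	default:
		break;
	}

	return val;
}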

I withdraw my comment, and the patch looks good to me.

Reviewed-by: Reiji Watanabe <reijiw@google.com>

Thank you,
Reiji

>
> I plan to squash the following (untested) hack in:
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 8f4412cd4bf6..3b28ef48a525 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1094,6 +1094,10 @@ static u8 perfmon_to_pmuver(u8 perfmon)
>
>  static u8 pmuver_to_perfmon(u8 pmuver)
>  {
> +       /* If no AArch32, make the field RAZ */
> +       if (!kvm_supports_32bit_el0())
> +               return 0;
> +
>         switch (pmuver) {
>         case ID_AA64DFR0_EL1_PMUVer_IMP:
>                 return ID_DFR0_PERFMON_8_0;
> @@ -1302,10 +1306,9 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
>                            const struct sys_reg_desc *rd,
>                            u64 val)
>  {
> -       u8 perfmon, host_perfmon = 0;
> +       u8 perfmon, host_perfmon;
>
> -       if (system_supports_32bit_el0())
> -               host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit());
> +       host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit());
>
>         /*
>          * Allow DFR0_EL1.PerfMon to be set from userspace as long as
>
> > I should have noticed this with the previous version...
>
> No worries, thanks a lot for having had a look!
>
> Thanks,
>
>         M.
>
> --
> Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH v2 11/14] KVM: arm64: PMU: Allow ID_AA64DFR0_EL1.PMUver to be set from userspace
  2022-11-03 10:24       ` Marc Zyngier
@ 2022-11-04  7:00         ` Reiji Watanabe
  -1 siblings, 0 replies; 87+ messages in thread
From: Reiji Watanabe @ 2022-11-04  7:00 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, kvmarm, kvmarm, kvm, James Morse,
	Suzuki K Poulose, Alexandru Elisei, Oliver Upton, Ricardo Koller

Hi Marc,

On Thu, Nov 3, 2022 at 3:25 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Thu, 03 Nov 2022 05:31:56 +0000,
> Reiji Watanabe <reijiw@google.com> wrote:
> >
> > Hi Marc,
> >
> > On Fri, Oct 28, 2022 at 4:16 AM Marc Zyngier <maz@kernel.org> wrote:
> > >
> > > Allow userspace to write ID_AA64DFR0_EL1, on the condition that only
> > > the PMUver field can be altered and be at most the one that was
> > > initially computed for the guest.
> > >
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > ---
> > >  arch/arm64/kvm/sys_regs.c | 37 ++++++++++++++++++++++++++++++++++++-
> > >  1 file changed, 36 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > index 7a4cd644b9c0..4fa14b4ae2a6 100644
> > > --- a/arch/arm64/kvm/sys_regs.c
> > > +++ b/arch/arm64/kvm/sys_regs.c
> > > @@ -1247,6 +1247,40 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
> > >         return 0;
> > >  }
> > >
> > > +static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
> > > +                              const struct sys_reg_desc *rd,
> > > +                              u64 val)
> > > +{
> > > +       u8 pmuver, host_pmuver;
> > > +
> > > +       host_pmuver = kvm_arm_pmu_get_pmuver_limit();
> > > +
> > > +       /*
> > > +        * Allow AA64DFR0_EL1.PMUver to be set from userspace as long
> > > +        * as it doesn't promise more than what the HW gives us. We
> > > +        * allow an IMPDEF PMU though, only if no PMU is supported
> > > +        * (KVM backward compatibility handling).
> > > +        */
> >
> > It appears the patch allows userspace to set IMPDEF even
> > when host_pmuver == 0.  Shouldn't it be allowed only when
> > host_pmuver == IMPDEF (as before)?
> > Probably, it may not cause any real problems though.
>
> Given that we don't treat the two cases any differently, I thought it
> would be reasonable to relax this particular case, and I can't see any
> reason why we shouldn't tolerate this sort of migration.

That's true. I assume it won't cause any functional issues.

I have another comment related to this.
KVM allows userspace to create a guest with a mix of vCPUs with and
without a PMU.  For such a guest, if the register for a vCPU without a
PMU is set last, I think the PMUVER value for the vCPUs with a PMU
could become no PMU (0) or IMPDEF (0xf).
Also, with the current patch, userspace can set a PMUv3 support value
(non-zero and non-IMPDEF) for vCPUs without a PMU.
IMHO, KVM shouldn't allow userspace to set PMUVER to a value that
is inconsistent with the PMU configuration for the vCPU.
What do you think?
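
To make the ordering issue concrete, a standalone illustration (plain
userspace C, not kernel code; dfr0_pmuver here stands in for the
per-VM vcpu->kvm->arch.dfr0_pmuver field that every vCPU's write
lands in):

#include <stdio.h>

/* stand-in for the per-VM field vcpu->kvm->arch.dfr0_pmuver */
static unsigned char dfr0_pmuver;

static void restore_id_aa64dfr0(const char *vcpu, unsigned int pmuver)
{
	dfr0_pmuver = pmuver;	/* shared by every vCPU of the guest */
	printf("%s wrote PMUVer=%#x, VM-wide value is now %#x\n",
	       vcpu, pmuver, dfr0_pmuver);
}

int main(void)
{
	restore_id_aa64dfr0("vcpu0 (has PMU)", 0x6);	/* PMUv3p5 */
	restore_id_aa64dfr0("vcpu1 (no PMU)", 0x0);	/* restored last */
	/* vcpu0 now reads back PMUVer == 0, i.e. "no PMU" */
	return 0;
}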

I'm thinking of the following code (not tested).

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 4fa14b4ae2a6..ddd849027cc3 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1265,10 +1265,17 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
        if (pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver)
                return -EINVAL;

-       /* We already have a PMU, don't try to disable it... */
-       if (kvm_vcpu_has_pmu(vcpu) &&
-           (pmuver == 0 || pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF))
-               return -EINVAL;
+       if (kvm_vcpu_has_pmu(vcpu)) {
+               /* We already have a PMU, don't try to disable it... */
+               if (pmuver == 0 || pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF) {
+                       return -EINVAL;
+               }
+       } else {
+               /* We don't have a PMU, don't try to enable it... */
+               if (pmuver > 0 && pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF) {
+                       return -EINVAL;
+               }
+       }

        /* We can only differ with PMUver, and anything else is an error */
        val ^= read_id_reg(vcpu, rd);
@@ -1276,7 +1283,8 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
        if (val)
                return -EINVAL;

-       vcpu->kvm->arch.dfr0_pmuver = pmuver;
+       if (kvm_vcpu_has_pmu(vcpu))
+               vcpu->kvm->arch.dfr0_pmuver = pmuver;

        return 0;
 }

Thank you,
Reiji



>
> >
> >
> > > +       pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), val);
> > > +       if (pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver)
> > > +               return -EINVAL;
> > > +
> > > +       /* We already have a PMU, don't try to disable it... */
> > > +       if (kvm_vcpu_has_pmu(vcpu) &&
> > > +           (pmuver == 0 || pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF))
> > > +               return -EINVAL;
> >
> > Nit: Perhaps it might be useful to return a different error code for the
> > above two (new) error cases (I plan to use -E2BIG and -EPERM
> > respectively for those cases with my ID register series).
>
> My worry in doing so is that we don't have an established practice for
> these cases. I'm fine with introducing new error codes, but I'm not
> sure there is an existing practice in userspace to actually interpret
> them.
>
> Even -EPERM has a slightly different meaning, and although there is
> some language there saying that it is all nonsense, we should be very
> careful.
>
> Thanks,
>
>         M.
>
> --
> Without deviation from the norm, progress is not possible.
>

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* Re: [PATCH v2 11/14] KVM: arm64: PMU: Allow ID_AA64DFR0_EL1.PMUver to be set from userspace
  2022-11-04  7:00         ` Reiji Watanabe
@ 2022-11-04 12:20           ` Marc Zyngier
  -1 siblings, 0 replies; 87+ messages in thread
From: Marc Zyngier @ 2022-11-04 12:20 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: linux-arm-kernel, kvmarm, kvmarm, kvm, James Morse,
	Suzuki K Poulose, Alexandru Elisei, Oliver Upton, Ricardo Koller

Hi Reiji,

On Fri, 04 Nov 2022 07:00:22 +0000,
Reiji Watanabe <reijiw@google.com> wrote:
>
> On Thu, Nov 3, 2022 at 3:25 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> > On Thu, 03 Nov 2022 05:31:56 +0000,
> > Reiji Watanabe <reijiw@google.com> wrote:
> > >
> > > It appears the patch allows userspace to set IMPDEF even
> > > when host_pmuver == 0.  Shouldn't it be allowed only when
> > > host_pmuver == IMPDEF (as before)?
> > > Probably, it may not cause any real problems though.
> >
> > Given that we don't treat the two cases any differently, I thought it
> > would be reasonable to relax this particular case, and I can't see any
> > reason why we shouldn't tolerate this sort of migration.
>
> That's true. I assume it won't cause any functional issues.
> 
> I have another comment related to this.
> KVM allows userspace to create a guest with a mix of vCPUs with and
> without a PMU.  For such a guest, if the register for a vCPU without a
> PMU is set last, I think the PMUVER value for the vCPUs with a PMU
> could become no PMU (0) or IMPDEF (0xf).
> Also, with the current patch, userspace can set a PMUv3 support value
> (non-zero and non-IMPDEF) for vCPUs without a PMU.
> IMHO, KVM shouldn't allow userspace to set PMUVER to a value that
> is inconsistent with the PMU configuration for the vCPU.
> What do you think?

Yes, this seems sensible, and we only do it one way at the moment.

> I'm thinking of the following code (not tested).
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 4fa14b4ae2a6..ddd849027cc3 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1265,10 +1265,17 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
>         if (pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver)
>                 return -EINVAL;
> 
> -       /* We already have a PMU, don't try to disable it... */
> -       if (kvm_vcpu_has_pmu(vcpu) &&
> -           (pmuver == 0 || pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF))
> -               return -EINVAL;
> +       if (kvm_vcpu_has_pmu(vcpu)) {
> +               /* We already have a PMU, don't try to disable it... */
> +               if (pmuver == 0 || pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF) {
> +                       return -EINVAL;
> +               }
> +       } else {
> +               /* We don't have a PMU, don't try to enable it... */
> +               if (pmuver > 0 && pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF) {
> +                       return -EINVAL;
> +               }
> +       }

This is a bit ugly. I came up with this instead:

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 3b28ef48a525..e104fde1a0ee 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1273,6 +1273,7 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
 			       u64 val)
 {
 	u8 pmuver, host_pmuver;
+	bool valid_pmu;
 
 	host_pmuver = kvm_arm_pmu_get_pmuver_limit();
 
@@ -1286,9 +1287,10 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
 	if (pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver)
 		return -EINVAL;
 
-	/* We already have a PMU, don't try to disable it... */
-	if (kvm_vcpu_has_pmu(vcpu) &&
-	    (pmuver == 0 || pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF))
+	valid_pmu = (pmuver != 0 && pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF);
+
+	/* Make sure view register and PMU support do match */
+	if (kvm_vcpu_has_pmu(vcpu) != valid_pmu)
 		return -EINVAL;
 
 	/* We can only differ with PMUver, and anything else is an error */

plus a similar check for the 32bit counterpart.

> 
>         /* We can only differ with PMUver, and anything else is an error */
>         val ^= read_id_reg(vcpu, rd);
> @@ -1276,7 +1283,8 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
>         if (val)
>                 return -EINVAL;
> 
> -       vcpu->kvm->arch.dfr0_pmuver = pmuver;
> +       if (kvm_vcpu_has_pmu(vcpu))
> +               vcpu->kvm->arch.dfr0_pmuver = pmuver;

We need to update this unconditionally if we want to be able to
restore an IMPDEF PMU view to the guest.
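
That is, the tail of set_id_aa64dfr0_el1() keeps the assignment
unconditional; roughly (sketch, untested, building on the hunk above):

	/* We can only differ with PMUver, and anything else is an error */
	val ^= read_id_reg(vcpu, rd);
	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
	if (val)
		return -EINVAL;

	/*
	 * Record the field even for a vCPU without a PMU, so that an
	 * IMPDEF (or zero) PMUVer view survives a save/restore cycle.
	 */
	vcpu->kvm->arch.dfr0_pmuver = pmuver;

	return 0;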

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* Re: [PATCH v2 11/14] KVM: arm64: PMU: Allow ID_AA64DFR0_EL1.PMUver to be set from userspace
  2022-11-04 12:20           ` Marc Zyngier
@ 2022-11-04 15:53             ` Reiji Watanabe
  -1 siblings, 0 replies; 87+ messages in thread
From: Reiji Watanabe @ 2022-11-04 15:53 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, kvmarm, kvmarm, kvm, James Morse,
	Suzuki K Poulose, Alexandru Elisei, Oliver Upton, Ricardo Koller

Hi Marc,

On Fri, Nov 4, 2022 at 5:21 AM Marc Zyngier <maz@kernel.org> wrote:
>
> Hi Reiji,
>
> On Fri, 04 Nov 2022 07:00:22 +0000,
> Reiji Watanabe <reijiw@google.com> wrote:
> >
> > On Thu, Nov 3, 2022 at 3:25 AM Marc Zyngier <maz@kernel.org> wrote:
> > >
> > > On Thu, 03 Nov 2022 05:31:56 +0000,
> > > Reiji Watanabe <reijiw@google.com> wrote:
> > > >
> > > > It appears the patch allows userspace to set IMPDEF even
> > > > when host_pmuver == 0.  Shouldn't it be allowed only when
> > > > host_pmuver == IMPDEF (as before)?
> > > > Probably, it may not cause any real problems though.
> > >
> > > Given that we don't treat the two cases any differently, I thought it
> > > would be reasonable to relax this particular case, and I can't see any
> > > reason why we shouldn't tolerate this sort of migration.
> >
> > That's true. I assume it won't cause any functional issues.
> >
> > I have another comment related to this.
> > KVM allows userspace to create a guest with a mix of vCPUs with and
> > without a PMU.  For such a guest, if the register for a vCPU without a
> > PMU is set last, I think the PMUVER value for the vCPUs with a PMU
> > could become no PMU (0) or IMPDEF (0xf).
> > Also, with the current patch, userspace can set a PMUv3 support value
> > (non-zero and non-IMPDEF) for vCPUs without a PMU.
> > IMHO, KVM shouldn't allow userspace to set PMUVER to a value that
> > is inconsistent with the PMU configuration for the vCPU.
> > What do you think?
>
> Yes, this seems sensible, and we only do it one way at the moment.
>
> > I'm thinking of the following code (not tested).
> >
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 4fa14b4ae2a6..ddd849027cc3 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -1265,10 +1265,17 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
> >         if (pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver)
> >                 return -EINVAL;
> >
> > -       /* We already have a PMU, don't try to disable it... */
> > -       if (kvm_vcpu_has_pmu(vcpu) &&
> > -           (pmuver == 0 || pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF))
> > -               return -EINVAL;
> > +       if (kvm_vcpu_has_pmu(vcpu)) {
> > +               /* We already have a PMU, don't try to disable it... */
> > +               if (pmuver == 0 || pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF) {
> > +                       return -EINVAL;
> > +               }
> > +       } else {
> > +               /* We don't have a PMU, don't try to enable it... */
> > +               if (pmuver > 0 && pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF) {
> > +                       return -EINVAL;
> > +               }
> > +       }
>
> This is a bit ugly. I came up with this instead:
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 3b28ef48a525..e104fde1a0ee 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1273,6 +1273,7 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
>                                u64 val)
>  {
>         u8 pmuver, host_pmuver;
> +       bool valid_pmu;
>
>         host_pmuver = kvm_arm_pmu_get_pmuver_limit();
>
> @@ -1286,9 +1287,10 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
>         if (pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver)
>                 return -EINVAL;
>
> -       /* We already have a PMU, don't try to disable it... */
> -       if (kvm_vcpu_has_pmu(vcpu) &&
> -           (pmuver == 0 || pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF))
> +       valid_pmu = (pmuver != 0 && pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF);
> +
> +       /* Make sure view register and PMU support do match */
> +       if (kvm_vcpu_has_pmu(vcpu) != valid_pmu)
>                 return -EINVAL;

Thanks, much better!

>
>         /* We can only differ with PMUver, and anything else is an error */
>
> and the similar check for the 32bit counterpart.
>
> >
> >         /* We can only differ with PMUver, and anything else is an error */
> >         val ^= read_id_reg(vcpu, rd);
> > @@ -1276,7 +1283,8 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
> >         if (val)
> >                 return -EINVAL;
> >
> > -       vcpu->kvm->arch.dfr0_pmuver = pmuver;
> > +       if (kvm_vcpu_has_pmu(vcpu))
> > +               vcpu->kvm->arch.dfr0_pmuver = pmuver;
>
> We need to update this unconditionally if we want to be able to
> restore an IMPDEF PMU view to the guest.

Yes, right.
BTW, if we have no intention of supporting a mix of vCPUs with and
without a PMU, I think it would be nice to have a clear comment on
that in the code.  Or, if possible, I'm hoping we can disallow it.

Thank you,
Reiji

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH v2 01/14] arm64: Add ID_DFR0_EL1.PerfMon values for PMUv3p7 and IMP_DEF
  2022-10-28 10:53   ` Marc Zyngier
@ 2022-11-04 20:47     ` Oliver Upton
  -1 siblings, 0 replies; 87+ messages in thread
From: Oliver Upton @ 2022-11-04 20:47 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, kvmarm, kvmarm, kvm, James Morse,
	Suzuki K Poulose, Alexandru Elisei, Ricardo Koller,
	Reiji Watanabe

On Fri, Oct 28, 2022 at 11:53:49AM +0100, Marc Zyngier wrote:
> Align the ID_DFR0_EL1.PerfMon values with ID_AA64DFR0_EL1.PMUver.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Reviewed-by: Oliver Upton <oliver.upton@linux.dev>

FYI, another pile of ID reg changes is on the way that'll move DFR0 to a
generated definition.

https://lore.kernel.org/linux-arm-kernel/20220930140211.3215348-1-james.morse@arm.com/

--
Thanks,
Oliver

> ---
>  arch/arm64/include/asm/sysreg.h | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 7d301700d1a9..84f59ce1dc6d 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -698,6 +698,8 @@
>  #define ID_DFR0_PERFMON_8_1		0x4
>  #define ID_DFR0_PERFMON_8_4		0x5
>  #define ID_DFR0_PERFMON_8_5		0x6
> +#define ID_DFR0_PERFMON_8_7		0x7
> +#define ID_DFR0_PERFMON_IMP_DEF		0xf
>  
>  #define ID_ISAR4_SWP_FRAC_SHIFT		28
>  #define ID_ISAR4_PSR_M_SHIFT		24
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH v2 01/14] arm64: Add ID_DFR0_EL1.PerfMon values for PMUv3p7 and IMP_DEF
  2022-11-04 20:47     ` Oliver Upton
@ 2022-11-05  9:42       ` Marc Zyngier
  -1 siblings, 0 replies; 87+ messages in thread
From: Marc Zyngier @ 2022-11-05  9:42 UTC (permalink / raw)
  To: Oliver Upton
  Cc: linux-arm-kernel, kvmarm, kvmarm, kvm, James Morse,
	Suzuki K Poulose, Alexandru Elisei, Ricardo Koller,
	Reiji Watanabe

On 2022-11-04 20:47, Oliver Upton wrote:
> On Fri, Oct 28, 2022 at 11:53:49AM +0100, Marc Zyngier wrote:
>> Align the ID_DFR0_EL1.PerfMon values with ID_AA64DFR0_EL1.PMUver.
>> 
>> Signed-off-by: Marc Zyngier <maz@kernel.org>
> 
> Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
> 
> FYI, another pile of ID reg changes is on the way that'll move DFR0 to 
> a
> generated definition.
> 
> https://lore.kernel.org/linux-arm-kernel/20220930140211.3215348-1-james.morse@arm.com/
> 

Eh, another of these. The usual way we deal with this churn
is to have a stable branch in the arm64 tree which I pull into
the offending branch in the kvmarm tree.

Thanks for the heads up!

         M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH v2 11/14] KVM: arm64: PMU: Allow ID_AA64DFR0_EL1.PMUver to be set from userspace
  2022-11-04 15:53             ` Reiji Watanabe
  (?)
@ 2022-11-06 12:47               ` Marc Zyngier
  -1 siblings, 0 replies; 87+ messages in thread
From: Marc Zyngier @ 2022-11-06 12:47 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: linux-arm-kernel, kvmarm, kvmarm, kvm, James Morse,
	Suzuki K Poulose, Alexandru Elisei, Oliver Upton, Ricardo Koller

Hi Reiji,

On Fri, 04 Nov 2022 15:53:21 +0000,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> BTW, if we have no intention of supporting a mix of vCPUs with and
> without PMU, I think it would be nice if we have a clear comment on
> that in the code.  Or I'm hoping to disallow it if possible though.

I'm not sure we're in a position to do this right now. The current API
has always (for good or bad reasons) been per-vcpu as it is tied to
the vcpu initialisation.

However, once we move to a sysreg-based API to control the vcpu
features, we can revisit this and say that some features have a
VM-wide effect if the vcpus have been created with some special flag
(or some other TBD mechanism).

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH v2 11/14] KVM: arm64: PMU: Allow ID_AA64DFR0_EL1.PMUver to be set from userspace
  2022-11-06 12:47               ` Marc Zyngier
  (?)
@ 2022-11-08  5:36                 ` Reiji Watanabe
  -1 siblings, 0 replies; 87+ messages in thread
From: Reiji Watanabe @ 2022-11-08  5:36 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvm, kvmarm, kvmarm, linux-arm-kernel

Hi Marc,

> > BTW, if we have no intention of supporting a mix of vCPUs with and
> > without PMU, I think it would be nice if we have a clear comment on
> > that in the code.  Or I'm hoping to disallow it if possible though.
>
> I'm not sure we're in a position to do this right now. The current API
> has always (for good or bad reasons) been per-vcpu as it is tied to
> the vcpu initialisation.

Thank you for your comments!
Then, for a guest with a mix of vCPUs with and without a PMU, userspace
can set kvm->arch.dfr0_pmuver to zero or IMPDEF, and the PMUVER for the
vCPUs that do have a PMU will become 0 or IMPDEF, as I mentioned.
For instance, on a host whose PMUVER==1, suppose vCPU#0 has no PMU
(PMUVER==0) and vCPU#1 has a PMU (PMUVER==1). If the guest is migrated
to another host with the same CPU features (PMUVER==1), and SET_ONE_REG
of ID_AA64DFR0_EL1 is done for vCPU#0 after vCPU#1, kvm->arch.dfr0_pmuver
will be set to 0, and the guest will see PMUVER==0 even on vCPU#1.

Should we be concerned about this case?

Thank you,
Reiji
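
A minimal sketch of the failure mode described above (the structure and
function names are illustrative, not the kernel's): with a single VM-wide
field, whichever vCPU's ID_AA64DFR0_EL1 is restored last decides the PMU
version seen by every vCPU.

#include <linux/types.h>

struct vm_sketch {
	u8 dfr0_pmuver;			/* one value shared by all vCPUs */
};

/* What a SET_ONE_REG of ID_AA64DFR0_EL1 boils down to, per vCPU. */
static void set_pmuver(struct vm_sketch *vm, u8 pmuver)
{
	vm->dfr0_pmuver = pmuver;	/* last writer wins for the whole VM */
}

/*
 * Restoring vCPU#1 (PMUVER==1) and then vCPU#0 (PMUVER==0) leaves
 * dfr0_pmuver == 0, so vCPU#1's guest-visible PMU version is lost.
 */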

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH v2 03/14] KVM: arm64: PMU: Always advertise the CHAIN event
  2022-10-28 10:53   ` Marc Zyngier
  (?)
@ 2022-11-12  8:01     ` Reiji Watanabe
  -1 siblings, 0 replies; 87+ messages in thread
From: Reiji Watanabe @ 2022-11-12  8:01 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, kvmarm, kvmarm, kvm, James Morse,
	Suzuki K Poulose, Alexandru Elisei, Oliver Upton, Ricardo Koller

On Fri, Oct 28, 2022 at 3:54 AM Marc Zyngier <maz@kernel.org> wrote:
>
> Even when the underlying HW doesn't offer the CHAIN event
> (which happens with QEMU), we can always support it as we're
> in control of the counter overflow.
>
> Always advertise the event via PMCEID0_EL0.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Reviewed-by: Reiji Watanabe <reijiw@google.com>
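
For context, a minimal sketch of the idea (illustrative only, not the exact
hunk from the patch): the PMCEID0_EL0 value the guest reads is the host's
value with the CHAIN event bit forced on, since KVM emulates the chaining
and overflow behaviour itself.

#include <linux/bits.h>
#include <linux/types.h>
#include <asm/perf_event.h>	/* ARMV8_PMUV3_PERFCTR_CHAIN (event 0x1E) */

/* Sketch: common events advertised to the guest via PMCEID0_EL0. */
static u64 guest_pmceid0(u64 host_pmceid0)
{
	return host_pmceid0 | BIT(ARMV8_PMUV3_PERFCTR_CHAIN);
}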

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH v2 11/14] KVM: arm64: PMU: Allow ID_AA64DFR0_EL1.PMUver to be set from userspace
  2022-11-08  5:36                 ` Reiji Watanabe
  (?)
@ 2022-11-13 10:56                   ` Marc Zyngier
  -1 siblings, 0 replies; 87+ messages in thread
From: Marc Zyngier @ 2022-11-13 10:56 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: linux-arm-kernel, kvmarm, kvmarm, kvm, James Morse,
	Suzuki K Poulose, Alexandru Elisei, Oliver Upton, Ricardo Koller

On 2022-11-08 05:36, Reiji Watanabe wrote:
> Hi Marc,
> 
>> > BTW, if we have no intention of supporting a mix of vCPUs with and
>> > without PMU, I think it would be nice if we have a clear comment on
>> > that in the code.  Or I'm hoping to disallow it if possible though.
>> 
>> I'm not sure we're in a position to do this right now. The current API
>> has always (for good or bad reasons) been per-vcpu as it is tied to
>> the vcpu initialisation.
> 
> Thank you for your comments!
> Then, for a guest with a mix of vCPUs with and without a PMU, userspace
> can set kvm->arch.dfr0_pmuver to zero or IMPDEF, and the PMUVER for the
> vCPUs that do have a PMU will become 0 or IMPDEF, as I mentioned.
> For instance, on a host whose PMUVER==1, suppose vCPU#0 has no PMU
> (PMUVER==0) and vCPU#1 has a PMU (PMUVER==1). If the guest is migrated
> to another host with the same CPU features (PMUVER==1), and SET_ONE_REG
> of ID_AA64DFR0_EL1 is done for vCPU#0 after vCPU#1, kvm->arch.dfr0_pmuver
> will be set to 0, and the guest will see PMUVER==0 even on vCPU#1.
> 
> Should we be concerned about this case?

Yeah, this is a real problem. The issue is that we want to keep
track of two separate bits of information:

- what is the revision of the PMU when the PMU is supported?
- what do we expose when the PMU is unsupported: no PMU (0) or IMPDEF?

and we use the same field for both, which clearly cannot work
if we allow vcpus with and without PMUs in the same VM.

I've now switched to an implementation where I track both
the architected version as well as the version exposed when
no PMU is supported, see below.

We still cannot track both no-PMU *and* impdef-PMU, nor can we
track multiple PMU revisions. But that's not a thing as far as
I am concerned.

Thanks,

         M.

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 90c9a2dd3f26..cc44e3bc528d 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -163,7 +163,10 @@ struct kvm_arch {

  	u8 pfr0_csv2;
  	u8 pfr0_csv3;
-	u8 dfr0_pmuver;
+	struct {
+		u8 imp:4;
+		u8 unimp:4;
+	} dfr0_pmuver;

  	/* Hypercall features firmware registers' descriptor */
  	struct kvm_smccc_features smccc_feat;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 6b3ed524630d..f956aab438c7 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -168,7 +168,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
  	 * Initialise the default PMUver before there is a chance to
  	 * create an actual PMU.
  	 */
-	kvm->arch.dfr0_pmuver = kvm_arm_pmu_get_pmuver_limit();
+	kvm->arch.dfr0_pmuver.imp = kvm_arm_pmu_get_pmuver_limit();

  	return ret;
  out_free_stage2_pgd:
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 95100896de72..615cb148e22a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1069,14 +1069,9 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
  static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu)
  {
  	if (kvm_vcpu_has_pmu(vcpu))
-		return vcpu->kvm->arch.dfr0_pmuver;
+		return vcpu->kvm->arch.dfr0_pmuver.imp;

-	/* Special case for IMPDEF PMUs that KVM has exposed in the past... */
-	if (vcpu->kvm->arch.dfr0_pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF)
-		return ID_AA64DFR0_EL1_PMUVer_IMP_DEF;
-
-	/* The real "no PMU" */
-	return 0;
+	return vcpu->kvm->arch.dfr0_pmuver.unimp;
  }

  static u8 perfmon_to_pmuver(u8 perfmon)
@@ -1295,7 +1290,10 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
  	if (val)
  		return -EINVAL;

-	vcpu->kvm->arch.dfr0_pmuver = pmuver;
+	if (valid_pmu)
+		vcpu->kvm->arch.dfr0_pmuver.imp = pmuver;
+	else
+		vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver;

  	return 0;
  }
@@ -1332,7 +1330,10 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
  	if (val)
  		return -EINVAL;

-	vcpu->kvm->arch.dfr0_pmuver = perfmon_to_pmuver(perfmon);
+	if (valid_pmu)
+		vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon);
+	else
+		vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon);

  	return 0;
  }
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 3d526df9f3c5..628775334d5e 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -93,7 +93,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
   * Evaluates as true when emulating PMUv3p5, and false otherwise.
   */
  #define kvm_pmu_is_3p5(vcpu)						\
-	(vcpu->kvm->arch.dfr0_pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P5)
+	(vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P5)

  u8 kvm_arm_pmu_get_pmuver_limit(void);

-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply related	[flat|nested] 87+ messages in thread

end of thread, other threads:[~2022-11-13 10:58 UTC | newest]

Thread overview: 87+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-10-28 10:53 [PATCH v2 00/14] KVM: arm64: PMU: Fixing chained events, and PMUv3p5 support Marc Zyngier
2022-10-28 10:53 ` Marc Zyngier
2022-10-28 10:53 ` Marc Zyngier
2022-10-28 10:53 ` [PATCH v2 01/14] arm64: Add ID_DFR0_EL1.PerfMon values for PMUv3p7 and IMP_DEF Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-11-04 20:47   ` Oliver Upton
2022-11-04 20:47     ` Oliver Upton
2022-11-04 20:47     ` Oliver Upton
2022-11-05  9:42     ` Marc Zyngier
2022-11-05  9:42       ` Marc Zyngier
2022-11-05  9:42       ` Marc Zyngier
2022-10-28 10:53 ` [PATCH v2 02/14] KVM: arm64: PMU: Align chained counter implementation with architecture pseudocode Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-10-28 10:53 ` [PATCH v2 03/14] KVM: arm64: PMU: Always advertise the CHAIN event Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-11-12  8:01   ` Reiji Watanabe
2022-11-12  8:01     ` Reiji Watanabe
2022-11-12  8:01     ` Reiji Watanabe
2022-10-28 10:53 ` [PATCH v2 04/14] KVM: arm64: PMU: Distinguish between 64bit counter and 64bit overflow Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-10-28 10:53 ` [PATCH v2 05/14] KVM: arm64: PMU: Narrow the overflow checking when required Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-10-28 10:53 ` [PATCH v2 06/14] KVM: arm64: PMU: Only narrow counters that are not 64bit wide Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-10-28 10:53 ` [PATCH v2 07/14] KVM: arm64: PMU: Add counter_index_to_*reg() helpers Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-10-28 10:53 ` [PATCH v2 08/14] KVM: arm64: PMU: Simplify setting a counter to a specific value Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-10-28 10:53 ` [PATCH v2 09/14] KVM: arm64: PMU: Do not let AArch32 change the counters' top 32 bits Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-10-28 10:53 ` [PATCH v2 10/14] KVM: arm64: PMU: Move the ID_AA64DFR0_EL1.PMUver limit to VM creation Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-11-03  4:55   ` Reiji Watanabe
2022-11-03  4:55     ` Reiji Watanabe
2022-11-03  4:55     ` Reiji Watanabe
2022-11-03  8:44     ` Marc Zyngier
2022-11-03  8:44       ` Marc Zyngier
2022-11-03  8:44       ` Marc Zyngier
2022-11-03 14:52       ` Reiji Watanabe
2022-11-03 14:52         ` Reiji Watanabe
2022-11-03 14:52         ` Reiji Watanabe
2022-10-28 10:53 ` [PATCH v2 11/14] KVM: arm64: PMU: Allow ID_AA64DFR0_EL1.PMUver to be set from userspace Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-10-28 10:53   ` Marc Zyngier
2022-11-03  5:31   ` Reiji Watanabe
2022-11-03  5:31     ` Reiji Watanabe
2022-11-03  5:31     ` Reiji Watanabe
2022-11-03 10:24     ` Marc Zyngier
2022-11-03 10:24       ` Marc Zyngier
2022-11-03 10:24       ` Marc Zyngier
2022-11-04  7:00       ` Reiji Watanabe
2022-11-04  7:00         ` Reiji Watanabe
2022-11-04  7:00         ` Reiji Watanabe
2022-11-04 12:20         ` Marc Zyngier
2022-11-04 12:20           ` Marc Zyngier
2022-11-04 12:20           ` Marc Zyngier
2022-11-04 15:53           ` Reiji Watanabe
2022-11-04 15:53             ` Reiji Watanabe
2022-11-04 15:53             ` Reiji Watanabe
2022-11-06 12:47             ` Marc Zyngier
2022-11-06 12:47               ` Marc Zyngier
2022-11-06 12:47               ` Marc Zyngier
2022-11-08  5:36               ` Reiji Watanabe
2022-11-08  5:36                 ` Reiji Watanabe
2022-11-08  5:36                 ` Reiji Watanabe
2022-11-13 10:56                 ` Marc Zyngier
2022-11-13 10:56                   ` Marc Zyngier
2022-11-13 10:56                   ` Marc Zyngier
2022-10-28 10:54 ` [PATCH v2 12/14] KVM: arm64: PMU: Allow ID_DFR0_EL1.PerfMon " Marc Zyngier
2022-10-28 10:54   ` Marc Zyngier
2022-10-28 10:54   ` Marc Zyngier
2022-10-28 10:54 ` [PATCH v2 13/14] KVM: arm64: PMU: Implement PMUv3p5 long counter support Marc Zyngier
2022-10-28 10:54   ` Marc Zyngier
2022-10-28 10:54   ` Marc Zyngier
2022-10-28 10:54 ` [PATCH v2 14/14] KVM: arm64: PMU: Allow PMUv3p5 to be exposed to the guest Marc Zyngier
2022-10-28 10:54   ` Marc Zyngier
2022-10-28 10:54   ` Marc Zyngier
