* [PATCH v3 0/3] arm64: perf: Add support for ARMv8.5-PMU 64-bit counters
@ 2020-01-02 12:39 Andrew Murray
2020-01-02 12:39 ` [PATCH v3 1/3] arm64: cpufeature: Extract capped fields Andrew Murray
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Andrew Murray @ 2020-01-02 12:39 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Marc Zyngier, Mark Rutland
Cc: kvmarm, linux-arm-kernel
At present ARMv8 event counters are limited to 32 bits, though by
using the CHAIN event it's possible to combine adjacent counters to
achieve a 64-bit counter. The perf config1:0 bit can be set to request
such a configuration.
With the introduction of ARMv8.5-PMU support, all event counters can
now be used as 64-bit counters. Let's add support for 64-bit event
counters.
As KVM doesn't yet support 64-bit event counters, we also trap
and emulate the Debug Feature Registers to limit the PMU version a
guest sees to PMUv3 for ARMv8.4.
Tested by running the following perf command on both guest and host
and ensuring that the figures are very similar:
perf stat -e armv8_pmuv3/inst_retired,long=1/ \
-e armv8_pmuv3/inst_retired,long=0/ -e cycles
Changes since v2:
- Rebased onto v5.5-rc4
- Mask 'cap' value to 'width' in cpuid_feature_cap_signed_field_width
Changes since v1:
- Rebased onto v5.5-rc1
Andrew Murray (3):
arm64: cpufeature: Extract capped fields
KVM: arm64: limit PMU version to ARMv8.4
arm64: perf: Add support for ARMv8.5-PMU 64-bit counters
arch/arm64/include/asm/cpufeature.h | 16 ++++++
arch/arm64/include/asm/perf_event.h | 3 +-
arch/arm64/include/asm/sysreg.h | 4 ++
arch/arm64/kernel/perf_event.c | 86 +++++++++++++++++++++++------
arch/arm64/kvm/sys_regs.c | 36 +++++++++++-
include/linux/perf/arm_pmu.h | 1 +
6 files changed, 126 insertions(+), 20 deletions(-)
--
2.21.0
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
* [PATCH v3 1/3] arm64: cpufeature: Extract capped fields
2020-01-02 12:39 [PATCH v3 0/3] arm64: perf: Add support for ARMv8.5-PMU 64-bit counters Andrew Murray
@ 2020-01-02 12:39 ` Andrew Murray
2020-01-02 16:22 ` Suzuki Kuruppassery Poulose
2020-01-02 12:39 ` [PATCH v3 2/3] KVM: arm64: limit PMU version to ARMv8.4 Andrew Murray
2020-01-02 12:39 ` [PATCH v3 3/3] arm64: perf: Add support for ARMv8.5-PMU 64-bit counters Andrew Murray
2 siblings, 1 reply; 11+ messages in thread
From: Andrew Murray @ 2020-01-02 12:39 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Marc Zyngier, Mark Rutland
Cc: kvmarm, linux-arm-kernel
When emulating ID registers there is often a need to cap the version
bits of a feature such that the guest will not use features that do
not yet exist.
Let's add a helper that extracts a field and caps the version to a
given value.
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
---
arch/arm64/include/asm/cpufeature.h | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 4261d55e8506..1462fd1101e3 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -447,6 +447,22 @@ cpuid_feature_extract_unsigned_field(u64 features, int field)
return cpuid_feature_extract_unsigned_field_width(features, field, 4);
}
+static inline u64 __attribute_const__
+cpuid_feature_cap_signed_field_width(u64 features, int field, int width,
+ s64 cap)
+{
+ s64 val = cpuid_feature_extract_signed_field_width(features, field,
+ width);
+ u64 mask = GENMASK_ULL(field + width - 1, field);
+
+ if (val > cap) {
+ features &= ~mask;
+ features |= (cap << field) & mask;
+ }
+
+ return features;
+}
+
static inline u64 arm64_ftr_mask(const struct arm64_ftr_bits *ftrp)
{
return (u64)GENMASK(ftrp->shift + ftrp->width - 1, ftrp->shift);
--
2.21.0
* [PATCH v3 2/3] KVM: arm64: limit PMU version to ARMv8.4
2020-01-02 12:39 [PATCH v3 0/3] arm64: perf: Add support for ARMv8.5-PMU 64-bit counters Andrew Murray
2020-01-02 12:39 ` [PATCH v3 1/3] arm64: cpufeature: Extract capped fields Andrew Murray
@ 2020-01-02 12:39 ` Andrew Murray
2020-01-20 17:44 ` Will Deacon
2020-01-20 17:55 ` Marc Zyngier
2020-01-02 12:39 ` [PATCH v3 3/3] arm64: perf: Add support for ARMv8.5-PMU 64-bit counters Andrew Murray
2 siblings, 2 replies; 11+ messages in thread
From: Andrew Murray @ 2020-01-02 12:39 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Marc Zyngier, Mark Rutland
Cc: kvmarm, linux-arm-kernel
ARMv8.5-PMU introduces 64-bit event counters, however KVM doesn't yet
support this. Let's trap the Debug Feature Registers in order to limit
PMUVer/PerfMon in the Debug Feature Registers to PMUv3 for ARMv8.4.
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
arch/arm64/include/asm/sysreg.h | 4 ++++
arch/arm64/kvm/sys_regs.c | 36 +++++++++++++++++++++++++++++++--
2 files changed, 38 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 6e919fafb43d..1b74f275a115 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -672,6 +672,10 @@
#define ID_AA64DFR0_TRACEVER_SHIFT 4
#define ID_AA64DFR0_DEBUGVER_SHIFT 0
+#define ID_DFR0_PERFMON_SHIFT 24
+
+#define ID_DFR0_EL1_PMUVER_8_4 5
+
#define ID_ISAR5_RDM_SHIFT 24
#define ID_ISAR5_CRC32_SHIFT 16
#define ID_ISAR5_SHA2_SHIFT 12
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 9f2165937f7d..61b984d934d1 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -668,6 +668,37 @@ static bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_EN);
}
+static bool access_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *rd)
+{
+ if (p->is_write)
+ return write_to_read_only(vcpu, p, rd);
+
+ /* Limit guests to PMUv3 for ARMv8.4 */
+ p->regval = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
+ p->regval = cpuid_feature_cap_signed_field_width(p->regval,
+ ID_AA64DFR0_PMUVER_SHIFT,
+ 4, ID_DFR0_EL1_PMUVER_8_4);
+
+ return p->regval;
+}
+
+static bool access_id_dfr0_el1(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *rd)
+{
+ if (p->is_write)
+ return write_to_read_only(vcpu, p, rd);
+
+ /* Limit guests to PMUv3 for ARMv8.4 */
+ p->regval = read_sanitised_ftr_reg(SYS_ID_DFR0_EL1);
+ p->regval = cpuid_feature_cap_signed_field_width(p->regval,
+ ID_DFR0_PERFMON_SHIFT,
+ 4, ID_DFR0_EL1_PMUVER_8_4);
+
+ return p->regval;
+}
+
static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
@@ -1409,7 +1440,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
/* CRm=1 */
ID_SANITISED(ID_PFR0_EL1),
ID_SANITISED(ID_PFR1_EL1),
- ID_SANITISED(ID_DFR0_EL1),
+ { SYS_DESC(SYS_ID_DFR0_EL1), access_id_dfr0_el1 },
+
ID_HIDDEN(ID_AFR0_EL1),
ID_SANITISED(ID_MMFR0_EL1),
ID_SANITISED(ID_MMFR1_EL1),
@@ -1448,7 +1480,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
ID_UNALLOCATED(4,7),
/* CRm=5 */
- ID_SANITISED(ID_AA64DFR0_EL1),
+ { SYS_DESC(SYS_ID_AA64DFR0_EL1), access_id_aa64dfr0_el1 },
ID_SANITISED(ID_AA64DFR1_EL1),
ID_UNALLOCATED(5,2),
ID_UNALLOCATED(5,3),
--
2.21.0
* [PATCH v3 3/3] arm64: perf: Add support for ARMv8.5-PMU 64-bit counters
2020-01-02 12:39 [PATCH v3 0/3] arm64: perf: Add support for ARMv8.5-PMU 64-bit counters Andrew Murray
2020-01-02 12:39 ` [PATCH v3 1/3] arm64: cpufeature: Extract capped fields Andrew Murray
2020-01-02 12:39 ` [PATCH v3 2/3] KVM: arm64: limit PMU version to ARMv8.4 Andrew Murray
@ 2020-01-02 12:39 ` Andrew Murray
2 siblings, 0 replies; 11+ messages in thread
From: Andrew Murray @ 2020-01-02 12:39 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Marc Zyngier, Mark Rutland
Cc: kvmarm, linux-arm-kernel
At present ARMv8 event counters are limited to 32 bits, though by
using the CHAIN event it's possible to combine adjacent counters to
achieve a 64-bit counter. The perf config1:0 bit can be set to request
such a configuration.
With the introduction of ARMv8.5-PMU support, all event counters can
now be used as 64-bit counters.
Let's enable 64-bit event counters where support exists. Unless the
user sets config1:0 we will adjust the counter value such that it
overflows upon 32-bit overflow. This follows the same behaviour as
the cycle counter, which has always been (and remains) 64 bits wide.
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
arch/arm64/include/asm/perf_event.h | 3 +-
arch/arm64/kernel/perf_event.c | 86 +++++++++++++++++++++++------
include/linux/perf/arm_pmu.h | 1 +
3 files changed, 72 insertions(+), 18 deletions(-)
diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h
index 2bdbc79bbd01..e7765b62c712 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -176,9 +176,10 @@
#define ARMV8_PMU_PMCR_X (1 << 4) /* Export to ETM */
#define ARMV8_PMU_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/
#define ARMV8_PMU_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */
+#define ARMV8_PMU_PMCR_LP (1 << 7) /* Long event counter enable */
#define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */
#define ARMV8_PMU_PMCR_N_MASK 0x1f
-#define ARMV8_PMU_PMCR_MASK 0x7f /* Mask for writable bits */
+#define ARMV8_PMU_PMCR_MASK 0xff /* Mask for writable bits */
/*
* PMOVSR: counters overflow flag status reg
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index e40b65645c86..4e27f90bb89e 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -285,6 +285,17 @@ static struct attribute_group armv8_pmuv3_format_attr_group = {
#define ARMV8_IDX_COUNTER_LAST(cpu_pmu) \
(ARMV8_IDX_CYCLE_COUNTER + cpu_pmu->num_events - 1)
+
+/*
+ * We unconditionally enable ARMv8.5-PMU long event counter support
+ * (64-bit events) where supported. Indicate if this arm_pmu has long
+ * event counter support.
+ */
+static bool armv8pmu_has_long_event(struct arm_pmu *cpu_pmu)
+{
+ return (cpu_pmu->pmuver > ID_DFR0_EL1_PMUVER_8_4);
+}
+
/*
* We must chain two programmable counters for 64 bit events,
* except when we have allocated the 64bit cycle counter (for CPU
@@ -294,9 +305,11 @@ static struct attribute_group armv8_pmuv3_format_attr_group = {
static inline bool armv8pmu_event_is_chained(struct perf_event *event)
{
int idx = event->hw.idx;
+ struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
return !WARN_ON(idx < 0) &&
armv8pmu_event_is_64bit(event) &&
+ !armv8pmu_has_long_event(cpu_pmu) &&
(idx != ARMV8_IDX_CYCLE_COUNTER);
}
@@ -345,7 +358,7 @@ static inline void armv8pmu_select_counter(int idx)
isb();
}
-static inline u32 armv8pmu_read_evcntr(int idx)
+static inline u64 armv8pmu_read_evcntr(int idx)
{
armv8pmu_select_counter(idx);
return read_sysreg(pmxevcntr_el0);
@@ -362,6 +375,44 @@ static inline u64 armv8pmu_read_hw_counter(struct perf_event *event)
return val;
}
+/*
+ * The cycle counter is always a 64-bit counter. When ARMV8_PMU_PMCR_LP
+ * is set the event counters also become 64-bit counters. Unless the
+ * user has requested a long counter (attr.config1) then we want to
+ * interrupt upon 32-bit overflow - we achieve this by applying a bias.
+ */
+static bool armv8pmu_event_needs_bias(struct perf_event *event)
+{
+ struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
+ struct hw_perf_event *hwc = &event->hw;
+ int idx = hwc->idx;
+
+ if (armv8pmu_event_is_64bit(event))
+ return false;
+
+ if (armv8pmu_has_long_event(cpu_pmu) ||
+ idx == ARMV8_IDX_CYCLE_COUNTER)
+ return true;
+
+ return false;
+}
+
+static u64 armv8pmu_bias_long_counter(struct perf_event *event, u64 value)
+{
+ if (armv8pmu_event_needs_bias(event))
+ value |= GENMASK(63, 32);
+
+ return value;
+}
+
+static u64 armv8pmu_unbias_long_counter(struct perf_event *event, u64 value)
+{
+ if (armv8pmu_event_needs_bias(event))
+ value &= ~GENMASK(63, 32);
+
+ return value;
+}
+
static u64 armv8pmu_read_counter(struct perf_event *event)
{
struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
@@ -377,10 +428,10 @@ static u64 armv8pmu_read_counter(struct perf_event *event)
else
value = armv8pmu_read_hw_counter(event);
- return value;
+ return armv8pmu_unbias_long_counter(event, value);
}
-static inline void armv8pmu_write_evcntr(int idx, u32 value)
+static inline void armv8pmu_write_evcntr(int idx, u64 value)
{
armv8pmu_select_counter(idx);
write_sysreg(value, pmxevcntr_el0);
@@ -405,20 +456,14 @@ static void armv8pmu_write_counter(struct perf_event *event, u64 value)
struct hw_perf_event *hwc = &event->hw;
int idx = hwc->idx;
+ value = armv8pmu_bias_long_counter(event, value);
+
if (!armv8pmu_counter_valid(cpu_pmu, idx))
pr_err("CPU%u writing wrong counter %d\n",
smp_processor_id(), idx);
- else if (idx == ARMV8_IDX_CYCLE_COUNTER) {
- /*
- * The cycles counter is really a 64-bit counter.
- * When treating it as a 32-bit counter, we only count
- * the lower 32 bits, and set the upper 32-bits so that
- * we get an interrupt upon 32-bit overflow.
- */
- if (!armv8pmu_event_is_64bit(event))
- value |= 0xffffffff00000000ULL;
+ else if (idx == ARMV8_IDX_CYCLE_COUNTER)
write_sysreg(value, pmccntr_el0);
- } else
+ else
armv8pmu_write_hw_counter(event, value);
}
@@ -743,7 +788,8 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
/*
* Otherwise use events counters
*/
- if (armv8pmu_event_is_64bit(event))
+ if (armv8pmu_event_is_64bit(event) &&
+ !armv8pmu_has_long_event(cpu_pmu))
return armv8pmu_get_chain_idx(cpuc, cpu_pmu);
else
return armv8pmu_get_single_idx(cpuc, cpu_pmu);
@@ -815,7 +861,7 @@ static int armv8pmu_filter_match(struct perf_event *event)
static void armv8pmu_reset(void *info)
{
struct arm_pmu *cpu_pmu = (struct arm_pmu *)info;
- u32 idx, nb_cnt = cpu_pmu->num_events;
+ u32 idx, pmcr, nb_cnt = cpu_pmu->num_events;
/* The counter and interrupt enable registers are unknown at reset. */
for (idx = ARMV8_IDX_CYCLE_COUNTER; idx < nb_cnt; ++idx) {
@@ -830,8 +876,13 @@ static void armv8pmu_reset(void *info)
* Initialize & Reset PMNC. Request overflow interrupt for
* 64 bit cycle counter but cheat in armv8pmu_write_counter().
*/
- armv8pmu_pmcr_write(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C |
- ARMV8_PMU_PMCR_LC);
+ pmcr = ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_LC;
+
+ /* Enable long event counter support where available */
+ if (armv8pmu_has_long_event(cpu_pmu))
+ pmcr |= ARMV8_PMU_PMCR_LP;
+
+ armv8pmu_pmcr_write(pmcr);
}
static int __armv8_pmuv3_map_event(struct perf_event *event,
@@ -914,6 +965,7 @@ static void __armv8pmu_probe_pmu(void *info)
if (pmuver == 0xf || pmuver == 0)
return;
+ cpu_pmu->pmuver = pmuver;
probe->present = true;
/* Read the nb of CNTx counters supported from PMNC */
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 71f525a35ac2..5b616dde9a4c 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -80,6 +80,7 @@ struct arm_pmu {
struct pmu pmu;
cpumask_t supported_cpus;
char *name;
+ int pmuver;
irqreturn_t (*handle_irq)(struct arm_pmu *pmu);
void (*enable)(struct perf_event *event);
void (*disable)(struct perf_event *event);
--
2.21.0
* Re: [PATCH v3 1/3] arm64: cpufeature: Extract capped fields
2020-01-02 12:39 ` [PATCH v3 1/3] arm64: cpufeature: Extract capped fields Andrew Murray
@ 2020-01-02 16:22 ` Suzuki Kuruppassery Poulose
0 siblings, 0 replies; 11+ messages in thread
From: Suzuki Kuruppassery Poulose @ 2020-01-02 16:22 UTC (permalink / raw)
To: Andrew Murray, Catalin Marinas, Will Deacon, Marc Zyngier, Mark Rutland
Cc: kvmarm, linux-arm-kernel
On 02/01/2020 12:39, Andrew Murray wrote:
> When emulating ID registers there is often a need to cap the version
> bits of a feature such that the guest will not use features that do
> not yet exist.
>
> Let's add a helper that extracts a field and caps the version to a
> given value.
>
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
* Re: [PATCH v3 2/3] KVM: arm64: limit PMU version to ARMv8.4
2020-01-02 12:39 ` [PATCH v3 2/3] KVM: arm64: limit PMU version to ARMv8.4 Andrew Murray
@ 2020-01-20 17:44 ` Will Deacon
2020-01-21 11:28 ` Andrew Murray
2020-01-20 17:55 ` Marc Zyngier
1 sibling, 1 reply; 11+ messages in thread
From: Will Deacon @ 2020-01-20 17:44 UTC (permalink / raw)
To: Andrew Murray; +Cc: Catalin Marinas, Marc Zyngier, kvmarm, linux-arm-kernel
On Thu, Jan 02, 2020 at 12:39:04PM +0000, Andrew Murray wrote:
> ARMv8.5-PMU introduces 64-bit event counters, however KVM doesn't yet
> support this. Let's trap the Debug Feature Registers in order to limit
> PMUVer/PerfMon in the Debug Feature Registers to PMUv3 for ARMv8.4.
>
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
> arch/arm64/include/asm/sysreg.h | 4 ++++
> arch/arm64/kvm/sys_regs.c | 36 +++++++++++++++++++++++++++++++--
> 2 files changed, 38 insertions(+), 2 deletions(-)
I'll need an ack from the kvm side for this.
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 6e919fafb43d..1b74f275a115 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -672,6 +672,10 @@
> #define ID_AA64DFR0_TRACEVER_SHIFT 4
> #define ID_AA64DFR0_DEBUGVER_SHIFT 0
>
> +#define ID_DFR0_PERFMON_SHIFT 24
> +
> +#define ID_DFR0_EL1_PMUVER_8_4 5
> +
> #define ID_ISAR5_RDM_SHIFT 24
> #define ID_ISAR5_CRC32_SHIFT 16
> #define ID_ISAR5_SHA2_SHIFT 12
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 9f2165937f7d..61b984d934d1 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -668,6 +668,37 @@ static bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
> return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_EN);
> }
>
> +static bool access_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
> + struct sys_reg_params *p,
> + const struct sys_reg_desc *rd)
> +{
> + if (p->is_write)
> + return write_to_read_only(vcpu, p, rd);
> +
> + /* Limit guests to PMUv3 for ARMv8.4 */
> + p->regval = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
> + p->regval = cpuid_feature_cap_signed_field_width(p->regval,
> + ID_AA64DFR0_PMUVER_SHIFT,
> + 4, ID_DFR0_EL1_PMUVER_8_4);
nit: I'd probably have a separate define for the field value of the 64-bit
register, since there's no guarantee other values will be encoded the same
way. (i.e. add ID_AA64DFR0_PMUVER_8_4 as well).
> +
> + return p->regval;
> +}
> +
> +static bool access_id_dfr0_el1(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> + const struct sys_reg_desc *rd)
> +{
> + if (p->is_write)
> + return write_to_read_only(vcpu, p, rd);
> +
> + /* Limit guests to PMUv3 for ARMv8.4 */
> + p->regval = read_sanitised_ftr_reg(SYS_ID_DFR0_EL1);
> + p->regval = cpuid_feature_cap_signed_field_width(p->regval,
You could just return the result here (same above).
Will
* Re: [PATCH v3 2/3] KVM: arm64: limit PMU version to ARMv8.4
2020-01-02 12:39 ` [PATCH v3 2/3] KVM: arm64: limit PMU version to ARMv8.4 Andrew Murray
2020-01-20 17:44 ` Will Deacon
@ 2020-01-20 17:55 ` Marc Zyngier
2020-01-21 9:04 ` Will Deacon
2020-01-21 11:24 ` Andrew Murray
1 sibling, 2 replies; 11+ messages in thread
From: Marc Zyngier @ 2020-01-20 17:55 UTC (permalink / raw)
To: Andrew Murray; +Cc: Catalin Marinas, Will Deacon, kvmarm, linux-arm-kernel
On 2020-01-02 12:39, Andrew Murray wrote:
> ARMv8.5-PMU introduces 64-bit event counters, however KVM doesn't yet
> support this. Let's trap the Debug Feature Registers in order to limit
> PMUVer/PerfMon in the Debug Feature Registers to PMUv3 for ARMv8.4.
>
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
> arch/arm64/include/asm/sysreg.h | 4 ++++
> arch/arm64/kvm/sys_regs.c | 36 +++++++++++++++++++++++++++++++--
> 2 files changed, 38 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/sysreg.h
> b/arch/arm64/include/asm/sysreg.h
> index 6e919fafb43d..1b74f275a115 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -672,6 +672,10 @@
> #define ID_AA64DFR0_TRACEVER_SHIFT 4
> #define ID_AA64DFR0_DEBUGVER_SHIFT 0
>
> +#define ID_DFR0_PERFMON_SHIFT 24
> +
> +#define ID_DFR0_EL1_PMUVER_8_4 5
> +
> #define ID_ISAR5_RDM_SHIFT 24
> #define ID_ISAR5_CRC32_SHIFT 16
> #define ID_ISAR5_SHA2_SHIFT 12
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 9f2165937f7d..61b984d934d1 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -668,6 +668,37 @@ static bool
> pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
> return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_ER |
> ARMV8_PMU_USERENR_EN);
> }
>
> +static bool access_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
> + struct sys_reg_params *p,
> + const struct sys_reg_desc *rd)
> +{
> + if (p->is_write)
> + return write_to_read_only(vcpu, p, rd);
> +
> + /* Limit guests to PMUv3 for ARMv8.4 */
> + p->regval = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
> + p->regval = cpuid_feature_cap_signed_field_width(p->regval,
> + ID_AA64DFR0_PMUVER_SHIFT,
> + 4, ID_DFR0_EL1_PMUVER_8_4);
> +
> + return p->regval;
It feels very odd to return the register value in place of something
that actually indicates whether we should update the PC or not. I have
no idea what is happening here in this case.
> +}
> +
> +static bool access_id_dfr0_el1(struct kvm_vcpu *vcpu, struct
> sys_reg_params *p,
> + const struct sys_reg_desc *rd)
> +{
> + if (p->is_write)
> + return write_to_read_only(vcpu, p, rd);
> +
> + /* Limit guests to PMUv3 for ARMv8.4 */
> + p->regval = read_sanitised_ftr_reg(SYS_ID_DFR0_EL1);
> + p->regval = cpuid_feature_cap_signed_field_width(p->regval,
> + ID_DFR0_PERFMON_SHIFT,
> + 4, ID_DFR0_EL1_PMUVER_8_4);
> +
> + return p->regval;
Same here.
> +}
> +
> static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params
> *p,
> const struct sys_reg_desc *r)
> {
> @@ -1409,7 +1440,8 @@ static const struct sys_reg_desc sys_reg_descs[]
> = {
> /* CRm=1 */
> ID_SANITISED(ID_PFR0_EL1),
> ID_SANITISED(ID_PFR1_EL1),
> - ID_SANITISED(ID_DFR0_EL1),
> + { SYS_DESC(SYS_ID_DFR0_EL1), access_id_dfr0_el1 },
How about the .get_user and .set_user accessors that were provided by
ID_SANITISED and that are now dropped? You should probably define a
new wrapper that allows you to override the .access method.
> +
> ID_HIDDEN(ID_AFR0_EL1),
> ID_SANITISED(ID_MMFR0_EL1),
> ID_SANITISED(ID_MMFR1_EL1),
> @@ -1448,7 +1480,7 @@ static const struct sys_reg_desc sys_reg_descs[]
> = {
> ID_UNALLOCATED(4,7),
>
> /* CRm=5 */
> - ID_SANITISED(ID_AA64DFR0_EL1),
> + { SYS_DESC(SYS_ID_AA64DFR0_EL1), access_id_aa64dfr0_el1 },
> ID_SANITISED(ID_AA64DFR1_EL1),
> ID_UNALLOCATED(5,2),
> ID_UNALLOCATED(5,3),
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* Re: [PATCH v3 2/3] KVM: arm64: limit PMU version to ARMv8.4
2020-01-20 17:55 ` Marc Zyngier
@ 2020-01-21 9:04 ` Will Deacon
2020-01-21 11:18 ` Andrew Murray
2020-01-21 11:24 ` Andrew Murray
1 sibling, 1 reply; 11+ messages in thread
From: Will Deacon @ 2020-01-21 9:04 UTC (permalink / raw)
To: Marc Zyngier; +Cc: Catalin Marinas, kvmarm, linux-arm-kernel
On Mon, Jan 20, 2020 at 05:55:17PM +0000, Marc Zyngier wrote:
> On 2020-01-02 12:39, Andrew Murray wrote:
> > ARMv8.5-PMU introduces 64-bit event counters, however KVM doesn't yet
> > support this. Let's trap the Debug Feature Registers in order to limit
> > PMUVer/PerfMon in the Debug Feature Registers to PMUv3 for ARMv8.4.
> >
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> > ---
> > arch/arm64/include/asm/sysreg.h | 4 ++++
> > arch/arm64/kvm/sys_regs.c | 36 +++++++++++++++++++++++++++++++--
> > 2 files changed, 38 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/sysreg.h
> > b/arch/arm64/include/asm/sysreg.h
> > index 6e919fafb43d..1b74f275a115 100644
> > --- a/arch/arm64/include/asm/sysreg.h
> > +++ b/arch/arm64/include/asm/sysreg.h
> > @@ -672,6 +672,10 @@
> > #define ID_AA64DFR0_TRACEVER_SHIFT 4
> > #define ID_AA64DFR0_DEBUGVER_SHIFT 0
> >
> > +#define ID_DFR0_PERFMON_SHIFT 24
> > +
> > +#define ID_DFR0_EL1_PMUVER_8_4 5
> > +
> > #define ID_ISAR5_RDM_SHIFT 24
> > #define ID_ISAR5_CRC32_SHIFT 16
> > #define ID_ISAR5_SHA2_SHIFT 12
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 9f2165937f7d..61b984d934d1 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -668,6 +668,37 @@ static bool
> > pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
> > return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_ER |
> > ARMV8_PMU_USERENR_EN);
> > }
> >
> > +static bool access_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
> > + struct sys_reg_params *p,
> > + const struct sys_reg_desc *rd)
> > +{
> > + if (p->is_write)
> > + return write_to_read_only(vcpu, p, rd);
> > +
> > + /* Limit guests to PMUv3 for ARMv8.4 */
> > + p->regval = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
> > + p->regval = cpuid_feature_cap_signed_field_width(p->regval,
> > + ID_AA64DFR0_PMUVER_SHIFT,
> > + 4, ID_DFR0_EL1_PMUVER_8_4);
> > +
> > + return p->regval;
>
> > It feels very odd to return the register value in place of something
> > that actually indicates whether we should update the PC or not. I have
> > no idea what is happening here in this case.
Crikey, yes, I missed that and it probably explains why the code looks so
odd. Andrew -- is there a missing hunk or something here?
Will
* Re: [PATCH v3 2/3] KVM: arm64: limit PMU version to ARMv8.4
2020-01-21 9:04 ` Will Deacon
@ 2020-01-21 11:18 ` Andrew Murray
0 siblings, 0 replies; 11+ messages in thread
From: Andrew Murray @ 2020-01-21 11:18 UTC (permalink / raw)
To: Will Deacon; +Cc: Marc Zyngier, Catalin Marinas, kvmarm, linux-arm-kernel
On Tue, Jan 21, 2020 at 09:04:21AM +0000, Will Deacon wrote:
> On Mon, Jan 20, 2020 at 05:55:17PM +0000, Marc Zyngier wrote:
> > On 2020-01-02 12:39, Andrew Murray wrote:
> > > ARMv8.5-PMU introduces 64-bit event counters, however KVM doesn't yet
> > > support this. Let's trap the Debug Feature Registers in order to limit
> > > PMUVer/PerfMon in the Debug Feature Registers to PMUv3 for ARMv8.4.
> > >
> > > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > > Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> > > ---
> > > arch/arm64/include/asm/sysreg.h | 4 ++++
> > > arch/arm64/kvm/sys_regs.c | 36 +++++++++++++++++++++++++++++++--
> > > 2 files changed, 38 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/arch/arm64/include/asm/sysreg.h
> > > b/arch/arm64/include/asm/sysreg.h
> > > index 6e919fafb43d..1b74f275a115 100644
> > > --- a/arch/arm64/include/asm/sysreg.h
> > > +++ b/arch/arm64/include/asm/sysreg.h
> > > @@ -672,6 +672,10 @@
> > > #define ID_AA64DFR0_TRACEVER_SHIFT 4
> > > #define ID_AA64DFR0_DEBUGVER_SHIFT 0
> > >
> > > +#define ID_DFR0_PERFMON_SHIFT 24
> > > +
> > > +#define ID_DFR0_EL1_PMUVER_8_4 5
> > > +
> > > #define ID_ISAR5_RDM_SHIFT 24
> > > #define ID_ISAR5_CRC32_SHIFT 16
> > > #define ID_ISAR5_SHA2_SHIFT 12
> > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > index 9f2165937f7d..61b984d934d1 100644
> > > --- a/arch/arm64/kvm/sys_regs.c
> > > +++ b/arch/arm64/kvm/sys_regs.c
> > > @@ -668,6 +668,37 @@ static bool
> > > pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
> > > return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_ER |
> > > ARMV8_PMU_USERENR_EN);
> > > }
> > >
> > > +static bool access_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
> > > + struct sys_reg_params *p,
> > > + const struct sys_reg_desc *rd)
> > > +{
> > > + if (p->is_write)
> > > + return write_to_read_only(vcpu, p, rd);
> > > +
> > > + /* Limit guests to PMUv3 for ARMv8.4 */
> > > + p->regval = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
> > > + p->regval = cpuid_feature_cap_signed_field_width(p->regval,
> > > + ID_AA64DFR0_PMUVER_SHIFT,
> > > + 4, ID_DFR0_EL1_PMUVER_8_4);
> > > +
> > > + return p->regval;
> >
> > It feels very odd to return the register value in place of something
> > that actually indicates whether we should update the PC or not. I have
> > no idea what is happening here in this case.
>
> Crikey, yes, I missed that and it probably explains why the code looks so
> odd. Andrew -- is there a missing hunk or something here?
Doh, it should always return true.
Nothing missing here - sometimes I also look at my own code and have no
idea what I was thinking.
Thanks,
Andrew Murray
>
> Will
* Re: [PATCH v3 2/3] KVM: arm64: limit PMU version to ARMv8.4
2020-01-20 17:55 ` Marc Zyngier
2020-01-21 9:04 ` Will Deacon
@ 2020-01-21 11:24 ` Andrew Murray
1 sibling, 0 replies; 11+ messages in thread
From: Andrew Murray @ 2020-01-21 11:24 UTC (permalink / raw)
To: Marc Zyngier; +Cc: Catalin Marinas, Will Deacon, kvmarm, linux-arm-kernel
On Mon, Jan 20, 2020 at 05:55:17PM +0000, Marc Zyngier wrote:
> On 2020-01-02 12:39, Andrew Murray wrote:
> > ARMv8.5-PMU introduces 64-bit event counters, however KVM doesn't yet
> > support this. Let's trap the Debug Feature Registers in order to limit
> > PMUVer/PerfMon in the Debug Feature Registers to PMUv3 for ARMv8.4.
> >
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> > ---
> > arch/arm64/include/asm/sysreg.h | 4 ++++
> > arch/arm64/kvm/sys_regs.c | 36 +++++++++++++++++++++++++++++++--
> > 2 files changed, 38 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/sysreg.h
> > b/arch/arm64/include/asm/sysreg.h
> > index 6e919fafb43d..1b74f275a115 100644
> > --- a/arch/arm64/include/asm/sysreg.h
> > +++ b/arch/arm64/include/asm/sysreg.h
> > @@ -672,6 +672,10 @@
> > #define ID_AA64DFR0_TRACEVER_SHIFT 4
> > #define ID_AA64DFR0_DEBUGVER_SHIFT 0
> >
> > +#define ID_DFR0_PERFMON_SHIFT 24
> > +
> > +#define ID_DFR0_EL1_PMUVER_8_4 5
> > +
> > #define ID_ISAR5_RDM_SHIFT 24
> > #define ID_ISAR5_CRC32_SHIFT 16
> > #define ID_ISAR5_SHA2_SHIFT 12
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 9f2165937f7d..61b984d934d1 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -668,6 +668,37 @@ static bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
> > return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_EN);
> > }
> >
> > +static bool access_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
> > + struct sys_reg_params *p,
> > + const struct sys_reg_desc *rd)
> > +{
> > + if (p->is_write)
> > + return write_to_read_only(vcpu, p, rd);
> > +
> > + /* Limit guests to PMUv3 for ARMv8.4 */
> > + p->regval = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
> > + p->regval = cpuid_feature_cap_signed_field_width(p->regval,
> > + ID_AA64DFR0_PMUVER_SHIFT,
> > + 4, ID_DFR0_EL1_PMUVER_8_4);
> > +
> > + return p->regval;
>
> It feels very odd to return the register value in place of something
> that actually indicates whether we should update the PC or not. I have
> no idea what is happening here in this case.
This should have returned true. I have no idea why I did this.
>
> > +}
> > +
> > +static bool access_id_dfr0_el1(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> > + const struct sys_reg_desc *rd)
> > +{
> > + if (p->is_write)
> > + return write_to_read_only(vcpu, p, rd);
> > +
> > + /* Limit guests to PMUv3 for ARMv8.4 */
> > + p->regval = read_sanitised_ftr_reg(SYS_ID_DFR0_EL1);
> > + p->regval = cpuid_feature_cap_signed_field_width(p->regval,
> > + ID_DFR0_PERFMON_SHIFT,
> > + 4, ID_DFR0_EL1_PMUVER_8_4);
> > +
> > + return p->regval;
>
> Same here.
>
> > +}
> > +
> > static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> > const struct sys_reg_desc *r)
> > {
> > @@ -1409,7 +1440,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> > /* CRm=1 */
> > ID_SANITISED(ID_PFR0_EL1),
> > ID_SANITISED(ID_PFR1_EL1),
> > - ID_SANITISED(ID_DFR0_EL1),
> > + { SYS_DESC(SYS_ID_DFR0_EL1), access_id_dfr0_el1 },
>
> How about the .get_user and .set_user accessors that were provided by
> ID_SANITISED and that are now dropped? You should probably define a
> new wrapper that allows you to override the .access method.
Yes I can do that, thus ensuring we continue to return sanitised values
rather than the current vcpu value.
However, should I also update read_id_reg, thus ensuring the host sees
the same value that the guest sees? (I see this already does something
similar with AA64PFR0 and AA64ISAR1.)
Thanks,
Andrew Murray
>
> > +
> > ID_HIDDEN(ID_AFR0_EL1),
> > ID_SANITISED(ID_MMFR0_EL1),
> > ID_SANITISED(ID_MMFR1_EL1),
> > @@ -1448,7 +1480,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> > ID_UNALLOCATED(4,7),
> >
> > /* CRm=5 */
> > - ID_SANITISED(ID_AA64DFR0_EL1),
> > + { SYS_DESC(SYS_ID_AA64DFR0_EL1), access_id_aa64dfr0_el1 },
> > ID_SANITISED(ID_AA64DFR1_EL1),
> > ID_UNALLOCATED(5,2),
> > ID_UNALLOCATED(5,3),
>
> Thanks,
>
> M.
> --
> Jazz is not dead. It just smells funny...
* Re: [PATCH v3 2/3] KVM: arm64: limit PMU version to ARMv8.4
2020-01-20 17:44 ` Will Deacon
@ 2020-01-21 11:28 ` Andrew Murray
0 siblings, 0 replies; 11+ messages in thread
From: Andrew Murray @ 2020-01-21 11:28 UTC (permalink / raw)
To: Will Deacon; +Cc: Catalin Marinas, Marc Zyngier, kvmarm, linux-arm-kernel
On Mon, Jan 20, 2020 at 05:44:33PM +0000, Will Deacon wrote:
> On Thu, Jan 02, 2020 at 12:39:04PM +0000, Andrew Murray wrote:
> > ARMv8.5-PMU introduces 64-bit event counters, however KVM doesn't yet
> > support this. Let's trap the Debug Feature Registers in order to limit
> > PMUVer/PerfMon in the Debug Feature Registers to PMUv3 for ARMv8.4.
> >
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> > ---
> > arch/arm64/include/asm/sysreg.h | 4 ++++
> > arch/arm64/kvm/sys_regs.c | 36 +++++++++++++++++++++++++++++++--
> > 2 files changed, 38 insertions(+), 2 deletions(-)
>
> I'll need an ack from the kvm side for this.
>
> > diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> > index 6e919fafb43d..1b74f275a115 100644
> > --- a/arch/arm64/include/asm/sysreg.h
> > +++ b/arch/arm64/include/asm/sysreg.h
> > @@ -672,6 +672,10 @@
> > #define ID_AA64DFR0_TRACEVER_SHIFT 4
> > #define ID_AA64DFR0_DEBUGVER_SHIFT 0
> >
> > +#define ID_DFR0_PERFMON_SHIFT 24
> > +
> > +#define ID_DFR0_EL1_PMUVER_8_4 5
> > +
> > #define ID_ISAR5_RDM_SHIFT 24
> > #define ID_ISAR5_CRC32_SHIFT 16
> > #define ID_ISAR5_SHA2_SHIFT 12
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 9f2165937f7d..61b984d934d1 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -668,6 +668,37 @@ static bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
> > return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_EN);
> > }
> >
> > +static bool access_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
> > + struct sys_reg_params *p,
> > + const struct sys_reg_desc *rd)
> > +{
> > + if (p->is_write)
> > + return write_to_read_only(vcpu, p, rd);
> > +
> > + /* Limit guests to PMUv3 for ARMv8.4 */
> > + p->regval = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
> > + p->regval = cpuid_feature_cap_signed_field_width(p->regval,
> > + ID_AA64DFR0_PMUVER_SHIFT,
> > + 4, ID_DFR0_EL1_PMUVER_8_4);
>
> nit: I'd probably have a separate define for the field value of the 64-bit
> register, since there's no guarantee other values will be encoded the same
> way. (i.e. add ID_AA64DFR0_PMUVER_8_4 as well).
Yes that seems reasonable, I'll update it.
>
> > +
> > + return p->regval;
> > +}
> > +
> > +static bool access_id_dfr0_el1(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> > + const struct sys_reg_desc *rd)
> > +{
> > + if (p->is_write)
> > + return write_to_read_only(vcpu, p, rd);
> > +
> > + /* Limit guests to PMUv3 for ARMv8.4 */
> > + p->regval = read_sanitised_ftr_reg(SYS_ID_DFR0_EL1);
> > + p->regval = cpuid_feature_cap_signed_field_width(p->regval,
>
> You could just return the result here (same above).
Or perhaps a bool - sigh.
Thanks,
Andrew Murray
>
> Will
end of thread, other threads:[~2020-01-21 11:28 UTC | newest]
Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-02 12:39 [PATCH v3 0/3] arm64: perf: Add support for ARMv8.5-PMU 64-bit counters Andrew Murray
2020-01-02 12:39 ` [PATCH v3 1/3] arm64: cpufeature: Extract capped fields Andrew Murray
2020-01-02 16:22 ` Suzuki Kuruppassery Poulose
2020-01-02 12:39 ` [PATCH v3 2/3] KVM: arm64: limit PMU version to ARMv8.4 Andrew Murray
2020-01-20 17:44 ` Will Deacon
2020-01-21 11:28 ` Andrew Murray
2020-01-20 17:55 ` Marc Zyngier
2020-01-21 9:04 ` Will Deacon
2020-01-21 11:18 ` Andrew Murray
2020-01-21 11:24 ` Andrew Murray
2020-01-02 12:39 ` [PATCH v3 3/3] arm64: perf: Add support for ARMv8.5-PMU 64-bit counters Andrew Murray