* [PATCH 0/3] kvm-arm64: Fix PMU reset values (and more)
From: Marc Zyngier @ 2021-07-13 13:58 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei,
	Alexandre Chartre, Robin Murphy, kernel-team

Hi all,

After some back and forth with Alexandre about patch #3 of this
series, it became apparent that several PMU code paths perform
unnecessary masking, only to hide the fact that some of the PMU
register reset values are not architecturally compliant (RES0 bits
get set, among other things).

The first patch of this series addresses the reset value problem, the
second one rids us of the pointless masking, and Alexandre's patch
(which depends on the first two) is slapped on top, with a small
cosmetic change.

Thanks,

	M.

Alexandre Chartre (1):
  KVM: arm64: Disabling disabled PMU counters wastes a lot of time

Marc Zyngier (2):
  KVM: arm64: Narrow PMU sysreg reset values to architectural
    requirements
  KVM: arm64: Drop unnecessary masking of PMU registers

 arch/arm64/kvm/pmu-emul.c |  8 +++---
 arch/arm64/kvm/sys_regs.c | 52 ++++++++++++++++++++++++++++++++++-----
 2 files changed, 50 insertions(+), 10 deletions(-)

-- 
2.30.2



* [PATCH 1/3] KVM: arm64: Narrow PMU sysreg reset values to architectural requirements
From: Marc Zyngier @ 2021-07-13 13:58 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei,
	Alexandre Chartre, Robin Murphy, kernel-team

A number of the PMU sysregs expose reset values that are not
compliant with the architecture (set bits in the RES0 ranges,
for example).

This in turn forces us to pointlessly mask some registers when
using them.

Let's start by making sure we don't have illegal values in the
shadow registers at reset time. This affects all the registers
that dedicate one bit per counter, the counters themselves,
PMEVTYPERn_EL0 and PMSELR_EL0.

Reported-by: Alexandre Chartre <alexandre.chartre@oracle.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/sys_regs.c | 46 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 43 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f6f126eb6ac1..95ccb8f45409 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -603,6 +603,44 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
 	return REG_HIDDEN;
 }
 
+static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u64 n, mask;
+
+	/* No PMU available, any PMU reg may UNDEF... */
+	if (!kvm_arm_support_pmu_v3())
+		return;
+
+	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
+	n &= ARMV8_PMU_PMCR_N_MASK;
+
+	reset_unknown(vcpu, r);
+
+	mask = BIT(ARMV8_PMU_CYCLE_IDX);
+	if (n)
+		mask |= GENMASK(n - 1, 0);
+
+	__vcpu_sys_reg(vcpu, r->reg) &= mask;
+}
+
+static void reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	reset_unknown(vcpu, r);
+	__vcpu_sys_reg(vcpu, r->reg) &= GENMASK(31, 0);
+}
+
+static void reset_pmevtyper(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	reset_unknown(vcpu, r);
+	__vcpu_sys_reg(vcpu, r->reg) &= ARMV8_PMU_EVTYPE_MASK;
+}
+
+static void reset_pmselr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	reset_unknown(vcpu, r);
+	__vcpu_sys_reg(vcpu, r->reg) &= ARMV8_PMU_COUNTER_MASK;
+}
+
 static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	u64 pmcr, val;
@@ -944,16 +982,18 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	  trap_wcr, reset_wcr, 0, 0,  get_wcr, set_wcr }
 
 #define PMU_SYS_REG(r)						\
-	SYS_DESC(r), .reset = reset_unknown, .visibility = pmu_visibility
+	SYS_DESC(r), .reset = reset_pmu_reg, .visibility = pmu_visibility
 
 /* Macro to expand the PMEVCNTRn_EL0 register */
 #define PMU_PMEVCNTR_EL0(n)						\
 	{ PMU_SYS_REG(SYS_PMEVCNTRn_EL0(n)),				\
+	  .reset = reset_pmevcntr,					\
 	  .access = access_pmu_evcntr, .reg = (PMEVCNTR0_EL0 + n), }
 
 /* Macro to expand the PMEVTYPERn_EL0 register */
 #define PMU_PMEVTYPER_EL0(n)						\
 	{ PMU_SYS_REG(SYS_PMEVTYPERn_EL0(n)),				\
+	  .reset = reset_pmevtyper,					\
 	  .access = access_pmu_evtyper, .reg = (PMEVTYPER0_EL0 + n), }
 
 static bool undef_access(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
@@ -1595,13 +1635,13 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ PMU_SYS_REG(SYS_PMSWINC_EL0),
 	  .access = access_pmswinc, .reg = PMSWINC_EL0 },
 	{ PMU_SYS_REG(SYS_PMSELR_EL0),
-	  .access = access_pmselr, .reg = PMSELR_EL0 },
+	  .access = access_pmselr, .reset = reset_pmselr, .reg = PMSELR_EL0 },
 	{ PMU_SYS_REG(SYS_PMCEID0_EL0),
 	  .access = access_pmceid, .reset = NULL },
 	{ PMU_SYS_REG(SYS_PMCEID1_EL0),
 	  .access = access_pmceid, .reset = NULL },
 	{ PMU_SYS_REG(SYS_PMCCNTR_EL0),
-	  .access = access_pmu_evcntr, .reg = PMCCNTR_EL0 },
+	  .access = access_pmu_evcntr, .reset = reset_unknown, .reg = PMCCNTR_EL0 },
 	{ PMU_SYS_REG(SYS_PMXEVTYPER_EL0),
 	  .access = access_pmu_evtyper, .reset = NULL },
 	{ PMU_SYS_REG(SYS_PMXEVCNTR_EL0),
-- 
2.30.2



* [PATCH 2/3] KVM: arm64: Drop unnecessary masking of PMU registers
From: Marc Zyngier @ 2021-07-13 13:58 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei,
	Alexandre Chartre, Robin Murphy, kernel-team

We always sanitise our PMU sysregs on the write side, so there
is no need to do it on the read side as well.

Drop the unnecessary masking.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/pmu-emul.c | 3 +--
 arch/arm64/kvm/sys_regs.c | 6 +++---
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index f33825c995cb..fae4e95b586c 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -373,7 +373,6 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
 		reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
 		reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
 		reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
-		reg &= kvm_pmu_valid_counter_mask(vcpu);
 	}
 
 	return reg;
@@ -569,7 +568,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 
 	if (val & ARMV8_PMU_PMCR_E) {
 		kvm_pmu_enable_counter_mask(vcpu,
-		       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
+		       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
 	} else {
 		kvm_pmu_disable_counter_mask(vcpu, mask);
 	}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 95ccb8f45409..7ead93a8d67f 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -883,7 +883,7 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			kvm_pmu_disable_counter_mask(vcpu, val);
 		}
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
+		p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
 	}
 
 	return true;
@@ -907,7 +907,7 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			/* accessing PMINTENCLR_EL1 */
 			__vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= ~val;
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, PMINTENSET_EL1) & mask;
+		p->regval = __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
 	}
 
 	return true;
@@ -929,7 +929,7 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			/* accessing PMOVSCLR_EL0 */
 			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~(p->regval & mask);
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, PMOVSSET_EL0) & mask;
+		p->regval = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
 	}
 
 	return true;
-- 
2.30.2



* [PATCH 3/3] KVM: arm64: Disabling disabled PMU counters wastes a lot of time
From: Marc Zyngier @ 2021-07-13 13:59 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei,
	Alexandre Chartre, Robin Murphy, kernel-team

From: Alexandre Chartre <alexandre.chartre@oracle.com>

In a KVM guest on arm64, performance counter interrupts have an
unnecessary overhead which slows down execution when using the "perf
record" command and limits the "perf record" sampling period.

The problem is that when a guest VM disables counters by clearing the
PMCR_EL0.E bit (bit 0), KVM will disable all counters defined in
PMCR_EL0 even if they are not enabled in PMCNTENSET_EL0.

KVM disables a counter by calling into the perf framework, in particular
by calling perf_event_create_kernel_counter() which is a time consuming
operation. So, for example, with a Neoverse N1 CPU core which has 6 event
counters and one cycle counter, KVM will always disable all 7 counters
even if only one is enabled.

This typically happens when using the "perf record" command in a guest
VM: perf disables all event counters via PMCNTENSET_EL0 and uses only
the cycle counter. And when using the "perf record" -F option with
a high profiling frequency, the overhead of KVM disabling all counters
instead of one on every counter interrupt becomes very noticeable.

The problem is fixed by having KVM disable only counters which are
enabled in PMCNTENSET_EL0. If a counter is not enabled in PMCNTENSET_EL0
then KVM will not enable it when setting PMCR_EL0.E, and it will remain
disabled as long as it is not enabled in PMCNTENSET_EL0. So there is
effectively no need to disable a counter when clearing PMCR_EL0.E if it
is not enabled in PMCNTENSET_EL0.

Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com>
[maz: moved 'mask' close to the actual user, simplifying the patch]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210712170345.660272-1-alexandre.chartre@oracle.com
---
 arch/arm64/kvm/pmu-emul.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index fae4e95b586c..dc65b58dc68f 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -563,20 +563,21 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
  */
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 {
-	unsigned long mask = kvm_pmu_valid_counter_mask(vcpu);
 	int i;
 
 	if (val & ARMV8_PMU_PMCR_E) {
 		kvm_pmu_enable_counter_mask(vcpu,
 		       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
 	} else {
-		kvm_pmu_disable_counter_mask(vcpu, mask);
+		kvm_pmu_disable_counter_mask(vcpu,
+		       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
 	}
 
 	if (val & ARMV8_PMU_PMCR_C)
 		kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0);
 
 	if (val & ARMV8_PMU_PMCR_P) {
+		unsigned long mask = kvm_pmu_valid_counter_mask(vcpu);
 		mask &= ~BIT(ARMV8_PMU_CYCLE_IDX);
 		for_each_set_bit(i, &mask, 32)
 			kvm_pmu_set_counter_value(vcpu, i, 0);
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 3/3] KVM: arm64: Disabling disabled PMU counters wastes a lot of time
@ 2021-07-13 13:59   ` Marc Zyngier
  0 siblings, 0 replies; 42+ messages in thread
From: Marc Zyngier @ 2021-07-13 13:59 UTC (permalink / raw)
  To: linux-arm-kernel, kvm, kvmarm; +Cc: kernel-team, Robin Murphy

From: Alexandre Chartre <alexandre.chartre@oracle.com>

In a KVM guest on arm64, performance counters interrupts have an
unnecessary overhead which slows down execution when using the "perf
record" command and limits the "perf record" sampling period.

The problem is that when a guest VM disables counters by clearing the
PMCR_EL0.E bit (bit 0), KVM will disable all counters defined in
PMCR_EL0 even if they are not enabled in PMCNTENSET_EL0.

KVM disables a counter by calling into the perf framework, in particular
by calling perf_event_create_kernel_counter() which is a time consuming
operation. So, for example, with a Neoverse N1 CPU core which has 6 event
counters and one cycle counter, KVM will always disable all 7 counters
even if only one is enabled.

This typically happens when using the "perf record" command in a guest
VM: perf will disable all event counters with PMCNTENTSET_EL0 and only
uses the cycle counter. And when using the "perf record" -F option with
a high profiling frequency, the overhead of KVM disabling all counters
instead of one on every counter interrupt becomes very noticeable.

The problem is fixed by having KVM disable only counters which are
enabled in PMCNTENSET_EL0. If a counter is not enabled in PMCNTENSET_EL0
then KVM will not enable it when setting PMCR_EL0.E and it will remain
disabled as long as it is not enabled in PMCNTENSET_EL0. So there is
effectively no need to disable a counter when clearing PMCR_EL0.E if it
is not enabled PMCNTENSET_EL0.

Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com>
[maz: moved 'mask' close to the actual user, simplifying the patch]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210712170345.660272-1-alexandre.chartre@oracle.com
---
 arch/arm64/kvm/pmu-emul.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index fae4e95b586c..dc65b58dc68f 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -563,20 +563,21 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
  */
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 {
-	unsigned long mask = kvm_pmu_valid_counter_mask(vcpu);
 	int i;
 
 	if (val & ARMV8_PMU_PMCR_E) {
 		kvm_pmu_enable_counter_mask(vcpu,
 		       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
 	} else {
-		kvm_pmu_disable_counter_mask(vcpu, mask);
+		kvm_pmu_disable_counter_mask(vcpu,
+		       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
 	}
 
 	if (val & ARMV8_PMU_PMCR_C)
 		kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0);
 
 	if (val & ARMV8_PMU_PMCR_P) {
+		unsigned long mask = kvm_pmu_valid_counter_mask(vcpu);
 		mask &= ~BIT(ARMV8_PMU_CYCLE_IDX);
 		for_each_set_bit(i, &mask, 32)
 			kvm_pmu_set_counter_value(vcpu, i, 0);
-- 
2.30.2

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [PATCH 1/3] KVM: arm64: Narrow PMU sysreg reset values to architectural requirements
  2021-07-13 13:58   ` Marc Zyngier
  (?)
@ 2021-07-13 14:39     ` Russell King (Oracle)
  -1 siblings, 0 replies; 42+ messages in thread
From: Russell King (Oracle) @ 2021-07-13 14:39 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, kvm, kvmarm, James Morse, Suzuki K Poulose,
	Alexandru Elisei, Alexandre Chartre, Robin Murphy, kernel-team

On Tue, Jul 13, 2021 at 02:58:58PM +0100, Marc Zyngier wrote:
> +static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 n, mask;
> +
> +	/* No PMU available, any PMU reg may UNDEF... */
> +	if (!kvm_arm_support_pmu_v3())
> +		return;
> +
> +	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
> +	n &= ARMV8_PMU_PMCR_N_MASK;
> +
> +	reset_unknown(vcpu, r);
> +
> +	mask = BIT(ARMV8_PMU_CYCLE_IDX);
> +	if (n)
> +		mask |= GENMASK(n - 1, 0);
> +
> +	__vcpu_sys_reg(vcpu, r->reg) &= mask;

Would this read more logically to structure it as:

	mask = BIT(ARMV8_PMU_CYCLE_IDX);

	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
	n &= ARMV8_PMU_PMCR_N_MASK;
	if (n)
		mask |= GENMASK(n - 1, 0);

	reset_unknown(vcpu, r);
	__vcpu_sys_reg(vcpu, r->reg) &= mask;

?

Thanks.

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 1/3] KVM: arm64: Narrow PMU sysreg reset values to architectural requirements
  2021-07-13 14:39     ` Russell King (Oracle)
  (?)
@ 2021-07-13 15:59       ` Marc Zyngier
  -1 siblings, 0 replies; 42+ messages in thread
From: Marc Zyngier @ 2021-07-13 15:59 UTC (permalink / raw)
  To: Russell King (Oracle)
  Cc: linux-arm-kernel, kvm, kvmarm, James Morse, Suzuki K Poulose,
	Alexandru Elisei, Alexandre Chartre, Robin Murphy, kernel-team

On Tue, 13 Jul 2021 15:39:49 +0100,
"Russell King (Oracle)" <linux@armlinux.org.uk> wrote:
> 
> On Tue, Jul 13, 2021 at 02:58:58PM +0100, Marc Zyngier wrote:
> > +static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u64 n, mask;
> > +
> > +	/* No PMU available, any PMU reg may UNDEF... */
> > +	if (!kvm_arm_support_pmu_v3())
> > +		return;
> > +
> > +	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
> > +	n &= ARMV8_PMU_PMCR_N_MASK;
> > +
> > +	reset_unknown(vcpu, r);
> > +
> > +	mask = BIT(ARMV8_PMU_CYCLE_IDX);
> > +	if (n)
> > +		mask |= GENMASK(n - 1, 0);
> > +
> > +	__vcpu_sys_reg(vcpu, r->reg) &= mask;
> 
> Would this read more logically to structure it as:
> 
> 	mask = BIT(ARMV8_PMU_CYCLE_IDX);
> 
> 	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
> 	n &= ARMV8_PMU_PMCR_N_MASK;
> 	if (n)
> 		mask |= GENMASK(n - 1, 0);
> 
> 	reset_unknown(vcpu, r);
> 	__vcpu_sys_reg(vcpu, r->reg) &= mask;
> 
> ?

Yup, that's nicer. Amended locally.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 1/3] KVM: arm64: Narrow PMU sysreg reset values to architectural requirements
  2021-07-13 15:59       ` Marc Zyngier
  (?)
@ 2021-07-13 16:15         ` Russell King (Oracle)
  -1 siblings, 0 replies; 42+ messages in thread
From: Russell King (Oracle) @ 2021-07-13 16:15 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, kvm, kvmarm, James Morse, Suzuki K Poulose,
	Alexandru Elisei, Alexandre Chartre, Robin Murphy, kernel-team

On Tue, Jul 13, 2021 at 04:59:58PM +0100, Marc Zyngier wrote:
> On Tue, 13 Jul 2021 15:39:49 +0100,
> "Russell King (Oracle)" <linux@armlinux.org.uk> wrote:
> > 
> > On Tue, Jul 13, 2021 at 02:58:58PM +0100, Marc Zyngier wrote:
> > > +static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > > +{
> > > +	u64 n, mask;
> > > +
> > > +	/* No PMU available, any PMU reg may UNDEF... */
> > > +	if (!kvm_arm_support_pmu_v3())
> > > +		return;
> > > +
> > > +	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
> > > +	n &= ARMV8_PMU_PMCR_N_MASK;
> > > +
> > > +	reset_unknown(vcpu, r);
> > > +
> > > +	mask = BIT(ARMV8_PMU_CYCLE_IDX);
> > > +	if (n)
> > > +		mask |= GENMASK(n - 1, 0);
> > > +
> > > +	__vcpu_sys_reg(vcpu, r->reg) &= mask;
> > 
> > Would this read more logically to structure it as:
> > 
> > 	mask = BIT(ARMV8_PMU_CYCLE_IDX);
> > 
> > 	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
> > 	n &= ARMV8_PMU_PMCR_N_MASK;
> > 	if (n)
> > 		mask |= GENMASK(n - 1, 0);
> > 
> > 	reset_unknown(vcpu, r);
> > 	__vcpu_sys_reg(vcpu, r->reg) &= mask;
> > 
> > ?
> 
> Yup, that's nicer. Amended locally.

Thanks Marc.

For the whole series:

Acked-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 1/3] KVM: arm64: Narrow PMU sysreg reset values to architectural requirements
  2021-07-13 13:58   ` Marc Zyngier
  (?)
@ 2021-07-14 15:48     ` Alexandru Elisei
  -1 siblings, 0 replies; 42+ messages in thread
From: Alexandru Elisei @ 2021-07-14 15:48 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvm, kvmarm
  Cc: James Morse, Suzuki K Poulose, Alexandre Chartre, Robin Murphy,
	kernel-team

Hi Marc,

On 7/13/21 2:58 PM, Marc Zyngier wrote:
> A number of the PMU sysregs expose reset values that are not
> compliant with the architecture (set bits in the RES0 ranges,
> for example).
>
> This in turn has the effect that we need to pointlessly mask
> some registers when using them.
>
> Let's start by making sure we don't have illegal values in the
> shadow registers at reset time. This affects all the registers
> that dedicate one bit per counter, the counters themselves,
> PMEVTYPERn_EL0 and PMSELR_EL0.
>
> Reported-by: Alexandre Chartre <alexandre.chartre@oracle.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 46 ++++++++++++++++++++++++++++++++++++---
>  1 file changed, 43 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index f6f126eb6ac1..95ccb8f45409 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -603,6 +603,44 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
>  	return REG_HIDDEN;
>  }
>  
> +static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 n, mask;
> +
> +	/* No PMU available, any PMU reg may UNDEF... */
> +	if (!kvm_arm_support_pmu_v3())
> +		return;
> +
> +	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;

Isn't this going to cause a lot of unnecessary traps with NV? Is that going to be
a problem? Because at the moment I can't think of an elegant way to avoid it,
other than special casing PMCR_EL0 in kvm_reset_sys_regs() and using here
__vcpu_sys_reg(vcpu, PMCR_EL0). Or, even better, using
kvm_pmu_valid_counter_mask(vcpu), since this is identical to what that function does.

> +	n &= ARMV8_PMU_PMCR_N_MASK;
> +
> +	reset_unknown(vcpu, r);
> +
> +	mask = BIT(ARMV8_PMU_CYCLE_IDX);

PMSWINC_EL0 has bit 31 RES0. Other than that, I looked at all the PMU registers
and everything looks correct to me.

Thanks,

Alex

> +	if (n)
> +		mask |= GENMASK(n - 1, 0);
> +
> +	__vcpu_sys_reg(vcpu, r->reg) &= mask;
> +}
> +
> +static void reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	reset_unknown(vcpu, r);
> +	__vcpu_sys_reg(vcpu, r->reg) &= GENMASK(31, 0);
> +}
> +
> +static void reset_pmevtyper(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	reset_unknown(vcpu, r);
> +	__vcpu_sys_reg(vcpu, r->reg) &= ARMV8_PMU_EVTYPE_MASK;
> +}
> +
> +static void reset_pmselr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	reset_unknown(vcpu, r);
> +	__vcpu_sys_reg(vcpu, r->reg) &= ARMV8_PMU_COUNTER_MASK;
> +}
> +
>  static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>  {
>  	u64 pmcr, val;
> @@ -944,16 +982,18 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	  trap_wcr, reset_wcr, 0, 0,  get_wcr, set_wcr }
>  
>  #define PMU_SYS_REG(r)						\
> -	SYS_DESC(r), .reset = reset_unknown, .visibility = pmu_visibility
> +	SYS_DESC(r), .reset = reset_pmu_reg, .visibility = pmu_visibility
>  
>  /* Macro to expand the PMEVCNTRn_EL0 register */
>  #define PMU_PMEVCNTR_EL0(n)						\
>  	{ PMU_SYS_REG(SYS_PMEVCNTRn_EL0(n)),				\
> +	  .reset = reset_pmevcntr,					\
>  	  .access = access_pmu_evcntr, .reg = (PMEVCNTR0_EL0 + n), }
>  
>  /* Macro to expand the PMEVTYPERn_EL0 register */
>  #define PMU_PMEVTYPER_EL0(n)						\
>  	{ PMU_SYS_REG(SYS_PMEVTYPERn_EL0(n)),				\
> +	  .reset = reset_pmevtyper,					\
>  	  .access = access_pmu_evtyper, .reg = (PMEVTYPER0_EL0 + n), }
>  
>  static bool undef_access(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> @@ -1595,13 +1635,13 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	{ PMU_SYS_REG(SYS_PMSWINC_EL0),
>  	  .access = access_pmswinc, .reg = PMSWINC_EL0 },
>  	{ PMU_SYS_REG(SYS_PMSELR_EL0),
> -	  .access = access_pmselr, .reg = PMSELR_EL0 },
> +	  .access = access_pmselr, .reset = reset_pmselr, .reg = PMSELR_EL0 },
>  	{ PMU_SYS_REG(SYS_PMCEID0_EL0),
>  	  .access = access_pmceid, .reset = NULL },
>  	{ PMU_SYS_REG(SYS_PMCEID1_EL0),
>  	  .access = access_pmceid, .reset = NULL },
>  	{ PMU_SYS_REG(SYS_PMCCNTR_EL0),
> -	  .access = access_pmu_evcntr, .reg = PMCCNTR_EL0 },
> +	  .access = access_pmu_evcntr, .reset = reset_unknown, .reg = PMCCNTR_EL0 },
>  	{ PMU_SYS_REG(SYS_PMXEVTYPER_EL0),
>  	  .access = access_pmu_evtyper, .reset = NULL },
>  	{ PMU_SYS_REG(SYS_PMXEVCNTR_EL0),

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 2/3] KVM: arm64: Drop unnecessary masking of PMU registers
  2021-07-13 13:58   ` Marc Zyngier
  (?)
@ 2021-07-14 16:12     ` Alexandru Elisei
  -1 siblings, 0 replies; 42+ messages in thread
From: Alexandru Elisei @ 2021-07-14 16:12 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvm, kvmarm
  Cc: James Morse, Suzuki K Poulose, Alexandre Chartre, Robin Murphy,
	kernel-team

Hi Marc,

On 7/13/21 2:58 PM, Marc Zyngier wrote:
> We always sanitise our PMU sysregs on the write side, so there
> is no need to do it on the read side as well.
>
> Drop the unnecessary masking.

Checked all the remaining uses of kvm_pmu_valid_counter_mask in sys_regs.c and
in pmu-emul.c, and nothing stands out:

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>

Thanks,

Alex

>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/pmu-emul.c | 3 +--
>  arch/arm64/kvm/sys_regs.c | 6 +++---
>  2 files changed, 4 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index f33825c995cb..fae4e95b586c 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -373,7 +373,6 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
>  		reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
>  		reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
>  		reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
> -		reg &= kvm_pmu_valid_counter_mask(vcpu);
>  	}
>  
>  	return reg;
> @@ -569,7 +568,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
>  
>  	if (val & ARMV8_PMU_PMCR_E) {
>  		kvm_pmu_enable_counter_mask(vcpu,
> -		       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
> +		       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
>  	} else {
>  		kvm_pmu_disable_counter_mask(vcpu, mask);
>  	}
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 95ccb8f45409..7ead93a8d67f 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -883,7 +883,7 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  			kvm_pmu_disable_counter_mask(vcpu, val);
>  		}
>  	} else {
> -		p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
> +		p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
>  	}
>  
>  	return true;
> @@ -907,7 +907,7 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  			/* accessing PMINTENCLR_EL1 */
>  			__vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= ~val;
>  	} else {
> -		p->regval = __vcpu_sys_reg(vcpu, PMINTENSET_EL1) & mask;
> +		p->regval = __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
>  	}
>  
>  	return true;
> @@ -929,7 +929,7 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  			/* accessing PMOVSCLR_EL0 */
>  			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~(p->regval & mask);
>  	} else {
> -		p->regval = __vcpu_sys_reg(vcpu, PMOVSSET_EL0) & mask;
> +		p->regval = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
>  	}
>  
>  	return true;
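
The invariant the patch relies on can be sketched like this (illustrative C, not the
kernel code; names are made up):

```c
#include <stdint.h>

static uint64_t shadow_reg;	/* stand-in for __vcpu_sys_reg() storage */

/* Sanitise on the write side only... */
static void reg_write(uint64_t val, uint64_t valid_mask)
{
	shadow_reg = val & valid_mask;
}

/* ...so the read side can return the stored value unmasked: the
 * invariant (shadow_reg & ~valid_mask) == 0 always holds. */
static uint64_t reg_read(void)
{
	return shadow_reg;
}
```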


* Re: [PATCH 3/3] KVM: arm64: Disabling disabled PMU counters wastes a lot of time
  2021-07-13 13:59   ` Marc Zyngier
  (?)
@ 2021-07-14 16:18     ` Alexandru Elisei
  -1 siblings, 0 replies; 42+ messages in thread
From: Alexandru Elisei @ 2021-07-14 16:18 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvm, kvmarm
  Cc: James Morse, Suzuki K Poulose, Alexandre Chartre, Robin Murphy,
	kernel-team

Hi Marc,

On 7/13/21 2:59 PM, Marc Zyngier wrote:
> From: Alexandre Chartre <alexandre.chartre@oracle.com>
>
> In a KVM guest on arm64, performance counter interrupts have an
> unnecessary overhead which slows down execution when using the "perf
> record" command and limits the "perf record" sampling period.
>
> The problem is that when a guest VM disables counters by clearing the
> PMCR_EL0.E bit (bit 0), KVM will disable all counters defined in
> PMCR_EL0 even if they are not enabled in PMCNTENSET_EL0.
>
> KVM disables a counter by calling into the perf framework, in particular
> by calling perf_event_create_kernel_counter() which is a time consuming
> operation. So, for example, with a Neoverse N1 CPU core which has 6 event
> counters and one cycle counter, KVM will always disable all 7 counters
> even if only one is enabled.
>
> This typically happens when using the "perf record" command in a guest
> VM: perf will disable all event counters with PMCNTENSET_EL0 and only
> use the cycle counter. And when using the "perf record" -F option with
> a high profiling frequency, the overhead of KVM disabling all counters
> instead of one on every counter interrupt becomes very noticeable.
>
> The problem is fixed by having KVM disable only counters which are
> enabled in PMCNTENSET_EL0. If a counter is not enabled in PMCNTENSET_EL0
> then KVM will not enable it when setting PMCR_EL0.E and it will remain
> disabled as long as it is not enabled in PMCNTENSET_EL0. So there is
> effectively no need to disable a counter when clearing PMCR_EL0.E if it
> is not enabled in PMCNTENSET_EL0.
>
> Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com>
> [maz: moved 'mask' close to the actual user, simplifying the patch]
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> Link: https://lore.kernel.org/r/20210712170345.660272-1-alexandre.chartre@oracle.com
> ---
>  arch/arm64/kvm/pmu-emul.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index fae4e95b586c..dc65b58dc68f 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -563,20 +563,21 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
>   */
>  void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
>  {
> -	unsigned long mask = kvm_pmu_valid_counter_mask(vcpu);
>  	int i;
>  
>  	if (val & ARMV8_PMU_PMCR_E) {
>  		kvm_pmu_enable_counter_mask(vcpu,
>  		       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
>  	} else {
> -		kvm_pmu_disable_counter_mask(vcpu, mask);
> +		kvm_pmu_disable_counter_mask(vcpu,
> +		       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
>  	}
>  
>  	if (val & ARMV8_PMU_PMCR_C)
>  		kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0);
>  
>  	if (val & ARMV8_PMU_PMCR_P) {
> +		unsigned long mask = kvm_pmu_valid_counter_mask(vcpu);
>  		mask &= ~BIT(ARMV8_PMU_CYCLE_IDX);
>  		for_each_set_bit(i, &mask, 32)
>  			kvm_pmu_set_counter_value(vcpu, i, 0);

Looks reasonable to me, and it fixes the issue described in the commit:

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>

Thanks,

Alex
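
To put rough numbers on the Neoverse N1 example from the commit message: the
expensive work scales with the number of set bits in the mask handed to
kvm_pmu_disable_counter_mask(). A toy model (illustrative C; perf_calls() is a
made-up stand-in for the per-counter trips into the perf framework):

```c
#include <stdint.h>

/* Each set bit in the mask costs one trip into the perf framework. */
static int perf_calls(uint64_t mask)
{
	int n = 0;

	for (; mask; mask >>= 1)
		n += mask & 1;
	return n;
}
```

With the old code the mask covered every valid counter (0x8000003f on Neoverse N1,
i.e. 7 calls); with the patch it is PMCNTENSET_EL0, typically just the cycle counter
while "perf record" is sampling (1 call).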



* Re: [PATCH 0/3] kvm-arm64: Fix PMU reset values (and more)
  2021-07-13 13:58 ` Marc Zyngier
  (?)
@ 2021-07-15  8:34   ` Alexandre Chartre
  -1 siblings, 0 replies; 42+ messages in thread
From: Alexandre Chartre @ 2021-07-15  8:34 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvm, kvmarm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Robin Murphy,
	kernel-team



On 7/13/21 3:58 PM, Marc Zyngier wrote:
> Hi all,
> 
> After some back and forth with Alexandre about patch #3 of this
> series, it became apparent that some of the PMU code paths perform
> some unnecessary masking, only to hide the fact that some of the PMU
> register reset values are not architecturally compliant (RES0 bits get
> set, among other things).
> 
> The first patch of this series addresses the reset value problem, the
> second one rids us of the pointless masking, and Alexandre's patch
> (which depends on the first two) is slapped on top, with a small
> cosmetic change.
> 

Thanks Marc.

You can add my Reviewed-by to patches 1 and 2:

Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>

alex.

> 
> Alexandre Chartre (1):
>    KVM: arm64: Disabling disabled PMU counters wastes a lot of time
> 
> Marc Zyngier (2):
>    KVM: arm64: Narrow PMU sysreg reset values to architectural
>      requirements
>    KVM: arm64: Drop unnecessary masking of PMU registers
> 
>   arch/arm64/kvm/pmu-emul.c |  8 +++---
>   arch/arm64/kvm/sys_regs.c | 52 ++++++++++++++++++++++++++++++++++-----
>   2 files changed, 50 insertions(+), 10 deletions(-)
> 


* Re: [PATCH 1/3] KVM: arm64: Narrow PMU sysreg reset values to architectural requirements
  2021-07-14 15:48     ` Alexandru Elisei
  (?)
@ 2021-07-15 11:11       ` Marc Zyngier
  -1 siblings, 0 replies; 42+ messages in thread
From: Marc Zyngier @ 2021-07-15 11:11 UTC (permalink / raw)
  To: Alexandru Elisei
  Cc: linux-arm-kernel, kvm, kvmarm, James Morse, Suzuki K Poulose,
	Alexandre Chartre, Robin Murphy, kernel-team

Hi Alex,

On Wed, 14 Jul 2021 16:48:07 +0100,
Alexandru Elisei <alexandru.elisei@arm.com> wrote:
> 
> Hi Marc,
> 
> On 7/13/21 2:58 PM, Marc Zyngier wrote:
> > A number of the PMU sysregs expose reset values that are not
> > compliant with the architecture (set bits in the RES0 ranges,
> > for example).
> >
> > This in turn has the effect that we need to pointlessly mask
> > some registers when using them.
> >
> > Let's start by making sure we don't have illegal values in the
> > shadow registers at reset time. This affects all the registers
> > that dedicate one bit per counter, the counters themselves,
> > PMEVTYPERn_EL0 and PMSELR_EL0.
> >
> > Reported-by: Alexandre Chartre <alexandre.chartre@oracle.com>
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/kvm/sys_regs.c | 46 ++++++++++++++++++++++++++++++++++++---
> >  1 file changed, 43 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index f6f126eb6ac1..95ccb8f45409 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -603,6 +603,44 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
> >  	return REG_HIDDEN;
> >  }
> >  
> > +static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u64 n, mask;
> > +
> > +	/* No PMU available, any PMU reg may UNDEF... */
> > +	if (!kvm_arm_support_pmu_v3())
> > +		return;
> > +
> > +	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
> 
> Isn't this going to cause a lot of unnecessary traps with NV? Is
> that going to be a problem?

We'll get new traps at L2 VM creation if we expose a PMU to the L1
guest, and if L2 gets one too. I don't think that's a real problem, as
the performance of an L2 PMU is bound to be hilarious, and if we are
really worried about that, we can always cache it locally. Which is
likely the best thing to do if you think of big-little.

Let's not think of big-little.

Another thing is that we could perfectly ignore the number of counters
on the host and always expose the architectural maximum, given that
the PMU is completely emulated. With that, no trap.
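
(For illustration, a hypothetical userspace sketch of that last idea — always
advertising the architectural maximum N instead of the host's value, so the reset
path never has to read the real pmcr_el0; the helper name is made up:)

```c
#include <stdint.h>

#define ARMV8_PMU_PMCR_N_SHIFT	11
#define ARMV8_PMU_PMCR_N_MASK	0x1fULL

/* PMCR_EL0.N is a 5-bit field, so 31 event counters is the
 * architectural maximum a fully emulated PMU could advertise. */
#define PMU_SYNTH_NR_COUNTERS	31

static uint64_t synthesize_pmcr_n(uint64_t pmcr)
{
	pmcr &= ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
	pmcr |= (uint64_t)PMU_SYNTH_NR_COUNTERS << ARMV8_PMU_PMCR_N_SHIFT;
	return pmcr;
}
```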

> Because at the moment I can't think of an elegant way to avoid it,
> other than special casing PMCR_EL0 in kvm_reset_sys_regs() and using
> here __vcpu_sys_reg(vcpu, PMCR_EL0). Or, even better, using
> kvm_pmu_valid_counter_mask(vcpu), since this is identical to what
> that function does.

I looked into that and bailed out, as it creates interesting ordering
problems...

> 
> > +	n &= ARMV8_PMU_PMCR_N_MASK;
> > +
> > +	reset_unknown(vcpu, r);
> > +
> > +	mask = BIT(ARMV8_PMU_CYCLE_IDX);
> 
> PMSWINC_EL0 has bit 31 RES0. Other than that, looked at all the PMU
> registers and everything looks correct to me.

PMSWINC_EL0 is a RAZ/WO register, which really shouldn't have a shadow
counterpart (the storage is completely unused). Let me get rid of this
sucker in v2.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH 1/3] KVM: arm64: Narrow PMU sysreg reset values to architectural requirements
  2021-07-15 11:11       ` Marc Zyngier
  (?)
@ 2021-07-15 11:51         ` Robin Murphy
  -1 siblings, 0 replies; 42+ messages in thread
From: Robin Murphy @ 2021-07-15 11:51 UTC (permalink / raw)
  To: Marc Zyngier, Alexandru Elisei
  Cc: linux-arm-kernel, kvm, kvmarm, James Morse, Suzuki K Poulose,
	Alexandre Chartre, kernel-team

On 2021-07-15 12:11, Marc Zyngier wrote:
> Hi Alex,
> 
> On Wed, 14 Jul 2021 16:48:07 +0100,
> Alexandru Elisei <alexandru.elisei@arm.com> wrote:
>>
>> Hi Marc,
>>
>> On 7/13/21 2:58 PM, Marc Zyngier wrote:
>>> A number of the PMU sysregs expose reset values that are not
>>> compliant with the architecture (set bits in the RES0 ranges,
>>> for example).
>>>
>>> This in turn has the effect that we need to pointlessly mask
>>> some registers when using them.
>>>
>>> Let's start by making sure we don't have illegal values in the
>>> shadow registers at reset time. This affects all the registers
>>> that dedicate one bit per counter, the counters themselves,
>>> PMEVTYPERn_EL0 and PMSELR_EL0.
>>>
>>> Reported-by: Alexandre Chartre <alexandre.chartre@oracle.com>
>>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>>> ---
>>>   arch/arm64/kvm/sys_regs.c | 46 ++++++++++++++++++++++++++++++++++++---
>>>   1 file changed, 43 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>>> index f6f126eb6ac1..95ccb8f45409 100644
>>> --- a/arch/arm64/kvm/sys_regs.c
>>> +++ b/arch/arm64/kvm/sys_regs.c
>>> @@ -603,6 +603,44 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
>>>   	return REG_HIDDEN;
>>>   }
>>>   
>>> +static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>>> +{
>>> +	u64 n, mask;
>>> +
>>> +	/* No PMU available, any PMU reg may UNDEF... */
>>> +	if (!kvm_arm_support_pmu_v3())
>>> +		return;
>>> +
>>> +	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
>>
>> Isn't this going to cause a lot of unnecessary traps with NV? Is
>> that going to be a problem?
> 
> > We'll get new traps at L2 VM creation if we expose a PMU to the L1
> guest, and if L2 gets one too. I don't think that's a real problem, as
> the performance of an L2 PMU is bound to be hilarious, and if we are
> really worried about that, we can always cache it locally. Which is
> likely the best thing to do if you think of big-little.
> 
> Let's not think of big-little.
> 
> > Another thing is that we could perfectly ignore the number of counters
> on the host and always expose the architectural maximum, given that
> the PMU is completely emulated. With that, no trap.

Although that would deliberately exacerbate the existing problem of 
guest counters mysteriously under-reporting due to the host event 
getting multiplexed, thus arguably making the L2 PMU even less useful.

But then trying to analyse application performance under NV at all seems 
to stand a high chance of being akin to shovelling fog, so...

Robin.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 1/3] KVM: arm64: Narrow PMU sysreg reset values to architectural requirements
  2021-07-15 11:51         ` Robin Murphy
  (?)
@ 2021-07-15 12:25           ` Marc Zyngier
  -1 siblings, 0 replies; 42+ messages in thread
From: Marc Zyngier @ 2021-07-15 12:25 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Alexandru Elisei, linux-arm-kernel, kvm, kvmarm, James Morse,
	Suzuki K Poulose, Alexandre Chartre, kernel-team

On Thu, 15 Jul 2021 12:51:49 +0100,
Robin Murphy <robin.murphy@arm.com> wrote:
> 
> On 2021-07-15 12:11, Marc Zyngier wrote:
> > Hi Alex,
> > 
> > On Wed, 14 Jul 2021 16:48:07 +0100,
> > Alexandru Elisei <alexandru.elisei@arm.com> wrote:
> >> 
> >> Hi Marc,
> >> 
> >> On 7/13/21 2:58 PM, Marc Zyngier wrote:
> >>> A number of the PMU sysregs expose reset values that are not
> >>> compliant with the architecture (set bits in the RES0 ranges,
> >>> for example).
> >>> 
> >>> This in turn has the effect that we need to pointlessly mask
> >>> some registers when using them.
> >>> 
> >>> Let's start by making sure we don't have illegal values in the
> >>> shadow registers at reset time. This affects all the registers
> >>> that dedicate one bit per counter, the counters themselves,
> >>> PMEVTYPERn_EL0 and PMSELR_EL0.
> >>> 
> >>> Reported-by: Alexandre Chartre <alexandre.chartre@oracle.com>
> >>> Signed-off-by: Marc Zyngier <maz@kernel.org>
> >>> ---
> >>>   arch/arm64/kvm/sys_regs.c | 46 ++++++++++++++++++++++++++++++++++++---
> >>>   1 file changed, 43 insertions(+), 3 deletions(-)
> >>> 
> >>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> >>> index f6f126eb6ac1..95ccb8f45409 100644
> >>> --- a/arch/arm64/kvm/sys_regs.c
> >>> +++ b/arch/arm64/kvm/sys_regs.c
> >>> @@ -603,6 +603,44 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
> >>>   	return REG_HIDDEN;
> >>>   }
> >>>   +static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct
> >>> sys_reg_desc *r)
> >>> +{
> >>> +	u64 n, mask;
> >>> +
> >>> +	/* No PMU available, any PMU reg may UNDEF... */
> >>> +	if (!kvm_arm_support_pmu_v3())
> >>> +		return;
> >>> +
> >>> +	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
> >> 
> >> Isn't this going to cause a lot of unnecessary traps with NV? Is
> >> that going to be a problem?
> > 
> > We'll get new traps at L2 VM creation if we expose a PMU to the L1
> > guest, and if L2 gets one too. I don't think that's a real problem, as
> > the performance of an L2 PMU is bound to be hilarious, and if we are
> > really worried about that, we can always cache it locally. Which is
> > likely the best thing to do if you think of big-little.
> > 
> > Let's not think of big-little.
> > 
> > Another thing is that we could perfectly ignore the number of counters
> > on the host and always expose the architectural maximum, given that
> > the PMU is completely emulated. With that, no trap.
> 
> Although that would deliberately exacerbate the existing problem of
> guest counters mysteriously under-reporting due to the host event
> getting multiplexed, thus arguably making the L2 PMU even less useful.

Oh, absolutely. But the current implementation of the PMU emulation
would be pretty terrible on NV anyway.

> But then trying to analyse application performance under NV at all
> seems to stand a high chance of being akin to shovelling fog, so...

Indeed. Not to mention that there is no (publicly available) HW to
measure performance on anyway...

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread, other threads:[~2021-07-15 12:28 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-13 13:58 [PATCH 0/3] kvm-arm64: Fix PMU reset values (and more) Marc Zyngier
2021-07-13 13:58 ` Marc Zyngier
2021-07-13 13:58 ` Marc Zyngier
2021-07-13 13:58 ` [PATCH 1/3] KVM: arm64: Narrow PMU sysreg reset values to architectural requirements Marc Zyngier
2021-07-13 13:58   ` Marc Zyngier
2021-07-13 13:58   ` Marc Zyngier
2021-07-13 14:39   ` Russell King (Oracle)
2021-07-13 14:39     ` Russell King (Oracle)
2021-07-13 14:39     ` Russell King (Oracle)
2021-07-13 15:59     ` Marc Zyngier
2021-07-13 15:59       ` Marc Zyngier
2021-07-13 15:59       ` Marc Zyngier
2021-07-13 16:15       ` Russell King (Oracle)
2021-07-13 16:15         ` Russell King (Oracle)
2021-07-13 16:15         ` Russell King (Oracle)
2021-07-14 15:48   ` Alexandru Elisei
2021-07-14 15:48     ` Alexandru Elisei
2021-07-14 15:48     ` Alexandru Elisei
2021-07-15 11:11     ` Marc Zyngier
2021-07-15 11:11       ` Marc Zyngier
2021-07-15 11:11       ` Marc Zyngier
2021-07-15 11:51       ` Robin Murphy
2021-07-15 11:51         ` Robin Murphy
2021-07-15 11:51         ` Robin Murphy
2021-07-15 12:25         ` Marc Zyngier
2021-07-15 12:25           ` Marc Zyngier
2021-07-15 12:25           ` Marc Zyngier
2021-07-13 13:58 ` [PATCH 2/3] KVM: arm64: Drop unnecessary masking of PMU registers Marc Zyngier
2021-07-13 13:58   ` Marc Zyngier
2021-07-13 13:58   ` Marc Zyngier
2021-07-14 16:12   ` Alexandru Elisei
2021-07-14 16:12     ` Alexandru Elisei
2021-07-14 16:12     ` Alexandru Elisei
2021-07-13 13:59 ` [PATCH 3/3] KVM: arm64: Disabling disabled PMU counters wastes a lot of time Marc Zyngier
2021-07-13 13:59   ` Marc Zyngier
2021-07-13 13:59   ` Marc Zyngier
2021-07-14 16:18   ` Alexandru Elisei
2021-07-14 16:18     ` Alexandru Elisei
2021-07-14 16:18     ` Alexandru Elisei
2021-07-15  8:34 ` [PATCH 0/3] kvm-arm64: Fix PMU reset values (and more) Alexandre Chartre
2021-07-15  8:34   ` Alexandre Chartre
2021-07-15  8:34   ` Alexandre Chartre
