* [PATCH 0/4] KVM: arm64: PMU: Fix PMUVer handling on heterogeneous PMU systems
@ 2023-05-27  4:02 ` Reiji Watanabe
From: Reiji Watanabe @ 2023-05-27  4:02 UTC (permalink / raw)
  To: Marc Zyngier, Oliver Upton, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Jing Zhang,
	Raghavendra Rao Anata, Will Deacon, Reiji Watanabe

This series fixes issues with PMUVer handling for a guest with a
PMU configured on heterogeneous PMU systems.
Specifically, it addresses the following two issues:

[A] The default value of ID_AA64DFR0_EL1.PMUVer of the vCPU is set
    to its sanitized value.  This could be inappropriate on
    heterogeneous PMU systems, as arm64_ftr_bits for PMUVer is defined
    as FTR_EXACT with safe_val == 0 (when ID_AA64DFR0_EL1.PMUVer of all
    PEs on the host is not uniform, the sanitized value will be 0).

[B] KVM uses the PMUVer of the PMU hardware that is associated with
    the guest (kvm->arch.arm_pmu->pmuver) for the guest in some
    cases, even though userspace might have changed the guest's
    ID_AA64DFR0_EL1.PMUVer (kvm->arch.dfr0_pmuver.imp).

To fix [A], KVM will set the default value of the guest's
ID_AA64DFR0_EL1.PMUVer to the PMUVer of the guest's PMU
(kvm->arch.arm_pmu->pmuver).

To fix [B], KVM will stop using kvm->arch.arm_pmu->pmuver (except
for some special cases) and use ID_AA64DFR0_EL1.PMUVer for the
guest instead.

Patch 1 adds a helper to set a PMU for the guest. This helper will
make it easier for the following patches to modify the relevant
code.

Patch 2 makes the default PMU for the guest be set on the first
vCPU reset. As userspace can read the value of ID_AA64DFR0_EL1
after the initial vCPU reset, this change makes the default
PMUVer value, which is based on the guest's PMU, available at
the initial vCPU reset.

Patches 3 and 4 fix issues [A] and [B], respectively.

The series is based on v6.4-rc3.
The patches in this series were originally included as part of [1].

[1] https://lore.kernel.org/all/20230211031506.4159098-1-reijiw@google.com/

Reiji Watanabe (4):
  KVM: arm64: PMU: Introduce a helper to set the guest's PMU
  KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset
  KVM: arm64: PMU: Use PMUVer of the guest's PMU for ID_AA64DFR0.PMUVer
  KVM: arm64: PMU: Don't use the PMUVer of the PMU set for guest

 arch/arm64/include/asm/kvm_host.h |  2 +
 arch/arm64/kvm/arm.c              |  6 ---
 arch/arm64/kvm/pmu-emul.c         | 73 +++++++++++++++++++++----------
 arch/arm64/kvm/reset.c            | 20 ++++++---
 arch/arm64/kvm/sys_regs.c         | 48 +++++++++++++-------
 include/kvm/arm_pmu.h             | 10 ++++-
 6 files changed, 106 insertions(+), 53 deletions(-)


base-commit: 44c026a73be8038f03dbdeef028b642880cf1511
-- 
2.41.0.rc0.172.g3f132b7071-goog



* [PATCH 1/4] KVM: arm64: PMU: Introduce a helper to set the guest's PMU
@ 2023-05-27  4:02   ` Reiji Watanabe
From: Reiji Watanabe @ 2023-05-27  4:02 UTC (permalink / raw)
  To: Marc Zyngier, Oliver Upton, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Jing Zhang,
	Raghavendra Rao Anata, Will Deacon, Reiji Watanabe

Introduce a new helper function to set the guest's PMU
(kvm->arch.arm_pmu), and use it when the guest's PMU needs
to be set. This helper will make it easier for the following
patches to modify the relevant code.

No functional change intended.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/pmu-emul.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 45727d50d18d..d50c8f7a2410 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -869,6 +869,21 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 	return true;
 }
 
+static int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+{
+	lockdep_assert_held(&kvm->arch.config_lock);
+
+	if (!arm_pmu) {
+		arm_pmu = kvm_pmu_probe_armpmu();
+		if (!arm_pmu)
+			return -ENODEV;
+	}
+
+	kvm->arch.arm_pmu = arm_pmu;
+
+	return 0;
+}
+
 static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -888,7 +903,7 @@ static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
 				break;
 			}
 
-			kvm->arch.arm_pmu = arm_pmu;
+			WARN_ON_ONCE(kvm_arm_set_vm_pmu(kvm, arm_pmu));
 			cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus);
 			ret = 0;
 			break;
@@ -913,9 +928,10 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 
 	if (!kvm->arch.arm_pmu) {
 		/* No PMU set, get the default one */
-		kvm->arch.arm_pmu = kvm_pmu_probe_armpmu();
-		if (!kvm->arch.arm_pmu)
-			return -ENODEV;
+		int ret = kvm_arm_set_vm_pmu(kvm, NULL);
+
+		if (ret)
+			return ret;
 	}
 
 	switch (attr->attr) {
-- 
2.41.0.rc0.172.g3f132b7071-goog



* [PATCH 2/4] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset
@ 2023-05-27  4:02   ` Reiji Watanabe
From: Reiji Watanabe @ 2023-05-27  4:02 UTC (permalink / raw)
  To: Marc Zyngier, Oliver Upton, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Jing Zhang,
	Raghavendra Rao Anata, Will Deacon, Reiji Watanabe

Set the default PMU for the guest on the first vCPU reset,
not when userspace initially uses KVM_ARM_VCPU_PMU_V3_CTRL.
The following patches will use the PMUVer of the PMU as the
default value of ID_AA64DFR0_EL1.PMUVer for vCPUs with a
PMU configured.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/pmu-emul.c | 10 +---------
 arch/arm64/kvm/reset.c    | 20 +++++++++++++-------
 include/kvm/arm_pmu.h     |  6 ++++++
 3 files changed, 20 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index d50c8f7a2410..0194a94c4bae 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -869,7 +869,7 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 	return true;
 }
 
-static int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
 {
 	lockdep_assert_held(&kvm->arch.config_lock);
 
@@ -926,14 +926,6 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 	if (vcpu->arch.pmu.created)
 		return -EBUSY;
 
-	if (!kvm->arch.arm_pmu) {
-		/* No PMU set, get the default one */
-		int ret = kvm_arm_set_vm_pmu(kvm, NULL);
-
-		if (ret)
-			return ret;
-	}
-
 	switch (attr->attr) {
 	case KVM_ARM_VCPU_PMU_V3_IRQ: {
 		int __user *uaddr = (int __user *)(long)attr->addr;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index b5dee8e57e77..f5e24492926c 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -258,13 +258,24 @@ static int kvm_set_vm_width(struct kvm_vcpu *vcpu)
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_reset_state reset_state;
+	struct kvm *kvm = vcpu->kvm;
 	int ret;
 	bool loaded;
 	u32 pstate;
 
-	mutex_lock(&vcpu->kvm->arch.config_lock);
+	mutex_lock(&kvm->arch.config_lock);
 	ret = kvm_set_vm_width(vcpu);
-	mutex_unlock(&vcpu->kvm->arch.config_lock);
+	if (!ret && kvm_vcpu_has_pmu(vcpu)) {
+		if (!kvm_arm_support_pmu_v3())
+			ret = -EINVAL;
+		else if (unlikely(!kvm->arch.arm_pmu))
+			/*
+			 * As no PMU is set for the guest yet,
+			 * set the default one.
+			 */
+			ret = kvm_arm_set_vm_pmu(kvm, NULL);
+	}
+	mutex_unlock(&kvm->arch.config_lock);
 
 	if (ret)
 		return ret;
@@ -315,11 +326,6 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		} else {
 			pstate = VCPU_RESET_PSTATE_EL1;
 		}
-
-		if (kvm_vcpu_has_pmu(vcpu) && !kvm_arm_support_pmu_v3()) {
-			ret = -EINVAL;
-			goto out;
-		}
 		break;
 	}
 
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 1a6a695ca67a..5ece2a3c1858 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -96,6 +96,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 	(vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P5)
 
 u8 kvm_arm_pmu_get_pmuver_limit(void);
+int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu);
 
 #else
 struct kvm_pmu {
@@ -168,6 +169,11 @@ static inline u8 kvm_arm_pmu_get_pmuver_limit(void)
 	return 0;
 }
 
+static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+{
+	return 0;
+}
+
 #endif
 
 #endif
-- 
2.41.0.rc0.172.g3f132b7071-goog



* [PATCH 3/4] KVM: arm64: PMU: Use PMUVer of the guest's PMU for ID_AA64DFR0.PMUVer
@ 2023-05-27  4:02   ` Reiji Watanabe
From: Reiji Watanabe @ 2023-05-27  4:02 UTC (permalink / raw)
  To: Marc Zyngier, Oliver Upton, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Jing Zhang,
	Raghavendra Rao Anata, Will Deacon, Reiji Watanabe

Currently, KVM uses the sanitized value of ID_AA64DFR0_EL1.PMUVer
as the default value and the limit value of this field for
vCPUs with a PMU configured. However, the sanitized value could
be inappropriate for the vCPUs on some heterogeneous PMU systems,
as arm64_ftr_bits for PMUVer is defined as FTR_EXACT with
safe_val == 0 (if the ID_AA64DFR0_EL1.PMUVer of all PEs on the
host is not uniform, the sanitized value will be 0).

Use the PMUVer of the guest's PMU (kvm->arch.arm_pmu->pmuver) as the
default value and the limit value of ID_AA64DFR0_EL1.PMUVer for vCPUs
with a PMU configured.

When the guest's PMU is switched to a different PMU, reset
the value of ID_AA64DFR0_EL1.PMUVer for the vCPUs based on
the new PMU, unless userspace has already modified the PMUVer
and the value is still valid even with the new PMU.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/include/asm/kvm_host.h |  2 ++
 arch/arm64/kvm/arm.c              |  6 ----
 arch/arm64/kvm/pmu-emul.c         | 28 +++++++++++++-----
 arch/arm64/kvm/sys_regs.c         | 48 ++++++++++++++++++++-----------
 include/kvm/arm_pmu.h             |  4 +--
 5 files changed, 57 insertions(+), 31 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7e7e19ef6993..8ca0e7210a59 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -231,6 +231,8 @@ struct kvm_arch {
 #define KVM_ARCH_FLAG_TIMER_PPIS_IMMUTABLE		7
 	/* SMCCC filter initialized for the VM */
 #define KVM_ARCH_FLAG_SMCCC_FILTER_CONFIGURED		8
+	/* PMUVer set by userspace for the VM */
+#define KVM_ARCH_FLAG_PMUVER_DIRTY			9
 	unsigned long flags;
 
 	/*
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 14391826241c..3c2fddfe90f7 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -164,12 +164,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	set_default_spectre(kvm);
 	kvm_arm_init_hypercalls(kvm);
 
-	/*
-	 * Initialise the default PMUver before there is a chance to
-	 * create an actual PMU.
-	 */
-	kvm->arch.dfr0_pmuver.imp = kvm_arm_pmu_get_pmuver_limit();
-
 	return 0;
 
 err_free_cpumask:
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 0194a94c4bae..6cd08d5e5b72 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -871,6 +871,8 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 
 int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
 {
+	u8 new_limit;
+
 	lockdep_assert_held(&kvm->arch.config_lock);
 
 	if (!arm_pmu) {
@@ -880,6 +882,22 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
 	}
 
 	kvm->arch.arm_pmu = arm_pmu;
+	new_limit = kvm_arm_pmu_get_pmuver_limit(kvm);
+
+	/*
+	 * Reset the value of ID_AA64DFR0_EL1.PMUVer to the new limit value,
+	 * unless the current value was set by userspace and is still a valid
+	 * value for the new PMU.
+	 */
+	if (!test_bit(KVM_ARCH_FLAG_PMUVER_DIRTY, &kvm->arch.flags)) {
+		kvm->arch.dfr0_pmuver.imp = new_limit;
+		return 0;
+	}
+
+	if (kvm->arch.dfr0_pmuver.imp > new_limit) {
+		kvm->arch.dfr0_pmuver.imp = new_limit;
+		clear_bit(KVM_ARCH_FLAG_PMUVER_DIRTY, &kvm->arch.flags);
+	}
 
 	return 0;
 }
@@ -1049,13 +1067,9 @@ int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 	return -ENXIO;
 }
 
-u8 kvm_arm_pmu_get_pmuver_limit(void)
+u8 kvm_arm_pmu_get_pmuver_limit(struct kvm *kvm)
 {
-	u64 tmp;
+	u8 host_pmuver = kvm->arch.arm_pmu ? kvm->arch.arm_pmu->pmuver : 0;
 
-	tmp = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
-	tmp = cpuid_feature_cap_perfmon_field(tmp,
-					      ID_AA64DFR0_EL1_PMUVer_SHIFT,
-					      ID_AA64DFR0_EL1_PMUVer_V3P5);
-	return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), tmp);
+	return min_t(u8, host_pmuver, ID_AA64DFR0_EL1_PMUVer_V3P5);
 }
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 71b12094d613..a76155ad997c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1382,8 +1382,11 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
 {
 	u8 pmuver, host_pmuver;
 	bool valid_pmu;
+	u64 current_val = read_id_reg(vcpu, rd);
+	int ret = -EINVAL;
 
-	host_pmuver = kvm_arm_pmu_get_pmuver_limit();
+	mutex_lock(&vcpu->kvm->arch.config_lock);
+	host_pmuver = kvm_arm_pmu_get_pmuver_limit(vcpu->kvm);
 
 	/*
 	 * Allow AA64DFR0_EL1.PMUver to be set from userspace as long
@@ -1393,26 +1396,31 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
 	 */
 	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), val);
 	if ((pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver))
-		return -EINVAL;
+		goto out;
 
 	valid_pmu = (pmuver != 0 && pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF);
 
 	/* Make sure view register and PMU support do match */
 	if (kvm_vcpu_has_pmu(vcpu) != valid_pmu)
-		return -EINVAL;
+		goto out;
 
 	/* We can only differ with PMUver, and anything else is an error */
-	val ^= read_id_reg(vcpu, rd);
+	val ^= current_val;
 	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
 	if (val)
-		return -EINVAL;
+		goto out;
 
-	if (valid_pmu)
+	if (valid_pmu) {
 		vcpu->kvm->arch.dfr0_pmuver.imp = pmuver;
-	else
+		set_bit(KVM_ARCH_FLAG_PMUVER_DIRTY, &vcpu->kvm->arch.flags);
+	} else
 		vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver;
 
-	return 0;
+	ret = 0;
+out:
+	mutex_unlock(&vcpu->kvm->arch.config_lock);
+
+	return ret;
 }
 
 static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
@@ -1421,8 +1429,11 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
 {
 	u8 perfmon, host_perfmon;
 	bool valid_pmu;
+	u64 current_val = read_id_reg(vcpu, rd);
+	int ret = -EINVAL;
 
-	host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit());
+	mutex_lock(&vcpu->kvm->arch.config_lock);
+	host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit(vcpu->kvm));
 
 	/*
 	 * Allow DFR0_EL1.PerfMon to be set from userspace as long as
@@ -1433,26 +1444,31 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
 	perfmon = FIELD_GET(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), val);
 	if ((perfmon != ID_DFR0_EL1_PerfMon_IMPDEF && perfmon > host_perfmon) ||
 	    (perfmon != 0 && perfmon < ID_DFR0_EL1_PerfMon_PMUv3))
-		return -EINVAL;
+		goto out;
 
 	valid_pmu = (perfmon != 0 && perfmon != ID_DFR0_EL1_PerfMon_IMPDEF);
 
 	/* Make sure view register and PMU support do match */
 	if (kvm_vcpu_has_pmu(vcpu) != valid_pmu)
-		return -EINVAL;
+		goto out;
 
 	/* We can only differ with PerfMon, and anything else is an error */
-	val ^= read_id_reg(vcpu, rd);
+	val ^= current_val;
 	val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
 	if (val)
-		return -EINVAL;
+		goto out;
 
-	if (valid_pmu)
+	if (valid_pmu) {
 		vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon);
-	else
+		set_bit(KVM_ARCH_FLAG_PMUVER_DIRTY, &vcpu->kvm->arch.flags);
+	} else
 		vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon);
 
-	return 0;
+	ret = 0;
+out:
+	mutex_unlock(&vcpu->kvm->arch.config_lock);
+
+	return ret;
 }
 
 /*
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 5ece2a3c1858..00c05d17cf3a 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -95,7 +95,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 #define kvm_pmu_is_3p5(vcpu)						\
 	(vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P5)
 
-u8 kvm_arm_pmu_get_pmuver_limit(void);
+u8 kvm_arm_pmu_get_pmuver_limit(struct kvm *kvm);
 int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu);
 
 #else
@@ -164,7 +164,7 @@ static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
 static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}
-static inline u8 kvm_arm_pmu_get_pmuver_limit(void)
+static inline u8 kvm_arm_pmu_get_pmuver_limit(struct kvm *kvm)
 {
 	return 0;
 }
-- 
2.41.0.rc0.172.g3f132b7071-goog


 	if (val)
-		return -EINVAL;
+		goto out;
 
-	if (valid_pmu)
+	if (valid_pmu) {
 		vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon);
-	else
+		set_bit(KVM_ARCH_FLAG_PMUVER_DIRTY, &vcpu->kvm->arch.flags);
+	} else
 		vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon);
 
-	return 0;
+	ret = 0;
+out:
+	mutex_unlock(&vcpu->kvm->arch.config_lock);
+
+	return ret;
 }
 
 /*
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 5ece2a3c1858..00c05d17cf3a 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -95,7 +95,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 #define kvm_pmu_is_3p5(vcpu)						\
 	(vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P5)
 
-u8 kvm_arm_pmu_get_pmuver_limit(void);
+u8 kvm_arm_pmu_get_pmuver_limit(struct kvm *kvm);
 int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu);
 
 #else
@@ -164,7 +164,7 @@ static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
 static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}
-static inline u8 kvm_arm_pmu_get_pmuver_limit(void)
+static inline u8 kvm_arm_pmu_get_pmuver_limit(struct kvm *kvm)
 {
 	return 0;
 }
-- 
2.41.0.rc0.172.g3f132b7071-goog


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 4/4] KVM: arm64: PMU: Don't use the PMUVer of the PMU set for guest
  2023-05-27  4:02 ` Reiji Watanabe
@ 2023-05-27  4:02   ` Reiji Watanabe
  -1 siblings, 0 replies; 24+ messages in thread
From: Reiji Watanabe @ 2023-05-27  4:02 UTC (permalink / raw)
  To: Marc Zyngier, Oliver Upton, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei, Zenghui Yu,
	Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Jing Zhang,
	Raghavendra Rao Anata, Will Deacon, Reiji Watanabe

Avoid using the PMUVer of the PMU hardware associated with the guest,
except in a few cases, as that PMUVer may differ from the value of
ID_AA64DFR0_EL1.PMUVer for the guest.

The first case is using the PMUVer as the limit value for the guest's
ID_AA64DFR0_EL1.PMUVer. The second case is using the PMUVer to
determine the valid range of events for KVM_ARM_VCPU_PMU_V3_FILTER,
as KVM has historically allowed userspace to specify events that are
valid for the PMU hardware, regardless of the value of the guest's
ID_AA64DFR0_EL1.PMUVer. Note that KVM will, however, still restrict
the range of events the guest can actually use based on the value of
the guest's ID_AA64DFR0_EL1.PMUVer.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/pmu-emul.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 6cd08d5e5b72..67512b13ba2d 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -35,12 +35,8 @@ static struct kvm_pmc *kvm_vcpu_idx_to_pmc(struct kvm_vcpu *vcpu, int cnt_idx)
 	return &vcpu->arch.pmu.pmc[cnt_idx];
 }
 
-static u32 kvm_pmu_event_mask(struct kvm *kvm)
+static u32 __kvm_pmu_event_mask(u8 pmuver)
 {
-	unsigned int pmuver;
-
-	pmuver = kvm->arch.arm_pmu->pmuver;
-
 	switch (pmuver) {
 	case ID_AA64DFR0_EL1_PMUVer_IMP:
 		return GENMASK(9, 0);
@@ -55,6 +51,11 @@ static u32 kvm_pmu_event_mask(struct kvm *kvm)
 	}
 }
 
+static u32 kvm_pmu_event_mask(struct kvm *kvm)
+{
+	return __kvm_pmu_event_mask(kvm->arch.dfr0_pmuver.imp);
+}
+
 /**
  * kvm_pmc_is_64bit - determine if counter is 64bit
  * @pmc: counter context
@@ -757,7 +758,7 @@ u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
 		 * Don't advertise STALL_SLOT, as PMMIR_EL0 is handled
 		 * as RAZ
 		 */
-		if (vcpu->kvm->arch.arm_pmu->pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P4)
+		if (vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P4)
 			val &= ~BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32);
 		base = 32;
 	}
@@ -970,11 +971,17 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 		return 0;
 	}
 	case KVM_ARM_VCPU_PMU_V3_FILTER: {
+		u8 pmuver = kvm_arm_pmu_get_pmuver_limit(kvm);
 		struct kvm_pmu_event_filter __user *uaddr;
 		struct kvm_pmu_event_filter filter;
 		int nr_events;
 
-		nr_events = kvm_pmu_event_mask(kvm) + 1;
+		/*
+		 * Allow userspace to specify an event filter for the entire
+		 * event range supported by PMUVer of the hardware, rather
+		 * than the guest's PMUVer for KVM backward compatibility.
+		 */
+		nr_events = __kvm_pmu_event_mask(pmuver) + 1;
 
 		uaddr = (struct kvm_pmu_event_filter __user *)(long)attr->addr;
 
-- 
2.41.0.rc0.172.g3f132b7071-goog


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH 2/4] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset
  2023-05-27  4:02   ` Reiji Watanabe
@ 2023-05-27 17:35     ` kernel test robot
  -1 siblings, 0 replies; 24+ messages in thread
From: kernel test robot @ 2023-05-27 17:35 UTC (permalink / raw)
  To: Reiji Watanabe, Marc Zyngier, Oliver Upton, kvmarm
  Cc: oe-kbuild-all, kvm, linux-arm-kernel, James Morse,
	Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
	Ricardo Koller, Jing Zhang, Raghavendra Rao Anata, Will Deacon,
	Reiji Watanabe

Hi Reiji,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 44c026a73be8038f03dbdeef028b642880cf1511]

url:    https://github.com/intel-lab-lkp/linux/commits/Reiji-Watanabe/KVM-arm64-PMU-Introduce-a-helper-to-set-the-guest-s-PMU/20230527-120717
base:   44c026a73be8038f03dbdeef028b642880cf1511
patch link:    https://lore.kernel.org/r/20230527040236.1875860-3-reijiw%40google.com
patch subject: [PATCH 2/4] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset
config: arm64-randconfig-r006-20230526 (https://download.01.org/0day-ci/archive/20230528/202305280138.CQFgYLdh-lkp@intel.com/config)
compiler: aarch64-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        mkdir -p ~/bin
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/6339e7261a0e27669f5e17362150b7f3f5681f4a
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Reiji-Watanabe/KVM-arm64-PMU-Introduce-a-helper-to-set-the-guest-s-PMU/20230527-120717
        git checkout 6339e7261a0e27669f5e17362150b7f3f5681f4a
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 ~/bin/make.cross W=1 O=build_dir ARCH=arm64 olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 ~/bin/make.cross W=1 O=build_dir ARCH=arm64 prepare

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202305280138.CQFgYLdh-lkp@intel.com/

All warnings (new ones prefixed by >>):

   In file included from arch/arm64/include/asm/kvm_host.h:37,
                    from include/linux/kvm_host.h:45,
                    from arch/arm64/kernel/asm-offsets.c:16:
>> include/kvm/arm_pmu.h:172:62: warning: 'struct arm_pmu' declared inside parameter list will not be visible outside of this definition or declaration
     172 | static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
         |                                                              ^~~~~~~
--
   In file included from arch/arm64/include/asm/kvm_host.h:37,
                    from include/linux/kvm_host.h:45,
                    from arch/arm64/kernel/asm-offsets.c:16:
>> include/kvm/arm_pmu.h:172:62: warning: 'struct arm_pmu' declared inside parameter list will not be visible outside of this definition or declaration
     172 | static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
         |                                                              ^~~~~~~


vim +172 include/kvm/arm_pmu.h

   171	
 > 172	static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
   173	{
   174		return 0;
   175	}
   176	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/4] KVM: arm64: PMU: Fix PMUVer handling on heterogeneous PMU systems
  2023-05-27  4:02 ` Reiji Watanabe
@ 2023-05-29 13:39   ` Marc Zyngier
  -1 siblings, 0 replies; 24+ messages in thread
From: Marc Zyngier @ 2023-05-29 13:39 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: Oliver Upton, kvmarm, kvm, linux-arm-kernel, James Morse,
	Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
	Ricardo Koller, Jing Zhang, Raghavendra Rao Anata, Will Deacon

On Sat, 27 May 2023 05:02:32 +0100,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> This series fixes issues with PMUVer handling for a guest with
> PMU configured on heterogeneous PMU systems.
> Specifically, it addresses the following two issues.
> 
> [A] The default value of ID_AA64DFR0_EL1.PMUVer of the vCPU is set
>     to its sanitized value.  This could be inappropriate on
>     heterogeneous PMU systems, as arm64_ftr_bits for PMUVer is defined
>     as FTR_EXACT with safe_val == 0 (when ID_AA64DFR0_EL1.PMUVer of all
>     PEs on the host is not uniform, the sanitized value will be 0).

Why is this a problem? The CPUs don't implement the same version of
the architecture, we don't get a PMU. Why should we try to do anything
better? I really don't think we should go out of our way and make the
code more complicated for something that doesn't really exist.

Or am I missing the problem altogether?

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/4] KVM: arm64: PMU: Fix PMUVer handling on heterogeneous PMU systems
  2023-05-29 13:39   ` Marc Zyngier
@ 2023-05-30 12:53     ` Reiji Watanabe
  -1 siblings, 0 replies; 24+ messages in thread
From: Reiji Watanabe @ 2023-05-30 12:53 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Oliver Upton, kvmarm, kvm, linux-arm-kernel, James Morse,
	Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
	Ricardo Koller, Jing Zhang, Raghavendra Rao Anata, Will Deacon

Hi Marc,

On Mon, May 29, 2023 at 02:39:28PM +0100, Marc Zyngier wrote:
> On Sat, 27 May 2023 05:02:32 +0100,
> Reiji Watanabe <reijiw@google.com> wrote:
> > 
> > This series fixes issues with PMUVer handling for a guest with
> > PMU configured on heterogeneous PMU systems.
> > Specifically, it addresses the following two issues.
> > 
> > [A] The default value of ID_AA64DFR0_EL1.PMUVer of the vCPU is set
> >     to its sanitized value.  This could be inappropriate on
> >     heterogeneous PMU systems, as arm64_ftr_bits for PMUVer is defined
> >     as FTR_EXACT with safe_val == 0 (when ID_AA64DFR0_EL1.PMUVer of all
> >     PEs on the host is not uniform, the sanitized value will be 0).
> 
> Why is this a problem? The CPUs don't implement the same version of
> the architecture, we don't get a PMU. Why should we try to do anything
> better? I really don't think we should go out of our way and make the
> code more complicated for something that doesn't really exist.

Even when the CPUs don't implement the same version of the architecture,
if one of them implements PMUv3, KVM advertises KVM_CAP_ARM_PMU_V3,
and allows userspace to configure a PMU (KVM_ARM_VCPU_PMU_V3) for vCPUs.

In this case, although KVM provides PMU emulations for the guest,
the guest's ID_AA64DFR0_EL1.PMUVer will be zero.  Also,
KVM_SET_ONE_REG for ID_AA64DFR0_EL1 will never work for vCPUs
with PMU configured on such systems (since KVM also doesn't allow
userspace to set the PMUVer to 0 for the vCPUs with PMU configured).

I would think either ID_AA64DFR0_EL1.PMUVer for the guest should
indicate PMUv3, or KVM should not allow userspace to configure PMU,
in this case.

This series is a fix for the former, mainly to keep the current
behavior of KVM_CAP_ARM_PMU_V3 and KVM_ARM_VCPU_PMU_V3 on such
systems, since I wasn't sure if such systems don't really exist :)
(Also, I plan to implement a similar fix for PMCR_EL0.N on top of
those changes)

I could make a fix for the latter instead though. What do you think ?

Thank you,
Reiji

> 
> Or am I missing the problem altogether?
> 
> Thanks,
> 
> 	M.
> 
> -- 
> Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/4] KVM: arm64: PMU: Fix PMUVer handling on heterogeneous PMU systems
  2023-05-30 12:53     ` Reiji Watanabe
@ 2023-06-01  5:02       ` Marc Zyngier
  -1 siblings, 0 replies; 24+ messages in thread
From: Marc Zyngier @ 2023-06-01  5:02 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: Oliver Upton, kvmarm, kvm, linux-arm-kernel, James Morse,
	Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
	Ricardo Koller, Jing Zhang, Raghavendra Rao Anata, Will Deacon

Hey Reiji,

On Tue, 30 May 2023 13:53:24 +0100,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> Hi Marc,
> 
> On Mon, May 29, 2023 at 02:39:28PM +0100, Marc Zyngier wrote:
> > On Sat, 27 May 2023 05:02:32 +0100,
> > Reiji Watanabe <reijiw@google.com> wrote:
> > > 
> > > This series fixes issues with PMUVer handling for a guest with
> > > PMU configured on heterogeneous PMU systems.
> > > Specifically, it addresses the following two issues.
> > > 
> > > [A] The default value of ID_AA64DFR0_EL1.PMUVer of the vCPU is set
> > >     to its sanitized value.  This could be inappropriate on
> > >     heterogeneous PMU systems, as arm64_ftr_bits for PMUVer is defined
> > >     as FTR_EXACT with safe_val == 0 (when ID_AA64DFR0_EL1.PMUVer of all
> > >     PEs on the host is not uniform, the sanitized value will be 0).
> > 
> > Why is this a problem? The CPUs don't implement the same version of
> > the architecture, we don't get a PMU. Why should we try to do anything
> > better? I really don't think we should go out of our way and make the
> > code more complicated for something that doesn't really exist.
> 
> Even when the CPUs don't implement the same version of the architecture,
> if one of them implements PMUv3, KVM advertises KVM_CAP_ARM_PMU_V3,
> and allows userspace to configure PMU (KVM_ARM_VCPU_PMU_V3) for vCPUs.

Ah, I see it now. The kernel will register the PMU even if it decides
that advertising it is wrong, and then we pick it up. Great :-/.

> In this case, although KVM provides PMU emulations for the guest,
> the guest's ID_AA64DFR0_EL1.PMUVer will be zero.  Also,
> KVM_SET_ONE_REG for ID_AA64DFR0_EL1 will never work for vCPUs
> with PMU configured on such systems (since KVM also doesn't allow
> userspace to set the PMUVer to 0 for the vCPUs with PMU configured).
> 
> I would think either ID_AA64DFR0_EL1.PMUVer for the guest should
> indicate PMUv3, or KVM should not allow userspace to configure PMU,
> in this case.

My vote is on the latter. Even if a PMU is available, we should rely
on the feature exposed by the kernel to decide whether to expose a PMU
or not.

To be honest, this will affect almost nobody (I only know of a single
one, an obscure ARMv8.0+ARMv8.2 system which is very unlikely to ever
use KVM). I'm happy to take the responsibility to actively break those.

> This series is a fix for the former, mainly to keep the current
> behavior of KVM_CAP_ARM_PMU_V3 and KVM_ARM_VCPU_PMU_V3 on such
> systems, since I wasn't sure if such systems don't really exist :)
> (Also, I plan to implement a similar fix for PMCR_EL0.N on top of
> those changes)
> 
> I could make a fix for the latter instead though. What do you think ?

I think this would be valuable.

Also, didn't you have patches for the EL0 side of the PMU? I've been
trying to look for a new version, but couldn't find it...

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/4] KVM: arm64: PMU: Fix PMUVer handling on heterogeneous PMU systems
@ 2023-06-01  5:02       ` Marc Zyngier
  0 siblings, 0 replies; 24+ messages in thread
From: Marc Zyngier @ 2023-06-01  5:02 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: Oliver Upton, kvmarm, kvm, linux-arm-kernel, James Morse,
	Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
	Ricardo Koller, Jing Zhang, Raghavendra Rao Anata, Will Deacon

Hey Reiji,

On Tue, 30 May 2023 13:53:24 +0100,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> Hi Marc,
> 
> On Mon, May 29, 2023 at 02:39:28PM +0100, Marc Zyngier wrote:
> > On Sat, 27 May 2023 05:02:32 +0100,
> > Reiji Watanabe <reijiw@google.com> wrote:
> > > 
> > > This series fixes issues with PMUVer handling for a guest with
> > > PMU configured on heterogeneous PMU systems.
> > > Specifically, it addresses the following two issues.
> > > 
> > > [A] The default value of ID_AA64DFR0_EL1.PMUVer of the vCPU is set
> > >     to its sanitized value.  This could be inappropriate on
> > >     heterogeneous PMU systems, as arm64_ftr_bits for PMUVer is defined
> > >     as FTR_EXACT with safe_val == 0 (when ID_AA64DFR0_EL1.PMUVer of all
> > >     PEs on the host is not uniform, the sanitized value will be 0).
> > 
> > Why is this a problem? The CPUs don't implement the same version of
> > the architecture, we don't get a PMU. Why should we try to do anything
> > better? I really don't think we should go out or out way and make the
> > code more complicated for something that doesn't really exist.
> 
> Even when the CPUs don't implement the same version of the architecture,
> if one of them implement PMUv3, KVM advertises KVM_CAP_ARM_PMU_V3,
> and allows userspace to configure PMU (KVM_ARM_VCPU_PMU_V3) for vCPUs.

Ah, I see it now. The kernel will register the PMU even if it decides
that advertising it is wrong, and then we pick it up. Great :-/.

> In this case, although KVM provides PMU emulations for the guest,
> the guest's ID_AA64DFR0_EL1.PMUVer will be zero.  Also,
> KVM_SET_ONE_REG for ID_AA64DFR0_EL1 will never work for vCPUs
> with PMU configured on such systems (since KVM also doesn't allow
> userspace to set the PMUVer to 0 for the vCPUs with PMU configured).
> 
> I would think either ID_AA64DFR0_EL1.PMUVer for the guest should
> indicate PMUv3, or KVM should not allow userspace to configure PMU,
> in this case.

My vote is on the latter. Even if a PMU is available, we should rely
on the feature exposed by the kernel to decide whether to expose a PMU
or not.

To be honest, this will affect almost nobody (I only know of a single
one, an obscure ARMv8.0+ARMv8.2 system which is very unlikely to ever
use KVM). I'm happy to take the responsibility to actively break those.

> This series is a fix for the former, mainly to keep the current
> behavior of KVM_CAP_ARM_PMU_V3 and KVM_ARM_VCPU_PMU_V3 on such
> systems, since I wasn't sure if such systems don't really exist :)
> (Also, I plan to implement a similar fix for PMCR_EL0.N on top of
> those changes)
> 
> I could make a fix for the latter instead though. What do you think ?

I think this would be valuable.

Also, didn't you have patches for the EL0 side of the PMU? I've been
trying to look for a new version, but couldn't find it...

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/4] KVM: arm64: PMU: Fix PMUVer handling on heterogeneous PMU systems
  2023-06-01  5:02       ` Marc Zyngier
@ 2023-06-02  5:23         ` Reiji Watanabe
  -1 siblings, 0 replies; 24+ messages in thread
From: Reiji Watanabe @ 2023-06-02  5:23 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Oliver Upton, kvmarm, kvm, linux-arm-kernel, James Morse,
	Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
	Ricardo Koller, Jing Zhang, Raghavendra Rao Anata, Will Deacon

Hi Marc,

On Thu, Jun 01, 2023 at 06:02:41AM +0100, Marc Zyngier wrote:
> Hey Reiji,
> 
> On Tue, 30 May 2023 13:53:24 +0100,
> Reiji Watanabe <reijiw@google.com> wrote:
> > 
> > Hi Marc,
> > 
> > On Mon, May 29, 2023 at 02:39:28PM +0100, Marc Zyngier wrote:
> > > On Sat, 27 May 2023 05:02:32 +0100,
> > > Reiji Watanabe <reijiw@google.com> wrote:
> > > > 
> > > > This series fixes issues with PMUVer handling for a guest with
> > > > PMU configured on heterogeneous PMU systems.
> > > > Specifically, it addresses the following two issues.
> > > > 
> > > > [A] The default value of ID_AA64DFR0_EL1.PMUVer of the vCPU is set
> > > >     to its sanitized value.  This could be inappropriate on
> > > >     heterogeneous PMU systems, as arm64_ftr_bits for PMUVer is defined
> > > >     as FTR_EXACT with safe_val == 0 (when ID_AA64DFR0_EL1.PMUVer of all
> > > >     PEs on the host is not uniform, the sanitized value will be 0).
> > > 
> > > Why is this a problem? The CPUs don't implement the same version of
> > > the architecture, we don't get a PMU. Why should we try to do anything
> > > better? I really don't think we should go out of our way and make the
> > > code more complicated for something that doesn't really exist.
> > 
> > Even when the CPUs don't implement the same version of the architecture,
> > if one of them implements PMUv3, KVM advertises KVM_CAP_ARM_PMU_V3,
> > and allows userspace to configure PMU (KVM_ARM_VCPU_PMU_V3) for vCPUs.
> 
> Ah, I see it now. The kernel will register the PMU even if it decides
> that advertising it is wrong, and then we pick it up. Great :-/.
> 
> > In this case, although KVM provides PMU emulations for the guest,
> > the guest's ID_AA64DFR0_EL1.PMUVer will be zero.  Also,
> > KVM_SET_ONE_REG for ID_AA64DFR0_EL1 will never work for vCPUs
> > with PMU configured on such systems (since KVM also doesn't allow
> > userspace to set the PMUVer to 0 for the vCPUs with PMU configured).
> > 
> > I would think either ID_AA64DFR0_EL1.PMUVer for the guest should
> > indicate PMUv3, or KVM should not allow userspace to configure PMU,
> > in this case.
> 
> My vote is on the latter. Even if a PMU is available, we should rely
> on the feature exposed by the kernel to decide whether to expose a PMU
> or not.
> 
> To be honest, this will affect almost nobody (I only know of a single
> one, an obscure ARMv8.0+ARMv8.2 system which is very unlikely to ever
> use KVM). I'm happy to take the responsibility to actively break those.

Thank you for the information! Just curious, how about a mix of
cores with and without a PMU (with the same ARMv8.x version)?
I'm guessing there are very few, if any, though :)


> 
> > This series is a fix for the former, mainly to keep the current
> > behavior of KVM_CAP_ARM_PMU_V3 and KVM_ARM_VCPU_PMU_V3 on such
> > systems, since I wasn't sure if such systems don't really exist :)
> > (Also, I plan to implement a similar fix for PMCR_EL0.N on top of
> > those changes)
> > 
> > I could make a fix for the latter instead though. What do you think ?
> 
> I think this would be valuable.

Thank you for the comment! I will go with the latter.


> Also, didn't you have patches for the EL0 side of the PMU? I've been
> trying to look for a new version, but couldn't find it...

While reworking the series based on the recent comment from
Oliver (https://lore.kernel.org/all/ZG%2Fw95pYjWnMJB62@linux.dev/),
I hit a new PMU EL0 issue, which has blocked my testing of the series.
So, I am now debugging that issue.

It appears that arch_perf_update_userpage() defined in
drivers/perf/arm_pmuv3.c isn't used, and instead, the weak one in
kernel/events/core.c is used.  This prevents cap_user_rdpmc (etc.)
from being set (which prevented my test program from directly
accessing counters).  This seems to be caused by commit 7755cec63ade
("arm64: perf: Move PMUv3 driver to drivers/perf").

I have not yet figured out why the one in arm_pmuv3.c isn't used,
though (the weak one in core.c somehow seems to take precedence over
strong ones under drivers/...).

Anyway, I worked around the new issue for now and ran the tests for
my series. I will hopefully post the new version of the EL0 series
tomorrow.

Thank you,
Reiji


> 
> Thanks,
> 
> 	M.
> 
> -- 
> Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/4] KVM: arm64: PMU: Fix PMUVer handling on heterogeneous PMU systems
  2023-06-02  5:23         ` Reiji Watanabe
@ 2023-06-02  9:05           ` Marc Zyngier
  -1 siblings, 0 replies; 24+ messages in thread
From: Marc Zyngier @ 2023-06-02  9:05 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: Oliver Upton, kvmarm, kvm, linux-arm-kernel, James Morse,
	Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
	Ricardo Koller, Jing Zhang, Raghavendra Rao Anata, Will Deacon

On Fri, 02 Jun 2023 06:23:23 +0100,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> Hi Marc,
> 
> On Thu, Jun 01, 2023 at 06:02:41AM +0100, Marc Zyngier wrote:
> > Hey Reiji,
> > 
> > On Tue, 30 May 2023 13:53:24 +0100,
> > Reiji Watanabe <reijiw@google.com> wrote:
> > > 
> > > Hi Marc,
> > > 
> > > On Mon, May 29, 2023 at 02:39:28PM +0100, Marc Zyngier wrote:
> > > > On Sat, 27 May 2023 05:02:32 +0100,
> > > > Reiji Watanabe <reijiw@google.com> wrote:
> > > > > 
> > > > > This series fixes issues with PMUVer handling for a guest with
> > > > > PMU configured on heterogeneous PMU systems.
> > > > > Specifically, it addresses the following two issues.
> > > > > 
> > > > > [A] The default value of ID_AA64DFR0_EL1.PMUVer of the vCPU is set
> > > > >     to its sanitized value.  This could be inappropriate on
> > > > >     heterogeneous PMU systems, as arm64_ftr_bits for PMUVer is defined
> > > > >     as FTR_EXACT with safe_val == 0 (when ID_AA64DFR0_EL1.PMUVer of all
> > > > >     PEs on the host is not uniform, the sanitized value will be 0).
> > > > 
> > > > Why is this a problem? The CPUs don't implement the same version of
> > > > the architecture, we don't get a PMU. Why should we try to do anything
> > > > better? I really don't think we should go out of our way and make the
> > > > code more complicated for something that doesn't really exist.
> > > 
> > > Even when the CPUs don't implement the same version of the architecture,
> > > if one of them implements PMUv3, KVM advertises KVM_CAP_ARM_PMU_V3,
> > > and allows userspace to configure PMU (KVM_ARM_VCPU_PMU_V3) for vCPUs.
> > 
> > Ah, I see it now. The kernel will register the PMU even if it decides
> > that advertising it is wrong, and then we pick it up. Great :-/.
> > 
> > > In this case, although KVM provides PMU emulations for the guest,
> > > the guest's ID_AA64DFR0_EL1.PMUVer will be zero.  Also,
> > > KVM_SET_ONE_REG for ID_AA64DFR0_EL1 will never work for vCPUs
> > > with PMU configured on such systems (since KVM also doesn't allow
> > > userspace to set the PMUVer to 0 for the vCPUs with PMU configured).
> > > 
> > > I would think either ID_AA64DFR0_EL1.PMUVer for the guest should
> > > indicate PMUv3, or KVM should not allow userspace to configure PMU,
> > > in this case.
> > 
> > My vote is on the latter. Even if a PMU is available, we should rely
> > on the feature exposed by the kernel to decide whether to expose a PMU
> > or not.
> > 
> > To be honest, this will affect almost nobody (I only know of a single
> > one, an obscure ARMv8.0+ARMv8.2 system which is very unlikely to ever
> > use KVM). I'm happy to take the responsibility to actively break those.
> 
> Thank you for the information! Just curious, how about a mix of
> cores with and without a PMU (with the same ARMv8.x version)?
> I'm guessing there are very few, if any, though :)

I don't know of any. Similar things for IMPDEF PMUs. And to be honest,
I'd be very tempted to nuke that in KVM as well, because this is one
of the worst decisions I ever made.

> > > This series is a fix for the former, mainly to keep the current
> > > behavior of KVM_CAP_ARM_PMU_V3 and KVM_ARM_VCPU_PMU_V3 on such
> > > systems, since I wasn't sure if such systems don't really exist :)
> > > (Also, I plan to implement a similar fix for PMCR_EL0.N on top of
> > > those changes)
> > > 
> > > I could make a fix for the latter instead though. What do you think ?
> > 
> > I think this would be valuable.
> 
> Thank you for the comment! I will go with the latter.

Thanks.

> > Also, didn't you have patches for the EL0 side of the PMU? I've been
> > trying to look for a new version, but couldn't find it...
> 
> While I'm working on fixing the series based on the recent comment from
> Oliver (https://lore.kernel.org/all/ZG%2Fw95pYjWnMJB62@linux.dev/),
> I have a new PMU EL0 issue, which blocked my testing of the series.
> So, I am debugging the new PMU EL0 issue.
> 
> It appears that arch_perf_update_userpage() defined in
> drivers/perf/arm_pmuv3.c isn't used, and instead, the weak one in
> kernel/events/core.c is used.

Wut??? How come? /me disassembles the kernel:

ffff8000082a1ab0 <arch_perf_update_userpage>:
ffff8000082a1ab0:       d503201f        nop
ffff8000082a1ab4:       d503201f        nop
ffff8000082a1ab8:       d65f03c0        ret
ffff8000082a1abc:       d503201f        nop
ffff8000082a1ac0:       d503201f        nop
ffff8000082a1ac4:       d503201f        nop

What the hell is happening here???

> This prevents cap_user_rdpmc (etc.)
> from being set (This prevented my test program from directly
> accessing counters).  This seems to be caused by the commit 7755cec63ade
> ("arm64: perf: Move PMUv3 driver to drivers/perf").

It is becoming more puzzling by the minute.

> 
> I have not yet figured out why the one in arm_pmuv3.c isn't used
> though (The weak one in core.c seems to take precedence over strong
> ones under drivers/ somehow...).
> 
> Anyway, I worked around the new issue for now, and ran the test for
> my series though. I will post the new version of the EL0 series
> tomorrow hopefully.

I have a "fix" for this. It doesn't make any sense, but it seems to
work here (GCC 10.2.1 from Debian). Can you please give it a shot?

Thanks,

	M.

From 236ac26bd0e03bf2ca3b40471b61a35b02272662 Mon Sep 17 00:00:00 2001
From: Marc Zyngier <maz@kernel.org>
Date: Fri, 2 Jun 2023 09:52:25 +0100
Subject: [PATCH] perf/core: Drop __weak attribute on arch-specific prototypes

Reiji reports that the arm64 implementation of arch_perf_update_userpage()
is now ignored and replaced by the dummy stub in core code.
This seems to happen since the PMUv3 driver was moved to drivers/perf.

As it turns out, dropping the __weak attribute from the *prototype*
of the function solves the problem. You're right, this doesn't seem
to make much sense. And yet...

With this, arm64 is able to enjoy arch_perf_update_userpage() again.

And while we're at it, drop the same __weak attribute from the
arch_perf_get_page_size() prototype.

Reported-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 include/linux/perf_event.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index d5628a7b5eaa..1509aea69a16 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1845,12 +1845,12 @@ int perf_event_exit_cpu(unsigned int cpu);
 #define perf_event_exit_cpu	NULL
 #endif
 
-extern void __weak arch_perf_update_userpage(struct perf_event *event,
-					     struct perf_event_mmap_page *userpg,
-					     u64 now);
+extern void arch_perf_update_userpage(struct perf_event *event,
+				      struct perf_event_mmap_page *userpg,
+				      u64 now);
 
 #ifdef CONFIG_MMU
-extern __weak u64 arch_perf_get_page_size(struct mm_struct *mm, unsigned long addr);
+extern u64 arch_perf_get_page_size(struct mm_struct *mm, unsigned long addr);
 #endif
 
 /*
-- 
2.39.2


-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/4] KVM: arm64: PMU: Fix PMUVer handling on heterogeneous PMU systems
  2023-06-02  9:05           ` Marc Zyngier
@ 2023-06-02 16:07             ` Reiji Watanabe
  -1 siblings, 0 replies; 24+ messages in thread
From: Reiji Watanabe @ 2023-06-02 16:07 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Oliver Upton, kvmarm, kvm, linux-arm-kernel, James Morse,
	Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini,
	Ricardo Koller, Jing Zhang, Raghavendra Rao Anata, Will Deacon

> > > Also, didn't you have patches for the EL0 side of the PMU? I've been
> > > trying to look for a new version, but couldn't find it...
> > 
> > While I'm working on fixing the series based on the recent comment from
> > Oliver (https://lore.kernel.org/all/ZG%2Fw95pYjWnMJB62@linux.dev/),
> > I have a new PMU EL0 issue, which blocked my testing of the series.
> > So, I am debugging the new PMU EL0 issue.
> > 
> > It appears that arch_perf_update_userpage() defined in
> > drivers/perf/arm_pmuv3.c isn't used, and instead, the weak one in
> > kernel/events/core.c is used.
> 
> Wut??? How comes? /me disassembles the kernel:
> 
> ffff8000082a1ab0 <arch_perf_update_userpage>:
> ffff8000082a1ab0:       d503201f        nop
> ffff8000082a1ab4:       d503201f        nop
> ffff8000082a1ab8:       d65f03c0        ret
> ffff8000082a1abc:       d503201f        nop
> ffff8000082a1ac0:       d503201f        nop
> ffff8000082a1ac4:       d503201f        nop
> 
> What the hell is happening here???
> 
> > This prevents cap_user_rdpmc (, etc)
> > from being set (This prevented my test program from directly
> > accessing counters).  This seems to be caused by the commit 7755cec63ade
> > ("arm64: perf: Move PMUv3 driver to drivers/perf").
> 
> It is becoming more puzzling by the minute.
> 
> > 
> > I have not yet figured out why the one in arm_pmuv3.c isn't used
> > though (The weak one in core.c seems to take precedence over strong
> > ones under drivers/ somehow...).
> > 
> > Anyway, I worked around the new issue for now, and ran the test for
> > my series though. I will post the new version of the EL0 series
> > tomorrow hopefully.
> 
> I have a "fix" for this. It doesn't make any sense, but it seems to
> work here (GCC 10.2.1 from Debian). Can you please give it a shot?
> 
> Thanks,
> 
> 	M.
> 
> From 236ac26bd0e03bf2ca3b40471b61a35b02272662 Mon Sep 17 00:00:00 2001
> From: Marc Zyngier <maz@kernel.org>
> Date: Fri, 2 Jun 2023 09:52:25 +0100
> Subject: [PATCH] perf/core: Drop __weak attribute on arch-specific prototypes
> 
> Reiji reports that the arm64 implementation of arch_perf_update_userpage()
> is now ignored and replaced by the dummy stub in core code.
> This seems to happen since the PMUv3 driver was moved to drivers/perf.
> 
> As it turns out, dropping the __weak attribute from the *prototype*
> of the function solves the problem. You're right, this doesn't seem
> to make much sense. And yet...
> 
> With this, arm64 is able to enjoy arch_perf_update_userpage() again.

Oh, that's interesting... But it worked, thank you!
(With the patch, the disassembly of arch_perf_update_userpage in the
kernel looks right, and my EL0 test works fine.)


> And while we're at it, drop the same __weak attribute from the
> arch_perf_get_page_size() prototype.

The arch_perf_get_page_size() prototype seems to be unnecessary now
(after commit 8af26be06272 ("perf/core: Fix arch_perf_get_page_size()")).
So, it appears that we could drop the prototype itself.

Thank you,
Reiji


> 
> Reported-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  include/linux/perf_event.h | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index d5628a7b5eaa..1509aea69a16 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -1845,12 +1845,12 @@ int perf_event_exit_cpu(unsigned int cpu);
>  #define perf_event_exit_cpu	NULL
>  #endif
>  
> -extern void __weak arch_perf_update_userpage(struct perf_event *event,
> -					     struct perf_event_mmap_page *userpg,
> -					     u64 now);
> +extern void arch_perf_update_userpage(struct perf_event *event,
> +				      struct perf_event_mmap_page *userpg,
> +				      u64 now);
>  
>  #ifdef CONFIG_MMU
> -extern __weak u64 arch_perf_get_page_size(struct mm_struct *mm, unsigned long addr);
> +extern u64 arch_perf_get_page_size(struct mm_struct *mm, unsigned long addr);
>  #endif
>  
>  /*
> -- 
> 2.39.2
> 
> 
> -- 
> Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 24+ messages in thread

> 
> Reiji reports that the arm64 implementation of arch_perf_update_userpage()
> is now ignored and replaced by the dummy stub in core code.
> This seems to happen since the PMUv3 driver was moved to driver/perf.
> 
> As it turns out, dropping the __weak attribute from the *prototype*
> of the function solves the problem. You're right, this doesn't seem
> to make much sense. And yet...
> 
> With this, arm64 is able to enjoy arch_perf_update_userpage() again.

Oh, that's interesting... But it worked, thank you!
(With the patch, the disassembly of arch_perf_update_userpage
in the kernel looks right, and my EL0 test works fine.)


> And while we're at it, drop the same __weak attribute from the
> arch_perf_get_page_size() prototype.

The arch_perf_get_page_size() prototype seems to be unnecessary now
(after commit 8af26be06272, "perf/core: Fix arch_perf_get_page_size()").
So it appears we could drop the prototype itself.

Thank you,
Reiji


> 
> Reported-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  include/linux/perf_event.h | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index d5628a7b5eaa..1509aea69a16 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -1845,12 +1845,12 @@ int perf_event_exit_cpu(unsigned int cpu);
>  #define perf_event_exit_cpu	NULL
>  #endif
>  
> -extern void __weak arch_perf_update_userpage(struct perf_event *event,
> -					     struct perf_event_mmap_page *userpg,
> -					     u64 now);
> +extern void arch_perf_update_userpage(struct perf_event *event,
> +				      struct perf_event_mmap_page *userpg,
> +				      u64 now);
>  
>  #ifdef CONFIG_MMU
> -extern __weak u64 arch_perf_get_page_size(struct mm_struct *mm, unsigned long addr);
> +extern u64 arch_perf_get_page_size(struct mm_struct *mm, unsigned long addr);
>  #endif
>  
>  /*
> -- 
> 2.39.2
> 
> 
> -- 
> Without deviation from the norm, progress is not possible.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2023-06-02 16:07 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-05-27  4:02 [PATCH 0/4] KVM: arm64: PMU: Fix PMUVer handling on heterogeneous PMU systems Reiji Watanabe
2023-05-27  4:02 ` Reiji Watanabe
2023-05-27  4:02 ` [PATCH 1/4] KVM: arm64: PMU: Introduce a helper to set the guest's PMU Reiji Watanabe
2023-05-27  4:02   ` Reiji Watanabe
2023-05-27  4:02 ` [PATCH 2/4] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset Reiji Watanabe
2023-05-27  4:02   ` Reiji Watanabe
2023-05-27 17:35   ` kernel test robot
2023-05-27 17:35     ` kernel test robot
2023-05-27  4:02 ` [PATCH 3/4] KVM: arm64: PMU: Use PMUVer of the guest's PMU for ID_AA64DFR0.PMUVer Reiji Watanabe
2023-05-27  4:02   ` Reiji Watanabe
2023-05-27  4:02 ` [PATCH 4/4] KVM: arm64: PMU: Don't use the PMUVer of the PMU set for guest Reiji Watanabe
2023-05-27  4:02   ` Reiji Watanabe
2023-05-29 13:39 ` [PATCH 0/4] KVM: arm64: PMU: Fix PMUVer handling on heterogeneous PMU systems Marc Zyngier
2023-05-29 13:39   ` Marc Zyngier
2023-05-30 12:53   ` Reiji Watanabe
2023-05-30 12:53     ` Reiji Watanabe
2023-06-01  5:02     ` Marc Zyngier
2023-06-01  5:02       ` Marc Zyngier
2023-06-02  5:23       ` Reiji Watanabe
2023-06-02  5:23         ` Reiji Watanabe
2023-06-02  9:05         ` Marc Zyngier
2023-06-02  9:05           ` Marc Zyngier
2023-06-02 16:07           ` Reiji Watanabe
2023-06-02 16:07             ` Reiji Watanabe
