* [PATCH 0/5] KVM: arm64: Cleanups and one optimization
From: Alexandru Elisei @ 2021-07-14  9:55 UTC
  To: maz, linux-arm-kernel, kvmarm
  Cc: james.morse, suzuki.poulose

The first four patches are cosmetic and aim to remove the inconsistencies I
noticed around ctxt_sys_reg/vcpu_sys_reg and how feature bits are checked.

The last patch is a minor optimization to the way KVM disables profiling
when running with VHE disabled.

Based on v5.14-rc1. The changes touch quite a lot of code; I'm happy to
rebase on another branch if necessary.

Boot tested on an Odroid C4 with 4k, 16k and 64k guests running on 4k, 16k
and 64k hosts (so 3 x 3 combinations in total).

Alexandru Elisei (5):
  KVM: arm64: Move vcpu_has_feature() to asm/kvm_host.h
  KVM: arm64: Use vcpu_has_feature() to check the feature bits
  KVM: arm64: Rename __vcpu_sys_reg -> vcpu_sys_reg
  KVM: arm64: Add __vcpu_sys_reg()
  KVM: arm64: nVHE: Remove unneeded isb() when modifying PMSCR_EL1

 arch/arm64/include/asm/kvm_emulate.h       |  7 +--
 arch/arm64/include/asm/kvm_host.h          | 13 +++--
 arch/arm64/kvm/arch_timer.c                | 20 ++++----
 arch/arm64/kvm/arm.c                       |  5 +-
 arch/arm64/kvm/fpsimd.c                    |  2 +-
 arch/arm64/kvm/guest.c                     |  6 +--
 arch/arm64/kvm/hyp/exception.c             |  4 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h    |  6 +--
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 12 ++---
 arch/arm64/kvm/hyp/nvhe/debug-sr.c         |  1 -
 arch/arm64/kvm/pmu-emul.c                  | 58 +++++++++++-----------
 arch/arm64/kvm/psci.c                      |  2 +-
 arch/arm64/kvm/reset.c                     | 12 ++---
 arch/arm64/kvm/sys_regs.c                  | 54 ++++++++++----------
 arch/arm64/kvm/sys_regs.h                  |  4 +-
 include/kvm/arm_psci.h                     |  2 +-
 16 files changed, 103 insertions(+), 105 deletions(-)

-- 
2.32.0


* [PATCH 1/5] KVM: arm64: Move vcpu_has_feature() to asm/kvm_host.h
From: Alexandru Elisei @ 2021-07-14  9:55 UTC
  To: maz, linux-arm-kernel, kvmarm
  Cc: james.morse, suzuki.poulose

Move the vcpu_has_feature() function to asm/kvm_host.h in preparation for
replacing test_bit() feature checks.

struct kvm_vcpu is defined in linux/kvm_host.h, which itself includes
asm/kvm_host.h; the function has therefore been converted into a macro to
avoid compiler errors about struct kvm_vcpu being undefined at that point.
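
As a minimal sketch of the include-order problem (simplified, not the
actual kernel headers): when asm/kvm_host.h is parsed, struct kvm_vcpu
is still an incomplete type, so a static inline cannot dereference it,
while a macro body is only expanded at its call sites, where
linux/kvm_host.h has already completed the definition:

	struct kvm_vcpu;	/* incomplete inside asm/kvm_host.h */

	/*
	 * A static inline fails here, because its body dereferences
	 * the incomplete type as soon as the header is parsed:
	 *
	 * static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu,
	 *				       int feature)
	 * {
	 *	return test_bit(feature, vcpu->arch.features);
	 * }
	 */

	/* The macro is expanded only at call sites, where the struct
	 * is complete, so it compiles. */
	#define vcpu_has_feature(vcpu, feature) \
		(test_bit((feature), (vcpu)->arch.features))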

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/include/asm/kvm_emulate.h | 5 -----
 arch/arm64/include/asm/kvm_host.h    | 3 +++
 2 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index fd418955e31e..6bf776c2399c 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -466,9 +466,4 @@ static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
 	vcpu->arch.flags |= KVM_ARM64_INCREMENT_PC;
 }
 
-static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu, int feature)
-{
-	return test_bit(feature, vcpu->arch.features);
-}
-
 #endif /* __ARM64_KVM_EMULATE_H__ */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 41911585ae0c..0b087bcfcfeb 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -774,6 +774,9 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type);
 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 
+#define vcpu_has_feature(vcpu, feature) \
+	(test_bit((feature), (vcpu)->arch.features))
+
 #define kvm_arm_vcpu_sve_finalized(vcpu) \
 	((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED)
 
-- 
2.32.0


* [PATCH 2/5] KVM: arm64: Use vcpu_has_feature() to check the feature bits
From: Alexandru Elisei @ 2021-07-14  9:55 UTC
  To: maz, linux-arm-kernel, kvmarm
  Cc: james.morse, suzuki.poulose

Commit 66e94d5cafd4 ("KVM: arm64: Prevent mixed-width VM creation") added
the vcpu_has_feature() function for checking if a feature bit is set.
Replace uses of test_bit() with the vcpu_has_feature() abstraction.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
There's one place where I opted not to replace test_bit() with
vcpu_has_feature(): arm.c::kvm_vcpu_set_target(). That function already
manipulates the feature bits directly, so I preferred to keep it
consistent, as sketched below.
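
A hypothetical sketch of that consistency argument (illustrative only,
not the actual kvm_vcpu_set_target() body): in code that populates the
bitmap with raw bitmap helpers, a raw test_bit() next to them reads
more uniformly than mixing in the accessor:

	/* hypothetical: set feature bits requested by userspace */
	bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
	for (i = 0; i < KVM_VCPU_MAX_FEATURES; i++) {
		if (init->features[i / 32] & BIT(i % 32))
			set_bit(i, vcpu->arch.features);
	}

	/* a raw test_bit() here matches the set_bit() above */
	if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
		...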

 arch/arm64/include/asm/kvm_emulate.h |  2 +-
 arch/arm64/include/asm/kvm_host.h    |  4 ++--
 arch/arm64/kvm/arm.c                 |  2 +-
 arch/arm64/kvm/psci.c                |  2 +-
 arch/arm64/kvm/reset.c               | 12 ++++++------
 include/kvm/arm_psci.h               |  2 +-
 6 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 6bf776c2399c..738fb51920d2 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -70,7 +70,7 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 		vcpu->arch.hcr_el2 |= HCR_TVM;
 	}
 
-	if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
+	if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT))
 		vcpu->arch.hcr_el2 &= ~HCR_RW;
 
 	/*
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 0b087bcfcfeb..e6bdf1feb922 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -781,8 +781,8 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 	((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED)
 
 #define kvm_has_mte(kvm) (system_supports_mte() && (kvm)->arch.mte_enabled)
-#define kvm_vcpu_has_pmu(vcpu)					\
-	(test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))
+
+#define kvm_vcpu_has_pmu(vcpu)	(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
 
 int kvm_trng_call(struct kvm_vcpu *vcpu);
 #ifdef CONFIG_KVM
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e9a2b8f27792..646c7b003a59 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1108,7 +1108,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	/*
 	 * Handle the "start in power-off" case.
 	 */
-	if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
+	if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_POWER_OFF))
 		vcpu_power_off(vcpu);
 	else
 		vcpu->arch.power_off = false;
diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c
index db4056ecccfd..536f637f1e41 100644
--- a/arch/arm64/kvm/psci.c
+++ b/arch/arm64/kvm/psci.c
@@ -492,7 +492,7 @@ int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	{
 		bool wants_02;
 
-		wants_02 = test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features);
+		wants_02 = vcpu_has_feature(vcpu, KVM_ARM_VCPU_PSCI_0_2);
 
 		switch (val) {
 		case KVM_ARM_PSCI_0_1:
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index cba7872d69a8..80b16984cf3a 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -157,8 +157,8 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
 	 * features are requested by the userspace together and the system
 	 * supports these capabilities.
 	 */
-	if (!test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
-	    !test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features) ||
+	if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_ADDRESS) ||
+	    !vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_GENERIC) ||
 	    !system_has_full_ptr_auth())
 		return -EINVAL;
 
@@ -223,7 +223,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		kvm_arch_vcpu_put(vcpu);
 
 	if (!kvm_arm_vcpu_sve_finalized(vcpu)) {
-		if (test_bit(KVM_ARM_VCPU_SVE, vcpu->arch.features)) {
+		if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) {
 			ret = kvm_vcpu_enable_sve(vcpu);
 			if (ret)
 				goto out;
@@ -232,8 +232,8 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		kvm_vcpu_reset_sve(vcpu);
 	}
 
-	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
-	    test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
+	if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_ADDRESS) ||
+	    vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_GENERIC)) {
 		if (kvm_vcpu_enable_ptrauth(vcpu)) {
 			ret = -EINVAL;
 			goto out;
@@ -247,7 +247,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 
 	switch (vcpu->arch.target) {
 	default:
-		if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
+		if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT)) {
 			pstate = VCPU_RESET_PSTATE_SVC;
 		} else {
 			pstate = VCPU_RESET_PSTATE_EL1;
diff --git a/include/kvm/arm_psci.h b/include/kvm/arm_psci.h
index 5b58bd2fe088..8b7891d7a368 100644
--- a/include/kvm/arm_psci.h
+++ b/include/kvm/arm_psci.h
@@ -29,7 +29,7 @@ static inline int kvm_psci_version(struct kvm_vcpu *vcpu, struct kvm *kvm)
 	 * revisions. It is thus safe to return the latest, unless
 	 * userspace has instructed us otherwise.
 	 */
-	if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features)) {
+	if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PSCI_0_2)) {
 		if (vcpu->kvm->arch.psci_version)
 			return vcpu->kvm->arch.psci_version;
 
-- 
2.32.0


* [PATCH 3/5] KVM: arm64: Rename __vcpu_sys_reg -> vcpu_sys_reg
From: Alexandru Elisei @ 2021-07-14  9:55 UTC
  To: maz, linux-arm-kernel, kvmarm
  Cc: james.morse, suzuki.poulose

There are two macros to access a specific system register from a known
kvm_cpu_context: __ctxt_sys_reg(), which returns a pointer to the register,
and ctxt_sys_reg(), which dereferences the pointer returned by
__ctxt_sys_reg().

__vcpu_sys_reg() serves a similar purpose, the difference being that it
takes a struct kvm_vcpu as a parameter. Although its name makes it look
like __ctxt_sys_reg(), it dereferences the pointer to the register, as
ctxt_sys_reg() does; indeed, it is defined as an abstraction over
ctxt_sys_reg().

Let's remove this naming inconsistency by renaming __vcpu_sys_reg() to
vcpu_sys_reg(), to make it clear it behaves like ctxt_sys_reg(), and not
like __ctxt_sys_reg().
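
To summarize the convention the rename restores (a simplified sketch of
the macros, not their exact definitions):

	/* double underscore -> pointer, no underscore -> value */
	#define __ctxt_sys_reg(c,r)	(&(c)->sys_regs[(r)])
	#define ctxt_sys_reg(c,r)	(*__ctxt_sys_reg(c,r))

	/* after this patch the vcpu accessor follows the same rule */
	#define vcpu_sys_reg(v,r)	(ctxt_sys_reg(&(v)->arch.ctxt, (r)))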

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/include/asm/kvm_host.h          |  4 +-
 arch/arm64/kvm/arch_timer.c                | 20 ++++----
 arch/arm64/kvm/arm.c                       |  3 +-
 arch/arm64/kvm/fpsimd.c                    |  2 +-
 arch/arm64/kvm/hyp/exception.c             |  4 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h    |  6 +--
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 12 ++---
 arch/arm64/kvm/pmu-emul.c                  | 58 +++++++++++-----------
 arch/arm64/kvm/sys_regs.c                  | 54 ++++++++++----------
 arch/arm64/kvm/sys_regs.h                  |  4 +-
 10 files changed, 83 insertions(+), 84 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e6bdf1feb922..76efede8beae 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -459,7 +459,7 @@ struct kvm_vcpu_arch {
 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.regs)
 
 /*
- * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the
+ * Only use vcpu_sys_reg/ctxt_sys_reg if you know you want the
  * memory backed version of a register, and not the one most recently
  * accessed by a running VCPU.  For example, for userspace access or
  * for system registers that are never context switched, but only
@@ -469,7 +469,7 @@ struct kvm_vcpu_arch {
 
 #define ctxt_sys_reg(c,r)	(*__ctxt_sys_reg(c,r))
 
-#define __vcpu_sys_reg(v,r)	(ctxt_sys_reg(&(v)->arch.ctxt, (r)))
+#define vcpu_sys_reg(v,r)	(ctxt_sys_reg(&(v)->arch.ctxt, (r)))
 
 u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg);
 void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);
diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index 3df67c127489..b928a70c01d2 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -58,9 +58,9 @@ u32 timer_get_ctl(struct arch_timer_context *ctxt)
 
 	switch(arch_timer_ctx_index(ctxt)) {
 	case TIMER_VTIMER:
-		return __vcpu_sys_reg(vcpu, CNTV_CTL_EL0);
+		return vcpu_sys_reg(vcpu, CNTV_CTL_EL0);
 	case TIMER_PTIMER:
-		return __vcpu_sys_reg(vcpu, CNTP_CTL_EL0);
+		return vcpu_sys_reg(vcpu, CNTP_CTL_EL0);
 	default:
 		WARN_ON(1);
 		return 0;
@@ -73,9 +73,9 @@ u64 timer_get_cval(struct arch_timer_context *ctxt)
 
 	switch(arch_timer_ctx_index(ctxt)) {
 	case TIMER_VTIMER:
-		return __vcpu_sys_reg(vcpu, CNTV_CVAL_EL0);
+		return vcpu_sys_reg(vcpu, CNTV_CVAL_EL0);
 	case TIMER_PTIMER:
-		return __vcpu_sys_reg(vcpu, CNTP_CVAL_EL0);
+		return vcpu_sys_reg(vcpu, CNTP_CVAL_EL0);
 	default:
 		WARN_ON(1);
 		return 0;
@@ -88,7 +88,7 @@ static u64 timer_get_offset(struct arch_timer_context *ctxt)
 
 	switch(arch_timer_ctx_index(ctxt)) {
 	case TIMER_VTIMER:
-		return __vcpu_sys_reg(vcpu, CNTVOFF_EL2);
+		return vcpu_sys_reg(vcpu, CNTVOFF_EL2);
 	default:
 		return 0;
 	}
@@ -100,10 +100,10 @@ static void timer_set_ctl(struct arch_timer_context *ctxt, u32 ctl)
 
 	switch(arch_timer_ctx_index(ctxt)) {
 	case TIMER_VTIMER:
-		__vcpu_sys_reg(vcpu, CNTV_CTL_EL0) = ctl;
+		vcpu_sys_reg(vcpu, CNTV_CTL_EL0) = ctl;
 		break;
 	case TIMER_PTIMER:
-		__vcpu_sys_reg(vcpu, CNTP_CTL_EL0) = ctl;
+		vcpu_sys_reg(vcpu, CNTP_CTL_EL0) = ctl;
 		break;
 	default:
 		WARN_ON(1);
@@ -116,10 +116,10 @@ static void timer_set_cval(struct arch_timer_context *ctxt, u64 cval)
 
 	switch(arch_timer_ctx_index(ctxt)) {
 	case TIMER_VTIMER:
-		__vcpu_sys_reg(vcpu, CNTV_CVAL_EL0) = cval;
+		vcpu_sys_reg(vcpu, CNTV_CVAL_EL0) = cval;
 		break;
 	case TIMER_PTIMER:
-		__vcpu_sys_reg(vcpu, CNTP_CVAL_EL0) = cval;
+		vcpu_sys_reg(vcpu, CNTP_CVAL_EL0) = cval;
 		break;
 	default:
 		WARN_ON(1);
@@ -132,7 +132,7 @@ static void timer_set_offset(struct arch_timer_context *ctxt, u64 offset)
 
 	switch(arch_timer_ctx_index(ctxt)) {
 	case TIMER_VTIMER:
-		__vcpu_sys_reg(vcpu, CNTVOFF_EL2) = offset;
+		vcpu_sys_reg(vcpu, CNTVOFF_EL2) = offset;
 		break;
 	default:
 		WARN(offset, "timer %ld\n", arch_timer_ctx_index(ctxt));
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 646c7b003a59..04c722e4eca0 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -700,8 +700,7 @@ static void check_vcpu_requests(struct kvm_vcpu *vcpu)
 		}
 
 		if (kvm_check_request(KVM_REQ_RELOAD_PMU, vcpu))
-			kvm_pmu_handle_pmcr(vcpu,
-					    __vcpu_sys_reg(vcpu, PMCR_EL0));
+			kvm_pmu_handle_pmcr(vcpu, vcpu_sys_reg(vcpu, PMCR_EL0));
 	}
 }
 
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 5621020b28de..3fea05c7f49d 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -122,7 +122,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 
 	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
 		if (guest_has_sve) {
-			__vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
+			vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
 
 			/* Restore the VL that was saved when bound to the CPU */
 			if (!has_vhe())
diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index 0418399e0a20..89b5cdc850f5 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -25,7 +25,7 @@ static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
 	if (__vcpu_read_sys_reg_from_cpu(reg, &val))
 		return val;
 
-	return __vcpu_sys_reg(vcpu, reg);
+	return vcpu_sys_reg(vcpu, reg);
 }
 
 static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
@@ -33,7 +33,7 @@ static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
 	if (__vcpu_write_sys_reg_to_cpu(val, reg))
 		return;
 
-	 __vcpu_sys_reg(vcpu, reg) = val;
+	 vcpu_sys_reg(vcpu, reg) = val;
 }
 
 static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index e4a2f295a394..cc0ab79edcd9 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -57,7 +57,7 @@ static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu)
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
 
-	__vcpu_sys_reg(vcpu, FPEXC32_EL2) = read_sysreg(fpexc32_el2);
+	vcpu_sys_reg(vcpu, FPEXC32_EL2) = read_sysreg(fpexc32_el2);
 }
 
 static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
@@ -218,7 +218,7 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 	sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
 	__sve_restore_state(vcpu_sve_pffr(vcpu),
 			    &vcpu->arch.ctxt.fp_regs.fpsr);
-	write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
+	write_sysreg_el1(vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
 }
 
 /* Check for an FPSIMD/SVE trap and handle as appropriate */
@@ -280,7 +280,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 
 	/* Skip restoring fpexc32 for AArch64 guests */
 	if (!(read_sysreg(hcr_el2) & HCR_RW))
-		write_sysreg(__vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2);
+		write_sysreg(vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2);
 
 	vcpu->arch.flags |= KVM_ARM64_FP_ENABLED;
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index de7e14c862e6..c50e7462adff 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -187,11 +187,11 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu)
 	vcpu->arch.ctxt.spsr_irq = read_sysreg(spsr_irq);
 	vcpu->arch.ctxt.spsr_fiq = read_sysreg(spsr_fiq);
 
-	__vcpu_sys_reg(vcpu, DACR32_EL2) = read_sysreg(dacr32_el2);
-	__vcpu_sys_reg(vcpu, IFSR32_EL2) = read_sysreg(ifsr32_el2);
+	vcpu_sys_reg(vcpu, DACR32_EL2) = read_sysreg(dacr32_el2);
+	vcpu_sys_reg(vcpu, IFSR32_EL2) = read_sysreg(ifsr32_el2);
 
 	if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
-		__vcpu_sys_reg(vcpu, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2);
+		vcpu_sys_reg(vcpu, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2);
 }
 
 static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
@@ -204,11 +204,11 @@ static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
 	write_sysreg(vcpu->arch.ctxt.spsr_irq, spsr_irq);
 	write_sysreg(vcpu->arch.ctxt.spsr_fiq, spsr_fiq);
 
-	write_sysreg(__vcpu_sys_reg(vcpu, DACR32_EL2), dacr32_el2);
-	write_sysreg(__vcpu_sys_reg(vcpu, IFSR32_EL2), ifsr32_el2);
+	write_sysreg(vcpu_sys_reg(vcpu, DACR32_EL2), dacr32_el2);
+	write_sysreg(vcpu_sys_reg(vcpu, IFSR32_EL2), ifsr32_el2);
 
 	if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
-		write_sysreg(__vcpu_sys_reg(vcpu, DBGVCR32_EL2), dbgvcr32_el2);
+		write_sysreg(vcpu_sys_reg(vcpu, DBGVCR32_EL2), dbgvcr32_el2);
 }
 
 #endif /* __ARM64_KVM_HYP_SYSREG_SR_H__ */
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index f33825c995cb..f53e1a2d6df5 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -43,7 +43,7 @@ static u32 kvm_pmu_event_mask(struct kvm *kvm)
 static bool kvm_pmu_idx_is_64bit(struct kvm_vcpu *vcpu, u64 select_idx)
 {
 	return (select_idx == ARMV8_PMU_CYCLE_IDX &&
-		__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_LC);
+		vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_LC);
 }
 
 static struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
@@ -115,7 +115,7 @@ static bool kvm_pmu_idx_has_chain_evtype(struct kvm_vcpu *vcpu, u64 select_idx)
 		return false;
 
 	reg = PMEVTYPER0_EL0 + select_idx;
-	eventsel = __vcpu_sys_reg(vcpu, reg) & kvm_pmu_event_mask(vcpu->kvm);
+	eventsel = vcpu_sys_reg(vcpu, reg) & kvm_pmu_event_mask(vcpu->kvm);
 
 	return eventsel == ARMV8_PMUV3_PERFCTR_CHAIN;
 }
@@ -134,14 +134,14 @@ static u64 kvm_pmu_get_pair_counter_value(struct kvm_vcpu *vcpu,
 		pmc = kvm_pmu_get_canonical_pmc(pmc);
 		reg = PMEVCNTR0_EL0 + pmc->idx;
 
-		counter = __vcpu_sys_reg(vcpu, reg);
-		counter_high = __vcpu_sys_reg(vcpu, reg + 1);
+		counter = vcpu_sys_reg(vcpu, reg);
+		counter_high = vcpu_sys_reg(vcpu, reg + 1);
 
 		counter = lower_32_bits(counter) | (counter_high << 32);
 	} else {
 		reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
 		      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
-		counter = __vcpu_sys_reg(vcpu, reg);
+		counter = vcpu_sys_reg(vcpu, reg);
 	}
 
 	/*
@@ -189,7 +189,7 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
-	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
+	vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
 
 	/* Recreate the perf event to reflect the updated sample_period */
 	kvm_pmu_create_perf_event(vcpu, select_idx);
@@ -233,10 +233,10 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
 		val = lower_32_bits(counter);
 	}
 
-	__vcpu_sys_reg(vcpu, reg) = val;
+	vcpu_sys_reg(vcpu, reg) = val;
 
 	if (kvm_pmu_pmc_is_chained(pmc))
-		__vcpu_sys_reg(vcpu, reg + 1) = upper_32_bits(counter);
+		vcpu_sys_reg(vcpu, reg + 1) = upper_32_bits(counter);
 
 	kvm_pmu_release_perf_event(pmc);
 }
@@ -289,7 +289,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 {
-	u64 val = __vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
+	u64 val = vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
 
 	val &= ARMV8_PMU_PMCR_N_MASK;
 	if (val == 0)
@@ -311,7 +311,7 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	struct kvm_pmc *pmc;
 
-	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
+	if (!(vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
 		return;
 
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
@@ -369,10 +369,10 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
 {
 	u64 reg = 0;
 
-	if ((__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)) {
-		reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
-		reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
-		reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
+	if ((vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)) {
+		reg = vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+		reg &= vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+		reg &= vcpu_sys_reg(vcpu, PMINTENSET_EL1);
 		reg &= kvm_pmu_valid_counter_mask(vcpu);
 	}
 
@@ -493,7 +493,7 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
 	perf_event->attr.sample_period = period;
 	perf_event->hw.sample_period = period;
 
-	__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx);
+	vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx);
 
 	if (kvm_pmu_overflow_status(vcpu)) {
 		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
@@ -517,11 +517,11 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	int i;
 
-	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
+	if (!(vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
 		return;
 
 	/* Weed out disabled counters */
-	val &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+	val &= vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
 
 	for (i = 0; i < ARMV8_PMU_CYCLE_IDX; i++) {
 		u64 type, reg;
@@ -530,29 +530,29 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
 			continue;
 
 		/* PMSWINC only applies to ... SW_INC! */
-		type = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
+		type = vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
 		type &= kvm_pmu_event_mask(vcpu->kvm);
 		if (type != ARMV8_PMUV3_PERFCTR_SW_INCR)
 			continue;
 
 		/* increment this even SW_INC counter */
-		reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
+		reg = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
 		reg = lower_32_bits(reg);
-		__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
+		vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
 
 		if (reg) /* no overflow on the low part */
 			continue;
 
 		if (kvm_pmu_pmc_is_chained(&pmu->pmc[i])) {
 			/* increment the high counter */
-			reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) + 1;
+			reg = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) + 1;
 			reg = lower_32_bits(reg);
-			__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) = reg;
+			vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) = reg;
 			if (!reg) /* mark overflow on the high counter */
-				__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i + 1);
+				vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i + 1);
 		} else {
 			/* mark overflow on low counter */
-			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i);
+			vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i);
 		}
 	}
 }
@@ -569,7 +569,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 
 	if (val & ARMV8_PMU_PMCR_E) {
 		kvm_pmu_enable_counter_mask(vcpu,
-		       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
+		       vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
 	} else {
 		kvm_pmu_disable_counter_mask(vcpu, mask);
 	}
@@ -586,8 +586,8 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 
 static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
 {
-	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
-	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx));
+	return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
+	       (vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx));
 }
 
 /**
@@ -612,7 +612,7 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 
 	reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + pmc->idx;
-	data = __vcpu_sys_reg(vcpu, reg);
+	data = vcpu_sys_reg(vcpu, reg);
 
 	kvm_pmu_stop_counter(vcpu, pmc);
 	if (pmc->idx == ARMV8_PMU_CYCLE_IDX)
@@ -734,7 +734,7 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
 
-	__vcpu_sys_reg(vcpu, reg) = data & mask;
+	vcpu_sys_reg(vcpu, reg) = data & mask;
 
 	kvm_pmu_update_pmc_chained(vcpu, select_idx);
 	kvm_pmu_create_perf_event(vcpu, select_idx);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f6f126eb6ac1..a3c6419f1df6 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -76,7 +76,7 @@ u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
 	    __vcpu_read_sys_reg_from_cpu(reg, &val))
 		return val;
 
-	return __vcpu_sys_reg(vcpu, reg);
+	return vcpu_sys_reg(vcpu, reg);
 }
 
 void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
@@ -85,7 +85,7 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
 	    __vcpu_write_sys_reg_to_cpu(val, reg))
 		return;
 
-	 __vcpu_sys_reg(vcpu, reg) = val;
+	 vcpu_sys_reg(vcpu, reg) = val;
 }
 
 /* 3 bits per cache level, as per CLIDR, but non-existent caches always 0 */
@@ -620,12 +620,12 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	       | (ARMV8_PMU_PMCR_MASK & 0xdecafbad)) & (~ARMV8_PMU_PMCR_E);
 	if (!system_supports_32bit_el0())
 		val |= ARMV8_PMU_PMCR_LC;
-	__vcpu_sys_reg(vcpu, r->reg) = val;
+	vcpu_sys_reg(vcpu, r->reg) = val;
 }
 
 static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags)
 {
-	u64 reg = __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
 	bool enabled = (reg & flags) || vcpu_mode_priv(vcpu);
 
 	if (!enabled)
@@ -664,17 +664,17 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 
 	if (p->is_write) {
 		/* Only update writeable bits of PMCR */
-		val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+		val = vcpu_sys_reg(vcpu, PMCR_EL0);
 		val &= ~ARMV8_PMU_PMCR_MASK;
 		val |= p->regval & ARMV8_PMU_PMCR_MASK;
 		if (!system_supports_32bit_el0())
 			val |= ARMV8_PMU_PMCR_LC;
-		__vcpu_sys_reg(vcpu, PMCR_EL0) = val;
+		vcpu_sys_reg(vcpu, PMCR_EL0) = val;
 		kvm_pmu_handle_pmcr(vcpu, val);
 		kvm_vcpu_pmu_restore_guest(vcpu);
 	} else {
 		/* PMCR.P & PMCR.C are RAZ */
-		val = __vcpu_sys_reg(vcpu, PMCR_EL0)
+		val = vcpu_sys_reg(vcpu, PMCR_EL0)
 		      & ~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C);
 		p->regval = val;
 	}
@@ -689,10 +689,10 @@ static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		return false;
 
 	if (p->is_write)
-		__vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval;
+		vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval;
 	else
 		/* return PMSELR.SEL field */
-		p->regval = __vcpu_sys_reg(vcpu, PMSELR_EL0)
+		p->regval = vcpu_sys_reg(vcpu, PMSELR_EL0)
 			    & ARMV8_PMU_COUNTER_MASK;
 
 	return true;
@@ -723,7 +723,7 @@ static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
 {
 	u64 pmcr, val;
 
-	pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
+	pmcr = vcpu_sys_reg(vcpu, PMCR_EL0);
 	val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
 	if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
 		kvm_inject_undefined(vcpu);
@@ -745,7 +745,7 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
 			if (pmu_access_event_counter_el0_disabled(vcpu))
 				return false;
 
-			idx = __vcpu_sys_reg(vcpu, PMSELR_EL0)
+			idx = vcpu_sys_reg(vcpu, PMSELR_EL0)
 			      & ARMV8_PMU_COUNTER_MASK;
 		} else if (r->Op2 == 0) {
 			/* PMCCNTR_EL0 */
@@ -796,7 +796,7 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 
 	if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
 		/* PMXEVTYPER_EL0 */
-		idx = __vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_PMU_COUNTER_MASK;
+		idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_PMU_COUNTER_MASK;
 		reg = PMEVTYPER0_EL0 + idx;
 	} else if (r->CRn == 14 && (r->CRm & 12) == 12) {
 		idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
@@ -814,10 +814,10 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 
 	if (p->is_write) {
 		kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
-		__vcpu_sys_reg(vcpu, reg) = p->regval & ARMV8_PMU_EVTYPE_MASK;
+		vcpu_sys_reg(vcpu, reg) = p->regval & ARMV8_PMU_EVTYPE_MASK;
 		kvm_vcpu_pmu_restore_guest(vcpu);
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_MASK;
+		p->regval = vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_MASK;
 	}
 
 	return true;
@@ -836,16 +836,16 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		val = p->regval & mask;
 		if (r->Op2 & 0x1) {
 			/* accessing PMCNTENSET_EL0 */
-			__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val;
+			vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val;
 			kvm_pmu_enable_counter_mask(vcpu, val);
 			kvm_vcpu_pmu_restore_guest(vcpu);
 		} else {
 			/* accessing PMCNTENCLR_EL0 */
-			__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
+			vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
 			kvm_pmu_disable_counter_mask(vcpu, val);
 		}
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
+		p->regval = vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
 	}
 
 	return true;
@@ -864,12 +864,12 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 
 		if (r->Op2 & 0x1)
 			/* accessing PMINTENSET_EL1 */
-			__vcpu_sys_reg(vcpu, PMINTENSET_EL1) |= val;
+			vcpu_sys_reg(vcpu, PMINTENSET_EL1) |= val;
 		else
 			/* accessing PMINTENCLR_EL1 */
-			__vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= ~val;
+			vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= ~val;
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, PMINTENSET_EL1) & mask;
+		p->regval = vcpu_sys_reg(vcpu, PMINTENSET_EL1) & mask;
 	}
 
 	return true;
@@ -886,12 +886,12 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (p->is_write) {
 		if (r->CRm & 0x2)
 			/* accessing PMOVSSET_EL0 */
-			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= (p->regval & mask);
+			vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= (p->regval & mask);
 		else
 			/* accessing PMOVSCLR_EL0 */
-			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~(p->regval & mask);
+			vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~(p->regval & mask);
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, PMOVSSET_EL0) & mask;
+		p->regval = vcpu_sys_reg(vcpu, PMOVSSET_EL0) & mask;
 	}
 
 	return true;
@@ -922,10 +922,10 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			return false;
 		}
 
-		__vcpu_sys_reg(vcpu, PMUSERENR_EL0) =
+		vcpu_sys_reg(vcpu, PMUSERENR_EL0) =
 			       p->regval & ARMV8_PMU_USERENR_MASK;
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, PMUSERENR_EL0)
+		p->regval = vcpu_sys_reg(vcpu, PMUSERENR_EL0)
 			    & ARMV8_PMU_USERENR_MASK;
 	}
 
@@ -2635,7 +2635,7 @@ int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
 	if (r->get_user)
 		return (r->get_user)(vcpu, r, reg, uaddr);
 
-	return reg_to_user(uaddr, &__vcpu_sys_reg(vcpu, r->reg), reg->id);
+	return reg_to_user(uaddr, &vcpu_sys_reg(vcpu, r->reg), reg->id);
 }
 
 int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
@@ -2660,7 +2660,7 @@ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
 	if (r->set_user)
 		return (r->set_user)(vcpu, r, reg, uaddr);
 
-	return reg_from_user(&__vcpu_sys_reg(vcpu, r->reg), uaddr, reg->id);
+	return reg_from_user(&vcpu_sys_reg(vcpu, r->reg), uaddr, reg->id);
 }
 
 static unsigned int num_demux_regs(void)
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 9d0621417c2a..7840d657ab09 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -105,14 +105,14 @@ static inline void reset_unknown(struct kvm_vcpu *vcpu,
 {
 	BUG_ON(!r->reg);
 	BUG_ON(r->reg >= NR_SYS_REGS);
-	__vcpu_sys_reg(vcpu, r->reg) = 0x1de7ec7edbadc0deULL;
+	vcpu_sys_reg(vcpu, r->reg) = 0x1de7ec7edbadc0deULL;
 }
 
 static inline void reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	BUG_ON(!r->reg);
 	BUG_ON(r->reg >= NR_SYS_REGS);
-	__vcpu_sys_reg(vcpu, r->reg) = r->val;
+	vcpu_sys_reg(vcpu, r->reg) = r->val;
 }
 
 static inline bool sysreg_hidden(const struct kvm_vcpu *vcpu,
-- 
2.32.0

-			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i);
+			vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i);
 		}
 	}
 }
@@ -569,7 +569,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 
 	if (val & ARMV8_PMU_PMCR_E) {
 		kvm_pmu_enable_counter_mask(vcpu,
-		       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
+		       vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
 	} else {
 		kvm_pmu_disable_counter_mask(vcpu, mask);
 	}
@@ -586,8 +586,8 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 
 static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
 {
-	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
-	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx));
+	return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
+	       (vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx));
 }
 
 /**
@@ -612,7 +612,7 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 
 	reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + pmc->idx;
-	data = __vcpu_sys_reg(vcpu, reg);
+	data = vcpu_sys_reg(vcpu, reg);
 
 	kvm_pmu_stop_counter(vcpu, pmc);
 	if (pmc->idx == ARMV8_PMU_CYCLE_IDX)
@@ -734,7 +734,7 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
 
-	__vcpu_sys_reg(vcpu, reg) = data & mask;
+	vcpu_sys_reg(vcpu, reg) = data & mask;
 
 	kvm_pmu_update_pmc_chained(vcpu, select_idx);
 	kvm_pmu_create_perf_event(vcpu, select_idx);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f6f126eb6ac1..a3c6419f1df6 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -76,7 +76,7 @@ u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
 	    __vcpu_read_sys_reg_from_cpu(reg, &val))
 		return val;
 
-	return __vcpu_sys_reg(vcpu, reg);
+	return vcpu_sys_reg(vcpu, reg);
 }
 
 void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
@@ -85,7 +85,7 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
 	    __vcpu_write_sys_reg_to_cpu(val, reg))
 		return;
 
-	 __vcpu_sys_reg(vcpu, reg) = val;
+	 vcpu_sys_reg(vcpu, reg) = val;
 }
 
 /* 3 bits per cache level, as per CLIDR, but non-existent caches always 0 */
@@ -620,12 +620,12 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	       | (ARMV8_PMU_PMCR_MASK & 0xdecafbad)) & (~ARMV8_PMU_PMCR_E);
 	if (!system_supports_32bit_el0())
 		val |= ARMV8_PMU_PMCR_LC;
-	__vcpu_sys_reg(vcpu, r->reg) = val;
+	vcpu_sys_reg(vcpu, r->reg) = val;
 }
 
 static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags)
 {
-	u64 reg = __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
 	bool enabled = (reg & flags) || vcpu_mode_priv(vcpu);
 
 	if (!enabled)
@@ -664,17 +664,17 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 
 	if (p->is_write) {
 		/* Only update writeable bits of PMCR */
-		val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+		val = vcpu_sys_reg(vcpu, PMCR_EL0);
 		val &= ~ARMV8_PMU_PMCR_MASK;
 		val |= p->regval & ARMV8_PMU_PMCR_MASK;
 		if (!system_supports_32bit_el0())
 			val |= ARMV8_PMU_PMCR_LC;
-		__vcpu_sys_reg(vcpu, PMCR_EL0) = val;
+		vcpu_sys_reg(vcpu, PMCR_EL0) = val;
 		kvm_pmu_handle_pmcr(vcpu, val);
 		kvm_vcpu_pmu_restore_guest(vcpu);
 	} else {
 		/* PMCR.P & PMCR.C are RAZ */
-		val = __vcpu_sys_reg(vcpu, PMCR_EL0)
+		val = vcpu_sys_reg(vcpu, PMCR_EL0)
 		      & ~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C);
 		p->regval = val;
 	}
@@ -689,10 +689,10 @@ static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		return false;
 
 	if (p->is_write)
-		__vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval;
+		vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval;
 	else
 		/* return PMSELR.SEL field */
-		p->regval = __vcpu_sys_reg(vcpu, PMSELR_EL0)
+		p->regval = vcpu_sys_reg(vcpu, PMSELR_EL0)
 			    & ARMV8_PMU_COUNTER_MASK;
 
 	return true;
@@ -723,7 +723,7 @@ static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
 {
 	u64 pmcr, val;
 
-	pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
+	pmcr = vcpu_sys_reg(vcpu, PMCR_EL0);
 	val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
 	if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
 		kvm_inject_undefined(vcpu);
@@ -745,7 +745,7 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
 			if (pmu_access_event_counter_el0_disabled(vcpu))
 				return false;
 
-			idx = __vcpu_sys_reg(vcpu, PMSELR_EL0)
+			idx = vcpu_sys_reg(vcpu, PMSELR_EL0)
 			      & ARMV8_PMU_COUNTER_MASK;
 		} else if (r->Op2 == 0) {
 			/* PMCCNTR_EL0 */
@@ -796,7 +796,7 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 
 	if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
 		/* PMXEVTYPER_EL0 */
-		idx = __vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_PMU_COUNTER_MASK;
+		idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_PMU_COUNTER_MASK;
 		reg = PMEVTYPER0_EL0 + idx;
 	} else if (r->CRn == 14 && (r->CRm & 12) == 12) {
 		idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
@@ -814,10 +814,10 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 
 	if (p->is_write) {
 		kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
-		__vcpu_sys_reg(vcpu, reg) = p->regval & ARMV8_PMU_EVTYPE_MASK;
+		vcpu_sys_reg(vcpu, reg) = p->regval & ARMV8_PMU_EVTYPE_MASK;
 		kvm_vcpu_pmu_restore_guest(vcpu);
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_MASK;
+		p->regval = vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_MASK;
 	}
 
 	return true;
@@ -836,16 +836,16 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		val = p->regval & mask;
 		if (r->Op2 & 0x1) {
 			/* accessing PMCNTENSET_EL0 */
-			__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val;
+			vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val;
 			kvm_pmu_enable_counter_mask(vcpu, val);
 			kvm_vcpu_pmu_restore_guest(vcpu);
 		} else {
 			/* accessing PMCNTENCLR_EL0 */
-			__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
+			vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
 			kvm_pmu_disable_counter_mask(vcpu, val);
 		}
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
+		p->regval = vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
 	}
 
 	return true;
@@ -864,12 +864,12 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 
 		if (r->Op2 & 0x1)
 			/* accessing PMINTENSET_EL1 */
-			__vcpu_sys_reg(vcpu, PMINTENSET_EL1) |= val;
+			vcpu_sys_reg(vcpu, PMINTENSET_EL1) |= val;
 		else
 			/* accessing PMINTENCLR_EL1 */
-			__vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= ~val;
+			vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= ~val;
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, PMINTENSET_EL1) & mask;
+		p->regval = vcpu_sys_reg(vcpu, PMINTENSET_EL1) & mask;
 	}
 
 	return true;
@@ -886,12 +886,12 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (p->is_write) {
 		if (r->CRm & 0x2)
 			/* accessing PMOVSSET_EL0 */
-			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= (p->regval & mask);
+			vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= (p->regval & mask);
 		else
 			/* accessing PMOVSCLR_EL0 */
-			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~(p->regval & mask);
+			vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~(p->regval & mask);
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, PMOVSSET_EL0) & mask;
+		p->regval = vcpu_sys_reg(vcpu, PMOVSSET_EL0) & mask;
 	}
 
 	return true;
@@ -922,10 +922,10 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			return false;
 		}
 
-		__vcpu_sys_reg(vcpu, PMUSERENR_EL0) =
+		vcpu_sys_reg(vcpu, PMUSERENR_EL0) =
 			       p->regval & ARMV8_PMU_USERENR_MASK;
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, PMUSERENR_EL0)
+		p->regval = vcpu_sys_reg(vcpu, PMUSERENR_EL0)
 			    & ARMV8_PMU_USERENR_MASK;
 	}
 
@@ -2635,7 +2635,7 @@ int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
 	if (r->get_user)
 		return (r->get_user)(vcpu, r, reg, uaddr);
 
-	return reg_to_user(uaddr, &__vcpu_sys_reg(vcpu, r->reg), reg->id);
+	return reg_to_user(uaddr, &vcpu_sys_reg(vcpu, r->reg), reg->id);
 }
 
 int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
@@ -2660,7 +2660,7 @@ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
 	if (r->set_user)
 		return (r->set_user)(vcpu, r, reg, uaddr);
 
-	return reg_from_user(&__vcpu_sys_reg(vcpu, r->reg), uaddr, reg->id);
+	return reg_from_user(&vcpu_sys_reg(vcpu, r->reg), uaddr, reg->id);
 }
 
 static unsigned int num_demux_regs(void)
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 9d0621417c2a..7840d657ab09 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -105,14 +105,14 @@ static inline void reset_unknown(struct kvm_vcpu *vcpu,
 {
 	BUG_ON(!r->reg);
 	BUG_ON(r->reg >= NR_SYS_REGS);
-	__vcpu_sys_reg(vcpu, r->reg) = 0x1de7ec7edbadc0deULL;
+	vcpu_sys_reg(vcpu, r->reg) = 0x1de7ec7edbadc0deULL;
 }
 
 static inline void reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	BUG_ON(!r->reg);
 	BUG_ON(r->reg >= NR_SYS_REGS);
-	__vcpu_sys_reg(vcpu, r->reg) = r->val;
+	vcpu_sys_reg(vcpu, r->reg) = r->val;
 }
 
 static inline bool sysreg_hidden(const struct kvm_vcpu *vcpu,
-- 
2.32.0



* [PATCH 4/5] KVM: arm64: Add __vcpu_sys_reg()
  2021-07-14  9:55 ` Alexandru Elisei
@ 2021-07-14  9:56   ` Alexandru Elisei
  -1 siblings, 0 replies; 20+ messages in thread
From: Alexandru Elisei @ 2021-07-14  9:56 UTC (permalink / raw)
  To: maz, linux-arm-kernel, kvmarm

Similar to how __ctxt_sys_reg() returns a pointer to a system register,
add the __vcpu_sys_reg() macro, which returns the same pointer but takes a
struct kvm_vcpu as a parameter instead of a kvm_cpu_context.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/include/asm/kvm_host.h | 2 ++
 arch/arm64/kvm/guest.c            | 6 +++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 76efede8beae..267b5e6f1c28 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -469,6 +469,8 @@ struct kvm_vcpu_arch {
 
 #define ctxt_sys_reg(c,r)	(*__ctxt_sys_reg(c,r))
 
+#define __vcpu_sys_reg(v,r)	(__ctxt_sys_reg(&(v)->arch.ctxt, (r)))
+
 #define vcpu_sys_reg(v,r)	(ctxt_sys_reg(&(v)->arch.ctxt, (r)))
 
 u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg);
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 1dfb83578277..a8c5a76a8e7a 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -148,13 +148,13 @@ static void *core_reg_addr(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 		return &vcpu->arch.ctxt.regs.pstate;
 
 	case KVM_REG_ARM_CORE_REG(sp_el1):
-		return __ctxt_sys_reg(&vcpu->arch.ctxt, SP_EL1);
+		return __vcpu_sys_reg(vcpu, SP_EL1);
 
 	case KVM_REG_ARM_CORE_REG(elr_el1):
-		return __ctxt_sys_reg(&vcpu->arch.ctxt, ELR_EL1);
+		return __vcpu_sys_reg(vcpu, ELR_EL1);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_EL1]):
-		return __ctxt_sys_reg(&vcpu->arch.ctxt, SPSR_EL1);
+		return __vcpu_sys_reg(vcpu, SPSR_EL1);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_ABT]):
 		return &vcpu->arch.ctxt.spsr_abt;
-- 
2.32.0
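
To make the pointer-versus-value distinction concrete, the accessor family
can be modelled in standalone C. This is a simplified sketch, not the
kernel's code: the struct layout, the register count and the TCR_EL1 slot
are illustrative stand-ins.

    /*
     * Simplified model of the accessor macros; the types and the
     * TCR_EL1 index are stand-ins, not the arch/arm64 definitions.
     */
    #include <stdio.h>

    struct kvm_cpu_context { unsigned long long sys_regs[8]; };
    struct kvm_vcpu_arch   { struct kvm_cpu_context ctxt; };
    struct kvm_vcpu        { struct kvm_vcpu_arch arch; };

    #define TCR_EL1 3   /* arbitrary slot for the example */

    /* Pointer-returning primitive, as in __ctxt_sys_reg(). */
    #define __ctxt_sys_reg(c, r)    (&(c)->sys_regs[(r)])
    /* Dereferencing wrapper: usable as an lvalue or an rvalue. */
    #define ctxt_sys_reg(c, r)      (*__ctxt_sys_reg(c, r))

    /* The vcpu-taking variants, per the kvm_host.h hunk above. */
    #define __vcpu_sys_reg(v, r)    (__ctxt_sys_reg(&(v)->arch.ctxt, (r)))
    #define vcpu_sys_reg(v, r)      (ctxt_sys_reg(&(v)->arch.ctxt, (r)))

    int main(void)
    {
            struct kvm_vcpu vcpu = { 0 };
            unsigned long long *p;

            vcpu_sys_reg(&vcpu, TCR_EL1) = 0x1234; /* value form, as an lvalue */
            p = __vcpu_sys_reg(&vcpu, TCR_EL1);    /* pointer form */
            printf("%llx %llx\n", vcpu_sys_reg(&vcpu, TCR_EL1), *p);
            return 0;
    }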



* [PATCH 5/5] KVM: arm64: nVHE: Remove unneeded isb() when modifying PMSCR_EL1
  2021-07-14  9:55 ` Alexandru Elisei
@ 2021-07-14  9:56   ` Alexandru Elisei
  -1 siblings, 0 replies; 20+ messages in thread
From: Alexandru Elisei @ 2021-07-14  9:56 UTC (permalink / raw)
  To: maz, linux-arm-kernel, kvmarm; +Cc: Will Deacon

According to ARM DDI 0487G.a, page D9-2930, profiling is disabled when
the PE is executing at a higher exception level than the profiling
buffer owning exception level. This is also confirmed by the pseudocode
for the StatisticalProfilingEnabled() function.

During the world switch and before activating guest traps, KVM executes
at EL2 with the buffer owning exception level being EL1 (MDCR_EL2.E2PB =
0b11). As a result, profiling is already disabled when draining the
buffer, making the isb() after the write to PMSCR_EL1 unnecessary.

CC: Will Deacon <will@kernel.org>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/kvm/hyp/nvhe/debug-sr.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 7d3f25868cae..fdf0e0ba17e9 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -33,7 +33,6 @@ static void __debug_save_spe(u64 *pmscr_el1)
 	/* Yes; save the control register and disable data generation */
 	*pmscr_el1 = read_sysreg_s(SYS_PMSCR_EL1);
 	write_sysreg_s(0, SYS_PMSCR_EL1);
-	isb();
 
 	/* Now drain all buffered data to memory */
 	psb_csync();
-- 
2.32.0
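
The rule being invoked reduces to a one-line comparison. What follows is a
paraphrase of the behaviour cited from the Arm ARM, not the literal
StatisticalProfilingEnabled() pseudocode; modelling exception levels as
plain integers is an assumption made purely for illustration.

    /*
     * Paraphrased rule: profiling generates no data while the PE runs
     * at a higher exception level than the exception level that owns
     * the profiling buffer.
     */
    static int profiling_generates_data(int current_el, int buffer_owning_el)
    {
            return current_el <= buffer_owning_el;
    }

    /*
     * With MDCR_EL2.E2PB = 0b11 the buffer is owned by EL1, so while
     * the nVHE world switch runs at EL2 we have
     * profiling_generates_data(2, 1) == 0: profiling is already off,
     * which is the patch's argument for dropping the isb() after the
     * PMSCR_EL1 write (see Will Deacon's reply below for an ordering
     * caveat).
     */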



* Re: [PATCH 3/5] KVM: arm64: Rename __vcpu_sys_reg -> vcpu_sys_reg
  2021-07-14  9:55   ` Alexandru Elisei
@ 2021-07-14 10:23     ` Marc Zyngier
  -1 siblings, 0 replies; 20+ messages in thread
From: Marc Zyngier @ 2021-07-14 10:23 UTC (permalink / raw)
  To: Alexandru Elisei; +Cc: kvmarm, linux-arm-kernel

Hi Alex,

On Wed, 14 Jul 2021 10:55:59 +0100,
Alexandru Elisei <alexandru.elisei@arm.com> wrote:
> 
> There are two macros to access a specific system register from a known
> kvm_cpu_context: __ctxt_sys_reg(), which returns a pointer to the register,
> and ctxt_sys_reg(), which dereferences the pointer returned by
> __ctxt_sys_reg().
> 
> __vcpu_sys_reg() serves a similar purpose, with the difference being that
> it takes a struct kvm_vcpu as a parameter. __vcpu_sys_reg(), although it
> looks like __ctxt_sys_reg(), dereferences the pointer to the register,
> like ctxt_sys_reg() does; indeed, it is defined as an abstraction over
> ctxt_sys_reg().
> 
> Let's remove this naming inconsistency by renaming __vcpu_sys_reg() to
> vcpu_sys_reg(), to make it clear it behaves like ctxt_sys_reg(), and not
> like __ctxt_sys_reg().

I can't say I'm keen on this change.

The leading underscores really are there to signal that *this is
dangerous*, as you really need to know which context you are in.
Dropping the leading '__' may give the false impression that this is
safe, and not actually a primitive that requires careful thinking
before use.

ctxt_sys_reg() is, on the other hand, clearly something that acts
solely on memory because it takes a context structure, and not a
vcpu. At least that's what the 'ctxt' prefix is supposed to convey
(not very successfully, apparently).

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
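
As a hedged illustration of the distinction Marc draws, consider the
vcpu_read_sys_reg() fallback visible in the sys_regs.c hunk earlier in the
thread. TCR_EL1 is just an example register, and the fragment below is a
sketch that only builds inside the kernel tree:

    static u64 demo_sysreg_access(struct kvm_vcpu *vcpu)
    {
            /*
             * Safe from any context: on VHE some guest registers stay
             * loaded on the CPU, and vcpu_read_sys_reg() tries that
             * copy first before falling back to the in-memory one.
             */
            u64 safe = vcpu_read_sys_reg(vcpu, TCR_EL1);

            /*
             * Raw access to the in-memory copy only; correct solely
             * when the caller knows the register is not live on the
             * CPU. That is the danger the leading underscores flag.
             */
            u64 raw = __vcpu_sys_reg(vcpu, TCR_EL1);

            return safe ^ raw;      /* arbitrary combination, example only */
    }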


* Re: [PATCH 3/5] KVM: arm64: Rename __vcpu_sys_reg -> vcpu_sys_reg
  2021-07-14 10:23     ` Marc Zyngier
@ 2021-07-14 10:40       ` Alexandru Elisei
  -1 siblings, 0 replies; 20+ messages in thread
From: Alexandru Elisei @ 2021-07-14 10:40 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, linux-arm-kernel

Hi Marc,

On 7/14/21 11:23 AM, Marc Zyngier wrote:
> Hi Alex,
>
> On Wed, 14 Jul 2021 10:55:59 +0100,
> Alexandru Elisei <alexandru.elisei@arm.com> wrote:
>> There are two macros to access a specific system register from a known
>> kvm_cpu_context: __ctxt_sys_reg(), which returns a pointer to the register,
>> and ctxt_sys_reg(), which dereferences the pointer returned by
>> __ctxt_sys_reg().
>>
>> __vcpu_sys_reg() serves a similar purpose, with the difference being that
>> it takes a struct kvm_vcpu as a parameter. __vcpu_sys_reg(), although it
>> looks like __ctxt_sys_reg(), dereferences the pointer to the register,
>> like ctxt_sys_reg() does; indeed, it is defined as an abstraction over
>> ctxt_sys_reg().
>>
>> Let's remove this naming inconsistency by renaming __vcpu_sys_reg() to
>> vcpu_sys_reg(), to make it clear it behaves like ctxt_sys_reg(), and not
>> like __ctxt_sys_reg().
> I can't say I'm keen on this change.
>
> The leading underscores really are there to outline that *this is
> dangerous*, as you really need to know which context you are in.
> Dropping the leading '__' may give the false impression that this is
> safe, and not actually a primitive that requires careful thinking
> before use.
>
> ctxt_sys_reg() is, on the other hand, clearly something that acts
> solely on memory because it takes a context structure, and not a
> vcpu. At least that's what the 'ctxt' prefix is supposed to convey
> (not very successfully, apparently).

Oh, so that's the real reason for the leading underscores; I assumed that the
comment was warning enough. Since the three macros were right next to each
other, and __vcpu_sys_reg() is a wrapper over ctxt_sys_reg(), I thought they
were there to differentiate between returning a reference and returning a
value. I'll drop this patch and the one after it.

Thanks,

Alex



* Re: [PATCH 5/5] KVM: arm64: nVHE: Remove unneeded isb() when modifying PMSCR_EL1
  2021-07-14  9:56   ` Alexandru Elisei
@ 2021-07-14 18:20     ` Will Deacon
  -1 siblings, 0 replies; 20+ messages in thread
From: Will Deacon @ 2021-07-14 18:20 UTC (permalink / raw)
  To: Alexandru Elisei; +Cc: maz, kvmarm, linux-arm-kernel

On Wed, Jul 14, 2021 at 10:56:01AM +0100, Alexandru Elisei wrote:
> According to ARM DDI 0487G.a, page D9-2930, profiling is disabled when
> the PE is executing at a higher exception level than the profiling
> buffer owning exception level. This is also confirmed by the pseudocode
> for the StatisticalProfilingEnabled() function.
> 
> During the world switch and before activating guest traps, KVM executes
> at EL2 with the buffer owning exception level being EL1 (MDCR_EL2.E2PB =
> 0b11). As a result, profiling is already disabled when draining the
> buffer, making the isb() after the write to PMSCR_EL1 unnecessary.
> 
> CC: Will Deacon <will@kernel.org>
> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
> ---
>  arch/arm64/kvm/hyp/nvhe/debug-sr.c | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
> index 7d3f25868cae..fdf0e0ba17e9 100644
> --- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
> +++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
> @@ -33,7 +33,6 @@ static void __debug_save_spe(u64 *pmscr_el1)
>  	/* Yes; save the control register and disable data generation */
>  	*pmscr_el1 = read_sysreg_s(SYS_PMSCR_EL1);
>  	write_sysreg_s(0, SYS_PMSCR_EL1);
> -	isb();

Hmm, but we still need an ISB somewhere between clearing pmscr_el1 and
changing mdcr_el2.e2pb, right? Where does that occur?

Will


* Re: [PATCH 5/5] KVM: arm64: nVHE: Remove unneeded isb() when modifying PMSCR_EL1
  2021-07-14 18:20     ` Will Deacon
@ 2021-07-15  9:31       ` Alexandru Elisei
  -1 siblings, 0 replies; 20+ messages in thread
From: Alexandru Elisei @ 2021-07-15  9:31 UTC (permalink / raw)
  To: Will Deacon; +Cc: maz, kvmarm, linux-arm-kernel

Hi Will,

On 7/14/21 7:20 PM, Will Deacon wrote:
> On Wed, Jul 14, 2021 at 10:56:01AM +0100, Alexandru Elisei wrote:
>> According to ARM DDI 0487G.a, page D9-2930, profiling is disabled when
>> the PE is executing at a higher exception level than the profiling
>> buffer owning exception level. This is also confirmed by the pseudocode
>> for the StatisticalProfilingEnabled() function.
>>
>> During the world switch and before activating guest traps, KVM executes
>> at EL2 with the buffer owning exception level being EL1 (MDCR_EL2.E2PB =
>> 0b11). As a result, profiling is already disabled when draining the
>> buffer, making the isb() after the write to PMSCR_EL1 unnecessary.
>>
>> CC: Will Deacon <will@kernel.org>
>> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
>> ---
>>  arch/arm64/kvm/hyp/nvhe/debug-sr.c | 1 -
>>  1 file changed, 1 deletion(-)
>>
>> diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
>> index 7d3f25868cae..fdf0e0ba17e9 100644
>> --- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
>> +++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
>> @@ -33,7 +33,6 @@ static void __debug_save_spe(u64 *pmscr_el1)
>>  	/* Yes; save the control register and disable data generation */
>>  	*pmscr_el1 = read_sysreg_s(SYS_PMSCR_EL1);
>>  	write_sysreg_s(0, SYS_PMSCR_EL1);
>> -	isb();
> Hmm, but we still need an ISB somewhere between clearing pmscr_el1 and
> mdcr_el2.e2pb, right? Where does that occur?

Yes, we do need an ISB to make sure we don't start profiling under the EL2&0
translation regime with a buffer pointer that was programmed by the host at
EL1 and is most likely not even mapped at EL2.

When I wrote the patch, I reasoned that the ISB in
__sysreg_restore_state_nvhe->__sysreg_restore_el1_state and the ISB from
__load_stage2 would make sure that PMSCR_EL1 is cleared before the change to
the buffer owning regime.

As I was double-checking that just now, I realized that *both* ISBs are
executed only if the system has ARM64_WORKAROUND_SPECULATIVE_AT. No ISB gets
executed when the workaround is not needed. We could make the ISB here depend
on the system not having the workaround, but it looks to me like there's
little to be gained from that (just one fewer ISB when the workaround is
applied), at the expense of making the code even more difficult to reason
about.

My preference would be to drop this patch.

Thanks,

Alex
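
Condensed, the ordering constraint under discussion looks as follows. This
is a sketch of the argument with the surrounding hyp code elided, not the
verbatim implementation:

    /* __debug_save_spe(), abridged: */
    *pmscr_el1 = read_sysreg_s(SYS_PMSCR_EL1);
    write_sysreg_s(0, SYS_PMSCR_EL1);   /* stop data generation at EL1 */

    /*
     * A context synchronization event must happen here -- either the
     * isb() the patch removed, or the ISBs in
     * __sysreg_restore_el1_state()/__load_stage2(), which per the
     * analysis above only run with ARM64_WORKAROUND_SPECULATIVE_AT:
     */
    isb();

    psb_csync();        /* now drain all buffered data to memory */

    /*
     * Later in the world switch MDCR_EL2.E2PB is changed so that EL2
     * owns the buffer; without the synchronization above, SPE could
     * start generating data through the stale, EL1-programmed buffer
     * pointer. Hence the decision to keep the isb() and drop the
     * patch.
     */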


