kvm.vger.kernel.org archive mirror
* [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags
@ 2022-06-10  9:28 Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 01/19] KVM: arm64: Always start with clearing SVE flag on load Marc Zyngier
                   ` (19 more replies)
  0 siblings, 20 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

This is an iteration on [1], which aims at making the vcpu flags suck a
bit less.

* From v1 [1]:
  - Rebased onto v5.19-rc1
  - Took the first two patches into kvmarm-fixes, included here for
    completeness
  - Additional patch to move system_supports_fpsimd() outside of
    the run path (Reiji)
  - Expanded on comments (Reiji)
  - New kvm_pend_exception() accessor (Fuad)
  - Various bracketing fixups (Reiji)
  - Some renaming (Reiji, Broonie)
  - Collected RBs, with thanks

[1] https://lore.kernel.org/r/20220528113829.1043361-1-maz@kernel.org

Marc Zyngier (19):
  KVM: arm64: Always start with clearing SVE flag on load
  KVM: arm64: Always start with clearing SME flag on load
  KVM: arm64: Drop FP_FOREIGN_STATE from the hypervisor code
  KVM: arm64: Move FP state ownership from flag to a tristate
  KVM: arm64: Add helpers to manipulate vcpu flags among a set
  KVM: arm64: Add three sets of flags to the vcpu state
  KVM: arm64: Move vcpu configuration flags into their own set
  KVM: arm64: Move vcpu PC/Exception flags to the input flag set
  KVM: arm64: Move vcpu debug/SPE/TRBE flags to the input flag set
  KVM: arm64: Move vcpu SVE/SME flags to the state flag set
  KVM: arm64: Move vcpu ON_UNSUPPORTED_CPU flag to the state flag set
  KVM: arm64: Move vcpu WFIT flag to the state flag set
  KVM: arm64: Kill unused vcpu flags field
  KVM: arm64: Convert vcpu sysregs_loaded_on_cpu to a state flag
  KVM: arm64: Warn when PENDING_EXCEPTION and INCREMENT_PC are set
    together
  KVM: arm64: Add build-time sanity checks for flags
  KVM: arm64: Reduce the size of the vcpu flag members
  KVM: arm64: Document why pause cannot be turned into a flag
  KVM: arm64: Move the handling of !FP outside of the fast path

 arch/arm64/include/asm/kvm_emulate.h       |  11 +-
 arch/arm64/include/asm/kvm_host.h          | 203 +++++++++++++++------
 arch/arm64/kvm/arch_timer.c                |   2 +-
 arch/arm64/kvm/arm.c                       |  12 +-
 arch/arm64/kvm/debug.c                     |  25 ++-
 arch/arm64/kvm/fpsimd.c                    |  37 ++--
 arch/arm64/kvm/handle_exit.c               |   2 +-
 arch/arm64/kvm/hyp/exception.c             |  23 ++-
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h  |   6 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h    |  24 +--
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h |   4 +-
 arch/arm64/kvm/hyp/nvhe/debug-sr.c         |   8 +-
 arch/arm64/kvm/hyp/nvhe/switch.c           |   6 +-
 arch/arm64/kvm/hyp/nvhe/sys_regs.c         |   4 +-
 arch/arm64/kvm/hyp/vhe/switch.c            |   4 +-
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c         |   4 +-
 arch/arm64/kvm/inject_fault.c              |  17 +-
 arch/arm64/kvm/reset.c                     |   6 +-
 arch/arm64/kvm/sys_regs.c                  |  12 +-
 19 files changed, 248 insertions(+), 162 deletions(-)

-- 
2.34.1



* [PATCH v2 01/19] KVM: arm64: Always start with clearing SVE flag on load
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 02/19] KVM: arm64: Always start with clearing SME " Marc Zyngier
                   ` (18 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team, stable

On each vcpu load, we set the KVM_ARM64_HOST_SVE_ENABLED
flag if SVE is enabled for EL0 on the host. This is used to restore
the correct state on vcpu put.

However, it appears that nothing ever clears this flag. Once
set, it will stick until the vcpu is destroyed, which has the
potential to spuriously enable SVE for userspace.

We probably never saw the issue because no VMM uses SVE, but
that's still pretty bad. Unconditionally clearing the flag
on vcpu load addresses the issue.

Fixes: 8383741ab2e7 ("KVM: arm64: Get rid of host SVE tracking/saving")
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220528113829.1043361-2-maz@kernel.org
---
 arch/arm64/kvm/fpsimd.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 3d251a4d2cf7..8267ff4642d3 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -80,6 +80,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 	vcpu->arch.flags &= ~KVM_ARM64_FP_ENABLED;
 	vcpu->arch.flags |= KVM_ARM64_FP_HOST;
 
+	vcpu->arch.flags &= ~KVM_ARM64_HOST_SVE_ENABLED;
 	if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
 		vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED;
 
-- 
2.34.1



* [PATCH v2 02/19] KVM: arm64: Always start with clearing SME flag on load
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 01/19] KVM: arm64: Always start with clearing SVE flag on load Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 03/19] KVM: arm64: Drop FP_FOREIGN_STATE from the hypervisor code Marc Zyngier
                   ` (17 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

On each vcpu load, we set the KVM_ARM64_HOST_SME_ENABLED
flag if SME is enabled for EL0 on the host. This is used to
restore the correct state on vcpu put.

However, it appears that nothing ever clears this flag. Once
set, it will stick until the vcpu is destroyed, which has the
potential to spuriously enable SME for userspace. As it turns
out, this is due to the SME code being more or less copied from
SVE, and inheriting the same shortcomings.

We never saw the issue because nothing uses SME, and the amount
of testing is probably still pretty low.

Fixes: 861262ab8627 ("KVM: arm64: Handle SME host state when running guests")
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220528113829.1043361-3-maz@kernel.org
---
 arch/arm64/kvm/fpsimd.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 8267ff4642d3..6012b08ecb14 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -94,6 +94,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 	 * operations. Do this for ZA as well for now for simplicity.
 	 */
 	if (system_supports_sme()) {
+		vcpu->arch.flags &= ~KVM_ARM64_HOST_SME_ENABLED;
 		if (read_sysreg(cpacr_el1) & CPACR_EL1_SMEN_EL0EN)
 			vcpu->arch.flags |= KVM_ARM64_HOST_SME_ENABLED;
 
-- 
2.34.1



* [PATCH v2 03/19] KVM: arm64: Drop FP_FOREIGN_STATE from the hypervisor code
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 01/19] KVM: arm64: Always start with clearing SVE flag on load Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 02/19] KVM: arm64: Always start with clearing SME " Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 04/19] KVM: arm64: Move FP state ownership from flag to a tristate Marc Zyngier
                   ` (16 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

The vcpu KVM_ARM64_FP_FOREIGN_FPSTATE flag tracks the thread's own
TIF_FOREIGN_FPSTATE so that, just before running the vcpu, we can
evaluate whether the FP regs contain something that is owned by the
vcpu or not, and update the rest of the FP flags accordingly.

We do this in the hypervisor code in order to make sure we're
in a context where we are not interruptible. But we already
have a hook in the run loop to generate this flag. We may as
well update the FP flags directly and save the pointless flag
tracking.

Whilst we're at it, rename update_fp_enabled() to guest_owns_fp_regs()
to indicate what the remainder of this helper actually does.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h       |  1 -
 arch/arm64/kvm/fpsimd.c                 | 17 ++++++++++-------
 arch/arm64/kvm/hyp/include/hyp/switch.h | 16 ++--------------
 arch/arm64/kvm/hyp/nvhe/switch.c        |  2 +-
 arch/arm64/kvm/hyp/vhe/switch.c         |  2 +-
 5 files changed, 14 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 47a1e25e25bb..63103cc1bdc4 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -462,7 +462,6 @@ struct kvm_vcpu_arch {
 
 #define KVM_ARM64_DEBUG_STATE_SAVE_SPE	(1 << 12) /* Save SPE context if active  */
 #define KVM_ARM64_DEBUG_STATE_SAVE_TRBE	(1 << 13) /* Save TRBE context if active  */
-#define KVM_ARM64_FP_FOREIGN_FPSTATE	(1 << 14)
 #define KVM_ARM64_ON_UNSUPPORTED_CPU	(1 << 15) /* Physical CPU not in supported_cpus */
 #define KVM_ARM64_HOST_SME_ENABLED	(1 << 16) /* SME enabled for EL0 */
 #define KVM_ARM64_WFIT			(1 << 17) /* WFIT instruction trapped */
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 6012b08ecb14..edbc0183c89b 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -107,16 +107,19 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 }
 
 /*
- * Called just before entering the guest once we are no longer
- * preemptable. Syncs the host's TIF_FOREIGN_FPSTATE with the KVM
- * mirror of the flag used by the hypervisor.
+ * Called just before entering the guest once we are no longer preemptable
+ * and interrupts are disabled. If we have managed to run anything using
+ * FP while we were preemptible (such as off the back of an interrupt),
+ * then neither the host nor the guest own the FP hardware (and it was the
+ * responsibility of the code that used FP to save the existing state).
+ *
+ * Note that not supporting FP is basically the same thing as far as the
+ * hypervisor is concerned (nothing to save).
  */
 void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu)
 {
-	if (test_thread_flag(TIF_FOREIGN_FPSTATE))
-		vcpu->arch.flags |= KVM_ARM64_FP_FOREIGN_FPSTATE;
-	else
-		vcpu->arch.flags &= ~KVM_ARM64_FP_FOREIGN_FPSTATE;
+	if (!system_supports_fpsimd() || test_thread_flag(TIF_FOREIGN_FPSTATE))
+		vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST);
 }
 
 /*
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 37d9f211c200..e54320384943 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -37,21 +37,9 @@ struct kvm_exception_table_entry {
 extern struct kvm_exception_table_entry __start___kvm_ex_table;
 extern struct kvm_exception_table_entry __stop___kvm_ex_table;
 
-/* Check whether the FP regs were dirtied while in the host-side run loop: */
-static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
+/* Check whether the FP regs are owned by the guest */
+static inline bool guest_owns_fp_regs(struct kvm_vcpu *vcpu)
 {
-	/*
-	 * When the system doesn't support FP/SIMD, we cannot rely on
-	 * the _TIF_FOREIGN_FPSTATE flag. However, we always inject an
-	 * abort on the very first access to FP and thus we should never
-	 * see KVM_ARM64_FP_ENABLED. For added safety, make sure we always
-	 * trap the accesses.
-	 */
-	if (!system_supports_fpsimd() ||
-	    vcpu->arch.flags & KVM_ARM64_FP_FOREIGN_FPSTATE)
-		vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
-				      KVM_ARM64_FP_HOST);
-
 	return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED);
 }
 
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6db801db8f27..a6b9f1186577 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -43,7 +43,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 
 	val = vcpu->arch.cptr_el2;
 	val |= CPTR_EL2_TTA | CPTR_EL2_TAM;
-	if (!update_fp_enabled(vcpu)) {
+	if (!guest_owns_fp_regs(vcpu)) {
 		val |= CPTR_EL2_TFP | CPTR_EL2_TZ;
 		__activate_traps_fpsimd32(vcpu);
 	}
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 969f20daf97a..46f365254e9f 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -55,7 +55,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 
 	val |= CPTR_EL2_TAM;
 
-	if (update_fp_enabled(vcpu)) {
+	if (guest_owns_fp_regs(vcpu)) {
 		if (vcpu_has_sve(vcpu))
 			val |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN;
 	} else {
-- 
2.34.1



* [PATCH v2 04/19] KVM: arm64: Move FP state ownership from flag to a tristate
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
                   ` (2 preceding siblings ...)
  2022-06-10  9:28 ` [PATCH v2 03/19] KVM: arm64: Drop FP_FOREIGN_STATE from the hypervisor code Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 05/19] KVM: arm64: Add helpers to manipulate vcpu flags among a set Marc Zyngier
                   ` (15 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

The KVM FP code uses a pair of flags to denote three states:

- FP_ENABLED set: the guest owns the FP state
- FP_HOST set: the host owns the FP state
- FP_ENABLED and FP_HOST clear: nobody owns the FP state at all

and both flags set is an illegal state, which nothing ever checks
for...

As it turns out, this isn't really a good match for flags, and
we'd be better off if this was a simpler tristate, each state
having a name that actually reflects the state:

- FP_STATE_FREE
- FP_STATE_HOST_OWNED
- FP_STATE_GUEST_OWNED

Kill the two flags, and move over to an enum encoding these
three states. This results in less confusing code, and less risk of
ending up in the uncharted territory of a 4th state if we forget
to clear one of the two flags.
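
As a quick summary of where each state ends up being set (this is only
a paraphrase of the hunks below, not extra code):

  vcpu->arch.fp_state = FP_STATE_HOST_OWNED;  /* on vcpu_load: host regs live   */
  vcpu->arch.fp_state = FP_STATE_FREE;        /* state flushed, nobody owns it  */
  vcpu->arch.fp_state = FP_STATE_GUEST_OWNED; /* FP trap taken: guest regs live */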

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/include/asm/kvm_host.h       |  9 +++++++--
 arch/arm64/kvm/fpsimd.c                 | 14 ++++++--------
 arch/arm64/kvm/hyp/include/hyp/switch.h |  8 +++-----
 arch/arm64/kvm/hyp/nvhe/switch.c        |  4 ++--
 arch/arm64/kvm/hyp/vhe/switch.c         |  2 +-
 5 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 63103cc1bdc4..372c5642cfab 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -325,6 +325,13 @@ struct kvm_vcpu_arch {
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
 
+	/* Ownership of the FP regs */
+	enum {
+		FP_STATE_FREE,
+		FP_STATE_HOST_OWNED,
+		FP_STATE_GUEST_OWNED,
+	} fp_state;
+
 	/* Miscellaneous vcpu state flags */
 	u64 flags;
 
@@ -430,8 +437,6 @@ struct kvm_vcpu_arch {
 
 /* vcpu_arch flags field values: */
 #define KVM_ARM64_DEBUG_DIRTY		(1 << 0)
-#define KVM_ARM64_FP_ENABLED		(1 << 1) /* guest FP regs loaded */
-#define KVM_ARM64_FP_HOST		(1 << 2) /* host FP regs loaded */
 #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
 #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
 #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index edbc0183c89b..d397efe1a378 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -77,8 +77,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 	BUG_ON(!current->mm);
 	BUG_ON(test_thread_flag(TIF_SVE));
 
-	vcpu->arch.flags &= ~KVM_ARM64_FP_ENABLED;
-	vcpu->arch.flags |= KVM_ARM64_FP_HOST;
+	vcpu->arch.fp_state = FP_STATE_HOST_OWNED;
 
 	vcpu->arch.flags &= ~KVM_ARM64_HOST_SVE_ENABLED;
 	if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
@@ -98,9 +97,8 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 		if (read_sysreg(cpacr_el1) & CPACR_EL1_SMEN_EL0EN)
 			vcpu->arch.flags |= KVM_ARM64_HOST_SME_ENABLED;
 
-		if (read_sysreg_s(SYS_SVCR) &
-		    (SVCR_SM_MASK | SVCR_ZA_MASK)) {
-			vcpu->arch.flags &= ~KVM_ARM64_FP_HOST;
+		if (read_sysreg_s(SYS_SVCR) & (SVCR_SM_MASK | SVCR_ZA_MASK)) {
+			vcpu->arch.fp_state = FP_STATE_FREE;
 			fpsimd_save_and_flush_cpu_state();
 		}
 	}
@@ -119,7 +117,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu)
 {
 	if (!system_supports_fpsimd() || test_thread_flag(TIF_FOREIGN_FPSTATE))
-		vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST);
+		vcpu->arch.fp_state = FP_STATE_FREE;
 }
 
 /*
@@ -133,7 +131,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
 {
 	WARN_ON_ONCE(!irqs_disabled());
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
+	if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED) {
 		/*
 		 * Currently we do not support SME guests so SVCR is
 		 * always 0 and we just need a variable to point to.
@@ -176,7 +174,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 					 CPACR_EL1_SMEN_EL1EN);
 	}
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
+	if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED) {
 		if (vcpu_has_sve(vcpu)) {
 			__vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index e54320384943..6cbbb6c02f66 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -40,7 +40,7 @@ extern struct kvm_exception_table_entry __stop___kvm_ex_table;
 /* Check whether the FP regs are owned by the guest */
 static inline bool guest_owns_fp_regs(struct kvm_vcpu *vcpu)
 {
-	return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED);
+	return vcpu->arch.fp_state == FP_STATE_GUEST_OWNED;
 }
 
 /* Save the 32-bit only FPSIMD system register state */
@@ -179,10 +179,8 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
 	isb();
 
 	/* Write out the host state if it's in the registers */
-	if (vcpu->arch.flags & KVM_ARM64_FP_HOST) {
+	if (vcpu->arch.fp_state == FP_STATE_HOST_OWNED)
 		__fpsimd_save_state(vcpu->arch.host_fpsimd_state);
-		vcpu->arch.flags &= ~KVM_ARM64_FP_HOST;
-	}
 
 	/* Restore the guest state */
 	if (sve_guest)
@@ -194,7 +192,7 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
 	if (!(read_sysreg(hcr_el2) & HCR_RW))
 		write_sysreg(__vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2);
 
-	vcpu->arch.flags |= KVM_ARM64_FP_ENABLED;
+	vcpu->arch.fp_state = FP_STATE_GUEST_OWNED;
 
 	return true;
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index a6b9f1186577..764bdc423cb8 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -123,7 +123,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 	}
 
 	cptr = CPTR_EL2_DEFAULT;
-	if (vcpu_has_sve(vcpu) && (vcpu->arch.flags & KVM_ARM64_FP_ENABLED))
+	if (vcpu_has_sve(vcpu) && (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED))
 		cptr |= CPTR_EL2_TZ;
 	if (cpus_have_final_cap(ARM64_SME))
 		cptr &= ~CPTR_EL2_TSM;
@@ -335,7 +335,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	__sysreg_restore_state_nvhe(host_ctxt);
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
+	if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED)
 		__fpsimd_save_fpexc32(vcpu);
 
 	__debug_switch_to_host(vcpu);
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 46f365254e9f..bce7fc51f9a1 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -175,7 +175,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 
 	sysreg_restore_host_state_vhe(host_ctxt);
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
+	if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED)
 		__fpsimd_save_fpexc32(vcpu);
 
 	__debug_switch_to_host(vcpu);
-- 
2.34.1



* [PATCH v2 05/19] KVM: arm64: Add helpers to manipulate vcpu flags among a set
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
                   ` (3 preceding siblings ...)
  2022-06-10  9:28 ` [PATCH v2 04/19] KVM: arm64: Move FP state ownership from flag to a tristate Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-10 13:05   ` Fuad Tabba
  2022-06-11 18:37   ` Reiji Watanabe
  2022-06-10  9:28 ` [PATCH v2 06/19] KVM: arm64: Add three sets of flags to the vcpu state Marc Zyngier
                   ` (14 subsequent siblings)
  19 siblings, 2 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

Careful analysis of the vcpu flags shows that this is a mix of
configuration, communication between the host and the hypervisor,
as well as ancillary state that has no consistency. It'd be a lot
better if we could split these flags into consistent categories.

However, even if we split these flags apart, we want to make sure
that each flag can only be applied to its own set, and not across
sets.

To achieve this, use a preprocessor hack so that each flag is always
associated with:

- the set that contains it,

- a mask that describes all the bits that contain it (for a simple
  flag, this is the same thing as the flag itself, but we will
  eventually have values that cover multiple bits at once).

Each flag is thus a triplet that is not directly usable as a value,
but used by three helpers that allow the flag to be set, cleared,
and fetched. By mandating the use of such helpers, we can easily
enforce that a flag can only be used with the set it belongs to.

Finally, one last helper "unpacks" the raw value from the triplet
that represents a flag, which is useful for multi-bit values that
need to be enumerated (in a switch statement, for example).

Further patches will start making use of this infrastructure.
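
As a rough illustration (using a made-up flag name, and the 'sflags'
field that only appears later in the series), a flag carries its
flag-set around with it, so the accessors always expand onto the right
field:

  /* Hypothetical flag; expands to the triplet: sflags, BIT(3), BIT(3) */
  #define MY_EXAMPLE_FLAG	__vcpu_single_flag(sflags, BIT(3))

  vcpu_set_flag(vcpu, MY_EXAMPLE_FLAG);     /* vcpu->arch.sflags |= BIT(3)   */
  vcpu_get_flag(vcpu, MY_EXAMPLE_FLAG);     /* vcpu->arch.sflags & BIT(3)    */
  vcpu_clear_flag(vcpu, MY_EXAMPLE_FLAG);   /* vcpu->arch.sflags &= ~BIT(3)  */
  unpack_vcpu_flag(MY_EXAMPLE_FLAG);        /* raw BIT(3), e.g. for switch() */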

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 44 +++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 372c5642cfab..6d30ac7e3164 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -415,6 +415,50 @@ struct kvm_vcpu_arch {
 	} steal;
 };
 
+/*
+ * Each 'flag' is composed of a comma-separated triplet:
+ *
+ * - the flag-set it belongs to in the vcpu->arch structure
+ * - the value for that flag
+ * - the mask for that flag
+ *
+ *  __vcpu_single_flag() builds such a triplet for a single-bit flag.
+ * unpack_vcpu_flag() extract the flag value from the triplet for
+ * direct use outside of the flag accessors.
+ */
+#define __vcpu_single_flag(_set, _f)	_set, (_f), (_f)
+
+#define __unpack_flag(_set, _f, _m)	_f
+#define unpack_vcpu_flag(...)		__unpack_flag(__VA_ARGS__)
+
+#define __vcpu_get_flag(v, flagset, f, m)			\
+	({							\
+		v->arch.flagset & (m);				\
+	})
+
+#define __vcpu_set_flag(v, flagset, f, m)			\
+	do {							\
+		typeof(v->arch.flagset) *fset;			\
+								\
+		fset = &v->arch.flagset;			\
+		if (HWEIGHT(m) > 1)				\
+			*fset &= ~(m);				\
+		*fset |= (f);					\
+	} while (0)
+
+#define __vcpu_clear_flag(v, flagset, f, m)			\
+	do {							\
+		typeof(v->arch.flagset) *fset;			\
+								\
+		fset = &v->arch.flagset;			\
+		*fset &= ~(m);					\
+	} while (0)
+
+#define vcpu_get_flag(v, ...)	__vcpu_get_flag((v), __VA_ARGS__)
+#define vcpu_set_flag(v, ...)	__vcpu_set_flag((v), __VA_ARGS__)
+#define vcpu_clear_flag(v, ...)	__vcpu_clear_flag((v), __VA_ARGS__)
+
+
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
 			     sve_ffr_offset((vcpu)->arch.sve_max_vl))
-- 
2.34.1



* [PATCH v2 06/19] KVM: arm64: Add three sets of flags to the vcpu state
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
                   ` (4 preceding siblings ...)
  2022-06-10  9:28 ` [PATCH v2 05/19] KVM: arm64: Add helpers to manipulate vcpu flags among a set Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-11 18:41   ` Reiji Watanabe
  2022-06-10  9:28 ` [PATCH v2 07/19] KVM: arm64: Move vcpu configuration flags into their own set Marc Zyngier
                   ` (13 subsequent siblings)
  19 siblings, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

It so appears that each of the vcpu flags really belongs to
one of three categories:

- a configuration flag, set once and for all
- an input flag generated by the kernel for the hypervisor to use
- a state flag that is only for the kernel's own bookkeeping

As we are going to split all the existing flags into these three
sets, introduce all three in one go.

No functional change other than a bit of bloat...

Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 6d30ac7e3164..af45320f247f 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -335,6 +335,15 @@ struct kvm_vcpu_arch {
 	/* Miscellaneous vcpu state flags */
 	u64 flags;
 
+	/* Configuration flags, set once and for all before the vcpu can run */
+	u64 cflags;
+
+	/* Input flags to the hypervisor code, potentially cleared after use */
+	u64 iflags;
+
+	/* State flags for kernel bookkeeping, unused by the hypervisor code */
+	u64 sflags;
+
 	/*
 	 * We maintain more than a single set of debug registers to support
 	 * debugging the guest from the host and to maintain separate host and
-- 
2.34.1



* [PATCH v2 07/19] KVM: arm64: Move vcpu configuration flags into their own set
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
                   ` (5 preceding siblings ...)
  2022-06-10  9:28 ` [PATCH v2 06/19] KVM: arm64: Add three sets of flags to the vcpu state Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 08/19] KVM: arm64: Move vcpu PC/Exception flags to the input flag set Marc Zyngier
                   ` (12 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

The KVM_ARM64_{GUEST_HAS_SVE,VCPU_SVE_FINALIZED,GUEST_HAS_PTRAUTH}
flags are purely configuration flags. Once set, they are never cleared,
but evaluated all over the code base.

Move these three flags into the configuration set in one go, using
the new accessors, and take this opportunity to drop the KVM_ARM64_
prefix which doesn't provide any help.

Reviewed-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 17 ++++++++++-------
 arch/arm64/kvm/reset.c            |  6 +++---
 2 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index af45320f247f..66a08b0e12a8 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -467,6 +467,13 @@ struct kvm_vcpu_arch {
 #define vcpu_set_flag(v, ...)	__vcpu_set_flag((v), __VA_ARGS__)
 #define vcpu_clear_flag(v, ...)	__vcpu_clear_flag((v), __VA_ARGS__)
 
+/* SVE exposed to guest */
+#define GUEST_HAS_SVE		__vcpu_single_flag(cflags, BIT(0))
+/* SVE config completed */
+#define VCPU_SVE_FINALIZED	__vcpu_single_flag(cflags, BIT(1))
+/* PTRAUTH exposed to guest */
+#define GUEST_HAS_PTRAUTH	__vcpu_single_flag(cflags, BIT(2))
+
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
@@ -491,9 +498,6 @@ struct kvm_vcpu_arch {
 /* vcpu_arch flags field values: */
 #define KVM_ARM64_DEBUG_DIRTY		(1 << 0)
 #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
-#define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
-#define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
-#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
 #define KVM_ARM64_PENDING_EXCEPTION	(1 << 8) /* Exception pending */
 /*
  * Overlaps with KVM_ARM64_EXCEPT_MASK on purpose so that it can't be
@@ -530,13 +534,13 @@ struct kvm_vcpu_arch {
 				 KVM_GUESTDBG_SINGLESTEP)
 
 #define vcpu_has_sve(vcpu) (system_supports_sve() &&			\
-			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
+			    vcpu_get_flag(vcpu, GUEST_HAS_SVE))
 
 #ifdef CONFIG_ARM64_PTR_AUTH
 #define vcpu_has_ptrauth(vcpu)						\
 	((cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH) ||		\
 	  cpus_have_final_cap(ARM64_HAS_GENERIC_AUTH)) &&		\
-	 (vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
+	  vcpu_get_flag(vcpu, GUEST_HAS_PTRAUTH))
 #else
 #define vcpu_has_ptrauth(vcpu)		false
 #endif
@@ -893,8 +897,7 @@ void kvm_init_protected_traps(struct kvm_vcpu *vcpu);
 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 
-#define kvm_arm_vcpu_sve_finalized(vcpu) \
-	((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED)
+#define kvm_arm_vcpu_sve_finalized(vcpu) vcpu_get_flag(vcpu, VCPU_SVE_FINALIZED)
 
 #define kvm_has_mte(kvm)					\
 	(system_supports_mte() &&				\
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 6c70c6f61c70..0e08fbe68715 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -81,7 +81,7 @@ static int kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu)
 	 * KVM_REG_ARM64_SVE_VLS.  Allocation is deferred until
 	 * kvm_arm_vcpu_finalize(), which freezes the configuration.
 	 */
-	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_SVE;
+	vcpu_set_flag(vcpu, GUEST_HAS_SVE);
 
 	return 0;
 }
@@ -120,7 +120,7 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu)
 	}
 	
 	vcpu->arch.sve_state = buf;
-	vcpu->arch.flags |= KVM_ARM64_VCPU_SVE_FINALIZED;
+	vcpu_set_flag(vcpu, VCPU_SVE_FINALIZED);
 	return 0;
 }
 
@@ -177,7 +177,7 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
 	    !system_has_full_ptr_auth())
 		return -EINVAL;
 
-	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
+	vcpu_set_flag(vcpu, GUEST_HAS_PTRAUTH);
 	return 0;
 }
 
-- 
2.34.1



* [PATCH v2 08/19] KVM: arm64: Move vcpu PC/Exception flags to the input flag set
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
                   ` (6 preceding siblings ...)
  2022-06-10  9:28 ` [PATCH v2 07/19] KVM: arm64: Move vcpu configuration flags into their own set Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 09/19] KVM: arm64: Move vcpu debug/SPE/TRBE " Marc Zyngier
                   ` (11 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

The PC update flags (which also deal with exception injection)
are one of the most complicated uses of the flags we have. Make
them more foolproof by:

- moving them over to the new accessors and assigning them to the
  input flag set

- turning the combination of generic ELx flags with another flag
  indicating the target EL itself into an explicit set of
  flags for each EL and vector combination

- adding a new accessor to pend the exception

This is otherwise a pretty straightforward conversion.
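
For illustration only (not part of the patch): with the bit assignments
from the hunk below, kvm_pend_exception(vcpu, EXCEPT_AA64_EL1_SYNC)
roughly boils down to:

  vcpu->arch.iflags |= BIT(0);            /* PENDING_EXCEPTION          */
  vcpu->arch.iflags &= ~GENMASK(3, 1);    /* wipe the target EL/vector  */
  vcpu->arch.iflags |= 0 << 1;            /* EXCEPT_AA64_EL1_SYNC value */

and __kvm_adjust_pc() later picks the injection path by switching on
vcpu_get_flag(vcpu, EXCEPT_MASK).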

Reviewed-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_emulate.h |  9 ++++-
 arch/arm64/include/asm/kvm_host.h    | 58 ++++++++++++++++------------
 arch/arm64/kvm/arm.c                 |  4 +-
 arch/arm64/kvm/hyp/exception.c       | 23 ++++++-----
 arch/arm64/kvm/hyp/nvhe/sys_regs.c   |  4 +-
 arch/arm64/kvm/inject_fault.c        | 17 +++-----
 6 files changed, 61 insertions(+), 54 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 0e66edd3aff2..6ec58080ece8 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -473,9 +473,16 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu,
 
 static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.flags |= KVM_ARM64_INCREMENT_PC;
+	vcpu_set_flag(vcpu, INCREMENT_PC);
 }
 
+#define kvm_pend_exception(v, e)					\
+	do {								\
+		vcpu_set_flag((v), PENDING_EXCEPTION);			\
+		vcpu_set_flag((v), e);					\
+	} while (0)
+
+
 static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu, int feature)
 {
 	return test_bit(feature, vcpu->arch.features);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 66a08b0e12a8..db42b4c06449 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -474,6 +474,40 @@ struct kvm_vcpu_arch {
 /* PTRAUTH exposed to guest */
 #define GUEST_HAS_PTRAUTH	__vcpu_single_flag(cflags, BIT(2))
 
+/* Exception pending */
+#define PENDING_EXCEPTION	__vcpu_single_flag(iflags, BIT(0))
+/*
+ * PC increment. Overlaps with EXCEPT_MASK on purpose so that it can't
+ * be set together with an exception...
+ */
+#define INCREMENT_PC		__vcpu_single_flag(iflags, BIT(1))
+/* Target EL/MODE (not a single flag, but let's abuse the macro) */
+#define EXCEPT_MASK		__vcpu_single_flag(iflags, GENMASK(3, 1))
+
+/* Helpers to encode exceptions with minimum fuss */
+#define __EXCEPT_MASK_VAL	unpack_vcpu_flag(EXCEPT_MASK)
+#define __EXCEPT_SHIFT		__builtin_ctzl(__EXCEPT_MASK_VAL)
+#define __vcpu_except_flags(_f)	iflags, (_f << __EXCEPT_SHIFT), __EXCEPT_MASK_VAL
+
+/*
+ * When PENDING_EXCEPTION is set, EXCEPT_MASK can take the following
+ * values:
+ *
+ * For AArch32 EL1:
+ */
+#define EXCEPT_AA32_UND		__vcpu_except_flags(0)
+#define EXCEPT_AA32_IABT	__vcpu_except_flags(1)
+#define EXCEPT_AA32_DABT	__vcpu_except_flags(2)
+/* For AArch64: */
+#define EXCEPT_AA64_EL1_SYNC	__vcpu_except_flags(0)
+#define EXCEPT_AA64_EL1_IRQ	__vcpu_except_flags(1)
+#define EXCEPT_AA64_EL1_FIQ	__vcpu_except_flags(2)
+#define EXCEPT_AA64_EL1_SERR	__vcpu_except_flags(3)
+/* For AArch64 with NV (one day): */
+#define EXCEPT_AA64_EL2_SYNC	__vcpu_except_flags(4)
+#define EXCEPT_AA64_EL2_IRQ	__vcpu_except_flags(5)
+#define EXCEPT_AA64_EL2_FIQ	__vcpu_except_flags(6)
+#define EXCEPT_AA64_EL2_SERR	__vcpu_except_flags(7)
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
@@ -498,30 +532,6 @@ struct kvm_vcpu_arch {
 /* vcpu_arch flags field values: */
 #define KVM_ARM64_DEBUG_DIRTY		(1 << 0)
 #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
-#define KVM_ARM64_PENDING_EXCEPTION	(1 << 8) /* Exception pending */
-/*
- * Overlaps with KVM_ARM64_EXCEPT_MASK on purpose so that it can't be
- * set together with an exception...
- */
-#define KVM_ARM64_INCREMENT_PC		(1 << 9) /* Increment PC */
-#define KVM_ARM64_EXCEPT_MASK		(7 << 9) /* Target EL/MODE */
-/*
- * When KVM_ARM64_PENDING_EXCEPTION is set, KVM_ARM64_EXCEPT_MASK can
- * take the following values:
- *
- * For AArch32 EL1:
- */
-#define KVM_ARM64_EXCEPT_AA32_UND	(0 << 9)
-#define KVM_ARM64_EXCEPT_AA32_IABT	(1 << 9)
-#define KVM_ARM64_EXCEPT_AA32_DABT	(2 << 9)
-/* For AArch64: */
-#define KVM_ARM64_EXCEPT_AA64_ELx_SYNC	(0 << 9)
-#define KVM_ARM64_EXCEPT_AA64_ELx_IRQ	(1 << 9)
-#define KVM_ARM64_EXCEPT_AA64_ELx_FIQ	(2 << 9)
-#define KVM_ARM64_EXCEPT_AA64_ELx_SERR	(3 << 9)
-#define KVM_ARM64_EXCEPT_AA64_EL1	(0 << 11)
-#define KVM_ARM64_EXCEPT_AA64_EL2	(1 << 11)
-
 #define KVM_ARM64_DEBUG_STATE_SAVE_SPE	(1 << 12) /* Save SPE context if active  */
 #define KVM_ARM64_DEBUG_STATE_SAVE_TRBE	(1 << 13) /* Save TRBE context if active  */
 #define KVM_ARM64_ON_UNSUPPORTED_CPU	(1 << 15) /* Physical CPU not in supported_cpus */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 400bb0fe2745..5beabbe69585 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1013,8 +1013,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	 * the vcpu state. Note that this relies on __kvm_adjust_pc()
 	 * being preempt-safe on VHE.
 	 */
-	if (unlikely(vcpu->arch.flags & (KVM_ARM64_PENDING_EXCEPTION |
-					 KVM_ARM64_INCREMENT_PC)))
+	if (unlikely(vcpu_get_flag(vcpu, PENDING_EXCEPTION) ||
+		     vcpu_get_flag(vcpu, INCREMENT_PC)))
 		kvm_call_hyp(__kvm_adjust_pc, vcpu);
 
 	vcpu_put(vcpu);
diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index c5d009715402..b7557b25ed56 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -303,14 +303,14 @@ static void enter_exception32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
 static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 {
 	if (vcpu_el1_is_32bit(vcpu)) {
-		switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) {
-		case KVM_ARM64_EXCEPT_AA32_UND:
+		switch (vcpu_get_flag(vcpu, EXCEPT_MASK)) {
+		case unpack_vcpu_flag(EXCEPT_AA32_UND):
 			enter_exception32(vcpu, PSR_AA32_MODE_UND, 4);
 			break;
-		case KVM_ARM64_EXCEPT_AA32_IABT:
+		case unpack_vcpu_flag(EXCEPT_AA32_IABT):
 			enter_exception32(vcpu, PSR_AA32_MODE_ABT, 12);
 			break;
-		case KVM_ARM64_EXCEPT_AA32_DABT:
+		case unpack_vcpu_flag(EXCEPT_AA32_DABT):
 			enter_exception32(vcpu, PSR_AA32_MODE_ABT, 16);
 			break;
 		default:
@@ -318,9 +318,8 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 			break;
 		}
 	} else {
-		switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) {
-		case (KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
-		      KVM_ARM64_EXCEPT_AA64_EL1):
+		switch (vcpu_get_flag(vcpu, EXCEPT_MASK)) {
+		case unpack_vcpu_flag(EXCEPT_AA64_EL1_SYNC):
 			enter_exception64(vcpu, PSR_MODE_EL1h, except_type_sync);
 			break;
 		default:
@@ -340,12 +339,12 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
  */
 void __kvm_adjust_pc(struct kvm_vcpu *vcpu)
 {
-	if (vcpu->arch.flags & KVM_ARM64_PENDING_EXCEPTION) {
+	if (vcpu_get_flag(vcpu, PENDING_EXCEPTION)) {
 		kvm_inject_exception(vcpu);
-		vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION |
-				      KVM_ARM64_EXCEPT_MASK);
-	} else 	if (vcpu->arch.flags & KVM_ARM64_INCREMENT_PC) {
+		vcpu_clear_flag(vcpu, PENDING_EXCEPTION);
+		vcpu_clear_flag(vcpu, EXCEPT_MASK);
+	} else if (vcpu_get_flag(vcpu, INCREMENT_PC)) {
 		kvm_skip_instr(vcpu);
-		vcpu->arch.flags &= ~KVM_ARM64_INCREMENT_PC;
+		vcpu_clear_flag(vcpu, INCREMENT_PC);
 	}
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index b6d86e423319..edd3eabf520f 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -38,9 +38,7 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 	*vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
 	*vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR);
 
-	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 |
-			     KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
-			     KVM_ARM64_PENDING_EXCEPTION);
+	kvm_pend_exception(vcpu, EXCEPT_AA64_EL1_SYNC);
 
 	__kvm_adjust_pc(vcpu);
 
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index 55a5dbe957e0..f32f4a2a347f 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -20,9 +20,7 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
 	bool is_aarch32 = vcpu_mode_is_32bit(vcpu);
 	u64 esr = 0;
 
-	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1		|
-			     KVM_ARM64_EXCEPT_AA64_ELx_SYNC	|
-			     KVM_ARM64_PENDING_EXCEPTION);
+	kvm_pend_exception(vcpu, EXCEPT_AA64_EL1_SYNC);
 
 	vcpu_write_sys_reg(vcpu, addr, FAR_EL1);
 
@@ -52,9 +50,7 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 {
 	u64 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
 
-	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1		|
-			     KVM_ARM64_EXCEPT_AA64_ELx_SYNC	|
-			     KVM_ARM64_PENDING_EXCEPTION);
+	kvm_pend_exception(vcpu, EXCEPT_AA64_EL1_SYNC);
 
 	/*
 	 * Build an unknown exception, depending on the instruction
@@ -73,8 +69,7 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 
 static void inject_undef32(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA32_UND |
-			     KVM_ARM64_PENDING_EXCEPTION);
+	kvm_pend_exception(vcpu, EXCEPT_AA32_UND);
 }
 
 /*
@@ -97,14 +92,12 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt, u32 addr)
 	far = vcpu_read_sys_reg(vcpu, FAR_EL1);
 
 	if (is_pabt) {
-		vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA32_IABT |
-				     KVM_ARM64_PENDING_EXCEPTION);
+		kvm_pend_exception(vcpu, EXCEPT_AA32_IABT);
 		far &= GENMASK(31, 0);
 		far |= (u64)addr << 32;
 		vcpu_write_sys_reg(vcpu, fsr, IFSR32_EL2);
 	} else { /* !iabt */
-		vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA32_DABT |
-				     KVM_ARM64_PENDING_EXCEPTION);
+		kvm_pend_exception(vcpu, EXCEPT_AA32_DABT);
 		far &= GENMASK(63, 32);
 		far |= addr;
 		vcpu_write_sys_reg(vcpu, fsr, ESR_EL1);
-- 
2.34.1



* [PATCH v2 09/19] KVM: arm64: Move vcpu debug/SPE/TRBE flags to the input flag set
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
                   ` (7 preceding siblings ...)
  2022-06-10  9:28 ` [PATCH v2 08/19] KVM: arm64: Move vcpu PC/Exception flags to the input flag set Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 10/19] KVM: arm64: Move vcpu SVE/SME flags to the state " Marc Zyngier
                   ` (10 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

The three debug flags (which deal with the debug registers, SPE and
TRBE) are all input flags to the hypervisor code.

Move them into the input set and convert them to the new accessors.

Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h          |  9 +++++---
 arch/arm64/kvm/debug.c                     | 25 +++++++++++-----------
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h  |  6 +++---
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h |  4 ++--
 arch/arm64/kvm/hyp/nvhe/debug-sr.c         |  8 +++----
 arch/arm64/kvm/sys_regs.c                  |  8 +++----
 6 files changed, 31 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index db42b4c06449..4c7446400b77 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -508,6 +508,12 @@ struct kvm_vcpu_arch {
 #define EXCEPT_AA64_EL2_IRQ	__vcpu_except_flags(5)
 #define EXCEPT_AA64_EL2_FIQ	__vcpu_except_flags(6)
 #define EXCEPT_AA64_EL2_SERR	__vcpu_except_flags(7)
+/* Guest debug is live */
+#define DEBUG_DIRTY		__vcpu_single_flag(iflags, BIT(4))
+/* Save SPE context if active  */
+#define DEBUG_STATE_SAVE_SPE	__vcpu_single_flag(iflags, BIT(5))
+/* Save TRBE context if active  */
+#define DEBUG_STATE_SAVE_TRBE	__vcpu_single_flag(iflags, BIT(6))
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
@@ -530,10 +536,7 @@ struct kvm_vcpu_arch {
 })
 
 /* vcpu_arch flags field values: */
-#define KVM_ARM64_DEBUG_DIRTY		(1 << 0)
 #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
-#define KVM_ARM64_DEBUG_STATE_SAVE_SPE	(1 << 12) /* Save SPE context if active  */
-#define KVM_ARM64_DEBUG_STATE_SAVE_TRBE	(1 << 13) /* Save TRBE context if active  */
 #define KVM_ARM64_ON_UNSUPPORTED_CPU	(1 << 15) /* Physical CPU not in supported_cpus */
 #define KVM_ARM64_HOST_SME_ENABLED	(1 << 16) /* SME enabled for EL0 */
 #define KVM_ARM64_WFIT			(1 << 17) /* WFIT instruction trapped */
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 4fd5c216c4bb..0b28d7db7c76 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -104,11 +104,11 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 	 * Trap debug register access when one of the following is true:
 	 *  - Userspace is using the hardware to debug the guest
 	 *  (KVM_GUESTDBG_USE_HW is set).
-	 *  - The guest is not using debug (KVM_ARM64_DEBUG_DIRTY is clear).
+	 *  - The guest is not using debug (DEBUG_DIRTY clear).
 	 *  - The guest has enabled the OS Lock (debug exceptions are blocked).
 	 */
 	if ((vcpu->guest_debug & KVM_GUESTDBG_USE_HW) ||
-	    !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) ||
+	    !vcpu_get_flag(vcpu, DEBUG_DIRTY) ||
 	    kvm_vcpu_os_lock_enabled(vcpu))
 		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
 
@@ -147,8 +147,8 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
  * debug related registers.
  *
  * Additionally, KVM only traps guest accesses to the debug registers if
- * the guest is not actively using them (see the KVM_ARM64_DEBUG_DIRTY
- * flag on vcpu->arch.flags).  Since the guest must not interfere
+ * the guest is not actively using them (see the DEBUG_DIRTY
+ * flag on vcpu->arch.iflags).  Since the guest must not interfere
  * with the hardware state when debugging the guest, we must ensure that
  * trapping is enabled whenever we are debugging the guest using the
  * debug registers.
@@ -205,9 +205,8 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 		 *
 		 * We simply switch the debug_ptr to point to our new
 		 * external_debug_state which has been populated by the
-		 * debug ioctl. The existing KVM_ARM64_DEBUG_DIRTY
-		 * mechanism ensures the registers are updated on the
-		 * world switch.
+		 * debug ioctl. The existing DEBUG_DIRTY mechanism ensures
+		 * the registers are updated on the world switch.
 		 */
 		if (vcpu->guest_debug & KVM_GUESTDBG_USE_HW) {
 			/* Enable breakpoints/watchpoints */
@@ -216,7 +215,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 			vcpu_write_sys_reg(vcpu, mdscr, MDSCR_EL1);
 
 			vcpu->arch.debug_ptr = &vcpu->arch.external_debug_state;
-			vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
+			vcpu_set_flag(vcpu, DEBUG_DIRTY);
 
 			trace_kvm_arm_set_regset("BKPTS", get_num_brps(),
 						&vcpu->arch.debug_ptr->dbg_bcr[0],
@@ -246,7 +245,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 
 	/* If KDE or MDE are set, perform a full save/restore cycle. */
 	if (vcpu_read_sys_reg(vcpu, MDSCR_EL1) & (DBG_MDSCR_KDE | DBG_MDSCR_MDE))
-		vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
+		vcpu_set_flag(vcpu, DEBUG_DIRTY);
 
 	/* Write mdcr_el2 changes since vcpu_load on VHE systems */
 	if (has_vhe() && orig_mdcr_el2 != vcpu->arch.mdcr_el2)
@@ -298,16 +297,16 @@ void kvm_arch_vcpu_load_debug_state_flags(struct kvm_vcpu *vcpu)
 	 */
 	if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_PMSVER_SHIFT) &&
 	    !(read_sysreg_s(SYS_PMBIDR_EL1) & BIT(SYS_PMBIDR_EL1_P_SHIFT)))
-		vcpu->arch.flags |= KVM_ARM64_DEBUG_STATE_SAVE_SPE;
+		vcpu_set_flag(vcpu, DEBUG_STATE_SAVE_SPE);
 
 	/* Check if we have TRBE implemented and available at the host */
 	if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_TRBE_SHIFT) &&
 	    !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_PROG))
-		vcpu->arch.flags |= KVM_ARM64_DEBUG_STATE_SAVE_TRBE;
+		vcpu_set_flag(vcpu, DEBUG_STATE_SAVE_TRBE);
 }
 
 void kvm_arch_vcpu_put_debug_state_flags(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.flags &= ~(KVM_ARM64_DEBUG_STATE_SAVE_SPE |
-			      KVM_ARM64_DEBUG_STATE_SAVE_TRBE);
+	vcpu_clear_flag(vcpu, DEBUG_STATE_SAVE_SPE);
+	vcpu_clear_flag(vcpu, DEBUG_STATE_SAVE_TRBE);
 }
diff --git a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
index 4ebe9f558f3a..961bbef104a6 100644
--- a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
@@ -132,7 +132,7 @@ static inline void __debug_switch_to_guest_common(struct kvm_vcpu *vcpu)
 	struct kvm_guest_debug_arch *host_dbg;
 	struct kvm_guest_debug_arch *guest_dbg;
 
-	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
+	if (!vcpu_get_flag(vcpu, DEBUG_DIRTY))
 		return;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
@@ -151,7 +151,7 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu)
 	struct kvm_guest_debug_arch *host_dbg;
 	struct kvm_guest_debug_arch *guest_dbg;
 
-	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
+	if (!vcpu_get_flag(vcpu, DEBUG_DIRTY))
 		return;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
@@ -162,7 +162,7 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu)
 	__debug_save_state(guest_dbg, guest_ctxt);
 	__debug_restore_state(host_dbg, host_ctxt);
 
-	vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
+	vcpu_clear_flag(vcpu, DEBUG_DIRTY);
 }
 
 #endif /* __ARM64_KVM_HYP_DEBUG_SR_H__ */
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index 7ecca8b07851..baa5b9b3dde5 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -195,7 +195,7 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu)
 	__vcpu_sys_reg(vcpu, DACR32_EL2) = read_sysreg(dacr32_el2);
 	__vcpu_sys_reg(vcpu, IFSR32_EL2) = read_sysreg(ifsr32_el2);
 
-	if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
+	if (has_vhe() || vcpu_get_flag(vcpu, DEBUG_DIRTY))
 		__vcpu_sys_reg(vcpu, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2);
 }
 
@@ -212,7 +212,7 @@ static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
 	write_sysreg(__vcpu_sys_reg(vcpu, DACR32_EL2), dacr32_el2);
 	write_sysreg(__vcpu_sys_reg(vcpu, IFSR32_EL2), ifsr32_el2);
 
-	if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
+	if (has_vhe() || vcpu_get_flag(vcpu, DEBUG_DIRTY))
 		write_sysreg(__vcpu_sys_reg(vcpu, DBGVCR32_EL2), dbgvcr32_el2);
 }
 
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index df361d839902..e17455773b98 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -84,10 +84,10 @@ static void __debug_restore_trace(u64 trfcr_el1)
 void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu)
 {
 	/* Disable and flush SPE data generation */
-	if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE)
+	if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_SPE))
 		__debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1);
 	/* Disable and flush Self-Hosted Trace generation */
-	if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE)
+	if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_TRBE))
 		__debug_save_trace(&vcpu->arch.host_debug_state.trfcr_el1);
 }
 
@@ -98,9 +98,9 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
 
 void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu)
 {
-	if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE)
+	if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_SPE))
 		__debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1);
-	if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE)
+	if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_TRBE))
 		__debug_restore_trace(vcpu->arch.host_debug_state.trfcr_el1);
 }
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c06c0477fab5..f24797c57df8 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -387,7 +387,7 @@ static bool trap_debug_regs(struct kvm_vcpu *vcpu,
 {
 	if (p->is_write) {
 		vcpu_write_sys_reg(vcpu, p->regval, r->reg);
-		vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
+		vcpu_set_flag(vcpu, DEBUG_DIRTY);
 	} else {
 		p->regval = vcpu_read_sys_reg(vcpu, r->reg);
 	}
@@ -403,8 +403,8 @@ static bool trap_debug_regs(struct kvm_vcpu *vcpu,
  * A 32 bit write to a debug register leave top bits alone
  * A 32 bit read from a debug register only returns the bottom bits
  *
- * All writes will set the KVM_ARM64_DEBUG_DIRTY flag to ensure the
- * hyp.S code switches between host and guest values in future.
+ * All writes will set the DEBUG_DIRTY flag to ensure the hyp code
+ * switches between host and guest values in future.
  */
 static void reg_to_dbg(struct kvm_vcpu *vcpu,
 		       struct sys_reg_params *p,
@@ -420,7 +420,7 @@ static void reg_to_dbg(struct kvm_vcpu *vcpu,
 	val |= (p->regval & (mask >> shift)) << shift;
 	*dbg_reg = val;
 
-	vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
+	vcpu_set_flag(vcpu, DEBUG_DIRTY);
 }
 
 static void dbg_to_reg(struct kvm_vcpu *vcpu,
-- 
2.34.1



* [PATCH v2 10/19] KVM: arm64: Move vcpu SVE/SME flags to the state flag set
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
                   ` (8 preceding siblings ...)
  2022-06-10  9:28 ` [PATCH v2 09/19] KVM: arm64: Move vcpu debug/SPE/TRBE " Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 11/19] KVM: arm64: Move vcpu ON_UNSUPPORTED_CPU flag " Marc Zyngier
                   ` (9 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

The two HOST_{SVE,SME}_ENABLED flags are only used by the host kernel
to track its own state across a vcpu run so that it can be fully
restored.

Move these flags to the so-called state set.

Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h |  8 +++++---
 arch/arm64/kvm/fpsimd.c           | 12 ++++++------
 2 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 4c7446400b77..4f147bdc5ce9 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -515,6 +515,11 @@ struct kvm_vcpu_arch {
 /* Save TRBE context if active  */
 #define DEBUG_STATE_SAVE_TRBE	__vcpu_single_flag(iflags, BIT(6))
 
+/* SVE enabled for host EL0 */
+#define HOST_SVE_ENABLED	__vcpu_single_flag(sflags, BIT(0))
+/* SME enabled for EL0 */
+#define HOST_SME_ENABLED	__vcpu_single_flag(sflags, BIT(1))
+
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
 			     sve_ffr_offset((vcpu)->arch.sve_max_vl))
@@ -536,11 +541,8 @@ struct kvm_vcpu_arch {
 })
 
 /* vcpu_arch flags field values: */
-#define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
 #define KVM_ARM64_ON_UNSUPPORTED_CPU	(1 << 15) /* Physical CPU not in supported_cpus */
-#define KVM_ARM64_HOST_SME_ENABLED	(1 << 16) /* SME enabled for EL0 */
 #define KVM_ARM64_WFIT			(1 << 17) /* WFIT instruction trapped */
-
 #define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | \
 				 KVM_GUESTDBG_USE_SW_BP | \
 				 KVM_GUESTDBG_USE_HW | \
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index d397efe1a378..557a96f8e06a 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -79,9 +79,9 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 
 	vcpu->arch.fp_state = FP_STATE_HOST_OWNED;
 
-	vcpu->arch.flags &= ~KVM_ARM64_HOST_SVE_ENABLED;
+	vcpu_clear_flag(vcpu, HOST_SVE_ENABLED);
 	if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
-		vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED;
+		vcpu_set_flag(vcpu, HOST_SVE_ENABLED);
 
 	/*
 	 * We don't currently support SME guests but if we leave
@@ -93,9 +93,9 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 	 * operations. Do this for ZA as well for now for simplicity.
 	 */
 	if (system_supports_sme()) {
-		vcpu->arch.flags &= ~KVM_ARM64_HOST_SME_ENABLED;
+		vcpu_clear_flag(vcpu, HOST_SME_ENABLED);
 		if (read_sysreg(cpacr_el1) & CPACR_EL1_SMEN_EL0EN)
-			vcpu->arch.flags |= KVM_ARM64_HOST_SME_ENABLED;
+			vcpu_set_flag(vcpu, HOST_SME_ENABLED);
 
 		if (read_sysreg_s(SYS_SVCR) & (SVCR_SM_MASK | SVCR_ZA_MASK)) {
 			vcpu->arch.fp_state = FP_STATE_FREE;
@@ -164,7 +164,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 	 */
 	if (has_vhe() && system_supports_sme()) {
 		/* Also restore EL0 state seen on entry */
-		if (vcpu->arch.flags & KVM_ARM64_HOST_SME_ENABLED)
+		if (vcpu_get_flag(vcpu, HOST_SME_ENABLED))
 			sysreg_clear_set(CPACR_EL1, 0,
 					 CPACR_EL1_SMEN_EL0EN |
 					 CPACR_EL1_SMEN_EL1EN);
@@ -193,7 +193,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 		 * for EL0.  To avoid spurious traps, restore the trap state
 		 * seen by kvm_arch_vcpu_load_fp():
 		 */
-		if (vcpu->arch.flags & KVM_ARM64_HOST_SVE_ENABLED)
+		if (vcpu_get_flag(vcpu, HOST_SVE_ENABLED))
 			sysreg_clear_set(CPACR_EL1, 0, CPACR_EL1_ZEN_EL0EN);
 		else
 			sysreg_clear_set(CPACR_EL1, CPACR_EL1_ZEN_EL0EN, 0);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 11/19] KVM: arm64: Move vcpu ON_UNSUPPORTED_CPU flag to the state flag set
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
                   ` (9 preceding siblings ...)
  2022-06-10  9:28 ` [PATCH v2 10/19] KVM: arm64: Move vcpu SVE/SME flags to the state " Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-15 13:14   ` Alexandru Elisei
  2022-06-10  9:28 ` [PATCH v2 12/19] KVM: arm64: Move vcpu WFIT " Marc Zyngier
                   ` (8 subsequent siblings)
  19 siblings, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

The ON_UNSUPPORTED_CPU flag is only there to track the sad fact
that we have ended up on a CPU where we cannot really run.

Since this is only for the host kernel's use, move it to the state
set.
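
Callers keep going through the existing vcpu_*_on_unsupported_cpu()
wrappers, so only the backing storage changes. A minimal sketch of how
the flag is driven (the surrounding logic is illustrative, not part of
this patch):

	/* on vcpu_load, when the physical CPU isn't in supported_cpus */
	vcpu_set_on_unsupported_cpu(vcpu);

	/* checked before entering the guest */
	if (vcpu_on_unsupported_cpu(vcpu))
		/* refuse to run and report the failure to userspace */;

	/* on vcpu_put */
	vcpu_clear_on_unsupported_cpu(vcpu);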

Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 4f147bdc5ce9..0c22514cb7c7 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -519,6 +519,8 @@ struct kvm_vcpu_arch {
 #define HOST_SVE_ENABLED	__vcpu_single_flag(sflags, BIT(0))
 /* SME enabled for EL0 */
 #define HOST_SME_ENABLED	__vcpu_single_flag(sflags, BIT(1))
+/* Physical CPU not in supported_cpus */
+#define ON_UNSUPPORTED_CPU	__vcpu_single_flag(sflags, BIT(2))
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
@@ -541,7 +543,6 @@ struct kvm_vcpu_arch {
 })
 
 /* vcpu_arch flags field values: */
-#define KVM_ARM64_ON_UNSUPPORTED_CPU	(1 << 15) /* Physical CPU not in supported_cpus */
 #define KVM_ARM64_WFIT			(1 << 17) /* WFIT instruction trapped */
 #define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | \
 				 KVM_GUESTDBG_USE_SW_BP | \
@@ -561,13 +562,13 @@ struct kvm_vcpu_arch {
 #endif
 
 #define vcpu_on_unsupported_cpu(vcpu)					\
-	((vcpu)->arch.flags & KVM_ARM64_ON_UNSUPPORTED_CPU)
+	vcpu_get_flag(vcpu, ON_UNSUPPORTED_CPU)
 
 #define vcpu_set_on_unsupported_cpu(vcpu)				\
-	((vcpu)->arch.flags |= KVM_ARM64_ON_UNSUPPORTED_CPU)
+	vcpu_set_flag(vcpu, ON_UNSUPPORTED_CPU)
 
 #define vcpu_clear_on_unsupported_cpu(vcpu)				\
-	((vcpu)->arch.flags &= ~KVM_ARM64_ON_UNSUPPORTED_CPU)
+	vcpu_clear_flag(vcpu, ON_UNSUPPORTED_CPU)
 
 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.regs)
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 12/19] KVM: arm64: Move vcpu WFIT flag to the state flag set
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
                   ` (10 preceding siblings ...)
  2022-06-10  9:28 ` [PATCH v2 11/19] KVM: arm64: Move vcpu ON_UNSUPPORTED_CPU flag " Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 13/19] KVM: arm64: Kill unused vcpu flags field Marc Zyngier
                   ` (7 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

The host kernel uses the WFIT flag to remember that a vcpu has used
this instruction, so that it can wake the vcpu up as required. Move it
to the state set, as nothing in the hypervisor uses this information.
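
Condensed, the flow that this flag supports looks like this (a sketch
of the hunks below, spread across handle_exit.c, arch_timer.c and
arm.c):

	/* WFxT trap: remember it before blocking the vcpu */
	if (esr & ESR_ELx_WFx_ISS_WFxT)
		vcpu_set_flag(vcpu, IN_WFIT);
	kvm_vcpu_wfi(vcpu);

	/* the timer code uses it to decide whether a WFIT deadline is armed */
	return (cpus_have_final_cap(ARM64_HAS_WFXT) &&
		vcpu_get_flag(vcpu, IN_WFIT));

	/* cleared once the vcpu has been woken up */
	vcpu_clear_flag(vcpu, IN_WFIT);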

Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 4 ++--
 arch/arm64/kvm/arch_timer.c       | 2 +-
 arch/arm64/kvm/arm.c              | 2 +-
 arch/arm64/kvm/handle_exit.c      | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 0c22514cb7c7..0fb1a5b86f16 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -521,6 +521,8 @@ struct kvm_vcpu_arch {
 #define HOST_SME_ENABLED	__vcpu_single_flag(sflags, BIT(1))
 /* Physical CPU not in supported_cpus */
 #define ON_UNSUPPORTED_CPU	__vcpu_single_flag(sflags, BIT(2))
+/* WFIT instruction trapped */
+#define IN_WFIT			__vcpu_single_flag(sflags, BIT(3))
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
@@ -542,8 +544,6 @@ struct kvm_vcpu_arch {
 	__size_ret;							\
 })
 
-/* vcpu_arch flags field values: */
-#define KVM_ARM64_WFIT			(1 << 17) /* WFIT instruction trapped */
 #define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | \
 				 KVM_GUESTDBG_USE_SW_BP | \
 				 KVM_GUESTDBG_USE_HW | \
diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index 4e39ace073af..5290ca5db663 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -242,7 +242,7 @@ static bool kvm_timer_irq_can_fire(struct arch_timer_context *timer_ctx)
 static bool vcpu_has_wfit_active(struct kvm_vcpu *vcpu)
 {
 	return (cpus_have_final_cap(ARM64_HAS_WFXT) &&
-		(vcpu->arch.flags & KVM_ARM64_WFIT));
+		vcpu_get_flag(vcpu, IN_WFIT));
 }
 
 static u64 wfit_delay_ns(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 5beabbe69585..8b9da9d30485 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -657,7 +657,7 @@ void kvm_vcpu_wfi(struct kvm_vcpu *vcpu)
 	preempt_enable();
 
 	kvm_vcpu_halt(vcpu);
-	vcpu->arch.flags &= ~KVM_ARM64_WFIT;
+	vcpu_clear_flag(vcpu, IN_WFIT);
 	kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 
 	preempt_disable();
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index f66c0142b335..d045f5b973b9 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -120,7 +120,7 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu)
 		kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
 	} else {
 		if (esr & ESR_ELx_WFx_ISS_WFxT)
-			vcpu->arch.flags |= KVM_ARM64_WFIT;
+			vcpu_set_flag(vcpu, IN_WFIT);
 
 		kvm_vcpu_wfi(vcpu);
 	}
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 13/19] KVM: arm64: Kill unused vcpu flags field
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
                   ` (11 preceding siblings ...)
  2022-06-10  9:28 ` [PATCH v2 12/19] KVM: arm64: Move vcpu WFIT " Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 14/19] KVM: arm64: Convert vcpu sysregs_loaded_on_cpu to a state flag Marc Zyngier
                   ` (6 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

Hooray, we have now sorted all the preexisting flags, and the
'flags' field is now unused. Get rid of it while nobody is
looking.

Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 0fb1a5b86f16..39da28f85045 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -332,9 +332,6 @@ struct kvm_vcpu_arch {
 		FP_STATE_GUEST_OWNED,
 	} fp_state;
 
-	/* Miscellaneous vcpu state flags */
-	u64 flags;
-
 	/* Configuration flags, set once and for all before the vcpu can run */
 	u64 cflags;
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 14/19] KVM: arm64: Convert vcpu sysregs_loaded_on_cpu to a state flag
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
                   ` (12 preceding siblings ...)
  2022-06-10  9:28 ` [PATCH v2 13/19] KVM: arm64: Kill unused vcpu flags field Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 15/19] KVM: arm64: Warn when PENDING_EXCEPTION and INCREMENT_PC are set together Marc Zyngier
                   ` (5 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

The aptly named boolean 'sysregs_loaded_on_cpu' tracks whether
some of the vcpu system registers are resident on the physical
CPU when running in VHE mode.

This is obviously a flag in hiding, so let's convert it to
a state flag, since this is solely a host concern (the hypervisor
itself always knows which state we're in).
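
Condensed, the conversion ties the VHE load/put path to the sysreg
accessors like so (a sketch of the hunks below):

	/* kvm_vcpu_load_sysregs_vhe() / kvm_vcpu_put_sysregs_vhe() */
	vcpu_set_flag(vcpu, SYSREGS_ON_CPU);
	...
	vcpu_clear_flag(vcpu, SYSREGS_ON_CPU);

	/* vcpu_read_sys_reg(): hit the CPU copy if loaded, memory otherwise */
	if (vcpu_get_flag(vcpu, SYSREGS_ON_CPU) &&
	    __vcpu_read_sys_reg_from_cpu(reg, &val))
		return val;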

Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h  | 6 ++----
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c | 4 ++--
 arch/arm64/kvm/sys_regs.c          | 4 ++--
 3 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 39da28f85045..ffbeb5f5692e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -410,10 +410,6 @@ struct kvm_vcpu_arch {
 	/* Additional reset state */
 	struct vcpu_reset_state	reset_state;
 
-	/* True when deferrable sysregs are loaded on the physical CPU,
-	 * see kvm_vcpu_load_sysregs_vhe and kvm_vcpu_put_sysregs_vhe. */
-	bool sysregs_loaded_on_cpu;
-
 	/* Guest PV state */
 	struct {
 		u64 last_steal;
@@ -520,6 +516,8 @@ struct kvm_vcpu_arch {
 #define ON_UNSUPPORTED_CPU	__vcpu_single_flag(sflags, BIT(2))
 /* WFIT instruction trapped */
 #define IN_WFIT			__vcpu_single_flag(sflags, BIT(3))
+/* vcpu system registers loaded on physical CPU */
+#define SYSREGS_ON_CPU		__vcpu_single_flag(sflags, BIT(4))
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
index 007a12dd4351..7b44f6b3b547 100644
--- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
@@ -79,7 +79,7 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
 	__sysreg_restore_user_state(guest_ctxt);
 	__sysreg_restore_el1_state(guest_ctxt);
 
-	vcpu->arch.sysregs_loaded_on_cpu = true;
+	vcpu_set_flag(vcpu, SYSREGS_ON_CPU);
 
 	activate_traps_vhe_load(vcpu);
 }
@@ -110,5 +110,5 @@ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu)
 	/* Restore host user state */
 	__sysreg_restore_user_state(host_ctxt);
 
-	vcpu->arch.sysregs_loaded_on_cpu = false;
+	vcpu_clear_flag(vcpu, SYSREGS_ON_CPU);
 }
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f24797c57df8..1c562bcfeccf 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -72,7 +72,7 @@ u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
 {
 	u64 val = 0x8badf00d8badf00d;
 
-	if (vcpu->arch.sysregs_loaded_on_cpu &&
+	if (vcpu_get_flag(vcpu, SYSREGS_ON_CPU) &&
 	    __vcpu_read_sys_reg_from_cpu(reg, &val))
 		return val;
 
@@ -81,7 +81,7 @@ u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
 
 void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
 {
-	if (vcpu->arch.sysregs_loaded_on_cpu &&
+	if (vcpu_get_flag(vcpu, SYSREGS_ON_CPU) &&
 	    __vcpu_write_sys_reg_to_cpu(val, reg))
 		return;
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 15/19] KVM: arm64: Warn when PENDING_EXCEPTION and INCREMENT_PC are set together
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
                   ` (13 preceding siblings ...)
  2022-06-10  9:28 ` [PATCH v2 14/19] KVM: arm64: Convert vcpu sysregs_loaded_on_cpu to a state flag Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 16/19] KVM: arm64: Add build-time sanity checks for flags Marc Zyngier
                   ` (4 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

We really don't want PENDING_EXCEPTION and INCREMENT_PC to ever be
set at the same time, as they are mutually exclusive. Add checks
that will generate a warning should this ever happen.
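
As an example, a purely hypothetical buggy sequence such as:

	kvm_pend_exception(vcpu, EXCEPT_AA64_EL1_SYNC);
	/* ... without the exception having been committed ... */
	kvm_incr_pc(vcpu);	/* WARN_ON(PENDING_EXCEPTION) fires here */

now screams at runtime instead of silently leaving the two flags
fighting each other (the exception flag name above is only there for
the sake of the example).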

Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_emulate.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 6ec58080ece8..9bdba47f7e14 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -473,11 +473,13 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu,
 
 static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
 {
+	WARN_ON(vcpu_get_flag(vcpu, PENDING_EXCEPTION));
 	vcpu_set_flag(vcpu, INCREMENT_PC);
 }
 
 #define kvm_pend_exception(v, e)					\
 	do {								\
+		WARN_ON(vcpu_get_flag((v), INCREMENT_PC));		\
 		vcpu_set_flag((v), PENDING_EXCEPTION);			\
 		vcpu_set_flag((v), e);					\
 	} while (0)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 16/19] KVM: arm64: Add build-time sanity checks for flags
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
                   ` (14 preceding siblings ...)
  2022-06-10  9:28 ` [PATCH v2 15/19] KVM: arm64: Warn when PENDING_EXCEPTION and INCREMENT_PC are set together Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 17/19] KVM: arm64: Reduce the size of the vcpu flag members Marc Zyngier
                   ` (3 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

Flags are great, but flags can also be dangerous: it is easy
to encode a flag that is bigger than its container (unless the
container is a u64), and it is easy to construct a flag value
that doesn't fit in the mask that is associated with it.

Add a couple of build-time sanity checks that ensure we catch
these two cases.
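
For instance, a (deliberately contrived) flag whose value escapes its
mask now fails the build instead of silently corrupting its neighbours;
the triplet below is hand-written only to show the check firing:

	/* value 0x6 doesn't fit in mask 0x3: the HWEIGHT check trips */
	#define BROKEN_FLAG	iflags, 0x6, 0x3

	vcpu_set_flag(vcpu, BROKEN_FLAG);	/* BUILD_BUG_ON() */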

Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ffbeb5f5692e..6a37018f40b7 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -433,8 +433,20 @@ struct kvm_vcpu_arch {
 #define __unpack_flag(_set, _f, _m)	_f
 #define unpack_vcpu_flag(...)		__unpack_flag(__VA_ARGS__)
 
+#define __build_check_flag(v, flagset, f, m)			\
+	do {							\
+		typeof(v->arch.flagset) *_fset;			\
+								\
+		/* Check that the flags fit in the mask */	\
+		BUILD_BUG_ON(HWEIGHT(m) != HWEIGHT((f) | (m)));	\
+		/* Check that the flags fit in the type */	\
+		BUILD_BUG_ON((sizeof(*_fset) * 8) <= __fls(m));	\
+	} while (0)
+
 #define __vcpu_get_flag(v, flagset, f, m)			\
 	({							\
+		__build_check_flag(v, flagset, f, m);		\
+								\
 		v->arch.flagset & (m);				\
 	})
 
@@ -442,6 +454,8 @@ struct kvm_vcpu_arch {
 	do {							\
 		typeof(v->arch.flagset) *fset;			\
 								\
+		__build_check_flag(v, flagset, f, m);		\
+								\
 		fset = &v->arch.flagset;			\
 		if (HWEIGHT(m) > 1)				\
 			*fset &= ~(m);				\
@@ -452,6 +466,8 @@ struct kvm_vcpu_arch {
 	do {							\
 		typeof(v->arch.flagset) *fset;			\
 								\
+		__build_check_flag(v, flagset, f, m);		\
+								\
 		fset = &v->arch.flagset;			\
 		*fset &= ~(m);					\
 	} while (0)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 17/19] KVM: arm64: Reduce the size of the vcpu flag members
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
                   ` (15 preceding siblings ...)
  2022-06-10  9:28 ` [PATCH v2 16/19] KVM: arm64: Add build-time sanity checks for flags Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 18/19] KVM: arm64: Document why pause cannot be turned into a flag Marc Zyngier
                   ` (2 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

Now that we can detect flags overflowing their container, reduce
the size of all flag set members in the vcpu struct, turning them
into 8bit quantities.

Even with the FP state enum occupying 32bit, the whole of the state
that was represented by flags is smaller by one byte. Profit!
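
The build-time checks from the previous patch make the shrink safe: a
(hypothetical) flag that doesn't fit in 8 bits now refuses to build,
e.g.:

	#define TOO_BIG		__vcpu_single_flag(sflags, BIT(8))

	vcpu_set_flag(vcpu, TOO_BIG);	/* BUILD_BUG_ON(): bit 8 doesn't fit in a u8 */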

Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 6a37018f40b7..c6975ecf5a5f 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -333,13 +333,13 @@ struct kvm_vcpu_arch {
 	} fp_state;
 
 	/* Configuration flags, set once and for all before the vcpu can run */
-	u64 cflags;
+	u8 cflags;
 
 	/* Input flags to the hypervisor code, potentially cleared after use */
-	u64 iflags;
+	u8 iflags;
 
 	/* State flags for kernel bookkeeping, unused by the hypervisor code */
-	u64 sflags;
+	u8 sflags;
 
 	/*
 	 * We maintain more than a single set of debug registers to support
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 18/19] KVM: arm64: Document why pause cannot be turned into a flag
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
                   ` (16 preceding siblings ...)
  2022-06-10  9:28 ` [PATCH v2 17/19] KVM: arm64: Reduce the size of the vcpu flag members Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-10  9:28 ` [PATCH v2 19/19] KVM: arm64: Move the handling of !FP outside of the fast path Marc Zyngier
  2022-06-29  9:58 ` [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
  19 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

It would be tempting to turn the 'pause' state into a flag.

However, this cannot easily be done, as it is updated from outside of
the vcpu context, while all the flags are expected to only be updated
from the vcpu thread. Turning it into a flag would require making all
flag updates atomic, which isn't necessarily desirable.

Document this, and take this opportunity to move the field next
to the flag sets, filling a hole in the vcpu structure.
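
For the record, the problem with making it a flag would be the usual
lost update: the flag helpers are plain read-modify-write sequences on
the flag set, so a remote 'pause' update could race with a concurrent
update from the vcpu thread. Sketch only, with PAUSE being the
hypothetical flag this patch argues against:

	/* vcpu thread */
	vcpu_set_flag(vcpu, IN_WFIT);	/* non-atomic: load, OR, store */

	/* remote context, racing with the above */
	vcpu_set_flag(vcpu, PAUSE);	/* one of the two updates can be lost */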

Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index c6975ecf5a5f..2cc42e1fec18 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -341,6 +341,15 @@ struct kvm_vcpu_arch {
 	/* State flags for kernel bookkeeping, unused by the hypervisor code */
 	u8 sflags;
 
+	/*
+	 * Don't run the guest (internal implementation need).
+	 *
+	 * Contrary to the flags above, this is set/cleared outside of
+	 * a vcpu context, and thus cannot be mixed with the flags
+	 * themselves (or the flag accesses need to be made atomic).
+	 */
+	bool pause;
+
 	/*
 	 * We maintain more than a single set of debug registers to support
 	 * debugging the guest from the host and to maintain separate host and
@@ -394,9 +403,6 @@ struct kvm_vcpu_arch {
 	/* vcpu power state */
 	struct kvm_mp_state mp_state;
 
-	/* Don't run the guest (internal implementation need) */
-	bool pause;
-
 	/* Cache some mmu pages needed inside spinlock regions */
 	struct kvm_mmu_memory_cache mmu_page_cache;
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 19/19] KVM: arm64: Move the handling of !FP outside of the fast path
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
                   ` (17 preceding siblings ...)
  2022-06-10  9:28 ` [PATCH v2 18/19] KVM: arm64: Document why pause cannot be turned into a flag Marc Zyngier
@ 2022-06-10  9:28 ` Marc Zyngier
  2022-06-14  4:15   ` Reiji Watanabe
  2022-06-29  9:58 ` [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
  19 siblings, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2022-06-10  9:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown,
	Reiji Watanabe, kernel-team

We currently start by assuming that the host owns the FP unit
at load time, then check again whether this is the case as
we are about to run. Only at this point do we account for the
fact that there is a (vanishingly small) chance that we're running
on a system without an FPSIMD unit (yes, this is madness).

We can actually move this FPSIMD check as early as load-time,
and drop the check at run time.

No intended change in behaviour.
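
Condensed, the resulting flow is (a sketch of the hunks below):

	/* kvm_arch_vcpu_create(): nobody owns the FP state by default */
	vcpu->arch.fp_state = FP_STATE_FREE;

	/* kvm_arch_vcpu_load_fp(): only claim it for the host if FP exists */
	if (!system_supports_fpsimd())
		return;
	vcpu->arch.fp_state = FP_STATE_HOST_OWNED;

	/* kvm_arch_vcpu_ctxflush_fp(): no FPSIMD check left on the fast path */
	if (test_thread_flag(TIF_FOREIGN_FPSTATE))
		vcpu->arch.fp_state = FP_STATE_FREE;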

Suggested-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/arm.c    | 6 ++++++
 arch/arm64/kvm/fpsimd.c | 8 ++++----
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 8b9da9d30485..a9dd7ec38f38 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -328,6 +328,12 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 
 	vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
 
+	/*
+	 * Default value for the FP state, will be overloaded at load
+	 * time if we support FP (pretty likely)
+	 */
+	vcpu->arch.fp_state = FP_STATE_FREE;
+
 	/* Set up the timer */
 	kvm_timer_vcpu_init(vcpu);
 
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 557a96f8e06a..ec8e4494873d 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -77,6 +77,9 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 	BUG_ON(!current->mm);
 	BUG_ON(test_thread_flag(TIF_SVE));
 
+	if (!system_supports_fpsimd())
+		return;
+
 	vcpu->arch.fp_state = FP_STATE_HOST_OWNED;
 
 	vcpu_clear_flag(vcpu, HOST_SVE_ENABLED);
@@ -110,13 +113,10 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
  * FP while we were preemptible (such as off the back of an interrupt),
  * then neither the host nor the guest own the FP hardware (and it was the
  * responsibility of the code that used FP to save the existing state).
- *
- * Note that not supporting FP is basically the same thing as far as the
- * hypervisor is concerned (nothing to save).
  */
 void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu)
 {
-	if (!system_supports_fpsimd() || test_thread_flag(TIF_FOREIGN_FPSTATE))
+	if (test_thread_flag(TIF_FOREIGN_FPSTATE))
 		vcpu->arch.fp_state = FP_STATE_FREE;
 }
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 05/19] KVM: arm64: Add helpers to manipulate vcpu flags among a set
  2022-06-10  9:28 ` [PATCH v2 05/19] KVM: arm64: Add helpers to manipulate vcpu flags among a set Marc Zyngier
@ 2022-06-10 13:05   ` Fuad Tabba
  2022-06-11 18:37   ` Reiji Watanabe
  1 sibling, 0 replies; 28+ messages in thread
From: Fuad Tabba @ 2022-06-10 13:05 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Alexandru Elisei, Oliver Upton, Will Deacon, Quentin Perret,
	Mark Brown, Reiji Watanabe, kernel-team

Hi Marc,

On Fri, Jun 10, 2022 at 10:28 AM Marc Zyngier <maz@kernel.org> wrote:
>
> Careful analysis of the vcpu flags show that this is a mix of
> configuration, communication between the host and the hypervisor,
> as well as anciliary state that has no consistency. It'd be a lot
> better if we could split these flags into consistent categories.
>
> However, even if we split these flags apart, we want to make sure
> that each flag can only be applied to its own set, and not across
> sets.
>
> To achieve this, use a preprocessor hack so that each flag is always
> associated with:
>
> - the set that contains it,
>
> - a mask that describe all the bits that contain it (for a simple
>   flag, this is the same thing as the flag itself, but we will
>   eventually have values that cover multiple bits at once).
>
> Each flag is thus a triplet that is not directly usable as a value,
> but used by three helpers that allow the flag to be set, cleared,
> and fetched. By mandating the use of such helper, we can easily
> enforce that a flag can only be used with the set it belongs to.
>
> Finally, one last helper "unpacks" the raw value from the triplet
> that represents a flag, which is useful for multi-bit values that
> need to be enumerated (in a switch statement, for example).
>
> Further patches will start making use of this infrastructure.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_host.h | 44 +++++++++++++++++++++++++++++++
>  1 file changed, 44 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 372c5642cfab..6d30ac7e3164 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -415,6 +415,50 @@ struct kvm_vcpu_arch {
>         } steal;
>  };
>
> +/*
> + * Each 'flag' is composed of a comma-separated triplet:
> + *
> + * - the flag-set it belongs to in the vcpu->arch structure
> + * - the value for that flag
> + * - the mask for that flag
> + *
> + *  __vcpu_single_flag() builds such a triplet for a single-bit flag.
> + * unpack_vcpu_flag() extract the flag value from the triplet for
> + * direct use outside of the flag accessors.
> + */
> +#define __vcpu_single_flag(_set, _f)   _set, (_f), (_f)
> +
> +#define __unpack_flag(_set, _f, _m)    _f
> +#define unpack_vcpu_flag(...)          __unpack_flag(__VA_ARGS__)
> +
> +#define __vcpu_get_flag(v, flagset, f, m)                      \
> +       ({                                                      \
> +               v->arch.flagset & (m);                          \
> +       })
> +
> +#define __vcpu_set_flag(v, flagset, f, m)                      \
> +       do {                                                    \
> +               typeof(v->arch.flagset) *fset;                  \
> +                                                               \
> +               fset = &v->arch.flagset;                        \
> +               if (HWEIGHT(m) > 1)                             \
> +                       *fset &= ~(m);                          \
> +               *fset |= (f);                                   \
> +       } while (0)
> +
> +#define __vcpu_clear_flag(v, flagset, f, m)                    \
> +       do {                                                    \
> +               typeof(v->arch.flagset) *fset;                  \
> +                                                               \
> +               fset = &v->arch.flagset;                        \
> +               *fset &= ~(m);                                  \
> +       } while (0)
> +
> +#define vcpu_get_flag(v, ...)  __vcpu_get_flag((v), __VA_ARGS__)
> +#define vcpu_set_flag(v, ...)  __vcpu_set_flag((v), __VA_ARGS__)
> +#define vcpu_clear_flag(v, ...)        __vcpu_clear_flag((v), __VA_ARGS__)
> +
> +
>  /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
>  #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +     \
>                              sve_ffr_offset((vcpu)->arch.sve_max_vl))

A bit of macro magic going on here, but with some help I think I've
wrapped my head around it. With that

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad


> --
> 2.34.1
>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 05/19] KVM: arm64: Add helpers to manipulate vcpu flags among a set
  2022-06-10  9:28 ` [PATCH v2 05/19] KVM: arm64: Add helpers to manipulate vcpu flags among a set Marc Zyngier
  2022-06-10 13:05   ` Fuad Tabba
@ 2022-06-11 18:37   ` Reiji Watanabe
  2022-06-12  8:59     ` Marc Zyngier
  1 sibling, 1 reply; 28+ messages in thread
From: Reiji Watanabe @ 2022-06-11 18:37 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, Linux ARM, James Morse, Suzuki K Poulose,
	Alexandru Elisei, Oliver Upton, Will Deacon, Fuad Tabba,
	Quentin Perret, Mark Brown, kernel-team

On Fri, Jun 10, 2022 at 2:28 AM Marc Zyngier <maz@kernel.org> wrote:
>
> Careful analysis of the vcpu flags show that this is a mix of
> configuration, communication between the host and the hypervisor,
> as well as anciliary state that has no consistency. It'd be a lot
> better if we could split these flags into consistent categories.
>
> However, even if we split these flags apart, we want to make sure
> that each flag can only be applied to its own set, and not across
> sets.
>
> To achieve this, use a preprocessor hack so that each flag is always
> associated with:
>
> - the set that contains it,
>
> - a mask that describe all the bits that contain it (for a simple
>   flag, this is the same thing as the flag itself, but we will
>   eventually have values that cover multiple bits at once).
>
> Each flag is thus a triplet that is not directly usable as a value,
> but used by three helpers that allow the flag to be set, cleared,
> and fetched. By mandating the use of such helper, we can easily
> enforce that a flag can only be used with the set it belongs to.
>
> Finally, one last helper "unpacks" the raw value from the triplet
> that represents a flag, which is useful for multi-bit values that
> need to be enumerated (in a switch statement, for example).
>
> Further patches will start making use of this infrastructure.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_host.h | 44 +++++++++++++++++++++++++++++++
>  1 file changed, 44 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 372c5642cfab..6d30ac7e3164 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -415,6 +415,50 @@ struct kvm_vcpu_arch {
>         } steal;
>  };
>
> +/*
> + * Each 'flag' is composed of a comma-separated triplet:
> + *
> + * - the flag-set it belongs to in the vcpu->arch structure
> + * - the value for that flag
> + * - the mask for that flag
> + *
> + *  __vcpu_single_flag() builds such a triplet for a single-bit flag.
> + * unpack_vcpu_flag() extract the flag value from the triplet for
> + * direct use outside of the flag accessors.
> + */
> +#define __vcpu_single_flag(_set, _f)   _set, (_f), (_f)
> +
> +#define __unpack_flag(_set, _f, _m)    _f
> +#define unpack_vcpu_flag(...)          __unpack_flag(__VA_ARGS__)
> +
> +#define __vcpu_get_flag(v, flagset, f, m)                      \
> +       ({                                                      \
> +               v->arch.flagset & (m);                          \
> +       })
> +
> +#define __vcpu_set_flag(v, flagset, f, m)                      \
> +       do {                                                    \
> +               typeof(v->arch.flagset) *fset;                  \
> +                                                               \
> +               fset = &v->arch.flagset;                        \
> +               if (HWEIGHT(m) > 1)                             \
> +                       *fset &= ~(m);                          \
> +               *fset |= (f);                                   \
> +       } while (0)
> +
> +#define __vcpu_clear_flag(v, flagset, f, m)                    \
> +       do {                                                    \
> +               typeof(v->arch.flagset) *fset;                  \
> +                                                               \
> +               fset = &v->arch.flagset;                        \
> +               *fset &= ~(m);                                  \
> +       } while (0)

Reviewed-by: Reiji Watanabe <reijiw@google.com>

IMHO I would prefer to have 'v' enclosed in parentheses in the
implementation of __vcpu_{get,set,clear}_flag rather than in
the implementation of vcpu_{get,set,clear}_flag though.
(That was what I meant in my comment for v1)

Thanks,
Reiji

> +#define vcpu_get_flag(v, ...)  __vcpu_get_flag((v), __VA_ARGS__)
> +#define vcpu_set_flag(v, ...)  __vcpu_set_flag((v), __VA_ARGS__)
> +#define vcpu_clear_flag(v, ...)        __vcpu_clear_flag((v), __VA_ARGS__)
> +
> +
>  /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
>  #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +     \
>                              sve_ffr_offset((vcpu)->arch.sve_max_vl))
> --
> 2.34.1
>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 06/19] KVM: arm64: Add three sets of flags to the vcpu state
  2022-06-10  9:28 ` [PATCH v2 06/19] KVM: arm64: Add three sets of flags to the vcpu state Marc Zyngier
@ 2022-06-11 18:41   ` Reiji Watanabe
  0 siblings, 0 replies; 28+ messages in thread
From: Reiji Watanabe @ 2022-06-11 18:41 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, Linux ARM, James Morse, Suzuki K Poulose,
	Alexandru Elisei, Oliver Upton, Will Deacon, Fuad Tabba,
	Quentin Perret, Mark Brown, kernel-team

On Fri, Jun 10, 2022 at 2:29 AM Marc Zyngier <maz@kernel.org> wrote:
>
> It so appears that each of the vcpu flags is really belonging to
> one of three categories:
>
> - a configuration flag, set once and for all
> - an input flag generated by the kernel for the hypervisor to use
> - a state flag that is only for the kernel's own bookkeeping
>
> As we are going to split all the existing flags into these three
> sets, introduce all three in one go.
>
> No functional change other than a bit of bloat...
>
> Reviewed-by: Fuad Tabba <tabba@google.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_host.h | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 6d30ac7e3164..af45320f247f 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -335,6 +335,15 @@ struct kvm_vcpu_arch {
>         /* Miscellaneous vcpu state flags */
>         u64 flags;
>
> +       /* Configuration flags, set once and for all before the vcpu can run */
> +       u64 cflags;
> +
> +       /* Input flags to the hypervisor code, potentially cleared after use */
> +       u64 iflags;
> +
> +       /* State flags for kernel bookkeeping, unused by the hypervisor code */
> +       u64 sflags;

Reviewed-by: Reiji Watanabe <reijiw@google.com>

Thank you, those comments are more clear to me now.

Thanks,
Reiji


> +
>         /*
>          * We maintain more than a single set of debug registers to support
>          * debugging the guest from the host and to maintain separate host and
> --
> 2.34.1
>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 05/19] KVM: arm64: Add helpers to manipulate vcpu flags among a set
  2022-06-11 18:37   ` Reiji Watanabe
@ 2022-06-12  8:59     ` Marc Zyngier
  0 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-12  8:59 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: kvmarm, kvm, Linux ARM, James Morse, Suzuki K Poulose,
	Alexandru Elisei, Oliver Upton, Will Deacon, Fuad Tabba,
	Quentin Perret, Mark Brown, kernel-team

On Sat, 11 Jun 2022 19:37:50 +0100,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> On Fri, Jun 10, 2022 at 2:28 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> > Careful analysis of the vcpu flags show that this is a mix of
> > configuration, communication between the host and the hypervisor,
> > as well as anciliary state that has no consistency. It'd be a lot
> > better if we could split these flags into consistent categories.
> >
> > However, even if we split these flags apart, we want to make sure
> > that each flag can only be applied to its own set, and not across
> > sets.
> >
> > To achieve this, use a preprocessor hack so that each flag is always
> > associated with:
> >
> > - the set that contains it,
> >
> > - a mask that describe all the bits that contain it (for a simple
> >   flag, this is the same thing as the flag itself, but we will
> >   eventually have values that cover multiple bits at once).
> >
> > Each flag is thus a triplet that is not directly usable as a value,
> > but used by three helpers that allow the flag to be set, cleared,
> > and fetched. By mandating the use of such helper, we can easily
> > enforce that a flag can only be used with the set it belongs to.
> >
> > Finally, one last helper "unpacks" the raw value from the triplet
> > that represents a flag, which is useful for multi-bit values that
> > need to be enumerated (in a switch statement, for example).
> >
> > Further patches will start making use of this infrastructure.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_host.h | 44 +++++++++++++++++++++++++++++++
> >  1 file changed, 44 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 372c5642cfab..6d30ac7e3164 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -415,6 +415,50 @@ struct kvm_vcpu_arch {
> >         } steal;
> >  };
> >
> > +/*
> > + * Each 'flag' is composed of a comma-separated triplet:
> > + *
> > + * - the flag-set it belongs to in the vcpu->arch structure
> > + * - the value for that flag
> > + * - the mask for that flag
> > + *
> > + *  __vcpu_single_flag() builds such a triplet for a single-bit flag.
> > + * unpack_vcpu_flag() extract the flag value from the triplet for
> > + * direct use outside of the flag accessors.
> > + */
> > +#define __vcpu_single_flag(_set, _f)   _set, (_f), (_f)
> > +
> > +#define __unpack_flag(_set, _f, _m)    _f
> > +#define unpack_vcpu_flag(...)          __unpack_flag(__VA_ARGS__)
> > +
> > +#define __vcpu_get_flag(v, flagset, f, m)                      \
> > +       ({                                                      \
> > +               v->arch.flagset & (m);                          \
> > +       })
> > +
> > +#define __vcpu_set_flag(v, flagset, f, m)                      \
> > +       do {                                                    \
> > +               typeof(v->arch.flagset) *fset;                  \
> > +                                                               \
> > +               fset = &v->arch.flagset;                        \
> > +               if (HWEIGHT(m) > 1)                             \
> > +                       *fset &= ~(m);                          \
> > +               *fset |= (f);                                   \
> > +       } while (0)
> > +
> > +#define __vcpu_clear_flag(v, flagset, f, m)                    \
> > +       do {                                                    \
> > +               typeof(v->arch.flagset) *fset;                  \
> > +                                                               \
> > +               fset = &v->arch.flagset;                        \
> > +               *fset &= ~(m);                                  \
> > +       } while (0)
> 
> Reviewed-by: Reiji Watanabe <reijiw@google.com>
> 
> IMHO I would prefer to have 'v' enclosed in parentheses in the
> implementation of __vcpu_{get,set,clear}_flag rather than in
> the implementation of vcpu_{get,set,clear}_flag though.
> (That was what I meant in my comment for v1)

I understood what you were asking in the first place, but I don't think
this has any advantage over what is above. __vcpu_{get,set,clear}_flag
are not meant to be used directly, and adding extra bracketing to
these only makes them more painful to read.

If you can show a case where it breaks, I'll be happy to revisit this.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 19/19] KVM: arm64: Move the handling of !FP outside of the fast path
  2022-06-10  9:28 ` [PATCH v2 19/19] KVM: arm64: Move the handling of !FP outside of the fast path Marc Zyngier
@ 2022-06-14  4:15   ` Reiji Watanabe
  0 siblings, 0 replies; 28+ messages in thread
From: Reiji Watanabe @ 2022-06-14  4:15 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, Linux ARM, James Morse, Suzuki K Poulose,
	Alexandru Elisei, Oliver Upton, Will Deacon, Fuad Tabba,
	Quentin Perret, Mark Brown, kernel-team

On Fri, Jun 10, 2022 at 2:35 AM Marc Zyngier <maz@kernel.org> wrote:
>
> We currently start by assuming that the host owns the FP unit
> at load time, then check again whether this is the case as
> we are about to run. Only at this point do we account for the
> fact that there is a (vanishingly small) chance that we're running
> on a system without a FPSIMD unit (yes, this is madness).
>
> We can actually move this FPSIMD check as early as load-time,
> and drop the check at run time.
>
> No intended change in behaviour.
>
> Suggested-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

For patches 9~19 in the series,
Reviewed-by: Reiji Watanabe <reijiw@google.com>

Thanks,
Reiji

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 11/19] KVM: arm64: Move vcpu ON_UNSUPPORTED_CPU flag to the state flag set
  2022-06-10  9:28 ` [PATCH v2 11/19] KVM: arm64: Move vcpu ON_UNSUPPORTED_CPU flag " Marc Zyngier
@ 2022-06-15 13:14   ` Alexandru Elisei
  2022-06-15 13:23     ` Marc Zyngier
  0 siblings, 1 reply; 28+ messages in thread
From: Alexandru Elisei @ 2022-06-15 13:14 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Will Deacon, Fuad Tabba, Quentin Perret,
	Mark Brown, Reiji Watanabe, kernel-team

Hi,

On Fri, Jun 10, 2022 at 10:28:30AM +0100, Marc Zyngier wrote:
> The ON_UNSUPPORTED_CPU flag is only there to track the sad fact
> that we have ended-up on a CPU where we cannot really run.
> 
> Since this is only for the host kernel's use, move it to the state
> set.
> 
> Reviewed-by: Fuad Tabba <tabba@google.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_host.h | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 4f147bdc5ce9..0c22514cb7c7 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -519,6 +519,8 @@ struct kvm_vcpu_arch {
>  #define HOST_SVE_ENABLED	__vcpu_single_flag(sflags, BIT(0))
>  /* SME enabled for EL0 */
>  #define HOST_SME_ENABLED	__vcpu_single_flag(sflags, BIT(1))
> +/* Physical CPU not in supported_cpus */
> +#define ON_UNSUPPORTED_CPU	__vcpu_single_flag(sflags, BIT(2))

I'm a bit confused here. The ON_UNSUPPORTED_CPU flag ends up in sflags. The
comment for sflags says:

+	/* State flags for kernel bookkeeping, unused by the hypervisor code */
+	u64 sflags;

The ON_UNSUPPORTED_CPU flag is used exclusively by KVM (it's only used by the
file arch/arm64/kvm/arm.c), so why is it part of a set of flags which are
supposed to be unused by the hypervisor code?

Thanks,
Alex

>  
>  /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
>  #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
> @@ -541,7 +543,6 @@ struct kvm_vcpu_arch {
>  })
>  
>  /* vcpu_arch flags field values: */
> -#define KVM_ARM64_ON_UNSUPPORTED_CPU	(1 << 15) /* Physical CPU not in supported_cpus */
>  #define KVM_ARM64_WFIT			(1 << 17) /* WFIT instruction trapped */
>  #define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | \
>  				 KVM_GUESTDBG_USE_SW_BP | \
> @@ -561,13 +562,13 @@ struct kvm_vcpu_arch {
>  #endif
>  
>  #define vcpu_on_unsupported_cpu(vcpu)					\
> -	((vcpu)->arch.flags & KVM_ARM64_ON_UNSUPPORTED_CPU)
> +	vcpu_get_flag(vcpu, ON_UNSUPPORTED_CPU)
>  
>  #define vcpu_set_on_unsupported_cpu(vcpu)				\
> -	((vcpu)->arch.flags |= KVM_ARM64_ON_UNSUPPORTED_CPU)
> +	vcpu_set_flag(vcpu, ON_UNSUPPORTED_CPU)
>  
>  #define vcpu_clear_on_unsupported_cpu(vcpu)				\
> -	((vcpu)->arch.flags &= ~KVM_ARM64_ON_UNSUPPORTED_CPU)
> +	vcpu_clear_flag(vcpu, ON_UNSUPPORTED_CPU)
>  
>  #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.regs)
>  
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 11/19] KVM: arm64: Move vcpu ON_UNSUPPORTED_CPU flag to the state flag set
  2022-06-15 13:14   ` Alexandru Elisei
@ 2022-06-15 13:23     ` Marc Zyngier
  0 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-15 13:23 UTC (permalink / raw)
  To: Alexandru Elisei
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Will Deacon, Fuad Tabba, Quentin Perret,
	Mark Brown, Reiji Watanabe, kernel-team

On Wed, 15 Jun 2022 14:14:19 +0100,
Alexandru Elisei <alexandru.elisei@arm.com> wrote:
> 
> Hi,
> 
> On Fri, Jun 10, 2022 at 10:28:30AM +0100, Marc Zyngier wrote:
> > The ON_UNSUPPORTED_CPU flag is only there to track the sad fact
> > that we have ended-up on a CPU where we cannot really run.
> > 
> > Since this is only for the host kernel's use, move it to the state
> > set.
> > 
> > Reviewed-by: Fuad Tabba <tabba@google.com>
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_host.h | 9 +++++----
> >  1 file changed, 5 insertions(+), 4 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 4f147bdc5ce9..0c22514cb7c7 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -519,6 +519,8 @@ struct kvm_vcpu_arch {
> >  #define HOST_SVE_ENABLED	__vcpu_single_flag(sflags, BIT(0))
> >  /* SME enabled for EL0 */
> >  #define HOST_SME_ENABLED	__vcpu_single_flag(sflags, BIT(1))
> > +/* Physical CPU not in supported_cpus */
> > +#define ON_UNSUPPORTED_CPU	__vcpu_single_flag(sflags, BIT(2))
> 
> I'm a bit confused here. The ON_UNSUPPORTED_CPU flag ends up in sflags. The
> comment for sflags says:
> 
> +	/* State flags for kernel bookkeeping, unused by the hypervisor code */
> +	u64 sflags;
> 
> The ON_UNSUPPORTED_CPU flag is used exclusively by KVM (it's only used by the
> file arch/arm64/kvm/arm.c), so why is it part of a set of flags which are
> supposed to be unused by the hypervisor code?

Are we going to have the same terminology discussion we had when you
reviewed the NV patches?

These flags are only used by code that isn't involved in any sort of
world switching. If you are running nVHE, these flags are not used by
the EL2 code.

That's what 'hypervisor code' means in this context.

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags
  2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
                   ` (18 preceding siblings ...)
  2022-06-10  9:28 ` [PATCH v2 19/19] KVM: arm64: Move the handling of !FP outside of the fast path Marc Zyngier
@ 2022-06-29  9:58 ` Marc Zyngier
  19 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-06-29  9:58 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, Marc Zyngier, kvm
  Cc: Fuad Tabba, Reiji Watanabe, Will Deacon, kernel-team,
	James Morse, Mark Brown, Oliver Upton, Suzuki K Poulose,
	Alexandru Elisei, Quentin Perret

On Fri, 10 Jun 2022 10:28:19 +0100, Marc Zyngier wrote:
> This is a iteration on [1], which aims at making the vcpu flags suck a
> bit less.
> 
> * From v1 [1]:
>   - Rebased onto v5.19-rc1
>   - Took the first two patches into kvmarm-fixes, included here for
>     completeness
>   - Additional patch to move system_supports_fpsimd() outside of
>     the run path (Reiji)
>   - Expanded on comments (Reiji)
>   - New kvm_pend_exception() accessor (Fuad)
>   - Various bracketing fixups (Reiji)
>   - Some renaming (Reiji, Broonie)
>   - Collected RBs, with thanks
> 
> [...]

Applied to next, thanks!

[01/19] KVM: arm64: Always start with clearing SVE flag on load
        commit: d52d165d67c5aa26c8c89909003c94a66492d23d
[02/19] KVM: arm64: Always start with clearing SME flag on load
        commit: 039f49c4cafb785504c678f28664d088e0108d35
[03/19] KVM: arm64: Drop FP_FOREIGN_STATE from the hypervisor code
        commit: e9ada6c208c15c907afe5afb1aa82e23e81eb8ba
[04/19] KVM: arm64: Move FP state ownership from flag to a tristate
        commit: f8077b0d59230cbb58e0b98839e04b564529a5ac
[05/19] KVM: arm64: Add helpers to manipulate vcpu flags among a set
        commit: e87abb73e5946379896cf49b10f6b57e02937a4c
[06/19] KVM: arm64: Add three sets of flags to the vcpu state
        commit: 690bacb83bc30d14821bd32cac1c5839b4a9ac6c
[07/19] KVM: arm64: Move vcpu configuration flags into their own set
        commit: 4c0680d394d8a77868049931101e4a59372346b5
[08/19] KVM: arm64: Move vcpu PC/Exception flags to the input flag set
        commit: 699bb2e0c6f3796549dabac329501df7ffd99439
[09/19] KVM: arm64: Move vcpu debug/SPE/TRBE flags to the input flag set
        commit: b1da49088ac68a21c613efd734dada8272ec0b00
[10/19] KVM: arm64: Move vcpu SVE/SME flags to the state flag set
        commit: 0affa37fcd1d6f701a0fe805c4ceb7f348d377d5
[11/19] KVM: arm64: Move vcpu ON_UNSUPPORTED_CPU flag to the state flag set
        commit: aff3ccd7320eed5814d317fcb80244f474d66a84
[12/19] KVM: arm64: Move vcpu WFIT flag to the state flag set
        commit: eebc538d8e07e0ec759823664cbe2011a8bd885d
[13/19] KVM: arm64: Kill unused vcpu flags field
        commit: 781e3ae148fd2f9b0cf9b5b94f6c32f2361eb7c0
[14/19] KVM: arm64: Convert vcpu sysregs_loaded_on_cpu to a state flag
        commit: 30b6ab45f81334e83dcb440451b6a4c4a753a118
[15/19] KVM: arm64: Warn when PENDING_EXCEPTION and INCREMENT_PC are set together
        commit: e19f2c6cd14668c0d5b1cef280632b7ca5893118
[16/19] KVM: arm64: Add build-time sanity checks for flags
        commit: 5a3984f4ec73d1c7cf31a4cee46cca7d4c75deee
[17/19] KVM: arm64: Reduce the size of the vcpu flag members
        commit: 54ddda919c4bc37c113727034619c4e15c184334
[18/19] KVM: arm64: Document why pause cannot be turned into a flag
        commit: 0fa4a3137e943cd6acab386ff26cd8d5e94e9559
[19/19] KVM: arm64: Move the handling of !FP outside of the fast path
        commit: b4da91879e98bdd5998ee84f47f02426ac50a729

Cheers,

	M.
-- 
Marc Zyngier <maz@kernel.org>


^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2022-06-29  9:59 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-06-10  9:28 [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
2022-06-10  9:28 ` [PATCH v2 01/19] KVM: arm64: Always start with clearing SVE flag on load Marc Zyngier
2022-06-10  9:28 ` [PATCH v2 02/19] KVM: arm64: Always start with clearing SME " Marc Zyngier
2022-06-10  9:28 ` [PATCH v2 03/19] KVM: arm64: Drop FP_FOREIGN_STATE from the hypervisor code Marc Zyngier
2022-06-10  9:28 ` [PATCH v2 04/19] KVM: arm64: Move FP state ownership from flag to a tristate Marc Zyngier
2022-06-10  9:28 ` [PATCH v2 05/19] KVM: arm64: Add helpers to manipulate vcpu flags among a set Marc Zyngier
2022-06-10 13:05   ` Fuad Tabba
2022-06-11 18:37   ` Reiji Watanabe
2022-06-12  8:59     ` Marc Zyngier
2022-06-10  9:28 ` [PATCH v2 06/19] KVM: arm64: Add three sets of flags to the vcpu state Marc Zyngier
2022-06-11 18:41   ` Reiji Watanabe
2022-06-10  9:28 ` [PATCH v2 07/19] KVM: arm64: Move vcpu configuration flags into their own set Marc Zyngier
2022-06-10  9:28 ` [PATCH v2 08/19] KVM: arm64: Move vcpu PC/Exception flags to the input flag set Marc Zyngier
2022-06-10  9:28 ` [PATCH v2 09/19] KVM: arm64: Move vcpu debug/SPE/TRBE " Marc Zyngier
2022-06-10  9:28 ` [PATCH v2 10/19] KVM: arm64: Move vcpu SVE/SME flags to the state " Marc Zyngier
2022-06-10  9:28 ` [PATCH v2 11/19] KVM: arm64: Move vcpu ON_UNSUPPORTED_CPU flag " Marc Zyngier
2022-06-15 13:14   ` Alexandru Elisei
2022-06-15 13:23     ` Marc Zyngier
2022-06-10  9:28 ` [PATCH v2 12/19] KVM: arm64: Move vcpu WFIT " Marc Zyngier
2022-06-10  9:28 ` [PATCH v2 13/19] KVM: arm64: Kill unused vcpu flags field Marc Zyngier
2022-06-10  9:28 ` [PATCH v2 14/19] KVM: arm64: Convert vcpu sysregs_loaded_on_cpu to a state flag Marc Zyngier
2022-06-10  9:28 ` [PATCH v2 15/19] KVM: arm64: Warn when PENDING_EXCEPTION and INCREMENT_PC are set together Marc Zyngier
2022-06-10  9:28 ` [PATCH v2 16/19] KVM: arm64: Add build-time sanity checks for flags Marc Zyngier
2022-06-10  9:28 ` [PATCH v2 17/19] KVM: arm64: Reduce the size of the vcpu flag members Marc Zyngier
2022-06-10  9:28 ` [PATCH v2 18/19] KVM: arm64: Document why pause cannot be turned into a flag Marc Zyngier
2022-06-10  9:28 ` [PATCH v2 19/19] KVM: arm64: Move the handling of !FP outside of the fast path Marc Zyngier
2022-06-14  4:15   ` Reiji Watanabe
2022-06-29  9:58 ` [PATCH v2 00/19] KVM/arm64: Refactoring the vcpu flags Marc Zyngier

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).