* [PATCH 00/18] KVM/arm64: Refactoring the vcpu flags
@ 2022-05-28 11:38 ` Marc Zyngier
  0 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

While working on pKVM, it slowly became apparent that dealing with the
flags was a pain, as they serve multiple purposes:

- some flags are purely a configuration artefact,

- some are an input from the host kernel to the world switch,

- a bunch of them are bookkeeping information for the kernel itself,

- and finally some form a state machine between the host and the world
  switch.

Given that, it became pretty hard to clearly delineate what needed to
be conveyed between the host view of a vcpu and the shadow copy the
world switch deals with, both on entry and exit. This has led to a
flurry of bad bugs when developing the feature, and it is time to put
some order in this mess.

This series is roughly split in four parts:

- patch 1 addresses an embarrassing bug that would leave SVE enabled
  for host EL0 once the vcpu had the flag set (it was never cleared),
  and patch 2 fixes the same bug for SME, which copied the bad
  behaviour (both patches are fix candidates for -rc1, and the first
  one carries a Cc to stable).

- patches 3 and 4 rid us of the FP flags altogether, as they really
  form a state machine that is better represented with an enum instead
  of dubious bit fiddling in both directions.

- patches 5 through 14 split all the flags into three distinct
  categories: configuration, input to the world switch, and host
  state, using some ugly^Wbeautiful^Wquestionable cpp tricks.

- finally, the last patches add some cheap hardening and size
  optimisation to the new flags.

With that in place, it should be much easier to reason about which
flags need to be synchronised at runtime, and in which direction (for
pKVM, this is only a subset of the input flags, and nothing else).

This has been lightly tested on both VHE and nVHE systems, but not
with pKVM itself (there is a bit of work to rebase it on top of this
infrastructure). Patches are on top of kvmarm-5.19 (there is a minor
conflict with Linus' current tree).

Marc Zyngier (18):
  KVM: arm64: Always start with clearing SVE flag on load
  KVM: arm64: Always start with clearing SME flag on load
  KVM: arm64: Drop FP_FOREIGN_STATE from the hypervisor code
  KVM: arm64: Move FP state ownership from flag to a tristate
  KVM: arm64: Add helpers to manipulate vcpu flags among a set
  KVM: arm64: Add three sets of flags to the vcpu state
  KVM: arm64: Move vcpu configuration flags into their own set
  KVM: arm64: Move vcpu PC/Exception flags to the input flag set
  KVM: arm64: Move vcpu debug/SPE/TRBE flags to the input flag set
  KVM: arm64: Move vcpu SVE/SME flags to the state flag set
  KVM: arm64: Move vcpu ON_UNSUPPORTED_CPU flag to the state flag set
  KVM: arm64: Move vcpu WFIT flag to the state flag set
  KVM: arm64: Kill unused vcpu flags field
  KVM: arm64: Convert vcpu sysregs_loaded_on_cpu to a state flag
  KVM: arm64: Warn when PENDING_EXCEPTION and INCREMENT_PC are set
    together
  KVM: arm64: Add build-time sanity checks for flags
  KVM: arm64: Reduce the size of the vcpu flag members
  KVM: arm64: Document why pause cannot be turned into a flag

 arch/arm64/include/asm/kvm_emulate.h       |   3 +-
 arch/arm64/include/asm/kvm_host.h          | 192 +++++++++++++++------
 arch/arm64/kvm/arch_timer.c                |   2 +-
 arch/arm64/kvm/arm.c                       |   6 +-
 arch/arm64/kvm/debug.c                     |  22 +--
 arch/arm64/kvm/fpsimd.c                    |  36 ++--
 arch/arm64/kvm/handle_exit.c               |   2 +-
 arch/arm64/kvm/hyp/exception.c             |  23 ++-
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h  |   6 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h    |  24 +--
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h |   4 +-
 arch/arm64/kvm/hyp/nvhe/debug-sr.c         |   8 +-
 arch/arm64/kvm/hyp/nvhe/switch.c           |   6 +-
 arch/arm64/kvm/hyp/nvhe/sys_regs.c         |   7 +-
 arch/arm64/kvm/hyp/vhe/switch.c            |   4 +-
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c         |   4 +-
 arch/arm64/kvm/inject_fault.c              |  30 ++--
 arch/arm64/kvm/reset.c                     |   6 +-
 arch/arm64/kvm/sys_regs.c                  |  12 +-
 19 files changed, 238 insertions(+), 159 deletions(-)

-- 
2.34.1


* [PATCH 01/18] KVM: arm64: Always start with clearing SVE flag on load
  2022-05-28 11:38 ` Marc Zyngier
@ 2022-05-28 11:38   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team,
	stable

On each vcpu load, we set the KVM_ARM64_HOST_SVE_ENABLED
flag if SVE is enabled for EL0 on the host. This is used to restore
the correct state on vcpu put.

However, it appears that nothing ever clears this flag. Once
set, it will stick until the vcpu is destroyed, which has the
potential to spuriously enable SVE for userspace.

We probably never saw the issue because no VMM uses SVE, but
that's still pretty bad. Unconditionally clearing the flag
on vcpu load addresses the issue.

Fixes: 8383741ab2e7 ("KVM: arm64: Get rid of host SVE tracking/saving")
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
---
 arch/arm64/kvm/fpsimd.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 441edb9c398c..3c2cfc3adc51 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -80,6 +80,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 	vcpu->arch.flags &= ~KVM_ARM64_FP_ENABLED;
 	vcpu->arch.flags |= KVM_ARM64_FP_HOST;
 
+	vcpu->arch.flags &= ~KVM_ARM64_HOST_SVE_ENABLED;
 	if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
 		vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED;
 
-- 
2.34.1



* [PATCH 02/18] KVM: arm64: Always start with clearing SME flag on load
  2022-05-28 11:38 ` Marc Zyngier
@ 2022-05-28 11:38   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

On each vcpu load, we set the KVM_ARM64_HOST_SME_ENABLED
flag if SME is enabled for EL0 on the host. This is used to
restore the correct state on vcpu put.

However, it appears that nothing ever clears this flag. Once
set, it will stick until the vcpu is destroyed, which has the
potential to spuriously enable SME for userspace. As it turns
out, this is due to the SME code being more or less copied from
SVE, and inheriting the same shortcomings.

We never saw the issue because nothing uses SME, and the amount
of testing is probably still pretty low.

Fixes: 861262ab8627 ("KVM: arm64: Handle SME host state when running guests")
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/fpsimd.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 3c2cfc3adc51..78b3f143a2d0 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -94,6 +94,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 	 * operations. Do this for ZA as well for now for simplicity.
 	 */
 	if (system_supports_sme()) {
+		vcpu->arch.flags &= ~KVM_ARM64_HOST_SME_ENABLED;
 		if (read_sysreg(cpacr_el1) & CPACR_EL1_SMEN_EL0EN)
 			vcpu->arch.flags |= KVM_ARM64_HOST_SME_ENABLED;
 
-- 
2.34.1



* [PATCH 03/18] KVM: arm64: Drop FP_FOREIGN_STATE from the hypervisor code
  2022-05-28 11:38 ` Marc Zyngier
@ 2022-05-28 11:38   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

The vcpu KVM_ARM64_FP_FOREIGN_FPSTATE flag tracks the thread's own
TIF_FOREIGN_FPSTATE so that, just before running the vcpu, we can
evaluate whether the FP regs contain something owned by the vcpu,
and update the rest of the FP flags accordingly.

We do this in the hypervisor code in order to make sure we're
in a context where we are not interruptible. But we already
have a hook in the run loop to generate this flag. We may as
well update the FP flags directly and save the pointless flag
tracking.

Whilst we're at it, rename update_fp_enabled() to guest_owns_fp_regs()
to indicate what the leftover of this helper actually does.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h       |  1 -
 arch/arm64/kvm/fpsimd.c                 | 17 ++++++++++-------
 arch/arm64/kvm/hyp/include/hyp/switch.h | 16 ++--------------
 arch/arm64/kvm/hyp/nvhe/switch.c        |  2 +-
 arch/arm64/kvm/hyp/vhe/switch.c         |  2 +-
 5 files changed, 14 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 026e91b8d00b..9252d71b4ac5 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -465,7 +465,6 @@ struct kvm_vcpu_arch {
 
 #define KVM_ARM64_DEBUG_STATE_SAVE_SPE	(1 << 12) /* Save SPE context if active  */
 #define KVM_ARM64_DEBUG_STATE_SAVE_TRBE	(1 << 13) /* Save TRBE context if active  */
-#define KVM_ARM64_FP_FOREIGN_FPSTATE	(1 << 14)
 #define KVM_ARM64_ON_UNSUPPORTED_CPU	(1 << 15) /* Physical CPU not in supported_cpus */
 #define KVM_ARM64_HOST_SME_ENABLED	(1 << 16) /* SME enabled for EL0 */
 #define KVM_ARM64_WFIT			(1 << 17) /* WFIT instruction trapped */
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 78b3f143a2d0..9ebd89541281 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -107,16 +107,19 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 }
 
 /*
- * Called just before entering the guest once we are no longer
- * preemptable. Syncs the host's TIF_FOREIGN_FPSTATE with the KVM
- * mirror of the flag used by the hypervisor.
+ * Called just before entering the guest once we are no longer preemptable
+ * and interrupts are disabled. If we have managed to run anything using
+ * FP while we were preemptible (such as off the back of an interrupt),
+ * then neither the host nor the guest own the FP hardware (and it was the
+ * responsibility of the code that used FP to save the existing state).
+ *
+ * Note that not supporting FP is basically the same thing as far as the
+ * hypervisor is concerned (nothing to save).
  */
 void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu)
 {
-	if (test_thread_flag(TIF_FOREIGN_FPSTATE))
-		vcpu->arch.flags |= KVM_ARM64_FP_FOREIGN_FPSTATE;
-	else
-		vcpu->arch.flags &= ~KVM_ARM64_FP_FOREIGN_FPSTATE;
+	if (!system_supports_fpsimd() || test_thread_flag(TIF_FOREIGN_FPSTATE))
+		vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST);
 }
 
 /*
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 5d31f6c64c8c..1209248d2a3d 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -37,21 +37,9 @@ struct kvm_exception_table_entry {
 extern struct kvm_exception_table_entry __start___kvm_ex_table;
 extern struct kvm_exception_table_entry __stop___kvm_ex_table;
 
-/* Check whether the FP regs were dirtied while in the host-side run loop: */
-static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
+/* Check whether the FP regs are owned by the guest */
+static inline bool guest_owns_fp_regs(struct kvm_vcpu *vcpu)
 {
-	/*
-	 * When the system doesn't support FP/SIMD, we cannot rely on
-	 * the _TIF_FOREIGN_FPSTATE flag. However, we always inject an
-	 * abort on the very first access to FP and thus we should never
-	 * see KVM_ARM64_FP_ENABLED. For added safety, make sure we always
-	 * trap the accesses.
-	 */
-	if (!system_supports_fpsimd() ||
-	    vcpu->arch.flags & KVM_ARM64_FP_FOREIGN_FPSTATE)
-		vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
-				      KVM_ARM64_FP_HOST);
-
 	return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED);
 }
 
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6db801db8f27..a6b9f1186577 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -43,7 +43,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 
 	val = vcpu->arch.cptr_el2;
 	val |= CPTR_EL2_TTA | CPTR_EL2_TAM;
-	if (!update_fp_enabled(vcpu)) {
+	if (!guest_owns_fp_regs(vcpu)) {
 		val |= CPTR_EL2_TFP | CPTR_EL2_TZ;
 		__activate_traps_fpsimd32(vcpu);
 	}
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 969f20daf97a..46f365254e9f 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -55,7 +55,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 
 	val |= CPTR_EL2_TAM;
 
-	if (update_fp_enabled(vcpu)) {
+	if (guest_owns_fp_regs(vcpu)) {
 		if (vcpu_has_sve(vcpu))
 			val |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN;
 	} else {
-- 
2.34.1


 extern struct kvm_exception_table_entry __stop___kvm_ex_table;
 
-/* Check whether the FP regs were dirtied while in the host-side run loop: */
-static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
+/* Check whether the FP regs are owned by the guest */
+static inline bool guest_owns_fp_regs(struct kvm_vcpu *vcpu)
 {
-	/*
-	 * When the system doesn't support FP/SIMD, we cannot rely on
-	 * the _TIF_FOREIGN_FPSTATE flag. However, we always inject an
-	 * abort on the very first access to FP and thus we should never
-	 * see KVM_ARM64_FP_ENABLED. For added safety, make sure we always
-	 * trap the accesses.
-	 */
-	if (!system_supports_fpsimd() ||
-	    vcpu->arch.flags & KVM_ARM64_FP_FOREIGN_FPSTATE)
-		vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
-				      KVM_ARM64_FP_HOST);
-
 	return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED);
 }
 
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6db801db8f27..a6b9f1186577 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -43,7 +43,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 
 	val = vcpu->arch.cptr_el2;
 	val |= CPTR_EL2_TTA | CPTR_EL2_TAM;
-	if (!update_fp_enabled(vcpu)) {
+	if (!guest_owns_fp_regs(vcpu)) {
 		val |= CPTR_EL2_TFP | CPTR_EL2_TZ;
 		__activate_traps_fpsimd32(vcpu);
 	}
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 969f20daf97a..46f365254e9f 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -55,7 +55,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 
 	val |= CPTR_EL2_TAM;
 
-	if (update_fp_enabled(vcpu)) {
+	if (guest_owns_fp_regs(vcpu)) {
 		if (vcpu_has_sve(vcpu))
 			val |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN;
 	} else {
-- 
2.34.1


* [PATCH 04/18] KVM: arm64: Move FP state ownership from flag to a tristate
  2022-05-28 11:38 ` Marc Zyngier
  (?)
@ 2022-05-28 11:38   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

The KVM FP code uses a pair of flags to denote three states:

- FP_ENABLED set: the guest owns the FP state
- FP_HOST set: the host owns the FP state
- FP_ENABLED and FP_HOST clear: nobody owns the FP state at all

and both flags set is an illegal state, which nothing ever checks
for...

As it turns out, this isn't really a good match for flags, and
we'd be better off if this was a simpler tristate, each state
having a name that actually reflects the state:

- FP_STATE_CLEAN
- FP_STATE_DIRTY_HOST
- FP_STATE_DIRTY_GUEST

Kill the two flags, and move over to an enum encoding these
three states. This results in less confusing code, and less risk of
ending up in the uncharted territory of a 4th state if we forget
to clear one of the two flags.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h       |  9 +++++++--
 arch/arm64/kvm/fpsimd.c                 | 11 +++++------
 arch/arm64/kvm/hyp/include/hyp/switch.h |  8 +++-----
 arch/arm64/kvm/hyp/nvhe/switch.c        |  4 ++--
 arch/arm64/kvm/hyp/vhe/switch.c         |  2 +-
 5 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9252d71b4ac5..a46f952b97f6 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -328,6 +328,13 @@ struct kvm_vcpu_arch {
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
 
+	/* Ownership of the FP regs */
+	enum {
+		FP_STATE_CLEAN,
+		FP_STATE_DIRTY_HOST,
+		FP_STATE_DIRTY_GUEST,
+	} fp_state;
+
 	/* Miscellaneous vcpu state flags */
 	u64 flags;
 
@@ -433,8 +440,6 @@ struct kvm_vcpu_arch {
 
 /* vcpu_arch flags field values: */
 #define KVM_ARM64_DEBUG_DIRTY		(1 << 0)
-#define KVM_ARM64_FP_ENABLED		(1 << 1) /* guest FP regs loaded */
-#define KVM_ARM64_FP_HOST		(1 << 2) /* host FP regs loaded */
 #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
 #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
 #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 9ebd89541281..0d82f6c5b110 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -77,8 +77,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 	BUG_ON(!current->mm);
 	BUG_ON(test_thread_flag(TIF_SVE));
 
-	vcpu->arch.flags &= ~KVM_ARM64_FP_ENABLED;
-	vcpu->arch.flags |= KVM_ARM64_FP_HOST;
+	vcpu->arch.fp_state = FP_STATE_DIRTY_HOST;
 
 	vcpu->arch.flags &= ~KVM_ARM64_HOST_SVE_ENABLED;
 	if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
@@ -100,7 +99,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 
 		if (read_sysreg_s(SYS_SVCR_EL0) &
 		    (SYS_SVCR_EL0_SM_MASK | SYS_SVCR_EL0_ZA_MASK)) {
-			vcpu->arch.flags &= ~KVM_ARM64_FP_HOST;
+			vcpu->arch.fp_state = FP_STATE_CLEAN;
 			fpsimd_save_and_flush_cpu_state();
 		}
 	}
@@ -119,7 +118,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu)
 {
 	if (!system_supports_fpsimd() || test_thread_flag(TIF_FOREIGN_FPSTATE))
-		vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST);
+		vcpu->arch.fp_state = FP_STATE_CLEAN;
 }
 
 /*
@@ -133,7 +132,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
 {
 	WARN_ON_ONCE(!irqs_disabled());
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
+	if (vcpu->arch.fp_state == FP_STATE_DIRTY_GUEST) {
 		/*
 		 * Currently we do not support SME guests so SVCR is
 		 * always 0 and we just need a variable to point to.
@@ -176,7 +175,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 					 CPACR_EL1_SMEN_EL1EN);
 	}
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
+	if (vcpu->arch.fp_state == FP_STATE_DIRTY_GUEST) {
 		if (vcpu_has_sve(vcpu)) {
 			__vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 1209248d2a3d..b22378abfb57 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -40,7 +40,7 @@ extern struct kvm_exception_table_entry __stop___kvm_ex_table;
 /* Check whether the FP regs are owned by the guest */
 static inline bool guest_owns_fp_regs(struct kvm_vcpu *vcpu)
 {
-	return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED);
+	return vcpu->arch.fp_state == FP_STATE_DIRTY_GUEST;
 }
 
 /* Save the 32-bit only FPSIMD system register state */
@@ -179,10 +179,8 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
 	isb();
 
 	/* Write out the host state if it's in the registers */
-	if (vcpu->arch.flags & KVM_ARM64_FP_HOST) {
+	if (vcpu->arch.fp_state == FP_STATE_DIRTY_HOST)
 		__fpsimd_save_state(vcpu->arch.host_fpsimd_state);
-		vcpu->arch.flags &= ~KVM_ARM64_FP_HOST;
-	}
 
 	/* Restore the guest state */
 	if (sve_guest)
@@ -194,7 +192,7 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
 	if (!(read_sysreg(hcr_el2) & HCR_RW))
 		write_sysreg(__vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2);
 
-	vcpu->arch.flags |= KVM_ARM64_FP_ENABLED;
+	vcpu->arch.fp_state = FP_STATE_DIRTY_GUEST;
 
 	return true;
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index a6b9f1186577..89e0f88c9006 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -123,7 +123,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 	}
 
 	cptr = CPTR_EL2_DEFAULT;
-	if (vcpu_has_sve(vcpu) && (vcpu->arch.flags & KVM_ARM64_FP_ENABLED))
+	if (vcpu_has_sve(vcpu) && (vcpu->arch.fp_state == FP_STATE_DIRTY_GUEST))
 		cptr |= CPTR_EL2_TZ;
 	if (cpus_have_final_cap(ARM64_SME))
 		cptr &= ~CPTR_EL2_TSM;
@@ -335,7 +335,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	__sysreg_restore_state_nvhe(host_ctxt);
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
+	if (vcpu->arch.fp_state == FP_STATE_DIRTY_GUEST)
 		__fpsimd_save_fpexc32(vcpu);
 
 	__debug_switch_to_host(vcpu);
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 46f365254e9f..258e87325c95 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -175,7 +175,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 
 	sysreg_restore_host_state_vhe(host_ctxt);
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
+	if (vcpu->arch.fp_state == FP_STATE_DIRTY_GUEST)
 		__fpsimd_save_fpexc32(vcpu);
 
 	__debug_switch_to_host(vcpu);
-- 
2.34.1




* [PATCH 05/18] KVM: arm64: Add helpers to manipulate vcpu flags among a set
  2022-05-28 11:38 ` Marc Zyngier
  (?)
@ 2022-05-28 11:38   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

Careful analysis of the vcpu flags shows that they are a mix of
configuration, communication between the host and the hypervisor,
as well as ancillary state that has no consistency. It'd be a lot
better if we could split these flags into consistent categories.

However, even if we split these flags apart, we want to make sure
that each flag can only be applied to its own set, and not across
sets.

To achieve this, use a preprocessor hack so that each flag is always
associated with:

- the set that contains it,

- a mask that describes all the bits that contain it (for a simple
  flag, this is the same thing as the flag itself, but we will
  eventually have values that cover multiple bits at once).

Each flag is thus a triplet that is not directly usable as a value,
but used by three helpers that allow the flag to be set, cleared,
and fetched. By mandating the use of such helpers, we can easily
enforce that a flag can only be used with the set it belongs to.

Finally, one last helper "unpacks" the raw value from the triplet
that represents a flag, which is useful for multi-bit values that
need to be enumerated (in a switch statement, for example).

Further patches will start making use of this infrastructure.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 33 +++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a46f952b97f6..5eb6791df608 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -418,6 +418,39 @@ struct kvm_vcpu_arch {
 	} steal;
 };
 
+#define __vcpu_get_flag(v, flagset, f, m)			\
+	({							\
+		v->arch.flagset & (m);				\
+	})
+
+#define __vcpu_set_flag(v, flagset, f, m)			\
+	do {							\
+		typeof(v->arch.flagset) *fset;			\
+								\
+		fset = &v->arch.flagset;			\
+		if (HWEIGHT(m) > 1)				\
+			*fset &= ~(m);				\
+		*fset |= (f);					\
+	} while (0)
+
+#define __vcpu_clear_flag(v, flagset, f, m)			\
+	do {							\
+		typeof(v->arch.flagset) *fset;			\
+								\
+		fset = &v->arch.flagset;			\
+		*fset &= ~(m);					\
+	} while (0)
+
+#define vcpu_get_flag(v, ...)	__vcpu_get_flag(v, __VA_ARGS__)
+#define vcpu_set_flag(v, ...)	__vcpu_set_flag(v, __VA_ARGS__)
+#define vcpu_clear_flag(v, ...)	__vcpu_clear_flag(v, __VA_ARGS__)
+
+#define __vcpu_single_flag(_set, _f)	_set, (_f), (_f)
+
+#define __flag_unpack(_set, _f, _m)	_f
+#define vcpu_flag_unpack(...)		__flag_unpack(__VA_ARGS__)
+
+
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
 			     sve_ffr_offset((vcpu)->arch.sve_max_vl))
-- 
2.34.1



* [PATCH 06/18] KVM: arm64: Add three sets of flags to the vcpu state
  2022-05-28 11:38 ` Marc Zyngier
@ 2022-05-28 11:38   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

It appears that each of the vcpu flags really belongs to
one of three categories:

- a configuration flag, set once and for all
- an input flag generated by the kernel for the hypervisor to use
- a state flag that is only for the kernel's own bookkeeping

As we are going to split all the existing flags into these three
sets, introduce all three in one go.

No functional change other than a bit of bloat...

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 5eb6791df608..c9dd0d4e22f2 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -338,6 +338,15 @@ struct kvm_vcpu_arch {
 	/* Miscellaneous vcpu state flags */
 	u64 flags;
 
+	/* Configuration flags */
+	u64 cflags;
+
+	/* Input flags to the hypervisor code */
+	u64 iflags;
+
+	/* State flags, unused by the hypervisor code */
+	u64 sflags;
+
 	/*
 	 * We maintain more than a single set of debug registers to support
 	 * debugging the guest from the host and to maintain separate host and
-- 
2.34.1



* [PATCH 07/18] KVM: arm64: Move vcpu configuration flags into their own set
  2022-05-28 11:38 ` Marc Zyngier
@ 2022-05-28 11:38   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

The KVM_ARM64_{GUEST_HAS_SVE,VCPU_SVE_FINALIZED,GUEST_HAS_PTRAUTH}
flags are purely configuration flags. Once set, they are never cleared,
but evaluated all over the code base.

Move these three flags into the configuration set in one go, using
the new accessors, and take this opportunity to drop the KVM_ARM64_
prefix, which doesn't provide any help.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 17 ++++++++++-------
 arch/arm64/kvm/reset.c            |  6 +++---
 2 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index c9dd0d4e22f2..2b8f1265eade 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -459,6 +459,13 @@ struct kvm_vcpu_arch {
 #define __flag_unpack(_set, _f, _m)	_f
 #define vcpu_flag_unpack(...)		__flag_unpack(__VA_ARGS__)
 
+/* SVE exposed to guest */
+#define GUEST_HAS_SVE		__vcpu_single_flag(cflags, BIT(0))
+/* SVE config completed */
+#define VCPU_SVE_FINALIZED	__vcpu_single_flag(cflags, BIT(1))
+/* PTRAUTH exposed to guest */
+#define GUEST_HAS_PTRAUTH	__vcpu_single_flag(cflags, BIT(2))
+
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
@@ -483,9 +490,6 @@ struct kvm_vcpu_arch {
 /* vcpu_arch flags field values: */
 #define KVM_ARM64_DEBUG_DIRTY		(1 << 0)
 #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
-#define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
-#define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
-#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
 #define KVM_ARM64_PENDING_EXCEPTION	(1 << 8) /* Exception pending */
 /*
  * Overlaps with KVM_ARM64_EXCEPT_MASK on purpose so that it can't be
@@ -522,13 +526,13 @@ struct kvm_vcpu_arch {
 				 KVM_GUESTDBG_SINGLESTEP)
 
 #define vcpu_has_sve(vcpu) (system_supports_sve() &&			\
-			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
+			    vcpu_get_flag((vcpu), GUEST_HAS_SVE))
 
 #ifdef CONFIG_ARM64_PTR_AUTH
 #define vcpu_has_ptrauth(vcpu)						\
 	((cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH) ||		\
 	  cpus_have_final_cap(ARM64_HAS_GENERIC_AUTH)) &&		\
-	 (vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
+	  vcpu_get_flag(vcpu, GUEST_HAS_PTRAUTH))
 #else
 #define vcpu_has_ptrauth(vcpu)		false
 #endif
@@ -885,8 +889,7 @@ void kvm_init_protected_traps(struct kvm_vcpu *vcpu);
 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 
-#define kvm_arm_vcpu_sve_finalized(vcpu) \
-	((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED)
+#define kvm_arm_vcpu_sve_finalized(vcpu) vcpu_get_flag(vcpu, VCPU_SVE_FINALIZED)
 
 #define kvm_has_mte(kvm)					\
 	(system_supports_mte() &&				\
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 6c70c6f61c70..0e08fbe68715 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -81,7 +81,7 @@ static int kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu)
 	 * KVM_REG_ARM64_SVE_VLS.  Allocation is deferred until
 	 * kvm_arm_vcpu_finalize(), which freezes the configuration.
 	 */
-	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_SVE;
+	vcpu_set_flag(vcpu, GUEST_HAS_SVE);
 
 	return 0;
 }
@@ -120,7 +120,7 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu)
 	}
 	
 	vcpu->arch.sve_state = buf;
-	vcpu->arch.flags |= KVM_ARM64_VCPU_SVE_FINALIZED;
+	vcpu_set_flag(vcpu, VCPU_SVE_FINALIZED);
 	return 0;
 }
 
@@ -177,7 +177,7 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
 	    !system_has_full_ptr_auth())
 		return -EINVAL;
 
-	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
+	vcpu_set_flag(vcpu, GUEST_HAS_PTRAUTH);
 	return 0;
 }
 
-- 
2.34.1



* [PATCH 08/18] KVM: arm64: Move vcpu PC/Exception flags to the input flag set
  2022-05-28 11:38 ` Marc Zyngier
@ 2022-05-28 11:38   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

The PC update flags (which also deal with exception injection)
are among the most complicated uses of the flags we have. Make
them more foolproof by:

- moving them over to the new accessors and assigning them to the
  input flag set

- turning the combination of generic ELx flags with another flag
  indicating the target EL itself into an explicit set of
  flags for each EL and vector combination

This is otherwise a pretty straightforward conversion.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_emulate.h |  2 +-
 arch/arm64/include/asm/kvm_host.h    | 58 ++++++++++++++++------------
 arch/arm64/kvm/arm.c                 |  4 +-
 arch/arm64/kvm/hyp/exception.c       | 23 ++++++-----
 arch/arm64/kvm/hyp/nvhe/sys_regs.c   |  5 +--
 arch/arm64/kvm/inject_fault.c        | 22 +++++------
 6 files changed, 60 insertions(+), 54 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 07812680fcaf..46e631cd8d9e 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -473,7 +473,7 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu,
 
 static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.flags |= KVM_ARM64_INCREMENT_PC;
+	vcpu_set_flag(vcpu, INCREMENT_PC);
 }
 
 static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu, int feature)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 2b8f1265eade..078567f5709c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -466,6 +466,40 @@ struct kvm_vcpu_arch {
 /* PTRAUTH exposed to guest */
 #define GUEST_HAS_PTRAUTH	__vcpu_single_flag(cflags, BIT(2))
 
+/* Exception pending */
+#define PENDING_EXCEPTION	__vcpu_single_flag(iflags, BIT(0))
+/*
+ * PC increment. Overlaps with EXCEPT_MASK on purpose so that it can't
+ * be set together with an exception...
+ */
+#define INCREMENT_PC		__vcpu_single_flag(iflags, BIT(1))
+/* Target EL/MODE (not a single flag, but let's abuse the macro) */
+#define EXCEPT_MASK		__vcpu_single_flag(iflags, GENMASK(3, 1))
+
+/* Helpers to encode exceptions with minimum fuss */
+#define __EXCEPT_MASK_VAL	vcpu_flag_unpack(EXCEPT_MASK)
+#define __EXCEPT_SHIFT		__builtin_ctzl(__EXCEPT_MASK_VAL)
+#define __vcpu_except_flags(_f)	iflags, (_f << __EXCEPT_SHIFT), __EXCEPT_MASK_VAL
+
+/*
+ * When PENDING_EXCEPTION is set, EXCEPT_MASK can take
+ * the following values:
+ *
+ * For AArch32 EL1:
+ */
+#define EXCEPT_AA32_UND		__vcpu_except_flags(0)
+#define EXCEPT_AA32_IABT	__vcpu_except_flags(1)
+#define EXCEPT_AA32_DABT	__vcpu_except_flags(2)
+/* For AArch64: */
+#define EXCEPT_AA64_EL1_SYNC	__vcpu_except_flags(0)
+#define EXCEPT_AA64_EL1_IRQ	__vcpu_except_flags(1)
+#define EXCEPT_AA64_EL1_FIQ	__vcpu_except_flags(2)
+#define EXCEPT_AA64_EL1_SERR	__vcpu_except_flags(3)
+/* For AArch64 with NV (one day): */
+#define EXCEPT_AA64_EL2_SYNC	__vcpu_except_flags(4)
+#define EXCEPT_AA64_EL2_IRQ	__vcpu_except_flags(5)
+#define EXCEPT_AA64_EL2_FIQ	__vcpu_except_flags(6)
+#define EXCEPT_AA64_EL2_SERR	__vcpu_except_flags(7)
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
@@ -490,30 +524,6 @@ struct kvm_vcpu_arch {
 /* vcpu_arch flags field values: */
 #define KVM_ARM64_DEBUG_DIRTY		(1 << 0)
 #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
-#define KVM_ARM64_PENDING_EXCEPTION	(1 << 8) /* Exception pending */
-/*
- * Overlaps with KVM_ARM64_EXCEPT_MASK on purpose so that it can't be
- * set together with an exception...
- */
-#define KVM_ARM64_INCREMENT_PC		(1 << 9) /* Increment PC */
-#define KVM_ARM64_EXCEPT_MASK		(7 << 9) /* Target EL/MODE */
-/*
- * When KVM_ARM64_PENDING_EXCEPTION is set, KVM_ARM64_EXCEPT_MASK can
- * take the following values:
- *
- * For AArch32 EL1:
- */
-#define KVM_ARM64_EXCEPT_AA32_UND	(0 << 9)
-#define KVM_ARM64_EXCEPT_AA32_IABT	(1 << 9)
-#define KVM_ARM64_EXCEPT_AA32_DABT	(2 << 9)
-/* For AArch64: */
-#define KVM_ARM64_EXCEPT_AA64_ELx_SYNC	(0 << 9)
-#define KVM_ARM64_EXCEPT_AA64_ELx_IRQ	(1 << 9)
-#define KVM_ARM64_EXCEPT_AA64_ELx_FIQ	(2 << 9)
-#define KVM_ARM64_EXCEPT_AA64_ELx_SERR	(3 << 9)
-#define KVM_ARM64_EXCEPT_AA64_EL1	(0 << 11)
-#define KVM_ARM64_EXCEPT_AA64_EL2	(1 << 11)
-
 #define KVM_ARM64_DEBUG_STATE_SAVE_SPE	(1 << 12) /* Save SPE context if active  */
 #define KVM_ARM64_DEBUG_STATE_SAVE_TRBE	(1 << 13) /* Save TRBE context if active  */
 #define KVM_ARM64_ON_UNSUPPORTED_CPU	(1 << 15) /* Physical CPU not in supported_cpus */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index dcf691e3c72f..d7d42d79ede1 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1012,8 +1012,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	 * the vcpu state. Note that this relies on __kvm_adjust_pc()
 	 * being preempt-safe on VHE.
 	 */
-	if (unlikely(vcpu->arch.flags & (KVM_ARM64_PENDING_EXCEPTION |
-					 KVM_ARM64_INCREMENT_PC)))
+	if (unlikely(vcpu_get_flag(vcpu, PENDING_EXCEPTION) ||
+		     vcpu_get_flag(vcpu, INCREMENT_PC)))
 		kvm_call_hyp(__kvm_adjust_pc, vcpu);
 
 	vcpu_put(vcpu);
diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index c5d009715402..a9563e20fda8 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -303,14 +303,14 @@ static void enter_exception32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
 static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 {
 	if (vcpu_el1_is_32bit(vcpu)) {
-		switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) {
-		case KVM_ARM64_EXCEPT_AA32_UND:
+		switch (vcpu_get_flag(vcpu, EXCEPT_MASK)) {
+		case vcpu_flag_unpack(EXCEPT_AA32_UND):
 			enter_exception32(vcpu, PSR_AA32_MODE_UND, 4);
 			break;
-		case KVM_ARM64_EXCEPT_AA32_IABT:
+		case vcpu_flag_unpack(EXCEPT_AA32_IABT):
 			enter_exception32(vcpu, PSR_AA32_MODE_ABT, 12);
 			break;
-		case KVM_ARM64_EXCEPT_AA32_DABT:
+		case vcpu_flag_unpack(EXCEPT_AA32_DABT):
 			enter_exception32(vcpu, PSR_AA32_MODE_ABT, 16);
 			break;
 		default:
@@ -318,9 +318,8 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 			break;
 		}
 	} else {
-		switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) {
-		case (KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
-		      KVM_ARM64_EXCEPT_AA64_EL1):
+		switch (vcpu_get_flag(vcpu, EXCEPT_MASK)) {
+		case vcpu_flag_unpack(EXCEPT_AA64_EL1_SYNC):
 			enter_exception64(vcpu, PSR_MODE_EL1h, except_type_sync);
 			break;
 		default:
@@ -340,12 +339,12 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
  */
 void __kvm_adjust_pc(struct kvm_vcpu *vcpu)
 {
-	if (vcpu->arch.flags & KVM_ARM64_PENDING_EXCEPTION) {
+	if (vcpu_get_flag(vcpu, PENDING_EXCEPTION)) {
 		kvm_inject_exception(vcpu);
-		vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION |
-				      KVM_ARM64_EXCEPT_MASK);
-	} else 	if (vcpu->arch.flags & KVM_ARM64_INCREMENT_PC) {
+		vcpu_clear_flag(vcpu, PENDING_EXCEPTION);
+		vcpu_clear_flag(vcpu, EXCEPT_MASK);
+	} else if (vcpu_get_flag(vcpu, INCREMENT_PC)) {
 		kvm_skip_instr(vcpu);
-		vcpu->arch.flags &= ~KVM_ARM64_INCREMENT_PC;
+		vcpu_clear_flag(vcpu, INCREMENT_PC);
 	}
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index 3f5d7bd171c5..2841a2d447a1 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -38,9 +38,8 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 	*vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
 	*vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR);
 
-	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 |
-			     KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
-			     KVM_ARM64_PENDING_EXCEPTION);
+	vcpu_set_flag(vcpu, PENDING_EXCEPTION);
+	vcpu_set_flag(vcpu, EXCEPT_AA64_EL1_SYNC);
 
 	__kvm_adjust_pc(vcpu);
 
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index ba20405d2dc2..a9a7b513f3b0 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -20,9 +20,8 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
 	bool is_aarch32 = vcpu_mode_is_32bit(vcpu);
 	u32 esr = 0;
 
-	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1		|
-			     KVM_ARM64_EXCEPT_AA64_ELx_SYNC	|
-			     KVM_ARM64_PENDING_EXCEPTION);
+	vcpu_set_flag(vcpu, PENDING_EXCEPTION);
+	vcpu_set_flag(vcpu, EXCEPT_AA64_EL1_SYNC);
 
 	vcpu_write_sys_reg(vcpu, addr, FAR_EL1);
 
@@ -52,9 +51,8 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 {
 	u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
 
-	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1		|
-			     KVM_ARM64_EXCEPT_AA64_ELx_SYNC	|
-			     KVM_ARM64_PENDING_EXCEPTION);
+	vcpu_set_flag(vcpu, PENDING_EXCEPTION);
+	vcpu_set_flag(vcpu, EXCEPT_AA64_EL1_SYNC);
 
 	/*
 	 * Build an unknown exception, depending on the instruction
@@ -73,8 +71,8 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 
 static void inject_undef32(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA32_UND |
-			     KVM_ARM64_PENDING_EXCEPTION);
+	vcpu_set_flag(vcpu, PENDING_EXCEPTION);
+	vcpu_set_flag(vcpu, EXCEPT_AA32_UND);
 }
 
 /*
@@ -97,14 +95,14 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt, u32 addr)
 	far = vcpu_read_sys_reg(vcpu, FAR_EL1);
 
 	if (is_pabt) {
-		vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA32_IABT |
-				     KVM_ARM64_PENDING_EXCEPTION);
+		vcpu_set_flag(vcpu, PENDING_EXCEPTION);
+		vcpu_set_flag(vcpu, EXCEPT_AA32_IABT);
 		far &= GENMASK(31, 0);
 		far |= (u64)addr << 32;
 		vcpu_write_sys_reg(vcpu, fsr, IFSR32_EL2);
 	} else { /* !iabt */
-		vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA32_DABT |
-				     KVM_ARM64_PENDING_EXCEPTION);
+		vcpu_set_flag(vcpu, PENDING_EXCEPTION);
+		vcpu_set_flag(vcpu, EXCEPT_AA32_DABT);
 		far &= GENMASK(63, 32);
 		far |= addr;
 		vcpu_write_sys_reg(vcpu, fsr, ESR_EL1);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 141+ messages in thread

* [PATCH 08/18] KVM: arm64: Move vcpu PC/Exception flags to the input flag set
@ 2022-05-28 11:38   ` Marc Zyngier
  0 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel; +Cc: kernel-team, Will Deacon, Mark Brown

The PC update flags (which also deal with exception injection)
are among the most complicated uses of the flags we have. Make
them more fool-proof by:

- moving them over to the new accessors and assigning them to
  the input flag set

- turning the combination of generic ELx flags with another flag
  indicating the target EL itself into an explicit set of flags
  for each EL and vector combination

This is otherwise a pretty straightforward conversion.
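The encoding trick the patch relies on -- each flag is a (value, mask) pair, with single-bit flags using value == mask and the exception targets sharing one mask -- can be modelled outside the kernel. The sketch below is a hypothetical standalone model, not the kernel's actual macros (names like flag_set() and struct vcpu_model are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model of the flag scheme: a flag expands to a (value, mask)
 * pair. Single-bit flags have value == mask; the exception targets
 * all share EXCEPT_MASK and differ only in value, so setting one
 * target implicitly clears any previously set target.
 */
struct vcpu_model {
	uint64_t iflags;	/* input flags to the world switch */
};

#define FLAG(v, m)		(uint64_t)(v), (uint64_t)(m)

#define PENDING_EXCEPTION	FLAG(1 << 0, 1 << 0)
#define INCREMENT_PC		FLAG(1 << 1, 1 << 1)	/* overlaps EXCEPT_MASK */
#define EXCEPT_MASK		FLAG(7 << 1, 7 << 1)
#define EXCEPT_AA32_UND		FLAG(0 << 1, 7 << 1)
#define EXCEPT_AA32_IABT	FLAG(1 << 1, 7 << 1)
#define EXCEPT_AA32_DABT	FLAG(2 << 1, 7 << 1)

static void flag_set(struct vcpu_model *v, uint64_t val, uint64_t mask)
{
	/* clear the whole encoding space before installing the value */
	v->iflags = (v->iflags & ~mask) | val;
}

static uint64_t flag_get(const struct vcpu_model *v, uint64_t val,
			 uint64_t mask)
{
	(void)val;		/* only the mask matters on read */
	return v->iflags & mask;
}

static void flag_clear(struct vcpu_model *v, uint64_t val, uint64_t mask)
{
	(void)val;
	v->iflags &= ~mask;
}
```

A caller then writes flag_set(&v, EXCEPT_AA32_DABT) and the macro supplies both the value and the shared mask, which is why setting a new target cannot leave stale bits from a previous one behind.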

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_emulate.h |  2 +-
 arch/arm64/include/asm/kvm_host.h    | 58 ++++++++++++++++------------
 arch/arm64/kvm/arm.c                 |  4 +-
 arch/arm64/kvm/hyp/exception.c       | 23 ++++++-----
 arch/arm64/kvm/hyp/nvhe/sys_regs.c   |  5 +--
 arch/arm64/kvm/inject_fault.c        | 22 +++++------
 6 files changed, 60 insertions(+), 54 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 07812680fcaf..46e631cd8d9e 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -473,7 +473,7 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu,
 
 static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.flags |= KVM_ARM64_INCREMENT_PC;
+	vcpu_set_flag(vcpu, INCREMENT_PC);
 }
 
 static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu, int feature)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 2b8f1265eade..078567f5709c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -466,6 +466,40 @@ struct kvm_vcpu_arch {
 /* PTRAUTH exposed to guest */
 #define GUEST_HAS_PTRAUTH	__vcpu_single_flag(cflags, BIT(2))
 
+/* Exception pending */
+#define PENDING_EXCEPTION	__vcpu_single_flag(iflags, BIT(0))
+/*
+ * PC increment. Overlaps with EXCEPT_MASK on purpose so that it can't
+ * be set together with an exception...
+ */
+#define INCREMENT_PC		__vcpu_single_flag(iflags, BIT(1))
+/* Target EL/MODE (not a single flag, but let's abuse the macro) */
+#define EXCEPT_MASK		__vcpu_single_flag(iflags, GENMASK(3, 1))
+
+/* Helpers to encode exceptions with minimum fuss */
+#define __EXCEPT_MASK_VAL	vcpu_flag_unpack(EXCEPT_MASK)
+#define __EXCEPT_SHIFT		__builtin_ctzl(__EXCEPT_MASK_VAL)
+#define __vcpu_except_flags(_f)	iflags, (_f << __EXCEPT_SHIFT), __EXCEPT_MASK_VAL
+
+/*
+ * When PENDING_EXCEPTION is set, EXCEPT_MASK can take
+ * the following values:
+ *
+ * For AArch32 EL1:
+ */
+#define EXCEPT_AA32_UND		__vcpu_except_flags(0)
+#define EXCEPT_AA32_IABT	__vcpu_except_flags(1)
+#define EXCEPT_AA32_DABT	__vcpu_except_flags(2)
+/* For AArch64: */
+#define EXCEPT_AA64_EL1_SYNC	__vcpu_except_flags(0)
+#define EXCEPT_AA64_EL1_IRQ	__vcpu_except_flags(1)
+#define EXCEPT_AA64_EL1_FIQ	__vcpu_except_flags(2)
+#define EXCEPT_AA64_EL1_SERR	__vcpu_except_flags(3)
+/* For AArch64 with NV (one day): */
+#define EXCEPT_AA64_EL2_SYNC	__vcpu_except_flags(4)
+#define EXCEPT_AA64_EL2_IRQ	__vcpu_except_flags(5)
+#define EXCEPT_AA64_EL2_FIQ	__vcpu_except_flags(6)
+#define EXCEPT_AA64_EL2_SERR	__vcpu_except_flags(7)
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
@@ -490,30 +524,6 @@ struct kvm_vcpu_arch {
 /* vcpu_arch flags field values: */
 #define KVM_ARM64_DEBUG_DIRTY		(1 << 0)
 #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
-#define KVM_ARM64_PENDING_EXCEPTION	(1 << 8) /* Exception pending */
-/*
- * Overlaps with KVM_ARM64_EXCEPT_MASK on purpose so that it can't be
- * set together with an exception...
- */
-#define KVM_ARM64_INCREMENT_PC		(1 << 9) /* Increment PC */
-#define KVM_ARM64_EXCEPT_MASK		(7 << 9) /* Target EL/MODE */
-/*
- * When KVM_ARM64_PENDING_EXCEPTION is set, KVM_ARM64_EXCEPT_MASK can
- * take the following values:
- *
- * For AArch32 EL1:
- */
-#define KVM_ARM64_EXCEPT_AA32_UND	(0 << 9)
-#define KVM_ARM64_EXCEPT_AA32_IABT	(1 << 9)
-#define KVM_ARM64_EXCEPT_AA32_DABT	(2 << 9)
-/* For AArch64: */
-#define KVM_ARM64_EXCEPT_AA64_ELx_SYNC	(0 << 9)
-#define KVM_ARM64_EXCEPT_AA64_ELx_IRQ	(1 << 9)
-#define KVM_ARM64_EXCEPT_AA64_ELx_FIQ	(2 << 9)
-#define KVM_ARM64_EXCEPT_AA64_ELx_SERR	(3 << 9)
-#define KVM_ARM64_EXCEPT_AA64_EL1	(0 << 11)
-#define KVM_ARM64_EXCEPT_AA64_EL2	(1 << 11)
-
 #define KVM_ARM64_DEBUG_STATE_SAVE_SPE	(1 << 12) /* Save SPE context if active  */
 #define KVM_ARM64_DEBUG_STATE_SAVE_TRBE	(1 << 13) /* Save TRBE context if active  */
 #define KVM_ARM64_ON_UNSUPPORTED_CPU	(1 << 15) /* Physical CPU not in supported_cpus */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index dcf691e3c72f..d7d42d79ede1 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1012,8 +1012,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	 * the vcpu state. Note that this relies on __kvm_adjust_pc()
 	 * being preempt-safe on VHE.
 	 */
-	if (unlikely(vcpu->arch.flags & (KVM_ARM64_PENDING_EXCEPTION |
-					 KVM_ARM64_INCREMENT_PC)))
+	if (unlikely(vcpu_get_flag(vcpu, PENDING_EXCEPTION) ||
+		     vcpu_get_flag(vcpu, INCREMENT_PC)))
 		kvm_call_hyp(__kvm_adjust_pc, vcpu);
 
 	vcpu_put(vcpu);
diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index c5d009715402..a9563e20fda8 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -303,14 +303,14 @@ static void enter_exception32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
 static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 {
 	if (vcpu_el1_is_32bit(vcpu)) {
-		switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) {
-		case KVM_ARM64_EXCEPT_AA32_UND:
+		switch (vcpu_get_flag(vcpu, EXCEPT_MASK)) {
+		case vcpu_flag_unpack(EXCEPT_AA32_UND):
 			enter_exception32(vcpu, PSR_AA32_MODE_UND, 4);
 			break;
-		case KVM_ARM64_EXCEPT_AA32_IABT:
+		case vcpu_flag_unpack(EXCEPT_AA32_IABT):
 			enter_exception32(vcpu, PSR_AA32_MODE_ABT, 12);
 			break;
-		case KVM_ARM64_EXCEPT_AA32_DABT:
+		case vcpu_flag_unpack(EXCEPT_AA32_DABT):
 			enter_exception32(vcpu, PSR_AA32_MODE_ABT, 16);
 			break;
 		default:
@@ -318,9 +318,8 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 			break;
 		}
 	} else {
-		switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) {
-		case (KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
-		      KVM_ARM64_EXCEPT_AA64_EL1):
+		switch (vcpu_get_flag(vcpu, EXCEPT_MASK)) {
+		case vcpu_flag_unpack(EXCEPT_AA64_EL1_SYNC):
 			enter_exception64(vcpu, PSR_MODE_EL1h, except_type_sync);
 			break;
 		default:
@@ -340,12 +339,12 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
  */
 void __kvm_adjust_pc(struct kvm_vcpu *vcpu)
 {
-	if (vcpu->arch.flags & KVM_ARM64_PENDING_EXCEPTION) {
+	if (vcpu_get_flag(vcpu, PENDING_EXCEPTION)) {
 		kvm_inject_exception(vcpu);
-		vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION |
-				      KVM_ARM64_EXCEPT_MASK);
-	} else 	if (vcpu->arch.flags & KVM_ARM64_INCREMENT_PC) {
+		vcpu_clear_flag(vcpu, PENDING_EXCEPTION);
+		vcpu_clear_flag(vcpu, EXCEPT_MASK);
+	} else if (vcpu_get_flag(vcpu, INCREMENT_PC)) {
 		kvm_skip_instr(vcpu);
-		vcpu->arch.flags &= ~KVM_ARM64_INCREMENT_PC;
+		vcpu_clear_flag(vcpu, INCREMENT_PC);
 	}
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index 3f5d7bd171c5..2841a2d447a1 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -38,9 +38,8 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 	*vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
 	*vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR);
 
-	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 |
-			     KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
-			     KVM_ARM64_PENDING_EXCEPTION);
+	vcpu_set_flag(vcpu, PENDING_EXCEPTION);
+	vcpu_set_flag(vcpu, EXCEPT_AA64_EL1_SYNC);
 
 	__kvm_adjust_pc(vcpu);
 
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index ba20405d2dc2..a9a7b513f3b0 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -20,9 +20,8 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
 	bool is_aarch32 = vcpu_mode_is_32bit(vcpu);
 	u32 esr = 0;
 
-	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1		|
-			     KVM_ARM64_EXCEPT_AA64_ELx_SYNC	|
-			     KVM_ARM64_PENDING_EXCEPTION);
+	vcpu_set_flag(vcpu, PENDING_EXCEPTION);
+	vcpu_set_flag(vcpu, EXCEPT_AA64_EL1_SYNC);
 
 	vcpu_write_sys_reg(vcpu, addr, FAR_EL1);
 
@@ -52,9 +51,8 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 {
 	u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
 
-	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1		|
-			     KVM_ARM64_EXCEPT_AA64_ELx_SYNC	|
-			     KVM_ARM64_PENDING_EXCEPTION);
+	vcpu_set_flag(vcpu, PENDING_EXCEPTION);
+	vcpu_set_flag(vcpu, EXCEPT_AA64_EL1_SYNC);
 
 	/*
 	 * Build an unknown exception, depending on the instruction
@@ -73,8 +71,8 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 
 static void inject_undef32(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA32_UND |
-			     KVM_ARM64_PENDING_EXCEPTION);
+	vcpu_set_flag(vcpu, PENDING_EXCEPTION);
+	vcpu_set_flag(vcpu, EXCEPT_AA32_UND);
 }
 
 /*
@@ -97,14 +95,14 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt, u32 addr)
 	far = vcpu_read_sys_reg(vcpu, FAR_EL1);
 
 	if (is_pabt) {
-		vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA32_IABT |
-				     KVM_ARM64_PENDING_EXCEPTION);
+		vcpu_set_flag(vcpu, PENDING_EXCEPTION);
+		vcpu_set_flag(vcpu, EXCEPT_AA32_IABT);
 		far &= GENMASK(31, 0);
 		far |= (u64)addr << 32;
 		vcpu_write_sys_reg(vcpu, fsr, IFSR32_EL2);
 	} else { /* !iabt */
-		vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA32_DABT |
-				     KVM_ARM64_PENDING_EXCEPTION);
+		vcpu_set_flag(vcpu, PENDING_EXCEPTION);
+		vcpu_set_flag(vcpu, EXCEPT_AA32_DABT);
 		far &= GENMASK(63, 32);
 		far |= addr;
 		vcpu_write_sys_reg(vcpu, fsr, ESR_EL1);
-- 
2.34.1

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 141+ messages in thread

* [PATCH 09/18] KVM: arm64: Move vcpu debug/SPE/TRBE flags to the input flag set
  2022-05-28 11:38 ` Marc Zyngier
  (?)
@ 2022-05-28 11:38   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

The three debug flags (which deal with the debug registers, SPE and
TRBE) are all input flags to the hypervisor code.

Move them into the input set and convert them to the new accessors.
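The DEBUG_DIRTY flag drives a lazy save/restore scheme: the world switch only swaps debug register state when the guest is actually using it, and the flag is dropped once the host state has been restored. A minimal standalone sketch of that pattern (hypothetical names and a single stand-in register, not the kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* One stand-in register for the full debug register file. */
struct dbg_regs {
	int mdscr;
};

struct vcpu_model {
	bool debug_dirty;	/* stands in for the DEBUG_DIRTY iflag */
	struct dbg_regs guest_dbg;
	struct dbg_regs host_dbg;
	struct dbg_regs hw;	/* the "live" hardware state */
};

static void switch_debug_to_guest(struct vcpu_model *v)
{
	if (!v->debug_dirty)
		return;		/* guest not using debug: skip the swap */
	v->host_dbg = v->hw;
	v->hw = v->guest_dbg;
}

static void switch_debug_to_host(struct vcpu_model *v)
{
	if (!v->debug_dirty)
		return;
	v->guest_dbg = v->hw;
	v->hw = v->host_dbg;
	v->debug_dirty = false;	/* cleared once host state is back */
}
```

The point of the model is the two early returns: a vcpu that never touches debug state pays nothing on either side of the world switch, which is exactly why the flag is an input to the hypervisor rather than host-only bookkeeping.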

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h          |  9 ++++++---
 arch/arm64/kvm/debug.c                     | 22 +++++++++++-----------
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h  |  6 +++---
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h |  4 ++--
 arch/arm64/kvm/hyp/nvhe/debug-sr.c         |  8 ++++----
 arch/arm64/kvm/sys_regs.c                  |  8 ++++----
 6 files changed, 30 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 078567f5709c..a426cd3aaa74 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -500,6 +500,12 @@ struct kvm_vcpu_arch {
 #define EXCEPT_AA64_EL2_IRQ	__vcpu_except_flags(5)
 #define EXCEPT_AA64_EL2_FIQ	__vcpu_except_flags(6)
 #define EXCEPT_AA64_EL2_SERR	__vcpu_except_flags(7)
+/* Guest debug is live */
+#define DEBUG_DIRTY		__vcpu_single_flag(iflags, BIT(4))
+/* Save SPE context if active  */
+#define DEBUG_STATE_SAVE_SPE	__vcpu_single_flag(iflags, BIT(5))
+/* Save TRBE context if active  */
+#define DEBUG_STATE_SAVE_TRBE	__vcpu_single_flag(iflags, BIT(6))
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
@@ -522,10 +528,7 @@ struct kvm_vcpu_arch {
 })
 
 /* vcpu_arch flags field values: */
-#define KVM_ARM64_DEBUG_DIRTY		(1 << 0)
 #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
-#define KVM_ARM64_DEBUG_STATE_SAVE_SPE	(1 << 12) /* Save SPE context if active  */
-#define KVM_ARM64_DEBUG_STATE_SAVE_TRBE	(1 << 13) /* Save TRBE context if active  */
 #define KVM_ARM64_ON_UNSUPPORTED_CPU	(1 << 15) /* Physical CPU not in supported_cpus */
 #define KVM_ARM64_HOST_SME_ENABLED	(1 << 16) /* SME enabled for EL0 */
 #define KVM_ARM64_WFIT			(1 << 17) /* WFIT instruction trapped */
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 4fd5c216c4bb..c5c4c1837bf3 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -104,11 +104,11 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 	 * Trap debug register access when one of the following is true:
 	 *  - Userspace is using the hardware to debug the guest
 	 *  (KVM_GUESTDBG_USE_HW is set).
-	 *  - The guest is not using debug (KVM_ARM64_DEBUG_DIRTY is clear).
+	 *  - The guest is not using debug (DEBUG_DIRTY clear).
 	 *  - The guest has enabled the OS Lock (debug exceptions are blocked).
 	 */
 	if ((vcpu->guest_debug & KVM_GUESTDBG_USE_HW) ||
-	    !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) ||
+	    !vcpu_get_flag(vcpu, DEBUG_DIRTY) ||
 	    kvm_vcpu_os_lock_enabled(vcpu))
 		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
 
@@ -147,8 +147,8 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
  * debug related registers.
  *
  * Additionally, KVM only traps guest accesses to the debug registers if
- * the guest is not actively using them (see the KVM_ARM64_DEBUG_DIRTY
* [PATCH 09/18] KVM: arm64: Move vcpu debug/SPE/TRBE flags to the input flag set
@ 2022-05-28 11:38   ` Marc Zyngier
  0 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel; +Cc: kernel-team, Will Deacon, Mark Brown

The three debug flags (which deal with the debug registers, SPE and
TRBE) are all inputs to the hypervisor code.

Move them into the input set and convert them to the new accessors.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h          |  9 ++++++---
 arch/arm64/kvm/debug.c                     | 22 +++++++++++-----------
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h  |  6 +++---
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h |  4 ++--
 arch/arm64/kvm/hyp/nvhe/debug-sr.c         |  8 ++++----
 arch/arm64/kvm/sys_regs.c                  |  8 ++++----
 6 files changed, 30 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 078567f5709c..a426cd3aaa74 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -500,6 +500,12 @@ struct kvm_vcpu_arch {
 #define EXCEPT_AA64_EL2_IRQ	__vcpu_except_flags(5)
 #define EXCEPT_AA64_EL2_FIQ	__vcpu_except_flags(6)
 #define EXCEPT_AA64_EL2_SERR	__vcpu_except_flags(7)
+/* Guest debug is live */
+#define DEBUG_DIRTY		__vcpu_single_flag(iflags, BIT(4))
+/* Save SPE context if active  */
+#define DEBUG_STATE_SAVE_SPE	__vcpu_single_flag(iflags, BIT(5))
+/* Save TRBE context if active  */
+#define DEBUG_STATE_SAVE_TRBE	__vcpu_single_flag(iflags, BIT(6))
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
@@ -522,10 +528,7 @@ struct kvm_vcpu_arch {
 })
 
 /* vcpu_arch flags field values: */
-#define KVM_ARM64_DEBUG_DIRTY		(1 << 0)
 #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
-#define KVM_ARM64_DEBUG_STATE_SAVE_SPE	(1 << 12) /* Save SPE context if active  */
-#define KVM_ARM64_DEBUG_STATE_SAVE_TRBE	(1 << 13) /* Save TRBE context if active  */
 #define KVM_ARM64_ON_UNSUPPORTED_CPU	(1 << 15) /* Physical CPU not in supported_cpus */
 #define KVM_ARM64_HOST_SME_ENABLED	(1 << 16) /* SME enabled for EL0 */
 #define KVM_ARM64_WFIT			(1 << 17) /* WFIT instruction trapped */
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 4fd5c216c4bb..c5c4c1837bf3 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -104,11 +104,11 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 	 * Trap debug register access when one of the following is true:
 	 *  - Userspace is using the hardware to debug the guest
 	 *  (KVM_GUESTDBG_USE_HW is set).
-	 *  - The guest is not using debug (KVM_ARM64_DEBUG_DIRTY is clear).
+	 *  - The guest is not using debug (DEBUG_DIRTY clear).
 	 *  - The guest has enabled the OS Lock (debug exceptions are blocked).
 	 */
 	if ((vcpu->guest_debug & KVM_GUESTDBG_USE_HW) ||
-	    !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) ||
+	    !vcpu_get_flag(vcpu, DEBUG_DIRTY) ||
 	    kvm_vcpu_os_lock_enabled(vcpu))
 		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
 
@@ -147,8 +147,8 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
  * debug related registers.
  *
  * Additionally, KVM only traps guest accesses to the debug registers if
- * the guest is not actively using them (see the KVM_ARM64_DEBUG_DIRTY
- * flag on vcpu->arch.flags).  Since the guest must not interfere
+ * the guest is not actively using them (see the DEBUG_DIRTY
+ * flag on vcpu->arch.iflags).  Since the guest must not interfere
  * with the hardware state when debugging the guest, we must ensure that
  * trapping is enabled whenever we are debugging the guest using the
  * debug registers.
@@ -205,7 +205,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 		 *
 		 * We simply switch the debug_ptr to point to our new
 		 * external_debug_state which has been populated by the
-		 * debug ioctl. The existing KVM_ARM64_DEBUG_DIRTY
+		 * debug ioctl. The existing KVM_ARM64_IFLAG_DEBUG_DIRTY
 		 * mechanism ensures the registers are updated on the
 		 * world switch.
 		 */
@@ -216,7 +216,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 			vcpu_write_sys_reg(vcpu, mdscr, MDSCR_EL1);
 
 			vcpu->arch.debug_ptr = &vcpu->arch.external_debug_state;
-			vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
+			vcpu_set_flag(vcpu, DEBUG_DIRTY);
 
 			trace_kvm_arm_set_regset("BKPTS", get_num_brps(),
 						&vcpu->arch.debug_ptr->dbg_bcr[0],
@@ -246,7 +246,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 
 	/* If KDE or MDE are set, perform a full save/restore cycle. */
 	if (vcpu_read_sys_reg(vcpu, MDSCR_EL1) & (DBG_MDSCR_KDE | DBG_MDSCR_MDE))
-		vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
+		vcpu_set_flag(vcpu, DEBUG_DIRTY);
 
 	/* Write mdcr_el2 changes since vcpu_load on VHE systems */
 	if (has_vhe() && orig_mdcr_el2 != vcpu->arch.mdcr_el2)
@@ -298,16 +298,16 @@ void kvm_arch_vcpu_load_debug_state_flags(struct kvm_vcpu *vcpu)
 	 */
 	if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_PMSVER_SHIFT) &&
 	    !(read_sysreg_s(SYS_PMBIDR_EL1) & BIT(SYS_PMBIDR_EL1_P_SHIFT)))
-		vcpu->arch.flags |= KVM_ARM64_DEBUG_STATE_SAVE_SPE;
+		vcpu_set_flag(vcpu, DEBUG_STATE_SAVE_SPE);
 
 	/* Check if we have TRBE implemented and available at the host */
 	if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_TRBE_SHIFT) &&
 	    !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_PROG))
-		vcpu->arch.flags |= KVM_ARM64_DEBUG_STATE_SAVE_TRBE;
+		vcpu_set_flag(vcpu, DEBUG_STATE_SAVE_TRBE);
 }
 
 void kvm_arch_vcpu_put_debug_state_flags(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.flags &= ~(KVM_ARM64_DEBUG_STATE_SAVE_SPE |
-			      KVM_ARM64_DEBUG_STATE_SAVE_TRBE);
+	vcpu_clear_flag(vcpu, DEBUG_STATE_SAVE_SPE);
+	vcpu_clear_flag(vcpu, DEBUG_STATE_SAVE_TRBE);
 }
diff --git a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
index 4ebe9f558f3a..961bbef104a6 100644
--- a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
@@ -132,7 +132,7 @@ static inline void __debug_switch_to_guest_common(struct kvm_vcpu *vcpu)
 	struct kvm_guest_debug_arch *host_dbg;
 	struct kvm_guest_debug_arch *guest_dbg;
 
-	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
+	if (!vcpu_get_flag(vcpu, DEBUG_DIRTY))
 		return;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
@@ -151,7 +151,7 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu)
 	struct kvm_guest_debug_arch *host_dbg;
 	struct kvm_guest_debug_arch *guest_dbg;
 
-	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
+	if (!vcpu_get_flag(vcpu, DEBUG_DIRTY))
 		return;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
@@ -162,7 +162,7 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu)
 	__debug_save_state(guest_dbg, guest_ctxt);
 	__debug_restore_state(host_dbg, host_ctxt);
 
-	vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
+	vcpu_clear_flag(vcpu, DEBUG_DIRTY);
 }
 
 #endif /* __ARM64_KVM_HYP_DEBUG_SR_H__ */
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index 7ecca8b07851..baa5b9b3dde5 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -195,7 +195,7 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu)
 	__vcpu_sys_reg(vcpu, DACR32_EL2) = read_sysreg(dacr32_el2);
 	__vcpu_sys_reg(vcpu, IFSR32_EL2) = read_sysreg(ifsr32_el2);
 
-	if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
+	if (has_vhe() || vcpu_get_flag(vcpu, DEBUG_DIRTY))
 		__vcpu_sys_reg(vcpu, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2);
 }
 
@@ -212,7 +212,7 @@ static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
 	write_sysreg(__vcpu_sys_reg(vcpu, DACR32_EL2), dacr32_el2);
 	write_sysreg(__vcpu_sys_reg(vcpu, IFSR32_EL2), ifsr32_el2);
 
-	if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
+	if (has_vhe() || vcpu_get_flag(vcpu, DEBUG_DIRTY))
 		write_sysreg(__vcpu_sys_reg(vcpu, DBGVCR32_EL2), dbgvcr32_el2);
 }
 
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index df361d839902..e17455773b98 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -84,10 +84,10 @@ static void __debug_restore_trace(u64 trfcr_el1)
 void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu)
 {
 	/* Disable and flush SPE data generation */
-	if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE)
+	if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_SPE))
 		__debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1);
 	/* Disable and flush Self-Hosted Trace generation */
-	if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE)
+	if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_TRBE))
 		__debug_save_trace(&vcpu->arch.host_debug_state.trfcr_el1);
 }
 
@@ -98,9 +98,9 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
 
 void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu)
 {
-	if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE)
+	if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_SPE))
 		__debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1);
-	if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE)
+	if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_TRBE))
 		__debug_restore_trace(vcpu->arch.host_debug_state.trfcr_el1);
 }
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index d77be152cbd5..d6a55ed9ff10 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -387,7 +387,7 @@ static bool trap_debug_regs(struct kvm_vcpu *vcpu,
 {
 	if (p->is_write) {
 		vcpu_write_sys_reg(vcpu, p->regval, r->reg);
-		vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
+		vcpu_set_flag(vcpu, DEBUG_DIRTY);
 	} else {
 		p->regval = vcpu_read_sys_reg(vcpu, r->reg);
 	}
@@ -403,8 +403,8 @@ static bool trap_debug_regs(struct kvm_vcpu *vcpu,
  * A 32 bit write to a debug register leave top bits alone
  * A 32 bit read from a debug register only returns the bottom bits
  *
- * All writes will set the KVM_ARM64_DEBUG_DIRTY flag to ensure the
- * hyp.S code switches between host and guest values in future.
+ * All writes will set the DEBUG_DIRTY flag to ensure the hyp code
+ * switches between host and guest values in future.
  */
 static void reg_to_dbg(struct kvm_vcpu *vcpu,
 		       struct sys_reg_params *p,
@@ -420,7 +420,7 @@ static void reg_to_dbg(struct kvm_vcpu *vcpu,
 	val |= (p->regval & (mask >> shift)) << shift;
 	*dbg_reg = val;
 
-	vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
+	vcpu_set_flag(vcpu, DEBUG_DIRTY);
 }
 
 static void dbg_to_reg(struct kvm_vcpu *vcpu,
-- 
2.34.1

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 141+ messages in thread


* [PATCH 10/18] KVM: arm64: Move vcpu SVE/SME flags to the state flag set
  2022-05-28 11:38 ` Marc Zyngier
@ 2022-05-28 11:38   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

The two HOST_{SVE,SME}_ENABLED flags are only used by the host kernel
to track its own state across a vcpu run so that it can be fully
restored afterwards.

Move these flags to the so-called state set.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h |  8 +++++---
 arch/arm64/kvm/fpsimd.c           | 12 ++++++------
 2 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a426cd3aaa74..a28a2dca8767 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -507,6 +507,11 @@ struct kvm_vcpu_arch {
 /* Save TRBE context if active  */
 #define DEBUG_STATE_SAVE_TRBE	__vcpu_single_flag(iflags, BIT(6))
 
+/* SVE enabled for host EL0 */
+#define HOST_SVE_ENABLED	__vcpu_single_flag(sflags, BIT(0))
+/* SME enabled for EL0 */
+#define HOST_SME_ENABLED	__vcpu_single_flag(sflags, BIT(1))
+
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
 			     sve_ffr_offset((vcpu)->arch.sve_max_vl))
@@ -528,11 +533,8 @@ struct kvm_vcpu_arch {
 })
 
 /* vcpu_arch flags field values: */
-#define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
 #define KVM_ARM64_ON_UNSUPPORTED_CPU	(1 << 15) /* Physical CPU not in supported_cpus */
-#define KVM_ARM64_HOST_SME_ENABLED	(1 << 16) /* SME enabled for EL0 */
 #define KVM_ARM64_WFIT			(1 << 17) /* WFIT instruction trapped */
-
 #define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | \
 				 KVM_GUESTDBG_USE_SW_BP | \
 				 KVM_GUESTDBG_USE_HW | \
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 0d82f6c5b110..1f5238c80d27 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -79,9 +79,9 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 
 	vcpu->arch.fp_state = FP_STATE_DIRTY_HOST;
 
-	vcpu->arch.flags &= ~KVM_ARM64_HOST_SVE_ENABLED;
+	vcpu_clear_flag(vcpu, HOST_SVE_ENABLED);
 	if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
-		vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED;
+		vcpu_set_flag(vcpu, HOST_SVE_ENABLED);
 
 	/*
 	 * We don't currently support SME guests but if we leave
@@ -93,9 +93,9 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 	 * operations. Do this for ZA as well for now for simplicity.
 	 */
 	if (system_supports_sme()) {
-		vcpu->arch.flags &= ~KVM_ARM64_HOST_SME_ENABLED;
+		vcpu_clear_flag(vcpu, HOST_SME_ENABLED);
 		if (read_sysreg(cpacr_el1) & CPACR_EL1_SMEN_EL0EN)
-			vcpu->arch.flags |= KVM_ARM64_HOST_SME_ENABLED;
+			vcpu_set_flag(vcpu, HOST_SME_ENABLED);
 
 		if (read_sysreg_s(SYS_SVCR_EL0) &
 		    (SYS_SVCR_EL0_SM_MASK | SYS_SVCR_EL0_ZA_MASK)) {
@@ -165,7 +165,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 	 */
 	if (has_vhe() && system_supports_sme()) {
 		/* Also restore EL0 state seen on entry */
-		if (vcpu->arch.flags & KVM_ARM64_HOST_SME_ENABLED)
+		if (vcpu_get_flag(vcpu, HOST_SME_ENABLED))
 			sysreg_clear_set(CPACR_EL1, 0,
 					 CPACR_EL1_SMEN_EL0EN |
 					 CPACR_EL1_SMEN_EL1EN);
@@ -194,7 +194,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 		 * for EL0.  To avoid spurious traps, restore the trap state
 		 * seen by kvm_arch_vcpu_load_fp():
 		 */
-		if (vcpu->arch.flags & KVM_ARM64_HOST_SVE_ENABLED)
+		if (vcpu_get_flag(vcpu, HOST_SVE_ENABLED))
 			sysreg_clear_set(CPACR_EL1, 0, CPACR_EL1_ZEN_EL0EN);
 		else
 			sysreg_clear_set(CPACR_EL1, CPACR_EL1_ZEN_EL0EN, 0);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 141+ messages in thread

+		vcpu_clear_flag(vcpu, HOST_SME_ENABLED);
 		if (read_sysreg(cpacr_el1) & CPACR_EL1_SMEN_EL0EN)
-			vcpu->arch.flags |= KVM_ARM64_HOST_SME_ENABLED;
+			vcpu_set_flag(vcpu, HOST_SME_ENABLED);
 
 		if (read_sysreg_s(SYS_SVCR_EL0) &
 		    (SYS_SVCR_EL0_SM_MASK | SYS_SVCR_EL0_ZA_MASK)) {
@@ -165,7 +165,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 	 */
 	if (has_vhe() && system_supports_sme()) {
 		/* Also restore EL0 state seen on entry */
-		if (vcpu->arch.flags & KVM_ARM64_HOST_SME_ENABLED)
+		if (vcpu_get_flag(vcpu, HOST_SME_ENABLED))
 			sysreg_clear_set(CPACR_EL1, 0,
 					 CPACR_EL1_SMEN_EL0EN |
 					 CPACR_EL1_SMEN_EL1EN);
@@ -194,7 +194,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 		 * for EL0.  To avoid spurious traps, restore the trap state
 		 * seen by kvm_arch_vcpu_load_fp():
 		 */
-		if (vcpu->arch.flags & KVM_ARM64_HOST_SVE_ENABLED)
+		if (vcpu_get_flag(vcpu, HOST_SVE_ENABLED))
 			sysreg_clear_set(CPACR_EL1, 0, CPACR_EL1_ZEN_EL0EN);
 		else
 			sysreg_clear_set(CPACR_EL1, CPACR_EL1_ZEN_EL0EN, 0);
-- 
2.34.1




* [PATCH 11/18] KVM: arm64: Move vcpu ON_UNSUPPORTED_CPU flag to the state flag set
  2022-05-28 11:38 ` Marc Zyngier
  (?)
@ 2022-05-28 11:38   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

The ON_UNSUPPORTED_CPU flag is only there to track the sad fact
that we have ended up on a CPU where we cannot really run.

Since this is only for the host kernel's use, move it to the state
set.
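
Note that the existing vcpu_*_on_unsupported_cpu() wrappers keep their
call sites unchanged while the backing store moves. A hypothetical
stand-alone reduction of that pattern (the generic accessors are
simplified stand-ins for the kvm_host.h macros):

```c
#include <stdint.h>

/* Simplified stand-in for the vcpu structure and generic accessors. */
struct vcpu { struct { uint64_t sflags; } arch; };

#define ON_UNSUPPORTED_CPU		(UINT64_C(1) << 2)

#define vcpu_get_flag(v, f)		(!!((v)->arch.sflags & (f)))
#define vcpu_set_flag(v, f)		((v)->arch.sflags |= (f))
#define vcpu_clear_flag(v, f)		((v)->arch.sflags &= ~(f))

/* The named wrappers survive unchanged at their call sites: */
#define vcpu_on_unsupported_cpu(v)	vcpu_get_flag(v, ON_UNSUPPORTED_CPU)
#define vcpu_set_on_unsupported_cpu(v)	vcpu_set_flag(v, ON_UNSUPPORTED_CPU)
#define vcpu_clear_on_unsupported_cpu(v) vcpu_clear_flag(v, ON_UNSUPPORTED_CPU)
```

Only the expansion of the three wrappers changes; callers are untouched,
which is what keeps this patch so small.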

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a28a2dca8767..e0a2edca5861 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -511,6 +511,8 @@ struct kvm_vcpu_arch {
 #define HOST_SVE_ENABLED	__vcpu_single_flag(sflags, BIT(0))
 /* SME enabled for EL0 */
 #define HOST_SME_ENABLED	__vcpu_single_flag(sflags, BIT(1))
+/* Physical CPU not in supported_cpus */
+#define ON_UNSUPPORTED_CPU	__vcpu_single_flag(sflags, BIT(2))
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
@@ -533,7 +535,6 @@ struct kvm_vcpu_arch {
 })
 
 /* vcpu_arch flags field values: */
-#define KVM_ARM64_ON_UNSUPPORTED_CPU	(1 << 15) /* Physical CPU not in supported_cpus */
 #define KVM_ARM64_WFIT			(1 << 17) /* WFIT instruction trapped */
 #define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | \
 				 KVM_GUESTDBG_USE_SW_BP | \
@@ -553,13 +554,13 @@ struct kvm_vcpu_arch {
 #endif
 
 #define vcpu_on_unsupported_cpu(vcpu)					\
-	((vcpu)->arch.flags & KVM_ARM64_ON_UNSUPPORTED_CPU)
+	vcpu_get_flag(vcpu, ON_UNSUPPORTED_CPU)
 
 #define vcpu_set_on_unsupported_cpu(vcpu)				\
-	((vcpu)->arch.flags |= KVM_ARM64_ON_UNSUPPORTED_CPU)
+	vcpu_set_flag(vcpu, ON_UNSUPPORTED_CPU)
 
 #define vcpu_clear_on_unsupported_cpu(vcpu)				\
-	((vcpu)->arch.flags &= ~KVM_ARM64_ON_UNSUPPORTED_CPU)
+	vcpu_clear_flag(vcpu, ON_UNSUPPORTED_CPU)
 
 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.regs)
 
-- 
2.34.1




* [PATCH 12/18] KVM: arm64: Move vcpu WFIT flag to the state flag set
  2022-05-28 11:38 ` Marc Zyngier
  (?)
@ 2022-05-28 11:38   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

The host kernel uses the WFIT flag to remember that a vcpu has used
this instruction, and to wake it up as required. Move it to the state
set, as nothing in the hypervisor uses this information.
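
The flag's life cycle spans three files: it is set on a trapped WFIT in
handle_exit.c, consumed by arch_timer.c to decide whether a WFIT
deadline is pending, and cleared in kvm_vcpu_wfi(). A hedged
stand-alone model of that flow (the ESR bit position, structure layout
and helper names are simplified stand-ins, not the kernel's):

```c
#include <stdbool.h>
#include <stdint.h>

#define ESR_WFx_ISS_WFxT	(UINT64_C(1) << 2)  /* stand-in for ESR_ELx_WFx_ISS_WFxT */
#define IN_WFIT			(UINT64_C(1) << 3)

struct vcpu {
	uint64_t sflags;
	bool has_wfxt;		/* models cpus_have_final_cap(ARM64_HAS_WFXT) */
};

/* handle_exit.c: remember that the trapped instruction was WFIT */
static void handle_wfx(struct vcpu *v, uint64_t esr)
{
	if (esr & ESR_WFx_ISS_WFxT)
		v->sflags |= IN_WFIT;
}

/* arch_timer.c: a WFIT deadline only matters if the CPU has WFxT */
static bool vcpu_has_wfit_active(const struct vcpu *v)
{
	return v->has_wfxt && (v->sflags & IN_WFIT);
}

/* arm.c: the flag is dropped once the vcpu is woken back up */
static void vcpu_wfi_done(struct vcpu *v)
{
	v->sflags &= ~IN_WFIT;
}
```

Since every reader and writer runs in the host, nothing needs to be
conveyed to the hypervisor, which is why the state set is the right
home for it.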

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 4 ++--
 arch/arm64/kvm/arch_timer.c       | 2 +-
 arch/arm64/kvm/arm.c              | 2 +-
 arch/arm64/kvm/handle_exit.c      | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e0a2edca5861..fe7e1c44e6e9 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -513,6 +513,8 @@ struct kvm_vcpu_arch {
 #define HOST_SME_ENABLED	__vcpu_single_flag(sflags, BIT(1))
 /* Physical CPU not in supported_cpus */
 #define ON_UNSUPPORTED_CPU	__vcpu_single_flag(sflags, BIT(2))
+/* WFIT instruction trapped */
+#define IN_WFIT			__vcpu_single_flag(sflags, BIT(3))
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
@@ -534,8 +536,6 @@ struct kvm_vcpu_arch {
 	__size_ret;							\
 })
 
-/* vcpu_arch flags field values: */
-#define KVM_ARM64_WFIT			(1 << 17) /* WFIT instruction trapped */
 #define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | \
 				 KVM_GUESTDBG_USE_SW_BP | \
 				 KVM_GUESTDBG_USE_HW | \
diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index 4e39ace073af..5290ca5db663 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -242,7 +242,7 @@ static bool kvm_timer_irq_can_fire(struct arch_timer_context *timer_ctx)
 static bool vcpu_has_wfit_active(struct kvm_vcpu *vcpu)
 {
 	return (cpus_have_final_cap(ARM64_HAS_WFXT) &&
-		(vcpu->arch.flags & KVM_ARM64_WFIT));
+		vcpu_get_flag(vcpu, IN_WFIT));
 }
 
 static u64 wfit_delay_ns(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index d7d42d79ede1..49a3fe9f7009 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -657,7 +657,7 @@ void kvm_vcpu_wfi(struct kvm_vcpu *vcpu)
 	preempt_enable();
 
 	kvm_vcpu_halt(vcpu);
-	vcpu->arch.flags &= ~KVM_ARM64_WFIT;
+	vcpu_clear_flag(vcpu, IN_WFIT);
 	kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 
 	preempt_disable();
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 2ebebd3efaee..dac86d2c6654 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -120,7 +120,7 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu)
 		kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
 	} else {
 		if (esr & ESR_ELx_WFx_ISS_WFxT)
-			vcpu->arch.flags |= KVM_ARM64_WFIT;
+			vcpu_set_flag(vcpu, IN_WFIT);
 
 		kvm_vcpu_wfi(vcpu);
 	}
-- 
2.34.1




* [PATCH 13/18] KVM: arm64: Kill unused vcpu flags field
  2022-05-28 11:38 ` Marc Zyngier
  (?)
@ 2022-05-28 11:38   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

Hooray, we have now sorted all the preexisting flags, and the
'flags' field is now unused. Get rid of it while nobody is
looking.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index fe7e1c44e6e9..d571c9991a11 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -335,9 +335,6 @@ struct kvm_vcpu_arch {
 		FP_STATE_DIRTY_GUEST,
 	} fp_state;
 
-	/* Miscellaneous vcpu state flags */
-	u64 flags;
-
 	/* Configuration flags */
 	u64 cflags;
 
-- 
2.34.1




* [PATCH 14/18] KVM: arm64: Convert vcpu sysregs_loaded_on_cpu to a state flag
  2022-05-28 11:38 ` Marc Zyngier
  (?)
@ 2022-05-28 11:38   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

The aptly named boolean 'sysregs_loaded_on_cpu' tracks whether
some of the vcpu system registers are resident on the physical
CPU when running in VHE mode.

This is obviously a flag in hiding, so let's convert it to
a state flag, since this is solely a host concern (the hypervisor
itself always knows which state we're in).
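
The conversion leaves the read/write fast path in sys_regs.c
structurally intact: if the flag says the registers are resident on the
CPU, access the hardware copy, otherwise fall back to the in-memory
context. A simplified model of just that dispatch (a single value
stands in for the real sysreg machinery; field names are illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

struct vcpu {
	bool sysregs_on_cpu;	/* models the SYSREGS_ON_CPU state flag */
	uint64_t hw_copy;	/* value currently held by the physical CPU */
	uint64_t mem_copy;	/* value saved in the vcpu context */
};

static uint64_t vcpu_read_sys_reg(const struct vcpu *v)
{
	if (v->sysregs_on_cpu)	/* VHE, registers loaded: read hardware */
		return v->hw_copy;
	return v->mem_copy;	/* otherwise read the saved context */
}

static void vcpu_write_sys_reg(struct vcpu *v, uint64_t val)
{
	if (v->sysregs_on_cpu)	/* VHE, registers loaded: write hardware */
		v->hw_copy = val;
	else
		v->mem_copy = val;
}
```

Only the predicate changes from a dedicated bool to a state-set flag;
the load/put paths in sysreg-sr.c set and clear it at exactly the same
points the bool was assigned.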

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h  | 6 ++----
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c | 4 ++--
 arch/arm64/kvm/sys_regs.c          | 4 ++--
 3 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index d571c9991a11..4073a33af17c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -413,10 +413,6 @@ struct kvm_vcpu_arch {
 	/* Additional reset state */
 	struct vcpu_reset_state	reset_state;
 
-	/* True when deferrable sysregs are loaded on the physical CPU,
-	 * see kvm_vcpu_load_sysregs_vhe and kvm_vcpu_put_sysregs_vhe. */
-	bool sysregs_loaded_on_cpu;
-
 	/* Guest PV state */
 	struct {
 		u64 last_steal;
@@ -512,6 +508,8 @@ struct kvm_vcpu_arch {
 #define ON_UNSUPPORTED_CPU	__vcpu_single_flag(sflags, BIT(2))
 /* WFIT instruction trapped */
 #define IN_WFIT			__vcpu_single_flag(sflags, BIT(3))
+/* vcpu system registers loaded on physical CPU */
+#define SYSREGS_ON_CPU		__vcpu_single_flag(sflags, BIT(4))
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
index 007a12dd4351..7b44f6b3b547 100644
--- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
@@ -79,7 +79,7 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
 	__sysreg_restore_user_state(guest_ctxt);
 	__sysreg_restore_el1_state(guest_ctxt);
 
-	vcpu->arch.sysregs_loaded_on_cpu = true;
+	vcpu_set_flag(vcpu, SYSREGS_ON_CPU);
 
 	activate_traps_vhe_load(vcpu);
 }
@@ -110,5 +110,5 @@ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu)
 	/* Restore host user state */
 	__sysreg_restore_user_state(host_ctxt);
 
-	vcpu->arch.sysregs_loaded_on_cpu = false;
+	vcpu_clear_flag(vcpu, SYSREGS_ON_CPU);
 }
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index d6a55ed9ff10..684a22d6ecf7 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -72,7 +72,7 @@ u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
 {
 	u64 val = 0x8badf00d8badf00d;
 
-	if (vcpu->arch.sysregs_loaded_on_cpu &&
+	if (vcpu_get_flag(vcpu, SYSREGS_ON_CPU) &&
 	    __vcpu_read_sys_reg_from_cpu(reg, &val))
 		return val;
 
@@ -81,7 +81,7 @@ u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
 
 void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
 {
-	if (vcpu->arch.sysregs_loaded_on_cpu &&
+	if (vcpu_get_flag(vcpu, SYSREGS_ON_CPU) &&
 	    __vcpu_write_sys_reg_to_cpu(val, reg))
 		return;
 
-- 
2.34.1



* [PATCH 14/18] KVM: arm64: Convert vcpu sysregs_loaded_on_cpu to a state flag
@ 2022-05-28 11:38   ` Marc Zyngier
  0 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel; +Cc: kernel-team, Will Deacon, Mark Brown

The aptly named boolean 'sysregs_loaded_on_cpu' tracks whether
some of the vcpu system registers are resident on the physical
CPU when running in VHE mode.

This is obviously a flag in hiding, so let's convert it to
a state flag, since this is solely a host concern (the hypervisor
itself always knows which state we're in).

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h  | 6 ++----
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c | 4 ++--
 arch/arm64/kvm/sys_regs.c          | 4 ++--
 3 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index d571c9991a11..4073a33af17c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -413,10 +413,6 @@ struct kvm_vcpu_arch {
 	/* Additional reset state */
 	struct vcpu_reset_state	reset_state;
 
-	/* True when deferrable sysregs are loaded on the physical CPU,
-	 * see kvm_vcpu_load_sysregs_vhe and kvm_vcpu_put_sysregs_vhe. */
-	bool sysregs_loaded_on_cpu;
-
 	/* Guest PV state */
 	struct {
 		u64 last_steal;
@@ -512,6 +508,8 @@ struct kvm_vcpu_arch {
 #define ON_UNSUPPORTED_CPU	__vcpu_single_flag(sflags, BIT(2))
 /* WFIT instruction trapped */
 #define IN_WFIT			__vcpu_single_flag(sflags, BIT(3))
+/* vcpu system registers loaded on physical CPU */
+#define SYSREGS_ON_CPU		__vcpu_single_flag(sflags, BIT(4))
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
index 007a12dd4351..7b44f6b3b547 100644
--- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
@@ -79,7 +79,7 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
 	__sysreg_restore_user_state(guest_ctxt);
 	__sysreg_restore_el1_state(guest_ctxt);
 
-	vcpu->arch.sysregs_loaded_on_cpu = true;
+	vcpu_set_flag(vcpu, SYSREGS_ON_CPU);
 
 	activate_traps_vhe_load(vcpu);
 }
@@ -110,5 +110,5 @@ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu)
 	/* Restore host user state */
 	__sysreg_restore_user_state(host_ctxt);
 
-	vcpu->arch.sysregs_loaded_on_cpu = false;
+	vcpu_clear_flag(vcpu, SYSREGS_ON_CPU);
 }
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index d6a55ed9ff10..684a22d6ecf7 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -72,7 +72,7 @@ u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
 {
 	u64 val = 0x8badf00d8badf00d;
 
-	if (vcpu->arch.sysregs_loaded_on_cpu &&
+	if (vcpu_get_flag(vcpu, SYSREGS_ON_CPU) &&
 	    __vcpu_read_sys_reg_from_cpu(reg, &val))
 		return val;
 
@@ -81,7 +81,7 @@ u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
 
 void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
 {
-	if (vcpu->arch.sysregs_loaded_on_cpu &&
+	if (vcpu_get_flag(vcpu, SYSREGS_ON_CPU) &&
 	    __vcpu_write_sys_reg_to_cpu(val, reg))
 		return;
 
-- 
2.34.1

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 141+ messages in thread

* [PATCH 15/18] KVM: arm64: Warn when PENDING_EXCEPTION and INCREMENT_PC are set together
  2022-05-28 11:38 ` Marc Zyngier
  (?)
@ 2022-05-28 11:38   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

We really don't want PENDING_EXCEPTION and INCREMENT_PC to ever be
set at the same time, as they are mutually exclusive. Add checks
that will generate a warning should this ever happen.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_emulate.h | 1 +
 arch/arm64/kvm/hyp/nvhe/sys_regs.c   | 2 ++
 arch/arm64/kvm/inject_fault.c        | 8 ++++++++
 3 files changed, 11 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 46e631cd8d9e..861fa0b24a7f 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -473,6 +473,7 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu,
 
 static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
 {
+	WARN_ON(vcpu_get_flag(vcpu, PENDING_EXCEPTION));
 	vcpu_set_flag(vcpu, INCREMENT_PC);
 }
 
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index 2841a2d447a1..04973984b6db 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -38,6 +38,8 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 	*vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
 	*vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR);
 
+	WARN_ON(vcpu_get_flag(vcpu, INCREMENT_PC));
+
 	vcpu_set_flag(vcpu, PENDING_EXCEPTION);
 	vcpu_set_flag(vcpu, EXCEPT_AA64_EL1_SYNC);
 
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index a9a7b513f3b0..2f4b9afc16ec 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -20,6 +20,8 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
 	bool is_aarch32 = vcpu_mode_is_32bit(vcpu);
 	u32 esr = 0;
 
+	WARN_ON(vcpu_get_flag(vcpu, INCREMENT_PC));
+
 	vcpu_set_flag(vcpu, PENDING_EXCEPTION);
 	vcpu_set_flag(vcpu, EXCEPT_AA64_EL1_SYNC);
 
@@ -51,6 +53,8 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 {
 	u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
 
+	WARN_ON(vcpu_get_flag(vcpu, INCREMENT_PC));
+
 	vcpu_set_flag(vcpu, PENDING_EXCEPTION);
 	vcpu_set_flag(vcpu, EXCEPT_AA64_EL1_SYNC);
 
@@ -71,6 +75,8 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 
 static void inject_undef32(struct kvm_vcpu *vcpu)
 {
+	WARN_ON(vcpu_get_flag(vcpu, INCREMENT_PC));
+
 	vcpu_set_flag(vcpu, PENDING_EXCEPTION);
 	vcpu_set_flag(vcpu, EXCEPT_AA32_UND);
 }
@@ -94,6 +100,8 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt, u32 addr)
 
 	far = vcpu_read_sys_reg(vcpu, FAR_EL1);
 
+	WARN_ON(vcpu_get_flag(vcpu, INCREMENT_PC));
+
 	if (is_pabt) {
 		vcpu_set_flag(vcpu, PENDING_EXCEPTION);
 		vcpu_set_flag(vcpu, EXCEPT_AA32_IABT);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 141+ messages in thread

* [PATCH 16/18] KVM: arm64: Add build-time sanity checks for flags
  2022-05-28 11:38 ` Marc Zyngier
  (?)
@ 2022-05-28 11:38   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

Flags are great, but flags can also be dangerous: it is easy
to encode a flag that is bigger than its container (unless the
container is a u64), and it is easy to construct a flag value
that doesn't fit in the mask that is associated with it.

Add a couple of build-time sanity checks that ensure we catch
these two cases.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 4073a33af17c..70931231f0cb 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -420,8 +420,20 @@ struct kvm_vcpu_arch {
 	} steal;
 };
 
+#define __build_check_flag(v, flagset, f, m)			\
+	do {							\
+		typeof(v->arch.flagset) *_fset;			\
+								\
+		/* Check that the flags fit in the mask */	\
+		BUILD_BUG_ON(HWEIGHT(m) != HWEIGHT((f) | (m)));	\
+		/* Check that the flags fit in the type */	\
+		BUILD_BUG_ON((sizeof(*_fset) * 8) <= __fls(m));	\
+	} while (0)
+
 #define __vcpu_get_flag(v, flagset, f, m)			\
 	({							\
+		__build_check_flag(v, flagset, f, m);		\
+								\
 		v->arch.flagset & (m);				\
 	})
 
@@ -429,6 +441,8 @@ struct kvm_vcpu_arch {
 	do {							\
 		typeof(v->arch.flagset) *fset;			\
 								\
+		__build_check_flag(v, flagset, f, m);		\
+								\
 		fset = &v->arch.flagset;			\
 		if (HWEIGHT(m) > 1)				\
 			*fset &= ~(m);				\
@@ -439,6 +453,8 @@ struct kvm_vcpu_arch {
 	do {							\
 		typeof(v->arch.flagset) *fset;			\
 								\
+		__build_check_flag(v, flagset, f, m);		\
+								\
 		fset = &v->arch.flagset;			\
 		*fset &= ~(m);					\
 	} while (0)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 141+ messages in thread

* [PATCH 17/18] KVM: arm64: Reduce the size of the vcpu flag members
  2022-05-28 11:38 ` Marc Zyngier
  (?)
@ 2022-05-28 11:38   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

Now that we can detect flags overflowing their container, reduce
the size of all flag set members in the vcpu struct, turning them
into 8bit quantities.

Even with the FP state enum occupying 32bit, the whole of the state
that was represented by flags is smaller by one byte. Profit!

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 70931231f0cb..83f3dae4333a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -336,13 +336,13 @@ struct kvm_vcpu_arch {
 	} fp_state;
 
 	/* Configuration flags */
-	u64 cflags;
+	u8 cflags;
 
 	/* Input flags to the hypervisor code */
-	u64 iflags;
+	u8 iflags;
 
 	/* State flags, unused by the hypervisor code */
-	u64 sflags;
+	u8 sflags;
 
 	/*
 	 * We maintain more than a single set of debug registers to support
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 141+ messages in thread

* [PATCH 18/18] KVM: arm64: Document why pause cannot be turned into a flag
  2022-05-28 11:38 ` Marc Zyngier
  (?)
@ 2022-05-28 11:38   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-28 11:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

It would be tempting to turn the 'pause' state into a flag.

However, this cannot easily be done, as it is updated out of context,
while all the flags are expected to only be updated from the vcpu
thread. Turning it into a flag would require making all flag updates
atomic, which isn't necessarily desirable.

Document this, and take this opportunity to move the field next
to the flag sets, filling a hole in the vcpu structure.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 83f3dae4333a..8c47b7f8ef92 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -344,6 +344,15 @@ struct kvm_vcpu_arch {
 	/* State flags, unused by the hypervisor code */
 	u8 sflags;
 
+	/*
+	 * Don't run the guest (internal implementation need).
+	 *
+	 * Contrary to the flags above, this is set/cleared outside of
+	 * a vcpu context, and thus cannot be mixed with the flags
+	 * themselves (or the flag accesses need to be made atomic).
+	 */
+	bool pause;
+
 	/*
 	 * We maintain more than a single set of debug registers to support
 	 * debugging the guest from the host and to maintain separate host and
@@ -397,9 +406,6 @@ struct kvm_vcpu_arch {
 	/* vcpu power state */
 	struct kvm_mp_state mp_state;
 
-	/* Don't run the guest (internal implementation need) */
-	bool pause;
-
 	/* Cache some mmu pages needed inside spinlock regions */
 	struct kvm_mmu_memory_cache mmu_page_cache;
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 141+ messages in thread

* Re: [PATCH 00/18] KVM/arm64: Refactoring the vcpu flags
  2022-05-28 11:38 ` Marc Zyngier
  (?)
@ 2022-05-30  8:28   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-05-30  8:28 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Oliver Upton,
	Will Deacon, Fuad Tabba, Quentin Perret, Mark Brown, kernel-team

On 2022-05-28 12:38, Marc Zyngier wrote:

[...]

> This has been lightly tested on both VHE and nVHE systems, but not
> with pKVM itself (there is a bit of work to rebase it on top of this
> infrastructure). Patches on top of kvmarm-4.19 (there is a minor
> conflict with Linus' current tree).

As Will just pointed out to me in private, this should really read
kvmarm-5.19, as that's what the patches are actually based on.

I guess I'm still suffering from a form of Stockholm syndrome...

         M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 01/18] KVM: arm64: Always start with clearing SVE flag on load
  2022-05-28 11:38   ` Marc Zyngier
  (?)
@ 2022-05-30 14:41     ` Mark Brown
  -1 siblings, 0 replies; 141+ messages in thread
From: Mark Brown @ 2022-05-30 14:41 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kernel-team, kvm, Will Deacon, stable, kvmarm, linux-arm-kernel



On Sat, May 28, 2022 at 12:38:11PM +0100, Marc Zyngier wrote:
> On each vcpu load, we set the KVM_ARM64_HOST_SVE_ENABLED
> flag if SVE is enabled for EL0 on the host. This is used to restore
> the correct state on vcpu put.
> 
> However, it appears that nothing ever clears this flag. Once
> set, it will stick until the vcpu is destroyed, which has the
> potential to spuriously enable SVE for userspace.

Oh dear.

Reviewed-by: Mark Brown <broonie@kernel.org>

> We probably never saw the issue because no VMM uses SVE, but
> that's still pretty bad. Unconditionally clearing the flag
> on vcpu load addresses the issue.

Unless I'm missing something: since we currently always disable
SVE on syscall, even if the VMM were using SVE for some reason
(an SVE memcpy()?), we should already have disabled SVE for EL0
in sve_user_discard() during kernel entry, so EL0 access to SVE
should be disabled in the system register by the time we get
here.


^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 02/18] KVM: arm64: Always start with clearing SME flag on load
  2022-05-28 11:38   ` Marc Zyngier
  (?)
@ 2022-05-30 14:51     ` Mark Brown
  -1 siblings, 0 replies; 141+ messages in thread
From: Mark Brown @ 2022-05-30 14:51 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kernel-team, kvm, Will Deacon, kvmarm, linux-arm-kernel



On Sat, May 28, 2022 at 12:38:12PM +0100, Marc Zyngier wrote:
> On each vcpu load, we set the KVM_ARM64_HOST_SME_ENABLED
> flag if SVE is enabled for EL0 on the host. This is used to
> restore the correct state on vcpu put.

s/SVE/SME/

but otherwise

Reviewed-by: Mark Brown <broonie@kernel.org>


^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 03/18] KVM: arm64: Drop FP_FOREIGN_STATE from the hypervisor code
  2022-05-28 11:38   ` Marc Zyngier
  (?)
@ 2022-06-03  5:23     ` Reiji Watanabe
  -1 siblings, 0 replies; 141+ messages in thread
From: Reiji Watanabe @ 2022-06-03  5:23 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, kvm, Linux ARM, kernel-team, Will Deacon, Mark Brown

Hi Marc,

On Sat, May 28, 2022 at 4:38 AM Marc Zyngier <maz@kernel.org> wrote:
>
> The vcpu KVM_ARM64_FP_FOREIGN_FPSTATE flag tracks the thread's own
> TIF_FOREIGN_FPSTATE so that we can evaluate just before running
> the vcpu whether the FP regs contain something that is owned
> by the vcpu or not by updating the rest of the FP flags.
>
> We do this in the hypervisor code in order to make sure we're
> in a context where we are not interruptible. But we already
> have a hook in the run loop to generate this flag. We may as
> well update the FP flags directly and save the pointless flag
> tracking.
>
> Whilst we're at it, rename update_fp_enabled() to guest_owns_fp_regs()
> to indicate what the leftover of this helper actually does.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Reviewed-by: Reiji Watanabe <reijiw@google.com>


> --- a/arch/arm64/kvm/fpsimd.c
> +++ b/arch/arm64/kvm/fpsimd.c
> @@ -107,16 +107,19 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
>  }
>
>  /*
> - * Called just before entering the guest once we are no longer
> - * preemptable. Syncs the host's TIF_FOREIGN_FPSTATE with the KVM
> - * mirror of the flag used by the hypervisor.
> + * Called just before entering the guest once we are no longer preemptable
> + * and interrupts are disabled. If we have managed to run anything using
> + * FP while we were preemptible (such as off the back of an interrupt),
> + * then neither the host nor the guest own the FP hardware (and it was the
> + * responsibility of the code that used FP to save the existing state).
> + *
> + * Note that not supporting FP is basically the same thing as far as the
> + * hypervisor is concerned (nothing to save).
>   */
>  void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu)
>  {
> -       if (test_thread_flag(TIF_FOREIGN_FPSTATE))
> -               vcpu->arch.flags |= KVM_ARM64_FP_FOREIGN_FPSTATE;
> -       else
> -               vcpu->arch.flags &= ~KVM_ARM64_FP_FOREIGN_FPSTATE;
> +       if (!system_supports_fpsimd() || test_thread_flag(TIF_FOREIGN_FPSTATE))
> +               vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST);
>  }

Although kvm_arch_vcpu_load_fp() unconditionally sets KVM_ARM64_FP_HOST,
perhaps having kvm_arch_vcpu_load_fp() set KVM_ARM64_FP_HOST only when
FP is supported might be more consistent?
Then, checking system_supports_fpsimd() is unnecessary here.
(KVM_ARM64_FP_ENABLED is not set when FP is not supported)

Thanks,
Reiji

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 03/18] KVM: arm64: Drop FP_FOREIGN_STATE from the hypervisor code
  2022-05-28 11:38   ` Marc Zyngier
  (?)
@ 2022-06-03  9:09     ` Mark Brown
  -1 siblings, 0 replies; 141+ messages in thread
From: Mark Brown @ 2022-06-03  9:09 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Alexandru Elisei, Oliver Upton, Will Deacon, Fuad Tabba,
	Quentin Perret, kernel-team


On Sat, May 28, 2022 at 12:38:13PM +0100, Marc Zyngier wrote:
> The vcpu KVM_ARM64_FP_FOREIGN_FPSTATE flag tracks the thread's own
> TIF_FOREIGN_FPSTATE so that we can evaluate just before running
> the vcpu whether it the FP regs contain something that is owned
> by the vcpu or not by updating the rest of the FP flags.

Reviewed-by: Mark Brown <broonie@kernel.org>


^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 04/18] KVM: arm64: Move FP state ownership from flag to a tristate
  2022-05-28 11:38   ` Marc Zyngier
  (?)
@ 2022-06-03  9:14     ` Mark Brown
  -1 siblings, 0 replies; 141+ messages in thread
From: Mark Brown @ 2022-06-03  9:14 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Alexandru Elisei, Oliver Upton, Will Deacon, Fuad Tabba,
	Quentin Perret, kernel-team


On Sat, May 28, 2022 at 12:38:14PM +0100, Marc Zyngier wrote:

> As it turns out, this isn't really a good match for flags, and
> we'd be better off if this was a simpler tristate, each state
> having a name that actually reflect the state:
> 
> - FP_STATE_CLEAN
> - FP_STATE_HOST_DIRTY
> - FP_STATE_GUEST_DIRTY

I had to think a bit more than I liked about the _DIRTY in the
names of the host and guest flags, but that's really just
bikeshedding and not a meaningful issue.

Reviewed-by: Mark Brown <broonie@kernel.org>


^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 03/18] KVM: arm64: Drop FP_FOREIGN_STATE from the hypervisor code
  2022-06-03  5:23     ` Reiji Watanabe
  (?)
@ 2022-06-04  8:10       ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-06-04  8:10 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: kvmarm, kvm, Linux ARM, kernel-team, Will Deacon, Mark Brown

On Fri, 03 Jun 2022 06:23:25 +0100,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> Hi Marc,
> 
> On Sat, May 28, 2022 at 4:38 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> > The vcpu KVM_ARM64_FP_FOREIGN_FPSTATE flag tracks the thread's own
> > TIF_FOREIGN_FPSTATE so that we can evaluate just before running
> > the vcpu whether the FP regs contain something that is owned
> > by the vcpu or not by updating the rest of the FP flags.
> >
> > We do this in the hypervisor code in order to make sure we're
> > in a context where we are not interruptible. But we already
> > have a hook in the run loop to generate this flag. We may as
> > well update the FP flags directly and save the pointless flag
> > tracking.
> >
> > Whilst we're at it, rename update_fp_enabled() to guest_owns_fp_regs()
> > to indicate what the leftover of this helper actually does.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> 
> Reviewed-by: Reiji Watanabe <reijiw@google.com>
> 
> 
> > --- a/arch/arm64/kvm/fpsimd.c
> > +++ b/arch/arm64/kvm/fpsimd.c
> > @@ -107,16 +107,19 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
> >  }
> >
> >  /*
> > - * Called just before entering the guest once we are no longer
> > - * preemptable. Syncs the host's TIF_FOREIGN_FPSTATE with the KVM
> > - * mirror of the flag used by the hypervisor.
> > + * Called just before entering the guest once we are no longer preemptable
> > + * and interrupts are disabled. If we have managed to run anything using
> > + * FP while we were preemptible (such as off the back of an interrupt),
> > + * then neither the host nor the guest own the FP hardware (and it was the
> > + * responsibility of the code that used FP to save the existing state).
> > + *
> > + * Note that not supporting FP is basically the same thing as far as the
> > + * hypervisor is concerned (nothing to save).
> >   */
> >  void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu)
> >  {
> > -       if (test_thread_flag(TIF_FOREIGN_FPSTATE))
> > -               vcpu->arch.flags |= KVM_ARM64_FP_FOREIGN_FPSTATE;
> > -       else
> > -               vcpu->arch.flags &= ~KVM_ARM64_FP_FOREIGN_FPSTATE;
> > +       if (!system_supports_fpsimd() || test_thread_flag(TIF_FOREIGN_FPSTATE))
> > +               vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST);
> >  }
> 
> Although kvm_arch_vcpu_load_fp() unconditionally sets KVM_ARM64_FP_HOST,
> perhaps having kvm_arch_vcpu_load_fp() set KVM_ARM64_FP_HOST only when
> FP is supported might be more consistent?
> Then, checking system_supports_fpsimd() is unnecessary here.
> (KVM_ARM64_FP_ENABLED is not set when FP is not supported)

That's indeed a possibility. But I'm trying not to change the logic
here, only to move it to a place that provides the same semantics
without the need for an extra flag.

I'm happy to stack an extra patch on top of this series though.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 04/18] KVM: arm64: Move FP state ownership from flag to a tristate
  2022-05-28 11:38   ` Marc Zyngier
@ 2022-06-04  8:16     ` Reiji Watanabe
  0 siblings, 0 replies; 141+ messages in thread
From: Reiji Watanabe @ 2022-06-04  8:16 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, kvm, Linux ARM, kernel-team, Will Deacon, Mark Brown

On Sat, May 28, 2022 at 4:38 AM Marc Zyngier <maz@kernel.org> wrote:
>
> The KVM FP code uses a pair of flags to denote three states:
>
> - FP_ENABLED set: the guest owns the FP state
> - FP_HOST set: the host owns the FP state
> - FP_ENABLED and FP_HOST clear: nobody owns the FP state at all
>
> and both flags set is an illegal state, which nothing ever checks
> for...
>
> As it turns out, this isn't really a good match for flags, and
> we'd be better off if this was a simpler tristate, each state
> having a name that actually reflects the state:
>
> - FP_STATE_CLEAN
> - FP_STATE_DIRTY_HOST
> - FP_STATE_DIRTY_GUEST
>
> Kill the two flags, and move over to an enum encoding these
> three states. This results in less confusing code, and less risk of
> ending up in the uncharted territory of a 4th state if we forget
> to clear one of the two flags.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Reviewed-by: Reiji Watanabe <reijiw@google.com>

I have the same comment as I have for the patch-3 though.
(i.e. I think having kvm_arch_vcpu_load_fp() set vcpu->arch.fp_state to
FP_STATE_DIRTY_HOST only when FP is supported would be more consistent.)

Thanks,
Reiji

> ---
>  arch/arm64/include/asm/kvm_host.h       |  9 +++++++--
>  arch/arm64/kvm/fpsimd.c                 | 11 +++++------
>  arch/arm64/kvm/hyp/include/hyp/switch.h |  8 +++-----
>  arch/arm64/kvm/hyp/nvhe/switch.c        |  4 ++--
>  arch/arm64/kvm/hyp/vhe/switch.c         |  2 +-
>  5 files changed, 18 insertions(+), 16 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 9252d71b4ac5..a46f952b97f6 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -328,6 +328,13 @@ struct kvm_vcpu_arch {
>         /* Exception Information */
>         struct kvm_vcpu_fault_info fault;
>
> +       /* Ownership of the FP regs */
> +       enum {
> +               FP_STATE_CLEAN,
> +               FP_STATE_DIRTY_HOST,
> +               FP_STATE_DIRTY_GUEST,
> +       } fp_state;
> +
>         /* Miscellaneous vcpu state flags */
>         u64 flags;
>
> @@ -433,8 +440,6 @@ struct kvm_vcpu_arch {
>
>  /* vcpu_arch flags field values: */
>  #define KVM_ARM64_DEBUG_DIRTY          (1 << 0)
> -#define KVM_ARM64_FP_ENABLED           (1 << 1) /* guest FP regs loaded */
> -#define KVM_ARM64_FP_HOST              (1 << 2) /* host FP regs loaded */
>  #define KVM_ARM64_HOST_SVE_ENABLED     (1 << 4) /* SVE enabled for EL0 */
>  #define KVM_ARM64_GUEST_HAS_SVE                (1 << 5) /* SVE exposed to guest */
>  #define KVM_ARM64_VCPU_SVE_FINALIZED   (1 << 6) /* SVE config completed */
> diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
> index 9ebd89541281..0d82f6c5b110 100644
> --- a/arch/arm64/kvm/fpsimd.c
> +++ b/arch/arm64/kvm/fpsimd.c
> @@ -77,8 +77,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
>         BUG_ON(!current->mm);
>         BUG_ON(test_thread_flag(TIF_SVE));
>
> -       vcpu->arch.flags &= ~KVM_ARM64_FP_ENABLED;
> -       vcpu->arch.flags |= KVM_ARM64_FP_HOST;
> > +       vcpu->arch.fp_state = FP_STATE_DIRTY_HOST;

> >
>         vcpu->arch.flags &= ~KVM_ARM64_HOST_SVE_ENABLED;
>         if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
> @@ -100,7 +99,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
>
>                 if (read_sysreg_s(SYS_SVCR_EL0) &
>                     (SYS_SVCR_EL0_SM_MASK | SYS_SVCR_EL0_ZA_MASK)) {
> -                       vcpu->arch.flags &= ~KVM_ARM64_FP_HOST;
> +                       vcpu->arch.fp_state = FP_STATE_CLEAN;
>                         fpsimd_save_and_flush_cpu_state();
>                 }
>         }
> @@ -119,7 +118,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
>  void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu)
>  {
>         if (!system_supports_fpsimd() || test_thread_flag(TIF_FOREIGN_FPSTATE))
> -               vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST);
> +               vcpu->arch.fp_state = FP_STATE_CLEAN;
>  }
>
>  /*
> @@ -133,7 +132,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
>  {
>         WARN_ON_ONCE(!irqs_disabled());
>
> -       if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
> +       if (vcpu->arch.fp_state == FP_STATE_DIRTY_GUEST) {
>                 /*
>                  * Currently we do not support SME guests so SVCR is
>                  * always 0 and we just need a variable to point to.
> @@ -176,7 +175,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
>                                          CPACR_EL1_SMEN_EL1EN);
>         }
>
> -       if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
> +       if (vcpu->arch.fp_state == FP_STATE_DIRTY_GUEST) {
>                 if (vcpu_has_sve(vcpu)) {
>                         __vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
>
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index 1209248d2a3d..b22378abfb57 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -40,7 +40,7 @@ extern struct kvm_exception_table_entry __stop___kvm_ex_table;
>  /* Check whether the FP regs are owned by the guest */
>  static inline bool guest_owns_fp_regs(struct kvm_vcpu *vcpu)
>  {
> -       return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED);
> +       return vcpu->arch.fp_state == FP_STATE_DIRTY_GUEST;
>  }
>
>  /* Save the 32-bit only FPSIMD system register state */
> @@ -179,10 +179,8 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
>         isb();
>
>         /* Write out the host state if it's in the registers */
> -       if (vcpu->arch.flags & KVM_ARM64_FP_HOST) {
> +       if (vcpu->arch.fp_state == FP_STATE_DIRTY_HOST)
>                 __fpsimd_save_state(vcpu->arch.host_fpsimd_state);
> -               vcpu->arch.flags &= ~KVM_ARM64_FP_HOST;
> -       }
>
>         /* Restore the guest state */
>         if (sve_guest)
> @@ -194,7 +192,7 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
>         if (!(read_sysreg(hcr_el2) & HCR_RW))
>                 write_sysreg(__vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2);
>
> -       vcpu->arch.flags |= KVM_ARM64_FP_ENABLED;
> +       vcpu->arch.fp_state = FP_STATE_DIRTY_GUEST;
>
>         return true;
>  }
> diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> index a6b9f1186577..89e0f88c9006 100644
> --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> @@ -123,7 +123,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
>         }
>
>         cptr = CPTR_EL2_DEFAULT;
> -       if (vcpu_has_sve(vcpu) && (vcpu->arch.flags & KVM_ARM64_FP_ENABLED))
> +       if (vcpu_has_sve(vcpu) && (vcpu->arch.fp_state == FP_STATE_DIRTY_GUEST))
>                 cptr |= CPTR_EL2_TZ;
>         if (cpus_have_final_cap(ARM64_SME))
>                 cptr &= ~CPTR_EL2_TSM;
> @@ -335,7 +335,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
>
>         __sysreg_restore_state_nvhe(host_ctxt);
>
> -       if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
> +       if (vcpu->arch.fp_state == FP_STATE_DIRTY_GUEST)
>                 __fpsimd_save_fpexc32(vcpu);
>
>         __debug_switch_to_host(vcpu);
> diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
> index 46f365254e9f..258e87325c95 100644
> --- a/arch/arm64/kvm/hyp/vhe/switch.c
> +++ b/arch/arm64/kvm/hyp/vhe/switch.c
> @@ -175,7 +175,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>
>         sysreg_restore_host_state_vhe(host_ctxt);
>
> -       if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
> +       if (vcpu->arch.fp_state == FP_STATE_DIRTY_GUEST)
>                 __fpsimd_save_fpexc32(vcpu);
>
>         __debug_switch_to_host(vcpu);
> --
> 2.34.1
>
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 04/18] KVM: arm64: Move FP state ownership from flag to a tristate
  2022-06-03  9:14     ` Mark Brown
  (?)
@ 2022-06-06  8:41       ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-06-06  8:41 UTC (permalink / raw)
  To: Mark Brown; +Cc: kernel-team, kvm, Will Deacon, kvmarm, linux-arm-kernel

On Fri, 03 Jun 2022 10:14:11 +0100,
Mark Brown <broonie@kernel.org> wrote:
> 
> On Sat, May 28, 2022 at 12:38:14PM +0100, Marc Zyngier wrote:
> 
> > As it turns out, this isn't really a good match for flags, and
> > we'd be better off if this was a simpler tristate, each state
> > having a name that actually reflect the state:
> > 
> > - FP_STATE_CLEAN
> > - FP_STATE_HOST_DIRTY
> > - FP_STATE_GUEST_DIRTY
> 
> I had to think a bit more than I liked about the _DIRTY in the
> names of the host and guest flags, but that's really just
> bikeshedding and not a meaningful issue.

Another option was:

- FP_STATE_FREE
- FP_STATE_HOST_OWNED
- FP_STATE_GUEST_OWNED

I don't mind either way.

> Reviewed-by: Mark Brown <broonie@kernel.org>

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 04/18] KVM: arm64: Move FP state ownership from flag to a tristate
  2022-06-06  8:41       ` Marc Zyngier
  (?)
@ 2022-06-06 10:31         ` Mark Brown
  -1 siblings, 0 replies; 141+ messages in thread
From: Mark Brown @ 2022-06-06 10:31 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Alexandru Elisei, Oliver Upton, Will Deacon, Fuad Tabba,
	Quentin Perret, kernel-team

[-- Attachment #1: Type: text/plain, Size: 612 bytes --]

On Mon, Jun 06, 2022 at 09:41:52AM +0100, Marc Zyngier wrote:
> Mark Brown <broonie@kernel.org> wrote:
> > On Sat, May 28, 2022 at 12:38:14PM +0100, Marc Zyngier wrote:

> > > - FP_STATE_CLEAN
> > > - FP_STATE_HOST_DIRTY
> > > - FP_STATE_GUEST_DIRTY

> > I had to think a bit more than I liked about the _DIRTY in the
> > names of the host and guest flags, but that's really just
> > bikeshedding and not a meaningful issue.

> Another option was:

> - FP_STATE_FREE
> - FP_STATE_HOST_OWNED
> - FP_STATE_GUEST_OWNED

> I don't mind wither way.

I think I do prefer that option, but like I say it's bikeshedding.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 01/18] KVM: arm64: Always start with clearing SVE flag on load
  2022-05-30 14:41     ` Mark Brown
  (?)
@ 2022-06-06 11:28       ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-06-06 11:28 UTC (permalink / raw)
  To: Mark Brown
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Alexandru Elisei, Oliver Upton, Will Deacon, Fuad Tabba,
	Quentin Perret, kernel-team, stable

On Mon, 30 May 2022 15:41:54 +0100,
Mark Brown <broonie@kernel.org> wrote:
> 
> [1  <text/plain; us-ascii (quoted-printable)>]
> On Sat, May 28, 2022 at 12:38:11PM +0100, Marc Zyngier wrote:
> > On each vcpu load, we set the KVM_ARM64_HOST_SVE_ENABLED
> > flag if SVE is enabled for EL0 on the host. This is used to restore
> > the correct state on vcpu put.
> > 
> > However, it appears that nothing ever clears this flag. Once
> > set, it will stick until the vcpu is destroyed, which has the
> > potential to spuriously enable SVE for userspace.
> 
> Oh dear.
> 
> Reviewed-by: Mark Brown <broonie@kernel.org>
> 
> > We probably never saw the issue because no VMM uses SVE, but
> > that's still pretty bad. Unconditionally clearing the flag
> > on vcpu load addresses the issue.
> 
> Unless I'm missing something since we currently always disable
> SVE on syscall even if the VMM were using SVE for some reason
> (SVE memcpy()?) we should already have disabled SVE for EL0 in
> sve_user_discard() during kernel entry so EL0 access to SVE
> should be disabled in the system register by the time we get
> here.

Indeed. And this begs the question: what is this code actually doing?
Is there any way we can end up running a guest with any valid host SVE
state?

I remember being >this< close to removing that code some time ago, and
only stopped because I vaguely remembered Dave Martin convincing me at
some point that it was necessary. I'm unable to piece the argument
together again though.

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 01/18] KVM: arm64: Always start with clearing SVE flag on load
  2022-06-06 11:28       ` Marc Zyngier
  (?)
@ 2022-06-06 12:16         ` Mark Brown
  -1 siblings, 0 replies; 141+ messages in thread
From: Mark Brown @ 2022-06-06 12:16 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Alexandru Elisei, Oliver Upton, Will Deacon, Fuad Tabba,
	Quentin Perret, kernel-team, stable

[-- Attachment #1: Type: text/plain, Size: 1843 bytes --]

On Mon, Jun 06, 2022 at 12:28:32PM +0100, Marc Zyngier wrote:
> Mark Brown <broonie@kernel.org> wrote:
> > On Sat, May 28, 2022 at 12:38:11PM +0100, Marc Zyngier wrote:

> > > We probably never saw the issue because no VMM uses SVE, but
> > > that's still pretty bad. Unconditionally clearing the flag
> > > on vcpu load addresses the issue.

> > Unless I'm missing something since we currently always disable
> > SVE on syscall even if the VMM were using SVE for some reason
> > (SVE memcpy()?) we should already have disabled SVE for EL0 in
> > sve_user_discard() during kernel entry so EL0 access to SVE
> > should be disabled in the system register by the time we get
> > here.

> Indeed. And this begs the question: what is this code actually doing?
> Is there any way we can end-up running a guest with any valid host SVE
> state?

> I remember being >this< close to removing that code some time ago, and
> only stopped because I vaguely remembered Dave Martin convincing me at
> some point that it was necessary. I'm unable to piece the argument
> together again though.

I've stared at that code a few times as well; I think I'd ended up
assuming it was some path to do with preempting and context switching,
but in that case I've never been clear why there'd be anything left that
we'd need to preserve, or if we do, why we don't just force an
fpsimd_save().  It's possible this was from some earlier stage in review
where the ABI didn't allow us to discard the SVE register state, or that
it's there as defensive programming for future work where we don't
just disable on entry.

Coincidentally, I am going to post some patches later today or tomorrow
which leave SVE enabled on syscall; they still have the hook for
disabling it when entering KVM though, so we'd still not need to save the
EL0 state and the above should still apply.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 03/18] KVM: arm64: Drop FP_FOREIGN_STATE from the hypervisor code
  2022-06-04  8:10       ` Marc Zyngier
  (?)
@ 2022-06-07  4:47         ` Reiji Watanabe
  -1 siblings, 0 replies; 141+ messages in thread
From: Reiji Watanabe @ 2022-06-07  4:47 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvm, kernel-team, Mark Brown, Will Deacon, kvmarm, Linux ARM

On Sat, Jun 4, 2022 at 1:10 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Fri, 03 Jun 2022 06:23:25 +0100,
> Reiji Watanabe <reijiw@google.com> wrote:
> >
> > Hi Marc,
> >
> > On Sat, May 28, 2022 at 4:38 AM Marc Zyngier <maz@kernel.org> wrote:
> > >
> > > The vcpu KVM_ARM64_FP_FOREIGN_FPSTATE flag tracks the thread's own
> > > TIF_FOREIGN_FPSTATE so that we can evaluate just before running
> > > the vcpu whether the FP regs contain something that is owned
> > > by the vcpu or not by updating the rest of the FP flags.
> > >
> > > We do this in the hypervisor code in order to make sure we're
> > > in a context where we are not interruptible. But we already
> > > have a hook in the run loop to generate this flag. We may as
> > > well update the FP flags directly and save the pointless flag
> > > tracking.
> > >
> > > Whilst we're at it, rename update_fp_enabled() to guest_owns_fp_regs()
> > > to indicate what the leftover of this helper actually does.
> > >
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> >
> > Reviewed-by: Reiji Watanabe <reijiw@google.com>
> >
> >
> > > --- a/arch/arm64/kvm/fpsimd.c
> > > +++ b/arch/arm64/kvm/fpsimd.c
> > > @@ -107,16 +107,19 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
> > >  }
> > >
> > >  /*
> > > - * Called just before entering the guest once we are no longer
> > > - * preemptable. Syncs the host's TIF_FOREIGN_FPSTATE with the KVM
> > > - * mirror of the flag used by the hypervisor.
> > > + * Called just before entering the guest once we are no longer preemptable
> > > + * and interrupts are disabled. If we have managed to run anything using
> > > + * FP while we were preemptible (such as off the back of an interrupt),
> > > + * then neither the host nor the guest own the FP hardware (and it was the
> > > + * responsibility of the code that used FP to save the existing state).
> > > + *
> > > + * Note that not supporting FP is basically the same thing as far as the
> > > + * hypervisor is concerned (nothing to save).
> > >   */
> > >  void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu)
> > >  {
> > > -       if (test_thread_flag(TIF_FOREIGN_FPSTATE))
> > > -               vcpu->arch.flags |= KVM_ARM64_FP_FOREIGN_FPSTATE;
> > > -       else
> > > -               vcpu->arch.flags &= ~KVM_ARM64_FP_FOREIGN_FPSTATE;
> > > +       if (!system_supports_fpsimd() || test_thread_flag(TIF_FOREIGN_FPSTATE))
> > > +               vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST);
> > >  }
> >
> > Although kvm_arch_vcpu_load_fp() unconditionally sets KVM_ARM64_FP_HOST,
> > perhaps having kvm_arch_vcpu_load_fp() set KVM_ARM64_FP_HOST only when
> > FP is supported might be more consistent?
> > Then, checking system_supports_fpsimd() is unnecessary here.
> > (KVM_ARM64_FP_ENABLED is not set when FP is not supported)
>
> That's indeed a possibility. But I'm trying not to change the logic
> here, only to move it to a place that provides the same semantic
> without the need for an extra flag.
>
> I'm happy to stack an extra patch on top of this series though.

Thank you for your reply. I would prefer that.

Thanks,
Reiji



>
> Thanks,
>
>         M.
>
> --
> Without deviation from the norm, progress is not possible.
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 03/18] KVM: arm64: Drop FP_FOREIGN_STATE from the hypervisor code
@ 2022-06-07  4:47         ` Reiji Watanabe
  0 siblings, 0 replies; 141+ messages in thread
From: Reiji Watanabe @ 2022-06-07  4:47 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, kvm, Linux ARM, kernel-team, Will Deacon, Mark Brown

On Sat, Jun 4, 2022 at 1:10 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Fri, 03 Jun 2022 06:23:25 +0100,
> Reiji Watanabe <reijiw@google.com> wrote:
> >
> > Hi Marc,
> >
> > On Sat, May 28, 2022 at 4:38 AM Marc Zyngier <maz@kernel.org> wrote:
> > >
> > > The vcpu KVM_ARM64_FP_FOREIGN_FPSTATE flag tracks the thread's own
> > > TIF_FOREIGN_FPSTATE so that we can evaluate just before running
> > > the vcpu whether it the FP regs contain something that is owned
> > > by the vcpu or not by updating the rest of the FP flags.
> > >
> > > We do this in the hypervisor code in order to make sure we're
> > > in a context where we are not interruptible. But we already
> > > have a hook in the run loop to generate this flag. We may as
> > > well update the FP flags directly and save the pointless flag
> > > tracking.
> > >
> > > Whilst we're at it, rename update_fp_enabled() to guest_owns_fp_regs()
> > > to indicate what the leftover of this helper actually do.
> > >
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> >
> > Reviewed-by: Reiji Watanabe <reijiw@google.com>
> >
> >
> > > --- a/arch/arm64/kvm/fpsimd.c
> > > +++ b/arch/arm64/kvm/fpsimd.c
> > > @@ -107,16 +107,19 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
> > >  }
> > >
> > >  /*
> > > - * Called just before entering the guest once we are no longer
> > > - * preemptable. Syncs the host's TIF_FOREIGN_FPSTATE with the KVM
> > > - * mirror of the flag used by the hypervisor.
> > > + * Called just before entering the guest once we are no longer preemptable
> > > + * and interrupts are disabled. If we have managed to run anything using
> > > + * FP while we were preemptible (such as off the back of an interrupt),
> > > + * then neither the host nor the guest own the FP hardware (and it was the
> > > + * responsibility of the code that used FP to save the existing state).
> > > + *
> > > + * Note that not supporting FP is basically the same thing as far as the
> > > + * hypervisor is concerned (nothing to save).
> > >   */
> > >  void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu)
> > >  {
> > > -       if (test_thread_flag(TIF_FOREIGN_FPSTATE))
> > > -               vcpu->arch.flags |= KVM_ARM64_FP_FOREIGN_FPSTATE;
> > > -       else
> > > -               vcpu->arch.flags &= ~KVM_ARM64_FP_FOREIGN_FPSTATE;
> > > +       if (!system_supports_fpsimd() || test_thread_flag(TIF_FOREIGN_FPSTATE))
> > > +               vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST);
> > >  }
> >
> > Although kvm_arch_vcpu_load_fp() unconditionally sets KVM_ARM64_FP_HOST,
> > perhaps having kvm_arch_vcpu_load_fp() set KVM_ARM64_FP_HOST only when
> > FP is supported might be more consistent?
> > Then, checking system_supports_fpsimd() is unnecessary here.
> > (KVM_ARM64_FP_ENABLED is not set when FP is not supported)
>
> That's indeed a possibility. But I'm trying not to change the logic
> here, only to move it to a place that provides the same semantic
> without the need for an extra flag.
>
> I'm happy to stack an extra patch on top of this series though.

Thank you for your reply. I would prefer that.

Thanks,
Reiji



>
> Thanks,
>
>         M.
>
> --
> Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 141+ messages in thread


* Re: [PATCH 00/18] KVM/arm64: Refactoring the vcpu flags
  2022-05-28 11:38 ` Marc Zyngier
  (?)
@ 2022-06-07 13:43   ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-06-07 13:43 UTC (permalink / raw)
  To: kvmarm, kvm, Marc Zyngier, linux-arm-kernel
  Cc: James Morse, Mark Brown, kernel-team, Will Deacon,
	Suzuki K Poulose, Quentin Perret, Alexandru Elisei, Fuad Tabba,
	Oliver Upton

On Sat, 28 May 2022 12:38:10 +0100, Marc Zyngier wrote:
> While working on pKVM, it slowly became apparent that dealing with the
> flags was a pain, as they serve multiple purposes:
> 
> - some flags are purely a configuration artefact,
> 
> - some are an input from the host kernel to the world switch,
> 
> [...]

Applied to fixes, thanks!

[01/18] KVM: arm64: Always start with clearing SVE flag on load
        commit: d52d165d67c5aa26c8c89909003c94a66492d23d
[02/18] KVM: arm64: Always start with clearing SME flag on load
        commit: 039f49c4cafb785504c678f28664d088e0108d35

Cheers,

	M.
-- 
Marc Zyngier <maz@kernel.org>


^ permalink raw reply	[flat|nested] 141+ messages in thread


* Re: [PATCH 05/18] KVM: arm64: Add helpers to manipulate vcpu flags among a set
  2022-05-28 11:38   ` Marc Zyngier
  (?)
@ 2022-06-08  5:26     ` Reiji Watanabe
  -1 siblings, 0 replies; 141+ messages in thread
From: Reiji Watanabe @ 2022-06-08  5:26 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvm, kernel-team, Mark Brown, Will Deacon, kvmarm, Linux ARM

Hi Marc,

On Sat, May 28, 2022 at 4:38 AM Marc Zyngier <maz@kernel.org> wrote:
>
> Careful analysis of the vcpu flags shows that they are a mix of
> configuration, communication between the host and the hypervisor,
> and ancillary state that has no consistency. It'd be a lot
> better if we could split these flags into consistent categories.
>
> However, even if we split these flags apart, we want to make sure
> that each flag can only be applied to its own set, and not across
> sets.
>
> To achieve this, use a preprocessor hack so that each flag is always
> associated with:
>
> - the set that contains it,
>
> - a mask that describes all the bits that contain it (for a simple
>   flag, this is the same thing as the flag itself, but we will
>   eventually have values that cover multiple bits at once).
>
> Each flag is thus a triplet that is not directly usable as a value,
> but used by three helpers that allow the flag to be set, cleared,
> and fetched. By mandating the use of such helpers, we can easily
> enforce that a flag can only be used with the set it belongs to.
>
> Finally, one last helper "unpacks" the raw value from the triplet
> that represents a flag, which is useful for multi-bit values that
> need to be enumerated (in a switch statement, for example).
>
> Further patches will start making use of this infrastructure.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_host.h | 33 +++++++++++++++++++++++++++++++
>  1 file changed, 33 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index a46f952b97f6..5eb6791df608 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -418,6 +418,39 @@ struct kvm_vcpu_arch {
>         } steal;
>  };
>
> +#define __vcpu_get_flag(v, flagset, f, m)                      \
> +       ({                                                      \
> +               v->arch.flagset & (m);                          \
> +       })
> +
> +#define __vcpu_set_flag(v, flagset, f, m)                      \
> +       do {                                                    \
> +               typeof(v->arch.flagset) *fset;                  \
> +                                                               \
> +               fset = &v->arch.flagset;                        \
> +               if (HWEIGHT(m) > 1)                             \
> +                       *fset &= ~(m);                          \
> +               *fset |= (f);                                   \
> +       } while (0)
> +
> +#define __vcpu_clear_flag(v, flagset, f, m)                    \
> +       do {                                                    \
> +               typeof(v->arch.flagset) *fset;                  \
> +                                                               \
> +               fset = &v->arch.flagset;                        \
> +               *fset &= ~(m);                                  \
> +       } while (0)

I think 'v' should be enclosed in parentheses in those three macros.


> +
> +#define vcpu_get_flag(v, ...)  __vcpu_get_flag(v, __VA_ARGS__)
> +#define vcpu_set_flag(v, ...)  __vcpu_set_flag(v, __VA_ARGS__)
> +#define vcpu_clear_flag(v, ...)        __vcpu_clear_flag(v, __VA_ARGS__)
> +
> +#define __vcpu_single_flag(_set, _f)   _set, (_f), (_f)
> +
> +#define __flag_unpack(_set, _f, _m)    _f

Nit: It might be worth adding a comment explaining the above two
macros (e.g. what is each element of the triplet?).

> +#define vcpu_flag_unpack(...)          __flag_unpack(__VA_ARGS__)

Minor nit: KVM functions and macros whose names begin with "vcpu_"
make me think that they operate on the vCPU specified in the
argument, but this macro does not (this might just be my own
assumption?). So, IMHO I would prefer a name whose prefix is not
"vcpu_". Having said that, I don't have any good suggestions...
Perhaps "unpack_vcpu_flag" would be slightly better?

Thanks,
Reiji

> +
> +
>  /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
>  #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +     \
>                              sve_ffr_offset((vcpu)->arch.sve_max_vl))
> --
> 2.34.1
>
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 141+ messages in thread


* Re: [PATCH 05/18] KVM: arm64: Add helpers to manipulate vcpu flags among a set
  2022-06-08  5:26     ` Reiji Watanabe
  (?)
@ 2022-06-08  6:51       ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-06-08  6:51 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: kvm, kernel-team, Mark Brown, Will Deacon, kvmarm, Linux ARM

On Wed, 08 Jun 2022 06:26:44 +0100,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> Hi Marc,
> 
> On Sat, May 28, 2022 at 4:38 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> > Careful analysis of the vcpu flags shows that they are a mix of
> > configuration, communication between the host and the hypervisor,
> > and ancillary state that has no consistency. It'd be a lot
> > better if we could split these flags into consistent categories.
> >
> > However, even if we split these flags apart, we want to make sure
> > that each flag can only be applied to its own set, and not across
> > sets.
> >
> > To achieve this, use a preprocessor hack so that each flag is always
> > associated with:
> >
> > - the set that contains it,
> >
> > - a mask that describes all the bits that contain it (for a simple
> >   flag, this is the same thing as the flag itself, but we will
> >   eventually have values that cover multiple bits at once).
> >
> > Each flag is thus a triplet that is not directly usable as a value,
> > but used by three helpers that allow the flag to be set, cleared,
> > and fetched. By mandating the use of such helpers, we can easily
> > enforce that a flag can only be used with the set it belongs to.
> >
> > Finally, one last helper "unpacks" the raw value from the triplet
> > that represents a flag, which is useful for multi-bit values that
> > need to be enumerated (in a switch statement, for example).
> >
> > Further patches will start making use of this infrastructure.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_host.h | 33 +++++++++++++++++++++++++++++++
> >  1 file changed, 33 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index a46f952b97f6..5eb6791df608 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -418,6 +418,39 @@ struct kvm_vcpu_arch {
> >         } steal;
> >  };
> >
> > +#define __vcpu_get_flag(v, flagset, f, m)                      \
> > +       ({                                                      \
> > +               v->arch.flagset & (m);                          \
> > +       })
> > +
> > +#define __vcpu_set_flag(v, flagset, f, m)                      \
> > +       do {                                                    \
> > +               typeof(v->arch.flagset) *fset;                  \
> > +                                                               \
> > +               fset = &v->arch.flagset;                        \
> > +               if (HWEIGHT(m) > 1)                             \
> > +                       *fset &= ~(m);                          \
> > +               *fset |= (f);                                   \
> > +       } while (0)
> > +
> > +#define __vcpu_clear_flag(v, flagset, f, m)                    \
> > +       do {                                                    \
> > +               typeof(v->arch.flagset) *fset;                  \
> > +                                                               \
> > +               fset = &v->arch.flagset;                        \
> > +               *fset &= ~(m);                                  \
> > +       } while (0)
> 
> I think 'v' should be enclosed in parentheses in those three macros.

Fair enough.

>
> 
> > +
> > +#define vcpu_get_flag(v, ...)  __vcpu_get_flag(v, __VA_ARGS__)
> > +#define vcpu_set_flag(v, ...)  __vcpu_set_flag(v, __VA_ARGS__)
> > +#define vcpu_clear_flag(v, ...)        __vcpu_clear_flag(v, __VA_ARGS__)
> > +
> > +#define __vcpu_single_flag(_set, _f)   _set, (_f), (_f)
> > +
> > +#define __flag_unpack(_set, _f, _m)    _f
> 
> Nit: It might be worth adding a comment explaining the above two
> macros (e.g. what is each element of the triplet?).

How about this?

/*
 * Each 'flag' is composed of a comma-separated triplet:
 *
 * - the flag-set it belongs to in the vcpu->arch structure
 * - the value for that flag
 * - the mask for that flag
 *
 * __vcpu_single_flag() builds such a triplet for a single-bit flag.
 * unpack_vcpu_flag() extracts the flag value from the triplet for
 * direct use outside of the flag accessors.
 */

>
> > +#define vcpu_flag_unpack(...)          __flag_unpack(__VA_ARGS__)
> 
> Minor nit: KVM functions and macros whose names begin with "vcpu_"
> make me think that they operate on the vCPU specified in the
> argument, but this macro does not (this might just be my own
> assumption?). So, IMHO I would prefer a name whose prefix is not
> "vcpu_". Having said that, I don't have any good suggestions...
> Perhaps "unpack_vcpu_flag" would be slightly better?

Sold!

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 05/18] KVM: arm64: Add helpers to manipulate vcpu flags among a set
@ 2022-06-08  6:51       ` Marc Zyngier
  0 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-06-08  6:51 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: kvmarm, kvm, Linux ARM, kernel-team, Will Deacon, Mark Brown

On Wed, 08 Jun 2022 06:26:44 +0100,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> Hi Marc,
> 
> On Sat, May 28, 2022 at 4:38 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> > Careful analysis of the vcpu flags show that this is a mix of
> > configuration, communication between the host and the hypervisor,
> > as well as anciliary state that has no consistency. It'd be a lot
> > better if we could split these flags into consistent categories.
> >
> > However, even if we split these flags apart, we want to make sure
> > that each flag can only be applied to its own set, and not across
> > sets.
> >
> > To achieve this, use a preprocessor hack so that each flag is always
> > associated with:
> >
> > - the set that contains it,
> >
> > - a mask that describe all the bits that contain it (for a simple
> >   flag, this is the same thing as the flag itself, but we will
> >   eventually have values that cover multiple bits at once).
> >
> > Each flag is thus a triplet that is not directly usable as a value,
> > but used by three helpers that allow the flag to be set, cleared,
> > and fetched. By mandating the use of such helper, we can easily
> > enforce that a flag can only be used with the set it belongs to.
> >
> > Finally, one last helper "unpacks" the raw value from the triplet
> > that represents a flag, which is useful for multi-bit values that
> > need to be enumerated (in a switch statement, for example).
> >
> > Further patches will start making use of this infrastructure.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_host.h | 33 +++++++++++++++++++++++++++++++
> >  1 file changed, 33 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index a46f952b97f6..5eb6791df608 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -418,6 +418,39 @@ struct kvm_vcpu_arch {
> >         } steal;
> >  };
> >
> > +#define __vcpu_get_flag(v, flagset, f, m)                      \
> > +       ({                                                      \
> > +               v->arch.flagset & (m);                          \
> > +       })
> > +
> > +#define __vcpu_set_flag(v, flagset, f, m)                      \
> > +       do {                                                    \
> > +               typeof(v->arch.flagset) *fset;                  \
> > +                                                               \
> > +               fset = &v->arch.flagset;                        \
> > +               if (HWEIGHT(m) > 1)                             \
> > +                       *fset &= ~(m);                          \
> > +               *fset |= (f);                                   \
> > +       } while (0)
> > +
> > +#define __vcpu_clear_flag(v, flagset, f, m)                    \
> > +       do {                                                    \
> > +               typeof(v->arch.flagset) *fset;                  \
> > +                                                               \
> > +               fset = &v->arch.flagset;                        \
> > +               *fset &= ~(m);                                  \
> > +       } while (0)
> 
> I think 'v' should be enclosed in parentheses in those three macros.

Fair enough.

>
> 
> > +
> > +#define vcpu_get_flag(v, ...)  __vcpu_get_flag(v, __VA_ARGS__)
> > +#define vcpu_set_flag(v, ...)  __vcpu_set_flag(v, __VA_ARGS__)
> > +#define vcpu_clear_flag(v, ...)        __vcpu_clear_flag(v, __VA_ARGS__)
> > +
> > +#define __vcpu_single_flag(_set, _f)   _set, (_f), (_f)
> > +
> > +#define __flag_unpack(_set, _f, _m)    _f
> 
> Nit: Probably it might be worth adding a comment that explains the
> above two macros ? (e.g. what is each element of the triplets ?)

How about this?

/*
 * Each 'flag' is composed of a comma-separated triplet:
 *
 * - the flag-set it belongs to in the vcpu->arch structure
 * - the value for that flag
 * - the mask for that flag
 *
 *  __vcpu_single_flag() builds such a triplet for a single-bit flag.
 * unpack_vcpu_flag() extract the flag value from the triplet for
 * direct use outside of the flag accessors.
 */

>
> > +#define vcpu_flag_unpack(...)          __flag_unpack(__VA_ARGS__)
> 
> Minor nit: KVM Functions and macros whose names begin with "vcpu_"
> make me think that they are the operations for a vCPU specified in
> the argument, but this macro is not (this might just my own
> assumption?). So, IMHO I would prefer a name whose prefix is not
> "vcpu_". Having said that, I don't have any good suggestions though...
> Perhaps I might prefer "unpack_vcpu_flag" a bit instead?

Sold!

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 141+ messages in thread


* Re: [PATCH 09/18] KVM: arm64: Move vcpu debug/SPE/TRBE flags to the input flag set
  2022-05-28 11:38   ` Marc Zyngier
@ 2022-06-08 15:16     ` Fuad Tabba
  -1 siblings, 0 replies; 141+ messages in thread
From: Fuad Tabba @ 2022-06-08 15:16 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kernel-team, kvm, Will Deacon, Mark Brown, kvmarm, linux-arm-kernel

Hi Marc,

On Sat, May 28, 2022 at 12:38 PM Marc Zyngier <maz@kernel.org> wrote:
>
> The three debug flags (which deal with the debug registers, SPE and
> TRBE) all are input flags to the hypervisor code.
>
> Move them into the input set and convert them to the new accessors.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_host.h          |  9 ++++++---
>  arch/arm64/kvm/debug.c                     | 22 +++++++++++-----------
>  arch/arm64/kvm/hyp/include/hyp/debug-sr.h  |  6 +++---
>  arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h |  4 ++--
>  arch/arm64/kvm/hyp/nvhe/debug-sr.c         |  8 ++++----
>  arch/arm64/kvm/sys_regs.c                  |  8 ++++----
>  6 files changed, 30 insertions(+), 27 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 078567f5709c..a426cd3aaa74 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -500,6 +500,12 @@ struct kvm_vcpu_arch {
>  #define EXCEPT_AA64_EL2_IRQ    __vcpu_except_flags(5)
>  #define EXCEPT_AA64_EL2_FIQ    __vcpu_except_flags(6)
>  #define EXCEPT_AA64_EL2_SERR   __vcpu_except_flags(7)
> +/* Guest debug is live */
> +#define DEBUG_DIRTY            __vcpu_single_flag(iflags, BIT(4))
> +/* Save SPE context if active  */
> +#define DEBUG_STATE_SAVE_SPE   __vcpu_single_flag(iflags, BIT(5))
> +/* Save TRBE context if active  */
> +#define DEBUG_STATE_SAVE_TRBE  __vcpu_single_flag(iflags, BIT(6))
>
>  /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
>  #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +     \
> @@ -522,10 +528,7 @@ struct kvm_vcpu_arch {
>  })
>
>  /* vcpu_arch flags field values: */
> -#define KVM_ARM64_DEBUG_DIRTY          (1 << 0)
>  #define KVM_ARM64_HOST_SVE_ENABLED     (1 << 4) /* SVE enabled for EL0 */
> -#define KVM_ARM64_DEBUG_STATE_SAVE_SPE (1 << 12) /* Save SPE context if active  */
> -#define KVM_ARM64_DEBUG_STATE_SAVE_TRBE        (1 << 13) /* Save TRBE context if active  */
>  #define KVM_ARM64_ON_UNSUPPORTED_CPU   (1 << 15) /* Physical CPU not in supported_cpus */
>  #define KVM_ARM64_HOST_SME_ENABLED     (1 << 16) /* SME enabled for EL0 */
>  #define KVM_ARM64_WFIT                 (1 << 17) /* WFIT instruction trapped */
> diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
> index 4fd5c216c4bb..c5c4c1837bf3 100644
> --- a/arch/arm64/kvm/debug.c
> +++ b/arch/arm64/kvm/debug.c
> @@ -104,11 +104,11 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
>          * Trap debug register access when one of the following is true:
>          *  - Userspace is using the hardware to debug the guest
>          *  (KVM_GUESTDBG_USE_HW is set).
> -        *  - The guest is not using debug (KVM_ARM64_DEBUG_DIRTY is clear).
> +        *  - The guest is not using debug (DEBUG_DIRTY clear).
>          *  - The guest has enabled the OS Lock (debug exceptions are blocked).
>          */
>         if ((vcpu->guest_debug & KVM_GUESTDBG_USE_HW) ||
> -           !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) ||
> +           !vcpu_get_flag(vcpu, DEBUG_DIRTY) ||
>             kvm_vcpu_os_lock_enabled(vcpu))
>                 vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
>
> @@ -147,8 +147,8 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
>   * debug related registers.
>   *
>   * Additionally, KVM only traps guest accesses to the debug registers if
> - * the guest is not actively using them (see the KVM_ARM64_DEBUG_DIRTY
> - * flag on vcpu->arch.flags).  Since the guest must not interfere
> + * the guest is not actively using them (see the DEBUG_DIRTY
> + * flag on vcpu->arch.iflags).  Since the guest must not interfere
>   * with the hardware state when debugging the guest, we must ensure that
>   * trapping is enabled whenever we are debugging the guest using the
>   * debug registers.
> @@ -205,7 +205,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
>                  *
>                  * We simply switch the debug_ptr to point to our new
>                  * external_debug_state which has been populated by the
> -                * debug ioctl. The existing KVM_ARM64_DEBUG_DIRTY
> +                * debug ioctl. The existing KVM_ARM64_IFLAG_DEBUG_DIRTY

This should be DEBUG_DIRTY.

Cheers,
/fuad


>                  * mechanism ensures the registers are updated on the
>                  * world switch.
>                  */
> @@ -216,7 +216,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
>                         vcpu_write_sys_reg(vcpu, mdscr, MDSCR_EL1);
>
>                         vcpu->arch.debug_ptr = &vcpu->arch.external_debug_state;
> -                       vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
> +                       vcpu_set_flag(vcpu, DEBUG_DIRTY);
>
>                         trace_kvm_arm_set_regset("BKPTS", get_num_brps(),
>                                                 &vcpu->arch.debug_ptr->dbg_bcr[0],
> @@ -246,7 +246,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
>
>         /* If KDE or MDE are set, perform a full save/restore cycle. */
>         if (vcpu_read_sys_reg(vcpu, MDSCR_EL1) & (DBG_MDSCR_KDE | DBG_MDSCR_MDE))
> -               vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
> +               vcpu_set_flag(vcpu, DEBUG_DIRTY);
>
>         /* Write mdcr_el2 changes since vcpu_load on VHE systems */
>         if (has_vhe() && orig_mdcr_el2 != vcpu->arch.mdcr_el2)
> @@ -298,16 +298,16 @@ void kvm_arch_vcpu_load_debug_state_flags(struct kvm_vcpu *vcpu)
>          */
>         if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_PMSVER_SHIFT) &&
>             !(read_sysreg_s(SYS_PMBIDR_EL1) & BIT(SYS_PMBIDR_EL1_P_SHIFT)))
> -               vcpu->arch.flags |= KVM_ARM64_DEBUG_STATE_SAVE_SPE;
> +               vcpu_set_flag(vcpu, DEBUG_STATE_SAVE_SPE);
>
>         /* Check if we have TRBE implemented and available at the host */
>         if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_TRBE_SHIFT) &&
>             !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_PROG))
> -               vcpu->arch.flags |= KVM_ARM64_DEBUG_STATE_SAVE_TRBE;
> +               vcpu_set_flag(vcpu, DEBUG_STATE_SAVE_TRBE);
>  }
>
>  void kvm_arch_vcpu_put_debug_state_flags(struct kvm_vcpu *vcpu)
>  {
> -       vcpu->arch.flags &= ~(KVM_ARM64_DEBUG_STATE_SAVE_SPE |
> -                             KVM_ARM64_DEBUG_STATE_SAVE_TRBE);
> +       vcpu_clear_flag(vcpu, DEBUG_STATE_SAVE_SPE);
> +       vcpu_clear_flag(vcpu, DEBUG_STATE_SAVE_TRBE);
>  }
> diff --git a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
> index 4ebe9f558f3a..961bbef104a6 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
> @@ -132,7 +132,7 @@ static inline void __debug_switch_to_guest_common(struct kvm_vcpu *vcpu)
>         struct kvm_guest_debug_arch *host_dbg;
>         struct kvm_guest_debug_arch *guest_dbg;
>
> -       if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
> +       if (!vcpu_get_flag(vcpu, DEBUG_DIRTY))
>                 return;
>
>         host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
> @@ -151,7 +151,7 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu)
>         struct kvm_guest_debug_arch *host_dbg;
>         struct kvm_guest_debug_arch *guest_dbg;
>
> -       if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
> +       if (!vcpu_get_flag(vcpu, DEBUG_DIRTY))
>                 return;
>
>         host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
> @@ -162,7 +162,7 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu)
>         __debug_save_state(guest_dbg, guest_ctxt);
>         __debug_restore_state(host_dbg, host_ctxt);
>
> -       vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
> +       vcpu_clear_flag(vcpu, DEBUG_DIRTY);
>  }
>
>  #endif /* __ARM64_KVM_HYP_DEBUG_SR_H__ */
> diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
> index 7ecca8b07851..baa5b9b3dde5 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
> @@ -195,7 +195,7 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu)
>         __vcpu_sys_reg(vcpu, DACR32_EL2) = read_sysreg(dacr32_el2);
>         __vcpu_sys_reg(vcpu, IFSR32_EL2) = read_sysreg(ifsr32_el2);
>
> -       if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
> +       if (has_vhe() || vcpu_get_flag(vcpu, DEBUG_DIRTY))
>                 __vcpu_sys_reg(vcpu, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2);
>  }
>
> @@ -212,7 +212,7 @@ static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
>         write_sysreg(__vcpu_sys_reg(vcpu, DACR32_EL2), dacr32_el2);
>         write_sysreg(__vcpu_sys_reg(vcpu, IFSR32_EL2), ifsr32_el2);
>
> -       if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
> +       if (has_vhe() || vcpu_get_flag(vcpu, DEBUG_DIRTY))
>                 write_sysreg(__vcpu_sys_reg(vcpu, DBGVCR32_EL2), dbgvcr32_el2);
>  }
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
> index df361d839902..e17455773b98 100644
> --- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
> +++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
> @@ -84,10 +84,10 @@ static void __debug_restore_trace(u64 trfcr_el1)
>  void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu)
>  {
>         /* Disable and flush SPE data generation */
> -       if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE)
> +       if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_SPE))
>                 __debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1);
>         /* Disable and flush Self-Hosted Trace generation */
> -       if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE)
> +       if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_TRBE))
>                 __debug_save_trace(&vcpu->arch.host_debug_state.trfcr_el1);
>  }
>
> @@ -98,9 +98,9 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
>
>  void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu)
>  {
> -       if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE)
> +       if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_SPE))
>                 __debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1);
> -       if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE)
> +       if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_TRBE))
>                 __debug_restore_trace(vcpu->arch.host_debug_state.trfcr_el1);
>  }
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index d77be152cbd5..d6a55ed9ff10 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -387,7 +387,7 @@ static bool trap_debug_regs(struct kvm_vcpu *vcpu,
>  {
>         if (p->is_write) {
>                 vcpu_write_sys_reg(vcpu, p->regval, r->reg);
> -               vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
> +               vcpu_set_flag(vcpu, DEBUG_DIRTY);
>         } else {
>                 p->regval = vcpu_read_sys_reg(vcpu, r->reg);
>         }
> @@ -403,8 +403,8 @@ static bool trap_debug_regs(struct kvm_vcpu *vcpu,
>   * A 32 bit write to a debug register leave top bits alone
>   * A 32 bit read from a debug register only returns the bottom bits
>   *
> - * All writes will set the KVM_ARM64_DEBUG_DIRTY flag to ensure the
> - * hyp.S code switches between host and guest values in future.
> + * All writes will set the DEBUG_DIRTY flag to ensure the hyp code
> + * switches between host and guest values in future.
>   */
>  static void reg_to_dbg(struct kvm_vcpu *vcpu,
>                        struct sys_reg_params *p,
> @@ -420,7 +420,7 @@ static void reg_to_dbg(struct kvm_vcpu *vcpu,
>         val |= (p->regval & (mask >> shift)) << shift;
>         *dbg_reg = val;
>
> -       vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
> +       vcpu_set_flag(vcpu, DEBUG_DIRTY);
>  }
>
>  static void dbg_to_reg(struct kvm_vcpu *vcpu,
> --
> 2.34.1
>
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 141+ messages in thread


> +           !vcpu_get_flag(vcpu, DEBUG_DIRTY) ||
>             kvm_vcpu_os_lock_enabled(vcpu))
>                 vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
>
> @@ -147,8 +147,8 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
>   * debug related registers.
>   *
>   * Additionally, KVM only traps guest accesses to the debug registers if
> - * the guest is not actively using them (see the KVM_ARM64_DEBUG_DIRTY
> - * flag on vcpu->arch.flags).  Since the guest must not interfere
> + * the guest is not actively using them (see the DEBUG_DIRTY
> + * flag on vcpu->arch.iflags).  Since the guest must not interfere
>   * with the hardware state when debugging the guest, we must ensure that
>   * trapping is enabled whenever we are debugging the guest using the
>   * debug registers.
> @@ -205,7 +205,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
>                  *
>                  * We simply switch the debug_ptr to point to our new
>                  * external_debug_state which has been populated by the
> -                * debug ioctl. The existing KVM_ARM64_DEBUG_DIRTY
> +                * debug ioctl. The existing KVM_ARM64_IFLAG_DEBUG_DIRTY

This should be DEBUG_DIRTY.

Cheers,
/fuad


>                  * mechanism ensures the registers are updated on the
>                  * world switch.
>                  */
> @@ -216,7 +216,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
>                         vcpu_write_sys_reg(vcpu, mdscr, MDSCR_EL1);
>
>                         vcpu->arch.debug_ptr = &vcpu->arch.external_debug_state;
> -                       vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
> +                       vcpu_set_flag(vcpu, DEBUG_DIRTY);
>
>                         trace_kvm_arm_set_regset("BKPTS", get_num_brps(),
>                                                 &vcpu->arch.debug_ptr->dbg_bcr[0],
> @@ -246,7 +246,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
>
>         /* If KDE or MDE are set, perform a full save/restore cycle. */
>         if (vcpu_read_sys_reg(vcpu, MDSCR_EL1) & (DBG_MDSCR_KDE | DBG_MDSCR_MDE))
> -               vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
> +               vcpu_set_flag(vcpu, DEBUG_DIRTY);
>
>         /* Write mdcr_el2 changes since vcpu_load on VHE systems */
>         if (has_vhe() && orig_mdcr_el2 != vcpu->arch.mdcr_el2)
> @@ -298,16 +298,16 @@ void kvm_arch_vcpu_load_debug_state_flags(struct kvm_vcpu *vcpu)
>          */
>         if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_PMSVER_SHIFT) &&
>             !(read_sysreg_s(SYS_PMBIDR_EL1) & BIT(SYS_PMBIDR_EL1_P_SHIFT)))
> -               vcpu->arch.flags |= KVM_ARM64_DEBUG_STATE_SAVE_SPE;
> +               vcpu_set_flag(vcpu, DEBUG_STATE_SAVE_SPE);
>
>         /* Check if we have TRBE implemented and available at the host */
>         if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_TRBE_SHIFT) &&
>             !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_PROG))
> -               vcpu->arch.flags |= KVM_ARM64_DEBUG_STATE_SAVE_TRBE;
> +               vcpu_set_flag(vcpu, DEBUG_STATE_SAVE_TRBE);
>  }
>
>  void kvm_arch_vcpu_put_debug_state_flags(struct kvm_vcpu *vcpu)
>  {
> -       vcpu->arch.flags &= ~(KVM_ARM64_DEBUG_STATE_SAVE_SPE |
> -                             KVM_ARM64_DEBUG_STATE_SAVE_TRBE);
> +       vcpu_clear_flag(vcpu, DEBUG_STATE_SAVE_SPE);
> +       vcpu_clear_flag(vcpu, DEBUG_STATE_SAVE_TRBE);
>  }
> diff --git a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
> index 4ebe9f558f3a..961bbef104a6 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
> @@ -132,7 +132,7 @@ static inline void __debug_switch_to_guest_common(struct kvm_vcpu *vcpu)
>         struct kvm_guest_debug_arch *host_dbg;
>         struct kvm_guest_debug_arch *guest_dbg;
>
> -       if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
> +       if (!vcpu_get_flag(vcpu, DEBUG_DIRTY))
>                 return;
>
>         host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
> @@ -151,7 +151,7 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu)
>         struct kvm_guest_debug_arch *host_dbg;
>         struct kvm_guest_debug_arch *guest_dbg;
>
> -       if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
> +       if (!vcpu_get_flag(vcpu, DEBUG_DIRTY))
>                 return;
>
>         host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
> @@ -162,7 +162,7 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu)
>         __debug_save_state(guest_dbg, guest_ctxt);
>         __debug_restore_state(host_dbg, host_ctxt);
>
> -       vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
> +       vcpu_clear_flag(vcpu, DEBUG_DIRTY);
>  }
>
>  #endif /* __ARM64_KVM_HYP_DEBUG_SR_H__ */
> diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
> index 7ecca8b07851..baa5b9b3dde5 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
> @@ -195,7 +195,7 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu)
>         __vcpu_sys_reg(vcpu, DACR32_EL2) = read_sysreg(dacr32_el2);
>         __vcpu_sys_reg(vcpu, IFSR32_EL2) = read_sysreg(ifsr32_el2);
>
> -       if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
> +       if (has_vhe() || vcpu_get_flag(vcpu, DEBUG_DIRTY))
>                 __vcpu_sys_reg(vcpu, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2);
>  }
>
> @@ -212,7 +212,7 @@ static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
>         write_sysreg(__vcpu_sys_reg(vcpu, DACR32_EL2), dacr32_el2);
>         write_sysreg(__vcpu_sys_reg(vcpu, IFSR32_EL2), ifsr32_el2);
>
> -       if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
> +       if (has_vhe() || vcpu_get_flag(vcpu, DEBUG_DIRTY))
>                 write_sysreg(__vcpu_sys_reg(vcpu, DBGVCR32_EL2), dbgvcr32_el2);
>  }
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
> index df361d839902..e17455773b98 100644
> --- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
> +++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
> @@ -84,10 +84,10 @@ static void __debug_restore_trace(u64 trfcr_el1)
>  void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu)
>  {
>         /* Disable and flush SPE data generation */
> -       if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE)
> +       if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_SPE))
>                 __debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1);
>         /* Disable and flush Self-Hosted Trace generation */
> -       if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE)
> +       if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_TRBE))
>                 __debug_save_trace(&vcpu->arch.host_debug_state.trfcr_el1);
>  }
>
> @@ -98,9 +98,9 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
>
>  void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu)
>  {
> -       if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE)
> +       if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_SPE))
>                 __debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1);
> -       if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE)
> +       if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_TRBE))
>                 __debug_restore_trace(vcpu->arch.host_debug_state.trfcr_el1);
>  }
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index d77be152cbd5..d6a55ed9ff10 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -387,7 +387,7 @@ static bool trap_debug_regs(struct kvm_vcpu *vcpu,
>  {
>         if (p->is_write) {
>                 vcpu_write_sys_reg(vcpu, p->regval, r->reg);
> -               vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
> +               vcpu_set_flag(vcpu, DEBUG_DIRTY);
>         } else {
>                 p->regval = vcpu_read_sys_reg(vcpu, r->reg);
>         }
> @@ -403,8 +403,8 @@ static bool trap_debug_regs(struct kvm_vcpu *vcpu,
>   * A 32 bit write to a debug register leave top bits alone
>   * A 32 bit read from a debug register only returns the bottom bits
>   *
> - * All writes will set the KVM_ARM64_DEBUG_DIRTY flag to ensure the
> - * hyp.S code switches between host and guest values in future.
> + * All writes will set the DEBUG_DIRTY flag to ensure the hyp code
> + * switches between host and guest values in future.
>   */
>  static void reg_to_dbg(struct kvm_vcpu *vcpu,
>                        struct sys_reg_params *p,
> @@ -420,7 +420,7 @@ static void reg_to_dbg(struct kvm_vcpu *vcpu,
>         val |= (p->regval & (mask >> shift)) << shift;
>         *dbg_reg = val;
>
> -       vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
> +       vcpu_set_flag(vcpu, DEBUG_DIRTY);
>  }
>
>  static void dbg_to_reg(struct kvm_vcpu *vcpu,
> --
> 2.34.1
>

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 15/18] KVM: arm64: Warn when PENDING_EXCEPTION and INCREMENT_PC are set together
  2022-05-28 11:38   ` Marc Zyngier
  (?)
@ 2022-06-08 15:16     ` Fuad Tabba
  -1 siblings, 0 replies; 141+ messages in thread
From: Fuad Tabba @ 2022-06-08 15:16 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kernel-team, kvm, Will Deacon, Mark Brown, kvmarm, linux-arm-kernel

Hi Marc,

On Sat, May 28, 2022 at 12:49 PM Marc Zyngier <maz@kernel.org> wrote:
>
> We really don't want PENDING_EXCEPTION and INCREMENT_PC to ever be
> set at the same time, as they are mutually exclusive. Add checks
> that will generate a warning should this ever happen.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_emulate.h | 1 +
>  arch/arm64/kvm/hyp/nvhe/sys_regs.c   | 2 ++
>  arch/arm64/kvm/inject_fault.c        | 8 ++++++++
>  3 files changed, 11 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index 46e631cd8d9e..861fa0b24a7f 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -473,6 +473,7 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu,
>
>  static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
>  {
> +       WARN_ON(vcpu_get_flag(vcpu, PENDING_EXCEPTION));
>         vcpu_set_flag(vcpu, INCREMENT_PC);
>  }
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
> index 2841a2d447a1..04973984b6db 100644
> --- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
> +++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
> @@ -38,6 +38,8 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
>         *vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
>         *vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR);
>
> +       WARN_ON(vcpu_get_flag(vcpu, INCREMENT_PC));
> +
>         vcpu_set_flag(vcpu, PENDING_EXCEPTION);
>         vcpu_set_flag(vcpu, EXCEPT_AA64_EL1_SYNC);
>
> diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
> index a9a7b513f3b0..2f4b9afc16ec 100644
> --- a/arch/arm64/kvm/inject_fault.c
> +++ b/arch/arm64/kvm/inject_fault.c
> @@ -20,6 +20,8 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
>         bool is_aarch32 = vcpu_mode_is_32bit(vcpu);
>         u32 esr = 0;
>
> +       WARN_ON(vcpu_get_flag(vcpu, INCREMENT_PC));
> +

Minor nit: While we're at it, should we just create a helper for
setting PENDING_EXCEPTION, same as we have for INCREMENT_PC? That
might make the code clearer and save us from the hassle of having this
WARN_ON before every instance of setting PENDING_EXCEPTION?

Cheers,
/fuad



>         vcpu_set_flag(vcpu, PENDING_EXCEPTION);
>         vcpu_set_flag(vcpu, EXCEPT_AA64_EL1_SYNC);
>
> @@ -51,6 +53,8 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
>  {
>         u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
>
> +       WARN_ON(vcpu_get_flag(vcpu, INCREMENT_PC));
> +
>         vcpu_set_flag(vcpu, PENDING_EXCEPTION);
>         vcpu_set_flag(vcpu, EXCEPT_AA64_EL1_SYNC);
>
> @@ -71,6 +75,8 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
>
>  static void inject_undef32(struct kvm_vcpu *vcpu)
>  {
> +       WARN_ON(vcpu_get_flag(vcpu, INCREMENT_PC));
> +
>         vcpu_set_flag(vcpu, PENDING_EXCEPTION);
>         vcpu_set_flag(vcpu, EXCEPT_AA32_UND);
>  }
> @@ -94,6 +100,8 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt, u32 addr)
>
>         far = vcpu_read_sys_reg(vcpu, FAR_EL1);
>
> +       WARN_ON(vcpu_get_flag(vcpu, INCREMENT_PC));
> +
>         if (is_pabt) {
>                 vcpu_set_flag(vcpu, PENDING_EXCEPTION);
>                 vcpu_set_flag(vcpu, EXCEPT_AA32_IABT);
> --
> 2.34.1
>
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 06/18] KVM: arm64: Add three sets of flags to the vcpu state
  2022-05-28 11:38   ` Marc Zyngier
  (?)
@ 2022-06-08 15:23     ` Fuad Tabba
  -1 siblings, 0 replies; 141+ messages in thread
From: Fuad Tabba @ 2022-06-08 15:23 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kernel-team, kvm, Will Deacon, Mark Brown, kvmarm, linux-arm-kernel

Hi Marc,

On Sat, May 28, 2022 at 12:38 PM Marc Zyngier <maz@kernel.org> wrote:
>
> It so appears that each of the vcpu flags really belongs to
> one of three categories:
>
> - a configuration flag, set once and for all
> - an input flag generated by the kernel for the hypervisor to use
> - a state flag that is only for the kernel's own bookkeeping

I think that this division makes sense and simplifies reasoning about
the state and what needs to be communicated to the hypervisor.

I had a couple of minor nits, which I have already pointed out in the
relevant patches. With that, patches 6~18:
Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad





>
> As we are going to split all the existing flags into these three
> sets, introduce all three in one go.
>
> No functional change other than a bit of bloat...
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_host.h | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 5eb6791df608..c9dd0d4e22f2 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -338,6 +338,15 @@ struct kvm_vcpu_arch {
>         /* Miscellaneous vcpu state flags */
>         u64 flags;
>
> +       /* Configuration flags */
> +       u64 cflags;
> +
> +       /* Input flags to the hypervisor code */
> +       u64 iflags;
> +
> +       /* State flags, unused by the hypervisor code */
> +       u64 sflags;
> +
>         /*
>          * We maintain more than a single set of debug registers to support
>          * debugging the guest from the host and to maintain separate host and
> --
> 2.34.1
>
> --
> To unsubscribe from this group and stop receiving emails from it, send an email to kernel-team+unsubscribe@android.com.
>

^ permalink raw reply	[flat|nested] 141+ messages in thread


* Re: [PATCH 09/18] KVM: arm64: Move vcpu debug/SPE/TRBE flags to the input flag set
  2022-06-08 15:16     ` Fuad Tabba
  (?)
@ 2022-06-08 16:01       ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-06-08 16:01 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Alexandru Elisei, Oliver Upton, Will Deacon, Quentin Perret,
	Mark Brown, kernel-team

On Wed, 08 Jun 2022 16:16:16 +0100,
Fuad Tabba <tabba@google.com> wrote:
> 
> Hi Marc,
> 
> On Sat, May 28, 2022 at 12:38 PM Marc Zyngier <maz@kernel.org> wrote:
> >
> > The three debug flags (which deal with the debug registers, SPE and
> > TRBE) all are input flags to the hypervisor code.
> >
> > Move them into the input set and convert them to the new accessors.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_host.h          |  9 ++++++---
> >  arch/arm64/kvm/debug.c                     | 22 +++++++++++-----------
> >  arch/arm64/kvm/hyp/include/hyp/debug-sr.h  |  6 +++---
> >  arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h |  4 ++--
> >  arch/arm64/kvm/hyp/nvhe/debug-sr.c         |  8 ++++----
> >  arch/arm64/kvm/sys_regs.c                  |  8 ++++----
> >  6 files changed, 30 insertions(+), 27 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 078567f5709c..a426cd3aaa74 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -500,6 +500,12 @@ struct kvm_vcpu_arch {
> >  #define EXCEPT_AA64_EL2_IRQ    __vcpu_except_flags(5)
> >  #define EXCEPT_AA64_EL2_FIQ    __vcpu_except_flags(6)
> >  #define EXCEPT_AA64_EL2_SERR   __vcpu_except_flags(7)
> > +/* Guest debug is live */
> > +#define DEBUG_DIRTY            __vcpu_single_flag(iflags, BIT(4))
> > +/* Save SPE context if active  */
> > +#define DEBUG_STATE_SAVE_SPE   __vcpu_single_flag(iflags, BIT(5))
> > +/* Save TRBE context if active  */
> > +#define DEBUG_STATE_SAVE_TRBE  __vcpu_single_flag(iflags, BIT(6))
> >
> >  /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
> >  #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +     \
> > @@ -522,10 +528,7 @@ struct kvm_vcpu_arch {
> >  })
> >
> >  /* vcpu_arch flags field values: */
> > -#define KVM_ARM64_DEBUG_DIRTY          (1 << 0)
> >  #define KVM_ARM64_HOST_SVE_ENABLED     (1 << 4) /* SVE enabled for EL0 */
> > -#define KVM_ARM64_DEBUG_STATE_SAVE_SPE (1 << 12) /* Save SPE context if active  */
> > -#define KVM_ARM64_DEBUG_STATE_SAVE_TRBE        (1 << 13) /* Save TRBE context if active  */
> >  #define KVM_ARM64_ON_UNSUPPORTED_CPU   (1 << 15) /* Physical CPU not in supported_cpus */
> >  #define KVM_ARM64_HOST_SME_ENABLED     (1 << 16) /* SME enabled for EL0 */
> >  #define KVM_ARM64_WFIT                 (1 << 17) /* WFIT instruction trapped */
> > diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
> > index 4fd5c216c4bb..c5c4c1837bf3 100644
> > --- a/arch/arm64/kvm/debug.c
> > +++ b/arch/arm64/kvm/debug.c
> > @@ -104,11 +104,11 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
> >          * Trap debug register access when one of the following is true:
> >          *  - Userspace is using the hardware to debug the guest
> >          *  (KVM_GUESTDBG_USE_HW is set).
> > -        *  - The guest is not using debug (KVM_ARM64_DEBUG_DIRTY is clear).
> > +        *  - The guest is not using debug (DEBUG_DIRTY clear).
> >          *  - The guest has enabled the OS Lock (debug exceptions are blocked).
> >          */
> >         if ((vcpu->guest_debug & KVM_GUESTDBG_USE_HW) ||
> > -           !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) ||
> > +           !vcpu_get_flag(vcpu, DEBUG_DIRTY) ||
> >             kvm_vcpu_os_lock_enabled(vcpu))
> >                 vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
> >
> > @@ -147,8 +147,8 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
> >   * debug related registers.
> >   *
> >   * Additionally, KVM only traps guest accesses to the debug registers if
> > - * the guest is not actively using them (see the KVM_ARM64_DEBUG_DIRTY
> > - * flag on vcpu->arch.flags).  Since the guest must not interfere
> > + * the guest is not actively using them (see the DEBUG_DIRTY
> > + * flag on vcpu->arch.iflags).  Since the guest must not interfere
> >   * with the hardware state when debugging the guest, we must ensure that
> >   * trapping is enabled whenever we are debugging the guest using the
> >   * debug registers.
> > @@ -205,7 +205,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
> >                  *
> >                  * We simply switch the debug_ptr to point to our new
> >                  * external_debug_state which has been populated by the
> > -                * debug ioctl. The existing KVM_ARM64_DEBUG_DIRTY
> > +                * debug ioctl. The existing KVM_ARM64_IFLAG_DEBUG_DIRTY
> 
> This should be DEBUG_DIRTY.

Ah, nice catch. That's a left-over from a previous implementation that
didn't have the notion of flag-set built-in.

There is also another one of these in kvm_host.h, which I will fix as
well.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 15/18] KVM: arm64: Warn when PENDING_EXCEPTION and INCREMENT_PC are set together
  2022-06-08 15:16     ` Fuad Tabba
  (?)
@ 2022-06-08 16:42       ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-06-08 16:42 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Alexandru Elisei, Oliver Upton, Will Deacon, Quentin Perret,
	Mark Brown, kernel-team

On Wed, 08 Jun 2022 16:16:55 +0100,
Fuad Tabba <tabba@google.com> wrote:
> 
> Hi Marc,
> 
> On Sat, May 28, 2022 at 12:49 PM Marc Zyngier <maz@kernel.org> wrote:
> >
> > We really don't want PENDING_EXCEPTION and INCREMENT_PC to ever be
> > set at the same time, as they are mutually exclusive. Add checks
> > that will generate a warning should this ever happen.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_emulate.h | 1 +
> >  arch/arm64/kvm/hyp/nvhe/sys_regs.c   | 2 ++
> >  arch/arm64/kvm/inject_fault.c        | 8 ++++++++
> >  3 files changed, 11 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> > index 46e631cd8d9e..861fa0b24a7f 100644
> > --- a/arch/arm64/include/asm/kvm_emulate.h
> > +++ b/arch/arm64/include/asm/kvm_emulate.h
> > @@ -473,6 +473,7 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu,
> >
> >  static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
> >  {
> > +       WARN_ON(vcpu_get_flag(vcpu, PENDING_EXCEPTION));
> >         vcpu_set_flag(vcpu, INCREMENT_PC);
> >  }
> >
> > diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
> > index 2841a2d447a1..04973984b6db 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
> > @@ -38,6 +38,8 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
> >         *vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
> >         *vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR);
> >
> > +       WARN_ON(vcpu_get_flag(vcpu, INCREMENT_PC));
> > +
> >         vcpu_set_flag(vcpu, PENDING_EXCEPTION);
> >         vcpu_set_flag(vcpu, EXCEPT_AA64_EL1_SYNC);
> >
> > diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
> > index a9a7b513f3b0..2f4b9afc16ec 100644
> > --- a/arch/arm64/kvm/inject_fault.c
> > +++ b/arch/arm64/kvm/inject_fault.c
> > @@ -20,6 +20,8 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
> >         bool is_aarch32 = vcpu_mode_is_32bit(vcpu);
> >         u32 esr = 0;
> >
> > +       WARN_ON(vcpu_get_flag(vcpu, INCREMENT_PC));
> > +
> 
> Minor nit: While we're at it, should we just create a helper for
> setting PENDING_EXCEPTION, same as we have for INCREMENT_PC? That
> might make the code clearer and save us from the hassle of having this
> WARN_ON before every instance of setting PENDING_EXCEPTION?

Good point. I ended up with this:

#define kvm_pend_exception(v, e)					\
	do {								\
		WARN_ON(vcpu_get_flag((v), INCREMENT_PC));		\
		vcpu_set_flag((v), PENDING_EXCEPTION);			\
		vcpu_set_flag((v), e);					\
	} while (0)

It has to be a macro in order to deal with the flag expansion, but is
otherwise a welcome cleanup.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 05/18] KVM: arm64: Add helpers to manipulate vcpu flags among a set
  2022-06-08  6:51       ` Marc Zyngier
  (?)
@ 2022-06-09  2:25         ` Reiji Watanabe
  -1 siblings, 0 replies; 141+ messages in thread
From: Reiji Watanabe @ 2022-06-09  2:25 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, kvm, Linux ARM, kernel-team, Will Deacon, Mark Brown

Hi Marc,

> > > +#define vcpu_get_flag(v, ...)  __vcpu_get_flag(v, __VA_ARGS__)
> > > +#define vcpu_set_flag(v, ...)  __vcpu_set_flag(v, __VA_ARGS__)
> > > +#define vcpu_clear_flag(v, ...)        __vcpu_clear_flag(v, __VA_ARGS__)
> > > +
> > > +#define __vcpu_single_flag(_set, _f)   _set, (_f), (_f)
> > > +
> > > +#define __flag_unpack(_set, _f, _m)    _f
> >
> > Nit: Probably it might be worth adding a comment that explains the
> > above two macros ? (e.g. what is each element of the triplets ?)
>
> How about this?
>
> /*
>  * Each 'flag' is composed of a comma-separated triplet:
>  *
>  * - the flag-set it belongs to in the vcpu->arch structure
>  * - the value for that flag
>  * - the mask for that flag
>  *
>  *  __vcpu_single_flag() builds such a triplet for a single-bit flag.
>  * unpack_vcpu_flag() extracts the flag value from the triplet for
>  * direct use outside of the flag accessors.
>  */

Looks good to me, thank you!
Reiji

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 06/18] KVM: arm64: Add three sets of flags to the vcpu state
  2022-05-28 11:38   ` Marc Zyngier
  (?)
@ 2022-06-09  6:10     ` Reiji Watanabe
  -1 siblings, 0 replies; 141+ messages in thread
From: Reiji Watanabe @ 2022-06-09  6:10 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, kvm, Linux ARM, kernel-team, Will Deacon, Mark Brown

Hi Marc,

On Sat, May 28, 2022 at 4:38 AM Marc Zyngier <maz@kernel.org> wrote:
>
> It so appears that each of the vcpu flags is really belonging to
> one of three categories:
>
> - a configuration flag, set once and for all
> - an input flag generated by the kernel for the hypervisor to use
> - a state flag that is only for the kernel's own bookkeeping
>
> As we are going to split all the existing flags into these three
> sets, introduce all three in one go.
>
> No functional change other than a bit of bloat...
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_host.h | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 5eb6791df608..c9dd0d4e22f2 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -338,6 +338,15 @@ struct kvm_vcpu_arch {
>         /* Miscellaneous vcpu state flags */
>         u64 flags;
>
> +       /* Configuration flags */
> +       u64 cflags;
> +
> +       /* Input flags to the hypervisor code */
> +       u64 iflags;
> +
> +       /* State flags, unused by the hypervisor code */
> +       u64 sflags;

Although I think VCPU_SVE_FINALIZED could be considered "state" rather
than "configuration", I assume the reason why it is handled by cflags
in the following patches is because VCPU_SVE_FINALIZED is set once
and for all. If my assumption is correct, it would be clearer to add
"set once and for all" in the comment for cflags.

Also, if we end up using VCPU_SVE_FINALIZED in hypervisor code later,
then should it be handled by iflags instead of cflags ?

My understanding of how those flags should be used is as follows.
Is my understanding correct ?

 iflags: flags that are used by hypervisor code
 cflags: flags that are set once and for all, and unused by hypervisor code
 sflags: flags that could be set/cleared more than once and unused
         by hypervisor code

Thanks,
Reiji

> +
>         /*
>          * We maintain more than a single set of debug registers to support
>          * debugging the guest from the host and to maintain separate host and
> --
> 2.34.1
>
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 07/18] KVM: arm64: Move vcpu configuration flags into their own set
  2022-05-28 11:38   ` Marc Zyngier
  (?)
@ 2022-06-09  6:15     ` Reiji Watanabe
  -1 siblings, 0 replies; 141+ messages in thread
From: Reiji Watanabe @ 2022-06-09  6:15 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, kvm, Linux ARM, kernel-team, Will Deacon, Mark Brown

On Sat, May 28, 2022 at 4:38 AM Marc Zyngier <maz@kernel.org> wrote:
>
> The KVM_ARM64_{GUEST_HAS_SVE,VCPU_SVE_FINALIZED,GUEST_HAS_PTRAUTH}
> flags are purely configuration flags. Once set, they are never cleared,
> but evaluated all over the code base.
>
> Move these three flags into the configuration set in one go, using
> the new accessors, and take this opportunity to drop the KVM_ARM64_
> prefix which doesn't provide any help.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_host.h | 17 ++++++++++-------
>  arch/arm64/kvm/reset.c            |  6 +++---
>  2 files changed, 13 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index c9dd0d4e22f2..2b8f1265eade 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -459,6 +459,13 @@ struct kvm_vcpu_arch {
>  #define __flag_unpack(_set, _f, _m)    _f
>  #define vcpu_flag_unpack(...)          __flag_unpack(__VA_ARGS__)
>
> +/* SVE exposed to guest */
> +#define GUEST_HAS_SVE          __vcpu_single_flag(cflags, BIT(0))
> +/* SVE config completed */
> +#define VCPU_SVE_FINALIZED     __vcpu_single_flag(cflags, BIT(1))
> +/* PTRAUTH exposed to guest */
> +#define GUEST_HAS_PTRAUTH      __vcpu_single_flag(cflags, BIT(2))
> +
>
>  /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
>  #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +     \
> @@ -483,9 +490,6 @@ struct kvm_vcpu_arch {
>  /* vcpu_arch flags field values: */
>  #define KVM_ARM64_DEBUG_DIRTY          (1 << 0)
>  #define KVM_ARM64_HOST_SVE_ENABLED     (1 << 4) /* SVE enabled for EL0 */
> -#define KVM_ARM64_GUEST_HAS_SVE                (1 << 5) /* SVE exposed to guest */
> -#define KVM_ARM64_VCPU_SVE_FINALIZED   (1 << 6) /* SVE config completed */
> -#define KVM_ARM64_GUEST_HAS_PTRAUTH    (1 << 7) /* PTRAUTH exposed to guest */
>  #define KVM_ARM64_PENDING_EXCEPTION    (1 << 8) /* Exception pending */
>  /*
>   * Overlaps with KVM_ARM64_EXCEPT_MASK on purpose so that it can't be
> @@ -522,13 +526,13 @@ struct kvm_vcpu_arch {
>                                  KVM_GUESTDBG_SINGLESTEP)
>
>  #define vcpu_has_sve(vcpu) (system_supports_sve() &&                   \
> -                           ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
> +                           vcpu_get_flag((vcpu), GUEST_HAS_SVE))

Minor nit: the parentheses around vcpu above are unnecessary
(they are omitted for vcpu_has_ptrauth/kvm_arm_vcpu_sve_finalized).

Reviewed-by: Reiji Watanabe <reijiw@google.com>

The new infrastructure for those flags looks nice.

Thanks!
Reiji



>
>  #ifdef CONFIG_ARM64_PTR_AUTH
>  #define vcpu_has_ptrauth(vcpu)                                         \
>         ((cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH) ||                \
>           cpus_have_final_cap(ARM64_HAS_GENERIC_AUTH)) &&               \
> -        (vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
> +         vcpu_get_flag(vcpu, GUEST_HAS_PTRAUTH))
>  #else
>  #define vcpu_has_ptrauth(vcpu)         false
>  #endif
> @@ -885,8 +889,7 @@ void kvm_init_protected_traps(struct kvm_vcpu *vcpu);
>  int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
>  bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
>
> -#define kvm_arm_vcpu_sve_finalized(vcpu) \
> -       ((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED)
> +#define kvm_arm_vcpu_sve_finalized(vcpu) vcpu_get_flag(vcpu, VCPU_SVE_FINALIZED)
>
>  #define kvm_has_mte(kvm)                                       \
>         (system_supports_mte() &&                               \
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index 6c70c6f61c70..0e08fbe68715 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -81,7 +81,7 @@ static int kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu)
>          * KVM_REG_ARM64_SVE_VLS.  Allocation is deferred until
>          * kvm_arm_vcpu_finalize(), which freezes the configuration.
>          */
> -       vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_SVE;
> +       vcpu_set_flag(vcpu, GUEST_HAS_SVE);
>
>         return 0;
>  }
> @@ -120,7 +120,7 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu)
>         }
>
>         vcpu->arch.sve_state = buf;
> -       vcpu->arch.flags |= KVM_ARM64_VCPU_SVE_FINALIZED;
> +       vcpu_set_flag(vcpu, VCPU_SVE_FINALIZED);
>         return 0;
>  }
>
> @@ -177,7 +177,7 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
>             !system_has_full_ptr_auth())
>                 return -EINVAL;
>
> -       vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
> +       vcpu_set_flag(vcpu, GUEST_HAS_PTRAUTH);
>         return 0;
>  }
>
> --
> 2.34.1
>
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 06/18] KVM: arm64: Add three sets of flags to the vcpu state
  2022-06-09  6:10     ` Reiji Watanabe
  (?)
@ 2022-06-09  7:46       ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-06-09  7:46 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: kvmarm, kvm, Linux ARM, kernel-team, Will Deacon, Mark Brown

On Thu, 09 Jun 2022 07:10:14 +0100,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> Hi Marc,
> 
> On Sat, May 28, 2022 at 4:38 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> > It so appears that each of the vcpu flags is really belonging to
> > one of three categories:
> >
> > - a configuration flag, set once and for all
> > - an input flag generated by the kernel for the hypervisor to use
> > - a state flag that is only for the kernel's own bookkeeping
> >
> > As we are going to split all the existing flags into these three
> > sets, introduce all three in one go.
> >
> > No functional change other than a bit of bloat...
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_host.h | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 5eb6791df608..c9dd0d4e22f2 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -338,6 +338,15 @@ struct kvm_vcpu_arch {
> >         /* Miscellaneous vcpu state flags */
> >         u64 flags;
> >
> > +       /* Configuration flags */
> > +       u64 cflags;
> > +
> > +       /* Input flags to the hypervisor code */
> > +       u64 iflags;
> > +
> > +       /* State flags, unused by the hypervisor code */
> > +       u64 sflags;
> 
> Although I think VCPU_SVE_FINALIZED could be considered "state" rather
> than "configuration", I assume the reason why it is handled by cflags
> in the following patches is because VCPU_SVE_FINALIZED is set once
> for all. If my assumption is correct, it would be clearer to add
> "set once and for all" in the comment for cflags.

Yes, that's indeed the reason for this categorisation. In general,
these flags are, as you put it, set once and for all extremely early
(before the vcpu can run), and are never cleared. I'll update the
comment accordingly.

> Also, if we end up using VCPU_SVE_FINALIZED in hypervisor code later,
> then should it be handled by iflags instead of cflags ?

That'd be my expectation if they ended up changing state at some
point. My view is that the cflags are immutable once the vcpu has
run, and flags that can change state over the life of the vcpu
shouldn't be in that category.

> 
> My understanding of how those flags should be used is as follows.
> Is my understanding correct ?
> 
>  iflags: flags that are used by hypervisor code

Yes. Crucially, they are used as an input to the hypervisor code: it
either consumes these flags (INCREMENT_PC, PENDING_EXCEPTION), or
consults them to decide what to do.

>  cflags: flags that are set once for all and unused by hypervisor code

Yes.

>  sflags: flags that could be set/cleared more than once and unused
>          by hypervisor code

Yes. They are really bookkeeping flags for the kernel code.

I'll try to incorporate some of that in the comments before reposting
the series.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 06/18] KVM: arm64: Add three sets of flags to the vcpu state
  2022-06-09  7:46       ` Marc Zyngier
  (?)
@ 2022-06-09 17:24         ` Reiji Watanabe
  -1 siblings, 0 replies; 141+ messages in thread
From: Reiji Watanabe @ 2022-06-09 17:24 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, kvm, Linux ARM, kernel-team, Will Deacon, Mark Brown

On Thu, Jun 9, 2022 at 12:47 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Thu, 09 Jun 2022 07:10:14 +0100,
> Reiji Watanabe <reijiw@google.com> wrote:
> >
> > Hi Marc,
> >
> > On Sat, May 28, 2022 at 4:38 AM Marc Zyngier <maz@kernel.org> wrote:
> > >
> > > It so appears that each of the vcpu flags really belongs to
> > > one of three categories:
> > >
> > > - a configuration flag, set once and for all
> > > - an input flag generated by the kernel for the hypervisor to use
> > > - a state flag that is only for the kernel's own bookkeeping
> > >
> > > As we are going to split all the existing flags into these three
> > > sets, introduce all three in one go.
> > >
> > > No functional change other than a bit of bloat...
> > >
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > ---
> > >  arch/arm64/include/asm/kvm_host.h | 9 +++++++++
> > >  1 file changed, 9 insertions(+)
> > >
> > > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > > index 5eb6791df608..c9dd0d4e22f2 100644
> > > --- a/arch/arm64/include/asm/kvm_host.h
> > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > @@ -338,6 +338,15 @@ struct kvm_vcpu_arch {
> > >         /* Miscellaneous vcpu state flags */
> > >         u64 flags;
> > >
> > > +       /* Configuration flags */
> > > +       u64 cflags;
> > > +
> > > +       /* Input flags to the hypervisor code */
> > > +       u64 iflags;
> > > +
> > > +       /* State flags, unused by the hypervisor code */
> > > +       u64 sflags;
> >
> > Although I think VCPU_SVE_FINALIZED could be considered "state" rather
> > than "configuration", I assume the reason why it is handled by cflags
> > in the following patches is because VCPU_SVE_FINALIZED is set once
> > for all. If my assumption is correct, it would be clearer to add
> > "set once and for all" in the comment for cflags.
>
> Yes, that's indeed the reason for this categorisation. In general,
> these flags are, as you put it, set once and for all extremely early
> (before the vcpu can run), and are never cleared. I'll update the
> comment accordingly.
>
> > Also, if we end up using VCPU_SVE_FINALIZED in hypervisor code later,
> > then should it be handled by iflags instead of cflags ?
>
> That'd be my expectation if they ended up changing state at some
> point. My view is that the cflags are immutable once the vcpu has
> run, and flags that can change state over the life of the vcpu
> shouldn't be in that category.
>
> >
> > My understanding of how those flags should be used is as follows.
> > Is my understanding correct ?
> >
> >  iflags: flags that are used by hypervisor code
>
> Yes. Crucially, they are used as an input to the hypervisor code: it
> either consumes these flags (INCREMENT_PC, PENDING_EXCEPTION), or
> consults them to decide what to do.
>
> >  cflags: flags that are set once for all and unused by hypervisor code
>
> Yes.

Thank you so much for the clarification.

I've just realized that GUEST_HAS_PTRAUTH (cflags) is used by
hypervisor code (kvm_hyp_handle_ptrauth and get_pvm_id_aa64isar{1,2}).
Shouldn't GUEST_HAS_PTRAUTH be handled as iflags ?
Or, in choosing one of these three for a flag, is immutability (once
the vcpu has run) the highest priority, followed by whether or not
it is used by hypervisor code ?

>
> >  sflags: flags that could be set/cleared more than once and unused
> >          by hypervisor code
>
> Yes. They are really bookkeeping flags for the kernel code.
>
> I'll try to incorporate some of that in the comments before reposting
> the series.

Thank you, that would be great since I was a bit concerned that
those flags might get mixed up in the future.

Regards,
Reiji

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 08/18] KVM: arm64: Move vcpu PC/Exception flags to the input flag set
  2022-05-28 11:38   ` Marc Zyngier
  (?)
@ 2022-06-10  6:13     ` Reiji Watanabe
  -1 siblings, 0 replies; 141+ messages in thread
From: Reiji Watanabe @ 2022-06-10  6:13 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, kvm, Linux ARM, kernel-team, Will Deacon, Mark Brown

On Sat, May 28, 2022 at 4:38 AM Marc Zyngier <maz@kernel.org> wrote:
>
> The PC update flags (which also deal with exception injection)
> are one of the most complicated uses of the flags we have. Make them
> more foolproof by:
>
> - moving them over to the new accessors and assigning them to the
>   input flag set
>
> - turning the combination of generic ELx flags with another flag
>   indicating the target EL itself into an explicit set of
>   flags for each EL and vector combination
>
> This is otherwise a pretty straightforward conversion.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Reviewed-by: Reiji Watanabe <reijiw@google.com>

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 06/18] KVM: arm64: Add three sets of flags to the vcpu state
  2022-06-09 17:24         ` Reiji Watanabe
  (?)
@ 2022-06-10  7:48           ` Marc Zyngier
  -1 siblings, 0 replies; 141+ messages in thread
From: Marc Zyngier @ 2022-06-10  7:48 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: kvmarm, kvm, Linux ARM, kernel-team, Will Deacon, Mark Brown

On Thu, 09 Jun 2022 18:24:39 +0100,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> I've just realized that GUEST_HAS_PTRAUTH (cflags) is used by
> hypervisor code (kvm_hyp_handle_ptrauth and get_pvm_id_aa64isar{1,2}).
> Shouldn't GUEST_HAS_PTRAUTH be handled as iflags ?
> Or, in choosing one of these three for a flag, is immutability (once
> the vcpu has run) the highest priority, followed by whether or not
> it is used by hypervisor code ?

It can be construed that most configuration flags are also input flags
to the hypervisor, as they will eventually affect its behaviour. But
the fact that a flag is immutable once the vcpu has run is a clear
criterion for a configuration flag.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 141+ messages in thread

end of thread, other threads:[~2022-06-10  7:49 UTC | newest]

Thread overview: 141+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-05-28 11:38 [PATCH 00/18] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
2022-05-28 11:38 ` Marc Zyngier
2022-05-28 11:38 ` Marc Zyngier
2022-05-28 11:38 ` [PATCH 01/18] KVM: arm64: Always start with clearing SVE flag on load Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-30 14:41   ` Mark Brown
2022-05-30 14:41     ` Mark Brown
2022-05-30 14:41     ` Mark Brown
2022-06-06 11:28     ` Marc Zyngier
2022-06-06 11:28       ` Marc Zyngier
2022-06-06 11:28       ` Marc Zyngier
2022-06-06 12:16       ` Mark Brown
2022-06-06 12:16         ` Mark Brown
2022-06-06 12:16         ` Mark Brown
2022-05-28 11:38 ` [PATCH 02/18] KVM: arm64: Always start with clearing SME " Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-30 14:51   ` Mark Brown
2022-05-30 14:51     ` Mark Brown
2022-05-30 14:51     ` Mark Brown
2022-05-28 11:38 ` [PATCH 03/18] KVM: arm64: Drop FP_FOREIGN_STATE from the hypervisor code Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-06-03  5:23   ` Reiji Watanabe
2022-06-03  5:23     ` Reiji Watanabe
2022-06-03  5:23     ` Reiji Watanabe
2022-06-04  8:10     ` Marc Zyngier
2022-06-04  8:10       ` Marc Zyngier
2022-06-04  8:10       ` Marc Zyngier
2022-06-07  4:47       ` Reiji Watanabe
2022-06-07  4:47         ` Reiji Watanabe
2022-06-07  4:47         ` Reiji Watanabe
2022-06-03  9:09   ` Mark Brown
2022-06-03  9:09     ` Mark Brown
2022-06-03  9:09     ` Mark Brown
2022-05-28 11:38 ` [PATCH 04/18] KVM: arm64: Move FP state ownership from flag to a tristate Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-06-03  9:14   ` Mark Brown
2022-06-03  9:14     ` Mark Brown
2022-06-03  9:14     ` Mark Brown
2022-06-06  8:41     ` Marc Zyngier
2022-06-06  8:41       ` Marc Zyngier
2022-06-06  8:41       ` Marc Zyngier
2022-06-06 10:31       ` Mark Brown
2022-06-06 10:31         ` Mark Brown
2022-06-06 10:31         ` Mark Brown
2022-06-04  8:16   ` Reiji Watanabe
2022-06-04  8:16     ` Reiji Watanabe
2022-06-04  8:16     ` Reiji Watanabe
2022-05-28 11:38 ` [PATCH 05/18] KVM: arm64: Add helpers to manipulate vcpu flags among a set Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-06-08  5:26   ` Reiji Watanabe
2022-06-08  5:26     ` Reiji Watanabe
2022-06-08  5:26     ` Reiji Watanabe
2022-06-08  6:51     ` Marc Zyngier
2022-06-08  6:51       ` Marc Zyngier
2022-06-08  6:51       ` Marc Zyngier
2022-06-09  2:25       ` Reiji Watanabe
2022-06-09  2:25         ` Reiji Watanabe
2022-06-09  2:25         ` Reiji Watanabe
2022-05-28 11:38 ` [PATCH 06/18] KVM: arm64: Add three sets of flags to the vcpu state Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-06-08 15:23   ` Fuad Tabba
2022-06-08 15:23     ` Fuad Tabba
2022-06-08 15:23     ` Fuad Tabba
2022-06-09  6:10   ` Reiji Watanabe
2022-06-09  6:10     ` Reiji Watanabe
2022-06-09  6:10     ` Reiji Watanabe
2022-06-09  7:46     ` Marc Zyngier
2022-06-09  7:46       ` Marc Zyngier
2022-06-09  7:46       ` Marc Zyngier
2022-06-09 17:24       ` Reiji Watanabe
2022-06-09 17:24         ` Reiji Watanabe
2022-06-09 17:24         ` Reiji Watanabe
2022-06-10  7:48         ` Marc Zyngier
2022-06-10  7:48           ` Marc Zyngier
2022-06-10  7:48           ` Marc Zyngier
2022-05-28 11:38 ` [PATCH 07/18] KVM: arm64: Move vcpu configuration flags into their own set Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-06-09  6:15   ` Reiji Watanabe
2022-06-09  6:15     ` Reiji Watanabe
2022-06-09  6:15     ` Reiji Watanabe
2022-05-28 11:38 ` [PATCH 08/18] KVM: arm64: Move vcpu PC/Exception flags to the input flag set Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-06-10  6:13   ` Reiji Watanabe
2022-06-10  6:13     ` Reiji Watanabe
2022-06-10  6:13     ` Reiji Watanabe
2022-05-28 11:38 ` [PATCH 09/18] KVM: arm64: Move vcpu debug/SPE/TRBE " Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-06-08 15:16   ` Fuad Tabba
2022-06-08 15:16     ` Fuad Tabba
2022-06-08 15:16     ` Fuad Tabba
2022-06-08 16:01     ` Marc Zyngier
2022-06-08 16:01       ` Marc Zyngier
2022-06-08 16:01       ` Marc Zyngier
2022-05-28 11:38 ` [PATCH 10/18] KVM: arm64: Move vcpu SVE/SME flags to the state " Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38 ` [PATCH 11/18] KVM: arm64: Move vcpu ON_UNSUPPORTED_CPU flag " Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38 ` [PATCH 12/18] KVM: arm64: Move vcpu WFIT " Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38 ` [PATCH 13/18] KVM: arm64: Kill unused vcpu flags field Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38 ` [PATCH 14/18] KVM: arm64: Convert vcpu sysregs_loaded_on_cpu to a state flag Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38 ` [PATCH 15/18] KVM: arm64: Warn when PENDING_EXCEPTION and INCREMENT_PC are set together Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-06-08 15:16   ` Fuad Tabba
2022-06-08 15:16     ` Fuad Tabba
2022-06-08 15:16     ` Fuad Tabba
2022-06-08 16:42     ` Marc Zyngier
2022-06-08 16:42       ` Marc Zyngier
2022-06-08 16:42       ` Marc Zyngier
2022-05-28 11:38 ` [PATCH 16/18] KVM: arm64: Add build-time sanity checks for flags Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38 ` [PATCH 17/18] KVM: arm64: Reduce the size of the vcpu flag members Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38 ` [PATCH 18/18] KVM: arm64: Document why pause cannot be turned into a flag Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-28 11:38   ` Marc Zyngier
2022-05-30  8:28 ` [PATCH 00/18] KVM/arm64: Refactoring the vcpu flags Marc Zyngier
2022-05-30  8:28   ` Marc Zyngier
2022-05-30  8:28   ` Marc Zyngier
2022-06-07 13:43 ` Marc Zyngier
2022-06-07 13:43   ` Marc Zyngier
2022-06-07 13:43   ` Marc Zyngier
