kvm.vger.kernel.org archive mirror
* [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs
@ 2021-09-24 12:53 Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 01/30] KVM: arm64: placeholder to check if VM is protected Fuad Tabba
                   ` (29 more replies)
  0 siblings, 30 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Hi,

This is a prologue to a series that aims to maintain virtual machine and vcpu
state for protected VMs at the hypervisor [1].

The main issue is that in KVM, the VM state (struct kvm) and the vcpu state
(struct kvm_vcpu) are created by the host and are always accessible to it.
For protected VMs (pKVM [2]), only the hypervisor should have access to their
state; the host cannot be trusted with it. Therefore, the hypervisor should
maintain its own copy of the state of every protected VM, one that is not
accessible to the host.

The problem with maintaining a copy of the existing kvm_vcpu struct at the
hypervisor is that it's big: depending on the configuration, it is on the
order of 10kB (ymmv) per vcpu. However, most of what the hypervisor needs to
run a VM is the kvm_cpu_ctxt plus a few hyp-related registers and flags,
which amount to less than 2kB. Many functions take the vcpu struct when all
they access is the kvm_cpu_ctxt; others only need that plus a few hypervisor
state variables. Moreover, we would like to reuse the existing code rather
than write new code for protected VMs that uses new or special structures.

This patch series reduces the scope of the functions that only need the
kvm_cpu_ctxt to just that. It also moves the few elements of kvm_vcpu_arch
that are relevant to the hypervisor into a new structure, vcpu_hyp_state.
This allows the remainder of the series to reduce the scope of everything
accessed by the hypervisor, at least for protected VMs, to kvm_cpu_ctxt and
vcpu_hyp_state (and possibly the vgic, if supported for protected VMs).
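
As a rough sketch of the direction (illustrative only; the actual structure
and accessors are introduced by later patches in the series, and the field
types, names and layout here are simplified):

  /* State that hyp needs, split out of kvm_vcpu_arch (illustrative). */
  struct vcpu_hyp_state {
          u64 hcr_el2;
          u64 mdcr_el2;
          u64 vsesr_el2;
          struct kvm_vcpu_fault_info fault;
          u64 flags;
  };

  /* Hyp code then reaches state through accessors rather than the vcpu. */
  #define vcpu_ctxt(vcpu)         ((vcpu)->arch.ctxt)
  #define hyp_state(vcpu)         ((vcpu)->arch.hyp_state)
  #define hyp_state_hcr_el2(hyps) ((hyps)->hcr_el2)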

This series uses coccinelle semantic patches [3] as much as possible when
changes are made repetitively across many files. All patches that use
coccinelle are prefixed with COCCI.
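
To make the effect of the scripts concrete, here is an abridged before/after
taken from patch 7 (function body elided):

  /* Before: the helper takes the whole vcpu but only touches its context. */
  static void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
  {
          unsigned long cpsr = *vcpu_cpsr(vcpu);
          /* ... adjust the IT bits in cpsr ... */
          *vcpu_cpsr(vcpu) = cpsr;
  }

  /* After: scope reduced to the kvm_cpu_context. */
  static void kvm_adjust_itstate(struct kvm_cpu_context *vcpu_ctxt)
  {
          unsigned long cpsr = *ctxt_cpsr(vcpu_ctxt);
          /* ... adjust the IT bits in cpsr ... */
          *ctxt_cpsr(vcpu_ctxt) = cpsr;
  }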

Based on Linux 5.13-rc6.

Cheers,
/fuad

[1] https://android-kvm.googlesource.com/linux/+/refs/heads/tabba/el2-state-cocci-out

[2] Once complete, protected KVM adds the ability to create protected VMs.
These VMs are protected from the host Linux kernel (and from other VMs): the
host does not have access to guest memory, even if it is compromised. Normal
(nVHE) guests can still be created and run in parallel with protected VMs,
and their functionality should not be affected.

For protected VMs, the host should not even have access to a protected
guest's state or anything that would enable it to manipulate that state
(e.g., vcpu register context and EL2 system registers); only hyp would have
that access. If the host could access that state, it might be able to get
around the protection provided. Therefore, anything that is sensitive and
that requires such access needs to happen at hyp, hence the nVHE code that
runs only at hyp.

For more details about pKVM, please refer to Will's talk at KVM Forum 2020:
https://mirrors.edge.kernel.org/pub/linux/kernel/people/will/slides/kvmforum-2020-edited.pdf
https://www.youtube.com/watch?v=edqJSzsDRxk

[3] https://coccinelle.gitlabpages.inria.fr/website/

Fuad Tabba (30):
  KVM: arm64: placeholder to check if VM is protected
  [DONOTMERGE] Temporarily disable unused variable warning
  [DONOTMERGE] Coccinelle scripts for refactoring
  KVM: arm64: remove unused parameters and asm offsets
  KVM: arm64: add accessors for kvm_cpu_context
  KVM: arm64: COCCI: use_ctxt_access.cocci: use kvm_cpu_context
    accessors
  KVM: arm64: COCCI: add_ctxt.cocci use_ctxt.cocci: reduce scope of
    functions to kvm_cpu_ctxt
  KVM: arm64: add hypervisor state accessors
  KVM: arm64: COCCI: vcpu_hyp_accessors.cocci: use accessors for
    hypervisor state vcpu variables
  KVM: arm64: Add accessors for hypervisor state in kvm_vcpu_arch
  KVM: arm64: create and use a new vcpu_hyp_state struct
  KVM: arm64: COCCI: add_hypstate.cocci use_hypstate.cocci: Reduce scope
    of functions to hyp_state
  KVM: arm64: change function parameters to use kvm_cpu_ctxt and
    hyp_state
  KVM: arm64: reduce scope of vgic v2
  KVM: arm64: COCCI: vgic3_cpu.cocci: reduce scope of vgic v3
  KVM: arm64: reduce scope of vgic_v3 access parameters
  KVM: arm64: access __hyp_running_vcpu via accessors only
  KVM: arm64: reduce scope of __guest_exit to only depend on
    kvm_cpu_context
  KVM: arm64: change calls of get_loaded_vcpu to get_loaded_vcpu_ctxt
  KVM: arm64: add __hyp_running_ctxt and __hyp_running_hyps
  KVM: arm64: transition code to __hyp_running_ctxt and
    __hyp_running_hyps
  KVM: arm64: reduce scope of __guest_enter to depend only on
    kvm_cpu_ctxt
  KVM: arm64: COCCI: remove_unused.cocci: remove unused ctxt and
    hypstate variables
  KVM: arm64: remove unused functions
  KVM: arm64: separate kvm_run() for protected VMs
  KVM: arm64: pVM activate_traps to use vcpu_ctxt and vcpu_hyp_state
  KVM: arm64: remove unsupported pVM features
  KVM: arm64: reduce scope of pVM fixup_guest_exit to hyp_state and
    kvm_cpu_ctxt
  [DONOTMERGE] Remove Coccinelle scripts added for refactoring
  [DONOTMERGE] Re-enable warnings

 arch/arm64/include/asm/kvm_asm.h           |  33 ++-
 arch/arm64/include/asm/kvm_emulate.h       | 292 ++++++++++++++++-----
 arch/arm64/include/asm/kvm_host.h          | 110 ++++++--
 arch/arm64/include/asm/kvm_hyp.h           |  14 +-
 arch/arm64/kernel/asm-offsets.c            |   7 +-
 arch/arm64/kvm/arm.c                       |   2 +-
 arch/arm64/kvm/debug.c                     |  28 +-
 arch/arm64/kvm/fpsimd.c                    |  22 +-
 arch/arm64/kvm/guest.c                     |  30 +--
 arch/arm64/kvm/handle_exit.c               |   8 +-
 arch/arm64/kvm/hyp/aarch32.c               |  26 +-
 arch/arm64/kvm/hyp/entry.S                 |  23 +-
 arch/arm64/kvm/hyp/exception.c             | 113 ++++----
 arch/arm64/kvm/hyp/hyp-entry.S             |   8 +-
 arch/arm64/kvm/hyp/include/hyp/adjust_pc.h |  26 +-
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h  |   6 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h    | 101 ++++---
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h |  43 +--
 arch/arm64/kvm/hyp/nvhe/debug-sr.c         |   8 +-
 arch/arm64/kvm/hyp/nvhe/host.S             |   4 +-
 arch/arm64/kvm/hyp/nvhe/switch.c           | 155 ++++++++---
 arch/arm64/kvm/hyp/nvhe/timer-sr.c         |   4 +-
 arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c   |  32 ++-
 arch/arm64/kvm/hyp/vgic-v3-sr.c            | 242 +++++++++++------
 arch/arm64/kvm/hyp/vhe/switch.c            |  40 +--
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c         |   3 +-
 arch/arm64/kvm/inject_fault.c              |  10 +-
 arch/arm64/kvm/reset.c                     |  16 +-
 arch/arm64/kvm/sys_regs.c                  |   4 +-
 29 files changed, 951 insertions(+), 459 deletions(-)


base-commit: 6d53b3be3b9be497fbe054f35154f508deac729c
-- 
2.33.0.685.g46640cef36-goog



* [RFC PATCH v1 01/30] KVM: arm64: placeholder to check if VM is protected
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-27 15:50   ` Quentin Perret
  2021-09-24 12:53 ` [RFC PATCH v1 02/30] [DONOTMERGE] Temporarily disable unused variable warning Fuad Tabba
                   ` (28 subsequent siblings)
  29 siblings, 1 reply; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Add a function to check whether a VM is protected (under pKVM).
Since the creation of protected VMs isn't enabled yet, this is a
placeholder that always returns false. The intention is for this
to become a check for protected VMs in the future (see Will's RFC).

No functional change intended.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>

Link: https://lore.kernel.org/kvmarm/20210603183347.1695-1-will@kernel.org/
---
 arch/arm64/include/asm/kvm_host.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7cd7d5c8c4bc..adb21a7f0891 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -763,6 +763,11 @@ void kvm_arch_free_vm(struct kvm *kvm);
 
 int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type);
 
+static inline bool kvm_vm_is_protected(struct kvm *kvm)
+{
+	return false;
+}
+
 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 
-- 
2.33.0.685.g46640cef36-goog



* [RFC PATCH v1 02/30] [DONOTMERGE] Temporarily disable unused variable warning
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 01/30] KVM: arm64: placeholder to check if VM is protected Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 03/30] [DONOTMERGE] Coccinelle scripts for refactoring Fuad Tabba
                   ` (27 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Later patches add variables and functions that won't be used
immediately. Disable the relevant warnings until they are used.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index ed669b2d705d..0278bd28bd97 100644
--- a/Makefile
+++ b/Makefile
@@ -504,7 +504,7 @@ KBUILD_CFLAGS   := -Wall -Wundef -Werror=strict-prototypes -Wno-trigraphs \
 		   -fno-strict-aliasing -fno-common -fshort-wchar -fno-PIE \
 		   -Werror=implicit-function-declaration -Werror=implicit-int \
 		   -Werror=return-type -Wno-format-security \
-		   -std=gnu89
+		   -std=gnu89 -Wno-unused-variable -Wno-unused-function
 KBUILD_CPPFLAGS := -D__KERNEL__
 KBUILD_AFLAGS_KERNEL :=
 KBUILD_CFLAGS_KERNEL :=
-- 
2.33.0.685.g46640cef36-goog



* [RFC PATCH v1 03/30] [DONOTMERGE] Coccinelle scripts for refactoring
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 01/30] KVM: arm64: placeholder to check if VM is protected Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 02/30] [DONOTMERGE] Temporarily disable unused variable warning Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 04/30] KVM: arm64: remove unused parameters and asm offsets Fuad Tabba
                   ` (26 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

These Coccinelle scripts are used by the refactoring patches that
follow. They are added as a commit to keep them as part of the
history.

To run the scripts, please use a recent version of Coccinelle that
includes this patch [*].

Signed-off-by: Fuad Tabba <tabba@google.com>

[*]
Link: https://lore.kernel.org/cocci/alpine.DEB.2.22.394.2104211654020.13358@hadrien/T/#t
---
 cocci_refactor/add_ctxt.cocci           | 169 ++++++++++++++++++++++++
 cocci_refactor/add_hypstate.cocci       | 125 ++++++++++++++++++
 cocci_refactor/hyp_ctxt.cocci           |  38 ++++++
 cocci_refactor/range.cocci              |  50 +++++++
 cocci_refactor/remove_unused.cocci      |  69 ++++++++++
 cocci_refactor/test.cocci               |  20 +++
 cocci_refactor/use_ctxt.cocci           |  32 +++++
 cocci_refactor/use_ctxt_access.cocci    |  39 ++++++
 cocci_refactor/use_hypstate.cocci       |  63 +++++++++
 cocci_refactor/vcpu_arch_ctxt.cocci     |  13 ++
 cocci_refactor/vcpu_declr.cocci         |  59 +++++++++
 cocci_refactor/vcpu_flags.cocci         |  10 ++
 cocci_refactor/vcpu_hyp_accessors.cocci |  35 +++++
 cocci_refactor/vcpu_hyp_state.cocci     |  30 +++++
 cocci_refactor/vgic3_cpu.cocci          | 118 +++++++++++++++++
 15 files changed, 870 insertions(+)
 create mode 100644 cocci_refactor/add_ctxt.cocci
 create mode 100644 cocci_refactor/add_hypstate.cocci
 create mode 100644 cocci_refactor/hyp_ctxt.cocci
 create mode 100644 cocci_refactor/range.cocci
 create mode 100644 cocci_refactor/remove_unused.cocci
 create mode 100644 cocci_refactor/test.cocci
 create mode 100644 cocci_refactor/use_ctxt.cocci
 create mode 100644 cocci_refactor/use_ctxt_access.cocci
 create mode 100644 cocci_refactor/use_hypstate.cocci
 create mode 100644 cocci_refactor/vcpu_arch_ctxt.cocci
 create mode 100644 cocci_refactor/vcpu_declr.cocci
 create mode 100644 cocci_refactor/vcpu_flags.cocci
 create mode 100644 cocci_refactor/vcpu_hyp_accessors.cocci
 create mode 100644 cocci_refactor/vcpu_hyp_state.cocci
 create mode 100644 cocci_refactor/vgic3_cpu.cocci

diff --git a/cocci_refactor/add_ctxt.cocci b/cocci_refactor/add_ctxt.cocci
new file mode 100644
index 000000000000..203644944ace
--- /dev/null
+++ b/cocci_refactor/add_ctxt.cocci
@@ -0,0 +1,169 @@
+// <smpl>
+
+/*
+spatch --sp-file add_ctxt.cocci --dir arch/arm64/kvm/hyp --ignore arch/arm64/kvm/hyp/nvhe/debug-sr.c --ignore arch/arm64/kvm/hyp/vhe/debug-sr.c --include-headers --in-place
+*/
+
+
+@exists@
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+identifier fc;
+@@
+<...
+(
+  struct kvm_vcpu *vcpu = NULL;
++ struct kvm_cpu_context *vcpu_ctxt;
+|
+  struct kvm_vcpu *vcpu = ...;
++ struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+|
+  struct kvm_vcpu *vcpu;
++ struct kvm_cpu_context *vcpu_ctxt;
+)
+<...
+  vcpu = ...;
++ vcpu_ctxt = &vcpu_ctxt(vcpu);
+...>
+fc(..., vcpu, ...)
+...>
+
+@exists@
+identifier func != {kvm_arch_vcpu_run_pid_change};
+identifier fc != {vcpu_ctxt};
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+@@
+func(..., struct kvm_vcpu *vcpu, ...) {
++ struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+<+...
+fc(..., vcpu, ...)
+...+>
+ }
+
+@@
+expression a, b;
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+iterator name kvm_for_each_vcpu;
+identifier fc;
+@@
+kvm_for_each_vcpu(a, vcpu, b)
+ {
++ vcpu_ctxt = &vcpu_ctxt(vcpu);
+<+...
+fc(..., vcpu, ...)
+...+>
+ }
+
+@@
+identifier vcpu_ctxt, vcpu;
+iterator name kvm_for_each_vcpu;
+type T;
+identifier x;
+statement S1, S2;
+@@
+kvm_for_each_vcpu(...)
+ {
+- vcpu_ctxt = &vcpu_ctxt(vcpu);
+... when != S1
++ vcpu_ctxt = &vcpu_ctxt(vcpu);
+ S2
+ ... when any
+ }
+
+@
+disable optional_qualifier
+exists
+@
+identifier vcpu;
+identifier vcpu_ctxt;
+@@
+<...
+  const struct kvm_vcpu *vcpu = ...;
+- struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
++ const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+...>
+
+@disable optional_qualifier@
+identifier func, vcpu;
+identifier vcpu_ctxt;
+@@
+func(..., const struct kvm_vcpu *vcpu, ...) {
+- struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
++ const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+...
+ }
+
+@exists@
+expression r1, r2;
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+@@
+(
+- vcpu_gp_regs(vcpu)
++ ctxt_gp_regs(vcpu_ctxt)
+|
+- vcpu_spsr_abt(vcpu)
++ ctxt_spsr_abt(vcpu_ctxt)
+|
+- vcpu_spsr_und(vcpu)
++ ctxt_spsr_und(vcpu_ctxt)
+|
+- vcpu_spsr_irq(vcpu)
++ ctxt_spsr_irq(vcpu_ctxt)
+|
+- vcpu_spsr_fiq(vcpu)
++ ctxt_spsr_fiq(vcpu_ctxt)
+|
+- vcpu_fp_regs(vcpu)
++ ctxt_fp_regs(vcpu_ctxt)
+|
+- __vcpu_sys_reg(vcpu, r1)
++ ctxt_sys_reg(vcpu_ctxt, r1)
+|
+- __vcpu_read_sys_reg(vcpu, r1)
++ __ctxt_read_sys_reg(vcpu_ctxt, r1)
+|
+- __vcpu_write_sys_reg(vcpu, r1, r2)
++ __ctxt_write_sys_reg(vcpu_ctxt, r1, r2)
+|
+- __vcpu_write_spsr(vcpu, r1)
++ __ctxt_write_spsr(vcpu_ctxt, r1)
+|
+- __vcpu_write_spsr_abt(vcpu, r1)
++ __ctxt_write_spsr_abt(vcpu_ctxt, r1)
+|
+- __vcpu_write_spsr_und(vcpu, r1)
++ __ctxt_write_spsr_und(vcpu_ctxt, r1)
+|
+- vcpu_pc(vcpu)
++ ctxt_pc(vcpu_ctxt)
+|
+- vcpu_cpsr(vcpu)
++ ctxt_cpsr(vcpu_ctxt)
+|
+- vcpu_mode_is_32bit(vcpu)
++ ctxt_mode_is_32bit(vcpu_ctxt)
+|
+- vcpu_set_thumb(vcpu)
++ ctxt_set_thumb(vcpu_ctxt)
+|
+- vcpu_get_reg(vcpu, r1)
++ ctxt_get_reg(vcpu_ctxt, r1)
+|
+- vcpu_set_reg(vcpu, r1, r2)
++ ctxt_set_reg(vcpu_ctxt, r1, r2)
+)
+
+
+/* Handles one case of a call within a call. */
+@@
+expression r1, r2;
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+@@
+- vcpu_pc(vcpu)
++ ctxt_pc(vcpu_ctxt)
+
+// </smpl>
diff --git a/cocci_refactor/add_hypstate.cocci b/cocci_refactor/add_hypstate.cocci
new file mode 100644
index 000000000000..e8635d0e8f57
--- /dev/null
+++ b/cocci_refactor/add_hypstate.cocci
@@ -0,0 +1,125 @@
+// <smpl>
+
+/*
+FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]" ! -name "debug-sr*") arch/arm64/include/asm/kvm_hyp.h"
+spatch --sp-file add_hypstate.cocci $FILES --in-place
+*/
+
+@exists@
+identifier vcpu;
+fresh identifier hyps = vcpu ## "_hyps";
+identifier fc;
+@@
+<...
+(
+  struct kvm_vcpu *vcpu = NULL;
++ struct vcpu_hyp_state *hyps;
+|
+  struct kvm_vcpu *vcpu = ...;
++ struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
+|
+  struct kvm_vcpu *vcpu;
++ struct vcpu_hyp_state *hyps;
+)
+<...
+  vcpu = ...;
++ hyps = &hyp_state(vcpu);
+...>
+fc(..., vcpu, ...)
+...>
+
+@exists@
+identifier func != {kvm_arch_vcpu_run_pid_change};
+identifier vcpu;
+fresh identifier hyps = vcpu ## "_hyps";
+identifier fc;
+@@
+func(..., struct kvm_vcpu *vcpu, ...) {
++ struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
+<+...
+fc(..., vcpu, ...)
+...+>
+ }
+
+@@
+expression a, b;
+identifier vcpu;
+fresh identifier hyps = vcpu ## "_hyps";
+iterator name kvm_for_each_vcpu;
+identifier fc;
+@@
+kvm_for_each_vcpu(a, vcpu, b)
+ {
++ hyps = &hyp_state(vcpu);
+<+...
+fc(..., vcpu, ...)
+...+>
+ }
+
+@@
+identifier hyps, vcpu;
+iterator name kvm_for_each_vcpu;
+statement S1, S2;
+@@
+kvm_for_each_vcpu(...)
+ {
+- hyps = &hyp_state(vcpu);
+... when != S1
++ hyps = &hyp_state(vcpu);
+ S2
+ ... when any
+ }
+
+@
+disable optional_qualifier
+exists
+@
+identifier vcpu, hyps;
+@@
+<...
+  const struct kvm_vcpu *vcpu = ...;
+- struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
++ const struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
+...>
+
+
+@@
+identifier func, vcpu, hyps;
+@@
+func(..., const struct kvm_vcpu *vcpu, ...) {
+- struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
++ const struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
+...
+ }
+
+@exists@
+identifier vcpu;
+fresh identifier hyps = vcpu ## "_hyps";
+@@
+(
+- vcpu_hcr_el2(vcpu)
++ hyp_state_hcr_el2(hyps)
+|
+- vcpu_mdcr_el2(vcpu)
++ hyp_state_mdcr_el2(hyps)
+|
+- vcpu_vsesr_el2(vcpu)
++ hyp_state_vsesr_el2(hyps)
+|
+- vcpu_fault(vcpu)
++ hyp_state_fault(hyps)
+|
+- vcpu_flags(vcpu)
++ hyp_state_flags(hyps)
+|
+- vcpu_has_sve(vcpu)
++ hyp_state_has_sve(hyps)
+|
+- vcpu_has_ptrauth(vcpu)
++ hyp_state_has_ptrauth(hyps)
+|
+- kvm_arm_vcpu_sve_finalized(vcpu)
++ kvm_arm_hyp_state_sve_finalized(hyps)
+)
+
+// </smpl>
diff --git a/cocci_refactor/hyp_ctxt.cocci b/cocci_refactor/hyp_ctxt.cocci
new file mode 100644
index 000000000000..af7974e3a502
--- /dev/null
+++ b/cocci_refactor/hyp_ctxt.cocci
@@ -0,0 +1,38 @@
+// Remove vcpu if all we're using is hypstate and ctxt
+
+/*
+FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]")"
+spatch --sp-file hyp_ctxt.cocci $FILES --in-place;
+*/
+
+// <smpl>
+
+@remove@
+identifier func !~ "^trap_|^access_|dbg_to_reg|check_pmu_access_disabled|match_mpidr|get_ctr_el0|emulate_cp|unhandled_cp_access|index_to_sys_reg_desc|kvm_pmu_|pmu_counter_idx_valid|reset_|read_from_write_only|write_to_read_only|undef_access|vgic_|kvm_handle_|handle_sve|handle_smc|handle_no_fpsimd|id_visibility|reg_to_dbg|ptrauth_visibility|sve_visibility|kvm_arch_sched_in|kvm_arch_vcpu_|kvm_vcpu_pmu_|kvm_psci_|kvm_arm_copy_fw_reg_indices|kvm_arm_pvtime_|kvm_trng_|kvm_arm_timer_";
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+fresh identifier vcpu_hyps = vcpu ## "_hyps";
+identifier hyps_remove;
+identifier ctxt_remove;
+@@
+func(...,
+- struct kvm_vcpu *vcpu
++ struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps
+,...) {
+?- struct vcpu_hyp_state *hyps_remove = ...;
+?- struct kvm_cpu_context *ctxt_remove = ...;
+... when != vcpu
+ }
+
+@@
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+fresh identifier vcpu_hyps = vcpu ## "_hyps";
+identifier remove.func;
+@@
+ func(
+- vcpu
++ vcpu_ctxt, vcpu_hyps
+  , ...)
+
+// </smpl>
\ No newline at end of file
diff --git a/cocci_refactor/range.cocci b/cocci_refactor/range.cocci
new file mode 100644
index 000000000000..d99b9ee30657
--- /dev/null
+++ b/cocci_refactor/range.cocci
@@ -0,0 +1,50 @@
+
+
+// <smpl>
+
+/*
+ FILES="$(find arch/arm64 -name "*.[ch]") include/kvm/arm_hypercalls.h"; spatch --sp-file range.cocci $FILES
+*/
+
+@initialize:python@
+@@
+starts = ("start", "begin", "from", "floor", "addr", "kaddr")
+ends = ("size", "length", "len")
+
+//ends = ("end", "to", "ceiling", "size", "length", "len")
+
+
+@start_end@
+identifier f;
+type A, B;
+identifier start, end;
+parameter list[n] ps;
+@@
+f(ps, A start, B end, ...) {
+...
+}
+
+@script:python@
+start << start_end.start;
+end << start_end.end;
+ta << start_end.A;
+tb << start_end.B;
+@@
+
+if ta != tb and tb != "size_t":
+        cocci.include_match(False)
+elif not any(x in start for x in starts) and not any(x in end for x in ends):
+        cocci.include_match(False)
+
+@@
+identifier f = start_end.f;
+expression list[start_end.n] xs;
+expression a, b;
+@@
+(
+* f(xs, a, a, ...)
+|
+* f(xs, a, a - b, ...)
+)
+
+// </smpl>
\ No newline at end of file
diff --git a/cocci_refactor/remove_unused.cocci b/cocci_refactor/remove_unused.cocci
new file mode 100644
index 000000000000..c06278398198
--- /dev/null
+++ b/cocci_refactor/remove_unused.cocci
@@ -0,0 +1,69 @@
+// <smpl>
+
+/*
+spatch --sp-file remove_unused.cocci --dir arch/arm64/kvm/hyp --in-place --include-headers --force-diff
+*/
+
+@@
+identifier hyps;
+@@
+{
+...
+(
+- struct vcpu_hyp_state *hyps = ...;
+|
+- struct vcpu_hyp_state *hyps;
+)
+... when != hyps
+    when != if (...) { <+...hyps...+> }
+?- hyps = ...;
+... when != hyps
+    when != if (...) { <+...hyps...+> }
+}
+
+@@
+identifier vcpu_ctxt;
+@@
+{
+...
+(
+- struct kvm_cpu_context *vcpu_ctxt = ...;
+|
+- struct kvm_cpu_context *vcpu_ctxt;
+)
+... when != vcpu_ctxt
+    when != if (...) { <+...vcpu_ctxt...+> }
+?- vcpu_ctxt = ...;
+... when != vcpu_ctxt
+    when != if (...) { <+...vcpu_ctxt...+> }
+}
+
+@@
+identifier x;
+identifier func;
+statement S;
+@@
+func(...)
+ {
+...
+struct kvm_cpu_context *x = ...;
++
+S
+...
+ }
+
+@@
+identifier x;
+identifier func;
+statement S;
+@@
+func(...)
+ {
+...
+struct vcpu_hyp_state *x = ...;
++
+S
+...
+ }
+
+// </smpl>
diff --git a/cocci_refactor/test.cocci b/cocci_refactor/test.cocci
new file mode 100644
index 000000000000..5eb685240ce7
--- /dev/null
+++ b/cocci_refactor/test.cocci
@@ -0,0 +1,20 @@
+/*
+ FILES="$(find arch/arm64 -name "*.[ch]") include/kvm/arm_hypercalls.h"; spatch --sp-file test.cocci $FILES
+
+*/
+
+@r@
+identifier fn;
+@@
+fn(...) {
+ hello;
+ ...
+}
+
+@@
+identifier r.fn;
+@@
+static fn(...) {
++ world;
+ ...
+}
diff --git a/cocci_refactor/use_ctxt.cocci b/cocci_refactor/use_ctxt.cocci
new file mode 100644
index 000000000000..f3f961f567fd
--- /dev/null
+++ b/cocci_refactor/use_ctxt.cocci
@@ -0,0 +1,32 @@
+// <smpl>
+/*
+spatch --sp-file use_ctxt.cocci  --dir arch/arm64/kvm/hyp --ignore debug-sr --include-headers  --in-place
+spatch --sp-file use_ctxt.cocci  --dir arch/arm64/kvm/hyp --ignore debug-sr --include-headers  --in-place
+*/
+
+@remove_vcpu@
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+identifier ctxt_remove;
+identifier func !~ "(reset_unknown|reset_val|kvm_pmu_valid_counter_mask|reset_pmcr|kvm_arch_vcpu_in_kernel|__vgic_v3_)";
+@@
+func(
+- struct kvm_vcpu *vcpu
++ struct kvm_cpu_context *vcpu_ctxt
+, ...) {
+- struct kvm_cpu_context *ctxt_remove = ...;
+... when != vcpu
+    when != if (...) { <+...vcpu...+> }
+}
+
+@@
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+identifier func = remove_vcpu.func;
+@@
+func(
+- vcpu
++ vcpu_ctxt
+  , ...)
+
+// </smpl>
diff --git a/cocci_refactor/use_ctxt_access.cocci b/cocci_refactor/use_ctxt_access.cocci
new file mode 100644
index 000000000000..74f94141e662
--- /dev/null
+++ b/cocci_refactor/use_ctxt_access.cocci
@@ -0,0 +1,39 @@
+// <smpl>
+
+/*
+spatch --sp-file use_ctxt_access.cocci --dir arch/arm64/kvm/ --include-headers --in-place
+*/
+
+@@
+constant r;
+@@
+- __ctxt_sys_reg(&vcpu->arch.ctxt, r)
++ &__vcpu_sys_reg(vcpu, r)
+
+@@
+identifier r;
+@@
+- vcpu->arch.ctxt.regs.r
++ vcpu_gp_regs(vcpu)->r
+
+@@
+identifier r;
+@@
+- vcpu->arch.ctxt.fp_regs.r
++ vcpu_fp_regs(vcpu)->r
+
+@@
+identifier r;
+fresh identifier accessor = "vcpu_" ## r;
+@@
+- &vcpu->arch.ctxt.r
++ accessor(vcpu)
+
+@@
+identifier r;
+fresh identifier accessor = "vcpu_" ## r;
+@@
+- vcpu->arch.ctxt.r
++ *accessor(vcpu)
+
+// </smpl>
\ No newline at end of file
diff --git a/cocci_refactor/use_hypstate.cocci b/cocci_refactor/use_hypstate.cocci
new file mode 100644
index 000000000000..f685149de748
--- /dev/null
+++ b/cocci_refactor/use_hypstate.cocci
@@ -0,0 +1,63 @@
+// <smpl>
+
+/*
+FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]" ! -name "debug-sr*") arch/arm64/include/asm/kvm_hyp.h"
+spatch --sp-file use_hypstate.cocci $FILES --in-place
+*/
+
+
+@remove_vcpu_hyps@
+identifier vcpu;
+fresh identifier hyps = vcpu ## "_hyps";
+identifier hyps_remove;
+identifier func;
+@@
+func(
+- struct kvm_vcpu *vcpu
++ struct vcpu_hyp_state *hyps
+, ...) {
+- struct vcpu_hyp_state *hyps_remove = ...;
+... when != vcpu
+    when != if (...) { <+...vcpu...+> }
+}
+
+@@
+identifier vcpu;
+fresh identifier hyps = vcpu ## "_hyps";
+identifier func = remove_vcpu_hyps.func;
+@@
+func(
+- vcpu
++ hyps
+  , ...)
+
+@remove_vcpu_hyps_ctxt@
+identifier vcpu;
+fresh identifier hyps = vcpu ## "_hyps";
+identifier hyps_remove;
+identifier ctxt_remove;
+identifier func;
+@@
+func(
+- struct kvm_vcpu *vcpu
++ struct vcpu_hyp_state *hyps
+, ...) {
+- struct vcpu_hyp_state *hyps_remove = ...;
+- struct kvm_cpu_context *ctxt_remove = ...;
+... when != vcpu
+    when != if (...) { <+...vcpu...+> }
+    when != ctxt_remove
+    when != if (...) { <+...ctxt_remove...+> }
+}
+
+@@
+identifier vcpu;
+fresh identifier hyps = vcpu ## "_hyps";
+identifier func = remove_vcpu_hyps_ctxt.func;
+@@
+func(
+- vcpu
++ hyps
+  , ...)
+
+// </smpl>
diff --git a/cocci_refactor/vcpu_arch_ctxt.cocci b/cocci_refactor/vcpu_arch_ctxt.cocci
new file mode 100644
index 000000000000..69b3a000de4e
--- /dev/null
+++ b/cocci_refactor/vcpu_arch_ctxt.cocci
@@ -0,0 +1,13 @@
+// spatch --sp-file vcpu_arch_ctxt.cocci --no-includes --include-headers  --dir arch/arm64
+
+// <smpl>
+@@
+identifier vcpu;
+@@
+(
+- vcpu->arch.ctxt.regs
++ vcpu_gp_regs(vcpu)
+|
+- vcpu->arch.ctxt.fp_regs
++ vcpu_fp_regs(vcpu)
+)
diff --git a/cocci_refactor/vcpu_declr.cocci b/cocci_refactor/vcpu_declr.cocci
new file mode 100644
index 000000000000..59cd46bd6b2d
--- /dev/null
+++ b/cocci_refactor/vcpu_declr.cocci
@@ -0,0 +1,59 @@
+
+/*
+FILES="$(find arch/arm64 -name "*.[ch]") include/kvm/arm_hypercalls.h";  spatch --sp-file vcpu_declr.cocci $FILES --in-place
+*/
+
+// <smpl>
+
+@@
+identifier vcpu;
+expression E;
+@@
+<...
+- struct kvm_vcpu *vcpu;
++ struct kvm_vcpu *vcpu = E;
+
+- vcpu = E;
+...>
+
+
+/*
+@@
+identifier vcpu;
+identifier f1, f2;
+@@
+f1(...)
+{
+- struct kvm_vcpu *vcpu = NULL;
++ struct kvm_vcpu *vcpu;
+... when != f2(..., vcpu, ...)
+}
+*/
+
+/*
+@find_after@
+identifier vcpu;
+position p;
+identifier f;
+@@
+<...
+ struct kvm_vcpu *vcpu@p;
+ ... when != vcpu = ...;
+ f(..., vcpu, ...);
+...>
+
+@@
+identifier vcpu;
+expression E;
+position p != find_after.p;
+@@
+<...
+- struct kvm_vcpu *vcpu@p;
++ struct kvm_vcpu *vcpu = E;
+ ...
+- vcpu = E;
+...>
+
+*/
+
+// </smpl>
diff --git a/cocci_refactor/vcpu_flags.cocci b/cocci_refactor/vcpu_flags.cocci
new file mode 100644
index 000000000000..609bb7bd7bd0
--- /dev/null
+++ b/cocci_refactor/vcpu_flags.cocci
@@ -0,0 +1,10 @@
+// spatch --sp-file el2_def_flags.cocci --no-includes --include-headers  --dir arch/arm64
+
+// <smpl>
+@@
+expression vcpu;
+@@
+
+- vcpu->arch.flags
++ vcpu_flags(vcpu)
+// </smpl>
\ No newline at end of file
diff --git a/cocci_refactor/vcpu_hyp_accessors.cocci b/cocci_refactor/vcpu_hyp_accessors.cocci
new file mode 100644
index 000000000000..506b56f7216f
--- /dev/null
+++ b/cocci_refactor/vcpu_hyp_accessors.cocci
@@ -0,0 +1,35 @@
+// <smpl>
+
+/*
+spatch --sp-file vcpu_hyp_accessors.cocci --dir arch/arm64 --include-headers --in-place
+*/
+
+@find_defines@
+identifier macro;
+identifier vcpu;
+position p;
+@@
+#define macro(vcpu) vcpu@p
+
+@@
+identifier vcpu;
+position p != find_defines.p;
+@@
+(
+- vcpu@p->arch.hcr_el2
++ vcpu_hcr_el2(vcpu)
+|
+- vcpu@p->arch.mdcr_el2
++ vcpu_mdcr_el2(vcpu)
+|
+- vcpu@p->arch.vsesr_el2
++ vcpu_vsesr_el2(vcpu)
+|
+- vcpu@p->arch.fault
++ vcpu_fault(vcpu)
+|
+- vcpu@p->arch.flags
++ vcpu_flags(vcpu)
+)
+
+// </smpl>
diff --git a/cocci_refactor/vcpu_hyp_state.cocci b/cocci_refactor/vcpu_hyp_state.cocci
new file mode 100644
index 000000000000..3005a6f11871
--- /dev/null
+++ b/cocci_refactor/vcpu_hyp_state.cocci
@@ -0,0 +1,30 @@
+// <smpl>
+
+// spatch --sp-file vcpu_hyp_state.cocci --no-includes --include-headers  --dir arch/arm64 --very-quiet --in-place
+
+@@
+expression vcpu;
+@@
+- vcpu->arch.
++ vcpu->arch.hyp_state.
+(
+ hcr_el2
+|
+ mdcr_el2
+|
+ vsesr_el2
+|
+ fault
+|
+ flags
+|
+ sysregs_loaded_on_cpu
+)
+
+@@
+identifier arch;
+@@
+- arch.fault
++ arch.hyp_state.fault
+
+// </smpl>
\ No newline at end of file
diff --git a/cocci_refactor/vgic3_cpu.cocci b/cocci_refactor/vgic3_cpu.cocci
new file mode 100644
index 000000000000..f7495b2e49cb
--- /dev/null
+++ b/cocci_refactor/vgic3_cpu.cocci
@@ -0,0 +1,118 @@
+// <smpl>
+
+/*
+spatch --sp-file vgic3_cpu.cocci arch/arm64/kvm/hyp/vgic-v3-sr.c --in-place
+*/
+
+
+@@
+identifier vcpu;
+fresh identifier vcpu_hyps = vcpu ## "_hyps";
+@@
+(
+- kvm_vcpu_sys_get_rt
++ kvm_hyp_state_sys_get_rt
+|
+- kvm_vcpu_get_esr
++ kvm_hyp_state_get_esr
+)
+- (vcpu)
++ (vcpu_hyps)
+
+@add_cpu_if@
+identifier func;
+identifier c;
+@@
+int func(
+- struct kvm_vcpu *vcpu
++ struct vgic_v3_cpu_if *cpu_if
+ , ...)
+{
+<+...
+- vcpu->arch.vgic_cpu.vgic_v3.c
++ cpu_if->c
+...+>
+}
+
+@@
+identifier func = add_cpu_if.func;
+@@
+ func(
+- vcpu
++ cpu_if
+ , ...
+ )
+
+
+@add_vgic_ctxt_hyps@
+identifier func;
+@@
+void func(
+- struct kvm_vcpu *vcpu
++ struct vgic_v3_cpu_if *cpu_if, struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps
+ , ...) {
+?- struct vcpu_hyp_state *vcpu_hyps = ...;
+?- struct kvm_cpu_context *vcpu_ctxt = ...;
+ ...
+ }
+
+@@
+identifier func = add_vgic_ctxt_hyps.func;
+@@
+ func(
+- vcpu,
++ cpu_if, vcpu_ctxt, vcpu_hyps,
+ ...
+ )
+
+
+@find_calls@
+identifier fn;
+type a, b;
+@@
+- void (*fn)(struct kvm_vcpu *, a, b);
++ void (*fn)(struct vgic_v3_cpu_if *, struct kvm_cpu_context *, struct vcpu_hyp_state *, a, b);
+
+@@
+identifier fn = find_calls.fn;
+identifier a, b;
+@@
+- fn(vcpu, a, b);
++ fn(cpu_if, vcpu_ctxt, vcpu_hyps, a, b);
+
+@@
+@@
+int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) {
++ struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
+...
+}
+
+@remove@
+identifier func;
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+fresh identifier vcpu_hyps = vcpu ## "_hyps";
+identifier hyps_remove;
+identifier ctxt_remove;
+@@
+func(...,
+- struct kvm_vcpu *vcpu
++ struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps
+,...) {
+?- struct vcpu_hyp_state *hyps_remove = ...;
+?- struct kvm_cpu_context *ctxt_remove = ...;
+... when != vcpu
+ }
+
+@@
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+fresh identifier vcpu_hyps = vcpu ## "_hyps";
+identifier remove.func;
+@@
+ func(
+- vcpu
++ vcpu_ctxt, vcpu_hyps
+  , ...)
+
+// </smpl>
-- 
2.33.0.685.g46640cef36-goog



* [RFC PATCH v1 04/30] KVM: arm64: remove unused parameters and asm offsets
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (2 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 03/30] [DONOTMERGE] Coccinelle scripts for refactoring Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 05/30] KVM: arm64: add accessors for kvm_cpu_context Fuad Tabba
                   ` (25 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Remove unused vcpu function parameters and asm-offset definitions.

This makes the code cleaner and simplifies future refactoring.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_hyp.h   | 4 ++--
 arch/arm64/kernel/asm-offsets.c    | 1 -
 arch/arm64/kvm/hyp/nvhe/switch.c   | 6 +++---
 arch/arm64/kvm/hyp/nvhe/timer-sr.c | 4 ++--
 4 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 9d60b3006efc..2e2b60a1b6c7 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -66,8 +66,8 @@ void __vgic_v3_restore_aprs(struct vgic_v3_cpu_if *cpu_if);
 int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu);
 
 #ifdef __KVM_NVHE_HYPERVISOR__
-void __timer_enable_traps(struct kvm_vcpu *vcpu);
-void __timer_disable_traps(struct kvm_vcpu *vcpu);
+void __timer_enable_traps(void);
+void __timer_disable_traps(void);
 #endif
 
 #ifdef __KVM_NVHE_HYPERVISOR__
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 0cb34ccb6e73..c2cc3a2813e6 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -109,7 +109,6 @@ int main(void)
   DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
   DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
   DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
-  DEFINE(VCPU_HCR_EL2,		offsetof(struct kvm_vcpu, arch.hcr_el2));
   DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_cpu_context, regs));
   DEFINE(CPU_APIAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
   DEFINE(CPU_APIBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1]));
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index f7af9688c1f7..9296d7108f93 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -217,7 +217,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__activate_traps(vcpu);
 
 	__hyp_vgic_restore_state(vcpu);
-	__timer_enable_traps(vcpu);
+	__timer_enable_traps();
 
 	__debug_switch_to_guest(vcpu);
 
@@ -230,7 +230,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	__sysreg_save_state_nvhe(guest_ctxt);
 	__sysreg32_save_state(vcpu);
-	__timer_disable_traps(vcpu);
+	__timer_disable_traps();
 	__hyp_vgic_save_state(vcpu);
 
 	__deactivate_traps(vcpu);
@@ -272,7 +272,7 @@ void __noreturn hyp_panic(void)
 	vcpu = host_ctxt->__hyp_running_vcpu;
 
 	if (vcpu) {
-		__timer_disable_traps(vcpu);
+		__timer_disable_traps();
 		__deactivate_traps(vcpu);
 		__load_host_stage2();
 		__sysreg_restore_state_nvhe(host_ctxt);
diff --git a/arch/arm64/kvm/hyp/nvhe/timer-sr.c b/arch/arm64/kvm/hyp/nvhe/timer-sr.c
index 9072e71693ba..7b2a23ccdb0a 100644
--- a/arch/arm64/kvm/hyp/nvhe/timer-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/timer-sr.c
@@ -19,7 +19,7 @@ void __kvm_timer_set_cntvoff(u64 cntvoff)
  * Should only be called on non-VHE systems.
  * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe().
  */
-void __timer_disable_traps(struct kvm_vcpu *vcpu)
+void __timer_disable_traps(void)
 {
 	u64 val;
 
@@ -33,7 +33,7 @@ void __timer_disable_traps(struct kvm_vcpu *vcpu)
  * Should only be called on non-VHE systems.
  * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe().
  */
-void __timer_enable_traps(struct kvm_vcpu *vcpu)
+void __timer_enable_traps(void)
 {
 	u64 val;
 
-- 
2.33.0.685.g46640cef36-goog



* [RFC PATCH v1 05/30] KVM: arm64: add accessors for kvm_cpu_context
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (3 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 04/30] KVM: arm64: remove unused parameters and asm offsets Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-27 15:57   ` Quentin Perret
  2021-09-24 12:53 ` [RFC PATCH v1 06/30] KVM: arm64: COCCI: use_ctxt_access.cocci: use kvm_cpu_context accessors Fuad Tabba
                   ` (24 subsequent siblings)
  29 siblings, 1 reply; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Add accessors to get/set elements of struct kvm_cpu_context.

This simplifies future refactoring and makes the code more consistent.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_emulate.h | 53 ++++++++++++++++++++++------
 arch/arm64/include/asm/kvm_host.h    | 18 +++++++++-
 arch/arm64/kvm/hyp/exception.c       | 43 +++++++++++++++++-----
 3 files changed, 94 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 01b9857757f2..ad6e53cef1a4 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -127,19 +127,34 @@ static inline void vcpu_set_vsesr(struct kvm_vcpu *vcpu, u64 vsesr)
 	vcpu->arch.vsesr_el2 = vsesr;
 }
 
+static __always_inline unsigned long *ctxt_pc(const struct kvm_cpu_context *ctxt)
+{
+	return (unsigned long *)&ctxt_gp_regs(ctxt)->pc;
+}
+
 static __always_inline unsigned long *vcpu_pc(const struct kvm_vcpu *vcpu)
 {
-	return (unsigned long *)&vcpu_gp_regs(vcpu)->pc;
+	return ctxt_pc(&vcpu_ctxt(vcpu));
+}
+
+static __always_inline unsigned long *ctxt_cpsr(const struct kvm_cpu_context *ctxt)
+{
+	return (unsigned long *)&ctxt_gp_regs(ctxt)->pstate;
 }
 
 static __always_inline unsigned long *vcpu_cpsr(const struct kvm_vcpu *vcpu)
 {
-	return (unsigned long *)&vcpu_gp_regs(vcpu)->pstate;
+	return ctxt_cpsr(&vcpu_ctxt(vcpu));
+}
+
+static __always_inline bool ctxt_mode_is_32bit(const struct kvm_cpu_context *ctxt)
+{
+	return !!(*ctxt_cpsr(ctxt) & PSR_MODE32_BIT);
 }
 
 static __always_inline bool vcpu_mode_is_32bit(const struct kvm_vcpu *vcpu)
 {
-	return !!(*vcpu_cpsr(vcpu) & PSR_MODE32_BIT);
+	return ctxt_mode_is_32bit(&vcpu_ctxt(vcpu));
 }
 
 static __always_inline bool kvm_condition_valid(const struct kvm_vcpu *vcpu)
@@ -150,27 +165,45 @@ static __always_inline bool kvm_condition_valid(const struct kvm_vcpu *vcpu)
 	return true;
 }
 
+static inline void ctxt_set_thumb(struct kvm_cpu_context *ctxt)
+{
+	*ctxt_cpsr(ctxt) |= PSR_AA32_T_BIT;
+}
+
 static inline void vcpu_set_thumb(struct kvm_vcpu *vcpu)
 {
-	*vcpu_cpsr(vcpu) |= PSR_AA32_T_BIT;
+	ctxt_set_thumb(&vcpu_ctxt(vcpu));
 }
 
 /*
- * vcpu_get_reg and vcpu_set_reg should always be passed a register number
- * coming from a read of ESR_EL2. Otherwise, it may give the wrong result on
- * AArch32 with banked registers.
+ * vcpu/ctxt_get_reg and vcpu/ctxt_set_reg should always be passed a register
+ * number coming from a read of ESR_EL2. Otherwise, it may give the wrong result
+ * on AArch32 with banked registers.
  */
+static __always_inline unsigned long
+ctxt_get_reg(const struct kvm_cpu_context *ctxt, u8 reg_num)
+{
+	return (reg_num == 31) ? 0 : ctxt_gp_regs(ctxt)->regs[reg_num];
+}
+
+static __always_inline void
+ctxt_set_reg(struct kvm_cpu_context *ctxt, u8 reg_num, unsigned long val)
+{
+	if (reg_num != 31)
+		ctxt_gp_regs(ctxt)->regs[reg_num] = val;
+}
+
 static __always_inline unsigned long vcpu_get_reg(const struct kvm_vcpu *vcpu,
 					 u8 reg_num)
 {
-	return (reg_num == 31) ? 0 : vcpu_gp_regs(vcpu)->regs[reg_num];
+	return ctxt_get_reg(&vcpu_ctxt(vcpu), reg_num);
+
 }
 
 static __always_inline void vcpu_set_reg(struct kvm_vcpu *vcpu, u8 reg_num,
 				unsigned long val)
 {
-	if (reg_num != 31)
-		vcpu_gp_regs(vcpu)->regs[reg_num] = val;
+	ctxt_set_reg(&vcpu_ctxt(vcpu), reg_num, val);
 }
 
 /*
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index adb21a7f0891..097e5f533af9 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -446,7 +446,23 @@ struct kvm_vcpu_arch {
 #define vcpu_has_ptrauth(vcpu)		false
 #endif
 
-#define vcpu_gp_regs(v)		(&(v)->arch.ctxt.regs)
+#define vcpu_ctxt(vcpu) ((vcpu)->arch.ctxt)
+
+/* VCPU Context accessors (direct) */
+#define ctxt_gp_regs(c)		(&(c)->regs)
+#define ctxt_spsr_abt(c)	(&(c)->spsr_abt)
+#define ctxt_spsr_und(c)	(&(c)->spsr_und)
+#define ctxt_spsr_irq(c)	(&(c)->spsr_irq)
+#define ctxt_spsr_fiq(c)	(&(c)->spsr_fiq)
+#define ctxt_fp_regs(c)		(&(c)->fp_regs)
+
+/* VCPU Context accessors */
+#define vcpu_gp_regs(v)		ctxt_gp_regs(&vcpu_ctxt(v))
+#define vcpu_spsr_abt(v)	ctxt_spsr_abt(&vcpu_ctxt(v))
+#define vcpu_spsr_und(v)	ctxt_spsr_und(&vcpu_ctxt(v))
+#define vcpu_spsr_irq(v)	ctxt_spsr_irq(&vcpu_ctxt(v))
+#define vcpu_spsr_fiq(v)	ctxt_spsr_fiq(&vcpu_ctxt(v))
+#define vcpu_fp_regs(v)		ctxt_fp_regs(&vcpu_ctxt(v))
 
 /*
  * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the
diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index 11541b94b328..643c5844f684 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -18,43 +18,68 @@
 #error Hypervisor code only!
 #endif
 
-static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
+static inline u64 __ctxt_read_sys_reg(const struct kvm_cpu_context *vcpu_ctxt, int reg)
 {
 	u64 val;
 
 	if (__vcpu_read_sys_reg_from_cpu(reg, &val))
 		return val;
 
-	return __vcpu_sys_reg(vcpu, reg);
+	return ctxt_sys_reg(vcpu_ctxt, reg);
 }
 
-static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
+static inline void __ctxt_write_sys_reg(struct kvm_cpu_context *vcpu_ctxt, u64 val, int reg)
 {
 	if (__vcpu_write_sys_reg_to_cpu(val, reg))
 		return;
 
-	 __vcpu_sys_reg(vcpu, reg) = val;
+	 ctxt_sys_reg(vcpu_ctxt, reg) = val;
 }
 
-static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val)
+static void __ctxt_write_spsr(struct kvm_cpu_context *vcpu_ctxt, u64 val)
 {
 	write_sysreg_el1(val, SYS_SPSR);
 }
 
-static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val)
+static void __ctxt_write_spsr_abt(struct kvm_cpu_context *vcpu_ctxt, u64 val)
 {
 	if (has_vhe())
 		write_sysreg(val, spsr_abt);
 	else
-		vcpu->arch.ctxt.spsr_abt = val;
+		*ctxt_spsr_abt(vcpu_ctxt) = val;
 }
 
-static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
+static void __ctxt_write_spsr_und(struct kvm_cpu_context *vcpu_ctxt, u64 val)
 {
 	if (has_vhe())
 		write_sysreg(val, spsr_und);
 	else
-		vcpu->arch.ctxt.spsr_und = val;
+		*ctxt_spsr_und(vcpu_ctxt) = val;
+}
+
+static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
+{
+	return __ctxt_read_sys_reg(&vcpu_ctxt(vcpu), reg);
+}
+
+static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
+{
+	__ctxt_write_sys_reg(&vcpu_ctxt(vcpu), val, reg);
+}
+
+static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val)
+{
+	__ctxt_write_spsr(&vcpu_ctxt(vcpu), val);
+}
+
+static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val)
+{
+	__ctxt_write_spsr_abt(&vcpu_ctxt(vcpu), val);
+}
+
+static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
+{
+	__ctxt_write_spsr_und(&vcpu_ctxt(vcpu), val);
 }
 
 /*
-- 
2.33.0.685.g46640cef36-goog



* [RFC PATCH v1 06/30] KVM: arm64: COCCI: use_ctxt_access.cocci: use kvm_cpu_context accessors
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (4 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 05/30] KVM: arm64: add accessors for kvm_cpu_context Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 07/30] KVM: arm64: COCCI: add_ctxt.cocci use_ctxt.cocci: reduce scope of functions to kvm_cpu_ctxt Fuad Tabba
                   ` (23 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Some parts of the code access vcpu->arch.ctxt directly instead of
using existing accessors. Refactor to use the existing accessors
to make the code more consistent and to simplify future patches.

This applies the semantic patch with the following command:

spatch --sp-file cocci_refactor/use_ctxt_access.cocci --dir arch/arm64/kvm/ --include-headers --in-place

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/fpsimd.c                    |  2 +-
 arch/arm64/kvm/guest.c                     | 28 +++++++++++-----------
 arch/arm64/kvm/hyp/include/hyp/switch.h    |  4 ++--
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 16 ++++++-------
 arch/arm64/kvm/reset.c                     | 10 ++++----
 5 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 5621020b28de..db135588236a 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -97,7 +97,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
 	WARN_ON_ONCE(!irqs_disabled());
 
 	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
-		fpsimd_bind_state_to_cpu(&vcpu->arch.ctxt.fp_regs,
+		fpsimd_bind_state_to_cpu(vcpu_fp_regs(vcpu),
 					 vcpu->arch.sve_state,
 					 vcpu->arch.sve_max_vl);
 
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 5cb4a1cd5603..c4429307a164 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -116,49 +116,49 @@ static void *core_reg_addr(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	     KVM_REG_ARM_CORE_REG(regs.regs[30]):
 		off -= KVM_REG_ARM_CORE_REG(regs.regs[0]);
 		off /= 2;
-		return &vcpu->arch.ctxt.regs.regs[off];
+		return &vcpu_gp_regs(vcpu)->regs[off];
 
 	case KVM_REG_ARM_CORE_REG(regs.sp):
-		return &vcpu->arch.ctxt.regs.sp;
+		return &vcpu_gp_regs(vcpu)->sp;
 
 	case KVM_REG_ARM_CORE_REG(regs.pc):
-		return &vcpu->arch.ctxt.regs.pc;
+		return &vcpu_gp_regs(vcpu)->pc;
 
 	case KVM_REG_ARM_CORE_REG(regs.pstate):
-		return &vcpu->arch.ctxt.regs.pstate;
+		return &vcpu_gp_regs(vcpu)->pstate;
 
 	case KVM_REG_ARM_CORE_REG(sp_el1):
-		return __ctxt_sys_reg(&vcpu->arch.ctxt, SP_EL1);
+		return &__vcpu_sys_reg(vcpu, SP_EL1);
 
 	case KVM_REG_ARM_CORE_REG(elr_el1):
-		return __ctxt_sys_reg(&vcpu->arch.ctxt, ELR_EL1);
+		return &__vcpu_sys_reg(vcpu, ELR_EL1);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_EL1]):
-		return __ctxt_sys_reg(&vcpu->arch.ctxt, SPSR_EL1);
+		return &__vcpu_sys_reg(vcpu, SPSR_EL1);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_ABT]):
-		return &vcpu->arch.ctxt.spsr_abt;
+		return vcpu_spsr_abt(vcpu);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_UND]):
-		return &vcpu->arch.ctxt.spsr_und;
+		return vcpu_spsr_und(vcpu);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_IRQ]):
-		return &vcpu->arch.ctxt.spsr_irq;
+		return vcpu_spsr_irq(vcpu);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_FIQ]):
-		return &vcpu->arch.ctxt.spsr_fiq;
+		return vcpu_spsr_fiq(vcpu);
 
 	case KVM_REG_ARM_CORE_REG(fp_regs.vregs[0]) ...
 	     KVM_REG_ARM_CORE_REG(fp_regs.vregs[31]):
 		off -= KVM_REG_ARM_CORE_REG(fp_regs.vregs[0]);
 		off /= 4;
-		return &vcpu->arch.ctxt.fp_regs.vregs[off];
+		return &vcpu_fp_regs(vcpu)->vregs[off];
 
 	case KVM_REG_ARM_CORE_REG(fp_regs.fpsr):
-		return &vcpu->arch.ctxt.fp_regs.fpsr;
+		return &vcpu_fp_regs(vcpu)->fpsr;
 
 	case KVM_REG_ARM_CORE_REG(fp_regs.fpcr):
-		return &vcpu->arch.ctxt.fp_regs.fpcr;
+		return &vcpu_fp_regs(vcpu)->fpcr;
 
 	default:
 		return NULL;
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index e4a2f295a394..9fa9cf71eefa 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -217,7 +217,7 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 {
 	sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
 	__sve_restore_state(vcpu_sve_pffr(vcpu),
-			    &vcpu->arch.ctxt.fp_regs.fpsr);
+			    &vcpu_fp_regs(vcpu)->fpsr);
 	write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
 }
 
@@ -276,7 +276,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 	if (sve_guest)
 		__hyp_sve_restore_guest(vcpu);
 	else
-		__fpsimd_restore_state(&vcpu->arch.ctxt.fp_regs);
+		__fpsimd_restore_state(vcpu_fp_regs(vcpu));
 
 	/* Skip restoring fpexc32 for AArch64 guests */
 	if (!(read_sysreg(hcr_el2) & HCR_RW))
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index cce43bfe158f..9451206f512e 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -161,10 +161,10 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu)
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
 
-	vcpu->arch.ctxt.spsr_abt = read_sysreg(spsr_abt);
-	vcpu->arch.ctxt.spsr_und = read_sysreg(spsr_und);
-	vcpu->arch.ctxt.spsr_irq = read_sysreg(spsr_irq);
-	vcpu->arch.ctxt.spsr_fiq = read_sysreg(spsr_fiq);
+	*vcpu_spsr_abt(vcpu) = read_sysreg(spsr_abt);
+	*vcpu_spsr_und(vcpu) = read_sysreg(spsr_und);
+	*vcpu_spsr_irq(vcpu) = read_sysreg(spsr_irq);
+	*vcpu_spsr_fiq(vcpu) = read_sysreg(spsr_fiq);
 
 	__vcpu_sys_reg(vcpu, DACR32_EL2) = read_sysreg(dacr32_el2);
 	__vcpu_sys_reg(vcpu, IFSR32_EL2) = read_sysreg(ifsr32_el2);
@@ -178,10 +178,10 @@ static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
 
-	write_sysreg(vcpu->arch.ctxt.spsr_abt, spsr_abt);
-	write_sysreg(vcpu->arch.ctxt.spsr_und, spsr_und);
-	write_sysreg(vcpu->arch.ctxt.spsr_irq, spsr_irq);
-	write_sysreg(vcpu->arch.ctxt.spsr_fiq, spsr_fiq);
+	write_sysreg(*vcpu_spsr_abt(vcpu), spsr_abt);
+	write_sysreg(*vcpu_spsr_und(vcpu), spsr_und);
+	write_sysreg(*vcpu_spsr_irq(vcpu), spsr_irq);
+	write_sysreg(*vcpu_spsr_fiq(vcpu), spsr_fiq);
 
 	write_sysreg(__vcpu_sys_reg(vcpu, DACR32_EL2), dacr32_el2);
 	write_sysreg(__vcpu_sys_reg(vcpu, IFSR32_EL2), ifsr32_el2);
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index d37ebee085cf..ab1ef5313a3e 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -258,11 +258,11 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 
 	/* Reset core registers */
 	memset(vcpu_gp_regs(vcpu), 0, sizeof(*vcpu_gp_regs(vcpu)));
-	memset(&vcpu->arch.ctxt.fp_regs, 0, sizeof(vcpu->arch.ctxt.fp_regs));
-	vcpu->arch.ctxt.spsr_abt = 0;
-	vcpu->arch.ctxt.spsr_und = 0;
-	vcpu->arch.ctxt.spsr_irq = 0;
-	vcpu->arch.ctxt.spsr_fiq = 0;
+	memset(vcpu_fp_regs(vcpu), 0, sizeof(*vcpu_fp_regs(vcpu)));
+	*vcpu_spsr_abt(vcpu) = 0;
+	*vcpu_spsr_und(vcpu) = 0;
+	*vcpu_spsr_irq(vcpu) = 0;
+	*vcpu_spsr_fiq(vcpu) = 0;
 	vcpu_gp_regs(vcpu)->pstate = pstate;
 
 	/* Reset system registers */
-- 
2.33.0.685.g46640cef36-goog



* [RFC PATCH v1 07/30] KVM: arm64: COCCI: add_ctxt.cocci use_ctxt.cocci: reduce scope of functions to kvm_cpu_ctxt
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (5 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 06/30] KVM: arm64: COCCI: use_ctxt_access.cocci: use kvm_cpu_context accessors Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 08/30] KVM: arm64: add hypervisor state accessors Fuad Tabba
                   ` (22 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Many functions don't need access to the whole vcpu structure, only to
the kvm_cpu_ctxt. Reduce their scope accordingly.

This applies the semantic patches with the following commands:
spatch --sp-file cocci_refactor/add_ctxt.cocci --dir arch/arm64/kvm/hyp --ignore arch/arm64/kvm/hyp/nvhe/debug-sr.c --ignore arch/arm64/kvm/hyp/vhe/debug-sr.c --include-headers --in-place
spatch --sp-file cocci_refactor/use_ctxt.cocci  --dir arch/arm64/kvm/hyp --include-headers  --in-place
spatch --sp-file cocci_refactor/use_ctxt.cocci  --dir arch/arm64/kvm/hyp --include-headers  --in-place

This patch adds variables that may be unused. These will be
removed at the end of this patch series.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/aarch32.c               | 18 +++---
 arch/arm64/kvm/hyp/exception.c             | 60 ++++++++++--------
 arch/arm64/kvm/hyp/include/hyp/adjust_pc.h | 18 +++---
 arch/arm64/kvm/hyp/include/hyp/switch.h    | 20 ++++--
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 31 +++++-----
 arch/arm64/kvm/hyp/nvhe/switch.c           |  5 ++
 arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c   | 13 ++--
 arch/arm64/kvm/hyp/vgic-v3-sr.c            | 71 +++++++++++++++-------
 arch/arm64/kvm/hyp/vhe/switch.c            |  7 +++
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c         |  2 +
 10 files changed, 155 insertions(+), 90 deletions(-)

diff --git a/arch/arm64/kvm/hyp/aarch32.c b/arch/arm64/kvm/hyp/aarch32.c
index f98cbe2626a1..27ebfff023ff 100644
--- a/arch/arm64/kvm/hyp/aarch32.c
+++ b/arch/arm64/kvm/hyp/aarch32.c
@@ -46,6 +46,7 @@ static const unsigned short cc_map[16] = {
  */
 bool kvm_condition_valid32(const struct kvm_vcpu *vcpu)
 {
+	const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	unsigned long cpsr;
 	u32 cpsr_cond;
 	int cond;
@@ -59,7 +60,7 @@ bool kvm_condition_valid32(const struct kvm_vcpu *vcpu)
 	if (cond == 0xE)
 		return true;
 
-	cpsr = *vcpu_cpsr(vcpu);
+	cpsr = *ctxt_cpsr(vcpu_ctxt);
 
 	if (cond < 0) {
 		/* This can happen in Thumb mode: examine IT state. */
@@ -93,10 +94,10 @@ bool kvm_condition_valid32(const struct kvm_vcpu *vcpu)
  *
  * IT[7:0] -> CPSR[26:25],CPSR[15:10]
  */
-static void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
+static void kvm_adjust_itstate(struct kvm_cpu_context *vcpu_ctxt)
 {
 	unsigned long itbits, cond;
-	unsigned long cpsr = *vcpu_cpsr(vcpu);
+	unsigned long cpsr = *ctxt_cpsr(vcpu_ctxt);
 	bool is_arm = !(cpsr & PSR_AA32_T_BIT);
 
 	if (is_arm || !(cpsr & PSR_AA32_IT_MASK))
@@ -116,7 +117,7 @@ static void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
 	cpsr |= cond << 13;
 	cpsr |= (itbits & 0x1c) << (10 - 2);
 	cpsr |= (itbits & 0x3) << 25;
-	*vcpu_cpsr(vcpu) = cpsr;
+	*ctxt_cpsr(vcpu_ctxt) = cpsr;
 }
 
 /**
@@ -125,16 +126,17 @@ static void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
  */
 void kvm_skip_instr32(struct kvm_vcpu *vcpu)
 {
-	u32 pc = *vcpu_pc(vcpu);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	u32 pc = *ctxt_pc(vcpu_ctxt);
 	bool is_thumb;
 
-	is_thumb = !!(*vcpu_cpsr(vcpu) & PSR_AA32_T_BIT);
+	is_thumb = !!(*ctxt_cpsr(vcpu_ctxt) & PSR_AA32_T_BIT);
 	if (is_thumb && !kvm_vcpu_trap_il_is32bit(vcpu))
 		pc += 2;
 	else
 		pc += 4;
 
-	*vcpu_pc(vcpu) = pc;
+	*ctxt_pc(vcpu_ctxt) = pc;
 
-	kvm_adjust_itstate(vcpu);
+	kvm_adjust_itstate(vcpu_ctxt);
 }
diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index 643c5844f684..e23b9cedb043 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -99,13 +99,14 @@ static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
  * Here we manipulate the fields in order of the AArch64 SPSR_ELx layout, from
  * MSB to LSB.
  */
-static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
+static void enter_exception64(struct kvm_cpu_context *vcpu_ctxt,
+			      unsigned long target_mode,
 			      enum exception_type type)
 {
 	unsigned long sctlr, vbar, old, new, mode;
 	u64 exc_offset;
 
-	mode = *vcpu_cpsr(vcpu) & (PSR_MODE_MASK | PSR_MODE32_BIT);
+	mode = *ctxt_cpsr(vcpu_ctxt) & (PSR_MODE_MASK | PSR_MODE32_BIT);
 
 	if      (mode == target_mode)
 		exc_offset = CURRENT_EL_SP_ELx_VECTOR;
@@ -118,18 +119,18 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
 
 	switch (target_mode) {
 	case PSR_MODE_EL1h:
-		vbar = __vcpu_read_sys_reg(vcpu, VBAR_EL1);
-		sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL1);
-		__vcpu_write_sys_reg(vcpu, *vcpu_pc(vcpu), ELR_EL1);
+		vbar = __ctxt_read_sys_reg(vcpu_ctxt, VBAR_EL1);
+		sctlr = __ctxt_read_sys_reg(vcpu_ctxt, SCTLR_EL1);
+		__ctxt_write_sys_reg(vcpu_ctxt, *ctxt_pc(vcpu_ctxt), ELR_EL1);
 		break;
 	default:
 		/* Don't do that */
 		BUG();
 	}
 
-	*vcpu_pc(vcpu) = vbar + exc_offset + type;
+	*ctxt_pc(vcpu_ctxt) = vbar + exc_offset + type;
 
-	old = *vcpu_cpsr(vcpu);
+	old = *ctxt_cpsr(vcpu_ctxt);
 	new = 0;
 
 	new |= (old & PSR_N_BIT);
@@ -172,8 +173,8 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
 
 	new |= target_mode;
 
-	*vcpu_cpsr(vcpu) = new;
-	__vcpu_write_spsr(vcpu, old);
+	*ctxt_cpsr(vcpu_ctxt) = new;
+	__ctxt_write_spsr(vcpu_ctxt, old);
 }
 
 /*
@@ -194,12 +195,13 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
  * Here we manipulate the fields in order of the AArch32 SPSR_ELx layout, from
  * MSB to LSB.
  */
-static unsigned long get_except32_cpsr(struct kvm_vcpu *vcpu, u32 mode)
+static unsigned long get_except32_cpsr(struct kvm_cpu_context *vcpu_ctxt,
+				       u32 mode)
 {
-	u32 sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL1);
+	u32 sctlr = __ctxt_read_sys_reg(vcpu_ctxt, SCTLR_EL1);
 	unsigned long old, new;
 
-	old = *vcpu_cpsr(vcpu);
+	old = *ctxt_cpsr(vcpu_ctxt);
 	new = 0;
 
 	new |= (old & PSR_AA32_N_BIT);
@@ -288,27 +290,28 @@ static const u8 return_offsets[8][2] = {
 	[7] = { 4, 4 },		/* FIQ, unused */
 };
 
-static void enter_exception32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
+static void enter_exception32(struct kvm_cpu_context *vcpu_ctxt, u32 mode,
+			      u32 vect_offset)
 {
-	unsigned long spsr = *vcpu_cpsr(vcpu);
+	unsigned long spsr = *ctxt_cpsr(vcpu_ctxt);
 	bool is_thumb = (spsr & PSR_AA32_T_BIT);
-	u32 sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL1);
+	u32 sctlr = __ctxt_read_sys_reg(vcpu_ctxt, SCTLR_EL1);
 	u32 return_address;
 
-	*vcpu_cpsr(vcpu) = get_except32_cpsr(vcpu, mode);
-	return_address   = *vcpu_pc(vcpu);
+	*ctxt_cpsr(vcpu_ctxt) = get_except32_cpsr(vcpu_ctxt, mode);
+	return_address   = *ctxt_pc(vcpu_ctxt);
 	return_address  += return_offsets[vect_offset >> 2][is_thumb];
 
 	/* KVM only enters the ABT and UND modes, so only deal with those */
 	switch(mode) {
 	case PSR_AA32_MODE_ABT:
-		__vcpu_write_spsr_abt(vcpu, host_spsr_to_spsr32(spsr));
-		vcpu_gp_regs(vcpu)->compat_lr_abt = return_address;
+		__ctxt_write_spsr_abt(vcpu_ctxt, host_spsr_to_spsr32(spsr));
+		ctxt_gp_regs(vcpu_ctxt)->compat_lr_abt = return_address;
 		break;
 
 	case PSR_AA32_MODE_UND:
-		__vcpu_write_spsr_und(vcpu, host_spsr_to_spsr32(spsr));
-		vcpu_gp_regs(vcpu)->compat_lr_und = return_address;
+		__ctxt_write_spsr_und(vcpu_ctxt, host_spsr_to_spsr32(spsr));
+		ctxt_gp_regs(vcpu_ctxt)->compat_lr_und = return_address;
 		break;
 	}
 
@@ -316,23 +319,24 @@ static void enter_exception32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
 	if (sctlr & (1 << 13))
 		vect_offset += 0xffff0000;
 	else /* always have security exceptions */
-		vect_offset += __vcpu_read_sys_reg(vcpu, VBAR_EL1);
+		vect_offset += __ctxt_read_sys_reg(vcpu_ctxt, VBAR_EL1);
 
-	*vcpu_pc(vcpu) = vect_offset;
+	*ctxt_pc(vcpu_ctxt) = vect_offset;
 }
 
 static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (vcpu_el1_is_32bit(vcpu)) {
 		switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) {
 		case KVM_ARM64_EXCEPT_AA32_UND:
-			enter_exception32(vcpu, PSR_AA32_MODE_UND, 4);
+			enter_exception32(vcpu_ctxt, PSR_AA32_MODE_UND, 4);
 			break;
 		case KVM_ARM64_EXCEPT_AA32_IABT:
-			enter_exception32(vcpu, PSR_AA32_MODE_ABT, 12);
+			enter_exception32(vcpu_ctxt, PSR_AA32_MODE_ABT, 12);
 			break;
 		case KVM_ARM64_EXCEPT_AA32_DABT:
-			enter_exception32(vcpu, PSR_AA32_MODE_ABT, 16);
+			enter_exception32(vcpu_ctxt, PSR_AA32_MODE_ABT, 16);
 			break;
 		default:
 			/* Err... */
@@ -342,7 +346,8 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 		switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) {
 		case (KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
 		      KVM_ARM64_EXCEPT_AA64_EL1):
-			enter_exception64(vcpu, PSR_MODE_EL1h, except_type_sync);
+			enter_exception64(vcpu_ctxt, PSR_MODE_EL1h,
+					  except_type_sync);
 			break;
 		default:
 			/*
@@ -361,6 +366,7 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
  */
 void __kvm_adjust_pc(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (vcpu->arch.flags & KVM_ARM64_PENDING_EXCEPTION) {
 		kvm_inject_exception(vcpu);
 		vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION |
diff --git a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h
index 4fdfeabefeb4..20dde9dbc11b 100644
--- a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h
+++ b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h
@@ -15,15 +15,16 @@
 
 static inline void kvm_skip_instr(struct kvm_vcpu *vcpu)
 {
-	if (vcpu_mode_is_32bit(vcpu)) {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	if (ctxt_mode_is_32bit(vcpu_ctxt)) {
 		kvm_skip_instr32(vcpu);
 	} else {
-		*vcpu_pc(vcpu) += 4;
-		*vcpu_cpsr(vcpu) &= ~PSR_BTYPE_MASK;
+		*ctxt_pc(vcpu_ctxt) += 4;
+		*ctxt_cpsr(vcpu_ctxt) &= ~PSR_BTYPE_MASK;
 	}
 
 	/* advance the singlestep state machine */
-	*vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;
+	*ctxt_cpsr(vcpu_ctxt) &= ~DBG_SPSR_SS;
 }
 
 /*
@@ -32,13 +33,14 @@ static inline void kvm_skip_instr(struct kvm_vcpu *vcpu)
  */
 static inline void __kvm_skip_instr(struct kvm_vcpu *vcpu)
 {
-	*vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
-	vcpu_gp_regs(vcpu)->pstate = read_sysreg_el2(SYS_SPSR);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	*ctxt_pc(vcpu_ctxt) = read_sysreg_el2(SYS_ELR);
+	ctxt_gp_regs(vcpu_ctxt)->pstate = read_sysreg_el2(SYS_SPSR);
 
 	kvm_skip_instr(vcpu);
 
-	write_sysreg_el2(vcpu_gp_regs(vcpu)->pstate, SYS_SPSR);
-	write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR);
+	write_sysreg_el2(ctxt_gp_regs(vcpu_ctxt)->pstate, SYS_SPSR);
+	write_sysreg_el2(*ctxt_pc(vcpu_ctxt), SYS_ELR);
 }
 
 /*
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 9fa9cf71eefa..41c553a7b5dd 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -54,14 +54,16 @@ static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
 /* Save the 32-bit only FPSIMD system register state */
 static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
 
-	__vcpu_sys_reg(vcpu, FPEXC32_EL2) = read_sysreg(fpexc32_el2);
+	ctxt_sys_reg(vcpu_ctxt, FPEXC32_EL2) = read_sysreg(fpexc32_el2);
 }
 
 static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	/*
 	 * We are about to set CPTR_EL2.TFP to trap all floating point
 	 * register accesses to EL2, however, the ARM ARM clearly states that
@@ -215,15 +217,17 @@ static inline void __hyp_sve_save_host(struct kvm_vcpu *vcpu)
 
 static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
 	__sve_restore_state(vcpu_sve_pffr(vcpu),
-			    &vcpu_fp_regs(vcpu)->fpsr);
-	write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
+			    &ctxt_fp_regs(vcpu_ctxt)->fpsr);
+	write_sysreg_el1(ctxt_sys_reg(vcpu_ctxt, ZCR_EL1), SYS_ZCR);
 }
 
 /* Check for an FPSIMD/SVE trap and handle as appropriate */
 static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	bool sve_guest, sve_host;
 	u8 esr_ec;
 	u64 reg;
@@ -276,11 +280,12 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 	if (sve_guest)
 		__hyp_sve_restore_guest(vcpu);
 	else
-		__fpsimd_restore_state(vcpu_fp_regs(vcpu));
+		__fpsimd_restore_state(ctxt_fp_regs(vcpu_ctxt));
 
 	/* Skip restoring fpexc32 for AArch64 guests */
 	if (!(read_sysreg(hcr_el2) & HCR_RW))
-		write_sysreg(__vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2);
+		write_sysreg(ctxt_sys_reg(vcpu_ctxt, FPEXC32_EL2),
+			     fpexc32_el2);
 
 	vcpu->arch.flags |= KVM_ARM64_FP_ENABLED;
 
@@ -289,9 +294,10 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 
 static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
 	int rt = kvm_vcpu_sys_get_rt(vcpu);
-	u64 val = vcpu_get_reg(vcpu, rt);
+	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 
 	/*
 	 * The normal sysreg handling code expects to see the traps,
@@ -382,6 +388,7 @@ DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 
 static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm_cpu_context *ctxt;
 	u64 val;
 
@@ -412,6 +419,7 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
  */
 static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
 		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index 9451206f512e..c2668b85b67e 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -158,36 +158,39 @@ static inline void __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctx
 
 static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
 
-	*vcpu_spsr_abt(vcpu) = read_sysreg(spsr_abt);
-	*vcpu_spsr_und(vcpu) = read_sysreg(spsr_und);
-	*vcpu_spsr_irq(vcpu) = read_sysreg(spsr_irq);
-	*vcpu_spsr_fiq(vcpu) = read_sysreg(spsr_fiq);
+	*ctxt_spsr_abt(vcpu_ctxt) = read_sysreg(spsr_abt);
+	*ctxt_spsr_und(vcpu_ctxt) = read_sysreg(spsr_und);
+	*ctxt_spsr_irq(vcpu_ctxt) = read_sysreg(spsr_irq);
+	*ctxt_spsr_fiq(vcpu_ctxt) = read_sysreg(spsr_fiq);
 
-	__vcpu_sys_reg(vcpu, DACR32_EL2) = read_sysreg(dacr32_el2);
-	__vcpu_sys_reg(vcpu, IFSR32_EL2) = read_sysreg(ifsr32_el2);
+	ctxt_sys_reg(vcpu_ctxt, DACR32_EL2) = read_sysreg(dacr32_el2);
+	ctxt_sys_reg(vcpu_ctxt, IFSR32_EL2) = read_sysreg(ifsr32_el2);
 
 	if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
-		__vcpu_sys_reg(vcpu, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2);
+		ctxt_sys_reg(vcpu_ctxt, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2);
 }
 
 static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
 
-	write_sysreg(*vcpu_spsr_abt(vcpu), spsr_abt);
-	write_sysreg(*vcpu_spsr_und(vcpu), spsr_und);
-	write_sysreg(*vcpu_spsr_irq(vcpu), spsr_irq);
-	write_sysreg(*vcpu_spsr_fiq(vcpu), spsr_fiq);
+	write_sysreg(*ctxt_spsr_abt(vcpu_ctxt), spsr_abt);
+	write_sysreg(*ctxt_spsr_und(vcpu_ctxt), spsr_und);
+	write_sysreg(*ctxt_spsr_irq(vcpu_ctxt), spsr_irq);
+	write_sysreg(*ctxt_spsr_fiq(vcpu_ctxt), spsr_fiq);
 
-	write_sysreg(__vcpu_sys_reg(vcpu, DACR32_EL2), dacr32_el2);
-	write_sysreg(__vcpu_sys_reg(vcpu, IFSR32_EL2), ifsr32_el2);
+	write_sysreg(ctxt_sys_reg(vcpu_ctxt, DACR32_EL2), dacr32_el2);
+	write_sysreg(ctxt_sys_reg(vcpu_ctxt, IFSR32_EL2), ifsr32_el2);
 
 	if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
-		write_sysreg(__vcpu_sys_reg(vcpu, DBGVCR32_EL2), dbgvcr32_el2);
+		write_sysreg(ctxt_sys_reg(vcpu_ctxt, DBGVCR32_EL2),
+		             dbgvcr32_el2);
 }
 
 #endif /* __ARM64_KVM_HYP_SYSREG_SR_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 9296d7108f93..d5780acab6c2 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -36,6 +36,7 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val;
 
 	___activate_traps(vcpu);
@@ -68,6 +69,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 
 static void __deactivate_traps(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	extern char __kvm_hyp_host_vector[];
 	u64 mdcr_el2, cptr;
 
@@ -168,6 +170,7 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
 /* Switch to the guest for legacy non-VHE systems */
 int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
 	bool pmu_switch_needed;
@@ -267,9 +270,11 @@ void __noreturn hyp_panic(void)
 	u64 par = read_sysreg_par();
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_vcpu *vcpu;
+	struct kvm_cpu_context *vcpu_ctxt;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
 	vcpu = host_ctxt->__hyp_running_vcpu;
+	vcpu_ctxt = &vcpu_ctxt(vcpu);
 
 	if (vcpu) {
 		__timer_disable_traps();
diff --git a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
index 87a54375bd6e..8dbc39026cc5 100644
--- a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
+++ b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
@@ -15,9 +15,9 @@
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
 
-static bool __is_be(struct kvm_vcpu *vcpu)
+static bool __is_be(struct kvm_cpu_context *vcpu_ctxt)
 {
-	if (vcpu_mode_is_32bit(vcpu))
+	if (ctxt_mode_is_32bit(vcpu_ctxt))
 		return !!(read_sysreg_el2(SYS_SPSR) & PSR_AA32_E_BIT);
 
 	return !!(read_sysreg(SCTLR_EL1) & SCTLR_ELx_EE);
@@ -36,6 +36,7 @@ static bool __is_be(struct kvm_vcpu *vcpu)
  */
 int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
 	struct vgic_dist *vgic = &kvm->arch.vgic;
 	phys_addr_t fault_ipa;
@@ -68,19 +69,19 @@ int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
 	addr += fault_ipa - vgic->vgic_cpu_base;
 
 	if (kvm_vcpu_dabt_iswrite(vcpu)) {
-		u32 data = vcpu_get_reg(vcpu, rd);
-		if (__is_be(vcpu)) {
+		u32 data = ctxt_get_reg(vcpu_ctxt, rd);
+		if (__is_be(vcpu_ctxt)) {
 			/* guest pre-swabbed data, undo this for writel() */
 			data = __kvm_swab32(data);
 		}
 		writel_relaxed(data, addr);
 	} else {
 		u32 data = readl_relaxed(addr);
-		if (__is_be(vcpu)) {
+		if (__is_be(vcpu_ctxt)) {
 			/* guest expects swabbed data */
 			data = __kvm_swab32(data);
 		}
-		vcpu_set_reg(vcpu, rd, data);
+		ctxt_set_reg(vcpu_ctxt, rd, data);
 	}
 
 	__kvm_skip_instr(vcpu);
diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c
index 39f8f7f9227c..bdb03b8e50ab 100644
--- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
+++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
@@ -473,6 +473,7 @@ static int __vgic_v3_bpr_min(void)
 
 static int __vgic_v3_get_group(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 esr = kvm_vcpu_get_esr(vcpu);
 	u8 crm = (esr & ESR_ELx_SYS64_ISS_CRM_MASK) >> ESR_ELx_SYS64_ISS_CRM_SHIFT;
 
@@ -673,6 +674,7 @@ static int __vgic_v3_clear_highest_active_priority(void)
 
 static void __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 lr_val;
 	u8 lr_prio, pmr;
 	int lr, grp;
@@ -700,11 +702,11 @@ static void __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 		lr_val |= ICH_LR_ACTIVE_BIT;
 	__gic_v3_set_lr(lr_val, lr);
 	__vgic_v3_set_active_priority(lr_prio, vmcr, grp);
-	vcpu_set_reg(vcpu, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK);
+	ctxt_set_reg(vcpu_ctxt, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK);
 	return;
 
 spurious:
-	vcpu_set_reg(vcpu, rt, ICC_IAR1_EL1_SPURIOUS);
+	ctxt_set_reg(vcpu_ctxt, rt, ICC_IAR1_EL1_SPURIOUS);
 }
 
 static void __vgic_v3_clear_active_lr(int lr, u64 lr_val)
@@ -731,7 +733,8 @@ static void __vgic_v3_bump_eoicount(void)
 
 static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	u32 vid = vcpu_get_reg(vcpu, rt);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	u32 vid = ctxt_get_reg(vcpu_ctxt, rt);
 	u64 lr_val;
 	int lr;
 
@@ -754,7 +757,8 @@ static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	u32 vid = vcpu_get_reg(vcpu, rt);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	u32 vid = ctxt_get_reg(vcpu_ctxt, rt);
 	u64 lr_val;
 	u8 lr_prio, act_prio;
 	int lr, grp;
@@ -791,17 +795,20 @@ static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_read_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	vcpu_set_reg(vcpu, rt, !!(vmcr & ICH_VMCR_ENG0_MASK));
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG0_MASK));
 }
 
 static void __vgic_v3_read_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	vcpu_set_reg(vcpu, rt, !!(vmcr & ICH_VMCR_ENG1_MASK));
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG1_MASK));
 }
 
 static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	u64 val = vcpu_get_reg(vcpu, rt);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 
 	if (val & 1)
 		vmcr |= ICH_VMCR_ENG0_MASK;
@@ -813,7 +820,8 @@ static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	u64 val = vcpu_get_reg(vcpu, rt);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 
 	if (val & 1)
 		vmcr |= ICH_VMCR_ENG1_MASK;
@@ -825,17 +833,20 @@ static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_read_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	vcpu_set_reg(vcpu, rt, __vgic_v3_get_bpr0(vmcr));
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr0(vmcr));
 }
 
 static void __vgic_v3_read_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	vcpu_set_reg(vcpu, rt, __vgic_v3_get_bpr1(vmcr));
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr1(vmcr));
 }
 
 static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	u64 val = vcpu_get_reg(vcpu, rt);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 	u8 bpr_min = __vgic_v3_bpr_min() - 1;
 
 	/* Enforce BPR limiting */
@@ -852,7 +863,8 @@ static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	u64 val = vcpu_get_reg(vcpu, rt);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 	u8 bpr_min = __vgic_v3_bpr_min();
 
 	if (vmcr & ICH_VMCR_CBPR_MASK)
@@ -872,6 +884,7 @@ static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 val;
 
 	if (!__vgic_v3_get_group(vcpu))
@@ -879,12 +892,13 @@ static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
 	else
 		val = __vgic_v3_read_ap1rn(n);
 
-	vcpu_set_reg(vcpu, rt, val);
+	ctxt_set_reg(vcpu_ctxt, rt, val);
 }
 
 static void __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
 {
-	u32 val = vcpu_get_reg(vcpu, rt);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	u32 val = ctxt_get_reg(vcpu_ctxt, rt);
 
 	if (!__vgic_v3_get_group(vcpu))
 		__vgic_v3_write_ap0rn(val, n);
@@ -895,47 +909,56 @@ static void __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
 static void __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu,
 					    u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_read_apxrn(vcpu, rt, 0);
 }
 
 static void __vgic_v3_read_apxr1(struct kvm_vcpu *vcpu,
 					    u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_read_apxrn(vcpu, rt, 1);
 }
 
 static void __vgic_v3_read_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_read_apxrn(vcpu, rt, 2);
 }
 
 static void __vgic_v3_read_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_read_apxrn(vcpu, rt, 3);
 }
 
 static void __vgic_v3_write_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_write_apxrn(vcpu, rt, 0);
 }
 
 static void __vgic_v3_write_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_write_apxrn(vcpu, rt, 1);
 }
 
 static void __vgic_v3_write_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_write_apxrn(vcpu, rt, 2);
 }
 
 static void __vgic_v3_write_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_write_apxrn(vcpu, rt, 3);
 }
 
 static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 lr_val;
 	int lr, lr_grp, grp;
 
@@ -950,19 +973,21 @@ static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 		lr_val = ICC_IAR1_EL1_SPURIOUS;
 
 spurious:
-	vcpu_set_reg(vcpu, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK);
+	ctxt_set_reg(vcpu_ctxt, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK);
 }
 
 static void __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	vmcr &= ICH_VMCR_PMR_MASK;
 	vmcr >>= ICH_VMCR_PMR_SHIFT;
-	vcpu_set_reg(vcpu, rt, vmcr);
+	ctxt_set_reg(vcpu_ctxt, rt, vmcr);
 }
 
 static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	u32 val = vcpu_get_reg(vcpu, rt);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	u32 val = ctxt_get_reg(vcpu_ctxt, rt);
 
 	val <<= ICH_VMCR_PMR_SHIFT;
 	val &= ICH_VMCR_PMR_MASK;
@@ -974,12 +999,14 @@ static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 val = __vgic_v3_get_highest_active_priority();
-	vcpu_set_reg(vcpu, rt, val);
+	ctxt_set_reg(vcpu_ctxt, rt, val);
 }
 
 static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 vtr, val;
 
 	vtr = read_gicreg(ICH_VTR_EL2);
@@ -996,12 +1023,13 @@ static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	/* CBPR */
 	val |= (vmcr & ICH_VMCR_CBPR_MASK) >> ICH_VMCR_CBPR_SHIFT;
 
-	vcpu_set_reg(vcpu, rt, val);
+	ctxt_set_reg(vcpu_ctxt, rt, val);
 }
 
 static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	u32 val = vcpu_get_reg(vcpu, rt);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	u32 val = ctxt_get_reg(vcpu_ctxt, rt);
 
 	if (val & ICC_CTLR_EL1_CBPR_MASK)
 		vmcr |= ICH_VMCR_CBPR_MASK;
@@ -1018,6 +1046,7 @@ static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	int rt;
 	u32 esr;
 	u32 vmcr;
@@ -1026,7 +1055,7 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
 	u32 sysreg;
 
 	esr = kvm_vcpu_get_esr(vcpu);
-	if (vcpu_mode_is_32bit(vcpu)) {
+	if (ctxt_mode_is_32bit(vcpu_ctxt)) {
 		if (!kvm_condition_valid(vcpu)) {
 			__kvm_skip_instr(vcpu);
 			return 1;
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index b3229924d243..c2e443202f8e 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -33,6 +33,7 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val;
 
 	___activate_traps(vcpu);
@@ -68,6 +69,7 @@ NOKPROBE_SYMBOL(__activate_traps);
 
 static void __deactivate_traps(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	extern char vectors[];	/* kernel exception vectors */
 
 	___deactivate_traps(vcpu);
@@ -88,6 +90,7 @@ NOKPROBE_SYMBOL(__deactivate_traps);
 
 void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__activate_traps_common(vcpu);
 }
 
@@ -107,6 +110,7 @@ void deactivate_traps_vhe_put(void)
 /* Switch to the guest for VHE systems running in EL2 */
 static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
 	u64 exit_code;
@@ -160,6 +164,7 @@ NOKPROBE_SYMBOL(__kvm_vcpu_run_vhe);
 
 int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	int ret;
 
 	local_daif_mask();
@@ -197,9 +202,11 @@ static void __hyp_call_panic(u64 spsr, u64 elr, u64 par)
 {
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_vcpu *vcpu;
+	struct kvm_cpu_context *vcpu_ctxt;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
 	vcpu = host_ctxt->__hyp_running_vcpu;
+	vcpu_ctxt = &vcpu_ctxt(vcpu);
 
 	__deactivate_traps(vcpu);
 	sysreg_restore_host_state_vhe(host_ctxt);
diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
index 2a0b8c88d74f..37f56b4743d0 100644
--- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
@@ -63,6 +63,7 @@ NOKPROBE_SYMBOL(sysreg_restore_guest_state_vhe);
  */
 void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
 	struct kvm_cpu_context *host_ctxt;
 
@@ -97,6 +98,7 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
  */
 void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
 	struct kvm_cpu_context *host_ctxt;
 
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 08/30] KVM: arm64: add hypervisor state accessors
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (6 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 07/30] KVM: arm64: COCCI: add_ctxt.cocci use_ctxt.cocci: reduce scope of functions to kvm_cpu_ctxt Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 09/30] KVM: arm64: COCCI: vcpu_hyp_accessors.cocci: use accessors for hypervisor state vcpu variables Fuad Tabba
                   ` (21 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Part of the state in vcpu_arch is hypervisor-specific. To prepare
for isolating that state in future patches, start by creating
accessors for it, rather than dereferencing the vcpu directly.
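
For example (illustrative only; the conversions themselves are done
by the semantic patch in the next patch of this series), code such as:

  vcpu->arch.hcr_el2 |= HCR_TWE;
  vcpu->arch.hcr_el2 |= HCR_TWI;

is then written as:

  vcpu_hcr_el2(vcpu) |= HCR_TWE;
  vcpu_hcr_el2(vcpu) |= HCR_TWI;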

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_host.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 097e5f533af9..280ee23dfc5a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -373,6 +373,13 @@ struct kvm_vcpu_arch {
 	} steal;
 };
 
+/* Accessors for vcpu parameters related to the hypervisor state. */
+#define vcpu_hcr_el2(vcpu) (vcpu)->arch.hcr_el2
+#define vcpu_mdcr_el2(vcpu) (vcpu)->arch.mdcr_el2
+#define vcpu_vsesr_el2(vcpu) (vcpu)->arch.vsesr_el2
+#define vcpu_fault(vcpu) (vcpu)->arch.fault
+#define vcpu_flags(vcpu) (vcpu)->arch.flags
+
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
 			     sve_ffr_offset((vcpu)->arch.sve_max_vl))
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 09/30] KVM: arm64: COCCI: vcpu_hyp_accessors.cocci: use accessors for hypervisor state vcpu variables
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (7 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 08/30] KVM: arm64: add hypervisor state accessors Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 10/30] KVM: arm64: Add accessors for hypervisor state in kvm_vcpu_arch Fuad Tabba
                   ` (20 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

To simplify future refactoring, ensure that all accesses to the
hypervisor-state-related fields in the vcpu go through the accessors
created earlier in this patch series, rather than dereferencing the
vcpu directly.

The semantic patch is applied with the following command:
spatch --sp-file cocci_refactor/vcpu_hyp_accessors.cocci --dir arch/arm64 --include-headers --in-place

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_emulate.h       | 52 +++++++++++-----------
 arch/arm64/kvm/arm.c                       |  2 +-
 arch/arm64/kvm/debug.c                     | 28 ++++++------
 arch/arm64/kvm/fpsimd.c                    | 20 ++++-----
 arch/arm64/kvm/guest.c                     |  2 +-
 arch/arm64/kvm/handle_exit.c               |  2 +-
 arch/arm64/kvm/hyp/exception.c             | 12 ++---
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h  |  6 +--
 arch/arm64/kvm/hyp/include/hyp/switch.h    | 32 ++++++-------
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h |  4 +-
 arch/arm64/kvm/hyp/nvhe/debug-sr.c         |  8 ++--
 arch/arm64/kvm/hyp/nvhe/switch.c           |  4 +-
 arch/arm64/kvm/hyp/vhe/switch.c            |  2 +-
 arch/arm64/kvm/inject_fault.c              | 10 ++---
 arch/arm64/kvm/reset.c                     |  6 +--
 arch/arm64/kvm/sys_regs.c                  |  4 +-
 16 files changed, 97 insertions(+), 97 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index ad6e53cef1a4..7d09a9356d89 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -43,23 +43,23 @@ void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
 
 static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
 {
-	return !(vcpu->arch.hcr_el2 & HCR_RW);
+	return !(vcpu_hcr_el2(vcpu) & HCR_RW);
 }
 
 static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
+	vcpu_hcr_el2(vcpu) = HCR_GUEST_FLAGS;
 	if (is_kernel_in_hyp_mode())
-		vcpu->arch.hcr_el2 |= HCR_E2H;
+		vcpu_hcr_el2(vcpu) |= HCR_E2H;
 	if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN)) {
 		/* route synchronous external abort exceptions to EL2 */
-		vcpu->arch.hcr_el2 |= HCR_TEA;
+		vcpu_hcr_el2(vcpu) |= HCR_TEA;
 		/* trap error record accesses */
-		vcpu->arch.hcr_el2 |= HCR_TERR;
+		vcpu_hcr_el2(vcpu) |= HCR_TERR;
 	}
 
 	if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) {
-		vcpu->arch.hcr_el2 |= HCR_FWB;
+		vcpu_hcr_el2(vcpu) |= HCR_FWB;
 	} else {
 		/*
 		 * For non-FWB CPUs, we trap VM ops (HCR_EL2.TVM) until M+C
@@ -67,11 +67,11 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 		 * MMU gets turned on and do the necessary cache maintenance
 		 * then.
 		 */
-		vcpu->arch.hcr_el2 |= HCR_TVM;
+		vcpu_hcr_el2(vcpu) |= HCR_TVM;
 	}
 
 	if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
-		vcpu->arch.hcr_el2 &= ~HCR_RW;
+		vcpu_hcr_el2(vcpu) &= ~HCR_RW;
 
 	/*
 	 * TID3: trap feature register accesses that we virtualise.
@@ -79,52 +79,52 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 	 * are currently virtualised.
 	 */
 	if (!vcpu_el1_is_32bit(vcpu))
-		vcpu->arch.hcr_el2 |= HCR_TID3;
+		vcpu_hcr_el2(vcpu) |= HCR_TID3;
 
 	if (cpus_have_const_cap(ARM64_MISMATCHED_CACHE_TYPE) ||
 	    vcpu_el1_is_32bit(vcpu))
-		vcpu->arch.hcr_el2 |= HCR_TID2;
+		vcpu_hcr_el2(vcpu) |= HCR_TID2;
 }
 
 static inline unsigned long *vcpu_hcr(struct kvm_vcpu *vcpu)
 {
-	return (unsigned long *)&vcpu->arch.hcr_el2;
+	return (unsigned long *)&vcpu_hcr_el2(vcpu);
 }
 
 static inline void vcpu_clear_wfx_traps(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.hcr_el2 &= ~HCR_TWE;
+	vcpu_hcr_el2(vcpu) &= ~HCR_TWE;
 	if (atomic_read(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vlpi_count) ||
 	    vcpu->kvm->arch.vgic.nassgireq)
-		vcpu->arch.hcr_el2 &= ~HCR_TWI;
-	else
-		vcpu->arch.hcr_el2 |= HCR_TWI;
+		vcpu_hcr_el2(vcpu) &= ~HCR_TWI;
+	else
+		vcpu_hcr_el2(vcpu) |= HCR_TWI;
 }
 
 static inline void vcpu_set_wfx_traps(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.hcr_el2 |= HCR_TWE;
-	vcpu->arch.hcr_el2 |= HCR_TWI;
+	vcpu_hcr_el2(vcpu) |= HCR_TWE;
+	vcpu_hcr_el2(vcpu) |= HCR_TWI;
 }
 
 static inline void vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
+	vcpu_hcr_el2(vcpu) |= (HCR_API | HCR_APK);
 }
 
 static inline void vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
+	vcpu_hcr_el2(vcpu) &= ~(HCR_API | HCR_APK);
 }
 
 static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.vsesr_el2;
+	return vcpu_vsesr_el2(vcpu);
 }
 
 static inline void vcpu_set_vsesr(struct kvm_vcpu *vcpu, u64 vsesr)
 {
-	vcpu->arch.vsesr_el2 = vsesr;
+	vcpu_vsesr_el2(vcpu) = vsesr;
 }
 
 static __always_inline unsigned long *ctxt_pc(const struct kvm_cpu_context *ctxt)
@@ -254,7 +254,7 @@ static inline bool vcpu_mode_priv(const struct kvm_vcpu *vcpu)
 
 static __always_inline u32 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.fault.esr_el2;
+	return vcpu_fault(vcpu).esr_el2;
 }
 
 static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
@@ -269,17 +269,17 @@ static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
 
 static __always_inline unsigned long kvm_vcpu_get_hfar(const struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.fault.far_el2;
+	return vcpu_fault(vcpu).far_el2;
 }
 
 static __always_inline phys_addr_t kvm_vcpu_get_fault_ipa(const struct kvm_vcpu *vcpu)
 {
-	return ((phys_addr_t)vcpu->arch.fault.hpfar_el2 & HPFAR_MASK) << 8;
+	return ((phys_addr_t) vcpu_fault(vcpu).hpfar_el2 & HPFAR_MASK) << 8;
 }
 
 static inline u64 kvm_vcpu_get_disr(const struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.fault.disr_el1;
+	return vcpu_fault(vcpu).disr_el1;
 }
 
 static inline u32 kvm_vcpu_hvc_get_imm(const struct kvm_vcpu *vcpu)
@@ -493,7 +493,7 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu,
 
 static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.flags |= KVM_ARM64_INCREMENT_PC;
+	vcpu_flags(vcpu) |= KVM_ARM64_INCREMENT_PC;
 }
 
 static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu, int feature)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e720148232a0..5f0e2f9821ec 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -907,7 +907,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	 * the vcpu state. Note that this relies on __kvm_adjust_pc()
 	 * being preempt-safe on VHE.
 	 */
-	if (unlikely(vcpu->arch.flags & (KVM_ARM64_PENDING_EXCEPTION |
+	if (unlikely(vcpu_flags(vcpu) & (KVM_ARM64_PENDING_EXCEPTION |
 					 KVM_ARM64_INCREMENT_PC)))
 		kvm_call_hyp(__kvm_adjust_pc, vcpu);
 
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index d5e79d7ee6e9..e7a5956fe648 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -87,8 +87,8 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 	 * This also clears MDCR_EL2_E2PB_MASK and MDCR_EL2_E2TB_MASK
 	 * to disable guest access to the profiling and trace buffers
 	 */
-	vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
-	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
+	vcpu_mdcr_el2(vcpu) = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
+	vcpu_mdcr_el2(vcpu) |= (MDCR_EL2_TPM |
 				MDCR_EL2_TPMS |
 				MDCR_EL2_TTRF |
 				MDCR_EL2_TPMCR |
@@ -98,7 +98,7 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 	/* Is the VM being debugged by userspace? */
 	if (vcpu->guest_debug)
 		/* Route all software debug exceptions to EL2 */
-		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDE;
+		vcpu_mdcr_el2(vcpu) |= MDCR_EL2_TDE;
 
 	/*
 	 * Trap debug register access when one of the following is true:
@@ -107,10 +107,10 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 	 *  - The guest is not using debug (KVM_ARM64_DEBUG_DIRTY is clear).
 	 */
 	if ((vcpu->guest_debug & KVM_GUESTDBG_USE_HW) ||
-	    !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
-		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
+	    !(vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY))
+		vcpu_mdcr_el2(vcpu) |= MDCR_EL2_TDA;
 
-	trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.mdcr_el2);
+	trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu_mdcr_el2(vcpu));
 }
 
 /**
@@ -154,7 +154,7 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
 
 void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 {
-	unsigned long mdscr, orig_mdcr_el2 = vcpu->arch.mdcr_el2;
+	unsigned long mdscr, orig_mdcr_el2 = vcpu_mdcr_el2(vcpu);
 
 	trace_kvm_arm_setup_debug(vcpu, vcpu->guest_debug);
 
@@ -214,7 +214,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 			vcpu_write_sys_reg(vcpu, mdscr, MDSCR_EL1);
 
 			vcpu->arch.debug_ptr = &vcpu->arch.external_debug_state;
-			vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
+			vcpu_flags(vcpu) |= KVM_ARM64_DEBUG_DIRTY;
 
 			trace_kvm_arm_set_regset("BKPTS", get_num_brps(),
 						&vcpu->arch.debug_ptr->dbg_bcr[0],
@@ -231,11 +231,11 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 
 	/* If KDE or MDE are set, perform a full save/restore cycle. */
 	if (vcpu_read_sys_reg(vcpu, MDSCR_EL1) & (DBG_MDSCR_KDE | DBG_MDSCR_MDE))
-		vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
+		vcpu_flags(vcpu) |= KVM_ARM64_DEBUG_DIRTY;
 
 	/* Write mdcr_el2 changes since vcpu_load on VHE systems */
-	if (has_vhe() && orig_mdcr_el2 != vcpu->arch.mdcr_el2)
-		write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
+	if (has_vhe() && orig_mdcr_el2 != vcpu_mdcr_el2(vcpu))
+		write_sysreg(vcpu_mdcr_el2(vcpu), mdcr_el2);
 
 	trace_kvm_arm_set_dreg32("MDSCR_EL1", vcpu_read_sys_reg(vcpu, MDSCR_EL1));
 }
@@ -280,16 +280,16 @@ void kvm_arch_vcpu_load_debug_state_flags(struct kvm_vcpu *vcpu)
 	 */
 	if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_PMSVER_SHIFT) &&
 	    !(read_sysreg_s(SYS_PMBIDR_EL1) & BIT(SYS_PMBIDR_EL1_P_SHIFT)))
-		vcpu->arch.flags |= KVM_ARM64_DEBUG_STATE_SAVE_SPE;
+		vcpu_flags(vcpu) |= KVM_ARM64_DEBUG_STATE_SAVE_SPE;
 
 	/* Check if we have TRBE implemented and available at the host */
 	if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_TRBE_SHIFT) &&
 	    !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_PROG))
-		vcpu->arch.flags |= KVM_ARM64_DEBUG_STATE_SAVE_TRBE;
+		vcpu_flags(vcpu) |= KVM_ARM64_DEBUG_STATE_SAVE_TRBE;
 }
 
 void kvm_arch_vcpu_put_debug_state_flags(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.flags &= ~(KVM_ARM64_DEBUG_STATE_SAVE_SPE |
+	vcpu_flags(vcpu) &= ~(KVM_ARM64_DEBUG_STATE_SAVE_SPE |
 			      KVM_ARM64_DEBUG_STATE_SAVE_TRBE);
 }
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index db135588236a..1871a267e2ed 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -74,16 +74,16 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 {
 	BUG_ON(!current->mm);
 
-	vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
-			      KVM_ARM64_HOST_SVE_IN_USE |
-			      KVM_ARM64_HOST_SVE_ENABLED);
-	vcpu->arch.flags |= KVM_ARM64_FP_HOST;
+	vcpu_flags(vcpu) &= ~(KVM_ARM64_FP_ENABLED |
+		              KVM_ARM64_HOST_SVE_IN_USE |
+		              KVM_ARM64_HOST_SVE_ENABLED);
+	vcpu_flags(vcpu) |= KVM_ARM64_FP_HOST;
 
 	if (test_thread_flag(TIF_SVE))
-		vcpu->arch.flags |= KVM_ARM64_HOST_SVE_IN_USE;
+		vcpu_flags(vcpu) |= KVM_ARM64_HOST_SVE_IN_USE;
 
 	if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
-		vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED;
+		vcpu_flags(vcpu) |= KVM_ARM64_HOST_SVE_ENABLED;
 }
 
 /*
@@ -96,7 +96,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
 {
 	WARN_ON_ONCE(!irqs_disabled());
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
+	if (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED) {
 		fpsimd_bind_state_to_cpu(vcpu_fp_regs(vcpu),
 					 vcpu->arch.sve_state,
 					 vcpu->arch.sve_max_vl);
@@ -120,7 +120,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 
 	local_irq_save(flags);
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
+	if (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED) {
 		if (guest_has_sve) {
 			__vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
 
@@ -139,14 +139,14 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 		 * for EL0.  To avoid spurious traps, restore the trap state
 		 * seen by kvm_arch_vcpu_load_fp():
 		 */
-		if (vcpu->arch.flags & KVM_ARM64_HOST_SVE_ENABLED)
+		if (vcpu_flags(vcpu) & KVM_ARM64_HOST_SVE_ENABLED)
 			sysreg_clear_set(CPACR_EL1, 0, CPACR_EL1_ZEN_EL0EN);
 		else
 			sysreg_clear_set(CPACR_EL1, CPACR_EL1_ZEN_EL0EN, 0);
 	}
 
 	update_thread_flag(TIF_SVE,
-			   vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE);
+			   vcpu_flags(vcpu) & KVM_ARM64_HOST_SVE_IN_USE);
 
 	local_irq_restore(flags);
 }
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index c4429307a164..fc63e55db2f0 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -782,7 +782,7 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
 int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
 			      struct kvm_vcpu_events *events)
 {
-	events->exception.serror_pending = !!(vcpu->arch.hcr_el2 & HCR_VSE);
+	events->exception.serror_pending = !!(vcpu_hcr_el2(vcpu) & HCR_VSE);
 	events->exception.serror_has_esr = cpus_have_const_cap(ARM64_HAS_RAS_EXTN);
 
 	if (events->exception.serror_pending && events->exception.serror_has_esr)
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 6f48336b1d86..22e9f03fe901 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -126,7 +126,7 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu)
 
 	switch (ESR_ELx_EC(esr)) {
 	case ESR_ELx_EC_WATCHPT_LOW:
-		run->debug.arch.far = vcpu->arch.fault.far_el2;
+		run->debug.arch.far = vcpu_fault(vcpu).far_el2;
 		fallthrough;
 	case ESR_ELx_EC_SOFTSTP_LOW:
 	case ESR_ELx_EC_BREAKPT_LOW:
diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index e23b9cedb043..4514e345c26f 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -328,7 +328,7 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (vcpu_el1_is_32bit(vcpu)) {
-		switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) {
+		switch (vcpu_flags(vcpu) & KVM_ARM64_EXCEPT_MASK) {
 		case KVM_ARM64_EXCEPT_AA32_UND:
 			enter_exception32(vcpu_ctxt, PSR_AA32_MODE_UND, 4);
 			break;
@@ -343,7 +343,7 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 			break;
 		}
 	} else {
-		switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) {
+		switch (vcpu_flags(vcpu) & KVM_ARM64_EXCEPT_MASK) {
 		case (KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
 		      KVM_ARM64_EXCEPT_AA64_EL1):
 			enter_exception64(vcpu_ctxt, PSR_MODE_EL1h,
@@ -367,12 +367,12 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 void __kvm_adjust_pc(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	if (vcpu->arch.flags & KVM_ARM64_PENDING_EXCEPTION) {
+	if (vcpu_flags(vcpu) & KVM_ARM64_PENDING_EXCEPTION) {
 		kvm_inject_exception(vcpu);
-		vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION |
+		vcpu_flags(vcpu) &= ~(KVM_ARM64_PENDING_EXCEPTION |
 				      KVM_ARM64_EXCEPT_MASK);
-	} else 	if (vcpu->arch.flags & KVM_ARM64_INCREMENT_PC) {
+	} else 	if (vcpu_flags(vcpu) & KVM_ARM64_INCREMENT_PC) {
 		kvm_skip_instr(vcpu);
-		vcpu->arch.flags &= ~KVM_ARM64_INCREMENT_PC;
+		vcpu_flags(vcpu) &= ~KVM_ARM64_INCREMENT_PC;
 	}
 }
diff --git a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
index 4ebe9f558f3a..55735782d7e3 100644
--- a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
@@ -132,7 +132,7 @@ static inline void __debug_switch_to_guest_common(struct kvm_vcpu *vcpu)
 	struct kvm_guest_debug_arch *host_dbg;
 	struct kvm_guest_debug_arch *guest_dbg;
 
-	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
+	if (!(vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY))
 		return;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
@@ -151,7 +151,7 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu)
 	struct kvm_guest_debug_arch *host_dbg;
 	struct kvm_guest_debug_arch *guest_dbg;
 
-	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
+	if (!(vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY))
 		return;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
@@ -162,7 +162,7 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu)
 	__debug_save_state(guest_dbg, guest_ctxt);
 	__debug_restore_state(host_dbg, host_ctxt);
 
-	vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
+	vcpu_flags(vcpu) &= ~KVM_ARM64_DEBUG_DIRTY;
 }
 
 #endif /* __ARM64_KVM_HYP_DEBUG_SR_H__ */
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 41c553a7b5dd..370a8fb827be 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -45,10 +45,10 @@ static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
 	 */
 	if (!system_supports_fpsimd() ||
 	    vcpu->arch.host_thread_info->flags & _TIF_FOREIGN_FPSTATE)
-		vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
+		vcpu_flags(vcpu) &= ~(KVM_ARM64_FP_ENABLED |
 				      KVM_ARM64_FP_HOST);
 
-	return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED);
+	return !!(vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED);
 }
 
 /* Save the 32-bit only FPSIMD system register state */
@@ -94,7 +94,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
 		write_sysreg(0, pmselr_el0);
 		write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
 	}
-	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
+	write_sysreg(vcpu_mdcr_el2(vcpu), mdcr_el2);
 }
 
 static inline void __deactivate_traps_common(void)
@@ -106,7 +106,7 @@ static inline void __deactivate_traps_common(void)
 
 static inline void ___activate_traps(struct kvm_vcpu *vcpu)
 {
-	u64 hcr = vcpu->arch.hcr_el2;
+	u64 hcr = vcpu_hcr_el2(vcpu);
 
 	if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM))
 		hcr |= HCR_TVM;
@@ -114,7 +114,7 @@ static inline void ___activate_traps(struct kvm_vcpu *vcpu)
 	write_sysreg(hcr, hcr_el2);
 
 	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE))
-		write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2);
+		write_sysreg_s(vcpu_vsesr_el2(vcpu), SYS_VSESR_EL2);
 }
 
 static inline void ___deactivate_traps(struct kvm_vcpu *vcpu)
@@ -125,9 +125,9 @@ static inline void ___deactivate_traps(struct kvm_vcpu *vcpu)
 	 * the crucial bit is "On taking a vSError interrupt,
 	 * HCR_EL2.VSE is cleared to 0."
 	 */
-	if (vcpu->arch.hcr_el2 & HCR_VSE) {
-		vcpu->arch.hcr_el2 &= ~HCR_VSE;
-		vcpu->arch.hcr_el2 |= read_sysreg(hcr_el2) & HCR_VSE;
+	if (vcpu_hcr_el2(vcpu) & HCR_VSE) {
+		vcpu_hcr_el2(vcpu) &= ~HCR_VSE;
+		vcpu_hcr_el2(vcpu) |= read_sysreg(hcr_el2) & HCR_VSE;
 	}
 }
 
@@ -196,13 +196,13 @@ static inline bool __populate_fault_info(struct kvm_vcpu *vcpu)
 	u8 ec;
 	u64 esr;
 
-	esr = vcpu->arch.fault.esr_el2;
+	esr = vcpu_fault(vcpu).esr_el2;
 	ec = ESR_ELx_EC(esr);
 
 	if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW)
 		return true;
 
-	return __get_fault_info(esr, &vcpu->arch.fault);
+	return __get_fault_info(esr, &vcpu_fault(vcpu));
 }
 
 static inline void __hyp_sve_save_host(struct kvm_vcpu *vcpu)
@@ -237,7 +237,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 
 	if (system_supports_sve()) {
 		sve_guest = vcpu_has_sve(vcpu);
-		sve_host = vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE;
+		sve_host = vcpu_flags(vcpu) & KVM_ARM64_HOST_SVE_IN_USE;
 	} else {
 		sve_guest = false;
 		sve_host = false;
@@ -268,13 +268,13 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 	}
 	isb();
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_HOST) {
+	if (vcpu_flags(vcpu) & KVM_ARM64_FP_HOST) {
 		if (sve_host)
 			__hyp_sve_save_host(vcpu);
 		else
 			__fpsimd_save_state(vcpu->arch.host_fpsimd_state);
 
-		vcpu->arch.flags &= ~KVM_ARM64_FP_HOST;
+		vcpu_flags(vcpu) &= ~KVM_ARM64_FP_HOST;
 	}
 
 	if (sve_guest)
@@ -287,7 +287,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 		write_sysreg(ctxt_sys_reg(vcpu_ctxt, FPEXC32_EL2),
 			     fpexc32_el2);
 
-	vcpu->arch.flags |= KVM_ARM64_FP_ENABLED;
+	vcpu_flags(vcpu) |= KVM_ARM64_FP_ENABLED;
 
 	return true;
 }
@@ -303,7 +303,7 @@ static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu)
 	 * The normal sysreg handling code expects to see the traps,
 	 * let's not do anything here.
 	 */
-	if (vcpu->arch.hcr_el2 & HCR_TVM)
+	if (vcpu_hcr_el2(vcpu) & HCR_TVM)
 		return false;
 
 	switch (sysreg) {
@@ -421,7 +421,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
-		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
+		vcpu_fault(vcpu).esr_el2 = read_sysreg_el2(SYS_ESR);
 
 	if (ARM_SERROR_PENDING(*exit_code)) {
 		u8 esr_ec = kvm_vcpu_trap_get_class(vcpu);
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index c2668b85b67e..d49985e825cd 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -170,7 +170,7 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu)
 	ctxt_sys_reg(vcpu_ctxt, DACR32_EL2) = read_sysreg(dacr32_el2);
 	ctxt_sys_reg(vcpu_ctxt, IFSR32_EL2) = read_sysreg(ifsr32_el2);
 
-	if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
+	if (has_vhe() || vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY)
 		ctxt_sys_reg(vcpu_ctxt, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2);
 }
 
@@ -188,7 +188,7 @@ static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
 	write_sysreg(ctxt_sys_reg(vcpu_ctxt, DACR32_EL2), dacr32_el2);
 	write_sysreg(ctxt_sys_reg(vcpu_ctxt, IFSR32_EL2), ifsr32_el2);
 
-	if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
+	if (has_vhe() || vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY)
 		write_sysreg(ctxt_sys_reg(vcpu_ctxt, DBGVCR32_EL2),
 		             dbgvcr32_el2);
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 7d3f25868cae..934737478d64 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -84,10 +84,10 @@ static void __debug_restore_trace(u64 trfcr_el1)
 void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu)
 {
 	/* Disable and flush SPE data generation */
-	if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE)
+	if (vcpu_flags(vcpu) & KVM_ARM64_DEBUG_STATE_SAVE_SPE)
 		__debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1);
 	/* Disable and flush Self-Hosted Trace generation */
-	if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE)
+	if (vcpu_flags(vcpu) & KVM_ARM64_DEBUG_STATE_SAVE_TRBE)
 		__debug_save_trace(&vcpu->arch.host_debug_state.trfcr_el1);
 }
 
@@ -98,9 +98,9 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
 
 void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu)
 {
-	if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE)
+	if (vcpu_flags(vcpu) & KVM_ARM64_DEBUG_STATE_SAVE_SPE)
 		__debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1);
-	if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE)
+	if (vcpu_flags(vcpu) & KVM_ARM64_DEBUG_STATE_SAVE_TRBE)
 		__debug_restore_trace(vcpu->arch.host_debug_state.trfcr_el1);
 }
 
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index d5780acab6c2..ac7529305717 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -104,7 +104,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 	write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
 
 	cptr = CPTR_EL2_DEFAULT;
-	if (vcpu_has_sve(vcpu) && (vcpu->arch.flags & KVM_ARM64_FP_ENABLED))
+	if (vcpu_has_sve(vcpu) && (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED))
 		cptr |= CPTR_EL2_TZ;
 
 	write_sysreg(cptr, cptr_el2);
@@ -241,7 +241,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	__sysreg_restore_state_nvhe(host_ctxt);
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
+	if (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED)
 		__fpsimd_save_fpexc32(vcpu);
 
 	__debug_switch_to_host(vcpu);
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index c2e443202f8e..0113d442bc95 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -153,7 +153,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 
 	sysreg_restore_host_state_vhe(host_ctxt);
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
+	if (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED)
 		__fpsimd_save_fpexc32(vcpu);
 
 	__debug_switch_to_host(vcpu);
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index b47df73e98d7..867e8856bdcd 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -20,7 +20,7 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
 	bool is_aarch32 = vcpu_mode_is_32bit(vcpu);
 	u32 esr = 0;
 
-	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1		|
+	vcpu_flags(vcpu) |= (KVM_ARM64_EXCEPT_AA64_EL1		|
 			     KVM_ARM64_EXCEPT_AA64_ELx_SYNC	|
 			     KVM_ARM64_PENDING_EXCEPTION);
 
@@ -52,7 +52,7 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 {
 	u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
 
-	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1		|
+	vcpu_flags(vcpu) |= (KVM_ARM64_EXCEPT_AA64_EL1		|
 			     KVM_ARM64_EXCEPT_AA64_ELx_SYNC	|
 			     KVM_ARM64_PENDING_EXCEPTION);
 
@@ -73,7 +73,7 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 
 static void inject_undef32(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA32_UND |
+	vcpu_flags(vcpu) |= (KVM_ARM64_EXCEPT_AA32_UND |
 			     KVM_ARM64_PENDING_EXCEPTION);
 }
 
@@ -97,13 +97,13 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt, u32 addr)
 	far = vcpu_read_sys_reg(vcpu, FAR_EL1);
 
 	if (is_pabt) {
-		vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA32_IABT |
+		vcpu_flags(vcpu) |= (KVM_ARM64_EXCEPT_AA32_IABT |
 				     KVM_ARM64_PENDING_EXCEPTION);
 		far &= GENMASK(31, 0);
 		far |= (u64)addr << 32;
 		vcpu_write_sys_reg(vcpu, fsr, IFSR32_EL2);
 	} else { /* !iabt */
-		vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA32_DABT |
+		vcpu_flags(vcpu) |= (KVM_ARM64_EXCEPT_AA32_DABT |
 				     KVM_ARM64_PENDING_EXCEPTION);
 		far &= GENMASK(63, 32);
 		far |= addr;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index ab1ef5313a3e..f94b5b07d2cf 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -81,7 +81,7 @@ static int kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu)
 	 * KVM_REG_ARM64_SVE_VLS.  Allocation is deferred until
 	 * kvm_arm_vcpu_finalize(), which freezes the configuration.
 	 */
-	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_SVE;
+	vcpu_flags(vcpu) |= KVM_ARM64_GUEST_HAS_SVE;
 
 	return 0;
 }
@@ -111,7 +111,7 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu)
 		return -ENOMEM;
 
 	vcpu->arch.sve_state = buf;
-	vcpu->arch.flags |= KVM_ARM64_VCPU_SVE_FINALIZED;
+	vcpu_flags(vcpu) |= KVM_ARM64_VCPU_SVE_FINALIZED;
 	return 0;
 }
 
@@ -162,7 +162,7 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
 	    !system_has_full_ptr_auth())
 		return -EINVAL;
 
-	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
+	vcpu_flags(vcpu) |= KVM_ARM64_GUEST_HAS_PTRAUTH;
 	return 0;
 }
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 1a7968ad078c..8fb57e83e9ec 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -348,7 +348,7 @@ static bool trap_debug_regs(struct kvm_vcpu *vcpu,
 {
 	if (p->is_write) {
 		vcpu_write_sys_reg(vcpu, p->regval, r->reg);
-		vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
+		vcpu_flags(vcpu) |= KVM_ARM64_DEBUG_DIRTY;
 	} else {
 		p->regval = vcpu_read_sys_reg(vcpu, r->reg);
 	}
@@ -381,7 +381,7 @@ static void reg_to_dbg(struct kvm_vcpu *vcpu,
 	val |= (p->regval & (mask >> shift)) << shift;
 	*dbg_reg = val;
 
-	vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
+	vcpu_flags(vcpu) |= KVM_ARM64_DEBUG_DIRTY;
 }
 
 static void dbg_to_reg(struct kvm_vcpu *vcpu,
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 10/30] KVM: arm64: Add accessors for hypervisor state in kvm_vcpu_arch
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (8 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 09/30] KVM: arm64: COCCI: vcpu_hyp_accessors.cocci: use accessors for hypervisor state vcpu variables Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-27 16:10   ` Quentin Perret
  2021-09-24 12:53 ` [RFC PATCH v1 11/30] KVM: arm64: create and use a new vcpu_hyp_state struct Fuad Tabba
                   ` (19 subsequent siblings)
  29 siblings, 1 reply; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Some of the members of vcpu_arch represent state that belongs to
the hypervisor. Future patches will factor these out into their
own structure. To simplify the refactoring and make it easier to
read, add accessors for the members of kvm_vcpu_arch that
represent the hypervisor state.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_emulate.h | 182 ++++++++++++++++++++++-----
 arch/arm64/include/asm/kvm_host.h    |  38 ++++--
 2 files changed, 181 insertions(+), 39 deletions(-)
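
As a sketch of where this is heading (hypothetical helper names, not
part of this patch): once the series lands, a helper that only inspects
fault state can be written against the hyp_state accessors, with a thin
vcpu wrapper kept for existing callers:

/* Illustrative only: the example_* helpers are not part of the series. */
static bool example_is_write_fault(const struct vcpu_hyp_state *vcpu_hyps)
{
	/* Same check as the vcpu-based helpers, scoped to hyp state. */
	return kvm_hyp_state_dabt_iswrite(vcpu_hyps) &&
	       !kvm_hyp_state_abt_iss1tw(vcpu_hyps);
}

static bool example_vcpu_is_write_fault(const struct kvm_vcpu *vcpu)
{
	return example_is_write_fault(&hyp_state(vcpu));
}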

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 7d09a9356d89..e095afeecd10 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -41,9 +41,14 @@ void kvm_inject_vabt(struct kvm_vcpu *vcpu);
 void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
 void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
 
+static __always_inline bool hyp_state_el1_is_32bit(struct vcpu_hyp_state *vcpu_hyps)
+{
+	return !(hyp_state_hcr_el2(vcpu_hyps) & HCR_RW);
+}
+
 static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
 {
-	return !(vcpu_hcr_el2(vcpu) & HCR_RW);
+	return hyp_state_el1_is_32bit(&hyp_state(vcpu));
 }
 
 static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
@@ -252,14 +257,19 @@ static inline bool vcpu_mode_priv(const struct kvm_vcpu *vcpu)
 	return mode != PSR_MODE_EL0t;
 }
 
+static __always_inline u32 kvm_hyp_state_get_esr(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return hyp_state_fault(vcpu_hyps).esr_el2;
+}
+
 static __always_inline u32 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu)
 {
-	return vcpu_fault(vcpu).esr_el2;
+	return kvm_hyp_state_get_esr(&hyp_state(vcpu));
 }
 
-static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
+static __always_inline u32 kvm_hyp_state_get_condition(const struct vcpu_hyp_state *vcpu_hyps)
 {
-	u32 esr = kvm_vcpu_get_esr(vcpu);
+	u32 esr = kvm_hyp_state_get_esr(vcpu_hyps);
 
 	if (esr & ESR_ELx_CV)
 		return (esr & ESR_ELx_COND_MASK) >> ESR_ELx_COND_SHIFT;
@@ -267,111 +277,216 @@ static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
 	return -1;
 }
 
+static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
+{
+	return kvm_hyp_state_get_condition(&hyp_state(vcpu));
+}
+
+static __always_inline phys_addr_t kvm_hyp_state_get_hfar(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return hyp_state_fault(vcpu_hyps).far_el2;
+}
+
 static __always_inline unsigned long kvm_vcpu_get_hfar(const struct kvm_vcpu *vcpu)
 {
-	return vcpu_fault(vcpu).far_el2;
+	return kvm_hyp_state_get_hfar(&hyp_state(vcpu));
+}
+
+static __always_inline phys_addr_t kvm_hyp_state_get_fault_ipa(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return ((phys_addr_t) hyp_state_fault(vcpu_hyps).hpfar_el2 & HPFAR_MASK) << 8;
 }
 
 static __always_inline phys_addr_t kvm_vcpu_get_fault_ipa(const struct kvm_vcpu *vcpu)
 {
-	return ((phys_addr_t) vcpu_fault(vcpu).hpfar_el2 & HPFAR_MASK) << 8;
+	return kvm_hyp_state_get_fault_ipa(&hyp_state(vcpu));
+}
+
+static __always_inline u32 kvm_hyp_state_get_disr(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return hyp_state_fault(vcpu_hyps).disr_el1;
 }
 
 static inline u64 kvm_vcpu_get_disr(const struct kvm_vcpu *vcpu)
 {
-	return vcpu_fault(vcpu).disr_el1;
+	return kvm_hyp_state_get_disr(&hyp_state(vcpu));
+}
+
+static __always_inline u32 kvm_hyp_state_get_imm(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_xVC_IMM_MASK;
 }
 
 static inline u32 kvm_vcpu_hvc_get_imm(const struct kvm_vcpu *vcpu)
 {
-	return kvm_vcpu_get_esr(vcpu) & ESR_ELx_xVC_IMM_MASK;
+	return kvm_hyp_state_get_imm(&hyp_state(vcpu));
+}
+
+static __always_inline u32 kvm_hyp_state_dabt_isvalid(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return !!(kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_ISV);
 }
 
 static __always_inline bool kvm_vcpu_dabt_isvalid(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_ISV);
+	return kvm_hyp_state_dabt_isvalid(&hyp_state(vcpu));
+}
+
+static __always_inline u32 kvm_hyp_state_iss_nisv_sanitized(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return kvm_hyp_state_get_esr(vcpu_hyps) & (ESR_ELx_CM | ESR_ELx_WNR | ESR_ELx_FSC);
 }
 
 static inline unsigned long kvm_vcpu_dabt_iss_nisv_sanitized(const struct kvm_vcpu *vcpu)
 {
-	return kvm_vcpu_get_esr(vcpu) & (ESR_ELx_CM | ESR_ELx_WNR | ESR_ELx_FSC);
+	return kvm_hyp_state_iss_nisv_sanitized(&hyp_state(vcpu));
+}
+
+static __always_inline u32 kvm_hyp_state_issext(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return !!(kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_SSE);
 }
 
 static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_SSE);
+	return kvm_hyp_state_issext(&hyp_state(vcpu));
+}
+
+static __always_inline u32 kvm_hyp_state_issf(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return !!(kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_SF);
 }
 
 static inline bool kvm_vcpu_dabt_issf(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_SF);
+	return kvm_hyp_state_issf(&hyp_state(vcpu));
+}
+
+static __always_inline phys_addr_t kvm_hyp_state_dabt_get_rd(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return (kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT;
 }
 
 static __always_inline int kvm_vcpu_dabt_get_rd(const struct kvm_vcpu *vcpu)
 {
-	return (kvm_vcpu_get_esr(vcpu) & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT;
+	return kvm_hyp_state_dabt_get_rd(&hyp_state(vcpu));
+}
+
+static __always_inline u32 kvm_hyp_state_abt_iss1tw(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return !!(kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_S1PTW);
 }
 
 static __always_inline bool kvm_vcpu_abt_iss1tw(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_S1PTW);
+	return kvm_hyp_state_abt_iss1tw(&hyp_state(vcpu));
 }
 
 /* Always check for S1PTW *before* using this. */
+static __always_inline u32 kvm_hyp_state_dabt_iswrite(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_WNR;
+}
+
 static __always_inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
 {
-	return kvm_vcpu_get_esr(vcpu) & ESR_ELx_WNR;
+	return kvm_hyp_state_dabt_iswrite(&hyp_state(vcpu));
+}
+
+static __always_inline u32 kvm_hyp_state_dabt_is_cm(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return !!(kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_CM);
 }
 
 static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_CM);
+	return kvm_hyp_state_dabt_is_cm(&hyp_state(vcpu));
+}
+
+static __always_inline phys_addr_t kvm_hyp_state_dabt_get_as(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return 1 << ((kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_SAS) >> ESR_ELx_SAS_SHIFT);
 }
 
 static __always_inline unsigned int kvm_vcpu_dabt_get_as(const struct kvm_vcpu *vcpu)
 {
-	return 1 << ((kvm_vcpu_get_esr(vcpu) & ESR_ELx_SAS) >> ESR_ELx_SAS_SHIFT);
+	return kvm_hyp_state_dabt_get_as(&hyp_state(vcpu));
 }
 
 /* This one is not specific to Data Abort */
+static __always_inline u32 kvm_hyp_state_trap_il_is32bit(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return !!(kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_IL);
+}
+
 static __always_inline bool kvm_vcpu_trap_il_is32bit(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_IL);
+	return kvm_hyp_state_trap_il_is32bit(&hyp_state(vcpu));
+}
+
+static __always_inline u32 kvm_hyp_state_trap_get_class(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return ESR_ELx_EC(kvm_hyp_state_get_esr(vcpu_hyps));
 }
 
 static __always_inline u8 kvm_vcpu_trap_get_class(const struct kvm_vcpu *vcpu)
 {
-	return ESR_ELx_EC(kvm_vcpu_get_esr(vcpu));
+	return kvm_hyp_state_trap_get_class(&hyp_state(vcpu));
+}
+
+static __always_inline u32 kvm_hyp_state_trap_is_iabt(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return kvm_hyp_state_trap_get_class(vcpu_hyps) == ESR_ELx_EC_IABT_LOW;
 }
 
 static inline bool kvm_vcpu_trap_is_iabt(const struct kvm_vcpu *vcpu)
 {
-	return kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_IABT_LOW;
+	return kvm_hyp_state_trap_is_iabt(&hyp_state(vcpu));
+}
+
+static __always_inline u32 kvm_hyp_state_trap_is_exec_fault(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return kvm_hyp_state_trap_is_iabt(vcpu_hyps) && !kvm_hyp_state_abt_iss1tw(vcpu_hyps);
 }
 
 static inline bool kvm_vcpu_trap_is_exec_fault(const struct kvm_vcpu *vcpu)
 {
-	return kvm_vcpu_trap_is_iabt(vcpu) && !kvm_vcpu_abt_iss1tw(vcpu);
+	return kvm_hyp_state_trap_is_exec_fault(&hyp_state(vcpu));
+}
+
+static __always_inline u32 kvm_hyp_state_trap_get_fault(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_FSC;
 }
 
 static __always_inline u8 kvm_vcpu_trap_get_fault(const struct kvm_vcpu *vcpu)
 {
-	return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC;
+	return kvm_hyp_state_trap_get_fault(&hyp_state(vcpu));
+}
+
+static __always_inline u32 kvm_hyp_state_trap_get_fault_type(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_FSC_TYPE;
 }
 
 static __always_inline u8 kvm_vcpu_trap_get_fault_type(const struct kvm_vcpu *vcpu)
 {
-	return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC_TYPE;
+	return kvm_hyp_state_trap_get_fault_type(&hyp_state(vcpu));
+}
+
+static __always_inline u32 kvm_hyp_state_trap_get_fault_level(const struct vcpu_hyp_state *vcpu_hyps)
+{
+	return kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_FSC_LEVEL;
 }
 
 static __always_inline u8 kvm_vcpu_trap_get_fault_level(const struct kvm_vcpu *vcpu)
 {
-	return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC_LEVEL;
+	return kvm_hyp_state_trap_get_fault_level(&hyp_state(vcpu));
 }
 
-static __always_inline bool kvm_vcpu_abt_issea(const struct kvm_vcpu *vcpu)
+static __always_inline u32 kvm_hyp_state_abt_issea(const struct vcpu_hyp_state *vcpu_hyps)
 {
-	switch (kvm_vcpu_trap_get_fault(vcpu)) {
+	switch (kvm_hyp_state_trap_get_fault(vcpu_hyps)) {
 	case FSC_SEA:
 	case FSC_SEA_TTW0:
 	case FSC_SEA_TTW1:
@@ -388,12 +503,23 @@ static __always_inline bool kvm_vcpu_abt_issea(const struct kvm_vcpu *vcpu)
 	}
 }
 
-static __always_inline int kvm_vcpu_sys_get_rt(struct kvm_vcpu *vcpu)
+static __always_inline bool kvm_vcpu_abt_issea(const struct kvm_vcpu *vcpu)
+{
+	return kvm_hyp_state_abt_issea(&hyp_state(vcpu));
+}
+
+static __always_inline u32 kvm_hyp_state_sys_get_rt(const struct vcpu_hyp_state *vcpu_hyps)
 {
-	u32 esr = kvm_vcpu_get_esr(vcpu);
+	u32 esr = kvm_hyp_state_get_esr(vcpu_hyps);
 	return ESR_ELx_SYS64_ISS_RT(esr);
 }
 
+
+static __always_inline int kvm_vcpu_sys_get_rt(struct kvm_vcpu *vcpu)
+{
+	return kvm_hyp_state_sys_get_rt(&hyp_state(vcpu));
+}
+
 static inline bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
 {
 	if (kvm_vcpu_abt_iss1tw(vcpu))
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 280ee23dfc5a..3e5c173d2360 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -373,12 +373,21 @@ struct kvm_vcpu_arch {
 	} steal;
 };
 
+#define hyp_state(vcpu) ((vcpu)->arch)
+
+/* Accessors for hyp_state parameters related to the hypervisor state. */
+#define hyp_state_hcr_el2(hyps) (hyps)->hcr_el2
+#define hyp_state_mdcr_el2(hyps) (hyps)->mdcr_el2
+#define hyp_state_vsesr_el2(hyps) (hyps)->vsesr_el2
+#define hyp_state_fault(hyps) (hyps)->fault
+#define hyp_state_flags(hyps) (hyps)->flags
+
 /* Accessors for vcpu parameters related to the hypervisor state. */
-#define vcpu_hcr_el2(vcpu) (vcpu)->arch.hcr_el2
-#define vcpu_mdcr_el2(vcpu) (vcpu)->arch.mdcr_el2
-#define vcpu_vsesr_el2(vcpu) (vcpu)->arch.vsesr_el2
-#define vcpu_fault(vcpu) (vcpu)->arch.fault
-#define vcpu_flags(vcpu) (vcpu)->arch.flags
+#define vcpu_hcr_el2(vcpu) hyp_state_hcr_el2(&hyp_state(vcpu))
+#define vcpu_mdcr_el2(vcpu) hyp_state_mdcr_el2(&hyp_state(vcpu))
+#define vcpu_vsesr_el2(vcpu) hyp_state_vsesr_el2(&hyp_state(vcpu))
+#define vcpu_fault(vcpu) hyp_state_fault(&hyp_state(vcpu))
+#define vcpu_flags(vcpu) hyp_state_flags(&hyp_state(vcpu))
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
@@ -441,18 +450,22 @@ struct kvm_vcpu_arch {
  */
 #define KVM_ARM64_INCREMENT_PC		(1 << 9) /* Increment PC */
 
-#define vcpu_has_sve(vcpu) (system_supports_sve() &&			\
-			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
+#define hyp_state_has_sve(hyps) (system_supports_sve() &&		\
+			    (hyp_state_flags((hyps)) & KVM_ARM64_GUEST_HAS_SVE))
+
+#define vcpu_has_sve(vcpu) hyp_state_has_sve(&hyp_state(vcpu))
 
 #ifdef CONFIG_ARM64_PTR_AUTH
-#define vcpu_has_ptrauth(vcpu)						\
+#define hyp_state_has_ptrauth(hyps)					\
 	((cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH) ||		\
 	  cpus_have_final_cap(ARM64_HAS_GENERIC_AUTH)) &&		\
-	 (vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
+	 hyp_state_flags(hyps) & KVM_ARM64_GUEST_HAS_PTRAUTH)
 #else
-#define vcpu_has_ptrauth(vcpu)		false
+#define hyp_state_has_ptrauth(hyps)		false
 #endif
 
+#define vcpu_has_ptrauth(vcpu)	hyp_state_has_ptrauth(&hyp_state(vcpu))
+
 #define vcpu_ctxt(vcpu) ((vcpu)->arch.ctxt)
 
 /* VCPU Context accessors (direct) */
@@ -794,8 +807,11 @@ static inline bool kvm_vm_is_protected(struct kvm *kvm)
 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 
+#define kvm_arm_hyp_state_sve_finalized(hyps) \
+	(hyp_state_flags((hyps)) & KVM_ARM64_VCPU_SVE_FINALIZED)
+
 #define kvm_arm_vcpu_sve_finalized(vcpu) \
-	((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED)
+	kvm_arm_hyp_state_sve_finalized(&hyp_state(vcpu))
 
 #define kvm_vcpu_has_pmu(vcpu)					\
 	(test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 11/30] KVM: arm64: create and use a new vcpu_hyp_state struct
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (9 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 10/30] KVM: arm64: Add accessors for hypervisor state in kvm_vcpu_arch Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-27 16:32   ` Quentin Perret
  2021-09-24 12:53 ` [RFC PATCH v1 12/30] KVM: arm64: COCCI: add_hypstate.cocci use_hypstate.cocci: Reduce scope of functions to hyp_state Fuad Tabba
                   ` (18 subsequent siblings)
  29 siblings, 1 reply; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Create a struct, vcpu_hyp_state, that groups the hypervisor-related
fields currently scattered across vcpu_arch. Future patches use it
to narrow the scope of functions from the whole vcpu to only the
state they actually need.

Embed an instance of this struct in vcpu_arch, update the accessors
to point at the new fields, and remove the old fields from
vcpu_arch.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_host.h | 35 ++++++++++++++++++-------------
 arch/arm64/kernel/asm-offsets.c   |  2 +-
 2 files changed, 21 insertions(+), 16 deletions(-)
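
To illustrate what the new grouping buys (hypothetical helper, not part
of this patch), code that only touches hyp-owned fields can now be
handed a pointer to the embedded struct instead of the whole vcpu:

/* Illustrative only: not part of the series. */
static void example_clear_pending_vserror(struct kvm_vcpu *vcpu)
{
	/* &vcpu->arch.hyp_state after this patch */
	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);

	hyp_state_hcr_el2(vcpu_hyps) &= ~HCR_VSE;
	hyp_state_vsesr_el2(vcpu_hyps) = 0;
}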

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 3e5c173d2360..dc4b5e133d86 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -269,27 +269,35 @@ struct vcpu_reset_state {
 	bool		reset;
 };
 
+/* Holds the hyp-relevant data of a vcpu. */
+struct vcpu_hyp_state {
+	/* HYP configuration */
+	u64 hcr_el2;
+	u32 mdcr_el2;
+
+	/* Virtual SError ESR to restore when HCR_EL2.VSE is set */
+	u64 vsesr_el2;
+
+	/* Exception Information */
+	struct kvm_vcpu_fault_info fault;
+
+	/* Miscellaneous vcpu state flags */
+	u64 flags;
+};
+
 struct kvm_vcpu_arch {
 	struct kvm_cpu_context ctxt;
 	void *sve_state;
 	unsigned int sve_max_vl;
 
+	struct vcpu_hyp_state hyp_state;
+
 	/* Stage 2 paging state used by the hardware on next switch */
 	struct kvm_s2_mmu *hw_mmu;
 
-	/* HYP configuration */
-	u64 hcr_el2;
-	u32 mdcr_el2;
-
-	/* Exception Information */
-	struct kvm_vcpu_fault_info fault;
-
 	/* State of various workarounds, see kvm_asm.h for bit assignment */
 	u64 workaround_flags;
 
-	/* Miscellaneous vcpu state flags */
-	u64 flags;
-
 	/*
 	 * We maintain more than a single set of debug registers to support
 	 * debugging the guest from the host and to maintain separate host and
@@ -356,9 +364,6 @@ struct kvm_vcpu_arch {
 	/* Detect first run of a vcpu */
 	bool has_run_once;
 
-	/* Virtual SError ESR to restore when HCR_EL2.VSE is set */
-	u64 vsesr_el2;
-
 	/* Additional reset state */
 	struct vcpu_reset_state	reset_state;
 
@@ -373,7 +378,7 @@ struct kvm_vcpu_arch {
 	} steal;
 };
 
-#define hyp_state(vcpu) ((vcpu)->arch)
+#define hyp_state(vcpu) ((vcpu)->arch.hyp_state)
 
 /* Accessors for hyp_state parameters related to the hypervisor state. */
 #define hyp_state_hcr_el2(hyps) (hyps)->hcr_el2
@@ -633,7 +638,7 @@ void kvm_arm_halt_guest(struct kvm *kvm);
 void kvm_arm_resume_guest(struct kvm *kvm);
 
 #ifndef __KVM_NVHE_HYPERVISOR__
-#define kvm_call_hyp_nvhe(f, ...)						\
+#define kvm_call_hyp_nvhe(f, ...)					\
 	({								\
 		struct arm_smccc_res res;				\
 									\
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index c2cc3a2813e6..1776efc3cc9d 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -107,7 +107,7 @@ int main(void)
   BLANK();
 #ifdef CONFIG_KVM
   DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
-  DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
+  DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.hyp_state.fault.disr_el1));
   DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
   DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_cpu_context, regs));
   DEFINE(CPU_APIAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 12/30] KVM: arm64: COCCI: add_hypstate.cocci use_hypstate.cocci: Reduce scope of functions to hyp_state
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (10 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 11/30] KVM: arm64: create and use a new vcpu_hyp_state struct Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-27 16:40   ` Quentin Perret
  2021-09-24 12:53 ` [RFC PATCH v1 13/30] KVM: arm64: change function parameters to use kvm_cpu_ctxt and hyp_state Fuad Tabba
                   ` (17 subsequent siblings)
  29 siblings, 1 reply; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Many functions do not need access to the whole vcpu structure, only
to its hyp_state. Reduce their scope accordingly.

This applies the semantic patches with the following commands:
FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]" ! -name "debug-sr*") arch/arm64/include/asm/kvm_hyp.h"
spatch --sp-file cocci_refactor/add_hypstate.cocci $FILES --in-place
spatch --sp-file cocci_refactor/use_hypstate.cocci $FILES --in-place

This patch may introduce local variables that are unused for now;
they are removed at the end of the series.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_hyp.h           |  2 +-
 arch/arm64/kvm/hyp/aarch32.c               |  2 +
 arch/arm64/kvm/hyp/exception.c             | 19 +++++---
 arch/arm64/kvm/hyp/include/hyp/adjust_pc.h |  2 +
 arch/arm64/kvm/hyp/include/hyp/switch.h    | 54 +++++++++++++---------
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h |  6 ++-
 arch/arm64/kvm/hyp/nvhe/switch.c           | 21 +++++----
 arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c   |  1 +
 arch/arm64/kvm/hyp/vgic-v3-sr.c            | 29 ++++++++++++
 arch/arm64/kvm/hyp/vhe/switch.c            | 25 +++++-----
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c         |  4 +-
 11 files changed, 112 insertions(+), 53 deletions(-)
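
The effect of the two semantic patches on a typical function body looks
roughly like this (hypothetical function, shown only to illustrate the
transformation; presumably add_hypstate.cocci introduces the local and
use_hypstate.cocci switches the accessors over to it):

/* before */
static void example_save_debug(struct kvm_vcpu *vcpu)
{
	if (vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY) {
		/* save debug registers */
	}
}

/* after */
static void example_save_debug(struct kvm_vcpu *vcpu)
{
	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);

	if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_DEBUG_DIRTY) {
		/* save debug registers */
	}
}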

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 2e2b60a1b6c7..2737e05a16b2 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -94,7 +94,7 @@ void __sve_save_state(void *sve_pffr, u32 *fpsr);
 void __sve_restore_state(void *sve_pffr, u32 *fpsr);
 
 #ifndef __KVM_NVHE_HYPERVISOR__
-void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
+void activate_traps_vhe_load(struct vcpu_hyp_state *vcpu_hyps);
 void deactivate_traps_vhe_put(void);
 #endif
 
diff --git a/arch/arm64/kvm/hyp/aarch32.c b/arch/arm64/kvm/hyp/aarch32.c
index 27ebfff023ff..2d45e13d1b12 100644
--- a/arch/arm64/kvm/hyp/aarch32.c
+++ b/arch/arm64/kvm/hyp/aarch32.c
@@ -46,6 +46,7 @@ static const unsigned short cc_map[16] = {
  */
 bool kvm_condition_valid32(const struct kvm_vcpu *vcpu)
 {
+	const struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	unsigned long cpsr;
 	u32 cpsr_cond;
@@ -126,6 +127,7 @@ static void kvm_adjust_itstate(struct kvm_cpu_context *vcpu_ctxt)
  */
 void kvm_skip_instr32(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 pc = *ctxt_pc(vcpu_ctxt);
 	bool is_thumb;
diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index 4514e345c26f..d4c2905b595d 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -59,26 +59,31 @@ static void __ctxt_write_spsr_und(struct kvm_cpu_context *vcpu_ctxt, u64 val)
 
 static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
 {
+	const struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	return __ctxt_read_sys_reg(&vcpu_ctxt(vcpu), reg);
 }
 
 static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	__ctxt_write_sys_reg(&vcpu_ctxt(vcpu), val, reg);
 }
 
 static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	__ctxt_write_spsr(&vcpu_ctxt(vcpu), val);
 }
 
 static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	__ctxt_write_spsr_abt(&vcpu_ctxt(vcpu), val);
 }
 
 static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	__ctxt_write_spsr_und(&vcpu_ctxt(vcpu), val);
 }
 
@@ -326,9 +331,10 @@ static void enter_exception32(struct kvm_cpu_context *vcpu_ctxt, u32 mode,
 
 static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (vcpu_el1_is_32bit(vcpu)) {
-		switch (vcpu_flags(vcpu) & KVM_ARM64_EXCEPT_MASK) {
+		switch (hyp_state_flags(vcpu_hyps) & KVM_ARM64_EXCEPT_MASK) {
 		case KVM_ARM64_EXCEPT_AA32_UND:
 			enter_exception32(vcpu_ctxt, PSR_AA32_MODE_UND, 4);
 			break;
@@ -343,7 +349,7 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 			break;
 		}
 	} else {
-		switch (vcpu_flags(vcpu) & KVM_ARM64_EXCEPT_MASK) {
+		switch (hyp_state_flags(vcpu_hyps) & KVM_ARM64_EXCEPT_MASK) {
 		case (KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
 		      KVM_ARM64_EXCEPT_AA64_EL1):
 			enter_exception64(vcpu_ctxt, PSR_MODE_EL1h,
@@ -366,13 +372,14 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
  */
 void __kvm_adjust_pc(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	if (vcpu_flags(vcpu) & KVM_ARM64_PENDING_EXCEPTION) {
+	if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_PENDING_EXCEPTION) {
 		kvm_inject_exception(vcpu);
-		vcpu_flags(vcpu) &= ~(KVM_ARM64_PENDING_EXCEPTION |
+		hyp_state_flags(vcpu_hyps) &= ~(KVM_ARM64_PENDING_EXCEPTION |
 				      KVM_ARM64_EXCEPT_MASK);
-	} else 	if (vcpu_flags(vcpu) & KVM_ARM64_INCREMENT_PC) {
+	} else 	if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_INCREMENT_PC) {
 		kvm_skip_instr(vcpu);
-		vcpu_flags(vcpu) &= ~KVM_ARM64_INCREMENT_PC;
+		hyp_state_flags(vcpu_hyps) &= ~KVM_ARM64_INCREMENT_PC;
 	}
 }
diff --git a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h
index 20dde9dbc11b..9bbe452a461a 100644
--- a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h
+++ b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h
@@ -15,6 +15,7 @@
 
 static inline void kvm_skip_instr(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (ctxt_mode_is_32bit(vcpu_ctxt)) {
 		kvm_skip_instr32(vcpu);
@@ -33,6 +34,7 @@ static inline void kvm_skip_instr(struct kvm_vcpu *vcpu)
  */
 static inline void __kvm_skip_instr(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	*ctxt_pc(vcpu_ctxt) = read_sysreg_el2(SYS_ELR);
 	ctxt_gp_regs(vcpu_ctxt)->pstate = read_sysreg_el2(SYS_SPSR);
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 370a8fb827be..5ee8aac86fdc 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -36,6 +36,7 @@ extern struct exception_table_entry __stop___kvm_ex_table;
 /* Check whether the FP regs were dirtied while in the host-side run loop: */
 static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	/*
 	 * When the system doesn't support FP/SIMD, we cannot rely on
 	 * the _TIF_FOREIGN_FPSTATE flag. However, we always inject an
@@ -45,15 +46,16 @@ static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
 	 */
 	if (!system_supports_fpsimd() ||
 	    vcpu->arch.host_thread_info->flags & _TIF_FOREIGN_FPSTATE)
-		vcpu_flags(vcpu) &= ~(KVM_ARM64_FP_ENABLED |
+		hyp_state_flags(vcpu_hyps) &= ~(KVM_ARM64_FP_ENABLED |
 				      KVM_ARM64_FP_HOST);
 
-	return !!(vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED);
+	return !!(hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED);
 }
 
 /* Save the 32-bit only FPSIMD system register state */
 static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
@@ -63,6 +65,7 @@ static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu)
 
 static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	/*
 	 * We are about to set CPTR_EL2.TFP to trap all floating point
@@ -79,7 +82,7 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
 	}
 }
 
-static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
+static inline void __activate_traps_common(struct vcpu_hyp_state *vcpu_hyps)
 {
 	/* Trap on AArch32 cp15 c15 (impdef sysregs) accesses (EL1 or EL0) */
 	write_sysreg(1 << 15, hstr_el2);
@@ -94,7 +97,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
 		write_sysreg(0, pmselr_el0);
 		write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
 	}
-	write_sysreg(vcpu_mdcr_el2(vcpu), mdcr_el2);
+	write_sysreg(hyp_state_mdcr_el2(vcpu_hyps), mdcr_el2);
 }
 
 static inline void __deactivate_traps_common(void)
@@ -104,9 +107,9 @@ static inline void __deactivate_traps_common(void)
 		write_sysreg(0, pmuserenr_el0);
 }
 
-static inline void ___activate_traps(struct kvm_vcpu *vcpu)
+static inline void ___activate_traps(struct vcpu_hyp_state *vcpu_hyps)
 {
-	u64 hcr = vcpu_hcr_el2(vcpu);
+	u64 hcr = hyp_state_hcr_el2(vcpu_hyps);
 
 	if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM))
 		hcr |= HCR_TVM;
@@ -114,10 +117,10 @@ static inline void ___activate_traps(struct kvm_vcpu *vcpu)
 	write_sysreg(hcr, hcr_el2);
 
 	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE))
-		write_sysreg_s(vcpu_vsesr_el2(vcpu), SYS_VSESR_EL2);
+		write_sysreg_s(hyp_state_vsesr_el2(vcpu_hyps), SYS_VSESR_EL2);
 }
 
-static inline void ___deactivate_traps(struct kvm_vcpu *vcpu)
+static inline void ___deactivate_traps(struct vcpu_hyp_state *vcpu_hyps)
 {
 	/*
 	 * If we pended a virtual abort, preserve it until it gets
@@ -125,9 +128,9 @@ static inline void ___deactivate_traps(struct kvm_vcpu *vcpu)
 	 * the crucial bit is "On taking a vSError interrupt,
 	 * HCR_EL2.VSE is cleared to 0."
 	 */
-	if (vcpu_hcr_el2(vcpu) & HCR_VSE) {
-		vcpu_hcr_el2(vcpu) &= ~HCR_VSE;
-		vcpu_hcr_el2(vcpu) |= read_sysreg(hcr_el2) & HCR_VSE;
+	if (hyp_state_hcr_el2(vcpu_hyps) & HCR_VSE) {
+		hyp_state_hcr_el2(vcpu_hyps) &= ~HCR_VSE;
+		hyp_state_hcr_el2(vcpu_hyps) |= read_sysreg(hcr_el2) & HCR_VSE;
 	}
 }
 
@@ -191,18 +194,18 @@ static inline bool __get_fault_info(u64 esr, struct kvm_vcpu_fault_info *fault)
 	return true;
 }
 
-static inline bool __populate_fault_info(struct kvm_vcpu *vcpu)
+static inline bool __populate_fault_info(struct vcpu_hyp_state *vcpu_hyps)
 {
 	u8 ec;
 	u64 esr;
 
-	esr = vcpu_fault(vcpu).esr_el2;
+	esr = hyp_state_fault(vcpu_hyps).esr_el2;
 	ec = ESR_ELx_EC(esr);
 
 	if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW)
 		return true;
 
-	return __get_fault_info(esr, &vcpu_fault(vcpu));
+	return __get_fault_info(esr, &hyp_state_fault(vcpu_hyps));
 }
 
 static inline void __hyp_sve_save_host(struct kvm_vcpu *vcpu)
@@ -217,6 +220,7 @@ static inline void __hyp_sve_save_host(struct kvm_vcpu *vcpu)
 
 static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
 	__sve_restore_state(vcpu_sve_pffr(vcpu),
@@ -227,6 +231,7 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 /* Check for an FPSIMD/SVE trap and handle as appropriate */
 static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	bool sve_guest, sve_host;
 	u8 esr_ec;
@@ -236,8 +241,8 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 		return false;
 
 	if (system_supports_sve()) {
-		sve_guest = vcpu_has_sve(vcpu);
-		sve_host = vcpu_flags(vcpu) & KVM_ARM64_HOST_SVE_IN_USE;
+		sve_guest = hyp_state_has_sve(vcpu_hyps);
+		sve_host = hyp_state_flags(vcpu_hyps) & KVM_ARM64_HOST_SVE_IN_USE;
 	} else {
 		sve_guest = false;
 		sve_host = false;
@@ -268,13 +273,13 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 	}
 	isb();
 
-	if (vcpu_flags(vcpu) & KVM_ARM64_FP_HOST) {
+	if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_HOST) {
 		if (sve_host)
 			__hyp_sve_save_host(vcpu);
 		else
 			__fpsimd_save_state(vcpu->arch.host_fpsimd_state);
 
-		vcpu_flags(vcpu) &= ~KVM_ARM64_FP_HOST;
+		hyp_state_flags(vcpu_hyps) &= ~KVM_ARM64_FP_HOST;
 	}
 
 	if (sve_guest)
@@ -287,13 +292,14 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 		write_sysreg(ctxt_sys_reg(vcpu_ctxt, FPEXC32_EL2),
 			     fpexc32_el2);
 
-	vcpu_flags(vcpu) |= KVM_ARM64_FP_ENABLED;
+	hyp_state_flags(vcpu_hyps) |= KVM_ARM64_FP_ENABLED;
 
 	return true;
 }
 
 static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
 	int rt = kvm_vcpu_sys_get_rt(vcpu);
@@ -303,7 +309,7 @@ static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu)
 	 * The normal sysreg handling code expects to see the traps,
 	 * let's not do anything here.
 	 */
-	if (vcpu_hcr_el2(vcpu) & HCR_TVM)
+	if (hyp_state_hcr_el2(vcpu_hyps) & HCR_TVM)
 		return false;
 
 	switch (sysreg) {
@@ -388,11 +394,12 @@ DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 
 static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm_cpu_context *ctxt;
 	u64 val;
 
-	if (!vcpu_has_ptrauth(vcpu) ||
+	if (!hyp_state_has_ptrauth(vcpu_hyps) ||
 	    !esr_is_ptrauth_trap(kvm_vcpu_get_esr(vcpu)))
 		return false;
 
@@ -419,9 +426,10 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
  */
 static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
-		vcpu_fault(vcpu).esr_el2 = read_sysreg_el2(SYS_ESR);
+		hyp_state_fault(vcpu_hyps).esr_el2 = read_sysreg_el2(SYS_ESR);
 
 	if (ARM_SERROR_PENDING(*exit_code)) {
 		u8 esr_ec = kvm_vcpu_trap_get_class(vcpu);
@@ -465,7 +473,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 	if (__hyp_handle_ptrauth(vcpu))
 		goto guest;
 
-	if (!__populate_fault_info(vcpu))
+	if (!__populate_fault_info(vcpu_hyps))
 		goto guest;
 
 	if (static_branch_unlikely(&vgic_v2_cpuif_trap)) {
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index d49985e825cd..7bc8b34b65b2 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -158,6 +158,7 @@ static inline void __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctx
 
 static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
@@ -170,12 +171,13 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu)
 	ctxt_sys_reg(vcpu_ctxt, DACR32_EL2) = read_sysreg(dacr32_el2);
 	ctxt_sys_reg(vcpu_ctxt, IFSR32_EL2) = read_sysreg(ifsr32_el2);
 
-	if (has_vhe() || vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY)
+	if (has_vhe() || hyp_state_flags(vcpu_hyps) & KVM_ARM64_DEBUG_DIRTY)
 		ctxt_sys_reg(vcpu_ctxt, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2);
 }
 
 static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
@@ -188,7 +190,7 @@ static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
 	write_sysreg(ctxt_sys_reg(vcpu_ctxt, DACR32_EL2), dacr32_el2);
 	write_sysreg(ctxt_sys_reg(vcpu_ctxt, IFSR32_EL2), ifsr32_el2);
 
-	if (has_vhe() || vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY)
+	if (has_vhe() || hyp_state_flags(vcpu_hyps) & KVM_ARM64_DEBUG_DIRTY)
 		write_sysreg(ctxt_sys_reg(vcpu_ctxt, DBGVCR32_EL2),
 		             dbgvcr32_el2);
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index ac7529305717..d9326085387b 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -36,11 +36,12 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val;
 
-	___activate_traps(vcpu);
-	__activate_traps_common(vcpu);
+	___activate_traps(vcpu_hyps);
+	__activate_traps_common(vcpu_hyps);
 
 	val = CPTR_EL2_DEFAULT;
 	val |= CPTR_EL2_TTA | CPTR_EL2_TAM;
@@ -67,13 +68,12 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 	}
 }
 
-static void __deactivate_traps(struct kvm_vcpu *vcpu)
+static void __deactivate_traps(struct vcpu_hyp_state *vcpu_hyps)
 {
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	extern char __kvm_hyp_host_vector[];
 	u64 mdcr_el2, cptr;
 
-	___deactivate_traps(vcpu);
+	___deactivate_traps(vcpu_hyps);
 
 	mdcr_el2 = read_sysreg(mdcr_el2);
 
@@ -104,7 +104,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 	write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
 
 	cptr = CPTR_EL2_DEFAULT;
-	if (vcpu_has_sve(vcpu) && (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED))
+	if (hyp_state_has_sve(vcpu_hyps) && (hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED))
 		cptr |= CPTR_EL2_TZ;
 
 	write_sysreg(cptr, cptr_el2);
@@ -170,6 +170,7 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
 /* Switch to the guest for legacy non-VHE systems */
 int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
@@ -236,12 +237,12 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__timer_disable_traps();
 	__hyp_vgic_save_state(vcpu);
 
-	__deactivate_traps(vcpu);
+	__deactivate_traps(vcpu_hyps);
 	__load_host_stage2();
 
 	__sysreg_restore_state_nvhe(host_ctxt);
 
-	if (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED)
+	if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED)
 		__fpsimd_save_fpexc32(vcpu);
 
 	__debug_switch_to_host(vcpu);
@@ -270,15 +271,17 @@ void __noreturn hyp_panic(void)
 	u64 par = read_sysreg_par();
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_vcpu *vcpu;
+	struct vcpu_hyp_state *vcpu_hyps;
 	struct kvm_cpu_context *vcpu_ctxt;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
 	vcpu = host_ctxt->__hyp_running_vcpu;
+	vcpu_hyps = &hyp_state(vcpu);
 	vcpu_ctxt = &vcpu_ctxt(vcpu);
 
 	if (vcpu) {
 		__timer_disable_traps();
-		__deactivate_traps(vcpu);
+		__deactivate_traps(vcpu_hyps);
 		__load_host_stage2();
 		__sysreg_restore_state_nvhe(host_ctxt);
 	}
diff --git a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
index 8dbc39026cc5..84304d6d455a 100644
--- a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
+++ b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
@@ -36,6 +36,7 @@ static bool __is_be(struct kvm_cpu_context *vcpu_ctxt)
  */
 int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
 	struct vgic_dist *vgic = &kvm->arch.vgic;
diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c
index bdb03b8e50ab..725b2976e7c2 100644
--- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
+++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
@@ -473,6 +473,7 @@ static int __vgic_v3_bpr_min(void)
 
 static int __vgic_v3_get_group(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 esr = kvm_vcpu_get_esr(vcpu);
 	u8 crm = (esr & ESR_ELx_SYS64_ISS_CRM_MASK) >> ESR_ELx_SYS64_ISS_CRM_SHIFT;
@@ -674,6 +675,7 @@ static int __vgic_v3_clear_highest_active_priority(void)
 
 static void __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 lr_val;
 	u8 lr_prio, pmr;
@@ -733,6 +735,7 @@ static void __vgic_v3_bump_eoicount(void)
 
 static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 vid = ctxt_get_reg(vcpu_ctxt, rt);
 	u64 lr_val;
@@ -757,6 +760,7 @@ static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 vid = ctxt_get_reg(vcpu_ctxt, rt);
 	u64 lr_val;
@@ -795,18 +799,21 @@ static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_read_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG0_MASK));
 }
 
 static void __vgic_v3_read_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG1_MASK));
 }
 
 static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 
@@ -820,6 +827,7 @@ static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 
@@ -833,18 +841,21 @@ static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_read_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr0(vmcr));
 }
 
 static void __vgic_v3_read_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr1(vmcr));
 }
 
 static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 	u8 bpr_min = __vgic_v3_bpr_min() - 1;
@@ -863,6 +874,7 @@ static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 	u8 bpr_min = __vgic_v3_bpr_min();
@@ -884,6 +896,7 @@ static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 val;
 
@@ -897,6 +910,7 @@ static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
 
 static void __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 val = ctxt_get_reg(vcpu_ctxt, rt);
 
@@ -909,6 +923,7 @@ static void __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
 static void __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu,
 					    u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_read_apxrn(vcpu, rt, 0);
 }
@@ -916,48 +931,56 @@ static void __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu,
 static void __vgic_v3_read_apxr1(struct kvm_vcpu *vcpu,
 					    u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_read_apxrn(vcpu, rt, 1);
 }
 
 static void __vgic_v3_read_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_read_apxrn(vcpu, rt, 2);
 }
 
 static void __vgic_v3_read_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_read_apxrn(vcpu, rt, 3);
 }
 
 static void __vgic_v3_write_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_write_apxrn(vcpu, rt, 0);
 }
 
 static void __vgic_v3_write_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_write_apxrn(vcpu, rt, 1);
 }
 
 static void __vgic_v3_write_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_write_apxrn(vcpu, rt, 2);
 }
 
 static void __vgic_v3_write_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_write_apxrn(vcpu, rt, 3);
 }
 
 static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 lr_val;
 	int lr, lr_grp, grp;
@@ -978,6 +1001,7 @@ static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	vmcr &= ICH_VMCR_PMR_MASK;
 	vmcr >>= ICH_VMCR_PMR_SHIFT;
@@ -986,6 +1010,7 @@ static void __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 val = ctxt_get_reg(vcpu_ctxt, rt);
 
@@ -999,6 +1024,7 @@ static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 val = __vgic_v3_get_highest_active_priority();
 	ctxt_set_reg(vcpu_ctxt, rt, val);
@@ -1006,6 +1032,7 @@ static void __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 vtr, val;
 
@@ -1028,6 +1055,7 @@ static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 val = ctxt_get_reg(vcpu_ctxt, rt);
 
@@ -1046,6 +1074,7 @@ static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	int rt;
 	u32 esr;
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 0113d442bc95..c9da0d1c7e72 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -33,10 +33,11 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val;
 
-	___activate_traps(vcpu);
+	___activate_traps(vcpu_hyps);
 
 	val = read_sysreg(cpacr_el1);
 	val |= CPACR_EL1_TTA;
@@ -54,7 +55,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 	val |= CPTR_EL2_TAM;
 
 	if (update_fp_enabled(vcpu)) {
-		if (vcpu_has_sve(vcpu))
+		if (hyp_state_has_sve(vcpu_hyps))
 			val |= CPACR_EL1_ZEN;
 	} else {
 		val &= ~CPACR_EL1_FPEN;
@@ -67,12 +68,11 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 }
 NOKPROBE_SYMBOL(__activate_traps);
 
-static void __deactivate_traps(struct kvm_vcpu *vcpu)
+static void __deactivate_traps(struct vcpu_hyp_state *vcpu_hyps)
 {
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	extern char vectors[];	/* kernel exception vectors */
 
-	___deactivate_traps(vcpu);
+	___deactivate_traps(vcpu_hyps);
 
 	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
 
@@ -88,10 +88,9 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 }
 NOKPROBE_SYMBOL(__deactivate_traps);
 
-void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
+void activate_traps_vhe_load(struct vcpu_hyp_state *vcpu_hyps)
 {
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__activate_traps_common(vcpu);
+	__activate_traps_common(vcpu_hyps);
 }
 
 void deactivate_traps_vhe_put(void)
@@ -110,6 +109,7 @@ void deactivate_traps_vhe_put(void)
 /* Switch to the guest for VHE systems running in EL2 */
 static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
@@ -149,11 +149,11 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 
 	sysreg_save_guest_state_vhe(guest_ctxt);
 
-	__deactivate_traps(vcpu);
+	__deactivate_traps(vcpu_hyps);
 
 	sysreg_restore_host_state_vhe(host_ctxt);
 
-	if (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED)
+	if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED)
 		__fpsimd_save_fpexc32(vcpu);
 
 	__debug_switch_to_host(vcpu);
@@ -164,6 +164,7 @@ NOKPROBE_SYMBOL(__kvm_vcpu_run_vhe);
 
 int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	int ret;
 
@@ -202,13 +203,15 @@ static void __hyp_call_panic(u64 spsr, u64 elr, u64 par)
 {
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_vcpu *vcpu;
+	struct vcpu_hyp_state *vcpu_hyps;
 	struct kvm_cpu_context *vcpu_ctxt;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
 	vcpu = host_ctxt->__hyp_running_vcpu;
+	vcpu_hyps = &hyp_state(vcpu);
 	vcpu_ctxt = &vcpu_ctxt(vcpu);
 
-	__deactivate_traps(vcpu);
+	__deactivate_traps(vcpu_hyps);
 	sysreg_restore_host_state_vhe(host_ctxt);
 
 	panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n",
diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
index 37f56b4743d0..1571c144e9b0 100644
--- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
@@ -63,6 +63,7 @@ NOKPROBE_SYMBOL(sysreg_restore_guest_state_vhe);
  */
 void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
 	struct kvm_cpu_context *host_ctxt;
@@ -82,7 +83,7 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
 
 	vcpu->arch.sysregs_loaded_on_cpu = true;
 
-	activate_traps_vhe_load(vcpu);
+	activate_traps_vhe_load(vcpu_hyps);
 }
 
 /**
@@ -98,6 +99,7 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
  */
 void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
 	struct kvm_cpu_context *host_ctxt;
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 13/30] KVM: arm64: change function parameters to use kvm_cpu_ctxt and hyp_state
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (11 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 12/30] KVM: arm64: COCCI: add_hypstate.cocci use_hypstate.cocci: Reduce scope of functions to hyp_state Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 14/30] KVM: arm64: reduce scope of vgic v2 Fuad Tabba
                   ` (16 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

__kvm_skip_instr, kvm_condition_valid, and __kvm_adjust_pc are
passed the vcpu when all they need is the cpu context and the
hypervisor state. Refactor them to take those two directly.

These functions are called directly or indirectly in future
patches from contexts that don't have access to the whole vcpu.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_emulate.h       | 15 ++++++++++-----
 arch/arm64/kvm/hyp/aarch32.c               | 14 +++++---------
 arch/arm64/kvm/hyp/exception.c             | 19 ++++++++++---------
 arch/arm64/kvm/hyp/include/hyp/adjust_pc.h | 14 ++++++--------
 arch/arm64/kvm/hyp/include/hyp/switch.h    |  2 +-
 arch/arm64/kvm/hyp/nvhe/switch.c           |  2 +-
 arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c   |  6 +++---
 arch/arm64/kvm/hyp/vgic-v3-sr.c            |  4 ++--
 arch/arm64/kvm/hyp/vhe/switch.c            |  2 +-
 9 files changed, 39 insertions(+), 39 deletions(-)
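
On the caller side, the change looks like the following sketch
(hypothetical caller; the real call sites are in the hunks below). Code
that still has a vcpu splits it into the two pieces the refactored
helpers now take:

/* Illustrative only: not part of the series. */
static void example_handle_trapped_32bit_instr(struct kvm_vcpu *vcpu)
{
	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);

	/* A failed condition check means the instruction is a NOP: skip it. */
	if (!__kvm_condition_valid(vcpu_ctxt, vcpu_hyps))
		kvm_skip_instr32(vcpu_ctxt, vcpu_hyps);
}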

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index e095afeecd10..28fc4781249e 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -33,8 +33,8 @@ enum exception_type {
 	except_type_serror	= 0x180,
 };
 
-bool kvm_condition_valid32(const struct kvm_vcpu *vcpu);
-void kvm_skip_instr32(struct kvm_vcpu *vcpu);
+bool kvm_condition_valid32(const struct kvm_cpu_context *vcpu_ctxt, const struct vcpu_hyp_state *vcpu_hyps);
+void kvm_skip_instr32(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps);
 
 void kvm_inject_undefined(struct kvm_vcpu *vcpu);
 void kvm_inject_vabt(struct kvm_vcpu *vcpu);
@@ -162,14 +162,19 @@ static __always_inline bool vcpu_mode_is_32bit(const struct kvm_vcpu *vcpu)
 	return ctxt_mode_is_32bit(&vcpu_ctxt(vcpu));
 }
 
-static __always_inline bool kvm_condition_valid(const struct kvm_vcpu *vcpu)
+static __always_inline bool __kvm_condition_valid(const struct kvm_cpu_context *vcpu_ctxt, const struct vcpu_hyp_state *vcpu_hyps)
 {
-	if (vcpu_mode_is_32bit(vcpu))
-		return kvm_condition_valid32(vcpu);
+	if (ctxt_mode_is_32bit(vcpu_ctxt))
+		return kvm_condition_valid32(vcpu_ctxt, vcpu_hyps);
 
 	return true;
 }
 
+static __always_inline bool kvm_condition_valid(const struct kvm_vcpu *vcpu)
+{
+	return __kvm_condition_valid(&vcpu->arch.ctxt, &hyp_state(vcpu));
+}
+
 static inline void ctxt_set_thumb(struct kvm_cpu_context *ctxt)
 {
 	*ctxt_cpsr(ctxt) |= PSR_AA32_T_BIT;
diff --git a/arch/arm64/kvm/hyp/aarch32.c b/arch/arm64/kvm/hyp/aarch32.c
index 2d45e13d1b12..2feb2f8d9907 100644
--- a/arch/arm64/kvm/hyp/aarch32.c
+++ b/arch/arm64/kvm/hyp/aarch32.c
@@ -44,20 +44,18 @@ static const unsigned short cc_map[16] = {
 /*
  * Check if a trapped instruction should have been executed or not.
  */
-bool kvm_condition_valid32(const struct kvm_vcpu *vcpu)
+bool kvm_condition_valid32(const struct kvm_cpu_context *vcpu_ctxt, const struct vcpu_hyp_state *vcpu_hyps)
 {
-	const struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	unsigned long cpsr;
 	u32 cpsr_cond;
 	int cond;
 
 	/* Top two bits non-zero?  Unconditional. */
-	if (kvm_vcpu_get_esr(vcpu) >> 30)
+	if (kvm_hyp_state_get_esr(vcpu_hyps) >> 30)
 		return true;
 
 	/* Is condition field valid? */
-	cond = kvm_vcpu_get_condition(vcpu);
+	cond = kvm_hyp_state_get_condition(vcpu_hyps);
 	if (cond == 0xE)
 		return true;
 
@@ -125,15 +123,13 @@ static void kvm_adjust_itstate(struct kvm_cpu_context *vcpu_ctxt)
  * kvm_skip_instr - skip a trapped instruction and proceed to the next
  * @vcpu: The vcpu pointer
  */
-void kvm_skip_instr32(struct kvm_vcpu *vcpu)
+void kvm_skip_instr32(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 pc = *ctxt_pc(vcpu_ctxt);
 	bool is_thumb;
 
 	is_thumb = !!(*ctxt_cpsr(vcpu_ctxt) & PSR_AA32_T_BIT);
-	if (is_thumb && !kvm_vcpu_trap_il_is32bit(vcpu))
+	if (is_thumb && !kvm_hyp_state_trap_il_is32bit(vcpu_hyps))
 		pc += 2;
 	else
 		pc += 4;
diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index d4c2905b595d..a08806efe031 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -329,11 +329,9 @@ static void enter_exception32(struct kvm_cpu_context *vcpu_ctxt, u32 mode,
 	*ctxt_pc(vcpu_ctxt) = vect_offset;
 }
 
-static void kvm_inject_exception(struct kvm_vcpu *vcpu)
+static void kvm_inject_exception(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	if (vcpu_el1_is_32bit(vcpu)) {
+	if (hyp_state_el1_is_32bit(vcpu_hyps)) {
 		switch (hyp_state_flags(vcpu_hyps) & KVM_ARM64_EXCEPT_MASK) {
 		case KVM_ARM64_EXCEPT_AA32_UND:
 			enter_exception32(vcpu_ctxt, PSR_AA32_MODE_UND, 4);
@@ -370,16 +368,19 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
  * Adjust the guest PC (and potentially exception state) depending on
  * flags provided by the emulation code.
  */
-void __kvm_adjust_pc(struct kvm_vcpu *vcpu)
+void kvm_adjust_pc(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_PENDING_EXCEPTION) {
-		kvm_inject_exception(vcpu);
+		kvm_inject_exception(vcpu_ctxt, vcpu_hyps);
 		hyp_state_flags(vcpu_hyps) &= ~(KVM_ARM64_PENDING_EXCEPTION |
 				      KVM_ARM64_EXCEPT_MASK);
 	} else 	if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_INCREMENT_PC) {
-		kvm_skip_instr(vcpu);
+		kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
 		hyp_state_flags(vcpu_hyps) &= ~KVM_ARM64_INCREMENT_PC;
 	}
 }
+
+void __kvm_adjust_pc(struct kvm_vcpu *vcpu)
+{
+	kvm_adjust_pc(&vcpu_ctxt(vcpu), &hyp_state(vcpu));
+}
diff --git a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h
index 9bbe452a461a..4e0cfbe635e5 100644
--- a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h
+++ b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h
@@ -13,12 +13,10 @@
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_host.h>
 
-static inline void kvm_skip_instr(struct kvm_vcpu *vcpu)
+static inline void kvm_skip_instr(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (ctxt_mode_is_32bit(vcpu_ctxt)) {
-		kvm_skip_instr32(vcpu);
+		kvm_skip_instr32(vcpu_ctxt, vcpu_hyps);
 	} else {
 		*ctxt_pc(vcpu_ctxt) += 4;
 		*ctxt_cpsr(vcpu_ctxt) &= ~PSR_BTYPE_MASK;
@@ -32,14 +30,12 @@ static inline void kvm_skip_instr(struct kvm_vcpu *vcpu)
  * Skip an instruction which has been emulated at hyp while most guest sysregs
  * are live.
  */
-static inline void __kvm_skip_instr(struct kvm_vcpu *vcpu)
+static inline void __kvm_skip_instr(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	*ctxt_pc(vcpu_ctxt) = read_sysreg_el2(SYS_ELR);
 	ctxt_gp_regs(vcpu_ctxt)->pstate = read_sysreg_el2(SYS_SPSR);
 
-	kvm_skip_instr(vcpu);
+	kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
 
 	write_sysreg_el2(ctxt_gp_regs(vcpu_ctxt)->pstate, SYS_SPSR);
 	write_sysreg_el2(*ctxt_pc(vcpu_ctxt), SYS_ELR);
@@ -54,4 +50,6 @@ static inline void kvm_skip_host_instr(void)
 	write_sysreg_el2(read_sysreg_el2(SYS_ELR) + 4, SYS_ELR);
 }
 
+void kvm_adjust_pc(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps);
+
 #endif
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 5ee8aac86fdc..075719c07009 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -350,7 +350,7 @@ static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu)
 		return false;
 	}
 
-	__kvm_skip_instr(vcpu);
+	__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
 	return true;
 }
 
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index d9326085387b..eadbf2ccaf68 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -204,7 +204,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	 */
 	__debug_save_host_buffers_nvhe(vcpu);
 
-	__kvm_adjust_pc(vcpu);
+	kvm_adjust_pc(vcpu_ctxt, vcpu_hyps);
 
 	/*
 	 * We must restore the 32-bit state before the sysregs, thanks
diff --git a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
index 84304d6d455a..acd0d21394e3 100644
--- a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
+++ b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
@@ -55,13 +55,13 @@ int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
 
 	/* Reject anything but a 32bit access */
 	if (kvm_vcpu_dabt_get_as(vcpu) != sizeof(u32)) {
-		__kvm_skip_instr(vcpu);
+		__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
 		return -1;
 	}
 
 	/* Not aligned? Don't bother */
 	if (fault_ipa & 3) {
-		__kvm_skip_instr(vcpu);
+		__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
 		return -1;
 	}
 
@@ -85,7 +85,7 @@ int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
 		ctxt_set_reg(vcpu_ctxt, rd, data);
 	}
 
-	__kvm_skip_instr(vcpu);
+	__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
 
 	return 1;
 }
diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c
index 725b2976e7c2..d025a5830dcc 100644
--- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
+++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
@@ -1086,7 +1086,7 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
 	esr = kvm_vcpu_get_esr(vcpu);
 	if (ctxt_mode_is_32bit(vcpu_ctxt)) {
 		if (!kvm_condition_valid(vcpu)) {
-			__kvm_skip_instr(vcpu);
+			__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
 			return 1;
 		}
 
@@ -1198,7 +1198,7 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
 	rt = kvm_vcpu_sys_get_rt(vcpu);
 	fn(vcpu, vmcr, rt);
 
-	__kvm_skip_instr(vcpu);
+	__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
 
 	return 1;
 }
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index c9da0d1c7e72..395274532c20 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -135,7 +135,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	__load_guest_stage2(vcpu->arch.hw_mmu);
 	__activate_traps(vcpu);
 
-	__kvm_adjust_pc(vcpu);
+	kvm_adjust_pc(vcpu_ctxt, vcpu_hyps);
 
 	sysreg_restore_guest_state_vhe(guest_ctxt);
 	__debug_switch_to_guest(vcpu);
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 14/30] KVM: arm64: reduce scope of vgic v2
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (12 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 13/30] KVM: arm64: change function parameters to use kvm_cpu_ctxt and hyp_state Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 15/30] KVM: arm64: COCCI: vgic3_cpu.cocci: reduce scope of vgic v3 Fuad Tabba
                   ` (15 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

vgic v2 interface functions are passed the vcpu, when the state
they need is the vgic distributor, as well as the kvm_cpu_context
and the recently created vcpu_hyp_state. Reduce the scope of
these interface functions to those structs.

Pass the vgic distributor to fixup_guest_exit so that it's not
dependent on struct kvm for the vgic state. NOTE: this change to
fixup_guest_exit is temporary, and will be tidied up in a
subsequent patch in this series.
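
A condensed view of the resulting call chain (lines excerpted from the
hunks below, unrelated code omitted): the run loop resolves the
distributor once, using a hyp VA on nVHE, and hands it down so the
cpuif access helper no longer needs the vcpu to find the vgic state.

  struct kvm *kvm = kern_hyp_va(vcpu->kvm);	/* nVHE */
  struct vgic_dist *vgic = &kvm->arch.vgic;
  ...
  } while (fixup_guest_exit(vcpu, vgic, &exit_code));

  /* in fixup_guest_exit(): */
  int ret = __vgic_v2_perform_cpuif_access(vgic, vcpu_ctxt, vcpu_hyps);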

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_hyp.h         |  2 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h  |  4 ++--
 arch/arm64/kvm/hyp/nvhe/switch.c         |  4 +++-
 arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c | 16 ++++++----------
 arch/arm64/kvm/hyp/vhe/switch.c          |  3 ++-
 5 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 2737e05a16b2..d9a8872a7efb 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -55,7 +55,7 @@ DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
  */
 #define __kvm_swab32(x)	___constant_swab32(x)
 
-int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu);
+int __vgic_v2_perform_cpuif_access(struct vgic_dist *vgic, struct kvm_cpu_context *ctxt, struct vcpu_hyp_state *hyps);
 
 void __vgic_v3_save_state(struct vgic_v3_cpu_if *cpu_if);
 void __vgic_v3_restore_state(struct vgic_v3_cpu_if *cpu_if);
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 075719c07009..30fcfe84f609 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -424,7 +424,7 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
  * the guest, false when we should restore the host state and return to the
  * main run loop.
  */
-static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
+static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgic, u64 *exit_code)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
@@ -486,7 +486,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 			!kvm_vcpu_abt_iss1tw(vcpu);
 
 		if (valid) {
-			int ret = __vgic_v2_perform_cpuif_access(vcpu);
+			int ret = __vgic_v2_perform_cpuif_access(vgic, vcpu_ctxt, vcpu_hyps);
 
 			if (ret == 1)
 				goto guest;
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index eadbf2ccaf68..164b0f899f7b 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -172,6 +172,8 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
+	struct vgic_dist *vgic = &kvm->arch.vgic;
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
 	bool pmu_switch_needed;
@@ -230,7 +232,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 		exit_code = __guest_enter(vcpu);
 
 		/* And we're baaack! */
-	} while (fixup_guest_exit(vcpu, &exit_code));
+	} while (fixup_guest_exit(vcpu, vgic, &exit_code));
 
 	__sysreg_save_state_nvhe(guest_ctxt);
 	__sysreg32_save_state(vcpu);
diff --git a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
index acd0d21394e3..787f973af43a 100644
--- a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
+++ b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
@@ -34,19 +34,15 @@ static bool __is_be(struct kvm_cpu_context *vcpu_ctxt)
  *  0: Not a GICV access
  * -1: Illegal GICV access successfully performed
  */
-int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
+int __vgic_v2_perform_cpuif_access(struct vgic_dist *vgic, struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
-	struct vgic_dist *vgic = &kvm->arch.vgic;
 	phys_addr_t fault_ipa;
 	void __iomem *addr;
 	int rd;
 
 	/* Build the full address */
-	fault_ipa  = kvm_vcpu_get_fault_ipa(vcpu);
-	fault_ipa |= kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0);
+	fault_ipa  = kvm_hyp_state_get_fault_ipa(vcpu_hyps);
+	fault_ipa |= kvm_hyp_state_get_hfar(vcpu_hyps) & GENMASK(11, 0);
 
 	/* If not for GICV, move on */
 	if (fault_ipa <  vgic->vgic_cpu_base ||
@@ -54,7 +50,7 @@ int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
 		return 0;
 
 	/* Reject anything but a 32bit access */
-	if (kvm_vcpu_dabt_get_as(vcpu) != sizeof(u32)) {
+	if (kvm_hyp_state_dabt_get_as(vcpu_hyps) != sizeof(u32)) {
 		__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
 		return -1;
 	}
@@ -65,11 +61,11 @@ int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
 		return -1;
 	}
 
-	rd = kvm_vcpu_dabt_get_rd(vcpu);
+	rd = kvm_hyp_state_dabt_get_rd(vcpu_hyps);
 	addr  = kvm_vgic_global_state.vcpu_hyp_va;
 	addr += fault_ipa - vgic->vgic_cpu_base;
 
-	if (kvm_vcpu_dabt_iswrite(vcpu)) {
+	if (kvm_hyp_state_dabt_iswrite(vcpu_hyps)) {
 		u32 data = ctxt_get_reg(vcpu_ctxt, rd);
 		if (__is_be(vcpu_ctxt)) {
 			/* guest pre-swabbed data, undo this for writel() */
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 395274532c20..f315058a50ca 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -111,6 +111,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	struct vgic_dist *vgic = &vcpu->kvm->arch.vgic;
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
 	u64 exit_code;
@@ -145,7 +146,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 		exit_code = __guest_enter(vcpu);
 
 		/* And we're baaack! */
-	} while (fixup_guest_exit(vcpu, &exit_code));
+	} while (fixup_guest_exit(vcpu, vgic, &exit_code));
 
 	sysreg_save_guest_state_vhe(guest_ctxt);
 
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 15/30] KVM: arm64: COCCI: vgic3_cpu.cocci: reduce scope of vgic v3
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (13 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 14/30] KVM: arm64: reduce scope of vgic v2 Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 16/30] KVM: arm64: reduce scope of vgic_v3 access parameters Fuad Tabba
                   ` (14 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

vgic v3 interface functions are passed the vcpu, when the state
they need is the vgic interface, as well as the kvm_cpu_context
and the recently created vcpu_hyp_state. Reduce the scope of
these interface functions to those structs.

This applies the semantic patch with the following command:

spatch --sp-file cocci_refactor/vgic3_cpu.cocci arch/arm64/kvm/hyp/vgic-v3-sr.c --in-place
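
Every converted accessor follows the same mechanical shape; as a
representative example, __vgic_v3_get_group after the semantic patch
(taken from the resulting diff below) takes the context and hyp state
instead of the vcpu and reads the esr through the hyp_state accessor:

  static int __vgic_v3_get_group(struct kvm_cpu_context *vcpu_ctxt,
                                 struct vcpu_hyp_state *vcpu_hyps)
  {
          u32 esr = kvm_hyp_state_get_esr(vcpu_hyps);
          u8 crm = (esr & ESR_ELx_SYS64_ISS_CRM_MASK) >> ESR_ELx_SYS64_ISS_CRM_SHIFT;

          return crm != 8;
  }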

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/vgic-v3-sr.c | 247 ++++++++++++++++++--------------
 1 file changed, 137 insertions(+), 110 deletions(-)

diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c
index d025a5830dcc..3e1951b04fce 100644
--- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
+++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
@@ -471,11 +471,10 @@ static int __vgic_v3_bpr_min(void)
 	return 8 - vtr_to_nr_pre_bits(read_gicreg(ICH_VTR_EL2));
 }
 
-static int __vgic_v3_get_group(struct kvm_vcpu *vcpu)
+static int __vgic_v3_get_group(struct kvm_cpu_context *vcpu_ctxt,
+			       struct vcpu_hyp_state *vcpu_hyps)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	u32 esr = kvm_vcpu_get_esr(vcpu);
+	u32 esr = kvm_hyp_state_get_esr(vcpu_hyps);
 	u8 crm = (esr & ESR_ELx_SYS64_ISS_CRM_MASK) >> ESR_ELx_SYS64_ISS_CRM_SHIFT;
 
 	return crm != 8;
@@ -483,10 +482,11 @@ static int __vgic_v3_get_group(struct kvm_vcpu *vcpu)
 
 #define GICv3_IDLE_PRIORITY	0xff
 
-static int __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, u32 vmcr,
+static int __vgic_v3_highest_priority_lr(struct vgic_v3_cpu_if *cpu_if,
+					 u32 vmcr,
 					 u64 *lr_val)
 {
-	unsigned int used_lrs = vcpu->arch.vgic_cpu.vgic_v3.used_lrs;
+	unsigned int used_lrs = cpu_if->used_lrs;
 	u8 priority = GICv3_IDLE_PRIORITY;
 	int i, lr = -1;
 
@@ -522,10 +522,10 @@ static int __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, u32 vmcr,
 	return lr;
 }
 
-static int __vgic_v3_find_active_lr(struct kvm_vcpu *vcpu, int intid,
+static int __vgic_v3_find_active_lr(struct vgic_v3_cpu_if *cpu_if, int intid,
 				    u64 *lr_val)
 {
-	unsigned int used_lrs = vcpu->arch.vgic_cpu.vgic_v3.used_lrs;
+	unsigned int used_lrs = cpu_if->used_lrs;
 	int i;
 
 	for (i = 0; i < used_lrs; i++) {
@@ -673,17 +673,18 @@ static int __vgic_v3_clear_highest_active_priority(void)
 	return GICv3_IDLE_PRIORITY;
 }
 
-static void __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_iar(struct vgic_v3_cpu_if *cpu_if,
+			       struct kvm_cpu_context *vcpu_ctxt,
+			       struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+			       int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 lr_val;
 	u8 lr_prio, pmr;
 	int lr, grp;
 
-	grp = __vgic_v3_get_group(vcpu);
+	grp = __vgic_v3_get_group(vcpu_ctxt, vcpu_hyps);
 
-	lr = __vgic_v3_highest_priority_lr(vcpu, vmcr, &lr_val);
+	lr = __vgic_v3_highest_priority_lr(cpu_if, vmcr, &lr_val);
 	if (lr < 0)
 		goto spurious;
 
@@ -733,10 +734,11 @@ static void __vgic_v3_bump_eoicount(void)
 	write_gicreg(hcr, ICH_HCR_EL2);
 }
 
-static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_dir(struct vgic_v3_cpu_if *cpu_if,
+				struct kvm_cpu_context *vcpu_ctxt,
+				struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 vid = ctxt_get_reg(vcpu_ctxt, rt);
 	u64 lr_val;
 	int lr;
@@ -749,7 +751,7 @@ static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	if (vid >= VGIC_MIN_LPI)
 		return;
 
-	lr = __vgic_v3_find_active_lr(vcpu, vid, &lr_val);
+	lr = __vgic_v3_find_active_lr(cpu_if, vid, &lr_val);
 	if (lr == -1) {
 		__vgic_v3_bump_eoicount();
 		return;
@@ -758,16 +760,17 @@ static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	__vgic_v3_clear_active_lr(lr, lr_val);
 }
 
-static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_eoir(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 vid = ctxt_get_reg(vcpu_ctxt, rt);
 	u64 lr_val;
 	u8 lr_prio, act_prio;
 	int lr, grp;
 
-	grp = __vgic_v3_get_group(vcpu);
+	grp = __vgic_v3_get_group(vcpu_ctxt, vcpu_hyps);
 
 	/* Drop priority in any case */
 	act_prio = __vgic_v3_clear_highest_active_priority();
@@ -780,7 +783,7 @@ static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	if (vmcr & ICH_VMCR_EOIM_MASK)
 		return;
 
-	lr = __vgic_v3_find_active_lr(vcpu, vid, &lr_val);
+	lr = __vgic_v3_find_active_lr(cpu_if, vid, &lr_val);
 	if (lr == -1) {
 		__vgic_v3_bump_eoicount();
 		return;
@@ -797,24 +800,27 @@ static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	__vgic_v3_clear_active_lr(lr, lr_val);
 }
 
-static void __vgic_v3_read_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_igrpen0(struct vgic_v3_cpu_if *cpu_if,
+				   struct kvm_cpu_context *vcpu_ctxt,
+				   struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				   int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG0_MASK));
 }
 
-static void __vgic_v3_read_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_igrpen1(struct vgic_v3_cpu_if *cpu_if,
+				   struct kvm_cpu_context *vcpu_ctxt,
+				   struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				   int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG1_MASK));
 }
 
-static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_igrpen0(struct vgic_v3_cpu_if *cpu_if,
+				    struct kvm_cpu_context *vcpu_ctxt,
+				    struct vcpu_hyp_state *vcpu_hyps,
+				    u32 vmcr, int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 
 	if (val & 1)
@@ -825,10 +831,11 @@ static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	__vgic_v3_write_vmcr(vmcr);
 }
 
-static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_igrpen1(struct vgic_v3_cpu_if *cpu_if,
+				    struct kvm_cpu_context *vcpu_ctxt,
+				    struct vcpu_hyp_state *vcpu_hyps,
+				    u32 vmcr, int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 
 	if (val & 1)
@@ -839,24 +846,27 @@ static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	__vgic_v3_write_vmcr(vmcr);
 }
 
-static void __vgic_v3_read_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_bpr0(struct vgic_v3_cpu_if *cpu_if,
+				struct kvm_cpu_context *vcpu_ctxt,
+				struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr0(vmcr));
 }
 
-static void __vgic_v3_read_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_bpr1(struct vgic_v3_cpu_if *cpu_if,
+				struct kvm_cpu_context *vcpu_ctxt,
+				struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr1(vmcr));
 }
 
-static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_bpr0(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 	u8 bpr_min = __vgic_v3_bpr_min() - 1;
 
@@ -872,10 +882,11 @@ static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	__vgic_v3_write_vmcr(vmcr);
 }
 
-static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_bpr1(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 	u8 bpr_min = __vgic_v3_bpr_min();
 
@@ -894,13 +905,14 @@ static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	__vgic_v3_write_vmcr(vmcr);
 }
 
-static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
+static void __vgic_v3_read_apxrn(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, int rt,
+				 int n)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 val;
 
-	if (!__vgic_v3_get_group(vcpu))
+	if (!__vgic_v3_get_group(vcpu_ctxt, vcpu_hyps))
 		val = __vgic_v3_read_ap0rn(n);
 	else
 		val = __vgic_v3_read_ap1rn(n);
@@ -908,86 +920,94 @@ static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
 	ctxt_set_reg(vcpu_ctxt, rt, val);
 }
 
-static void __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
+static void __vgic_v3_write_apxrn(struct vgic_v3_cpu_if *cpu_if,
+				  struct kvm_cpu_context *vcpu_ctxt,
+				  struct vcpu_hyp_state *vcpu_hyps, int rt,
+				  int n)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 val = ctxt_get_reg(vcpu_ctxt, rt);
 
-	if (!__vgic_v3_get_group(vcpu))
+	if (!__vgic_v3_get_group(vcpu_ctxt, vcpu_hyps))
 		__vgic_v3_write_ap0rn(val, n);
 	else
 		__vgic_v3_write_ap1rn(val, n);
 }
 
-static void __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu,
+static void __vgic_v3_read_apxr0(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+					    struct vcpu_hyp_state *vcpu_hyps,
 					    u32 vmcr, int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_read_apxrn(vcpu, rt, 0);
+	__vgic_v3_read_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 0);
 }
 
-static void __vgic_v3_read_apxr1(struct kvm_vcpu *vcpu,
+static void __vgic_v3_read_apxr1(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+					    struct vcpu_hyp_state *vcpu_hyps,
 					    u32 vmcr, int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_read_apxrn(vcpu, rt, 1);
+	__vgic_v3_read_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 1);
 }
 
-static void __vgic_v3_read_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_apxr2(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_read_apxrn(vcpu, rt, 2);
+	__vgic_v3_read_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 2);
 }
 
-static void __vgic_v3_read_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_apxr3(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_read_apxrn(vcpu, rt, 3);
+	__vgic_v3_read_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 3);
 }
 
-static void __vgic_v3_write_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_apxr0(struct vgic_v3_cpu_if *cpu_if,
+				  struct kvm_cpu_context *vcpu_ctxt,
+				  struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				  int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_write_apxrn(vcpu, rt, 0);
+	__vgic_v3_write_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 0);
 }
 
-static void __vgic_v3_write_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_apxr1(struct vgic_v3_cpu_if *cpu_if,
+				  struct kvm_cpu_context *vcpu_ctxt,
+				  struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				  int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_write_apxrn(vcpu, rt, 1);
+	__vgic_v3_write_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 1);
 }
 
-static void __vgic_v3_write_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_apxr2(struct vgic_v3_cpu_if *cpu_if,
+				  struct kvm_cpu_context *vcpu_ctxt,
+				  struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				  int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_write_apxrn(vcpu, rt, 2);
+	__vgic_v3_write_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 2);
 }
 
-static void __vgic_v3_write_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_apxr3(struct vgic_v3_cpu_if *cpu_if,
+				  struct kvm_cpu_context *vcpu_ctxt,
+				  struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				  int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_write_apxrn(vcpu, rt, 3);
+	__vgic_v3_write_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 3);
 }
 
-static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_hppir(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 lr_val;
 	int lr, lr_grp, grp;
 
-	grp = __vgic_v3_get_group(vcpu);
+	grp = __vgic_v3_get_group(vcpu_ctxt, vcpu_hyps);
 
-	lr = __vgic_v3_highest_priority_lr(vcpu, vmcr, &lr_val);
+	lr = __vgic_v3_highest_priority_lr(cpu_if, vmcr, &lr_val);
 	if (lr == -1)
 		goto spurious;
 
@@ -999,19 +1019,21 @@ static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	ctxt_set_reg(vcpu_ctxt, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK);
 }
 
-static void __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_pmr(struct vgic_v3_cpu_if *cpu_if,
+			       struct kvm_cpu_context *vcpu_ctxt,
+			       struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+			       int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	vmcr &= ICH_VMCR_PMR_MASK;
 	vmcr >>= ICH_VMCR_PMR_SHIFT;
 	ctxt_set_reg(vcpu_ctxt, rt, vmcr);
 }
 
-static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_pmr(struct vgic_v3_cpu_if *cpu_if,
+				struct kvm_cpu_context *vcpu_ctxt,
+				struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 val = ctxt_get_reg(vcpu_ctxt, rt);
 
 	val <<= ICH_VMCR_PMR_SHIFT;
@@ -1022,18 +1044,20 @@ static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	write_gicreg(vmcr, ICH_VMCR_EL2);
 }
 
-static void __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_rpr(struct vgic_v3_cpu_if *cpu_if,
+			       struct kvm_cpu_context *vcpu_ctxt,
+			       struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+			       int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 val = __vgic_v3_get_highest_active_priority();
 	ctxt_set_reg(vcpu_ctxt, rt, val);
 }
 
-static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_ctlr(struct vgic_v3_cpu_if *cpu_if,
+				struct kvm_cpu_context *vcpu_ctxt,
+				struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 vtr, val;
 
 	vtr = read_gicreg(ICH_VTR_EL2);
@@ -1053,10 +1077,11 @@ static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	ctxt_set_reg(vcpu_ctxt, rt, val);
 }
 
-static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_ctlr(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 val = ctxt_get_reg(vcpu_ctxt, rt);
 
 	if (val & ICC_CTLR_EL1_CBPR_MASK)
@@ -1074,16 +1099,18 @@ static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
 {
+	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	int rt;
 	u32 esr;
 	u32 vmcr;
-	void (*fn)(struct kvm_vcpu *, u32, int);
+	void (*fn)(struct vgic_v3_cpu_if *, struct kvm_cpu_context *,
+		   struct vcpu_hyp_state *, u32, int);
 	bool is_read;
 	u32 sysreg;
 
-	esr = kvm_vcpu_get_esr(vcpu);
+	esr = kvm_hyp_state_get_esr(vcpu_hyps);
 	if (ctxt_mode_is_32bit(vcpu_ctxt)) {
 		if (!kvm_condition_valid(vcpu)) {
 			__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
@@ -1195,8 +1222,8 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
 	}
 
 	vmcr = __vgic_v3_read_vmcr();
-	rt = kvm_vcpu_sys_get_rt(vcpu);
-	fn(vcpu, vmcr, rt);
+	rt = kvm_hyp_state_sys_get_rt(vcpu_hyps);
+	fn(cpu_if, vcpu_ctxt, vcpu_hyps, vmcr, rt);
 
 	__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
 
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 16/30] KVM: arm64: reduce scope of vgic_v3 access parameters
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (14 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 15/30] KVM: arm64: COCCI: vgic3_cpu.cocci: reduce scope of vgic v3 Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 17/30] KVM: arm64: access __hyp_running_vcpu via accessors only Fuad Tabba
                   ` (13 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Now that __vgic_v3_perform_cpuif_access only needs the
vgic_v3_cpu_if, the kvm_cpu_context, and the vcpu_hyp_state, pass
these rather than the whole vcpu.
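
The caller in fixup_guest_exit() now passes the pieces explicitly, and
inside the helper the 32-bit condition check uses
__kvm_condition_valid() on the same pair instead of the vcpu (lines
excerpted from the hunks below):

  int ret = __vgic_v3_perform_cpuif_access(&vcpu->arch.vgic_cpu.vgic_v3,
                                           vcpu_ctxt, vcpu_hyps);
  ...
  if (!__kvm_condition_valid(vcpu_ctxt, vcpu_hyps)) {
          __kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
          return 1;
  }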

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_hyp.h        | 4 +++-
 arch/arm64/kvm/hyp/include/hyp/switch.h | 2 +-
 arch/arm64/kvm/hyp/vgic-v3-sr.c         | 9 ++++-----
 3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index d9a8872a7efb..b379c2b96f33 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -63,7 +63,9 @@ void __vgic_v3_activate_traps(struct vgic_v3_cpu_if *cpu_if);
 void __vgic_v3_deactivate_traps(struct vgic_v3_cpu_if *cpu_if);
 void __vgic_v3_save_aprs(struct vgic_v3_cpu_if *cpu_if);
 void __vgic_v3_restore_aprs(struct vgic_v3_cpu_if *cpu_if);
-int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu);
+int __vgic_v3_perform_cpuif_access(struct vgic_v3_cpu_if *cpu_if,
+				   struct kvm_cpu_context *vcpu_ctxt,
+				   struct vcpu_hyp_state *vcpu_hyps);
 
 #ifdef __KVM_NVHE_HYPERVISOR__
 void __timer_enable_traps(void);
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 30fcfe84f609..44e76993a9b4 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -502,7 +502,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgi
 	if (static_branch_unlikely(&vgic_v3_cpuif_trap) &&
 	    (kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 ||
 	     kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_CP15_32)) {
-		int ret = __vgic_v3_perform_cpuif_access(vcpu);
+		int ret = __vgic_v3_perform_cpuif_access(&vcpu->arch.vgic_cpu.vgic_v3, vcpu_ctxt, vcpu_hyps);
 
 		if (ret == 1)
 			goto guest;
diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c
index 3e1951b04fce..2c16e0cd45f0 100644
--- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
+++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
@@ -1097,11 +1097,10 @@ static void __vgic_v3_write_ctlr(struct vgic_v3_cpu_if *cpu_if,
 	write_gicreg(vmcr, ICH_VMCR_EL2);
 }
 
-int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
+int __vgic_v3_perform_cpuif_access(struct vgic_v3_cpu_if *cpu_if,
+				   struct kvm_cpu_context *vcpu_ctxt,
+				   struct vcpu_hyp_state *vcpu_hyps)
 {
-	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	int rt;
 	u32 esr;
 	u32 vmcr;
@@ -1112,7 +1111,7 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
 
 	esr = kvm_hyp_state_get_esr(vcpu_hyps);
 	if (ctxt_mode_is_32bit(vcpu_ctxt)) {
-		if (!kvm_condition_valid(vcpu)) {
+		if (!__kvm_condition_valid(vcpu_ctxt, vcpu_hyps)) {
 			__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
 			return 1;
 		}
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 17/30] KVM: arm64: access __hyp_running_vcpu via accessors only
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (15 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 16/30] KVM: arm64: reduce scope of vgic_v3 access parameters Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 18/30] KVM: arm64: reduce scope of __guest_exit to only depend on kvm_cpu_context Fuad Tabba
                   ` (12 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

__hyp_running_vcpu exposes struct kvm_vcpu, but the code that
accesses it only needs the cpu context and the hyp state. Start
this refactoring by first ensuring that all accesses to
__hyp_running_vcpu go through accessors rather than directly.
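
The accessors are plain macros over the existing field, so this patch
is mostly mechanical; the definitions and a typical call site, taken
from the hunks below:

  #define get_hyp_running_vcpu(ctxt)       (ctxt)->__hyp_running_vcpu
  #define set_hyp_running_vcpu(ctxt, vcpu) (ctxt)->__hyp_running_vcpu = (vcpu)
  #define is_hyp_running_vcpu(ctxt)        (ctxt)->__hyp_running_vcpu

  /* e.g. in __kvm_vcpu_run() and hyp_panic(): */
  set_hyp_running_vcpu(host_ctxt, vcpu);
  ...
  vcpu = get_hyp_running_vcpu(host_ctxt);
  vcpu_hyps = get_hyp_running_hyps(host_ctxt);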

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_asm.h           | 24 ++++++++++++++++++++++
 arch/arm64/include/asm/kvm_host.h          |  7 +++++++
 arch/arm64/kernel/asm-offsets.c            |  1 +
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h |  4 ++--
 arch/arm64/kvm/hyp/nvhe/switch.c           | 10 ++++-----
 arch/arm64/kvm/hyp/vhe/switch.c            |  8 +++-----
 6 files changed, 41 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 5e9b33cbac51..766b6a852407 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -251,6 +251,18 @@ extern u32 __kvm_get_mdcr_el2(void);
 	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
 .endm
 
+.macro get_vcpu_ctxt_ptr vcpu, ctxt
+	get_host_ctxt \ctxt, \vcpu
+	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+	add	\vcpu, \vcpu, #VCPU_CONTEXT
+.endm
+
+.macro get_vcpu_hyps_ptr vcpu, ctxt
+	get_host_ctxt \ctxt, \vcpu
+	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+	add	\vcpu, \vcpu, #VCPU_HYPS
+.endm
+
 .macro get_loaded_vcpu vcpu, ctxt
 	adr_this_cpu \ctxt, kvm_hyp_ctxt, \vcpu
 	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
@@ -261,6 +273,18 @@ extern u32 __kvm_get_mdcr_el2(void);
 	str	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
 .endm
 
+.macro get_loaded_vcpu_ctxt vcpu, ctxt
+	adr_this_cpu \ctxt, kvm_hyp_ctxt, \vcpu
+	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+	add	\vcpu, \vcpu, #VCPU_CONTEXT
+.endm
+
+.macro get_loaded_vcpu_hyps vcpu, ctxt
+	adr_this_cpu \ctxt, kvm_hyp_ctxt, \vcpu
+	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+	add	\vcpu, \vcpu, #VCPU_HYPS
+.endm
+
 /*
  * KVM extable for unexpected exceptions.
  * In the same format _asm_extable, but output to a different section so that
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index dc4b5e133d86..4b01c74705ad 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -230,6 +230,13 @@ struct kvm_cpu_context {
 	struct kvm_vcpu *__hyp_running_vcpu;
 };
 
+#define get_hyp_running_vcpu(ctxt) (ctxt)->__hyp_running_vcpu
+#define set_hyp_running_vcpu(ctxt, vcpu) (ctxt)->__hyp_running_vcpu = (vcpu)
+#define is_hyp_running_vcpu(ctxt) (ctxt)->__hyp_running_vcpu
+
+#define get_hyp_running_ctxt(host_ctxt) (host_ctxt)->__hyp_running_vcpu ? &(host_ctxt)->__hyp_running_vcpu->arch.ctxt : NULL
+#define get_hyp_running_hyps(host_ctxt) (host_ctxt)->__hyp_running_vcpu ? &(host_ctxt)->__hyp_running_vcpu->arch.hyp_state : NULL
+
 struct kvm_pmu_events {
 	u32 events_host;
 	u32 events_guest;
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 1776efc3cc9d..1ecc55570acc 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -107,6 +107,7 @@ int main(void)
   BLANK();
 #ifdef CONFIG_KVM
   DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
+  DEFINE(VCPU_HYPS,		offsetof(struct kvm_vcpu, arch.hyp_state));
   DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.hyp_state.fault.disr_el1));
   DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
   DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_cpu_context, regs));
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index 7bc8b34b65b2..df9cd2177e71 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -80,7 +80,7 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
 	    !cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL1),	SYS_SCTLR);
 		write_sysreg_el1(ctxt_sys_reg(ctxt, TCR_EL1),	SYS_TCR);
-	} else	if (!ctxt->__hyp_running_vcpu) {
+	} else	if (!is_hyp_running_vcpu(ctxt)) {
 		/*
 		 * Must only be done for guest registers, hence the context
 		 * test. We're coming from the host, so SCTLR.M is already
@@ -109,7 +109,7 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
 
 	if (!has_vhe() &&
 	    cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT) &&
-	    ctxt->__hyp_running_vcpu) {
+	    is_hyp_running_vcpu(ctxt)) {
 		/*
 		 * Must only be done for host registers, hence the context
 		 * test. Pairs with nVHE's __deactivate_traps().
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 164b0f899f7b..12c673301210 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -191,7 +191,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	}
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	host_ctxt->__hyp_running_vcpu = vcpu;
+	set_hyp_running_vcpu(host_ctxt, vcpu);
 	guest_ctxt = &vcpu->arch.ctxt;
 
 	pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
@@ -261,7 +261,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	if (system_uses_irq_prio_masking())
 		gic_write_pmr(GIC_PRIO_IRQOFF);
 
-	host_ctxt->__hyp_running_vcpu = NULL;
+	set_hyp_running_vcpu(host_ctxt, NULL);
 
 	return exit_code;
 }
@@ -274,12 +274,10 @@ void __noreturn hyp_panic(void)
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_vcpu *vcpu;
 	struct vcpu_hyp_state *vcpu_hyps;
-	struct kvm_cpu_context *vcpu_ctxt;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	vcpu = host_ctxt->__hyp_running_vcpu;
-	vcpu_hyps = &hyp_state(vcpu);
-	vcpu_ctxt = &vcpu_ctxt(vcpu);
+	vcpu = get_hyp_running_vcpu(host_ctxt);
+	vcpu_hyps = get_hyp_running_hyps(host_ctxt);
 
 	if (vcpu) {
 		__timer_disable_traps();
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index f315058a50ca..14c434e00914 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -117,7 +117,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	u64 exit_code;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	host_ctxt->__hyp_running_vcpu = vcpu;
+	set_hyp_running_vcpu(host_ctxt, vcpu);
 	guest_ctxt = &vcpu->arch.ctxt;
 
 	sysreg_save_host_state_vhe(host_ctxt);
@@ -205,12 +205,10 @@ static void __hyp_call_panic(u64 spsr, u64 elr, u64 par)
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_vcpu *vcpu;
 	struct vcpu_hyp_state *vcpu_hyps;
-	struct kvm_cpu_context *vcpu_ctxt;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	vcpu = host_ctxt->__hyp_running_vcpu;
-	vcpu_hyps = &hyp_state(vcpu);
-	vcpu_ctxt = &vcpu_ctxt(vcpu);
+	vcpu = get_hyp_running_vcpu(host_ctxt);
+	vcpu_hyps = get_hyp_running_hyps(host_ctxt);
 
 	__deactivate_traps(vcpu_hyps);
 	sysreg_restore_host_state_vhe(host_ctxt);
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 18/30] KVM: arm64: reduce scope of __guest_exit to only depend on kvm_cpu_context
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (16 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 17/30] KVM: arm64: access __hyp_running_vcpu via accessors only Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 19/30] KVM: arm64: change calls of get_loaded_vcpu to get_loaded_vcpu_ctxt Fuad Tabba
                   ` (11 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

__guest_exit only needs the kvm_cpu_context (via the offset
VCPU_CONTEXT). Pass only that to it, and adjust it so that it
refers to the kvm_cpu_context rather than the vcpu.
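
For reference, VCPU_CONTEXT is the asm-offsets constant for the
context embedded in the vcpu, so the pointer the entry code now hands
to __guest_exit corresponds, in C terms, to &vcpu->arch.ctxt (offset
definition as in asm-offsets.c):

  DEFINE(VCPU_CONTEXT,	offsetof(struct kvm_vcpu, arch.ctxt));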

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/entry.S     | 7 ++-----
 arch/arm64/kvm/hyp/hyp-entry.S | 8 ++++----
 2 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index e831d3dfd50d..996bdc9555da 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -99,15 +99,12 @@ SYM_INNER_LABEL(__guest_exit_panic, SYM_L_GLOBAL)
 	adr_l	x1, hyp_panic
 	str	x1, [x0, #CPU_XREG_OFFSET(30)]
 
-	get_vcpu_ptr	x1, x0
+	get_vcpu_ctxt_ptr	x1, x0
 
 SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
 	// x0: return code
-	// x1: vcpu
+	// x1: ctxt
 	// x2-x29,lr: vcpu regs
-	// vcpu x0-x1 on the stack
-
-	add	x1, x1, #VCPU_CONTEXT
 
 	ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN)
 
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 5f49df4ffdd8..704b3388c86a 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -71,17 +71,17 @@ wa_epilogue:
 	sb
 
 el1_trap:
-	get_vcpu_ptr	x1, x0
+	get_vcpu_ctxt_ptr	x1, x0
 	mov	x0, #ARM_EXCEPTION_TRAP
 	b	__guest_exit
 
 el1_irq:
-	get_vcpu_ptr	x1, x0
+	get_vcpu_ctxt_ptr	x1, x0
 	mov	x0, #ARM_EXCEPTION_IRQ
 	b	__guest_exit
 
 el1_error:
-	get_vcpu_ptr	x1, x0
+	get_vcpu_ctxt_ptr	x1, x0
 	mov	x0, #ARM_EXCEPTION_EL1_SERROR
 	b	__guest_exit
 
@@ -100,7 +100,7 @@ el2_sync:
 
 1:
 	/* Let's attempt a recovery from the illegal exception return */
-	get_vcpu_ptr	x1, x0
+	get_vcpu_ctxt_ptr	x1, x0
 	mov	x0, #ARM_EXCEPTION_IL
 	b	__guest_exit
 
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 19/30] KVM: arm64: change calls of get_loaded_vcpu to get_loaded_vcpu_ctxt
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (17 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 18/30] KVM: arm64: reduce scope of __guest_exit to only depend on kvm_cpu_context Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 20/30] KVM: arm64: add __hyp_running_ctxt and __hyp_running_hyps Fuad Tabba
                   ` (10 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

get_loaded_vcpu is used only as a NULL check.
get_loaded_vcpu_ctxt fills the same role and reduces the scope.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/entry.S     | 4 ++--
 arch/arm64/kvm/hyp/nvhe/host.S | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 996bdc9555da..1804be5b7ead 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -81,10 +81,10 @@ alternative_else_nop_endif
 
 SYM_INNER_LABEL(__guest_exit_panic, SYM_L_GLOBAL)
 	// x2-x29,lr: vcpu regs
-	// vcpu x0-x1 on the stack
+	// vcpu ctxt x0-x1 on the stack
 
 	// If the hyp context is loaded, go straight to hyp_panic
-	get_loaded_vcpu x0, x1
+	get_loaded_vcpu_ctxt x0, x1
 	cbnz	x0, 1f
 	b	hyp_panic
 
diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index 2b23400e0fb3..7de2e8716f69 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -134,7 +134,7 @@ SYM_FUNC_END(__hyp_do_panic)
 	.align 7
 	/* If a guest is loaded, panic out of it. */
 	stp	x0, x1, [sp, #-16]!
-	get_loaded_vcpu x0, x1
+	get_loaded_vcpu_ctxt x0, x1
 	cbnz	x0, __guest_exit_panic
 	add	sp, sp, #16
 
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 20/30] KVM: arm64: add __hyp_running_ctxt and __hyp_running_hyps
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (18 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 19/30] KVM: arm64: change calls of get_loaded_vcpu to get_loaded_vcpu_ctxt Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 21/30] KVM: arm64: transition code to " Fuad Tabba
                   ` (9 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

In order to prepare to remove __hyp_running_vcpu, add
__hyp_running_ctxt and __hyp_running_hyps to access the running
kvm_cpu_ctxt and the hyp_state, as well as their associated
assembly offsets.

These new fields are updated but not accessed yet. Their state is
consistent with __hyp_running_vcpu.
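
On the C side this is centralised in set_hyp_running_vcpu(), so
existing call sites are unchanged and keep all three fields consistent
(sketch of the usage, per the hunks below):

  set_hyp_running_vcpu(host_ctxt, vcpu);	/* sets vcpu, ctxt and hyps */
  ...
  set_hyp_running_vcpu(host_ctxt, NULL);	/* clears all three */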

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_asm.h  | 13 +++++++++++++
 arch/arm64/include/asm/kvm_host.h | 19 ++++++++++++++++---
 arch/arm64/kernel/asm-offsets.c   |  2 ++
 arch/arm64/kvm/hyp/entry.S        |  2 +-
 4 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 766b6a852407..52079e937fcd 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -271,6 +271,19 @@ extern u32 __kvm_get_mdcr_el2(void);
 .macro set_loaded_vcpu vcpu, ctxt, tmp
 	adr_this_cpu \ctxt, kvm_hyp_ctxt, \tmp
 	str	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+
+	add	\tmp, \vcpu, #VCPU_CONTEXT
+	str	\tmp, [\ctxt, #HOST_CONTEXT_CTXT]
+
+	add	\tmp, \vcpu, #VCPU_HYPS
+	str	\tmp, [\ctxt, #HOST_CONTEXT_HYPS]
+.endm
+
+.macro clear_loaded_vcpu ctxt, tmp
+	adr_this_cpu \ctxt, kvm_hyp_ctxt, \tmp
+	str	xzr, [\ctxt, #HOST_CONTEXT_VCPU]
+	str	xzr, [\ctxt, #HOST_CONTEXT_CTXT]
+	str	xzr, [\ctxt, #HOST_CONTEXT_HYPS]
 .endm
 
 .macro get_loaded_vcpu_ctxt vcpu, ctxt
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 4b01c74705ad..b42d0c6c8004 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -228,14 +228,27 @@ struct kvm_cpu_context {
 	u64 sys_regs[NR_SYS_REGS];
 
 	struct kvm_vcpu *__hyp_running_vcpu;
+	struct kvm_cpu_context *__hyp_running_ctxt;
+	struct vcpu_hyp_state *__hyp_running_hyps;
 };
 
 #define get_hyp_running_vcpu(ctxt) (ctxt)->__hyp_running_vcpu
-#define set_hyp_running_vcpu(ctxt, vcpu) (ctxt)->__hyp_running_vcpu = (vcpu)
+#define set_hyp_running_vcpu(host_ctxt, vcpu) do { \
+	struct kvm_vcpu *v = (vcpu); \
+	(host_ctxt)->__hyp_running_vcpu = v; \
+	if (vcpu) { \
+		(host_ctxt)->__hyp_running_ctxt = &v->arch.ctxt; \
+		(host_ctxt)->__hyp_running_hyps = &v->arch.hyp_state; \
+	} else { \
+		(host_ctxt)->__hyp_running_ctxt = NULL; \
+		(host_ctxt)->__hyp_running_hyps = NULL;	\
+	}\
+} while(0)
+
 #define is_hyp_running_vcpu(ctxt) (ctxt)->__hyp_running_vcpu
 
-#define get_hyp_running_ctxt(host_ctxt) (host_ctxt)->__hyp_running_vcpu ? &(host_ctxt)->__hyp_running_vcpu->arch.ctxt : NULL
-#define get_hyp_running_hyps(host_ctxt) (host_ctxt)->__hyp_running_vcpu ? &(host_ctxt)->__hyp_running_vcpu->arch.hyp_state : NULL
+#define get_hyp_running_ctxt(host_ctxt) (host_ctxt)->__hyp_running_ctxt
+#define get_hyp_running_hyps(host_ctxt) (host_ctxt)->__hyp_running_hyps
 
 struct kvm_pmu_events {
 	u32 events_host;
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 1ecc55570acc..9c25078da294 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -117,6 +117,8 @@ int main(void)
   DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
   DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
   DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
+  DEFINE(HOST_CONTEXT_CTXT,	offsetof(struct kvm_cpu_context, __hyp_running_ctxt));
+  DEFINE(HOST_CONTEXT_HYPS,	offsetof(struct kvm_cpu_context, __hyp_running_hyps));
   DEFINE(HOST_DATA_CONTEXT,	offsetof(struct kvm_host_data, host_ctxt));
   DEFINE(NVHE_INIT_MAIR_EL2,	offsetof(struct kvm_nvhe_init_params, mair_el2));
   DEFINE(NVHE_INIT_TCR_EL2,	offsetof(struct kvm_nvhe_init_params, tcr_el2));
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 1804be5b7ead..8e7033aa5770 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -145,7 +145,7 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
 	// Now restore the hyp regs
 	restore_callee_saved_regs x2
 
-	set_loaded_vcpu xzr, x2, x3
+	clear_loaded_vcpu x2, x3
 
 alternative_if ARM64_HAS_RAS_EXTN
 	// If we have the RAS extensions we can consume a pending error
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 21/30] KVM: arm64: transition code to __hyp_running_ctxt and __hyp_running_hyps
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (19 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 20/30] KVM: arm64: add __hyp_running_ctxt and __hyp_running_hyps Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 22/30] KVM: arm64: reduce scope of __guest_enter to depend only on kvm_cpu_ctxt Fuad Tabba
                   ` (8 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Transition the code to use the new hyp_running pointers. This is safe
because all of the fields are kept in sync.

Remove __hyp_running_vcpu now that no one is using it.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
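As a reviewer aid, the invariant this relies on can be shown with a small
standalone C model (illustration only, not kernel code; the structures are
stubbed out and the macro is modelled as a plain function):

#include <stdio.h>
#include <stddef.h>

struct kvm_cpu_context { int dummy; };  /* stub for the guest register context */
struct vcpu_hyp_state  { int dummy; };  /* stub for the hyp-owned vcpu state */

struct kvm_vcpu {
        struct {
                struct kvm_cpu_context ctxt;
                struct vcpu_hyp_state hyp_state;
        } arch;
};

/* The host context only keeps the pointers that hyp actually needs. */
struct host_context {
        struct kvm_cpu_context *__hyp_running_ctxt;
        struct vcpu_hyp_state *__hyp_running_hyps;
};

/* Model of set_hyp_running_vcpu(): both cached pointers are set, or cleared, together. */
static void set_hyp_running_vcpu(struct host_context *host_ctxt,
                                 struct kvm_vcpu *vcpu)
{
        host_ctxt->__hyp_running_ctxt = vcpu ? &vcpu->arch.ctxt : NULL;
        host_ctxt->__hyp_running_hyps = vcpu ? &vcpu->arch.hyp_state : NULL;
}

int main(void)
{
        struct kvm_vcpu vcpu = { 0 };
        struct host_context host = { 0 };

        set_hyp_running_vcpu(&host, &vcpu);
        printf("loaded:  ctxt=%p hyps=%p\n",
               (void *)host.__hyp_running_ctxt, (void *)host.__hyp_running_hyps);

        set_hyp_running_vcpu(&host, NULL);
        printf("cleared: ctxt=%p hyps=%p\n",
               (void *)host.__hyp_running_ctxt, (void *)host.__hyp_running_hyps);

        return 0;
}
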
 arch/arm64/include/asm/kvm_asm.h  | 24 ++++--------------------
 arch/arm64/include/asm/kvm_host.h |  5 +----
 arch/arm64/kernel/asm-offsets.c   |  1 -
 arch/arm64/kvm/handle_exit.c      |  6 +++---
 arch/arm64/kvm/hyp/nvhe/host.S    |  2 +-
 arch/arm64/kvm/hyp/nvhe/switch.c  |  4 +---
 arch/arm64/kvm/hyp/vhe/switch.c   |  8 ++++----
 7 files changed, 14 insertions(+), 36 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 52079e937fcd..e24ebcf9e0d3 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -246,31 +246,18 @@ extern u32 __kvm_get_mdcr_el2(void);
 	add	\reg, \reg, #HOST_DATA_CONTEXT
 .endm
 
-.macro get_vcpu_ptr vcpu, ctxt
-	get_host_ctxt \ctxt, \vcpu
-	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
-.endm
-
 .macro get_vcpu_ctxt_ptr vcpu, ctxt
 	get_host_ctxt \ctxt, \vcpu
-	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
-	add	\vcpu, \vcpu, #VCPU_CONTEXT
+	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_CTXT]
 .endm
 
 .macro get_vcpu_hyps_ptr vcpu, ctxt
 	get_host_ctxt \ctxt, \vcpu
-	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
-	add	\vcpu, \vcpu, #VCPU_HYPS
-.endm
-
-.macro get_loaded_vcpu vcpu, ctxt
-	adr_this_cpu \ctxt, kvm_hyp_ctxt, \vcpu
-	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_HYPS]
 .endm
 
 .macro set_loaded_vcpu vcpu, ctxt, tmp
 	adr_this_cpu \ctxt, kvm_hyp_ctxt, \tmp
-	str	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
 
 	add	\tmp, \vcpu, #VCPU_CONTEXT
 	str	\tmp, [\ctxt, #HOST_CONTEXT_CTXT]
@@ -281,21 +268,18 @@ extern u32 __kvm_get_mdcr_el2(void);
 
 .macro clear_loaded_vcpu ctxt, tmp
 	adr_this_cpu \ctxt, kvm_hyp_ctxt, \tmp
-	str	xzr, [\ctxt, #HOST_CONTEXT_VCPU]
 	str	xzr, [\ctxt, #HOST_CONTEXT_CTXT]
 	str	xzr, [\ctxt, #HOST_CONTEXT_HYPS]
 .endm
 
 .macro get_loaded_vcpu_ctxt vcpu, ctxt
 	adr_this_cpu \ctxt, kvm_hyp_ctxt, \vcpu
-	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
-	add	\vcpu, \vcpu, #VCPU_CONTEXT
+	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_CTXT]
 .endm
 
 .macro get_loaded_vcpu_hyps vcpu, ctxt
 	adr_this_cpu \ctxt, kvm_hyp_ctxt, \vcpu
-	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
-	add	\vcpu, \vcpu, #VCPU_HYPS
+	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_HYPS]
 .endm
 
 /*
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index b42d0c6c8004..035ca5a49166 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -227,15 +227,12 @@ struct kvm_cpu_context {
 
 	u64 sys_regs[NR_SYS_REGS];
 
-	struct kvm_vcpu *__hyp_running_vcpu;
 	struct kvm_cpu_context *__hyp_running_ctxt;
 	struct vcpu_hyp_state *__hyp_running_hyps;
 };
 
-#define get_hyp_running_vcpu(ctxt) (ctxt)->__hyp_running_vcpu
 #define set_hyp_running_vcpu(host_ctxt, vcpu) do { \
 	struct kvm_vcpu *v = (vcpu); \
-	(host_ctxt)->__hyp_running_vcpu = v; \
 	if (vcpu) { \
 		(host_ctxt)->__hyp_running_ctxt = &v->arch.ctxt; \
 		(host_ctxt)->__hyp_running_hyps = &v->arch.hyp_state; \
@@ -245,7 +242,7 @@ struct kvm_cpu_context {
 	}\
 } while(0)
 
-#define is_hyp_running_vcpu(ctxt) (ctxt)->__hyp_running_vcpu
+#define is_hyp_running_vcpu(ctxt) (ctxt)->__hyp_running_ctxt
 
 #define get_hyp_running_ctxt(host_ctxt) (host_ctxt)->__hyp_running_ctxt
 #define get_hyp_running_hyps(host_ctxt) (host_ctxt)->__hyp_running_hyps
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 9c25078da294..f42aea730cf4 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -116,7 +116,6 @@ int main(void)
   DEFINE(CPU_APDAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
   DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
   DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
-  DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
   DEFINE(HOST_CONTEXT_CTXT,	offsetof(struct kvm_cpu_context, __hyp_running_ctxt));
   DEFINE(HOST_CONTEXT_HYPS,	offsetof(struct kvm_cpu_context, __hyp_running_hyps));
   DEFINE(HOST_DATA_CONTEXT,	offsetof(struct kvm_host_data, host_ctxt));
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 22e9f03fe901..cb6a25b79e38 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -293,7 +293,7 @@ void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index)
 }
 
 void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr,
-					      u64 par, uintptr_t vcpu,
+					      u64 par, uintptr_t vcpu_ctxt,
 					      u64 far, u64 hpfar) {
 	u64 elr_in_kimg = __phys_to_kimg(__hyp_pa(elr));
 	u64 hyp_offset = elr_in_kimg - kaslr_offset() - elr;
@@ -333,6 +333,6 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr,
 	 */
 	kvm_err("Hyp Offset: 0x%llx\n", hyp_offset);
 
-	panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%016lx\n",
-	      spsr, elr, esr, far, hpfar, par, vcpu);
+	panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU_CTXT:%016lx\n",
+	      spsr, elr, esr, far, hpfar, par, vcpu_ctxt);
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index 7de2e8716f69..975cf125d54c 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -87,7 +87,7 @@ SYM_FUNC_START(__hyp_do_panic)
 
 	/* Load the panic arguments into x0-7 */
 	mrs	x0, esr_el2
-	get_vcpu_ptr x4, x5
+	get_vcpu_ctxt_ptr x4, x5
 	mrs	x5, far_el2
 	mrs	x6, hpfar_el2
 	mov	x7, xzr			// Unused argument
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 12c673301210..483df8fe052e 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -272,14 +272,12 @@ void __noreturn hyp_panic(void)
 	u64 elr = read_sysreg_el2(SYS_ELR);
 	u64 par = read_sysreg_par();
 	struct kvm_cpu_context *host_ctxt;
-	struct kvm_vcpu *vcpu;
 	struct vcpu_hyp_state *vcpu_hyps;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	vcpu = get_hyp_running_vcpu(host_ctxt);
 	vcpu_hyps = get_hyp_running_hyps(host_ctxt);
 
-	if (vcpu) {
+	if (vcpu_hyps) {
 		__timer_disable_traps();
 		__deactivate_traps(vcpu_hyps);
 		__load_host_stage2();
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 14c434e00914..64de9f0d7636 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -203,20 +203,20 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 static void __hyp_call_panic(u64 spsr, u64 elr, u64 par)
 {
 	struct kvm_cpu_context *host_ctxt;
-	struct kvm_vcpu *vcpu;
+	struct kvm_cpu_context *vcpu_ctxt;
 	struct vcpu_hyp_state *vcpu_hyps;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	vcpu = get_hyp_running_vcpu(host_ctxt);
+	vcpu_ctxt = get_hyp_running_ctxt(host_ctxt);
 	vcpu_hyps = get_hyp_running_hyps(host_ctxt);
 
 	__deactivate_traps(vcpu_hyps);
 	sysreg_restore_host_state_vhe(host_ctxt);
 
-	panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n",
+	panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU_CTXT:%p\n",
 	      spsr, elr,
 	      read_sysreg_el2(SYS_ESR), read_sysreg_el2(SYS_FAR),
-	      read_sysreg(hpfar_el2), par, vcpu);
+	      read_sysreg(hpfar_el2), par, vcpu_ctxt);
 }
 NOKPROBE_SYMBOL(__hyp_call_panic);
 
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 22/30] KVM: arm64: reduce scope of __guest_enter to depend only on kvm_cpu_ctxt
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (20 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 21/30] KVM: arm64: transition code to " Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 23/30] KVM: arm64: COCCI: remove_unused.cocci: remove unused ctxt and hypstate variables Fuad Tabba
                   ` (7 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

__guest_enter() doesn't need the vcpu, only the guest's kvm_cpu_ctxt.
Reduce its scope to that.

With this commit, the only state in struct kvm_vcpu that the
hypervisor needs to save locally in future patches is the guest
context (kvm_cpu_context) and the hypervisor state (vcpu_hyp_state).

Signed-off-by: Fuad Tabba <tabba@google.com>
---
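For reference, the resulting run-loop shape, condensed from the switch.c
hunks below (illustrative sketch, not a verbatim quote):

        do {
                struct kvm_cpu_context *hyp_ctxt = this_cpu_ptr(&kvm_hyp_ctxt);

                /* Cache the running vcpu's ctxt/hyp_state for hyp-side consumers... */
                set_hyp_running_vcpu(hyp_ctxt, vcpu);

                /* ...so that the low-level entry only needs the guest context. */
                exit_code = __guest_enter(guest_ctxt);
        } while (fixup_guest_exit(vcpu, vgic, &exit_code));
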
 arch/arm64/include/asm/kvm_hyp.h |  2 +-
 arch/arm64/kvm/hyp/entry.S       | 10 ++++------
 arch/arm64/kvm/hyp/nvhe/switch.c |  5 ++++-
 arch/arm64/kvm/hyp/vhe/switch.c  |  5 ++++-
 4 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index b379c2b96f33..c5206e958136 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -100,7 +100,7 @@ void activate_traps_vhe_load(struct vcpu_hyp_state *vcpu_hyps);
 void deactivate_traps_vhe_put(void);
 #endif
 
-u64 __guest_enter(struct kvm_vcpu *vcpu);
+u64 __guest_enter(struct kvm_cpu_context *guest_ctxt);
 
 bool kvm_host_psci_handler(struct kvm_cpu_context *host_ctxt);
 
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 8e7033aa5770..f553f184e402 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -18,12 +18,12 @@
 	.text
 
 /*
- * u64 __guest_enter(struct kvm_vcpu *vcpu);
+ * u64 __guest_enter(struct kvm_cpu_context *guest_ctxt);
  */
 SYM_FUNC_START(__guest_enter)
-	// x0: vcpu
+	// x0: guest context (input parameter)
 	// x1-x17: clobbered by macros
-	// x29: guest context
+	// x29: guest context (maintained for call duration)
 
 	adr_this_cpu x1, kvm_hyp_ctxt, x2
 
@@ -47,9 +47,7 @@ alternative_else_nop_endif
 	ret
 
 1:
-	set_loaded_vcpu x0, x1, x2
-
-	add	x29, x0, #VCPU_CONTEXT
+	mov	x29, x0
 
 	// Macro ptrauth_switch_to_guest format:
 	// 	ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3)
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 483df8fe052e..d9a69e66158c 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -228,8 +228,11 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__debug_switch_to_guest(vcpu);
 
 	do {
+		struct kvm_cpu_context *hyp_ctxt = this_cpu_ptr(&kvm_hyp_ctxt);
+		set_hyp_running_vcpu(hyp_ctxt, vcpu);
+
 		/* Jump in the fire! */
-		exit_code = __guest_enter(vcpu);
+		exit_code = __guest_enter(guest_ctxt);
 
 		/* And we're baaack! */
 	} while (fixup_guest_exit(vcpu, vgic, &exit_code));
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 64de9f0d7636..5039910a7c80 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -142,8 +142,11 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	__debug_switch_to_guest(vcpu);
 
 	do {
+		struct kvm_cpu_context *hyp_ctxt = this_cpu_ptr(&kvm_hyp_ctxt);
+		set_hyp_running_vcpu(hyp_ctxt, vcpu);
+
 		/* Jump in the fire! */
-		exit_code = __guest_enter(vcpu);
+		exit_code = __guest_enter(guest_ctxt);
 
 		/* And we're baaack! */
 	} while (fixup_guest_exit(vcpu, vgic, &exit_code));
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 23/30] KVM: arm64: COCCI: remove_unused.cocci: remove unused ctxt and hypstate variables
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (21 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 22/30] KVM: arm64: reduce scope of __guest_enter to depend only on kvm_cpu_ctxt Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 24/30] KVM: arm64: remove unused functions Fuad Tabba
                   ` (6 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

These local variables were added aggressively; remove the ones that
ended up not being used. Some of the remaining added variables are
also missing a blank line after their definition, so insert one for
those.

This applies the semantic patch with the following command:
spatch --sp-file cocci_refactor/remove_unused.cocci --dir arch/arm64/kvm/hyp --in-place --include-headers --force-diff

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/exception.c             | 5 -----
 arch/arm64/kvm/hyp/include/hyp/switch.h    | 9 ++++-----
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 2 ++
 arch/arm64/kvm/hyp/nvhe/switch.c           | 1 -
 arch/arm64/kvm/hyp/vhe/switch.c            | 3 ---
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c         | 3 ---
 6 files changed, 6 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index a08806efe031..bb0bc1f5568c 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -59,31 +59,26 @@ static void __ctxt_write_spsr_und(struct kvm_cpu_context *vcpu_ctxt, u64 val)
 
 static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
 {
-	const struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	return __ctxt_read_sys_reg(&vcpu_ctxt(vcpu), reg);
 }
 
 static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	__ctxt_write_sys_reg(&vcpu_ctxt(vcpu), val, reg);
 }
 
 static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	__ctxt_write_spsr(&vcpu_ctxt(vcpu), val);
 }
 
 static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	__ctxt_write_spsr_abt(&vcpu_ctxt(vcpu), val);
 }
 
 static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	__ctxt_write_spsr_und(&vcpu_ctxt(vcpu), val);
 }
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 44e76993a9b4..433601f79b94 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -37,6 +37,7 @@ extern struct exception_table_entry __stop___kvm_ex_table;
 static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
+
 	/*
 	 * When the system doesn't support FP/SIMD, we cannot rely on
 	 * the _TIF_FOREIGN_FPSTATE flag. However, we always inject an
@@ -55,8 +56,8 @@ static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
 /* Save the 32-bit only FPSIMD system register state */
 static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
 
@@ -65,8 +66,6 @@ static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu)
 
 static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	/*
 	 * We are about to set CPTR_EL2.TFP to trap all floating point
 	 * register accesses to EL2, however, the ARM ARM clearly states that
@@ -220,8 +219,8 @@ static inline void __hyp_sve_save_host(struct kvm_vcpu *vcpu)
 
 static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+
 	sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
 	__sve_restore_state(vcpu_sve_pffr(vcpu),
 			    &ctxt_fp_regs(vcpu_ctxt)->fpsr);
@@ -395,7 +394,6 @@ DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm_cpu_context *ctxt;
 	u64 val;
 
@@ -428,6 +426,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgi
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+
 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
 		hyp_state_fault(vcpu_hyps).esr_el2 = read_sysreg_el2(SYS_ESR);
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index df9cd2177e71..b750ff40a604 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -160,6 +160,7 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
 
@@ -179,6 +180,7 @@ static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
 
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index d9a69e66158c..b90ec8db5864 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -37,7 +37,6 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val;
 
 	___activate_traps(vcpu_hyps);
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 5039910a7c80..7f926016cebe 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -34,7 +34,6 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val;
 
 	___activate_traps(vcpu_hyps);
@@ -168,8 +167,6 @@ NOKPROBE_SYMBOL(__kvm_vcpu_run_vhe);
 
 int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	int ret;
 
 	local_daif_mask();
diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
index 1571c144e9b0..1ded8be83c5a 100644
--- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
@@ -64,7 +64,6 @@ NOKPROBE_SYMBOL(sysreg_restore_guest_state_vhe);
 void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
 	struct kvm_cpu_context *host_ctxt;
 
@@ -99,8 +98,6 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
  */
 void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
 	struct kvm_cpu_context *host_ctxt;
 
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 24/30] KVM: arm64: remove unused functions
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (22 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 23/30] KVM: arm64: COCCI: remove_unused.cocci: remove unused ctxt and hypstate variables Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 25/30] KVM: arm64: separate kvm_run() for protected VMs Fuad Tabba
                   ` (5 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

The __vcpu_write_spsr*() functions are not used anymore.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/exception.c | 15 ---------------
 1 file changed, 15 deletions(-)

diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index bb0bc1f5568c..fdfc809f61b8 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -67,21 +67,6 @@ static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
 	__ctxt_write_sys_reg(&vcpu_ctxt(vcpu), val, reg);
 }
 
-static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val)
-{
-	__ctxt_write_spsr(&vcpu_ctxt(vcpu), val);
-}
-
-static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val)
-{
-	__ctxt_write_spsr_abt(&vcpu_ctxt(vcpu), val);
-}
-
-static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
-{
-	__ctxt_write_spsr_und(&vcpu_ctxt(vcpu), val);
-}
-
 /*
  * This performs the exception entry at a given EL (@target_mode), stashing PC
  * and PSTATE into ELR and SPSR respectively, and compute the new PC/PSTATE.
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 25/30] KVM: arm64: separate kvm_run() for protected VMs
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (23 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 24/30] KVM: arm64: remove unused functions Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 26/30] KVM: arm64: pVM activate_traps to use vcpu_ctxt and vcpu_hyp_state Fuad Tabba
                   ` (4 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Split kvm_run() into protected and non-protected variants. Protected
VMs support fewer features, so separating the two paths will ease the
refactoring and simplify the code.

For now, this patch only replicates the code from the non-protected
case, to make it easier to diff against future patches.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
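The split is selected at the top level, condensed from the hunk below
(illustrative sketch):

/* Switch to the guest for non-VHE and protected KVM systems */
int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
{
        vcpu = kern_hyp_va(vcpu);

        if (likely(!kvm_vm_is_protected(kern_hyp_va(vcpu->kvm))))
                return __kvm_vcpu_run_nvhe(vcpu);
        else
                return __kvm_vcpu_run_pvm(vcpu);
}
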
 arch/arm64/kvm/hyp/nvhe/switch.c | 119 ++++++++++++++++++++++++++++++-
 1 file changed, 116 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index b90ec8db5864..9e79f97ba49e 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -119,7 +119,7 @@ static void __hyp_vgic_save_state(struct kvm_vcpu *vcpu)
 	}
 }
 
-/* Restore VGICv3 state on non_VEH systems */
+/* Restore VGICv3 state on nVHE systems */
 static void __hyp_vgic_restore_state(struct kvm_vcpu *vcpu)
 {
 	if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) {
@@ -166,8 +166,110 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
 		write_sysreg(pmu->events_host, pmcntenset_el0);
 }
 
-/* Switch to the guest for legacy non-VHE systems */
-int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+/* Switch to the non-protected guest */
+static int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_hyp_state *vcpu_hyps = &vcpu->arch.hyp_state;
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu->arch.ctxt;
+	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
+	struct vgic_dist *vgic = &kvm->arch.vgic;
+	struct kvm_cpu_context *host_ctxt;
+	struct kvm_cpu_context *guest_ctxt;
+	bool pmu_switch_needed;
+	u64 exit_code;
+
+	/*
+	 * Having IRQs masked via PMR when entering the guest means the GIC
+	 * will not signal the CPU of interrupts of lower priority, and the
+	 * only way to get out will be via guest exceptions.
+	 * Naturally, we want to avoid this.
+	 */
+	if (system_uses_irq_prio_masking()) {
+		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+		pmr_sync();
+	}
+
+	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
+	set_hyp_running_vcpu(host_ctxt, vcpu);
+	guest_ctxt = &vcpu->arch.ctxt;
+
+	pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
+
+	__sysreg_save_state_nvhe(host_ctxt);
+	/*
+	 * We must flush and disable the SPE buffer for nVHE, as
+	 * the translation regime(EL1&0) is going to be loaded with
+	 * that of the guest. And we must do this before we change the
+	 * translation regime to EL2 (via MDCR_EL2_E2PB == 0) and
+	 * before we load guest Stage1.
+	 */
+	__debug_save_host_buffers_nvhe(vcpu);
+
+	kvm_adjust_pc(vcpu_ctxt, vcpu_hyps);
+
+	/*
+	 * We must restore the 32-bit state before the sysregs, thanks
+	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
+	 *
+	 * Also, and in order to be able to deal with erratum #1319537 (A57)
+	 * and #1319367 (A72), we must ensure that all VM-related sysreg are
+	 * restored before we enable S2 translation.
+	 */
+	__sysreg32_restore_state(vcpu);
+	__sysreg_restore_state_nvhe(guest_ctxt);
+
+	__load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu));
+	__activate_traps(vcpu);
+
+	__hyp_vgic_restore_state(vcpu);
+	__timer_enable_traps();
+
+	__debug_switch_to_guest(vcpu);
+
+	do {
+		struct kvm_cpu_context *hyp_ctxt = this_cpu_ptr(&kvm_hyp_ctxt);
+		set_hyp_running_vcpu(hyp_ctxt, vcpu);
+
+		/* Jump in the fire! */
+		exit_code = __guest_enter(guest_ctxt);
+
+		/* And we're baaack! */
+	} while (fixup_guest_exit(vcpu, vgic, &exit_code));
+
+	__sysreg_save_state_nvhe(guest_ctxt);
+	__sysreg32_save_state(vcpu);
+	__timer_disable_traps();
+	__hyp_vgic_save_state(vcpu);
+
+	__deactivate_traps(vcpu_hyps);
+	__load_host_stage2();
+
+	__sysreg_restore_state_nvhe(host_ctxt);
+
+	if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED)
+		__fpsimd_save_fpexc32(vcpu);
+
+	__debug_switch_to_host(vcpu);
+	/*
+	 * This must come after restoring the host sysregs, since a non-VHE
+	 * system may enable SPE here and make use of the TTBRs.
+	 */
+	__debug_restore_host_buffers_nvhe(vcpu);
+
+	if (pmu_switch_needed)
+		__pmu_switch_to_host(host_ctxt);
+
+	/* Returning to host will clear PSR.I, remask PMR if needed */
+	if (system_uses_irq_prio_masking())
+		gic_write_pmr(GIC_PRIO_IRQOFF);
+
+	set_hyp_running_vcpu(host_ctxt, NULL);
+
+	return exit_code;
+}
+
+/* Switch to the protected guest */
+static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
@@ -268,6 +370,17 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	return exit_code;
 }
 
+/* Switch to the guest for non-VHE and protected KVM systems */
+int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+{
+	vcpu = kern_hyp_va(vcpu);
+
+	if (likely(!kvm_vm_is_protected(kern_hyp_va(vcpu->kvm))))
+		return __kvm_vcpu_run_nvhe(vcpu);
+	else
+		return __kvm_vcpu_run_pvm(vcpu);
+}
+
 void __noreturn hyp_panic(void)
 {
 	u64 spsr = read_sysreg_el2(SYS_SPSR);
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 26/30] KVM: arm64: pVM activate_traps to use vcpu_ctxt and vcpu_hyp_state
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (24 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 25/30] KVM: arm64: separate kvm_run() for protected VMs Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 27/30] KVM: arm64: remove unsupported pVM features Fuad Tabba
                   ` (3 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Refactor activate_traps for protected VMs so that it doesn't use the
vcpu. Protected 32-bit VMs are not supported, so the code that sets
the 32-bit floating point traps isn't needed for the pVM case.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/nvhe/switch.c | 35 +++++++++++++++++++++-----------
 1 file changed, 23 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 9e79f97ba49e..0d654b324612 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -34,9 +34,10 @@ DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
 DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
-static void __activate_traps(struct kvm_vcpu *vcpu)
+/* Activate traps for protected guests */
+static void __activate_traps_pvm(struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	u64 val;
 
 	___activate_traps(vcpu_hyps);
@@ -44,26 +45,36 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 
 	val = CPTR_EL2_DEFAULT;
 	val |= CPTR_EL2_TTA | CPTR_EL2_TAM;
-	if (!update_fp_enabled(vcpu)) {
-		val |= CPTR_EL2_TFP | CPTR_EL2_TZ;
-		__activate_traps_fpsimd32(vcpu);
-	}
 
 	write_sysreg(val, cptr_el2);
 	write_sysreg(__this_cpu_read(kvm_hyp_vector), vbar_el2);
 
 	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
-		struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt;
-
 		isb();
 		/*
 		 * At this stage, and thanks to the above isb(), S2 is
 		 * configured and enabled. We can now restore the guest's S1
 		 * configuration: SCTLR, and only then TCR.
 		 */
-		write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL1),	SYS_SCTLR);
+		write_sysreg_el1(ctxt_sys_reg(vcpu_ctxt, SCTLR_EL1), SYS_SCTLR);
 		isb();
-		write_sysreg_el1(ctxt_sys_reg(ctxt, TCR_EL1),	SYS_TCR);
+		write_sysreg_el1(ctxt_sys_reg(vcpu_ctxt, TCR_EL1), SYS_TCR);
+	}
+}
+
+/* Activate traps for non-protected guests in nVHE */
+static void __activate_traps_nvhe(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu->arch.ctxt;
+
+	__activate_traps_pvm(vcpu_ctxt, vcpu_hyps);
+
+	if (!update_fp_enabled(vcpu)) {
+		u64 val = CPTR_EL2_DEFAULT | CPTR_EL2_TTA | CPTR_EL2_TAM |
+			  CPTR_EL2_TFP | CPTR_EL2_TZ;
+		__activate_traps_fpsimd32(vcpu);
+		write_sysreg(val, cptr_el2);
 	}
 }
 
@@ -219,7 +230,7 @@ static int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 	__sysreg_restore_state_nvhe(guest_ctxt);
 
 	__load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu));
-	__activate_traps(vcpu);
+	__activate_traps_nvhe(vcpu);
 
 	__hyp_vgic_restore_state(vcpu);
 	__timer_enable_traps();
@@ -321,7 +332,7 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu)
 	__sysreg_restore_state_nvhe(guest_ctxt);
 
 	__load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu));
-	__activate_traps(vcpu);
+	__activate_traps_pvm(vcpu_ctxt, vcpu_hyps);
 
 	__hyp_vgic_restore_state(vcpu);
 	__timer_enable_traps();
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 27/30] KVM: arm64: remove unsupported pVM features
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (25 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 26/30] KVM: arm64: pVM activate_traps to use vcpu_ctxt and vcpu_hyp_state Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 28/30] KVM: arm64: reduce scope of pVM fixup_guest_exit to hyp_state and kvm_cpu_ctxt Fuad Tabba
                   ` (2 subsequent siblings)
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Remove code for unsupported features for protected VMs from
__kvm_vcpu_run_pvm(). Do not run unsupported code (SVE) in
__hyp_handle_fpsimd().

Enforcement of this is in the fixed features patch series [1].
The code removed or disabled is related to the following:
- PMU
- Debug
- Arm32
- SPE
- SVE

[1]
Link: https://lore.kernel.org/kvmarm/20210922124704.600087-1-tabba@google.com/T/#u

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h |  5 ++--
 arch/arm64/kvm/hyp/nvhe/switch.c        | 36 -------------------------
 2 files changed, 3 insertions(+), 38 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 433601f79b94..3ef429cfd9af 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -232,6 +232,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	const bool is_protected = is_nvhe_hyp_code() && kvm_vm_is_protected(kern_hyp_va(vcpu->kvm));
 	bool sve_guest, sve_host;
 	u8 esr_ec;
 	u64 reg;
@@ -239,7 +240,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 	if (!system_supports_fpsimd())
 		return false;
 
-	if (system_supports_sve()) {
+	if (system_supports_sve() && !is_protected) {
 		sve_guest = hyp_state_has_sve(vcpu_hyps);
 		sve_host = hyp_state_flags(vcpu_hyps) & KVM_ARM64_HOST_SVE_IN_USE;
 	} else {
@@ -247,7 +248,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 		sve_host = false;
 	}
 
-	esr_ec = kvm_vcpu_trap_get_class(vcpu);
+	esr_ec = kvm_hyp_state_trap_get_class(vcpu_hyps);
 	if (esr_ec != ESR_ELx_EC_FP_ASIMD &&
 	    esr_ec != ESR_ELx_EC_SVE)
 		return false;
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 0d654b324612..aa0dc4f0433b 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -288,7 +288,6 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu)
 	struct vgic_dist *vgic = &kvm->arch.vgic;
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
-	bool pmu_switch_needed;
 	u64 exit_code;
 
 	/*
@@ -306,29 +305,10 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu)
 	set_hyp_running_vcpu(host_ctxt, vcpu);
 	guest_ctxt = &vcpu->arch.ctxt;
 
-	pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
-
 	__sysreg_save_state_nvhe(host_ctxt);
-	/*
-	 * We must flush and disable the SPE buffer for nVHE, as
-	 * the translation regime(EL1&0) is going to be loaded with
-	 * that of the guest. And we must do this before we change the
-	 * translation regime to EL2 (via MDCR_EL2_E2PB == 0) and
-	 * before we load guest Stage1.
-	 */
-	__debug_save_host_buffers_nvhe(vcpu);
 
 	kvm_adjust_pc(vcpu_ctxt, vcpu_hyps);
 
-	/*
-	 * We must restore the 32-bit state before the sysregs, thanks
-	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
-	 *
-	 * Also, and in order to be able to deal with erratum #1319537 (A57)
-	 * and #1319367 (A72), we must ensure that all VM-related sysreg are
-	 * restored before we enable S2 translation.
-	 */
-	__sysreg32_restore_state(vcpu);
 	__sysreg_restore_state_nvhe(guest_ctxt);
 
 	__load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu));
@@ -337,8 +317,6 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu)
 	__hyp_vgic_restore_state(vcpu);
 	__timer_enable_traps();
 
-	__debug_switch_to_guest(vcpu);
-
 	do {
 		struct kvm_cpu_context *hyp_ctxt = this_cpu_ptr(&kvm_hyp_ctxt);
 		set_hyp_running_vcpu(hyp_ctxt, vcpu);
@@ -350,7 +328,6 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu)
 	} while (fixup_guest_exit(vcpu, vgic, &exit_code));
 
 	__sysreg_save_state_nvhe(guest_ctxt);
-	__sysreg32_save_state(vcpu);
 	__timer_disable_traps();
 	__hyp_vgic_save_state(vcpu);
 
@@ -359,19 +336,6 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu)
 
 	__sysreg_restore_state_nvhe(host_ctxt);
 
-	if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED)
-		__fpsimd_save_fpexc32(vcpu);
-
-	__debug_switch_to_host(vcpu);
-	/*
-	 * This must come after restoring the host sysregs, since a non-VHE
-	 * system may enable SPE here and make use of the TTBRs.
-	 */
-	__debug_restore_host_buffers_nvhe(vcpu);
-
-	if (pmu_switch_needed)
-		__pmu_switch_to_host(host_ctxt);
-
 	/* Returning to host will clear PSR.I, remask PMR if needed */
 	if (system_uses_irq_prio_masking())
 		gic_write_pmr(GIC_PRIO_IRQOFF);
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 28/30] KVM: arm64: reduce scope of pVM fixup_guest_exit to hyp_state and kvm_cpu_ctxt
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (26 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 27/30] KVM: arm64: remove unsupported pVM features Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 29/30] [DONOTMERGE] Remove Coccinelle scripts added for refactoring Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 30/30] [DONOTMERGE] Re-enable warnings Fuad Tabba
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Reduce the scope of fixup_guest_exit for protected VMs so that it
only needs hyp_state and kvm_cpu_ctxt.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 23 +++++++++++++++++++----
 arch/arm64/kvm/hyp/nvhe/switch.c        |  7 ++-----
 arch/arm64/kvm/hyp/vhe/switch.c         |  3 +--
 3 files changed, 22 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 3ef429cfd9af..ea9571f712c6 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -423,11 +423,8 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
  * the guest, false when we should restore the host state and return to the
  * main run loop.
  */
-static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgic, u64 *exit_code)
+static inline bool _fixup_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgic, struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps, u64 *exit_code)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-
 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
 		hyp_state_fault(vcpu_hyps).esr_el2 = read_sysreg_el2(SYS_ESR);
 
@@ -518,6 +515,24 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgi
 	return true;
 }
 
+static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+	struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt;
+	struct vcpu_hyp_state *hyps = &vcpu->arch.hyp_state;
+	// TODO: create helper for getting VA
+	struct kvm *kvm = vcpu->kvm;
+
+	if (is_nvhe_hyp_code())
+		kvm = kern_hyp_va(kvm);
+
+	return _fixup_guest_exit(vcpu, &kvm->arch.vgic, ctxt, hyps, exit_code);
+}
+
+static inline bool fixup_pvm_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgic, struct kvm_cpu_context *ctxt, struct vcpu_hyp_state *hyps, u64 *exit_code)
+{
+	return _fixup_guest_exit(vcpu, vgic, ctxt, hyps, exit_code);
+}
+
 static inline void __kvm_unexpected_el2_exception(void)
 {
 	extern char __guest_exit_panic[];
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index aa0dc4f0433b..1920aebbe49a 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -182,8 +182,6 @@ static int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &vcpu->arch.hyp_state;
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu->arch.ctxt;
-	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
-	struct vgic_dist *vgic = &kvm->arch.vgic;
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
 	bool pmu_switch_needed;
@@ -245,7 +243,7 @@ static int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 		exit_code = __guest_enter(guest_ctxt);
 
 		/* And we're baaack! */
-	} while (fixup_guest_exit(vcpu, vgic, &exit_code));
+	} while (fixup_guest_exit(vcpu, &exit_code));
 
 	__sysreg_save_state_nvhe(guest_ctxt);
 	__sysreg32_save_state(vcpu);
@@ -285,7 +283,6 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu)
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
-	struct vgic_dist *vgic = &kvm->arch.vgic;
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
 	u64 exit_code;
@@ -325,7 +322,7 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu)
 		exit_code = __guest_enter(guest_ctxt);
 
 		/* And we're baaack! */
-	} while (fixup_guest_exit(vcpu, vgic, &exit_code));
+	} while (fixup_pvm_guest_exit(vcpu, &kvm->arch.vgic, vcpu_ctxt, vcpu_hyps, &exit_code));
 
 	__sysreg_save_state_nvhe(guest_ctxt);
 	__timer_disable_traps();
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 7f926016cebe..4a05aff37325 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -110,7 +110,6 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	struct vgic_dist *vgic = &vcpu->kvm->arch.vgic;
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
 	u64 exit_code;
@@ -148,7 +147,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 		exit_code = __guest_enter(guest_ctxt);
 
 		/* And we're baaack! */
-	} while (fixup_guest_exit(vcpu, vgic, &exit_code));
+	} while (fixup_guest_exit(vcpu, &exit_code));
 
 	sysreg_save_guest_state_vhe(guest_ctxt);
 
-- 
2.33.0.685.g46640cef36-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC PATCH v1 29/30] [DONOTMERGE] Remove Coccinelle scripts added for refactoring
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (27 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 28/30] KVM: arm64: reduce scope of pVM fixup_guest_exit to hyp_state and kvm_cpu_ctxt Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  2021-09-24 12:53 ` [RFC PATCH v1 30/30] [DONOTMERGE] Re-enable warnings Fuad Tabba
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

The scripts are not needed anymore; they were only included in the
series for the sake of the git history.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 cocci_refactor/add_ctxt.cocci           | 169 ------------------------
 cocci_refactor/add_hypstate.cocci       | 125 ------------------
 cocci_refactor/hyp_ctxt.cocci           |  38 ------
 cocci_refactor/range.cocci              |  50 -------
 cocci_refactor/remove_unused.cocci      |  69 ----------
 cocci_refactor/test.cocci               |  20 ---
 cocci_refactor/use_ctxt.cocci           |  32 -----
 cocci_refactor/use_ctxt_access.cocci    |  39 ------
 cocci_refactor/use_hypstate.cocci       |  63 ---------
 cocci_refactor/vcpu_arch_ctxt.cocci     |  13 --
 cocci_refactor/vcpu_declr.cocci         |  59 ---------
 cocci_refactor/vcpu_flags.cocci         |  10 --
 cocci_refactor/vcpu_hyp_accessors.cocci |  35 -----
 cocci_refactor/vcpu_hyp_state.cocci     |  30 -----
 cocci_refactor/vgic3_cpu.cocci          | 118 -----------------
 15 files changed, 870 deletions(-)
 delete mode 100644 cocci_refactor/add_ctxt.cocci
 delete mode 100644 cocci_refactor/add_hypstate.cocci
 delete mode 100644 cocci_refactor/hyp_ctxt.cocci
 delete mode 100644 cocci_refactor/range.cocci
 delete mode 100644 cocci_refactor/remove_unused.cocci
 delete mode 100644 cocci_refactor/test.cocci
 delete mode 100644 cocci_refactor/use_ctxt.cocci
 delete mode 100644 cocci_refactor/use_ctxt_access.cocci
 delete mode 100644 cocci_refactor/use_hypstate.cocci
 delete mode 100644 cocci_refactor/vcpu_arch_ctxt.cocci
 delete mode 100644 cocci_refactor/vcpu_declr.cocci
 delete mode 100644 cocci_refactor/vcpu_flags.cocci
 delete mode 100644 cocci_refactor/vcpu_hyp_accessors.cocci
 delete mode 100644 cocci_refactor/vcpu_hyp_state.cocci
 delete mode 100644 cocci_refactor/vgic3_cpu.cocci

diff --git a/cocci_refactor/add_ctxt.cocci b/cocci_refactor/add_ctxt.cocci
deleted file mode 100644
index 203644944ace..000000000000
--- a/cocci_refactor/add_ctxt.cocci
+++ /dev/null
@@ -1,169 +0,0 @@
-// <smpl>
-
-/*
-spatch --sp-file add_ctxt.cocci --dir arch/arm64/kvm/hyp --ignore arch/arm64/kvm/hyp/nvhe/debug-sr.c --ignore arch/arm64/kvm/hyp/vhe/debug-sr.c --include-headers --in-place
-*/
-
-
-@exists@
-identifier vcpu;
-fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
-identifier fc;
-@@
-<...
-(
-  struct kvm_vcpu *vcpu = NULL;
-+ struct kvm_cpu_context *vcpu_ctxt;
-|
-  struct kvm_vcpu *vcpu = ...;
-+ struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-|
-  struct kvm_vcpu *vcpu;
-+ struct kvm_cpu_context *vcpu_ctxt;
-)
-<...
-  vcpu = ...;
-+ vcpu_ctxt = &vcpu_ctxt(vcpu);
-...>
-fc(..., vcpu, ...)
-...>
-
-@exists@
-identifier func != {kvm_arch_vcpu_run_pid_change};
-identifier fc != {vcpu_ctxt};
-identifier vcpu;
-fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
-@@
-func(..., struct kvm_vcpu *vcpu, ...) {
-+ struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-<+...
-fc(..., vcpu, ...)
-...+>
- }
-
-@@
-expression a, b;
-identifier vcpu;
-fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
-iterator name kvm_for_each_vcpu;
-identifier fc;
-@@
-kvm_for_each_vcpu(a, vcpu, b)
- {
-+ vcpu_ctxt = &vcpu_ctxt(vcpu);
-<+...
-fc(..., vcpu, ...)
-...+>
- }
-
-@@
-identifier vcpu_ctxt, vcpu;
-iterator name kvm_for_each_vcpu;
-type T;
-identifier x;
-statement S1, S2;
-@@
-kvm_for_each_vcpu(...)
- {
-- vcpu_ctxt = &vcpu_ctxt(vcpu);
-... when != S1
-+ vcpu_ctxt = &vcpu_ctxt(vcpu);
- S2
- ... when any
- }
-
-@
-disable optional_qualifier
-exists
-@
-identifier vcpu;
-identifier vcpu_ctxt;
-@@
-<...
-  const struct kvm_vcpu *vcpu = ...;
-- struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-+ const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-...>
-
-@disable optional_qualifier@
-identifier func, vcpu;
-identifier vcpu_ctxt;
-@@
-func(..., const struct kvm_vcpu *vcpu, ...) {
-- struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-+ const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-...
- }
-
-@exists@
-expression r1, r2;
-identifier vcpu;
-fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
-@@
-(
-- vcpu_gp_regs(vcpu)
-+ ctxt_gp_regs(vcpu_ctxt)
-|
-- vcpu_spsr_abt(vcpu)
-+ ctxt_spsr_abt(vcpu_ctxt)
-|
-- vcpu_spsr_und(vcpu)
-+ ctxt_spsr_und(vcpu_ctxt)
-|
-- vcpu_spsr_irq(vcpu)
-+ ctxt_spsr_irq(vcpu_ctxt)
-|
-- vcpu_spsr_fiq(vcpu)
-+ ctxt_spsr_fiq(vcpu_ctxt)
-|
-- vcpu_fp_regs(vcpu)
-+ ctxt_fp_regs(vcpu_ctxt)
-|
-- __vcpu_sys_reg(vcpu, r1)
-+ ctxt_sys_reg(vcpu_ctxt, r1)
-|
-- __vcpu_read_sys_reg(vcpu, r1)
-+ __ctxt_read_sys_reg(vcpu_ctxt, r1)
-|
-- __vcpu_write_sys_reg(vcpu, r1, r2)
-+ __ctxt_write_sys_reg(vcpu_ctxt, r1, r2)
-|
-- __vcpu_write_spsr(vcpu, r1)
-+ __ctxt_write_spsr(vcpu_ctxt, r1)
-|
-- __vcpu_write_spsr_abt(vcpu, r1)
-+ __ctxt_write_spsr_abt(vcpu_ctxt, r1)
-|
-- __vcpu_write_spsr_und(vcpu, r1)
-+ __ctxt_write_spsr_und(vcpu_ctxt, r1)
-|
-- vcpu_pc(vcpu)
-+ ctxt_pc(vcpu_ctxt)
-|
-- vcpu_cpsr(vcpu)
-+ ctxt_cpsr(vcpu_ctxt)
-|
-- vcpu_mode_is_32bit(vcpu)
-+ ctxt_mode_is_32bit(vcpu_ctxt)
-|
-- vcpu_set_thumb(vcpu)
-+ ctxt_set_thumb(vcpu_ctxt)
-|
-- vcpu_get_reg(vcpu, r1)
-+ ctxt_get_reg(vcpu_ctxt, r1)
-|
-- vcpu_set_reg(vcpu, r1, r2)
-+ ctxt_set_reg(vcpu_ctxt, r1, r2)
-)
-
-
-/* Handles one case of a call within a call. */
-@@
-expression r1, r2;
-identifier vcpu;
-fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
-@@
-- vcpu_pc(vcpu)
-+ ctxt_pc(vcpu_ctxt)
-
-// </smpl>
diff --git a/cocci_refactor/add_hypstate.cocci b/cocci_refactor/add_hypstate.cocci
deleted file mode 100644
index e8635d0e8f57..000000000000
--- a/cocci_refactor/add_hypstate.cocci
+++ /dev/null
@@ -1,125 +0,0 @@
-// <smpl>
-
-/*
-FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]" ! -name "debug-sr*") arch/arm64/include/asm/kvm_hyp.h"
-spatch --sp-file add_hypstate.cocci $FILES --in-place
-*/
-
-@exists@
-identifier vcpu;
-fresh identifier hyps = vcpu ## "_hyps";
-identifier fc;
-@@
-<...
-(
-  struct kvm_vcpu *vcpu = NULL;
-+ struct vcpu_hyp_state *hyps;
-|
-  struct kvm_vcpu *vcpu = ...;
-+ struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
-|
-  struct kvm_vcpu *vcpu;
-+ struct vcpu_hyp_state *hyps;
-)
-<...
-  vcpu = ...;
-+ hyps = &hyp_state(vcpu);
-...>
-fc(..., vcpu, ...)
-...>
-
-@exists@
-identifier func != {kvm_arch_vcpu_run_pid_change};
-identifier vcpu;
-fresh identifier hyps = vcpu ## "_hyps";
-identifier fc;
-@@
-func(..., struct kvm_vcpu *vcpu, ...) {
-+ struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
-<+...
-fc(..., vcpu, ...)
-...+>
- }
-
-@@
-expression a, b;
-identifier vcpu;
-fresh identifier hyps = vcpu ## "_hyps";
-iterator name kvm_for_each_vcpu;
-identifier fc;
-@@
-kvm_for_each_vcpu(a, vcpu, b)
- {
-+ hyps = &hyp_state(vcpu);
-<+...
-fc(..., vcpu, ...)
-...+>
- }
-
-@@
-identifier hyps, vcpu;
-iterator name kvm_for_each_vcpu;
-statement S1, S2;
-@@
-kvm_for_each_vcpu(...)
- {
-- hyps = &hyp_state(vcpu);
-... when != S1
-+ hyps = &hyp_state(vcpu);
- S2
- ... when any
- }
-
-@
-disable optional_qualifier
-exists
-@
-identifier vcpu, hyps;
-@@
-<...
-  const struct kvm_vcpu *vcpu = ...;
-- struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
-+ const struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
-...>
-
-
-@@
-identifier func, vcpu, hyps;
-@@
-func(..., const struct kvm_vcpu *vcpu, ...) {
-- struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
-+ const struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
-...
- }
-
-@exists@
-identifier vcpu;
-fresh identifier hyps = vcpu ## "_hyps";
-@@
-(
-- vcpu_hcr_el2(vcpu)
-+ hyp_state_hcr_el2(hyps)
-|
-- vcpu_mdcr_el2(vcpu)
-+ hyp_state_mdcr_el2(hyps)
-|
-- vcpu_vsesr_el2(vcpu)
-+ hyp_state_vsesr_el2(hyps)
-|
-- vcpu_fault(vcpu)
-+ hyp_state_fault(hyps)
-|
-- vcpu_flags(vcpu)
-+ hyp_state_flags(hyps)
-|
-- vcpu_has_sve(vcpu)
-+ hyp_state_has_sve(hyps)
-|
-- vcpu_has_ptrauth(vcpu)
-+ hyp_state_has_ptrauth(hyps)
-|
-- kvm_arm_vcpu_sve_finalized(vcpu)
-+ kvm_arm_hyp_state_sve_finalized(hyps)
-)
-
-// </smpl>
diff --git a/cocci_refactor/hyp_ctxt.cocci b/cocci_refactor/hyp_ctxt.cocci
deleted file mode 100644
index af7974e3a502..000000000000
--- a/cocci_refactor/hyp_ctxt.cocci
+++ /dev/null
@@ -1,38 +0,0 @@
-// Remove vcpu if all we're using is hypstate and ctxt
-
-/*
-FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]")"
-spatch --sp-file hyp_ctxt.cocci $FILES --in-place;
-*/
-
-// <smpl>
-
-@remove@
-identifier func !~ "^trap_|^access_|dbg_to_reg|check_pmu_access_disabled|match_mpidr|get_ctr_el0|emulate_cp|unhandled_cp_access|index_to_sys_reg_desc|kvm_pmu_|pmu_counter_idx_valid|reset_|read_from_write_only|write_to_read_only|undef_access|vgic_|kvm_handle_|handle_sve|handle_smc|handle_no_fpsimd|id_visibility|reg_to_dbg|ptrauth_visibility|sve_visibility|kvm_arch_sched_in|kvm_arch_vcpu_|kvm_vcpu_pmu_|kvm_psci_|kvm_arm_copy_fw_reg_indices|kvm_arm_pvtime_|kvm_trng_|kvm_arm_timer_";
-identifier vcpu;
-fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
-fresh identifier vcpu_hyps = vcpu ## "_hyps";
-identifier hyps_remove;
-identifier ctxt_remove;
-@@
-func(...,
-- struct kvm_vcpu *vcpu
-+ struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps
-,...) {
-?- struct vcpu_hyp_state *hyps_remove = ...;
-?- struct kvm_cpu_context *ctxt_remove = ...;
-... when != vcpu
- }
-
-@@
-identifier vcpu;
-fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
-fresh identifier vcpu_hyps = vcpu ## "_hyps";
-identifier remove.func;
-@@
- func(
-- vcpu
-+ vcpu_ctxt, vcpu_hyps
-  , ...)
-
-// </smpl>
\ No newline at end of file
diff --git a/cocci_refactor/range.cocci b/cocci_refactor/range.cocci
deleted file mode 100644
index d99b9ee30657..000000000000
--- a/cocci_refactor/range.cocci
+++ /dev/null
@@ -1,50 +0,0 @@
-
-
-// <smpl>
-
-/*
- FILES="$(find arch/arm64 -name "*.[ch]") include/kvm/arm_hypercalls.h"; spatch --sp-file range.cocci $FILES
-*/
-
-@initialize:python@
-@@
-starts = ("start", "begin", "from", "floor", "addr", "kaddr")
-ends = ("size", "length", "len")
-
-//ends = ("end", "to", "ceiling", "size", "length", "len")
-
-
-@start_end@
-identifier f;
-type A, B;
-identifier start, end;
-parameter list[n] ps;
-@@
-f(ps, A start, B end, ...) {
-...
-}
-
-@script:python@
-start << start_end.start;
-end << start_end.end;
-ta << start_end.A;
-tb << start_end.B;
-@@
-
-if ta != tb and tb != "size_t":
-        cocci.include_match(False)
-elif not any(x in start for x in starts) and not any(x in end for x in ends):
-        cocci.include_match(False)
-
-@@
-identifier f = start_end.f;
-expression list[start_end.n] xs;
-expression a, b;
-@@
-(
-* f(xs, a, a, ...)
-|
-* f(xs, a, a - b, ...)
-)
-
-// </smpl>
\ No newline at end of file
diff --git a/cocci_refactor/remove_unused.cocci b/cocci_refactor/remove_unused.cocci
deleted file mode 100644
index c06278398198..000000000000
--- a/cocci_refactor/remove_unused.cocci
+++ /dev/null
@@ -1,69 +0,0 @@
-// <smpl>
-
-/*
-spatch --sp-file remove_unused.cocci --dir arch/arm64/kvm/hyp --in-place --include-headers --force-diff
-*/
-
-@@
-identifier hyps;
-@@
-{
-...
-(
-- struct vcpu_hyp_state *hyps = ...;
-|
-- struct vcpu_hyp_state *hyps;
-)
-... when != hyps
-    when != if (...) { <+...hyps...+> }
-?- hyps = ...;
-... when != hyps
-    when != if (...) { <+...hyps...+> }
-}
-
-@@
-identifier vcpu_ctxt;
-@@
-{
-...
-(
-- struct kvm_cpu_context *vcpu_ctxt = ...;
-|
-- struct kvm_cpu_context *vcpu_ctxt;
-)
-... when != vcpu_ctxt
-    when != if (...) { <+...vcpu_ctxt...+> }
-?- vcpu_ctxt = ...;
-... when != vcpu_ctxt
-    when != if (...) { <+...vcpu_ctxt...+> }
-}
-
-@@
-identifier x;
-identifier func;
-statement S;
-@@
-func(...)
- {
-...
-struct kvm_cpu_context *x = ...;
-+
-S
-...
- }
-
-@@
-identifier x;
-identifier func;
-statement S;
-@@
-func(...)
- {
-...
-struct vcpu_hyp_state *x = ...;
-+
-S
-...
- }
-
-// </smpl>
diff --git a/cocci_refactor/test.cocci b/cocci_refactor/test.cocci
deleted file mode 100644
index 5eb685240ce7..000000000000
--- a/cocci_refactor/test.cocci
+++ /dev/null
@@ -1,20 +0,0 @@
-/*
- FILES="$(find arch/arm64 -name "*.[ch]") include/kvm/arm_hypercalls.h"; spatch --sp-file test.cocci $FILES
-
-*/
-
-@r@
-identifier fn;
-@@
-fn(...) {
- hello;
- ...
-}
-
-@@
-identifier r.fn;
-@@
-static fn(...) {
-+ world;
- ...
-}
diff --git a/cocci_refactor/use_ctxt.cocci b/cocci_refactor/use_ctxt.cocci
deleted file mode 100644
index f3f961f567fd..000000000000
--- a/cocci_refactor/use_ctxt.cocci
+++ /dev/null
@@ -1,32 +0,0 @@
-// <smpl>
-/*
-spatch --sp-file use_ctxt.cocci  --dir arch/arm64/kvm/hyp --ignore debug-sr --include-headers  --in-place
-spatch --sp-file use_ctxt.cocci  --dir arch/arm64/kvm/hyp --ignore debug-sr --include-headers  --in-place
-*/
-
-@remove_vcpu@
-identifier vcpu;
-fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
-identifier ctxt_remove;
-identifier func !~ "(reset_unknown|reset_val|kvm_pmu_valid_counter_mask|reset_pmcr|kvm_arch_vcpu_in_kernel|__vgic_v3_)";
-@@
-func(
-- struct kvm_vcpu *vcpu
-+ struct kvm_cpu_context *vcpu_ctxt
-, ...) {
-- struct kvm_cpu_context *ctxt_remove = ...;
-... when != vcpu
-    when != if (...) { <+...vcpu...+> }
-}
-
-@@
-identifier vcpu;
-fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
-identifier func = remove_vcpu.func;
-@@
-func(
-- vcpu
-+ vcpu_ctxt
-  , ...)
-
-// </smpl>
diff --git a/cocci_refactor/use_ctxt_access.cocci b/cocci_refactor/use_ctxt_access.cocci
deleted file mode 100644
index 74f94141e662..000000000000
--- a/cocci_refactor/use_ctxt_access.cocci
+++ /dev/null
@@ -1,39 +0,0 @@
-// </smpl>
-
-/*
-spatch --sp-file use_ctxt_access.cocci --dir arch/arm64/kvm/ --include-headers --in-place
-*/
-
-@@
-constant r;
-@@
-- __ctxt_sys_reg(&vcpu->arch.ctxt, r)
-+ &__vcpu_sys_reg(vcpu, r)
-
-@@
-identifier r;
-@@
-- vcpu->arch.ctxt.regs.r
-+ vcpu_gp_regs(vcpu)->r
-
-@@
-identifier r;
-@@
-- vcpu->arch.ctxt.fp_regs.r
-+ vcpu_fp_regs(vcpu)->r
-
-@@
-identifier r;
-fresh identifier accessor = "vcpu_" ## r;
-@@
-- &vcpu->arch.ctxt.r
-+ accessor(vcpu)
-
-@@
-identifier r;
-fresh identifier accessor = "vcpu_" ## r;
-@@
-- vcpu->arch.ctxt.r
-+ *accessor(vcpu)
-
-// </smpl>
\ No newline at end of file
diff --git a/cocci_refactor/use_hypstate.cocci b/cocci_refactor/use_hypstate.cocci
deleted file mode 100644
index f685149de748..000000000000
--- a/cocci_refactor/use_hypstate.cocci
+++ /dev/null
@@ -1,63 +0,0 @@
-// <smpl>
-
-/*
-FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]" ! -name "debug-sr*") arch/arm64/include/asm/kvm_hyp.h"
-spatch --sp-file use_hypstate.cocci $FILES --in-place
-*/
-
-
-@remove_vcpu_hyps@
-identifier vcpu;
-fresh identifier hyps = vcpu ## "_hyps";
-identifier hyps_remove;
-identifier func;
-@@
-func(
-- struct kvm_vcpu *vcpu
-+ struct vcpu_hyp_state *hyps
-, ...) {
-- struct vcpu_hyp_state *hyps_remove = ...;
-... when != vcpu
-    when != if (...) { <+...vcpu...+> }
-}
-
-@@
-identifier vcpu;
-fresh identifier hyps = vcpu ## "_hyps";
-identifier func = remove_vcpu_hyps.func;
-@@
-func(
-- vcpu
-+ hyps
-  , ...)
-
-@remove_vcpu_hyps_ctxt@
-identifier vcpu;
-fresh identifier hyps = vcpu ## "_hyps";
-identifier hyps_remove;
-identifier ctxt_remove;
-identifier func;
-@@
-func(
-- struct kvm_vcpu *vcpu
-+ struct vcpu_hyp_state *hyps
-, ...) {
-- struct vcpu_hyp_state *hyps_remove = ...;
-- struct kvm_cpu_context *ctxt_remove = ...;
-... when != vcpu
-    when != if (...) { <+...vcpu...+> }
-    when != ctxt_remove
-    when != if (...) { <+...ctxt_remove...+> }
-}
-
-@@
-identifier vcpu;
-fresh identifier hyps = vcpu ## "_hyps";
-identifier func = remove_vcpu_hyps_ctxt.func;
-@@
-func(
-- vcpu
-+ hyps
-  , ...)
-
-// </smpl>
diff --git a/cocci_refactor/vcpu_arch_ctxt.cocci b/cocci_refactor/vcpu_arch_ctxt.cocci
deleted file mode 100644
index 69b3a000de4e..000000000000
--- a/cocci_refactor/vcpu_arch_ctxt.cocci
+++ /dev/null
@@ -1,13 +0,0 @@
-// spatch --sp-file vcpu_arch_ctxt.cocci --no-includes --include-headers  --dir arch/arm64
-
-// <smpl>
-@@
-identifier vcpu;
-@@
-(
-- vcpu->arch.ctxt.regs
-+ vcpu_gp_regs(vcpu)
-|
-- vcpu->arch.ctxt.fp_regs
-+ vcpu_fp_regs(vcpu)
-)
diff --git a/cocci_refactor/vcpu_declr.cocci b/cocci_refactor/vcpu_declr.cocci
deleted file mode 100644
index 59cd46bd6b2d..000000000000
--- a/cocci_refactor/vcpu_declr.cocci
+++ /dev/null
@@ -1,59 +0,0 @@
-
-/*
-FILES="$(find arch/arm64 -name "*.[ch]") include/kvm/arm_hypercalls.h";  spatch --sp-file vcpu_declr.cocci $FILES --in-place
-*/
-
-// <smpl>
-
-@@
-identifier vcpu;
-expression E;
-@@
-<...
-- struct kvm_vcpu *vcpu;
-+ struct kvm_vcpu *vcpu = E;
-
-- vcpu = E;
-...>
-
-
-/*
-@@
-identifier vcpu;
-identifier f1, f2;
-@@
-f1(...)
-{
-- struct kvm_vcpu *vcpu = NULL;
-+ struct kvm_vcpu *vcpu;
-... when != f2(..., vcpu, ...)
-}
-*/
-
-/*
-@find_after@
-identifier vcpu;
-position p;
-identifier f;
-@@
-<...
- struct kvm_vcpu *vcpu@p;
- ... when != vcpu = ...;
- f(..., vcpu, ...);
-...>
-
-@@
-identifier vcpu;
-expression E;
-position p != find_after.p;
-@@
-<...
-- struct kvm_vcpu *vcpu@p;
-+ struct kvm_vcpu *vcpu = E;
- ...
-- vcpu = E;
-...>
-
-*/
-
-// </smpl>
diff --git a/cocci_refactor/vcpu_flags.cocci b/cocci_refactor/vcpu_flags.cocci
deleted file mode 100644
index 609bb7bd7bd0..000000000000
--- a/cocci_refactor/vcpu_flags.cocci
+++ /dev/null
@@ -1,10 +0,0 @@
-// spatch --sp-file el2_def_flags.cocci --no-includes --include-headers  --dir arch/arm64
-
-// <smpl>
-@@
-expression vcpu;
-@@
-
-- vcpu->arch.flags
-+ vcpu_flags(vcpu)
-// </smpl>
\ No newline at end of file
diff --git a/cocci_refactor/vcpu_hyp_accessors.cocci b/cocci_refactor/vcpu_hyp_accessors.cocci
deleted file mode 100644
index 506b56f7216f..000000000000
--- a/cocci_refactor/vcpu_hyp_accessors.cocci
+++ /dev/null
@@ -1,35 +0,0 @@
-// <smpl>
-
-/*
-spatch --sp-file vcpu_hyp_accessors.cocci --dir arch/arm64 --include-headers --in-place
-*/
-
-@find_defines@
-identifier macro;
-identifier vcpu;
-position p;
-@@
-#define macro(vcpu) vcpu@p
-
-@@
-identifier vcpu;
-position p != find_defines.p;
-@@
-(
-- vcpu@p->arch.hcr_el2
-+ vcpu_hcr_el2(vcpu)
-|
-- vcpu@p->arch.mdcr_el2
-+ vcpu_mdcr_el2(vcpu)
-|
-- vcpu@p->arch.vsesr_el2
-+ vcpu_vsesr_el2(vcpu)
-|
-- vcpu@p->arch.fault
-+ vcpu_fault(vcpu)
-|
-- vcpu@p->arch.flags
-+ vcpu_flags(vcpu)
-)
-
-// </smpl>
diff --git a/cocci_refactor/vcpu_hyp_state.cocci b/cocci_refactor/vcpu_hyp_state.cocci
deleted file mode 100644
index 3005a6f11871..000000000000
--- a/cocci_refactor/vcpu_hyp_state.cocci
+++ /dev/null
@@ -1,30 +0,0 @@
-// <smpl>
-
-// spatch --sp-file vcpu_hyp_state.cocci --no-includes --include-headers  --dir arch/arm64 --very-quiet --in-place
-
-@@
-expression vcpu;
-@@
-- vcpu->arch.
-+ vcpu->arch.hyp_state.
-(
- hcr_el2
-|
- mdcr_el2
-|
- vsesr_el2
-|
- fault
-|
- flags
-|
- sysregs_loaded_on_cpu
-)
-
-@@
-identifier arch;
-@@
-- arch.fault
-+ arch.hyp_state.fault
-
-// </smpl>
\ No newline at end of file
diff --git a/cocci_refactor/vgic3_cpu.cocci b/cocci_refactor/vgic3_cpu.cocci
deleted file mode 100644
index f7495b2e49cb..000000000000
--- a/cocci_refactor/vgic3_cpu.cocci
+++ /dev/null
@@ -1,118 +0,0 @@
-// <smpl>
-
-/*
-spatch --sp-file vgic3_cpu.cocci arch/arm64/kvm/hyp/vgic-v3-sr.c --in-place
-*/
-
-
-@@
-identifier vcpu;
-fresh identifier vcpu_hyps = vcpu ## "_hyps";
-@@
-(
-- kvm_vcpu_sys_get_rt
-+ kvm_hyp_state_sys_get_rt
-|
-- kvm_vcpu_get_esr
-+ kvm_hyp_state_get_esr
-)
-- (vcpu)
-+ (vcpu_hyps)
-
-@add_cpu_if@
-identifier func;
-identifier c;
-@@
-int func(
-- struct kvm_vcpu *vcpu
-+ struct vgic_v3_cpu_if *cpu_if
- , ...)
-{
-<+...
-- vcpu->arch.vgic_cpu.vgic_v3.c
-+ cpu_if->c
-...+>
-}
-
-@@
-identifier func = add_cpu_if.func;
-@@
- func(
-- vcpu
-+ cpu_if
- , ...
- )
-
-
-@add_vgic_ctxt_hyps@
-identifier func;
-@@
-void func(
-- struct kvm_vcpu *vcpu
-+ struct vgic_v3_cpu_if *cpu_if, struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps
- , ...) {
-?- struct vcpu_hyp_state *vcpu_hyps = ...;
-?- struct kvm_cpu_context *vcpu_ctxt = ...;
- ...
- }
-
-@@
-identifier func = add_vgic_ctxt_hyps.func;
-@@
- func(
-- vcpu,
-+ cpu_if, vcpu_ctxt, vcpu_hyps,
- ...
- )
-
-
-@find_calls@
-identifier fn;
-type a, b;
-@@
-- void (*fn)(struct kvm_vcpu *, a, b);
-+ void (*fn)(struct vgic_v3_cpu_if *, struct kvm_cpu_context *, struct vcpu_hyp_state *, a, b);
-
-@@
-identifier fn = find_calls.fn;
-identifier a, b;
-@@
-- fn(vcpu, a, b);
-+ fn(cpu_if, vcpu_ctxt, vcpu_hyps, a, b);
-
-@@
-@@
-int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) {
-+ struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
-...
-}
-
-@remove@
-identifier func;
-identifier vcpu;
-fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
-fresh identifier vcpu_hyps = vcpu ## "_hyps";
-identifier hyps_remove;
-identifier ctxt_remove;
-@@
-func(...,
-- struct kvm_vcpu *vcpu
-+ struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps
-,...) {
-?- struct vcpu_hyp_state *hyps_remove = ...;
-?- struct kvm_cpu_context *ctxt_remove = ...;
-... when != vcpu
- }
-
-@@
-identifier vcpu;
-fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
-fresh identifier vcpu_hyps = vcpu ## "_hyps";
-identifier remove.func;
-@@
- func(
-- vcpu
-+ vcpu_ctxt, vcpu_hyps
-  , ...)
-
-// </smpl>
-- 
2.33.0.685.g46640cef36-goog



* [RFC PATCH v1 30/30] [DONOTMERGE] Re-enable warnings
  2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
                   ` (28 preceding siblings ...)
  2021-09-24 12:53 ` [RFC PATCH v1 29/30] [DONOTMERGE] Remove Coccinelle scripts added for refactoring Fuad Tabba
@ 2021-09-24 12:53 ` Fuad Tabba
  29 siblings, 0 replies; 36+ messages in thread
From: Fuad Tabba @ 2021-09-24 12:53 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

---
 Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index 0278bd28bd97..ed669b2d705d 100644
--- a/Makefile
+++ b/Makefile
@@ -504,7 +504,7 @@ KBUILD_CFLAGS   := -Wall -Wundef -Werror=strict-prototypes -Wno-trigraphs \
 		   -fno-strict-aliasing -fno-common -fshort-wchar -fno-PIE \
 		   -Werror=implicit-function-declaration -Werror=implicit-int \
 		   -Werror=return-type -Wno-format-security \
-		   -std=gnu89 -Wno-unused-variable -Wno-unused-function
+		   -std=gnu89
 KBUILD_CPPFLAGS := -D__KERNEL__
 KBUILD_AFLAGS_KERNEL :=
 KBUILD_CFLAGS_KERNEL :=
-- 
2.33.0.685.g46640cef36-goog



* Re: [RFC PATCH v1 01/30] KVM: arm64: placeholder to check if VM is protected
  2021-09-24 12:53 ` [RFC PATCH v1 01/30] KVM: arm64: placeholder to check if VM is protected Fuad Tabba
@ 2021-09-27 15:50   ` Quentin Perret
  0 siblings, 0 replies; 36+ messages in thread
From: Quentin Perret @ 2021-09-27 15:50 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, kvm, linux-arm-kernel,
	kernel-team

Hey Fuad,

On Friday 24 Sep 2021 at 13:53:30 (+0100), Fuad Tabba wrote:
> Add a function to check whether a VM is protected (under pKVM).
> Since the creation of protected VMs isn't enabled yet, this is a
> placeholder that always returns false. The intention is for this
> to become a check for protected VMs in the future (see Will's RFC).
> 
> No functional change intended.
> 
> Acked-by: Will Deacon <will@kernel.org>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> 
> Link: https://lore.kernel.org/kvmarm/20210603183347.1695-1-will@kernel.org/
> ---
>  arch/arm64/include/asm/kvm_host.h | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 7cd7d5c8c4bc..adb21a7f0891 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -763,6 +763,11 @@ void kvm_arch_free_vm(struct kvm *kvm);
>  
>  int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type);
>  
> +static inline bool kvm_vm_is_protected(struct kvm *kvm)
> +{
> +	return false;
> +}

Nit: this isn't used before patch 25, I think, so maybe move it to a later
point in the series? That confused me a tiny bit :)

Thanks,
Quentin


* Re: [RFC PATCH v1 05/30] KVM: arm64: add accessors for kvm_cpu_context
  2021-09-24 12:53 ` [RFC PATCH v1 05/30] KVM: arm64: add accessors for kvm_cpu_context Fuad Tabba
@ 2021-09-27 15:57   ` Quentin Perret
  0 siblings, 0 replies; 36+ messages in thread
From: Quentin Perret @ 2021-09-27 15:57 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, kvm, linux-arm-kernel,
	kernel-team

On Friday 24 Sep 2021 at 13:53:34 (+0100), Fuad Tabba wrote:
> +static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val)
> +{
> +	__ctxt_write_spsr(&vcpu_ctxt(vcpu), val);
> +}
> +
> +static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val)
> +{
> +	__ctxt_write_spsr_abt(&vcpu_ctxt(vcpu), val);
> +}
> +
> +static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
> +{
> +	__ctxt_write_spsr_und(&vcpu_ctxt(vcpu), val);
>  }

I think you remove those at a later point in the series; do we really
need to add them here?

Cheers,
Quentin


* Re: [RFC PATCH v1 10/30] KVM: arm64: Add accessors for hypervisor state in kvm_vcpu_arch
  2021-09-24 12:53 ` [RFC PATCH v1 10/30] KVM: arm64: Add accessors for hypervisor state in kvm_vcpu_arch Fuad Tabba
@ 2021-09-27 16:10   ` Quentin Perret
  0 siblings, 0 replies; 36+ messages in thread
From: Quentin Perret @ 2021-09-27 16:10 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, kvm, linux-arm-kernel,
	kernel-team

On Friday 24 Sep 2021 at 13:53:39 (+0100), Fuad Tabba wrote:
> Some of the members of vcpu_arch represent state that belongs to
> the hypervisor. Future patches will factor these out into their
> own structure. To simplify the refactoring and make it easier to
> read, add accessors for the members of kvm_vcpu_arch that
> represent the hypervisor state.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/kvm_emulate.h | 182 ++++++++++++++++++++++-----
>  arch/arm64/include/asm/kvm_host.h    |  38 ++++--
>  2 files changed, 181 insertions(+), 39 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index 7d09a9356d89..e095afeecd10 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -41,9 +41,14 @@ void kvm_inject_vabt(struct kvm_vcpu *vcpu);
>  void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
>  void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
>  
> +static __always_inline bool hyp_state_el1_is_32bit(struct vcpu_hyp_state *vcpu_hyps)
> +{
> +	return !(hyp_state_hcr_el2(vcpu_hyps) & HCR_RW);
> +}
> +
>  static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
>  {
> -	return !(vcpu_hcr_el2(vcpu) & HCR_RW);
> +	return hyp_state_el1_is_32bit(&hyp_state(vcpu));
>  }
>  
>  static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
> @@ -252,14 +257,19 @@ static inline bool vcpu_mode_priv(const struct kvm_vcpu *vcpu)
>  	return mode != PSR_MODE_EL0t;
>  }
>  
> +static __always_inline u32 kvm_hyp_state_get_esr(const struct vcpu_hyp_state *vcpu_hyps)
> +{
> +	return hyp_state_fault(vcpu_hyps).esr_el2;
> +}
> +
>  static __always_inline u32 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu)
>  {
> -	return vcpu_fault(vcpu).esr_el2;
> +	return kvm_hyp_state_get_esr(&hyp_state(vcpu));
>  }
>  
> -static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
> +static __always_inline u32 kvm_hyp_state_get_condition(const struct vcpu_hyp_state *vcpu_hyps)
>  {
> -	u32 esr = kvm_vcpu_get_esr(vcpu);
> +	u32 esr = kvm_hyp_state_get_esr(vcpu_hyps);
>  
>  	if (esr & ESR_ELx_CV)
>  		return (esr & ESR_ELx_COND_MASK) >> ESR_ELx_COND_SHIFT;
> @@ -267,111 +277,216 @@ static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
>  	return -1;
>  }
>  
> +static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
> +{
> +	return kvm_hyp_state_get_condition(&hyp_state(vcpu));
> +}
> +
> +static __always_inline phys_addr_t kvm_hyp_state_get_hfar(const struct vcpu_hyp_state *vcpu_hyps)
> +{
> +	return hyp_state_fault(vcpu_hyps).far_el2;
> +}
> +
>  static __always_inline unsigned long kvm_vcpu_get_hfar(const struct kvm_vcpu *vcpu)
>  {
> -	return vcpu_fault(vcpu).far_el2;
> +	return kvm_hyp_state_get_hfar(&hyp_state(vcpu));
> +}
> +
> +static __always_inline phys_addr_t kvm_hyp_state_get_fault_ipa(const struct vcpu_hyp_state *vcpu_hyps)
> +{
> +	return ((phys_addr_t) hyp_state_fault(vcpu_hyps).hpfar_el2 & HPFAR_MASK) << 8;
>  }
>  
>  static __always_inline phys_addr_t kvm_vcpu_get_fault_ipa(const struct kvm_vcpu *vcpu)
>  {
> -	return ((phys_addr_t) vcpu_fault(vcpu).hpfar_el2 & HPFAR_MASK) << 8;
> +	return kvm_hyp_state_get_fault_ipa(&hyp_state(vcpu));
> +}
> +
> +static __always_inline u32 kvm_hyp_state_get_disr(const struct vcpu_hyp_state *vcpu_hyps)
> +{
> +	return hyp_state_fault(vcpu_hyps).disr_el1;
>  }

Looks like kvm_hyp_state_get_disr() (as well as most of the
kvm_hyp_state_*() helpers below) is never used outside of its
kvm_vcpu_*() counterpart, so maybe let's merge them for now? This
series is really quite large, so I'm just hoping we can trim the bits
that aren't strictly necessary :)
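
To illustrate, a rough sketch of the merged form I have in mind,
reusing the hyp_state() and hyp_state_fault() accessors from earlier
in the series (untested, and the signature just mirrors the hunk
above):

/* Sketch: vcpu-level accessor with the hyp_state lookup folded in. */
static __always_inline u32 kvm_vcpu_get_disr(const struct kvm_vcpu *vcpu)
{
	return hyp_state_fault(&hyp_state(vcpu)).disr_el1;
}

i.e. keep only the vcpu-level helper rather than also introducing a
kvm_hyp_state_get_disr() that has no other callers.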

Cheers,
Quentin


* Re: [RFC PATCH v1 11/30] KVM: arm64: create and use a new vcpu_hyp_state struct
  2021-09-24 12:53 ` [RFC PATCH v1 11/30] KVM: arm64: create and use a new vcpu_hyp_state struct Fuad Tabba
@ 2021-09-27 16:32   ` Quentin Perret
  0 siblings, 0 replies; 36+ messages in thread
From: Quentin Perret @ 2021-09-27 16:32 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, kvm, linux-arm-kernel,
	kernel-team

On Friday 24 Sep 2021 at 13:53:40 (+0100), Fuad Tabba wrote:
> Create a struct for the hypervisor state from the related fields
> in vcpu_arch. This is needed in future patches to reduce the
> scope of functions from the vcpu as a whole to only the relevant
> state, via this newly created struct.
> 
> Create a new instance of this struct in vcpu_arch and fix the
> accessors to use the new fields. Remove the existing fields from
> vcpu_arch.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/kvm_host.h | 35 ++++++++++++++++++-------------
>  arch/arm64/kernel/asm-offsets.c   |  2 +-
>  2 files changed, 21 insertions(+), 16 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 3e5c173d2360..dc4b5e133d86 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -269,27 +269,35 @@ struct vcpu_reset_state {
>  	bool		reset;
>  };
>  
> +/* Holds the hyp-relevant data of a vcpu.*/
> +struct vcpu_hyp_state {
> +	/* HYP configuration */
> +	u64 hcr_el2;
> +	u32 mdcr_el2;
> +
> +	/* Virtual SError ESR to restore when HCR_EL2.VSE is set */
> +	u64 vsesr_el2;
> +
> +	/* Exception Information */
> +	struct kvm_vcpu_fault_info fault;
> +
> +	/* Miscellaneous vcpu state flags */
> +	u64 flags;
> +};
> +
>  struct kvm_vcpu_arch {
>  	struct kvm_cpu_context ctxt;
>  	void *sve_state;
>  	unsigned int sve_max_vl;
>  
> +	struct vcpu_hyp_state hyp_state;
> +
>  	/* Stage 2 paging state used by the hardware on next switch */
>  	struct kvm_s2_mmu *hw_mmu;
>  
> -	/* HYP configuration */
> -	u64 hcr_el2;
> -	u32 mdcr_el2;
> -
> -	/* Exception Information */
> -	struct kvm_vcpu_fault_info fault;
> -
>  	/* State of various workarounds, see kvm_asm.h for bit assignment */
>  	u64 workaround_flags;
>  
> -	/* Miscellaneous vcpu state flags */
> -	u64 flags;
> -
>  	/*
>  	 * We maintain more than a single set of debug registers to support
>  	 * debugging the guest from the host and to maintain separate host and
> @@ -356,9 +364,6 @@ struct kvm_vcpu_arch {
>  	/* Detect first run of a vcpu */
>  	bool has_run_once;
>  
> -	/* Virtual SError ESR to restore when HCR_EL2.VSE is set */
> -	u64 vsesr_el2;
> -
>  	/* Additional reset state */
>  	struct vcpu_reset_state	reset_state;
>  
> @@ -373,7 +378,7 @@ struct kvm_vcpu_arch {
>  	} steal;
>  };
>  
> -#define hyp_state(vcpu) ((vcpu)->arch)
> +#define hyp_state(vcpu) ((vcpu)->arch.hyp_state)

Aha, so _that_ is the nice thing about the previous patches ... I need
to stare at this series for a little longer, but wouldn't it be easier
to simply apply the struct kvm_vcpu_arch change and fix up all the users
at once instead of having all these preparatory patches? It's probably
personal preference at this point, but I must admit I'm finding all
these layers of accessors a little confusing. Happy to hear what others
think.
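
To make that concrete, once the fields have moved as in this patch, a
use site could be converted in place in the same pass. A hypothetical
example, only to show the shape (the helper name and the HCR_VSE use
site are made up):

/* Hypothetical use site, converted directly to the new field layout. */
static inline void example_pend_vserror(struct kvm_vcpu *vcpu)
{
	/* was: vcpu->arch.hcr_el2 |= HCR_VSE; */
	vcpu->arch.hyp_state.hcr_el2 |= HCR_VSE;
}

and likewise for mdcr_el2, fault, flags and friends, without the
vcpu_hcr_el2()/hyp_state_hcr_el2() layers in between.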

Thanks,
Quentin


* Re: [RFC PATCH v1 12/30] KVM: arm64: COCCI: add_hypstate.cocci use_hypstate.cocci: Reduce scope of functions to hyp_state
  2021-09-24 12:53 ` [RFC PATCH v1 12/30] KVM: arm64: COCCI: add_hypstate.cocci use_hypstate.cocci: Reduce scope of functions to hyp_state Fuad Tabba
@ 2021-09-27 16:40   ` Quentin Perret
  0 siblings, 0 replies; 36+ messages in thread
From: Quentin Perret @ 2021-09-27 16:40 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, drjones, kvm, linux-arm-kernel,
	kernel-team

On Friday 24 Sep 2021 at 13:53:41 (+0100), Fuad Tabba wrote:
> Many functions don't need access to the vcpu structure, but only
> the hyp_state. Reduce their scope.
> 
> This applies the semantic patches with the following commands:
> FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]" ! -name "debug-sr*") arch/arm64/include/asm/kvm_hyp.h"
> spatch --sp-file cocci_refactor/add_hypstate.cocci $FILES --in-place
> spatch --sp-file cocci_refactor/use_hypstate.cocci $FILES --in-place
> 
> This patch adds variables that may be unused. These will be
> removed at the end of this patch series.

I'm guessing you decided to separate things out to make sure this patch
is purely the result of a coccinelle run w/o manual changes?

It looks like the patch to remove the unused variables is a 'COCCI'
patch too, so maybe it would make sense to run it here directly after
the first coccinelle run, and squash the result into this patch? The
resulting patch would still be entirely auto-generated, and wouldn't
have these somewhat odd unused variables. Thoughts?

Thanks,
Quentin


end of thread (newest message: 2021-09-27 16:40 UTC)

Thread overview: 36+ messages
2021-09-24 12:53 [RFC PATCH v1 00/30] Reduce scope of vcpu state at hyp by refactoring out state hyp needs Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 01/30] KVM: arm64: placeholder to check if VM is protected Fuad Tabba
2021-09-27 15:50   ` Quentin Perret
2021-09-24 12:53 ` [RFC PATCH v1 02/30] [DONOTMERGE] Temporarily disable unused variable warning Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 03/30] [DONOTMERGE] Coccinelle scripts for refactoring Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 04/30] KVM: arm64: remove unused parameters and asm offsets Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 05/30] KVM: arm64: add accessors for kvm_cpu_context Fuad Tabba
2021-09-27 15:57   ` Quentin Perret
2021-09-24 12:53 ` [RFC PATCH v1 06/30] KVM: arm64: COCCI: use_ctxt_access.cocci: use kvm_cpu_context accessors Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 07/30] KVM: arm64: COCCI: add_ctxt.cocci use_ctxt.cocci: reduce scope of functions to kvm_cpu_ctxt Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 08/30] KVM: arm64: add hypervisor state accessors Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 09/30] KVM: arm64: COCCI: vcpu_hyp_accessors.cocci: use accessors for hypervisor state vcpu variables Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 10/30] KVM: arm64: Add accessors for hypervisor state in kvm_vcpu_arch Fuad Tabba
2021-09-27 16:10   ` Quentin Perret
2021-09-24 12:53 ` [RFC PATCH v1 11/30] KVM: arm64: create and use a new vcpu_hyp_state struct Fuad Tabba
2021-09-27 16:32   ` Quentin Perret
2021-09-24 12:53 ` [RFC PATCH v1 12/30] KVM: arm64: COCCI: add_hypstate.cocci use_hypstate.cocci: Reduce scope of functions to hyp_state Fuad Tabba
2021-09-27 16:40   ` Quentin Perret
2021-09-24 12:53 ` [RFC PATCH v1 13/30] KVM: arm64: change function parameters to use kvm_cpu_ctxt and hyp_state Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 14/30] KVM: arm64: reduce scope of vgic v2 Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 15/30] KVM: arm64: COCCI: vgic3_cpu.cocci: reduce scope of vgic v3 Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 16/30] KVM: arm64: reduce scope of vgic_v3 access parameters Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 17/30] KVM: arm64: access __hyp_running_vcpu via accessors only Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 18/30] KVM: arm64: reduce scope of __guest_exit to only depend on kvm_cpu_context Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 19/30] KVM: arm64: change calls of get_loaded_vcpu to get_loaded_vcpu_ctxt Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 20/30] KVM: arm64: add __hyp_running_ctxt and __hyp_running_hyps Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 21/30] KVM: arm64: transition code to " Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 22/30] KVM: arm64: reduce scope of __guest_enter to depend only on kvm_cpu_ctxt Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 23/30] KVM: arm64: COCCI: remove_unused.cocci: remove unused ctxt and hypstate variables Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 24/30] KVM: arm64: remove unused functions Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 25/30] KVM: arm64: separate kvm_run() for protected VMs Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 26/30] KVM: arm64: pVM activate_traps to use vcpu_ctxt and vcpu_hyp_state Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 27/30] KVM: arm64: remove unsupported pVM features Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 28/30] KVM: arm64: reduce scope of pVM fixup_guest_exit to hyp_state and kvm_cpu_ctxt Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 29/30] [DONOTMERGE] Remove Coccinelle scripts added for refactoring Fuad Tabba
2021-09-24 12:53 ` [RFC PATCH v1 30/30] [DONOTMERGE] Re-enable warnings Fuad Tabba
