* [PATCH v4 00/66] KVM: arm64: ARMv8.3/8.4 Nested Virtualization support
From: Marc Zyngier @ 2021-05-10 16:58 UTC
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
Here is the bi-annual drop of the KVM/arm64 NV support code.
Not a lot has changed since [1], except for a discovery mechanism for
the EL2 support, some tidying up in the idreg emulation, dropping RMR
support, and a rebase on top of 5.13-rc1.
As usual, blame me for any bug, and nobody else.
It is still massively painful to run on the FVP, but if you have a
Neoverse V1 or N2 system that is collecting dust, I have the right
stuff to keep it busy!
M.
[1] https://lore.kernel.org/r/20201210160002.1407373-1-maz@kernel.org
Andre Przywara (1):
KVM: arm64: nv: vgic: Allow userland to set VGIC maintenance IRQ
Christoffer Dall (15):
KVM: arm64: nv: Introduce nested virtualization VCPU feature
KVM: arm64: nv: Reset VCPU to EL2 registers if VCPU nested virt is set
KVM: arm64: nv: Allow userspace to set PSR_MODE_EL2x
KVM: arm64: nv: Add nested virt VCPU primitives for vEL2 VCPU state
KVM: arm64: nv: Reset VMPIDR_EL2 and VPIDR_EL2 to sane values
KVM: arm64: nv: Handle trapped ERET from virtual EL2
KVM: arm64: nv: Emulate PSTATE.M for a guest hypervisor
KVM: arm64: nv: Trap EL1 VM register accesses in virtual EL2
KVM: arm64: nv: Only toggle cache for virtual EL2 when SCTLR_EL2
changes
KVM: arm64: nv: Implement nested Stage-2 page table walk logic
KVM: arm64: nv: Unmap/flush shadow stage 2 page tables
KVM: arm64: nv: arch_timer: Support hyp timer emulation
KVM: arm64: nv: vgic: Emulate the HW bit in software
KVM: arm64: nv: Add nested GICv3 tracepoints
KVM: arm64: nv: Sync nested timer state with ARMv8.4
Jintack Lim (18):
arm64: Add ARM64_HAS_NESTED_VIRT cpufeature
KVM: arm64: nv: Handle HCR_EL2.NV system register traps
KVM: arm64: nv: Support virtual EL2 exceptions
KVM: arm64: nv: Inject HVC exceptions to the virtual EL2
KVM: arm64: nv: Trap SPSR_EL1, ELR_EL1 and VBAR_EL1 from virtual EL2
KVM: arm64: nv: Trap CPACR_EL1 access in virtual EL2
KVM: arm64: nv: Handle PSCI call via smc from the guest
KVM: arm64: nv: Respect virtual HCR_EL2.TWX setting
KVM: arm64: nv: Respect virtual CPTR_EL2.{TFP,FPEN} settings
KVM: arm64: nv: Respect the virtual HCR_EL2.NV bit setting
KVM: arm64: nv: Respect virtual HCR_EL2.TVM and TRVM settings
KVM: arm64: nv: Respect the virtual HCR_EL2.NV1 bit setting
KVM: arm64: nv: Emulate EL12 register accesses from the virtual EL2
KVM: arm64: nv: Configure HCR_EL2 for nested virtualization
KVM: arm64: nv: Introduce sys_reg_desc.forward_trap
KVM: arm64: nv: Set a handler for the system instruction traps
KVM: arm64: nv: Trap and emulate AT instructions from virtual EL2
KVM: arm64: nv: Nested GICv3 Support
Marc Zyngier (32):
KVM: arm64: nv: Add EL2 system registers to vcpu context
KVM: arm64: nv: Add non-VHE-EL2->EL1 translation helpers
KVM: arm64: nv: Handle virtual EL2 registers in
vcpu_read/write_sys_reg()
KVM: arm64: nv: Handle SPSR_EL2 specially
KVM: arm64: nv: Handle HCR_EL2.E2H specially
KVM: arm64: nv: Save/Restore vEL2 sysregs
KVM: arm64: nv: Forward debug traps to the nested guest
KVM: arm64: nv: Filter out unsupported features from ID regs
KVM: arm64: nv: Hide RAS from nested guests
KVM: arm64: nv: Support multiple nested Stage-2 mmu structures
KVM: arm64: nv: Handle shadow stage 2 page faults
KVM: arm64: nv: Restrict S2 RD/WR permissions to match the guest's
KVM: arm64: nv: Trap and emulate TLBI instructions from virtual EL2
KVM: arm64: nv: Fold guest's HCR_EL2 configuration into the host's
KVM: arm64: nv: Add handling of EL2-specific timer registers
KVM: arm64: nv: Load timer before the GIC
KVM: arm64: nv: Don't load the GICv4 context on entering a nested
guest
KVM: arm64: nv: Implement maintenance interrupt forwarding
KVM: arm64: nv: Allow userspace to request KVM_ARM_VCPU_NESTED_VIRT
KVM: arm64: nv: Add handling of ARMv8.4-TTL TLB invalidation
KVM: arm64: nv: Invalidate TLBs based on shadow S2 TTL-like
information
KVM: arm64: Allow populating S2 SW bits
KVM: arm64: nv: Tag shadow S2 entries with nested level
KVM: arm64: nv: Add include containing the VNCR_EL2 offsets
KVM: arm64: Map VNCR-capable registers to a separate page
KVM: arm64: nv: Move nested vgic state into the sysreg file
KVM: arm64: Add ARMv8.4 Enhanced Nested Virt cpufeature
KVM: arm64: nv: Synchronize PSTATE early on exit
KVM: arm64: nv: Allocate VNCR page when required
KVM: arm64: nv: Enable ARMv8.4-NV support
KVM: arm64: nv: Fast-track 'InHost' exception returns
KVM: arm64: nv: Fast-track EL1 TLBIs for VHE guests
.../admin-guide/kernel-parameters.txt | 4 +
.../virt/kvm/devices/arm-vgic-v3.rst | 12 +-
arch/arm64/include/asm/cpucaps.h | 2 +
arch/arm64/include/asm/esr.h | 6 +
arch/arm64/include/asm/kvm_arm.h | 28 +-
arch/arm64/include/asm/kvm_asm.h | 4 +
arch/arm64/include/asm/kvm_emulate.h | 145 +-
arch/arm64/include/asm/kvm_host.h | 174 ++-
arch/arm64/include/asm/kvm_hyp.h | 2 +
arch/arm64/include/asm/kvm_mmu.h | 18 +-
arch/arm64/include/asm/kvm_nested.h | 152 +++
arch/arm64/include/asm/kvm_pgtable.h | 10 +
arch/arm64/include/asm/sysreg.h | 105 +-
arch/arm64/include/asm/vncr_mapping.h | 73 +
arch/arm64/include/uapi/asm/kvm.h | 2 +
arch/arm64/kernel/cpufeature.c | 35 +
arch/arm64/kvm/Makefile | 4 +-
arch/arm64/kvm/arch_timer.c | 189 ++-
arch/arm64/kvm/arm.c | 37 +-
arch/arm64/kvm/at.c | 231 ++++
arch/arm64/kvm/emulate-nested.c | 186 +++
arch/arm64/kvm/guest.c | 6 +
arch/arm64/kvm/handle_exit.c | 81 +-
arch/arm64/kvm/hyp/exception.c | 45 +-
arch/arm64/kvm/hyp/include/hyp/switch.h | 28 +-
arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 28 +-
arch/arm64/kvm/hyp/nvhe/switch.c | 10 +-
arch/arm64/kvm/hyp/nvhe/sysreg-sr.c | 2 +-
arch/arm64/kvm/hyp/pgtable.c | 6 +
arch/arm64/kvm/hyp/vgic-v3-sr.c | 2 +-
arch/arm64/kvm/hyp/vhe/switch.c | 207 ++-
arch/arm64/kvm/hyp/vhe/sysreg-sr.c | 125 +-
arch/arm64/kvm/hyp/vhe/tlb.c | 83 ++
arch/arm64/kvm/inject_fault.c | 63 +-
arch/arm64/kvm/mmu.c | 191 ++-
arch/arm64/kvm/nested.c | 910 ++++++++++++
arch/arm64/kvm/reset.c | 14 +-
arch/arm64/kvm/sys_regs.c | 1215 ++++++++++++++++-
arch/arm64/kvm/sys_regs.h | 8 +
arch/arm64/kvm/trace_arm.h | 65 +-
arch/arm64/kvm/vgic/vgic-init.c | 30 +
arch/arm64/kvm/vgic/vgic-kvm-device.c | 22 +
arch/arm64/kvm/vgic/vgic-nested-trace.h | 137 ++
arch/arm64/kvm/vgic/vgic-v3-nested.c | 240 ++++
arch/arm64/kvm/vgic/vgic-v3.c | 39 +-
arch/arm64/kvm/vgic/vgic.c | 44 +
arch/arm64/kvm/vgic/vgic.h | 10 +
include/kvm/arm_arch_timer.h | 7 +
include/kvm/arm_vgic.h | 16 +
include/uapi/linux/kvm.h | 1 +
tools/arch/arm/include/uapi/asm/kvm.h | 1 +
51 files changed, 4893 insertions(+), 162 deletions(-)
create mode 100644 arch/arm64/include/asm/kvm_nested.h
create mode 100644 arch/arm64/include/asm/vncr_mapping.h
create mode 100644 arch/arm64/kvm/at.c
create mode 100644 arch/arm64/kvm/emulate-nested.c
create mode 100644 arch/arm64/kvm/nested.c
create mode 100644 arch/arm64/kvm/vgic/vgic-nested-trace.h
create mode 100644 arch/arm64/kvm/vgic/vgic-v3-nested.c
--
2.29.2
* [PATCH v4 01/66] arm64: Add ARM64_HAS_NESTED_VIRT cpufeature
From: Marc Zyngier @ 2021-05-10 16:58 UTC
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
From: Jintack Lim <jintack.lim@linaro.org>
Add a new ARM64_HAS_NESTED_VIRT feature to indicate that the
CPU has the ARMv8.3 nested virtualization capability.
This will be used to support nested virtualization in KVM.
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
.../admin-guide/kernel-parameters.txt | 4 +++
arch/arm64/include/asm/cpucaps.h | 1 +
arch/arm64/kernel/cpufeature.c | 25 +++++++++++++++++++
3 files changed, 30 insertions(+)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index cb89dbdedc46..78ca804a6754 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2320,6 +2320,10 @@
[KVM,ARM] Allow use of GICv4 for direct injection of
LPIs.
+ kvm-arm.nested=
+ [KVM,ARM] Allow nested virtualization in KVM/ARM.
+ Default is 0 (disabled)
+
kvm_cma_resv_ratio=n [PPC]
Reserves given percentage from system memory area for
contiguous memory allocation for KVM hash pagetable
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index b0c5eda0498f..8d0cf022010f 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -16,6 +16,7 @@
#define ARM64_WORKAROUND_CAVIUM_23154 6
#define ARM64_WORKAROUND_834220 7
#define ARM64_HAS_NO_HW_PREFETCH 8
+#define ARM64_HAS_NESTED_VIRT 9
#define ARM64_HAS_VIRT_HOST_EXTN 11
#define ARM64_WORKAROUND_CAVIUM_27456 12
#define ARM64_HAS_32BIT_EL0 13
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index efed2830d141..056de86d7f6f 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1645,6 +1645,21 @@ static void cpu_copy_el2regs(const struct arm64_cpu_capabilities *__unused)
write_sysreg(read_sysreg(tpidr_el1), tpidr_el2);
}
+static bool nested_param;
+static bool has_nested_virt_support(const struct arm64_cpu_capabilities *cap,
+ int scope)
+{
+ return has_cpuid_feature(cap, scope) &&
+ nested_param;
+}
+
+static int __init kvmarm_nested_cfg(char *buf)
+{
+ return strtobool(buf, &nested_param);
+}
+
+early_param("kvm-arm.nested", kvmarm_nested_cfg);
+
static void cpu_has_fwb(const struct arm64_cpu_capabilities *__unused)
{
u64 val = read_sysreg_s(SYS_CLIDR_EL1);
@@ -1865,6 +1880,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = runs_at_el2,
.cpu_enable = cpu_copy_el2regs,
},
+ {
+ .desc = "Nested Virtualization Support",
+ .capability = ARM64_HAS_NESTED_VIRT,
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .matches = has_nested_virt_support,
+ .sys_reg = SYS_ID_AA64MMFR2_EL1,
+ .sign = FTR_UNSIGNED,
+ .field_pos = ID_AA64MMFR2_NV_SHIFT,
+ .min_field_value = 1,
+ },
{
.desc = "32-bit EL0 Support",
.capability = ARM64_HAS_32BIT_EL0,
--
2.29.2
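Nothing in this patch consumes the capability yet; for illustration, a
minimal sketch of how a later consumer would test it, assuming
capabilities have been finalized (the helper name is hypothetical, not
part of the patch):

    #include <asm/cpufeature.h>

    /* Needs "kvm-arm.nested=1" on the command line and FEAT_NV in hardware. */
    static bool kvm_host_has_nv(void)
    {
            /* Compiles down to a static branch after cap finalization. */
            return cpus_have_final_cap(ARM64_HAS_NESTED_VIRT);
    }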
* Re: [PATCH v4 01/66] arm64: Add ARM64_HAS_NESTED_VIRT cpufeature
From: Zenghui Yu @ 2021-05-20 13:32 UTC
To: Marc Zyngier
Cc: linux-arm-kernel, kvmarm, kvm, Andre Przywara, Christoffer Dall,
Jintack Lim, Haibo Xu, James Morse, Suzuki K Poulose,
Alexandru Elisei, kernel-team, Jintack Lim
On 2021/5/11 0:58, Marc Zyngier wrote:
> From: Jintack Lim <jintack.lim@linaro.org>
>
> Add a new ARM64_HAS_NESTED_VIRT feature to indicate that the
> CPU has the ARMv8.3 nested virtualization capability.
>
> This will be used to support nested virtualization in KVM.
>
> Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> .../admin-guide/kernel-parameters.txt | 4 +++
> arch/arm64/include/asm/cpucaps.h | 1 +
> arch/arm64/kernel/cpufeature.c | 25 +++++++++++++++++++
> 3 files changed, 30 insertions(+)
[...]
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index efed2830d141..056de86d7f6f 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1645,6 +1645,21 @@ static void cpu_copy_el2regs(const struct arm64_cpu_capabilities *__unused)
> write_sysreg(read_sysreg(tpidr_el1), tpidr_el2);
> }
>
> +static bool nested_param;
> +static bool has_nested_virt_support(const struct arm64_cpu_capabilities *cap,
> + int scope)
> +{
> + return has_cpuid_feature(cap, scope) &&
> + nested_param;
> +}
Nitpick:
How about putting them on a single line (they still perfectly
fit within the 80-column limit)?
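Concretely, the suggested form would presumably read (no functional
change intended):

    static bool has_nested_virt_support(const struct arm64_cpu_capabilities *cap,
                                        int scope)
    {
            return has_cpuid_feature(cap, scope) && nested_param;
    }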
Zenghui
* Re: [PATCH v4 01/66] arm64: Add ARM64_HAS_NESTED_VIRT cpufeature
From: Marc Zyngier @ 2021-05-24 12:38 UTC
To: Zenghui Yu
Cc: kernel-team, kvm, Andre Przywara, kvmarm, Jintack Lim, linux-arm-kernel
On Thu, 20 May 2021 14:32:17 +0100,
Zenghui Yu <yuzenghui@huawei.com> wrote:
>
> On 2021/5/11 0:58, Marc Zyngier wrote:
> > From: Jintack Lim <jintack.lim@linaro.org>
> >
> > Add a new ARM64_HAS_NESTED_VIRT feature to indicate that the
> > CPU has the ARMv8.3 nested virtualization capability.
> >
> > This will be used to support nested virtualization in KVM.
> >
> > Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
> > Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> > Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> > .../admin-guide/kernel-parameters.txt | 4 +++
> > arch/arm64/include/asm/cpucaps.h | 1 +
> > arch/arm64/kernel/cpufeature.c | 25 +++++++++++++++++++
> > 3 files changed, 30 insertions(+)
>
> [...]
>
> > diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> > index efed2830d141..056de86d7f6f 100644
> > --- a/arch/arm64/kernel/cpufeature.c
> > +++ b/arch/arm64/kernel/cpufeature.c
> > @@ -1645,6 +1645,21 @@ static void cpu_copy_el2regs(const struct arm64_cpu_capabilities *__unused)
> > write_sysreg(read_sysreg(tpidr_el1), tpidr_el2);
> > }
> > +static bool nested_param;
> > +static bool has_nested_virt_support(const struct arm64_cpu_capabilities *cap,
> > + int scope)
> > +{
> > + return has_cpuid_feature(cap, scope) &&
> > + nested_param;
> > +}
>
> Nitpick:
>
> How about putting them on a single line (they still perfectly
> fit within the 80-column limit)?
Fair enough.
M.
--
Without deviation from the norm, progress is not possible.
* [PATCH v4 02/66] KVM: arm64: nv: Introduce nested virtualization VCPU feature
From: Marc Zyngier @ 2021-05-10 16:58 UTC
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
From: Christoffer Dall <christoffer.dall@arm.com>
Introduce the feature bit and a primitive that checks if the feature is
set, behind a static key based on the cpus_have_const_cap() machinery.
Checking nested_virt_in_use() on systems without nested virt enabled
should have negligible overhead.
We don't yet allow userspace to actually set this feature.
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_nested.h | 14 ++++++++++++++
arch/arm64/include/uapi/asm/kvm.h | 1 +
2 files changed, 15 insertions(+)
create mode 100644 arch/arm64/include/asm/kvm_nested.h
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
new file mode 100644
index 000000000000..1028ac65a897
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ARM64_KVM_NESTED_H
+#define __ARM64_KVM_NESTED_H
+
+#include <linux/kvm_host.h>
+
+static inline bool nested_virt_in_use(const struct kvm_vcpu *vcpu)
+{
+ return (!__is_defined(__KVM_NVHE_HYPERVISOR__) &&
+ cpus_have_final_cap(ARM64_HAS_NESTED_VIRT) &&
+ test_bit(KVM_ARM_VCPU_HAS_EL2, vcpu->arch.features));
+}
+
+#endif /* __ARM64_KVM_NESTED_H */
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 24223adae150..fe3cb67f0d26 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -106,6 +106,7 @@ struct kvm_regs {
#define KVM_ARM_VCPU_SVE 4 /* enable SVE for this CPU */
#define KVM_ARM_VCPU_PTRAUTH_ADDRESS 5 /* VCPU uses address authentication */
#define KVM_ARM_VCPU_PTRAUTH_GENERIC 6 /* VCPU uses generic authentication */
+#define KVM_ARM_VCPU_HAS_EL2 7 /* Support nested virtualization */
struct kvm_vcpu_init {
__u32 target;
--
2.29.2
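As a usage sketch, later patches gate vEL2-only paths on this predicate;
the caller below is hypothetical:

    #include <asm/kvm_nested.h>

    static void handle_some_vel2_trap(struct kvm_vcpu *vcpu)
    {
            /* Folds to a static branch on hosts without NV enabled. */
            if (!nested_virt_in_use(vcpu))
                    return;

            /* ... vEL2 emulation would go here ... */
    }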
* [PATCH v4 03/66] KVM: arm64: nv: Reset VCPU to EL2 registers if VCPU nested virt is set
From: Marc Zyngier @ 2021-05-10 16:58 UTC
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
From: Christoffer Dall <christoffer.dall@arm.com>
Reset the VCPU with PSTATE.M = EL2h when the nested virtualization
feature is enabled on the VCPU.
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
[maz: rework register reset not to use empty data structures]
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/reset.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 956cdc240148..55863e8f4b0c 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -27,6 +27,7 @@
#include <asm/kvm_asm.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_mmu.h>
+#include <asm/kvm_nested.h>
#include <asm/virt.h>
/* Maximum phys_shift supported for any VM on this host */
@@ -38,6 +39,9 @@ static u32 kvm_ipa_limit;
#define VCPU_RESET_PSTATE_EL1 (PSR_MODE_EL1h | PSR_A_BIT | PSR_I_BIT | \
PSR_F_BIT | PSR_D_BIT)
+#define VCPU_RESET_PSTATE_EL2 (PSR_MODE_EL2h | PSR_A_BIT | PSR_I_BIT | \
+ PSR_F_BIT | PSR_D_BIT)
+
#define VCPU_RESET_PSTATE_SVC (PSR_AA32_MODE_SVC | PSR_AA32_A_BIT | \
PSR_AA32_I_BIT | PSR_AA32_F_BIT)
@@ -220,11 +224,18 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
switch (vcpu->arch.target) {
default:
if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
- if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1)) {
+ /*
+ * The CPU must support 32bit EL1, and 32bit
+ * NV is just not a thing...
+ */
+ if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1) ||
+ nested_virt_in_use(vcpu)) {
ret = -EINVAL;
goto out;
}
pstate = VCPU_RESET_PSTATE_SVC;
+ } else if (nested_virt_in_use(vcpu)) {
+ pstate = VCPU_RESET_PSTATE_EL2;
} else {
pstate = VCPU_RESET_PSTATE_EL1;
}
--
2.29.2
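From userspace, the effect becomes visible once a later patch in the
series lets KVM_ARM_VCPU_INIT accept the feature bit; a rough sketch,
assuming pre-created vm_fd/vcpu_fd descriptors:

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int vcpu_init_nested(int vm_fd, int vcpu_fd)
    {
            struct kvm_vcpu_init init = { 0 };

            if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init))
                    return -1;
            init.features[0] |= 1UL << KVM_ARM_VCPU_HAS_EL2; /* request vEL2 */
            /* On success, the vCPU now resets with PSTATE.M = EL2h. */
            return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
    }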
* [PATCH v4 04/66] KVM: arm64: nv: Allow userspace to set PSR_MODE_EL2x
From: Marc Zyngier @ 2021-05-10 16:58 UTC
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Christoffer Dall
From: Christoffer Dall <christoffer.dall@linaro.org>
We were not allowing userspace to set a more privileged mode for the VCPU
than EL1, but we should allow this when nested virtualization is enabled
for the VCPU.
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/guest.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 5cb4a1cd5603..e8388a01f763 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -24,6 +24,7 @@
#include <asm/fpsimd.h>
#include <asm/kvm.h>
#include <asm/kvm_emulate.h>
+#include <asm/kvm_nested.h>
#include <asm/sigcontext.h>
#include "trace.h"
@@ -242,6 +243,11 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
if (vcpu_el1_is_32bit(vcpu))
return -EINVAL;
break;
+ case PSR_MODE_EL2h:
+ case PSR_MODE_EL2t:
+ if (vcpu_el1_is_32bit(vcpu) || !nested_virt_in_use(vcpu))
+ return -EINVAL;
+ break;
default:
err = -EINVAL;
goto out;
--
2.29.2
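Seen from userspace, this permits setting PSTATE to an EL2 mode through
the existing core-reg interface. A hedged sketch against the arm64 uapi
headers (vcpu_fd assumed, error handling elided):

    #include <stddef.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>
    #include <asm/ptrace.h>

    static int vcpu_set_pstate_el2h(int vcpu_fd)
    {
            __u64 pstate = PSR_MODE_EL2h | PSR_A_BIT | PSR_I_BIT |
                           PSR_F_BIT | PSR_D_BIT;
            struct kvm_one_reg reg = {
                    .id = KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM_CORE |
                          KVM_REG_ARM_CORE_REG(regs.pstate),
                    .addr = (__u64)&pstate,
            };

            /* Still rejected with -EINVAL unless the vCPU has EL2 enabled. */
            return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
    }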
* [PATCH v4 05/66] KVM: arm64: nv: Add EL2 system registers to vcpu context
From: Marc Zyngier @ 2021-05-10 16:58 UTC
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
Add the minimal set of EL2 system registers to the vcpu context.
Nothing uses them just yet.
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 33 ++++++++++++++++++++++++++++++-
1 file changed, 32 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7cd7d5c8c4bc..1d82ad3c63b8 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -206,12 +206,43 @@ enum vcpu_sysreg {
CNTP_CVAL_EL0,
CNTP_CTL_EL0,
- /* 32bit specific registers. Keep them at the end of the range */
+ /* 32bit specific registers. */
DACR32_EL2, /* Domain Access Control Register */
IFSR32_EL2, /* Instruction Fault Status Register */
FPEXC32_EL2, /* Floating-Point Exception Control Register */
DBGVCR32_EL2, /* Debug Vector Catch Register */
+ /* EL2 registers */
+ VPIDR_EL2, /* Virtualization Processor ID Register */
+ VMPIDR_EL2, /* Virtualization Multiprocessor ID Register */
+ SCTLR_EL2, /* System Control Register (EL2) */
+ ACTLR_EL2, /* Auxiliary Control Register (EL2) */
+ HCR_EL2, /* Hypervisor Configuration Register */
+ MDCR_EL2, /* Monitor Debug Configuration Register (EL2) */
+ CPTR_EL2, /* Architectural Feature Trap Register (EL2) */
+ HSTR_EL2, /* Hypervisor System Trap Register */
+ HACR_EL2, /* Hypervisor Auxiliary Control Register */
+ TTBR0_EL2, /* Translation Table Base Register 0 (EL2) */
+ TTBR1_EL2, /* Translation Table Base Register 1 (EL2) */
+ TCR_EL2, /* Translation Control Register (EL2) */
+ VTTBR_EL2, /* Virtualization Translation Table Base Register */
+ VTCR_EL2, /* Virtualization Translation Control Register */
+ SPSR_EL2, /* EL2 saved program status register */
+ ELR_EL2, /* EL2 exception link register */
+ AFSR0_EL2, /* Auxiliary Fault Status Register 0 (EL2) */
+ AFSR1_EL2, /* Auxiliary Fault Status Register 1 (EL2) */
+ ESR_EL2, /* Exception Syndrome Register (EL2) */
+ FAR_EL2, /* Fault Address Register (EL2) */
+ HPFAR_EL2, /* Hypervisor IPA Fault Address Register */
+ MAIR_EL2, /* Memory Attribute Indirection Register (EL2) */
+ AMAIR_EL2, /* Auxiliary Memory Attribute Indirection Register (EL2) */
+ VBAR_EL2, /* Vector Base Address Register (EL2) */
+ RVBAR_EL2, /* Reset Vector Base Address Register */
+ CONTEXTIDR_EL2, /* Context ID Register (EL2) */
+ TPIDR_EL2, /* EL2 Software Thread ID Register */
+ CNTHCTL_EL2, /* Counter-timer Hypervisor Control register */
+ SP_EL2, /* EL2 Stack Pointer */
+
NR_SYS_REGS /* Nothing after this line! */
};
--
2.29.2
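For orientation, the new enum values index the vcpu sysreg file through
the existing __vcpu_sys_reg() accessor; a hypothetical snippet (nothing
in this patch reads these yet):

    u64 vhcr = __vcpu_sys_reg(vcpu, HCR_EL2);  /* guest's virtual HCR_EL2 */

    __vcpu_sys_reg(vcpu, VMPIDR_EL2) = mpidr;  /* reset by a later patch */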
* [PATCH v4 06/66] KVM: arm64: nv: Add nested virt VCPU primitives for vEL2 VCPU state
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
From: Christoffer Dall <christoffer.dall@arm.com>
When running a nested hypervisor we commonly have to figure out
whether the VCPU is executing in the context of a guest hypervisor,
of a nested guest, or of just a normal guest.
Add convenient primitives for this.
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_emulate.h | 55 ++++++++++++++++++++++++++++
1 file changed, 55 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index f612c090f2e4..706ef06c5f2b 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -173,6 +173,61 @@ static __always_inline void vcpu_set_reg(struct kvm_vcpu *vcpu, u8 reg_num,
vcpu_gp_regs(vcpu)->regs[reg_num] = val;
}
+static inline bool vcpu_mode_el2_ctxt(const struct kvm_cpu_context *ctxt)
+{
+ unsigned long cpsr = ctxt->regs.pstate;
+
+ switch (cpsr & (PSR_MODE32_BIT | PSR_MODE_MASK)) {
+ case PSR_MODE_EL2h:
+ case PSR_MODE_EL2t:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static inline bool vcpu_mode_el2(const struct kvm_vcpu *vcpu)
+{
+ return vcpu_mode_el2_ctxt(&vcpu->arch.ctxt);
+}
+
+static inline bool __vcpu_el2_e2h_is_set(const struct kvm_cpu_context *ctxt)
+{
+ return ctxt_sys_reg(ctxt, HCR_EL2) & HCR_E2H;
+}
+
+static inline bool vcpu_el2_e2h_is_set(const struct kvm_vcpu *vcpu)
+{
+ return __vcpu_el2_e2h_is_set(&vcpu->arch.ctxt);
+}
+
+static inline bool __vcpu_el2_tge_is_set(const struct kvm_cpu_context *ctxt)
+{
+ return ctxt_sys_reg(ctxt, HCR_EL2) & HCR_TGE;
+}
+
+static inline bool vcpu_el2_tge_is_set(const struct kvm_vcpu *vcpu)
+{
+ return __vcpu_el2_tge_is_set(&vcpu->arch.ctxt);
+}
+
+static inline bool __is_hyp_ctxt(const struct kvm_cpu_context *ctxt)
+{
+ /*
+ * We are in a hypervisor context if the vcpu mode is EL2, or if
+ * both the E2H and TGE bits are set. The latter means we are in the
+ * user space of a VHE guest kernel, which the ARMv8.1 ARM describes
+ * as 'InHost'.
+ */
+ return vcpu_mode_el2_ctxt(ctxt) ||
+ (__vcpu_el2_e2h_is_set(ctxt) && __vcpu_el2_tge_is_set(ctxt)) ||
+ WARN_ON(__vcpu_el2_tge_is_set(ctxt));
+}
+
+static inline bool is_hyp_ctxt(const struct kvm_vcpu *vcpu)
+{
+ return __is_hyp_ctxt(&vcpu->arch.ctxt);
+}
+
/*
* The layout of SPSR for an AArch32 state is different when observed from an
* AArch64 SPSR_ELx or an AArch32 SPSR_*. This function generates the AArch32
--
2.29.2
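To picture how these predicates compose, here is a loose sketch of the register redirection that later patches in the series implement for HCR_EL2.E2H; the helper name is hypothetical:

	/*
	 * Illustrative only: a VHE guest hypervisor (vEL2 with E2H set) uses
	 * the EL1 encoding of SCTLR but really targets the EL2 register, so
	 * the emulation has to redirect the sysreg index.
	 */
	static inline enum vcpu_sysreg vsctlr_index(const struct kvm_vcpu *vcpu)
	{
		if (vcpu_mode_el2(vcpu) && vcpu_el2_e2h_is_set(vcpu))
			return SCTLR_EL2;
		return SCTLR_EL1;
	}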
* [PATCH v4 07/66] KVM: arm64: nv: Handle HCR_EL2.NV system register traps
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
From: Jintack Lim <jintack.lim@linaro.org>
ARMv8.3 introduces a new bit in HCR_EL2: the NV bit. When this bit is
set, accessing EL2 registers from EL1 traps to EL2. In addition,
executing the following instructions at EL1 traps to EL2: the tlbi and
at instructions, eret, and msr/mrs accesses to SP_EL1. Most of the
instructions that trap to EL2 with the NV bit set were UNDEFINED at EL1
prior to ARMv8.3; the only exception is eret.
This patch sets up a handler for EL2 registers and SP_EL1 register
accesses at EL1. The host hypervisor keeps those register values in
memory, and will emulate their behavior.
This patch doesn't set the NV bit yet. It will be set in a later patch
once nested virtualization support is completed.
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
[maz: added SCTLR_EL2 RES0/RES1 handling]
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/sysreg.h | 41 +++++++++++-
arch/arm64/kvm/sys_regs.c | 109 ++++++++++++++++++++++++++++++--
2 files changed, 144 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 65d15700a168..10c0d3a476e2 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -525,10 +525,26 @@
#define SYS_PMCCFILTR_EL0 sys_reg(3, 3, 14, 15, 7)
+#define SYS_VPIDR_EL2 sys_reg(3, 4, 0, 0, 0)
+#define SYS_VMPIDR_EL2 sys_reg(3, 4, 0, 0, 5)
+
#define SYS_SCTLR_EL2 sys_reg(3, 4, 1, 0, 0)
+#define SYS_ACTLR_EL2 sys_reg(3, 4, 1, 0, 1)
+#define SYS_HCR_EL2 sys_reg(3, 4, 1, 1, 0)
+#define SYS_MDCR_EL2 sys_reg(3, 4, 1, 1, 1)
+#define SYS_CPTR_EL2 sys_reg(3, 4, 1, 1, 2)
+#define SYS_HSTR_EL2 sys_reg(3, 4, 1, 1, 3)
#define SYS_HFGRTR_EL2 sys_reg(3, 4, 1, 1, 4)
#define SYS_HFGWTR_EL2 sys_reg(3, 4, 1, 1, 5)
#define SYS_HFGITR_EL2 sys_reg(3, 4, 1, 1, 6)
+#define SYS_HACR_EL2 sys_reg(3, 4, 1, 1, 7)
+
+#define SYS_TTBR0_EL2 sys_reg(3, 4, 2, 0, 0)
+#define SYS_TTBR1_EL2 sys_reg(3, 4, 2, 0, 1)
+#define SYS_TCR_EL2 sys_reg(3, 4, 2, 0, 2)
+#define SYS_VTTBR_EL2 sys_reg(3, 4, 2, 1, 0)
+#define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2)
+
#define SYS_ZCR_EL2 sys_reg(3, 4, 1, 2, 0)
#define SYS_TRFCR_EL2 sys_reg(3, 4, 1, 2, 1)
#define SYS_DACR32_EL2 sys_reg(3, 4, 3, 0, 0)
@@ -537,14 +553,26 @@
#define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6)
#define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0)
#define SYS_ELR_EL2 sys_reg(3, 4, 4, 0, 1)
+#define SYS_SP_EL1 sys_reg(3, 4, 4, 1, 0)
#define SYS_IFSR32_EL2 sys_reg(3, 4, 5, 0, 1)
+#define SYS_AFSR0_EL2 sys_reg(3, 4, 5, 1, 0)
+#define SYS_AFSR1_EL2 sys_reg(3, 4, 5, 1, 1)
#define SYS_ESR_EL2 sys_reg(3, 4, 5, 2, 0)
#define SYS_VSESR_EL2 sys_reg(3, 4, 5, 2, 3)
#define SYS_FPEXC32_EL2 sys_reg(3, 4, 5, 3, 0)
#define SYS_TFSR_EL2 sys_reg(3, 4, 5, 6, 0)
#define SYS_FAR_EL2 sys_reg(3, 4, 6, 0, 0)
-#define SYS_VDISR_EL2 sys_reg(3, 4, 12, 1, 1)
+#define SYS_HPFAR_EL2 sys_reg(3, 4, 6, 0, 4)
+
+#define SYS_MAIR_EL2 sys_reg(3, 4, 10, 2, 0)
+#define SYS_AMAIR_EL2 sys_reg(3, 4, 10, 3, 0)
+
+#define SYS_VBAR_EL2 sys_reg(3, 4, 12, 0, 0)
+#define SYS_RVBAR_EL2 sys_reg(3, 4, 12, 0, 1)
+#define SYS_RMR_EL2 sys_reg(3, 4, 12, 0, 2)
+#define SYS_VDISR_EL2 sys_reg(3, 4, 12, 1, 1)
#define __SYS__AP0Rx_EL2(x) sys_reg(3, 4, 12, 8, x)
#define SYS_ICH_AP0R0_EL2 __SYS__AP0Rx_EL2(0)
#define SYS_ICH_AP0R1_EL2 __SYS__AP0Rx_EL2(1)
@@ -586,15 +614,24 @@
#define SYS_ICH_LR14_EL2 __SYS__LR8_EL2(6)
#define SYS_ICH_LR15_EL2 __SYS__LR8_EL2(7)
+#define SYS_CONTEXTIDR_EL2 sys_reg(3, 4, 13, 0, 1)
+#define SYS_TPIDR_EL2 sys_reg(3, 4, 13, 0, 2)
+
+#define SYS_CNTVOFF_EL2 sys_reg(3, 4, 14, 0, 3)
+#define SYS_CNTHCTL_EL2 sys_reg(3, 4, 14, 1, 0)
+
/* VHE encodings for architectural EL0/1 system registers */
#define SYS_SCTLR_EL12 sys_reg(3, 5, 1, 0, 0)
#define SYS_CPACR_EL12 sys_reg(3, 5, 1, 0, 2)
#define SYS_ZCR_EL12 sys_reg(3, 5, 1, 2, 0)
+
#define SYS_TTBR0_EL12 sys_reg(3, 5, 2, 0, 0)
#define SYS_TTBR1_EL12 sys_reg(3, 5, 2, 0, 1)
#define SYS_TCR_EL12 sys_reg(3, 5, 2, 0, 2)
+
#define SYS_SPSR_EL12 sys_reg(3, 5, 4, 0, 0)
#define SYS_ELR_EL12 sys_reg(3, 5, 4, 0, 1)
+
#define SYS_AFSR0_EL12 sys_reg(3, 5, 5, 1, 0)
#define SYS_AFSR1_EL12 sys_reg(3, 5, 5, 1, 1)
#define SYS_ESR_EL12 sys_reg(3, 5, 5, 2, 0)
@@ -612,6 +649,8 @@
#define SYS_CNTV_CTL_EL02 sys_reg(3, 5, 14, 3, 1)
#define SYS_CNTV_CVAL_EL02 sys_reg(3, 5, 14, 3, 2)
+#define SYS_SP_EL2 sys_reg(3, 6, 4, 1, 0)
+
/* Common SCTLR_ELx flags. */
#define SCTLR_ELx_DSSBS (BIT(44))
#define SCTLR_ELx_ATA (BIT(43))
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 76ea2800c33e..68266eb139b7 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -109,6 +109,46 @@ static u32 get_ccsidr(u32 csselr)
return ccsidr;
}
+static bool access_rw(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ if (p->is_write)
+ vcpu_write_sys_reg(vcpu, p->regval, r->reg);
+ else
+ p->regval = vcpu_read_sys_reg(vcpu, r->reg);
+
+ return true;
+}
+
+static bool access_sctlr_el2(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ if (p->is_write) {
+ u64 val = p->regval;
+
+ if (vcpu_el2_e2h_is_set(vcpu) && vcpu_el2_tge_is_set(vcpu)) {
+ val &= ~(GENMASK_ULL(63, 45) | GENMASK_ULL(34, 32) |
+ BIT_ULL(17) | BIT_ULL(9));
+ val |= SCTLR_EL1_RES1;
+ } else {
+ val &= ~(GENMASK_ULL(63, 45) | BIT_ULL(42) |
+ GENMASK_ULL(39, 38) | GENMASK_ULL(35, 32) |
+ BIT_ULL(26) | BIT_ULL(24) | BIT_ULL(20) |
+ BIT_ULL(17) | GENMASK_ULL(15, 14) |
+ GENMASK_ULL(10, 7));
+ val |= SCTLR_EL2_RES1;
+ }
+
+ vcpu_write_sys_reg(vcpu, val, r->reg);
+ } else {
+ p->regval = vcpu_read_sys_reg(vcpu, r->reg);
+ }
+
+ return true;
+}
+
/*
* See note at ARMv7 ARM B1.14.4 (TL;DR: S/W ops are not easily virtualized).
*/
@@ -267,6 +307,14 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu,
return read_zero(vcpu, p);
}
+static bool trap_undef(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ kvm_inject_undefined(vcpu);
+ return false;
+}
+
/*
* ARMv8.1 mandates at least a trivial LORegion implementation, where all the
* RW registers are RES0 (which we can implement as RAZ/WI). On an ARMv8.0
@@ -346,12 +394,9 @@ static bool trap_debug_regs(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
- if (p->is_write) {
- vcpu_write_sys_reg(vcpu, p->regval, r->reg);
+ access_rw(vcpu, p, r);
+ if (p->is_write)
vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
- } else {
- p->regval = vcpu_read_sys_reg(vcpu, r->reg);
- }
trace_trap_reg(__func__, r->reg, p->is_write, p->regval);
@@ -1335,6 +1380,18 @@ static bool access_ccsidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
.set_user = set_raz_id_reg, \
}
+static bool access_sp_el1(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ if (p->is_write)
+ __vcpu_sys_reg(vcpu, SP_EL1) = p->regval;
+ else
+ p->regval = __vcpu_sys_reg(vcpu, SP_EL1);
+
+ return true;
+}
+
/*
* Architected system registers.
* Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
@@ -1744,9 +1801,51 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ PMU_SYS_REG(SYS_PMCCFILTR_EL0), .access = access_pmu_evtyper,
.reset = reset_val, .reg = PMCCFILTR_EL0, .val = 0 },
+ { SYS_DESC(SYS_VPIDR_EL2), access_rw, reset_val, VPIDR_EL2, 0 },
+ { SYS_DESC(SYS_VMPIDR_EL2), access_rw, reset_val, VMPIDR_EL2, 0 },
+
+ { SYS_DESC(SYS_SCTLR_EL2), access_sctlr_el2, reset_val, SCTLR_EL2, SCTLR_EL2_RES1 },
+ { SYS_DESC(SYS_ACTLR_EL2), access_rw, reset_val, ACTLR_EL2, 0 },
+ { SYS_DESC(SYS_HCR_EL2), access_rw, reset_val, HCR_EL2, 0 },
+ { SYS_DESC(SYS_MDCR_EL2), access_rw, reset_val, MDCR_EL2, 0 },
+ { SYS_DESC(SYS_CPTR_EL2), access_rw, reset_val, CPTR_EL2, CPTR_EL2_RES1 },
+ { SYS_DESC(SYS_HSTR_EL2), access_rw, reset_val, HSTR_EL2, 0 },
+ { SYS_DESC(SYS_HACR_EL2), access_rw, reset_val, HACR_EL2, 0 },
+
+ { SYS_DESC(SYS_TTBR0_EL2), access_rw, reset_val, TTBR0_EL2, 0 },
+ { SYS_DESC(SYS_TTBR1_EL2), access_rw, reset_val, TTBR1_EL2, 0 },
+ { SYS_DESC(SYS_TCR_EL2), access_rw, reset_val, TCR_EL2, TCR_EL2_RES1 },
+ { SYS_DESC(SYS_VTTBR_EL2), access_rw, reset_val, VTTBR_EL2, 0 },
+ { SYS_DESC(SYS_VTCR_EL2), access_rw, reset_val, VTCR_EL2, 0 },
+
{ SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 },
+ { SYS_DESC(SYS_SPSR_EL2), access_rw, reset_val, SPSR_EL2, 0 },
+ { SYS_DESC(SYS_ELR_EL2), access_rw, reset_val, ELR_EL2, 0 },
+ { SYS_DESC(SYS_SP_EL1), access_sp_el1 },
+
{ SYS_DESC(SYS_IFSR32_EL2), NULL, reset_unknown, IFSR32_EL2 },
+ { SYS_DESC(SYS_AFSR0_EL2), access_rw, reset_val, AFSR0_EL2, 0 },
+ { SYS_DESC(SYS_AFSR1_EL2), access_rw, reset_val, AFSR1_EL2, 0 },
+ { SYS_DESC(SYS_ESR_EL2), access_rw, reset_val, ESR_EL2, 0 },
{ SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x700 },
+
+ { SYS_DESC(SYS_FAR_EL2), access_rw, reset_val, FAR_EL2, 0 },
+ { SYS_DESC(SYS_HPFAR_EL2), access_rw, reset_val, HPFAR_EL2, 0 },
+
+ { SYS_DESC(SYS_MAIR_EL2), access_rw, reset_val, MAIR_EL2, 0 },
+ { SYS_DESC(SYS_AMAIR_EL2), access_rw, reset_val, AMAIR_EL2, 0 },
+
+ { SYS_DESC(SYS_VBAR_EL2), access_rw, reset_val, VBAR_EL2, 0 },
+ { SYS_DESC(SYS_RVBAR_EL2), access_rw, reset_val, RVBAR_EL2, 0 },
+ { SYS_DESC(SYS_RMR_EL2), trap_undef },
+
+ { SYS_DESC(SYS_CONTEXTIDR_EL2), access_rw, reset_val, CONTEXTIDR_EL2, 0 },
+ { SYS_DESC(SYS_TPIDR_EL2), access_rw, reset_val, TPIDR_EL2, 0 },
+
+ { SYS_DESC(SYS_CNTVOFF_EL2), access_rw, reset_val, CNTVOFF_EL2, 0 },
+ { SYS_DESC(SYS_CNTHCTL_EL2), access_rw, reset_val, CNTHCTL_EL2, 0 },
+
+ { SYS_DESC(SYS_SP_EL2), NULL, reset_unknown, SP_EL2 },
};
static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
--
2.29.2
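Since sys_reg_descs must stay sorted ascending by (Op0, Op1, CRn, CRm, Op2), it may help to unpack a few of the encodings this patch adds; the values below come straight from the hunks above:

	SYS_VPIDR_EL2 = sys_reg(3, 4, 0, 0, 0)  /* Op0=3, Op1=4, CRn=0, CRm=0, Op2=0 */
	SYS_SCTLR_EL2 = sys_reg(3, 4, 1, 0, 0)  /* CRn=1 sorts it after VMPIDR_EL2 */
	SYS_SP_EL1    = sys_reg(3, 4, 4, 1, 0)  /* CRm=1 places it right after ELR_EL2 */
	SYS_SP_EL2    = sys_reg(3, 6, 4, 1, 0)  /* Op1=6 makes it the last table entry */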
* [PATCH v4 08/66] KVM: arm64: nv: Reset VMPIDR_EL2 and VPIDR_EL2 to sane values
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
From: Christoffer Dall <christoffer.dall@arm.com>
The VMPIDR_EL2 and VPIDR_EL2 registers are architecturally UNKNOWN at
reset, but let's be nice to a guest hypervisor behaving foolishly and
reset them to something reasonable anyway.
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 25 +++++++++++++++++++++----
1 file changed, 21 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 68266eb139b7..912535c25615 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -622,7 +622,7 @@ static void reset_actlr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
vcpu_write_sys_reg(vcpu, actlr, ACTLR_EL1);
}
-static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 compute_reset_mpidr(struct kvm_vcpu *vcpu)
{
u64 mpidr;
@@ -636,7 +636,24 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
mpidr = (vcpu->vcpu_id & 0x0f) << MPIDR_LEVEL_SHIFT(0);
mpidr |= ((vcpu->vcpu_id >> 4) & 0xff) << MPIDR_LEVEL_SHIFT(1);
mpidr |= ((vcpu->vcpu_id >> 12) & 0xff) << MPIDR_LEVEL_SHIFT(2);
- vcpu_write_sys_reg(vcpu, (1ULL << 31) | mpidr, MPIDR_EL1);
+ mpidr |= (1ULL << 31);
+
+ return mpidr;
+}
+
+static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+ vcpu_write_sys_reg(vcpu, compute_reset_mpidr(vcpu), MPIDR_EL1);
+}
+
+static void reset_vmpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+ vcpu_write_sys_reg(vcpu, compute_reset_mpidr(vcpu), VMPIDR_EL2);
+}
+
+static void reset_vpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+ vcpu_write_sys_reg(vcpu, read_cpuid_id(), VPIDR_EL2);
}
static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
@@ -1801,8 +1818,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ PMU_SYS_REG(SYS_PMCCFILTR_EL0), .access = access_pmu_evtyper,
.reset = reset_val, .reg = PMCCFILTR_EL0, .val = 0 },
- { SYS_DESC(SYS_VPIDR_EL2), access_rw, reset_val, VPIDR_EL2, 0 },
- { SYS_DESC(SYS_VMPIDR_EL2), access_rw, reset_val, VMPIDR_EL2, 0 },
+ { SYS_DESC(SYS_VPIDR_EL2), access_rw, reset_vpidr, VPIDR_EL2 },
+ { SYS_DESC(SYS_VMPIDR_EL2), access_rw, reset_vmpidr, VMPIDR_EL2 },
{ SYS_DESC(SYS_SCTLR_EL2), access_sctlr_el2, reset_val, SCTLR_EL2, SCTLR_EL2_RES1 },
{ SYS_DESC(SYS_ACTLR_EL2), access_rw, reset_val, ACTLR_EL2, 0 },
--
2.29.2
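For concreteness, a worked example of what compute_reset_mpidr() produces (on arm64, MPIDR_LEVEL_SHIFT(n) is 8 * n, and bit 31 is the RES1 bit of MPIDR):

	vcpu_id = 0x1234:
	  Aff0 = vcpu_id & 0x0f          = 0x04  /* bits [7:0]   */
	  Aff1 = (vcpu_id >> 4) & 0xff   = 0x23  /* bits [15:8]  */
	  Aff2 = (vcpu_id >> 12) & 0xff  = 0x01  /* bits [23:16] */
	  VMPIDR_EL2 = (1ULL << 31) | 0x012304 = 0x80012304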
* [PATCH v4 09/66] KVM: arm64: nv: Support virtual EL2 exceptions
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
From: Jintack Lim <jintack.lim@linaro.org>
Support injecting exceptions into, and performing exception returns
from, virtual EL2. This must be done entirely in software, except when
taking an exception from vEL0 to vEL2 while the virtual
HCR_EL2.{E2H,TGE} == {1,1} (i.e. a VHE guest hypervisor).
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
[maz: switch to common exception injection framework]
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_arm.h | 17 +++
arch/arm64/include/asm/kvm_emulate.h | 10 ++
arch/arm64/kvm/Makefile | 2 +-
arch/arm64/kvm/emulate-nested.c | 176 +++++++++++++++++++++++++++
arch/arm64/kvm/hyp/exception.c | 45 +++++--
arch/arm64/kvm/inject_fault.c | 63 ++++++++--
arch/arm64/kvm/trace_arm.h | 59 +++++++++
7 files changed, 354 insertions(+), 18 deletions(-)
create mode 100644 arch/arm64/kvm/emulate-nested.c
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 692c9049befa..e27da4165d43 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -335,4 +335,21 @@
#define CPACR_EL1_TTA (1 << 28)
#define CPACR_EL1_DEFAULT (CPACR_EL1_FPEN | CPACR_EL1_ZEN_EL1EN)
+#define kvm_mode_names \
+ { PSR_MODE_EL0t, "EL0t" }, \
+ { PSR_MODE_EL1t, "EL1t" }, \
+ { PSR_MODE_EL1h, "EL1h" }, \
+ { PSR_MODE_EL2t, "EL2t" }, \
+ { PSR_MODE_EL2h, "EL2h" }, \
+ { PSR_MODE_EL3t, "EL3t" }, \
+ { PSR_MODE_EL3h, "EL3h" }, \
+ { PSR_AA32_MODE_USR, "32-bit USR" }, \
+ { PSR_AA32_MODE_FIQ, "32-bit FIQ" }, \
+ { PSR_AA32_MODE_IRQ, "32-bit IRQ" }, \
+ { PSR_AA32_MODE_SVC, "32-bit SVC" }, \
+ { PSR_AA32_MODE_ABT, "32-bit ABT" }, \
+ { PSR_AA32_MODE_HYP, "32-bit HYP" }, \
+ { PSR_AA32_MODE_UND, "32-bit UND" }, \
+ { PSR_AA32_MODE_SYS, "32-bit SYS" }
+
#endif /* __ARM64_KVM_ARM_H__ */
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 706ef06c5f2b..1c10fbe7d1c3 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -33,6 +33,12 @@ enum exception_type {
except_type_serror = 0x180,
};
+#define kvm_exception_type_names \
+ { except_type_sync, "SYNC" }, \
+ { except_type_irq, "IRQ" }, \
+ { except_type_fiq, "FIQ" }, \
+ { except_type_serror, "SERROR" }
+
bool kvm_condition_valid32(const struct kvm_vcpu *vcpu);
void kvm_skip_instr32(struct kvm_vcpu *vcpu);
@@ -41,6 +47,10 @@ void kvm_inject_vabt(struct kvm_vcpu *vcpu);
void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
+void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu);
+int kvm_inject_nested_sync(struct kvm_vcpu *vcpu, u64 esr_el2);
+int kvm_inject_nested_irq(struct kvm_vcpu *vcpu);
+
static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
{
return !(vcpu->arch.hcr_el2 & HCR_RW);
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 589921392cb1..ebf6ddb3cd77 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -16,7 +16,7 @@ kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
inject_fault.o va_layout.o handle_exit.o \
guest.o debug.o reset.o sys_regs.o \
vgic-sys-reg-v3.o fpsimd.o pmu.o \
- arch_timer.o trng.o\
+ arch_timer.o trng.o emulate-nested.o \
vgic/vgic.o vgic/vgic-init.o \
vgic/vgic-irqfd.o vgic/vgic-v2.o \
vgic/vgic-v3.o vgic/vgic-v4.o \
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
new file mode 100644
index 000000000000..ee91bcd925d8
--- /dev/null
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -0,0 +1,176 @@
+/*
+ * Copyright (C) 2016 - Linaro and Columbia University
+ * Author: Jintack Lim <jintack.lim@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_nested.h>
+
+#include "hyp/include/hyp/adjust_pc.h"
+
+#include "trace.h"
+
+void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
+{
+ u64 spsr, elr, mode;
+ bool direct_eret;
+
+ /*
+ * Going through the whole put/load motions is a waste of time
+ * if this is a VHE guest hypervisor returning to its own
+ * userspace, or the hypervisor performing a local exception
+ * return. No need to save/restore registers, no need to
+ * switch S2 MMU. Just do the canonical ERET.
+ */
+ spsr = vcpu_read_sys_reg(vcpu, SPSR_EL2);
+ mode = spsr & (PSR_MODE_MASK | PSR_MODE32_BIT);
+
+ direct_eret = (mode == PSR_MODE_EL0t &&
+ vcpu_el2_e2h_is_set(vcpu) &&
+ vcpu_el2_tge_is_set(vcpu));
+ direct_eret |= (mode == PSR_MODE_EL2h || mode == PSR_MODE_EL2t);
+
+ if (direct_eret) {
+ *vcpu_pc(vcpu) = vcpu_read_sys_reg(vcpu, ELR_EL2);
+ *vcpu_cpsr(vcpu) = spsr;
+ trace_kvm_nested_eret(vcpu, *vcpu_pc(vcpu), spsr);
+ return;
+ }
+
+ preempt_disable();
+ kvm_arch_vcpu_put(vcpu);
+
+ elr = __vcpu_sys_reg(vcpu, ELR_EL2);
+
+ trace_kvm_nested_eret(vcpu, elr, spsr);
+
+ /*
+ * Note that the current exception level is always the virtual EL2,
+ * since we only set the HCR_EL2.NV bit when entering the virtual EL2.
+ */
+ *vcpu_pc(vcpu) = elr;
+ *vcpu_cpsr(vcpu) = spsr;
+
+ kvm_arch_vcpu_load(vcpu, smp_processor_id());
+ preempt_enable();
+}
+
+static void kvm_inject_el2_exception(struct kvm_vcpu *vcpu, u64 esr_el2,
+ enum exception_type type)
+{
+ trace_kvm_inject_nested_exception(vcpu, esr_el2, type);
+
+ switch (type) {
+ case except_type_sync:
+ vcpu->arch.flags |= KVM_ARM64_EXCEPT_AA64_ELx_SYNC;
+ break;
+ case except_type_irq:
+ vcpu->arch.flags |= KVM_ARM64_EXCEPT_AA64_ELx_IRQ;
+ break;
+ default:
+ WARN_ONCE(1, "Unsupported EL2 exception injection %d\n", type);
+ }
+
+ vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL2 |
+ KVM_ARM64_PENDING_EXCEPTION);
+
+ vcpu_write_sys_reg(vcpu, esr_el2, ESR_EL2);
+}
+
+/*
+ * Emulate taking an exception to EL2.
+ * See ARM ARM J8.1.2 AArch64.TakeException()
+ */
+static int kvm_inject_nested(struct kvm_vcpu *vcpu, u64 esr_el2,
+ enum exception_type type)
+{
+ u64 pstate, mode;
+ bool direct_inject;
+
+ if (!nested_virt_in_use(vcpu)) {
+ kvm_err("Unexpected call to %s for the non-nesting configuration\n",
+ __func__);
+ return -EINVAL;
+ }
+
+ /*
+ * As for ERET, we can avoid doing too much on the injection path by
+ * checking that we either took the exception from a VHE host
+ * userspace or from vEL2. In these cases, there is no change in
+ * translation regime (or anything else), so let's do as little as
+ * possible.
+ */
+ pstate = *vcpu_cpsr(vcpu);
+ mode = pstate & (PSR_MODE_MASK | PSR_MODE32_BIT);
+
+ direct_inject = (mode == PSR_MODE_EL0t &&
+ vcpu_el2_e2h_is_set(vcpu) &&
+ vcpu_el2_tge_is_set(vcpu));
+ direct_inject |= (mode == PSR_MODE_EL2h || mode == PSR_MODE_EL2t);
+
+ if (direct_inject) {
+ kvm_inject_el2_exception(vcpu, esr_el2, type);
+ return 1;
+ }
+
+ preempt_disable();
+ kvm_arch_vcpu_put(vcpu);
+
+ kvm_inject_el2_exception(vcpu, esr_el2, type);
+
+ /*
+ * A hard requirement is that a switch between EL1 and EL2
+ * contexts has to happen between a put/load, so that we can
+ * pick the correct timer and interrupt configuration, among
+ * other things.
+ *
+ * Make sure the exception actually took place before we load
+ * the new context.
+ */
+ __adjust_pc(vcpu);
+
+ kvm_arch_vcpu_load(vcpu, smp_processor_id());
+ preempt_enable();
+
+ return 1;
+}
+
+int kvm_inject_nested_sync(struct kvm_vcpu *vcpu, u64 esr_el2)
+{
+ return kvm_inject_nested(vcpu, esr_el2, except_type_sync);
+}
+
+int kvm_inject_nested_irq(struct kvm_vcpu *vcpu)
+{
+ /*
+ * Do not inject an irq if all of the following hold:
+ * - the current exception level is EL2, and
+ * - the virtual HCR_EL2.TGE bit is 0, and
+ * - the virtual HCR_EL2.IMO bit is 0.
+ *
+ * See Table D1-17 "Physical interrupt target and masking when EL3 is
+ * not implemented and EL2 is implemented" in ARM DDI 0487C.a.
+ */
+
+ if (vcpu_mode_el2(vcpu) && !vcpu_el2_tge_is_set(vcpu) &&
+ !(__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_IMO))
+ return 1;
+
+ /* esr_el2 value doesn't matter for exits due to irqs. */
+ return kvm_inject_nested(vcpu, 0, except_type_irq);
+}
diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index 73629094f903..b6424c01a85a 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -13,6 +13,7 @@
#include <hyp/adjust_pc.h>
#include <linux/kvm_host.h>
#include <asm/kvm_emulate.h>
+#include <asm/kvm_nested.h>
#if !defined (__KVM_NVHE_HYPERVISOR__) && !defined (__KVM_VHE_HYPERVISOR__)
#error Hypervisor code only!
@@ -22,7 +23,9 @@ static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
{
u64 val;
- if (__vcpu_read_sys_reg_from_cpu(reg, &val))
+ if (unlikely(nested_virt_in_use(vcpu)))
+ return vcpu_read_sys_reg(vcpu, reg);
+ else if (__vcpu_read_sys_reg_from_cpu(reg, &val))
return val;
return __vcpu_sys_reg(vcpu, reg);
@@ -30,14 +33,26 @@ static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
{
- if (__vcpu_write_sys_reg_to_cpu(val, reg))
+ if (unlikely(nested_virt_in_use(vcpu)))
+ vcpu_write_sys_reg(vcpu, val, reg);
+ else if (__vcpu_write_sys_reg_to_cpu(val, reg))
return;
__vcpu_sys_reg(vcpu, reg) = val;
}
-static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val)
+static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, unsigned long target_mode,
+ u64 val)
{
+ if (unlikely(nested_virt_in_use(vcpu))) {
+ if (target_mode == PSR_MODE_EL1h)
+ vcpu_write_sys_reg(vcpu, val, SPSR_EL1);
+ else
+ vcpu_write_sys_reg(vcpu, val, SPSR_EL2);
+
+ return;
+ }
+
write_sysreg_el1(val, SYS_SPSR);
}
@@ -97,6 +112,11 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL1);
__vcpu_write_sys_reg(vcpu, *vcpu_pc(vcpu), ELR_EL1);
break;
+ case PSR_MODE_EL2h:
+ vbar = __vcpu_read_sys_reg(vcpu, VBAR_EL2);
+ sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL2);
+ __vcpu_write_sys_reg(vcpu, *vcpu_pc(vcpu), ELR_EL2);
+ break;
default:
/* Don't do that */
BUG();
@@ -148,7 +168,7 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
new |= target_mode;
*vcpu_cpsr(vcpu) = new;
- __vcpu_write_spsr(vcpu, old);
+ __vcpu_write_spsr(vcpu, target_mode, old);
}
/*
@@ -319,11 +339,22 @@ void kvm_inject_exception(struct kvm_vcpu *vcpu)
KVM_ARM64_EXCEPT_AA64_EL1):
enter_exception64(vcpu, PSR_MODE_EL1h, except_type_sync);
break;
+
+ case (KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
+ KVM_ARM64_EXCEPT_AA64_EL2):
+ enter_exception64(vcpu, PSR_MODE_EL2h, except_type_sync);
+ break;
+
+ case (KVM_ARM64_EXCEPT_AA64_ELx_IRQ |
+ KVM_ARM64_EXCEPT_AA64_EL2):
+ enter_exception64(vcpu, PSR_MODE_EL2h, except_type_irq);
+ break;
+
default:
/*
- * Only EL1_SYNC makes sense so far, EL2_{SYNC,IRQ}
- * will be implemented at some point. Everything
- * else gets silently ignored.
+ * Only EL1_SYNC and EL2_{SYNC,IRQ} make
+ * sense so far. Everything else gets silently
+ * ignored.
*/
break;
}
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index b47df73e98d7..06df0bb848ca 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -12,19 +12,53 @@
#include <linux/kvm_host.h>
#include <asm/kvm_emulate.h>
+#include <asm/kvm_nested.h>
#include <asm/esr.h>
+static void pend_sync_exception(struct kvm_vcpu *vcpu)
+{
+ vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
+ KVM_ARM64_PENDING_EXCEPTION);
+
+ /* If not nesting, EL1 is the only possible exception target */
+ if (likely(!nested_virt_in_use(vcpu))) {
+ vcpu->arch.flags |= KVM_ARM64_EXCEPT_AA64_EL1;
+ return;
+ }
+
+ /*
+ * With NV, we need to pick between EL1 and EL2. Note that we
+ * never deal with a nesting exception here, hence never
+ * changing context, and the exception itself can be delayed
+ * until the next entry.
+ */
+ switch (*vcpu_cpsr(vcpu) & PSR_MODE_MASK) {
+ case PSR_MODE_EL2h:
+ case PSR_MODE_EL2t:
+ vcpu->arch.flags |= KVM_ARM64_EXCEPT_AA64_EL2;
+ break;
+ case PSR_MODE_EL1h:
+ case PSR_MODE_EL1t:
+ vcpu->arch.flags |= KVM_ARM64_EXCEPT_AA64_EL1;
+ break;
+ case PSR_MODE_EL0t:
+ if (vcpu_el2_tge_is_set(vcpu))
+ vcpu->arch.flags |= KVM_ARM64_EXCEPT_AA64_EL2;
+ else
+ vcpu->arch.flags |= KVM_ARM64_EXCEPT_AA64_EL1;
+ break;
+ default:
+ BUG();
+ }
+}
+
static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr)
{
unsigned long cpsr = *vcpu_cpsr(vcpu);
bool is_aarch32 = vcpu_mode_is_32bit(vcpu);
u32 esr = 0;
- vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 |
- KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
- KVM_ARM64_PENDING_EXCEPTION);
-
- vcpu_write_sys_reg(vcpu, addr, FAR_EL1);
+ pend_sync_exception(vcpu);
/*
* Build an {i,d}abort, depending on the level and the
@@ -45,16 +79,22 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
if (!is_iabt)
esr |= ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT;
- vcpu_write_sys_reg(vcpu, esr | ESR_ELx_FSC_EXTABT, ESR_EL1);
+ esr |= ESR_ELx_FSC_EXTABT;
+
+ if (vcpu->arch.flags & KVM_ARM64_EXCEPT_AA64_EL1) {
+ vcpu_write_sys_reg(vcpu, addr, FAR_EL1);
+ vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
+ } else {
+ vcpu_write_sys_reg(vcpu, addr, FAR_EL2);
+ vcpu_write_sys_reg(vcpu, esr, ESR_EL2);
+ }
}
static void inject_undef64(struct kvm_vcpu *vcpu)
{
u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
- vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 |
- KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
- KVM_ARM64_PENDING_EXCEPTION);
+ pend_sync_exception(vcpu);
/*
* Build an unknown exception, depending on the instruction
@@ -63,7 +103,10 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
if (kvm_vcpu_trap_il_is32bit(vcpu))
esr |= ESR_ELx_IL;
- vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
+ if (vcpu->arch.flags & KVM_ARM64_EXCEPT_AA64_EL1)
+ vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
+ else
+ vcpu_write_sys_reg(vcpu, esr, ESR_EL2);
}
#define DFSR_FSC_EXTABT_LPAE 0x10
diff --git a/arch/arm64/kvm/trace_arm.h b/arch/arm64/kvm/trace_arm.h
index 33e4e7dd2719..f3e46a976125 100644
--- a/arch/arm64/kvm/trace_arm.h
+++ b/arch/arm64/kvm/trace_arm.h
@@ -2,6 +2,7 @@
#if !defined(_TRACE_ARM_ARM64_KVM_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_ARM_ARM64_KVM_H
+#include <asm/kvm_emulate.h>
#include <kvm/arm_arch_timer.h>
#include <linux/tracepoint.h>
@@ -301,6 +302,64 @@ TRACE_EVENT(kvm_timer_emulate,
__entry->timer_idx, __entry->should_fire)
);
+TRACE_EVENT(kvm_nested_eret,
+ TP_PROTO(struct kvm_vcpu *vcpu, unsigned long elr_el2,
+ unsigned long spsr_el2),
+ TP_ARGS(vcpu, elr_el2, spsr_el2),
+
+ TP_STRUCT__entry(
+ __field(struct kvm_vcpu *, vcpu)
+ __field(unsigned long, elr_el2)
+ __field(unsigned long, spsr_el2)
+ __field(unsigned long, target_mode)
+ __field(unsigned long, hcr_el2)
+ ),
+
+ TP_fast_assign(
+ __entry->vcpu = vcpu;
+ __entry->elr_el2 = elr_el2;
+ __entry->spsr_el2 = spsr_el2;
+ __entry->target_mode = spsr_el2 & (PSR_MODE_MASK | PSR_MODE32_BIT);
+ __entry->hcr_el2 = __vcpu_sys_reg(vcpu, HCR_EL2);
+ ),
+
+ TP_printk("elr_el2: 0x%lx spsr_el2: 0x%08lx (M: %s) hcr_el2: %lx",
+ __entry->elr_el2, __entry->spsr_el2,
+ __print_symbolic(__entry->target_mode, kvm_mode_names),
+ __entry->hcr_el2)
+);
+
+TRACE_EVENT(kvm_inject_nested_exception,
+ TP_PROTO(struct kvm_vcpu *vcpu, u64 esr_el2, int type),
+ TP_ARGS(vcpu, esr_el2, type),
+
+ TP_STRUCT__entry(
+ __field(struct kvm_vcpu *, vcpu)
+ __field(unsigned long, esr_el2)
+ __field(int, type)
+ __field(unsigned long, spsr_el2)
+ __field(unsigned long, pc)
+ __field(unsigned long, source_mode)
+ __field(unsigned long, hcr_el2)
+ ),
+
+ TP_fast_assign(
+ __entry->vcpu = vcpu;
+ __entry->esr_el2 = esr_el2;
+ __entry->type = type;
+ __entry->spsr_el2 = *vcpu_cpsr(vcpu);
+ __entry->pc = *vcpu_pc(vcpu);
+ __entry->source_mode = *vcpu_cpsr(vcpu) & (PSR_MODE_MASK | PSR_MODE32_BIT);
+ __entry->hcr_el2 = __vcpu_sys_reg(vcpu, HCR_EL2);
+ ),
+
+ TP_printk("%s: esr_el2 0x%lx elr_el2: 0x%lx spsr_el2: 0x%08lx (M: %s) hcr_el2: %lx",
+ __print_symbolic(__entry->type, kvm_exception_type_names),
+ __entry->esr_el2, __entry->pc, __entry->spsr_el2,
+ __print_symbolic(__entry->source_mode, kvm_mode_names),
+ __entry->hcr_el2)
+);
+
#endif /* _TRACE_ARM_ARM64_KVM_H */
#undef TRACE_INCLUDE_PATH
--
2.29.2
* Re: [PATCH v4 09/66] KVM: arm64: nv: Support virtual EL2 exceptions
2021-05-10 16:58 ` Marc Zyngier
@ 2021-05-20 12:55 ` Zenghui Yu
0 siblings, 0 replies; 229+ messages in thread
From: Zenghui Yu @ 2021-05-20 12:55 UTC (permalink / raw)
To: Marc Zyngier
Cc: linux-arm-kernel, kvmarm, kvm, kernel-team, Andre Przywara,
christoffer.dall, jintack, haibo.xu, James Morse,
Suzuki K Poulose, Alexandru Elisei, jintack.lim
On 2021/5/11 0:58, Marc Zyngier wrote:
> From: Jintack Lim <jintack.lim@linaro.org>
>
> Support injecting exceptions and performing exception returns to and
> from virtual EL2. This must be done entirely in software except when
> taking an exception from vEL0 to vEL2 when the virtual HCR_EL2.{E2H,TGE}
> == {1,1} (a VHE guest hypervisor).
>
> Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
> Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
> [maz: switch to common exception injection framework]
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/include/asm/kvm_arm.h | 17 +++
> arch/arm64/include/asm/kvm_emulate.h | 10 ++
> arch/arm64/kvm/Makefile | 2 +-
> arch/arm64/kvm/emulate-nested.c | 176 +++++++++++++++++++++++++++
> arch/arm64/kvm/hyp/exception.c | 45 +++++--
> arch/arm64/kvm/inject_fault.c | 63 ++++++++--
> arch/arm64/kvm/trace_arm.h | 59 +++++++++
> 7 files changed, 354 insertions(+), 18 deletions(-)
> create mode 100644 arch/arm64/kvm/emulate-nested.c
[...]
> static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr)
> {
> unsigned long cpsr = *vcpu_cpsr(vcpu);
> bool is_aarch32 = vcpu_mode_is_32bit(vcpu);
> u32 esr = 0;
>
> - vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 |
> - KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
> - KVM_ARM64_PENDING_EXCEPTION);
> -
> - vcpu_write_sys_reg(vcpu, addr, FAR_EL1);
> + pend_sync_exception(vcpu);
>
> /*
> * Build an {i,d}abort, depending on the level and the
> @@ -45,16 +79,22 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
> if (!is_iabt)
> esr |= ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT;
>
> - vcpu_write_sys_reg(vcpu, esr | ESR_ELx_FSC_EXTABT, ESR_EL1);
> + esr |= ESR_ELx_FSC_EXTABT;
> +
> + if (vcpu->arch.flags & KVM_ARM64_EXCEPT_AA64_EL1) {
This isn't the right way to pick between EL1 and EL2 since
KVM_ARM64_EXCEPT_AA64_EL1 is (0 << 11), we will not be able
to inject abort to EL1 that way.
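A sketch of one way to make the check robust would be to compare the whole
target-EL field rather than AND-ing with a flag whose value is zero. The
mask and helper below are illustrative, not existing defines, and assume
KVM_ARM64_EXCEPT_AA64_EL2 is (1 << 11):
	/* Illustrative only: isolate the target-EL field and compare it */
	#define KVM_ARM64_EXCEPT_AA64_EL_MASK	(1 << 11)

	static bool exception_target_is_el1(const struct kvm_vcpu *vcpu)
	{
		return (vcpu->arch.flags & KVM_ARM64_EXCEPT_AA64_EL_MASK) ==
		       KVM_ARM64_EXCEPT_AA64_EL1;
	}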
> + vcpu_write_sys_reg(vcpu, addr, FAR_EL1);
> + vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
> + } else {
> + vcpu_write_sys_reg(vcpu, addr, FAR_EL2);
> + vcpu_write_sys_reg(vcpu, esr, ESR_EL2);
> + }
> }
>
> static void inject_undef64(struct kvm_vcpu *vcpu)
> {
> u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
>
> - vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 |
> - KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
> - KVM_ARM64_PENDING_EXCEPTION);
> + pend_sync_exception(vcpu);
>
> /*
> * Build an unknown exception, depending on the instruction
> @@ -63,7 +103,10 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
> if (kvm_vcpu_trap_il_is32bit(vcpu))
> esr |= ESR_ELx_IL;
>
> - vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
> + if (vcpu->arch.flags & KVM_ARM64_EXCEPT_AA64_EL1)
> + vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
> + else
> + vcpu_write_sys_reg(vcpu, esr, ESR_EL2);
Same here.
Thanks,
Zenghui
* Re: [PATCH v4 09/66] KVM: arm64: nv: Support virtual EL2 exceptions
@ 2021-05-24 12:35 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-24 12:35 UTC (permalink / raw)
To: Zenghui Yu
Cc: kvm, Andre Przywara, jintack.lim, kernel-team, kvmarm, linux-arm-kernel
On Thu, 20 May 2021 13:55:48 +0100,
Zenghui Yu <yuzenghui@huawei.com> wrote:
>
> On 2021/5/11 0:58, Marc Zyngier wrote:
> > From: Jintack Lim <jintack.lim@linaro.org>
> >
> > Support injecting exceptions and performing exception returns to and
> > from virtual EL2. This must be done entirely in software except when
> > taking an exception from vEL0 to vEL2 when the virtual HCR_EL2.{E2H,TGE}
> > == {1,1} (a VHE guest hypervisor).
> >
> > Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
> > Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
> > [maz: switch to common exception injection framework]
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> > arch/arm64/include/asm/kvm_arm.h | 17 +++
> > arch/arm64/include/asm/kvm_emulate.h | 10 ++
> > arch/arm64/kvm/Makefile | 2 +-
> > arch/arm64/kvm/emulate-nested.c | 176 +++++++++++++++++++++++++++
> > arch/arm64/kvm/hyp/exception.c | 45 +++++--
> > arch/arm64/kvm/inject_fault.c | 63 ++++++++--
> > arch/arm64/kvm/trace_arm.h | 59 +++++++++
> > 7 files changed, 354 insertions(+), 18 deletions(-)
> > create mode 100644 arch/arm64/kvm/emulate-nested.c
>
> [...]
>
> > static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr)
> > {
> > unsigned long cpsr = *vcpu_cpsr(vcpu);
> > bool is_aarch32 = vcpu_mode_is_32bit(vcpu);
> > u32 esr = 0;
> > - vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 |
> > - KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
> > - KVM_ARM64_PENDING_EXCEPTION);
> > -
> > - vcpu_write_sys_reg(vcpu, addr, FAR_EL1);
> > + pend_sync_exception(vcpu);
> > /*
> > * Build an {i,d}abort, depending on the level and the
> > @@ -45,16 +79,22 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
> > if (!is_iabt)
> > esr |= ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT;
> > - vcpu_write_sys_reg(vcpu, esr | ESR_ELx_FSC_EXTABT, ESR_EL1);
> > + esr |= ESR_ELx_FSC_EXTABT;
> > +
> > + if (vcpu->arch.flags & KVM_ARM64_EXCEPT_AA64_EL1) {
>
> This isn't the right way to pick between EL1 and EL2:
> KVM_ARM64_EXCEPT_AA64_EL1 is defined as (0 << 11), so the
> bitwise AND is always zero and we will never be able to
> inject an abort to EL1 this way.
Indeed, well observed. I'm squashing the following fix in this patch.
Thanks,
M.
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 27397ecf9a23..fe781557e42c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -524,6 +524,7 @@ struct kvm_vcpu_arch {
#define KVM_ARM64_EXCEPT_AA64_ELx_SERR (3 << 9)
#define KVM_ARM64_EXCEPT_AA64_EL1 (0 << 11)
#define KVM_ARM64_EXCEPT_AA64_EL2 (1 << 11)
+#define KVM_ARM64_EXCEPT_AA64_EL_MASK (1 << 11)
/*
* Overlaps with KVM_ARM64_EXCEPT_MASK on purpose so that it can't be
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index 06df0bb848ca..5dcf3f8b08b8 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -52,6 +52,11 @@ static void pend_sync_exception(struct kvm_vcpu *vcpu)
}
}
+static bool match_target_el(struct kvm_vcpu *vcpu, unsigned long target)
+{
+ return (vcpu->arch.flags & KVM_ARM64_EXCEPT_AA64_EL_MASK) == target;
+}
+
static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr)
{
unsigned long cpsr = *vcpu_cpsr(vcpu);
@@ -81,7 +86,7 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
esr |= ESR_ELx_FSC_EXTABT;
- if (vcpu->arch.flags & KVM_ARM64_EXCEPT_AA64_EL1) {
+ if (match_target_el(vcpu, KVM_ARM64_EXCEPT_AA64_EL1)) {
vcpu_write_sys_reg(vcpu, addr, FAR_EL1);
vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
} else {
@@ -103,7 +108,7 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
if (kvm_vcpu_trap_il_is32bit(vcpu))
esr |= ESR_ELx_IL;
- if (vcpu->arch.flags & KVM_ARM64_EXCEPT_AA64_EL1)
+ if (match_target_el(vcpu, KVM_ARM64_EXCEPT_AA64_EL1))
vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
else
vcpu_write_sys_reg(vcpu, esr, ESR_EL2);
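The semantics of the fix, as a standalone sketch (flag values taken
from the hunk above; an illustration, not the kernel build):

#include <assert.h>

#define KVM_ARM64_EXCEPT_AA64_EL1	(0UL << 11)
#define KVM_ARM64_EXCEPT_AA64_EL2	(1UL << 11)
#define KVM_ARM64_EXCEPT_AA64_EL_MASK	(1UL << 11)

static int match_target_el(unsigned long flags, unsigned long target)
{
	/* Extract the one-bit EL field and compare it, rather than
	 * AND-ing against a possibly all-zero constant. */
	return (flags & KVM_ARM64_EXCEPT_AA64_EL_MASK) == target;
}

int main(void)
{
	assert(match_target_el(KVM_ARM64_EXCEPT_AA64_EL1,
			       KVM_ARM64_EXCEPT_AA64_EL1));	/* now true */
	assert(!match_target_el(KVM_ARM64_EXCEPT_AA64_EL2,
				KVM_ARM64_EXCEPT_AA64_EL1));
	return 0;
}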
--
Without deviation from the norm, progress is not possible.
* [PATCH v4 10/66] KVM: arm64: nv: Inject HVC exceptions to the virtual EL2
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
From: Jintack Lim <jintack.lim@linaro.org>
As we expect all PSCI calls from the L1 hypervisor to be performed
using SMC when nested virtualization is enabled, it is clear that
all HVC instructions from the VM (including from the virtual EL2)
are supposed to be handled by the virtual EL2.
Forward these to EL2 as required.
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
[maz: add handling of HCR_EL2.HCD]
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/handle_exit.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 6f48336b1d86..6add422d63b8 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -16,6 +16,7 @@
#include <asm/kvm_asm.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_mmu.h>
+#include <asm/kvm_nested.h>
#include <asm/debug-monitors.h>
#include <asm/traps.h>
@@ -40,6 +41,16 @@ static int handle_hvc(struct kvm_vcpu *vcpu)
kvm_vcpu_hvc_get_imm(vcpu));
vcpu->stat.hvc_exit_stat++;
+ /* Forward hvc instructions to the virtual EL2 if the guest has EL2. */
+ if (nested_virt_in_use(vcpu)) {
+ if (vcpu_read_sys_reg(vcpu, HCR_EL2) & HCR_HCD)
+ kvm_inject_undefined(vcpu);
+ else
+ kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+
+ return 1;
+ }
+
ret = kvm_hvc_call_handler(vcpu);
if (ret < 0) {
vcpu_set_reg(vcpu, 0, ~0UL);
--
2.29.2
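Reduced to a standalone sketch, the behaviour this patch introduces
looks as follows (the state variables are hypothetical stand-ins for
the real vcpu state, and HCR_HCD is assumed to be bit 29 of HCR_EL2):

#include <stdbool.h>
#include <stdio.h>

#define HCR_HCD (1UL << 29)	/* HVC instruction disable */

static bool nested_virt_in_use = true;		/* guest has a vEL2 */
static unsigned long virtual_hcr_el2 = HCR_HCD;

static void handle_hvc(void)
{
	if (nested_virt_in_use) {
		if (virtual_hcr_el2 & HCR_HCD)
			printf("inject UNDEF: L1 disabled HVC via HCR_EL2.HCD\n");
		else
			printf("reflect the HVC as a sync exception to vEL2\n");
		return;
	}
	printf("handle the hypercall (PSCI etc.) in the host\n");
}

int main(void)
{
	handle_hvc();
	return 0;
}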
* [PATCH v4 11/66] KVM: arm64: nv: Handle trapped ERET from virtual EL2
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
From: Christoffer Dall <christoffer.dall@arm.com>
When a guest hypervisor running virtual EL2 in EL1 executes an ERET
instruction, we will have set HCR_EL2.NV which traps ERET to EL2, so
that we can emulate the exception return in software.
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/esr.h | 5 +++++
arch/arm64/include/asm/kvm_arm.h | 2 +-
arch/arm64/kvm/handle_exit.c | 10 ++++++++++
3 files changed, 16 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 29f97eb3dad4..9e6bc2757a05 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -253,6 +253,11 @@
(((e) & ESR_ELx_SYS64_ISS_OP2_MASK) >> \
ESR_ELx_SYS64_ISS_OP2_SHIFT))
+/* ISS field definitions for ERET/ERETAA/ERETAB trapping */
+
+#define ESR_ELx_ERET_ISS_ERET_ERETAx 0x2
+#define ESR_ELx_ERET_ISS_ERETA_ERETAB 0x1
+
/*
* ISS field definitions for floating-point exception traps
* (FP_EXC_32/FP_EXC_64).
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index e27da4165d43..4fd52af7f538 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -329,7 +329,7 @@
ECN(SP_ALIGN), ECN(FP_EXC32), ECN(FP_EXC64), ECN(SERROR), \
ECN(BREAKPT_LOW), ECN(BREAKPT_CUR), ECN(SOFTSTP_LOW), \
ECN(SOFTSTP_CUR), ECN(WATCHPT_LOW), ECN(WATCHPT_CUR), \
- ECN(BKPT32), ECN(VECTOR32), ECN(BRK64)
+ ECN(BKPT32), ECN(VECTOR32), ECN(BRK64), ECN(ERET)
#define CPACR_EL1_FPEN (3 << 20)
#define CPACR_EL1_TTA (1 << 28)
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 6add422d63b8..fca0d44afe96 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -183,6 +183,15 @@ static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu)
return 1;
}
+static int kvm_handle_eret(struct kvm_vcpu *vcpu)
+{
+ if (kvm_vcpu_get_esr(vcpu) & ESR_ELx_ERET_ISS_ERET_ERETAx)
+ return kvm_handle_ptrauth(vcpu);
+
+ kvm_emulate_nested_eret(vcpu);
+ return 1;
+}
+
static exit_handle_fn arm_exit_handlers[] = {
[0 ... ESR_ELx_EC_MAX] = kvm_handle_unknown_ec,
[ESR_ELx_EC_WFx] = kvm_handle_wfx,
@@ -197,6 +206,7 @@ static exit_handle_fn arm_exit_handlers[] = {
[ESR_ELx_EC_SMC64] = handle_smc,
[ESR_ELx_EC_SYS64] = kvm_handle_sys_reg,
[ESR_ELx_EC_SVE] = handle_sve,
+ [ESR_ELx_EC_ERET] = kvm_handle_eret,
[ESR_ELx_EC_IABT_LOW] = kvm_handle_guest_abort,
[ESR_ELx_EC_DABT_LOW] = kvm_handle_guest_abort,
[ESR_ELx_EC_SOFTSTP_LOW]= kvm_handle_guest_debug,
--
2.29.2
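What the new handler keys off is bit 1 of the trap ISS; a small
self-contained illustration, using only the ESR_ELx_ERET_ISS_ERET_ERETAx
value defined in the hunk above (a sketch, not kernel code):

#include <stdio.h>

#define ESR_ELx_ERET_ISS_ERET_ERETAx 0x2	/* set: ERETAA/ERETAB */

static const char *classify_eret_trap(unsigned long esr)
{
	/* Pointer-auth variants are routed to the ptrauth handler;
	 * a plain ERET is emulated as a nested exception return. */
	if (esr & ESR_ELx_ERET_ISS_ERET_ERETAx)
		return "ERETAA/ERETAB -> kvm_handle_ptrauth()";
	return "ERET -> kvm_emulate_nested_eret()";
}

int main(void)
{
	printf("%s\n", classify_eret_trap(0x0));
	printf("%s\n", classify_eret_trap(0x2));
	return 0;
}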
* [PATCH v4 12/66] KVM: arm64: nv: Add non-VHE-EL2->EL1 translation helpers
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
Some EL2 system registers immediately affect the current execution
of the system, so we need to use their respective EL1 counterparts.
For this we need to define a mapping between the two. In general,
this only affects non-VHE guest hypervisors, as VHE system registers
are compatible with their EL1 counterparts.
These helpers will get used in subsequent patches.
Co-developed-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_nested.h | 50 +++++++++++++++++++++++++++++
1 file changed, 50 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 1028ac65a897..67a2c0d05233 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -2,6 +2,7 @@
#ifndef __ARM64_KVM_NESTED_H
#define __ARM64_KVM_NESTED_H
+#include <linux/bitfield.h>
#include <linux/kvm_host.h>
static inline bool nested_virt_in_use(const struct kvm_vcpu *vcpu)
@@ -11,4 +12,53 @@ static inline bool nested_virt_in_use(const struct kvm_vcpu *vcpu)
test_bit(KVM_ARM_VCPU_HAS_EL2, vcpu->arch.features));
}
+/* Translation helpers from non-VHE EL2 to EL1 */
+static inline u64 tcr_el2_ips_to_tcr_el1_ps(u64 tcr_el2)
+{
+ return (u64)FIELD_GET(TCR_EL2_PS_MASK, tcr_el2) << TCR_IPS_SHIFT;
+}
+
+static inline u64 translate_tcr_el2_to_tcr_el1(u64 tcr)
+{
+ return TCR_EPD1_MASK | /* disable TTBR1_EL1 */
+ ((tcr & TCR_EL2_TBI) ? TCR_TBI0 : 0) |
+ tcr_el2_ips_to_tcr_el1_ps(tcr) |
+ (tcr & TCR_EL2_TG0_MASK) |
+ (tcr & TCR_EL2_ORGN0_MASK) |
+ (tcr & TCR_EL2_IRGN0_MASK) |
+ (tcr & TCR_EL2_T0SZ_MASK);
+}
+
+static inline u64 translate_cptr_el2_to_cpacr_el1(u64 cptr_el2)
+{
+ u64 cpacr_el1 = 0;
+
+ if (!(cptr_el2 & CPTR_EL2_TFP))
+ cpacr_el1 |= CPACR_EL1_FPEN;
+ if (cptr_el2 & CPTR_EL2_TTA)
+ cpacr_el1 |= CPACR_EL1_TTA;
+ if (!(cptr_el2 & CPTR_EL2_TZ))
+ cpacr_el1 |= CPACR_EL1_ZEN;
+
+ return cpacr_el1;
+}
+
+static inline u64 translate_sctlr_el2_to_sctlr_el1(u64 sctlr)
+{
+ /* Bit 20 is RES1 in SCTLR_EL1, but RES0 in SCTLR_EL2 */
+ return sctlr | BIT(20);
+}
+
+static inline u64 translate_ttbr0_el2_to_ttbr0_el1(u64 ttbr0)
+{
+ /* Force ASID to 0 (ASID 0 or RES0) */
+ return ttbr0 & ~GENMASK_ULL(63, 48);
+}
+
+static inline u64 translate_cnthctl_el2_to_cntkctl_el1(u64 cnthctl)
+{
+ return ((FIELD_GET(CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN, cnthctl) << 10) |
+ (cnthctl & (CNTHCTL_EVNTI | CNTHCTL_EVNTDIR | CNTHCTL_EVNTEN)));
+}
+
#endif /* __ARM64_KVM_NESTED_H */
--
2.29.2
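The CNTHCTL helper is the least obvious of these: it moves the
EL1PCTEN/EL1PCEN trap-enable bits from their non-VHE positions (bits 0
and 1) up to the VHE/CNTKCTL_EL1 layout (bits 10 and 11), preserving
the event-stream controls. A worked example with the bit positions
written out explicitly (an illustration under those assumptions, not
kernel code):

#include <stdio.h>

#define CNTHCTL_EL1PCTEN (1UL << 0)	/* non-VHE CNTHCTL_EL2 layout */
#define CNTHCTL_EL1PCEN  (1UL << 1)
#define CNTHCTL_EVNT_ALL (0xfcUL)	/* EVNTEN | EVNTDIR | EVNTI */

static unsigned long translate(unsigned long cnthctl)
{
	/* Move bits [1:0] to [11:10]; keep the event-stream bits */
	return ((cnthctl & (CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN)) << 10) |
	       (cnthctl & CNTHCTL_EVNT_ALL);
}

int main(void)
{
	unsigned long val = CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN;

	/* Both enables set: 0x3 in the non-VHE layout becomes 0xc00 */
	printf("0x%lx -> 0x%lx\n", val, translate(val));
	return 0;
}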
* [PATCH v4 13/66] KVM: arm64: nv: Handle virtual EL2 registers in vcpu_read/write_sys_reg()
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
KVM internally uses accessor functions when reading or writing the
guest's system registers. This takes care of accessing either the stored
copy or using the "live" EL1 system registers when the host uses VHE.
With the introduction of virtual EL2 we add a bunch of EL2 system
registers, which now must also be taken care of:
- If the guest is running in vEL2, and we access an EL1 sysreg, we must
revert to the stored version of that, and not use the CPU's copy.
- If the guest is running in vEL1, and we access an EL2 sysreg, we must
also use the stored version, since the CPU carries the EL1 copy.
- Some EL2 system registers are supposed to affect the current execution
of the system, so we need to put them into their respective EL1
counterparts. For this we need to define a mapping between the two.
This is done using the newly introduced struct el2_sysreg_map.
- Some EL2 system registers have a different format than their EL1
counterpart, so we need to translate them before writing them to the
CPU. This is done using an (optional) translate function in the map.
- There are the three special registers SP_EL2, SPSR_EL2 and ELR_EL2,
which need some separate handling (SPSR_EL2 is being handled in a
separate patch).
All of these cases are now wrapped into the existing accessor functions,
so KVM users don't need to care whether they are accessing EL2 or EL1
registers, nor which state the guest is in.
This handles what was formerly known as the "shadow state" dynamically,
without requiring a separate copy for each vCPU EL.
Co-developed-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 143 ++++++++++++++++++++++++++++++++++++--
1 file changed, 139 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 912535c25615..70fa43f599e4 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -24,6 +24,7 @@
#include <asm/kvm_emulate.h>
#include <asm/kvm_hyp.h>
#include <asm/kvm_mmu.h>
+#include <asm/kvm_nested.h>
#include <asm/perf_event.h>
#include <asm/sysreg.h>
@@ -68,23 +69,157 @@ static bool write_to_read_only(struct kvm_vcpu *vcpu,
return false;
}
+#define PURE_EL2_SYSREG(el2) \
+ case el2: { \
+ *el1r = el2; \
+ return true; \
+ }
+
+#define MAPPED_EL2_SYSREG(el2, el1, fn) \
+ case el2: { \
+ *xlate = fn; \
+ *el1r = el1; \
+ return true; \
+ }
+
+static bool get_el2_mapping(unsigned int reg,
+ unsigned int *el1r, u64 (**xlate)(u64))
+{
+ switch (reg) {
+ PURE_EL2_SYSREG( VPIDR_EL2 );
+ PURE_EL2_SYSREG( VMPIDR_EL2 );
+ PURE_EL2_SYSREG( ACTLR_EL2 );
+ PURE_EL2_SYSREG( HCR_EL2 );
+ PURE_EL2_SYSREG( MDCR_EL2 );
+ PURE_EL2_SYSREG( HSTR_EL2 );
+ PURE_EL2_SYSREG( HACR_EL2 );
+ PURE_EL2_SYSREG( VTTBR_EL2 );
+ PURE_EL2_SYSREG( VTCR_EL2 );
+ PURE_EL2_SYSREG( RVBAR_EL2 );
+ PURE_EL2_SYSREG( TPIDR_EL2 );
+ PURE_EL2_SYSREG( HPFAR_EL2 );
+ PURE_EL2_SYSREG( ELR_EL2 );
+ PURE_EL2_SYSREG( SPSR_EL2 );
+ MAPPED_EL2_SYSREG(SCTLR_EL2, SCTLR_EL1,
+ translate_sctlr_el2_to_sctlr_el1 );
+ MAPPED_EL2_SYSREG(CPTR_EL2, CPACR_EL1,
+ translate_cptr_el2_to_cpacr_el1 );
+ MAPPED_EL2_SYSREG(TTBR0_EL2, TTBR0_EL1,
+ translate_ttbr0_el2_to_ttbr0_el1 );
+ MAPPED_EL2_SYSREG(TTBR1_EL2, TTBR1_EL1, NULL );
+ MAPPED_EL2_SYSREG(TCR_EL2, TCR_EL1,
+ translate_tcr_el2_to_tcr_el1 );
+ MAPPED_EL2_SYSREG(VBAR_EL2, VBAR_EL1, NULL );
+ MAPPED_EL2_SYSREG(AFSR0_EL2, AFSR0_EL1, NULL );
+ MAPPED_EL2_SYSREG(AFSR1_EL2, AFSR1_EL1, NULL );
+ MAPPED_EL2_SYSREG(ESR_EL2, ESR_EL1, NULL );
+ MAPPED_EL2_SYSREG(FAR_EL2, FAR_EL1, NULL );
+ MAPPED_EL2_SYSREG(MAIR_EL2, MAIR_EL1, NULL );
+ MAPPED_EL2_SYSREG(AMAIR_EL2, AMAIR_EL1, NULL );
+ MAPPED_EL2_SYSREG(CNTHCTL_EL2, CNTKCTL_EL1,
+ translate_cnthctl_el2_to_cntkctl_el1 );
+ default:
+ return false;
+ }
+}
+
u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
{
u64 val = 0x8badf00d8badf00d;
+ u64 (*xlate)(u64) = NULL;
+ unsigned int el1r;
+
+ if (!vcpu->arch.sysregs_loaded_on_cpu)
+ goto memory_read;
+
+ if (unlikely(get_el2_mapping(reg, &el1r, &xlate))) {
+ if (!is_hyp_ctxt(vcpu))
+ goto memory_read;
+
+ /*
+ * ELR_EL2 is special cased for now.
+ */
+ switch (reg) {
+ case ELR_EL2:
+ return read_sysreg_el1(SYS_ELR);
+ }
+
+ /*
+ * If this register does not have an EL1 counterpart,
+ * then read the stored EL2 version.
+ */
+ if (reg == el1r)
+ goto memory_read;
+
+ /*
+ * If we have a non-VHE guest and the sysreg requires
+ * translation to be used at EL1, use the in-memory
+ * copy instead.
+ */
+ if (!vcpu_el2_e2h_is_set(vcpu) && xlate)
+ goto memory_read;
+
+ /* Get the current version of the EL1 counterpart. */
+ WARN_ON(!__vcpu_read_sys_reg_from_cpu(el1r, &val));
+ return val;
+ }
+
+ /* EL1 register can't be on the CPU if the guest is in vEL2. */
+ if (unlikely(is_hyp_ctxt(vcpu)))
+ goto memory_read;
- if (vcpu->arch.sysregs_loaded_on_cpu &&
- __vcpu_read_sys_reg_from_cpu(reg, &val))
+ if (__vcpu_read_sys_reg_from_cpu(reg, &val))
return val;
+memory_read:
return __vcpu_sys_reg(vcpu, reg);
}
void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
{
- if (vcpu->arch.sysregs_loaded_on_cpu &&
- __vcpu_write_sys_reg_to_cpu(val, reg))
+ u64 (*xlate)(u64) = NULL;
+ unsigned int el1r;
+
+ if (!vcpu->arch.sysregs_loaded_on_cpu)
+ goto memory_write;
+
+ if (unlikely(get_el2_mapping(reg, &el1r, &xlate))) {
+ if (!is_hyp_ctxt(vcpu))
+ goto memory_write;
+
+ /*
+ * Always store a copy of the write to memory to avoid having
+ * to reverse-translate virtual EL2 system registers for a
+ * non-VHE guest hypervisor.
+ */
+ __vcpu_sys_reg(vcpu, reg) = val;
+
+ switch (reg) {
+ case ELR_EL2:
+ write_sysreg_el1(val, SYS_ELR);
+ return;
+ }
+
+ /* No EL1 counterpart? We're done here. */
+ if (reg == el1r)
+ return;
+
+ if (!vcpu_el2_e2h_is_set(vcpu) && xlate)
+ val = xlate(val);
+
+ /* Redirect this to the EL1 version of the register. */
+ WARN_ON(!__vcpu_write_sys_reg_to_cpu(val, el1r));
+ return;
+ }
+
+ /* EL1 register can't be on the CPU if the guest is in vEL2. */
+ if (unlikely(is_hyp_ctxt(vcpu)))
+ goto memory_write;
+
+ if (__vcpu_write_sys_reg_to_cpu(val, reg))
return;
+memory_write:
__vcpu_sys_reg(vcpu, reg) = val;
}
--
2.29.2
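For anyone unpicking the PURE/MAPPED macro pair, this is roughly what
the switch reduces to, as a standalone toy (the register encodings and
translators below are stand-ins, not the kernel's):

#include <stdio.h>

typedef unsigned long u64;

enum { SCTLR_EL1, TCR_EL1, HCR_EL2, SCTLR_EL2, TCR_EL2 };

static u64 xlate_sctlr(u64 v) { return v; }	/* placeholder xlate fns */
static u64 xlate_tcr(u64 v)   { return v; }

#define PURE_EL2_SYSREG(el2) \
	case el2: { *el1r = el2; return 1; }

#define MAPPED_EL2_SYSREG(el2, el1, fn) \
	case el2: { *xlate = fn; *el1r = el1; return 1; }

static int get_el2_mapping(unsigned int reg, unsigned int *el1r,
			   u64 (**xlate)(u64))
{
	switch (reg) {
	PURE_EL2_SYSREG(HCR_EL2);	/* no EL1 counterpart: el1r == reg */
	MAPPED_EL2_SYSREG(SCTLR_EL2, SCTLR_EL1, xlate_sctlr);
	MAPPED_EL2_SYSREG(TCR_EL2, TCR_EL1, xlate_tcr);
	default:
		return 0;
	}
}

int main(void)
{
	unsigned int el1r;
	u64 (*xlate)(u64) = 0;

	get_el2_mapping(TCR_EL2, &el1r, &xlate);
	printf("TCR_EL2 -> EL1 reg %u, needs translation: %d\n",
	       el1r, xlate != 0);
	get_el2_mapping(HCR_EL2, &el1r, &xlate);
	printf("HCR_EL2 -> el1r == reg: %d (pure EL2, kept in memory)\n",
	       el1r == HCR_EL2);
	return 0;
}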
* [PATCH v4 14/66] KVM: arm64: nv: Handle SPSR_EL2 specially
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
SPSR_EL2 needs special attention when running nested on ARMv8.3:
If taking an exception while running at vEL2 (actually EL1), the
HW will update the SPSR_EL1 register with the EL1 mode. We need
to track this in order to make sure that accesses to the virtual
view of SPSR_EL2 are correct.
To do so, we place an illegal value in SPSR_EL1.M, and patch it
accordingly if required when accessing it.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_emulate.h | 37 ++++++++++++++++++++++++++++
arch/arm64/kvm/sys_regs.c | 23 +++++++++++++++--
2 files changed, 58 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 1c10fbe7d1c3..f63872db1e39 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -238,6 +238,43 @@ static inline bool is_hyp_ctxt(const struct kvm_vcpu *vcpu)
return __is_hyp_ctxt(&vcpu->arch.ctxt);
}
+static inline u64 __fixup_spsr_el2_write(struct kvm_cpu_context *ctxt, u64 val)
+{
+ if (!__vcpu_el2_e2h_is_set(ctxt)) {
+ /*
+ * Clear the .M field when writing SPSR to the CPU, so that we
+ * can detect when the CPU clobbered our SPSR copy during a
+ * local exception.
+ */
+ val &= ~0xc;
+ }
+
+ return val;
+}
+
+static inline u64 __fixup_spsr_el2_read(const struct kvm_cpu_context *ctxt, u64 val)
+{
+ if (__vcpu_el2_e2h_is_set(ctxt))
+ return val;
+
+ /*
+ * SPSR.M == 0 means the CPU has not touched the SPSR, so the
+ * register still has the value we saved on the last write.
+ */
+ if ((val & 0xc) == 0)
+ return ctxt_sys_reg(ctxt, SPSR_EL2);
+
+ /*
+ * Otherwise there was a "local" exception on the CPU,
+ * which from the guest's point of view was being taken from
+ * EL2 to EL2, although it actually happened to be from
+ * EL1 to EL1.
+ * So we need to fix the .M field in SPSR, to make it look
+ * like EL2, which is what the guest would expect.
+ */
+ return (val & ~0x0c) | CurrentEL_EL2;
+}
+
/*
* The layout of SPSR for an AArch32 state is different when observed from an
* AArch64 SPSR_ELx or an AArch32 SPSR_*. This function generates the AArch32
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 70fa43f599e4..2c1ab8cd58b4 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -137,11 +137,14 @@ u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
goto memory_read;
/*
- * ELR_EL2 is special cased for now.
+ * ELR_EL2 and SPSR_EL2 are special cased for now.
*/
switch (reg) {
case ELR_EL2:
return read_sysreg_el1(SYS_ELR);
+ case SPSR_EL2:
+ val = read_sysreg_el1(SYS_SPSR);
+ return __fixup_spsr_el2_read(&vcpu->arch.ctxt, val);
}
/*
@@ -198,6 +201,10 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
case ELR_EL2:
write_sysreg_el1(val, SYS_ELR);
return;
+ case SPSR_EL2:
+ val = __fixup_spsr_el2_write(&vcpu->arch.ctxt, val);
+ write_sysreg_el1(val, SYS_SPSR);
+ return;
}
/* No EL1 counterpart? We're done here. */
@@ -1544,6 +1551,18 @@ static bool access_sp_el1(struct kvm_vcpu *vcpu,
return true;
}
+static bool access_spsr_el2(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ if (p->is_write)
+ vcpu_write_sys_reg(vcpu, p->regval, SPSR_EL2);
+ else
+ p->regval = vcpu_read_sys_reg(vcpu, SPSR_EL2);
+
+ return true;
+}
+
/*
* Architected system registers.
* Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
@@ -1971,7 +1990,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_VTCR_EL2), access_rw, reset_val, VTCR_EL2, 0 },
{ SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 },
- { SYS_DESC(SYS_SPSR_EL2), access_rw, reset_val, SPSR_EL2, 0 },
+ { SYS_DESC(SYS_SPSR_EL2), access_spsr_el2, reset_val, SPSR_EL2, 0 },
{ SYS_DESC(SYS_ELR_EL2), access_rw, reset_val, ELR_EL2, 0 },
{ SYS_DESC(SYS_SP_EL1), access_sp_el1},
--
2.29.2
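The M-field trick can be demonstrated standalone. Below is a sketch that
mirrors the two fixup helpers (fixup_write()/fixup_read() and 'saved' are
stand-ins for the real functions and the vcpu's SPSR_EL2 copy; this is
an illustration, not the patch's code):
#include <stdint.h>
#include <stdio.h>
#define CurrentEL_EL2	0x8	/* M[3:2] == 0b10 */
/* Model of __fixup_spsr_el2_write(): plant an illegal M[3:2] = 0. */
static uint64_t fixup_write(int e2h, uint64_t val)
{
	if (!e2h)
		val &= ~0xcULL;
	return val;
}
/* Model of __fixup_spsr_el2_read(): detect whether the CPU clobbered it. */
static uint64_t fixup_read(int e2h, uint64_t saved, uint64_t hw)
{
	if (e2h)
		return hw;
	if ((hw & 0xc) == 0)
		return saved;		/* untouched: our copy is current */
	return (hw & ~0xcULL) | CurrentEL_EL2; /* clobbered: report EL2 */
}
int main(void)
{
	uint64_t spsr = 0x3c9;		/* DAIF set, M = EL2h */
	uint64_t hw = fixup_write(0, spsr);
	/* No exception at EL1: the read returns our saved value. */
	printf("%#lx\n", (unsigned long)fixup_read(0, spsr, hw));
	/* A "local" EL1->EL1 exception wrote M = EL1h (0x5): fix M to EL2. */
	printf("%#lx\n", (unsigned long)fixup_read(0, spsr, 0x3c5));
	return 0;
}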
* [PATCH v4 15/66] KVM: arm64: nv: Handle HCR_EL2.E2H specially
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
HCR_EL2.E2H is nasty, as a flip of this bit completely changes the way
we deal with a lot of the state. So when the guest flips this bit
(sysregs are live), do the put/load dance so that we have a consistent
state.
Yes, this is slow. Don't do it.
Suggested-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2c1ab8cd58b4..0d29d9d59c3f 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -187,9 +187,24 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
goto memory_write;
if (unlikely(get_el2_mapping(reg, &el1r, &xlate))) {
+ bool need_put_load;
+
if (!is_hyp_ctxt(vcpu))
goto memory_write;
+ /*
+ * HCR_EL2.E2H is nasty: it changes the way we interpret a
+ * lot of the EL2 state, so treat is as a full state
+ * transition.
+ */
+ need_put_load = ((reg == HCR_EL2) &&
+ vcpu_el2_e2h_is_set(vcpu) != !!(val & HCR_E2H));
+
+ if (need_put_load) {
+ preempt_disable();
+ kvm_arch_vcpu_put(vcpu);
+ }
+
/*
* Always store a copy of the write to memory to avoid having
* to reverse-translate virtual EL2 system registers for a
@@ -197,6 +212,11 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
*/
__vcpu_sys_reg(vcpu, reg) = val;
+ if (need_put_load) {
+ kvm_arch_vcpu_load(vcpu, smp_processor_id());
+ preempt_enable();
+ }
+
switch (reg) {
case ELR_EL2:
write_sysreg_el1(val, SYS_ELR);
--
2.29.2
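For reference, the trigger boils down to a single predicate. A toy model
(need_put_load() and cur_e2h are stand-ins for the vcpu state; HCR_E2H
is bit 34 of HCR_EL2):
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#define HCR_E2H		(1ULL << 34)
/* Stand-in for vcpu_el2_e2h_is_set(): the vcpu's current E2H view. */
static bool cur_e2h;
/* Only a write to HCR_EL2 that flips E2H forces the put/load dance. */
static bool need_put_load(bool reg_is_hcr_el2, uint64_t val)
{
	return reg_is_hcr_el2 && (cur_e2h != !!(val & HCR_E2H));
}
int main(void)
{
	cur_e2h = false;
	printf("%d\n", need_put_load(true, HCR_E2H));	/* 1: E2H flips */
	printf("%d\n", need_put_load(true, 0));		/* 0: unchanged */
	return 0;
}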
* [PATCH v4 16/66] KVM: arm64: nv: Save/Restore vEL2 sysregs
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
Whenever we need to restore the guest's system registers to the CPU, we
now need to take care of the EL2 system registers as well. Most of them
are accessed via traps only, but some have an immediate effect and also
a guest running in VHE mode would expect them to be accessible via their
EL1 encoding, which we do not trap.
For vEL2 we write the virtual EL2 registers with an identical format directly
into their EL1 counterpart, and translate the few registers that have a
different format, so that execution behaves the same when running a
non-VHE guest hypervisor.
Based on an initial patch from Andre Przywara, rewritten many times
since.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 5 +-
arch/arm64/kvm/hyp/nvhe/sysreg-sr.c | 2 +-
arch/arm64/kvm/hyp/vhe/sysreg-sr.c | 125 ++++++++++++++++++++-
3 files changed, 127 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index cce43bfe158f..e3901c73893e 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -71,9 +71,10 @@ static inline void __sysreg_restore_user_state(struct kvm_cpu_context *ctxt)
write_sysreg(ctxt_sys_reg(ctxt, TPIDRRO_EL0), tpidrro_el0);
}
-static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
+static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt,
+ u64 mpidr)
{
- write_sysreg(ctxt_sys_reg(ctxt, MPIDR_EL1), vmpidr_el2);
+ write_sysreg(mpidr, vmpidr_el2);
write_sysreg(ctxt_sys_reg(ctxt, CSSELR_EL1), csselr_el1);
if (has_vhe() ||
diff --git a/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c
index 29305022bc04..dba101565de3 100644
--- a/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c
@@ -28,7 +28,7 @@ void __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt)
void __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt)
{
- __sysreg_restore_el1_state(ctxt);
+ __sysreg_restore_el1_state(ctxt, ctxt_sys_reg(ctxt, MPIDR_EL1));
__sysreg_restore_common_state(ctxt);
__sysreg_restore_user_state(ctxt);
__sysreg_restore_el2_return_state(ctxt);
diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
index 2a0b8c88d74f..53835fcc0ac6 100644
--- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
@@ -13,6 +13,96 @@
#include <asm/kvm_asm.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_hyp.h>
+#include <asm/kvm_nested.h>
+
+static void __sysreg_save_vel2_state(struct kvm_cpu_context *ctxt)
+{
+ /* These registers are common with EL1 */
+ ctxt_sys_reg(ctxt, CSSELR_EL1) = read_sysreg(csselr_el1);
+ ctxt_sys_reg(ctxt, PAR_EL1) = read_sysreg(par_el1);
+ ctxt_sys_reg(ctxt, TPIDR_EL1) = read_sysreg(tpidr_el1);
+
+ ctxt_sys_reg(ctxt, ESR_EL2) = read_sysreg_el1(SYS_ESR);
+ ctxt_sys_reg(ctxt, AFSR0_EL2) = read_sysreg_el1(SYS_AFSR0);
+ ctxt_sys_reg(ctxt, AFSR1_EL2) = read_sysreg_el1(SYS_AFSR1);
+ ctxt_sys_reg(ctxt, FAR_EL2) = read_sysreg_el1(SYS_FAR);
+ ctxt_sys_reg(ctxt, MAIR_EL2) = read_sysreg_el1(SYS_MAIR);
+ ctxt_sys_reg(ctxt, VBAR_EL2) = read_sysreg_el1(SYS_VBAR);
+ ctxt_sys_reg(ctxt, CONTEXTIDR_EL2) = read_sysreg_el1(SYS_CONTEXTIDR);
+ ctxt_sys_reg(ctxt, AMAIR_EL2) = read_sysreg_el1(SYS_AMAIR);
+
+ /*
+ * In VHE mode those registers are compatible between EL1 and EL2,
+ * and the guest uses the _EL1 versions on the CPU naturally.
+ * So we save them into their _EL2 versions here.
+ * For nVHE mode we trap accesses to those registers, so our
+ * _EL2 copy in sys_regs[] is always up-to-date and we don't need
+ * to save anything here.
+ */
+ if (__vcpu_el2_e2h_is_set(ctxt)) {
+ ctxt_sys_reg(ctxt, SCTLR_EL2) = read_sysreg_el1(SYS_SCTLR);
+ ctxt_sys_reg(ctxt, CPTR_EL2) = read_sysreg_el1(SYS_CPACR);
+ ctxt_sys_reg(ctxt, TTBR0_EL2) = read_sysreg_el1(SYS_TTBR0);
+ ctxt_sys_reg(ctxt, TTBR1_EL2) = read_sysreg_el1(SYS_TTBR1);
+ ctxt_sys_reg(ctxt, TCR_EL2) = read_sysreg_el1(SYS_TCR);
+ ctxt_sys_reg(ctxt, CNTHCTL_EL2) = read_sysreg_el1(SYS_CNTKCTL);
+ }
+
+ ctxt_sys_reg(ctxt, SP_EL2) = read_sysreg(sp_el1);
+ ctxt_sys_reg(ctxt, ELR_EL2) = read_sysreg_el1(SYS_ELR);
+ ctxt_sys_reg(ctxt, SPSR_EL2) = __fixup_spsr_el2_read(ctxt, read_sysreg_el1(SYS_SPSR));
+}
+
+static void __sysreg_restore_vel2_state(struct kvm_cpu_context *ctxt)
+{
+ u64 val;
+
+ /* These registers are common with EL1 */
+ write_sysreg(ctxt_sys_reg(ctxt, CSSELR_EL1), csselr_el1);
+ write_sysreg(ctxt_sys_reg(ctxt, PAR_EL1), par_el1);
+ write_sysreg(ctxt_sys_reg(ctxt, TPIDR_EL1), tpidr_el1);
+
+ write_sysreg(read_cpuid_id(), vpidr_el2);
+ write_sysreg(ctxt_sys_reg(ctxt, MPIDR_EL1), vmpidr_el2);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, MAIR_EL2), SYS_MAIR);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, VBAR_EL2), SYS_VBAR);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, CONTEXTIDR_EL2), SYS_CONTEXTIDR);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, AMAIR_EL2), SYS_AMAIR);
+
+ if (__vcpu_el2_e2h_is_set(ctxt)) {
+ /*
+ * In VHE mode those registers are compatible between
+ * EL1 and EL2.
+ */
+ write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL2), SYS_SCTLR);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, CPTR_EL2), SYS_CPACR);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TTBR0_EL2), SYS_TTBR0);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TTBR1_EL2), SYS_TTBR1);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TCR_EL2), SYS_TCR);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, CNTHCTL_EL2), SYS_CNTKCTL);
+ } else {
+ val = translate_sctlr_el2_to_sctlr_el1(ctxt_sys_reg(ctxt, SCTLR_EL2));
+ write_sysreg_el1(val, SYS_SCTLR);
+ val = translate_cptr_el2_to_cpacr_el1(ctxt_sys_reg(ctxt, CPTR_EL2));
+ write_sysreg_el1(val, SYS_CPACR);
+ val = translate_ttbr0_el2_to_ttbr0_el1(ctxt_sys_reg(ctxt, TTBR0_EL2));
+ write_sysreg_el1(val, SYS_TTBR0);
+ val = translate_tcr_el2_to_tcr_el1(ctxt_sys_reg(ctxt, TCR_EL2));
+ write_sysreg_el1(val, SYS_TCR);
+ val = translate_cnthctl_el2_to_cntkctl_el1(ctxt_sys_reg(ctxt, CNTHCTL_EL2));
+ write_sysreg_el1(val, SYS_CNTKCTL);
+ }
+
+ write_sysreg_el1(ctxt_sys_reg(ctxt, ESR_EL2), SYS_ESR);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, AFSR0_EL2), SYS_AFSR0);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, AFSR1_EL2), SYS_AFSR1);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, FAR_EL2), SYS_FAR);
+ write_sysreg(ctxt_sys_reg(ctxt, SP_EL2), sp_el1);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, ELR_EL2), SYS_ELR);
+
+ val = __fixup_spsr_el2_write(ctxt, ctxt_sys_reg(ctxt, SPSR_EL2));
+ write_sysreg_el1(val, SYS_SPSR);
+}
/*
* VHE: Host and guest must save mdscr_el1 and sp_el0 (and the PC and
@@ -65,6 +155,7 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
{
struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
struct kvm_cpu_context *host_ctxt;
+ u64 mpidr;
host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
__sysreg_save_user_state(host_ctxt);
@@ -77,7 +168,29 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
*/
__sysreg32_restore_state(vcpu);
__sysreg_restore_user_state(guest_ctxt);
- __sysreg_restore_el1_state(guest_ctxt);
+
+ if (unlikely(__is_hyp_ctxt(guest_ctxt))) {
+ __sysreg_restore_vel2_state(guest_ctxt);
+ } else {
+ if (nested_virt_in_use(vcpu)) {
+ /*
+ * Only set VPIDR_EL2 for nested VMs, as this is the
+ * only time it changes. We'll restore the MIDR_EL1
+ * view on put.
+ */
+ write_sysreg(ctxt_sys_reg(guest_ctxt, VPIDR_EL2), vpidr_el2);
+
+ /*
+ * As we're restoring a nested guest, set the value
+ * provided by the guest hypervisor.
+ */
+ mpidr = ctxt_sys_reg(guest_ctxt, VMPIDR_EL2);
+ } else {
+ mpidr = ctxt_sys_reg(guest_ctxt, MPIDR_EL1);
+ }
+
+ __sysreg_restore_el1_state(guest_ctxt, mpidr);
+ }
vcpu->arch.sysregs_loaded_on_cpu = true;
@@ -103,12 +216,20 @@ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu)
host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
deactivate_traps_vhe_put();
- __sysreg_save_el1_state(guest_ctxt);
+ if (unlikely(__is_hyp_ctxt(guest_ctxt)))
+ __sysreg_save_vel2_state(guest_ctxt);
+ else
+ __sysreg_save_el1_state(guest_ctxt);
+
__sysreg_save_user_state(guest_ctxt);
__sysreg32_save_state(vcpu);
/* Restore host user state */
__sysreg_restore_user_state(host_ctxt);
+ /* If leaving a nesting guest, restore the default MIDR_EL1 view */
+ if (nested_virt_in_use(vcpu))
+ write_sysreg(read_cpuid_id(), vpidr_el2);
+
vcpu->arch.sysregs_loaded_on_cpu = false;
}
--
2.29.2
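The asymmetry between the VHE and nVHE cases is the crux of the save
path, and is simple enough to state as a sketch (save_action() is purely
illustrative, not kernel code):
#include <stdbool.h>
#include <stdio.h>
/*
 * Stub model of the save path above. With E2H set, the guest uses the
 * _EL1 encodings natively, so the live CPU values must be captured into
 * the _EL2 slots on save; without E2H every access traps, so the
 * in-memory _EL2 copy is already current and nothing needs saving.
 */
static const char *save_action(bool e2h)
{
	return e2h ? "read back from the EL1 registers"
		   : "trust the in-memory copy (trap-updated)";
}
int main(void)
{
	printf("VHE  guest hyp: %s\n", save_action(true));
	printf("nVHE guest hyp: %s\n", save_action(false));
	return 0;
}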
* [PATCH v4 17/66] KVM: arm64: nv: Emulate PSTATE.M for a guest hypervisor
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
From: Christoffer Dall <christoffer.dall@arm.com>
We can no longer blindly copy the VCPU's PSTATE into SPSR_EL2 when
returning to the guest, nor the other way around when taking an exception
to the hypervisor, because we emulate virtual EL2 in EL1 and therefore
have to translate the mode field between EL2 and EL1.
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 43 +++++++++++++++++++++-
1 file changed, 41 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index e3901c73893e..92715fa01e88 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -51,10 +51,32 @@ static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
ctxt_sys_reg(ctxt, SPSR_EL1) = read_sysreg_el1(SYS_SPSR);
}
+static inline u64 from_hw_pstate(const struct kvm_cpu_context *ctxt)
+{
+ u64 reg = read_sysreg_el2(SYS_SPSR);
+
+ if (__is_hyp_ctxt(ctxt)) {
+ u64 mode = reg & (PSR_MODE_MASK | PSR_MODE32_BIT);
+
+ switch (mode) {
+ case PSR_MODE_EL1t:
+ mode = PSR_MODE_EL2t;
+ break;
+ case PSR_MODE_EL1h:
+ mode = PSR_MODE_EL2h;
+ break;
+ }
+
+ return (reg & ~(PSR_MODE_MASK | PSR_MODE32_BIT)) | mode;
+ }
+
+ return reg;
+}
+
static inline void __sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt)
{
ctxt->regs.pc = read_sysreg_el2(SYS_ELR);
- ctxt->regs.pstate = read_sysreg_el2(SYS_SPSR);
+ ctxt->regs.pstate = from_hw_pstate(ctxt);
if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
ctxt_sys_reg(ctxt, DISR_EL1) = read_sysreg_s(SYS_VDISR_EL2);
@@ -131,9 +153,26 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt,
write_sysreg_el1(ctxt_sys_reg(ctxt, SPSR_EL1), SYS_SPSR);
}
+/* Read the VCPU state's PSTATE, but translate (v)EL2 to EL1. */
+static inline u64 to_hw_pstate(const struct kvm_cpu_context *ctxt)
+{
+ u64 mode = ctxt->regs.pstate & (PSR_MODE_MASK | PSR_MODE32_BIT);
+
+ switch (mode) {
+ case PSR_MODE_EL2t:
+ mode = PSR_MODE_EL1t;
+ break;
+ case PSR_MODE_EL2h:
+ mode = PSR_MODE_EL1h;
+ break;
+ }
+
+ return (ctxt->regs.pstate & ~(PSR_MODE_MASK | PSR_MODE32_BIT)) | mode;
+}
+
static inline void __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt)
{
- u64 pstate = ctxt->regs.pstate;
+ u64 pstate = to_hw_pstate(ctxt);
u64 mode = pstate & PSR_AA32_MODE_MASK;
/*
--
2.29.2
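The translation pair is easy to check standalone. The sketch below copies
the mode-translation logic from the patch, mode bits only (the real code
also masks PSR_MODE32_BIT, and only translates on the read side when the
vcpu is in a hyp context):
#include <stdint.h>
#include <stdio.h>
/* arm64 PSTATE.M encodings (AArch64) */
#define PSR_MODE_EL1t	0x4
#define PSR_MODE_EL1h	0x5
#define PSR_MODE_EL2t	0x8
#define PSR_MODE_EL2h	0x9
#define PSR_MODE_MASK	0xf
/* Standalone copies of the two translations, mode bits only. */
static uint64_t to_hw(uint64_t pstate)
{
	uint64_t mode = pstate & PSR_MODE_MASK;
	if (mode == PSR_MODE_EL2t) mode = PSR_MODE_EL1t;
	if (mode == PSR_MODE_EL2h) mode = PSR_MODE_EL1h;
	return (pstate & ~(uint64_t)PSR_MODE_MASK) | mode;
}
static uint64_t from_hw(uint64_t pstate)
{
	uint64_t mode = pstate & PSR_MODE_MASK;
	if (mode == PSR_MODE_EL1t) mode = PSR_MODE_EL2t;
	if (mode == PSR_MODE_EL1h) mode = PSR_MODE_EL2h;
	return (pstate & ~(uint64_t)PSR_MODE_MASK) | mode;
}
int main(void)
{
	uint64_t vcpu_pstate = 0x3c0 | PSR_MODE_EL2h;
	/* vEL2 guest runs at EL1h on the CPU, and reads back as EL2h */
	printf("hw=%#llx back=%#llx\n",
	       (unsigned long long)to_hw(vcpu_pstate),
	       (unsigned long long)from_hw(to_hw(vcpu_pstate)));
	return 0;
}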
* [PATCH v4 18/66] KVM: arm64: nv: Trap EL1 VM register accesses in virtual EL2
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Christoffer Dall
From: Christoffer Dall <christoffer.dall@linaro.org>
When running in virtual EL2 mode, we actually run the hardware in EL1
and therefore have to use the EL1 registers to ensure correct operation.
By setting HCR.TVM and HCR.TRVM we ensure that the virtual EL2 mode
doesn't shoot itself in the foot when setting up what it believes to be
a different mode's system register state (for example when preparing to
switch to a VM).
We can leverage the existing sysregs infrastructure to support trapped
accesses to these registers.
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/hyp/include/hyp/switch.h | 4 +---
arch/arm64/kvm/hyp/nvhe/switch.c | 2 +-
arch/arm64/kvm/hyp/vhe/switch.c | 7 ++++++-
arch/arm64/kvm/sys_regs.c | 19 ++++++++++++++++---
4 files changed, 24 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index e4a2f295a394..bd2a7a6ae5d7 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -102,10 +102,8 @@ static inline void __deactivate_traps_common(void)
write_sysreg(0, pmuserenr_el0);
}
-static inline void ___activate_traps(struct kvm_vcpu *vcpu)
+static inline void ___activate_traps(struct kvm_vcpu *vcpu, u64 hcr)
{
- u64 hcr = vcpu->arch.hcr_el2;
-
if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM))
hcr |= HCR_TVM;
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index e9f6ea704d07..2b0f8675fe3b 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -39,7 +39,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
{
u64 val;
- ___activate_traps(vcpu);
+ ___activate_traps(vcpu, vcpu->arch.hcr_el2);
__activate_traps_common(vcpu);
val = CPTR_EL2_DEFAULT;
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 7b8f7db5c1ed..6764bfd73ff6 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -34,9 +34,14 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
static void __activate_traps(struct kvm_vcpu *vcpu)
{
+ u64 hcr = vcpu->arch.hcr_el2;
u64 val;
- ___activate_traps(vcpu);
+ /* Trap VM sysreg accesses if an EL2 guest is not using VHE. */
+ if (vcpu_mode_el2(vcpu) && !vcpu_el2_e2h_is_set(vcpu))
+ hcr |= HCR_TVM | HCR_TRVM;
+
+ ___activate_traps(vcpu, hcr);
val = read_sysreg(cpacr_el1);
val |= CPACR_EL1_TTA;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 0d29d9d59c3f..9fb1cc8bf836 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -354,8 +354,15 @@ static void get_access_mask(const struct sys_reg_desc *r, u64 *mask, u64 *shift)
/*
* Generic accessor for VM registers. Only called as long as HCR_TVM
- * is set. If the guest enables the MMU, we stop trapping the VM
- * sys_regs and leave it in complete control of the caches.
+ * is set.
+ *
+ * This is set in two cases: either (1) we're running at vEL2, or (2)
+ * we're running at EL1 and the guest has its MMU off.
+ *
+ * (1) TVM/TRVM is set, as we need to virtualise some of the VM
+ * registers for the guest hypervisor
+ * (2) Once the guest enables the MMU, we stop trapping the VM sys_regs
+ * and leave it in complete control of the caches.
*/
static bool access_vm_reg(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
@@ -364,7 +371,13 @@ static bool access_vm_reg(struct kvm_vcpu *vcpu,
bool was_enabled = vcpu_has_cache_enabled(vcpu);
u64 val, mask, shift;
- BUG_ON(!p->is_write);
+ /* We don't expect TRVM on the host */
+ BUG_ON(!vcpu_mode_el2(vcpu) && !p->is_write);
+
+ if (!p->is_write) {
+ p->regval = vcpu_read_sys_reg(vcpu, r->reg);
+ return true;
+ }
get_access_mask(r, &mask, &shift);
--
2.29.2
* [PATCH v4 19/66] KVM: arm64: nv: Trap SPSR_EL1, ELR_EL1 and VBAR_EL1 from virtual EL2
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
From: Jintack Lim <jintack.lim@linaro.org>
For the same reason we trap virtual memory register accesses at virtual
EL2, we need to trap SPSR_EL1, ELR_EL1 and VBAR_EL1 accesses. ARMv8.3
introduces the HCR_EL2.NV1 bit to trap those register accesses at EL1.
Do not set this bit until nesting support is fully in place.
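As a stand-alone sketch of the accessor pattern (mocked vcpu; in the
real code vcpu_{read,write}_sys_reg() may redirect to the CPU when the
register is currently loaded there):

#include <stdint.h>
#include <stdio.h>

enum reg { ELR_EL1, SPSR_EL1, NR_REGS };

struct vcpu { uint64_t regs[NR_REGS]; };

/* stand-ins for vcpu_read_sys_reg()/vcpu_write_sys_reg() */
static uint64_t vcpu_read_sys_reg(struct vcpu *v, enum reg r)
{
	return v->regs[r];
}

static void vcpu_write_sys_reg(struct vcpu *v, uint64_t val, enum reg r)
{
	v->regs[r] = val;
}

/* shape of the access_elr() handler added below */
static void access_elr(struct vcpu *v, int is_write, uint64_t *regval)
{
	if (is_write)
		vcpu_write_sys_reg(v, *regval, ELR_EL1);
	else
		*regval = vcpu_read_sys_reg(v, ELR_EL1);
}

int main(void)
{
	struct vcpu v = { { 0 } };
	uint64_t val = 0xffff800010080000ULL;

	access_elr(&v, 1, &val);	/* trapped write from vEL2 */
	val = 0;
	access_elr(&v, 0, &val);	/* trapped read */
	printf("ELR_EL1 = %#llx\n", (unsigned long long)val);
	return 0;
}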
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 29 ++++++++++++++++++++++++++++-
1 file changed, 28 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 9fb1cc8bf836..dccc2d758f68 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1584,6 +1584,30 @@ static bool access_sp_el1(struct kvm_vcpu *vcpu,
return true;
}
+static bool access_elr(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ if (p->is_write)
+ vcpu_write_sys_reg(vcpu, p->regval, ELR_EL1);
+ else
+ p->regval = vcpu_read_sys_reg(vcpu, ELR_EL1);
+
+ return true;
+}
+
+static bool access_spsr(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ if (p->is_write)
+ __vcpu_sys_reg(vcpu, SPSR_EL1) = p->regval;
+ else
+ p->regval = __vcpu_sys_reg(vcpu, SPSR_EL1);
+
+ return true;
+}
+
static bool access_spsr_el2(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
@@ -1746,6 +1770,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
PTRAUTH_KEY(APDB),
PTRAUTH_KEY(APGA),
+ { SYS_DESC(SYS_SPSR_EL1), access_spsr},
+ { SYS_DESC(SYS_ELR_EL1), access_elr},
+
{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
@@ -1793,7 +1820,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_LORC_EL1), trap_loregion },
{ SYS_DESC(SYS_LORID_EL1), trap_loregion },
- { SYS_DESC(SYS_VBAR_EL1), NULL, reset_val, VBAR_EL1, 0 },
+ { SYS_DESC(SYS_VBAR_EL1), access_rw, reset_val, VBAR_EL1, 0 },
{ SYS_DESC(SYS_DISR_EL1), NULL, reset_val, DISR_EL1, 0 },
{ SYS_DESC(SYS_ICC_IAR0_EL1), write_to_read_only },
--
2.29.2
* [PATCH v4 20/66] KVM: arm64: nv: Trap CPACR_EL1 access in virtual EL2
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
From: Jintack Lim <jintack.lim@linaro.org>
For the same reason we trap virtual memory register accesses in virtual
EL2, we trap CPACR_EL1 accesses too; we let the virtual EL2 mode access
the EL1 system register state instead of the virtual EL2 one.
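The hunk below also switches CPTR_EL2_TCPAC to an unsigned constant:
with a 32-bit int, (1 << 31) lands in the sign bit (formally undefined,
INT_MIN in practice) and sign-extends when widened into a 64-bit sysreg
value. A quick stand-alone demonstration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t sign_extended = (1 << 31);	/* widens to 0xffffffff80000000 */
	uint64_t intended      = (1U << 31);	/* widens to 0x0000000080000000 */

	printf("%#llx\n", (unsigned long long)sign_extended);
	printf("%#llx\n", (unsigned long long)intended);
	return 0;
}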
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_arm.h | 2 +-
arch/arm64/kvm/hyp/vhe/switch.c | 3 +++
arch/arm64/kvm/sys_regs.c | 2 +-
3 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 4fd52af7f538..4f987bd8fcf3 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -269,7 +269,7 @@
#define CPTR_EL2_TFP_SHIFT 10
/* Hyp Coprocessor Trap Register */
-#define CPTR_EL2_TCPAC (1 << 31)
+#define CPTR_EL2_TCPAC (1U << 31)
#define CPTR_EL2_TAM (1 << 30)
#define CPTR_EL2_TTA (1 << 20)
#define CPTR_EL2_TFP (1 << CPTR_EL2_TFP_SHIFT)
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 6764bfd73ff6..a238f52955c5 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -66,6 +66,9 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
__activate_traps_fpsimd32(vcpu);
}
+ if (vcpu_mode_el2(vcpu) && !vcpu_el2_e2h_is_set(vcpu))
+ val |= CPTR_EL2_TCPAC;
+
write_sysreg(val, cpacr_el1);
write_sysreg(__this_cpu_read(kvm_hyp_vector), vbar_el1);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index dccc2d758f68..d5c4455713c4 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1753,7 +1753,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_SCTLR_EL1), access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 },
{ SYS_DESC(SYS_ACTLR_EL1), access_actlr, reset_actlr, ACTLR_EL1 },
- { SYS_DESC(SYS_CPACR_EL1), NULL, reset_val, CPACR_EL1, 0 },
+ { SYS_DESC(SYS_CPACR_EL1), access_rw, reset_val, CPACR_EL1, 0 },
{ SYS_DESC(SYS_RGSR_EL1), undef_access },
{ SYS_DESC(SYS_GCR_EL1), undef_access },
--
2.29.2
* [PATCH v4 21/66] KVM: arm64: nv: Handle PSCI call via smc from the guest
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
From: Jintack Lim <jintack.lim@linaro.org>
VMs used to execute hvc #0 for PSCI calls when EL3 is not implemented.
However, once we provide the virtual EL2 mode to the VM, the host OS
inside the VM calls kvm_call_hyp(), which is also hvc #0, so the host
hypervisor cannot tell the two apart.
Instead, let the VM execute an smc instruction for PSCI calls. On
ARMv8.3, even if EL3 is not implemented, an SMC instruction executed at
non-secure EL1 is trapped to EL2 when HCR_EL2.TSC==1, rather than being
treated as UNDEFINED, so the host hypervisor can handle the PSCI call
without any confusion.
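From the guest's point of view, such a call looks roughly like the
hypothetical AArch64-only snippet below (illustration, not part of the
patch; the function ID is PSCI_VERSION from the PSCI specification):

#include <stdint.h>

#define PSCI_0_2_FN_PSCI_VERSION	0x84000000UL

static uint64_t psci_version_smc(void)
{
	register uint64_t x0 asm("x0") = PSCI_0_2_FN_PSCI_VERSION;

	/*
	 * With HCR_EL2.TSC set, this traps to the host hypervisor even
	 * when EL3 is not implemented.
	 */
	asm volatile("smc #0"
		     : "+r" (x0)
		     :
		     : "x1", "x2", "x3", "memory");

	return x0;	/* PSCI version, or an error code */
}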
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/handle_exit.c | 24 ++++++++++++++++++++++--
1 file changed, 22 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index fca0d44afe96..f0fc99c7c9ca 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -62,6 +62,8 @@ static int handle_hvc(struct kvm_vcpu *vcpu)
static int handle_smc(struct kvm_vcpu *vcpu)
{
+ int ret;
+
/*
* "If an SMC instruction executed at Non-secure EL1 is
* trapped to EL2 because HCR_EL2.TSC is 1, the exception is a
@@ -69,10 +71,28 @@ static int handle_smc(struct kvm_vcpu *vcpu)
*
* We need to advance the PC after the trap, as it would
* otherwise return to the same address...
+ *
+ * If imm is non-zero, it's not defined, so just skip it.
+ */
+ if (kvm_vcpu_hvc_get_imm(vcpu)) {
+ vcpu_set_reg(vcpu, 0, ~0UL);
+ kvm_incr_pc(vcpu);
+ return 1;
+ }
+
+ /*
+ * If imm is zero, it's a psci call.
+ * Note that on ARMv8.3, even if EL3 is not implemented, SMC executed
+ * at Non-secure EL1 is trapped to EL2 if HCR_EL2.TSC==1, rather than
+ * being treated as UNDEFINED.
*/
- vcpu_set_reg(vcpu, 0, ~0UL);
+ ret = kvm_hvc_call_handler(vcpu);
+ if (ret < 0)
+ vcpu_set_reg(vcpu, 0, ~0UL);
+
kvm_incr_pc(vcpu);
- return 1;
+
+ return ret;
}
/*
--
2.29.2
* [PATCH v4 22/66] KVM: arm64: nv: Respect virtual HCR_EL2.TWX setting
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
From: Jintack Lim <jintack.lim@linaro.org>
Forward exceptions due to WFI or WFE instructions to the virtual EL2 if
they are not coming from the virtual EL2 and virtual HCR_EL2.TWX is set.
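The forwarding decision itself boils down to a small predicate; here is
a stand-alone model (the TWI/TWE bit positions match kvm_arm.h):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HCR_TWI	(1ULL << 13)
#define HCR_TWE	(1ULL << 14)

/* true if the WFx exception should be injected into the virtual EL2 */
static bool forward_wfx(uint64_t vhcr_el2, bool from_vel2, bool is_wfe)
{
	if (from_vel2)	/* a trap from vEL2 is never forwarded to itself */
		return false;

	return is_wfe ? (vhcr_el2 & HCR_TWE) : (vhcr_el2 & HCR_TWI);
}

int main(void)
{
	printf("%d\n", forward_wfx(HCR_TWE, false, true));	/* 1: forward */
	printf("%d\n", forward_wfx(HCR_TWE, true, true));	/* 0: from vEL2 */
	printf("%d\n", forward_wfx(HCR_TWE, false, false));	/* 0: WFI, TWI clear */
	return 0;
}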
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_nested.h | 2 ++
arch/arm64/kvm/Makefile | 2 +-
arch/arm64/kvm/handle_exit.c | 11 +++++++-
arch/arm64/kvm/nested.c | 40 +++++++++++++++++++++++++++++
4 files changed, 53 insertions(+), 2 deletions(-)
create mode 100644 arch/arm64/kvm/nested.c
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 67a2c0d05233..4c2ac9650a3e 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -61,4 +61,6 @@ static inline u64 translate_cnthctl_el2_to_cntkctl_el1(u64 cnthctl)
(cnthctl & (CNTHCTL_EVNTI | CNTHCTL_EVNTDIR | CNTHCTL_EVNTEN)));
}
+int handle_wfx_nested(struct kvm_vcpu *vcpu, bool is_wfe);
+
#endif /* __ARM64_KVM_NESTED_H */
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index ebf6ddb3cd77..598526c064f7 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -16,7 +16,7 @@ kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
inject_fault.o va_layout.o handle_exit.o \
guest.o debug.o reset.o sys_regs.o \
vgic-sys-reg-v3.o fpsimd.o pmu.o \
- arch_timer.o trng.o emulate-nested.o \
+ arch_timer.o trng.o emulate-nested.o nested.o \
vgic/vgic.o vgic/vgic-init.o \
vgic/vgic-irqfd.o vgic/vgic-v2.o \
vgic/vgic-v3.o vgic/vgic-v4.o \
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index f0fc99c7c9ca..2f579152df0b 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -119,7 +119,16 @@ static int handle_no_fpsimd(struct kvm_vcpu *vcpu)
*/
static int kvm_handle_wfx(struct kvm_vcpu *vcpu)
{
- if (kvm_vcpu_get_esr(vcpu) & ESR_ELx_WFx_ISS_WFE) {
+ bool is_wfe = !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_WFx_ISS_WFE);
+
+ if (nested_virt_in_use(vcpu)) {
+ int ret = handle_wfx_nested(vcpu, is_wfe);
+
+ if (ret != -EINVAL)
+ return ret;
+ }
+
+ if (is_wfe) {
trace_kvm_wfx_arm64(*vcpu_pc(vcpu), true);
vcpu->stat.wfe_exit_stat++;
kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
new file mode 100644
index 000000000000..42a96c8d2adc
--- /dev/null
+++ b/arch/arm64/kvm/nested.c
@@ -0,0 +1,40 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2017 - Columbia University and Linaro Ltd.
+ * Author: Jintack Lim <jintack.lim@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+
+#include <asm/kvm_emulate.h>
+
+/*
+ * Inject wfx to the virtual EL2 if this is not from the virtual EL2 and
+ * the virtual HCR_EL2.TWX is set. Otherwise, let the host hypervisor
+ * handle this.
+ */
+int handle_wfx_nested(struct kvm_vcpu *vcpu, bool is_wfe)
+{
+ u64 hcr_el2 = __vcpu_sys_reg(vcpu, HCR_EL2);
+
+ if (vcpu_mode_el2(vcpu))
+ return -EINVAL;
+
+ if ((is_wfe && (hcr_el2 & HCR_TWE)) || (!is_wfe && (hcr_el2 & HCR_TWI)))
+ return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+
+ return -EINVAL;
+}
--
2.29.2
* [PATCH v4 23/66] KVM: arm64: nv: Respect virtual CPTR_EL2.{TFP,FPEN} settings
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
From: Jintack Lim <jintack.lim@linaro.org>
Forward traps due to FP/ASIMD register accesses to the virtual EL2
if virtual CPTR_EL2.TFP is set (with HCR_EL2.E2H == 0) or
CPTR_EL2.FPEN is configured to do so (with HCR_EL2.E2H == 1).
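The decision table, as a stand-alone model (CPTR_EL2.TFP is bit 10 in
the nVHE layout; with E2H set the register takes the CPACR_EL1 layout,
where FPEN is bits [21:20]):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CPTR_EL2_TFP		(1ULL << 10)	/* nVHE layout */
#define CPACR_EL1_FPEN_SHIFT	20		/* VHE layout */

static bool fpsimd_trap_to_vel2(uint64_t vcptr, bool e2h, bool tge,
				bool at_vel2)
{
	if (!e2h)			/* nVHE: a single trap bit */
		return vcptr & CPTR_EL2_TFP;

	switch ((vcptr >> CPACR_EL1_FPEN_SHIFT) & 3) {
	case 0b00:
	case 0b10:
		return true;		/* trapped at all ELs */
	case 0b01:
		return tge && !at_vel2;	/* only EL0 accesses trap */
	default:			/* 0b11: no trap */
		return false;
	}
}

int main(void)
{
	printf("%d\n", fpsimd_trap_to_vel2(CPTR_EL2_TFP, false, false, false)); /* 1 */
	printf("%d\n", fpsimd_trap_to_vel2(3ULL << 20, true, false, false));	/* 0 */
	return 0;
}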
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
[maz: account for HCR_EL2.E2H when testing for TFP/FPEN, with
all the hard work actually being done by Chase Conklin]
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_emulate.h | 26 +++++++++++++++++++++++++
arch/arm64/kvm/handle_exit.c | 16 +++++++++++----
arch/arm64/kvm/hyp/include/hyp/switch.h | 8 ++++++--
3 files changed, 44 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index f63872db1e39..12935f6a7aa9 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -11,12 +11,14 @@
#ifndef __ARM64_KVM_EMULATE_H__
#define __ARM64_KVM_EMULATE_H__
+#include <linux/bitfield.h>
#include <linux/kvm_host.h>
#include <asm/debug-monitors.h>
#include <asm/esr.h>
#include <asm/kvm_arm.h>
#include <asm/kvm_hyp.h>
+#include <asm/kvm_nested.h>
#include <asm/ptrace.h>
#include <asm/cputype.h>
#include <asm/virt.h>
@@ -321,6 +323,30 @@ static inline bool vcpu_mode_priv(const struct kvm_vcpu *vcpu)
return mode != PSR_MODE_EL0t;
}
+static inline bool guest_hyp_fpsimd_traps_enabled(const struct kvm_vcpu *vcpu)
+{
+ u64 val;
+
+ if (!nested_virt_in_use(vcpu))
+ return false;
+
+ val = vcpu_read_sys_reg(vcpu, CPTR_EL2);
+
+ if (!vcpu_el2_e2h_is_set(vcpu))
+ return (val & CPTR_EL2_TFP);
+
+ switch (FIELD_GET(CPACR_EL1_FPEN, val)) {
+ case 0b00:
+ case 0b10:
+ return true;
+ case 0b01:
+ return vcpu_el2_tge_is_set(vcpu) && !vcpu_mode_el2(vcpu);
+ case 0b11:
+ default: /* GCC is dumb */
+ return false;
+ }
+}
+
static __always_inline u32 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu)
{
return vcpu->arch.fault.esr_el2;
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 2f579152df0b..3380c5a91b60 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -96,11 +96,19 @@ static int handle_smc(struct kvm_vcpu *vcpu)
}
/*
- * Guest access to FP/ASIMD registers are routed to this handler only
- * when the system doesn't support FP/ASIMD.
+ * This handles the cases where the system does not support FP/ASIMD or when
+ * we are running nested virtualization and the guest hypervisor is trapping
+ * FP/ASIMD accesses by its own guest.
+ *
+ * All other handling of guest vs. host FP/ASIMD register state is handled in
+ * fixup_guest_exit().
*/
-static int handle_no_fpsimd(struct kvm_vcpu *vcpu)
+static int kvm_handle_fpasimd(struct kvm_vcpu *vcpu)
{
+ if (guest_hyp_fpsimd_traps_enabled(vcpu))
+ return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+
+ /* This is the case when the system doesn't support FP/ASIMD. */
kvm_inject_undefined(vcpu);
return 1;
}
@@ -243,7 +251,7 @@ static exit_handle_fn arm_exit_handlers[] = {
[ESR_ELx_EC_BREAKPT_LOW]= kvm_handle_guest_debug,
[ESR_ELx_EC_BKPT32] = kvm_handle_guest_debug,
[ESR_ELx_EC_BRK64] = kvm_handle_guest_debug,
- [ESR_ELx_EC_FP_ASIMD] = handle_no_fpsimd,
+ [ESR_ELx_EC_FP_ASIMD] = kvm_handle_fpasimd,
[ESR_ELx_EC_PAC] = kvm_handle_ptrauth,
};
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index bd2a7a6ae5d7..0790eb2b7545 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -242,8 +242,12 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
esr_ec != ESR_ELx_EC_SVE)
return false;
- /* Don't handle SVE traps for non-SVE vcpus here: */
- if (!sve_guest && esr_ec != ESR_ELx_EC_FP_ASIMD)
+ /*
+ * Don't handle SVE traps for non-SVE vcpus here. This
+ * includes NV guests for the time being.
+ */
+ if (!sve_guest && (esr_ec != ESR_ELx_EC_FP_ASIMD ||
+ guest_hyp_fpsimd_traps_enabled(vcpu)))
return false;
/* Valid trap. Switch the context: */
--
2.29.2
^ permalink raw reply related [flat|nested] 229+ messages in thread
* [PATCH v4 23/66] KVM: arm64: nv: Respect virtual CPTR_EL2.{TFP, FPEN} settings
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
From: Jintack Lim <jintack.lim@linaro.org>
Forward traps due to FP/ASIMD register accesses to the virtual EL2
if virtual CPTR_EL2.TFP is set (with HCR_EL2.E2H == 0) or
CPTR_EL2.FPEN is configure to do so (with HCR_EL2.E2h == 1).
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
[maz: account for HCR_EL2.E2H when testing for TFP/FPEN, with
all the hard work actually being done by Chase Conklin]
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_emulate.h | 26 +++++++++++++++++++++++++
arch/arm64/kvm/handle_exit.c | 16 +++++++++++----
arch/arm64/kvm/hyp/include/hyp/switch.h | 8 ++++++--
3 files changed, 44 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index f63872db1e39..12935f6a7aa9 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -11,12 +11,14 @@
#ifndef __ARM64_KVM_EMULATE_H__
#define __ARM64_KVM_EMULATE_H__
+#include <linux/bitfield.h>
#include <linux/kvm_host.h>
#include <asm/debug-monitors.h>
#include <asm/esr.h>
#include <asm/kvm_arm.h>
#include <asm/kvm_hyp.h>
+#include <asm/kvm_nested.h>
#include <asm/ptrace.h>
#include <asm/cputype.h>
#include <asm/virt.h>
@@ -321,6 +323,30 @@ static inline bool vcpu_mode_priv(const struct kvm_vcpu *vcpu)
return mode != PSR_MODE_EL0t;
}
+static inline bool guest_hyp_fpsimd_traps_enabled(const struct kvm_vcpu *vcpu)
+{
+ u64 val;
+
+ if (!nested_virt_in_use(vcpu))
+ return false;
+
+ val = vcpu_read_sys_reg(vcpu, CPTR_EL2);
+
+ if (!vcpu_el2_e2h_is_set(vcpu))
+ return (val & CPTR_EL2_TFP);
+
+ switch (FIELD_GET(CPACR_EL1_FPEN, val)) {
+ case 0b00:
+ case 0b10:
+ return true;
+ case 0b01:
+ return vcpu_el2_tge_is_set(vcpu) && !vcpu_mode_el2(vcpu);
+ case 0b11:
+ default: /* GCC is dumb */
+ return false;
+ }
+}
+
static __always_inline u32 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu)
{
return vcpu->arch.fault.esr_el2;
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 2f579152df0b..3380c5a91b60 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -96,11 +96,19 @@ static int handle_smc(struct kvm_vcpu *vcpu)
}
/*
- * Guest access to FP/ASIMD registers are routed to this handler only
- * when the system doesn't support FP/ASIMD.
+ * This handles the cases where the system does not support FP/ASIMD or when
+ * we are running nested virtualization and the guest hypervisor is trapping
+ * FP/ASIMD accesses by its guest guest.
+ *
+ * All other handling of guest vs. host FP/ASIMD register state is handled in
+ * fixup_guest_exit().
*/
-static int handle_no_fpsimd(struct kvm_vcpu *vcpu)
+static int kvm_handle_fpasimd(struct kvm_vcpu *vcpu)
{
+ if (guest_hyp_fpsimd_traps_enabled(vcpu))
+ return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+
+ /* This is the case when the system doesn't support FP/ASIMD. */
kvm_inject_undefined(vcpu);
return 1;
}
@@ -243,7 +251,7 @@ static exit_handle_fn arm_exit_handlers[] = {
[ESR_ELx_EC_BREAKPT_LOW]= kvm_handle_guest_debug,
[ESR_ELx_EC_BKPT32] = kvm_handle_guest_debug,
[ESR_ELx_EC_BRK64] = kvm_handle_guest_debug,
- [ESR_ELx_EC_FP_ASIMD] = handle_no_fpsimd,
+ [ESR_ELx_EC_FP_ASIMD] = kvm_handle_fpasimd,
[ESR_ELx_EC_PAC] = kvm_handle_ptrauth,
};
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index bd2a7a6ae5d7..0790eb2b7545 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -242,8 +242,12 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
esr_ec != ESR_ELx_EC_SVE)
return false;
- /* Don't handle SVE traps for non-SVE vcpus here: */
- if (!sve_guest && esr_ec != ESR_ELx_EC_FP_ASIMD)
+ /*
+ * Don't handle SVE traps for non-SVE vcpus here. This
+ * includes NV guests for the time being.
+ */
+ if (!sve_guest && (esr_ec != ESR_ELx_EC_FP_ASIMD ||
+ guest_hyp_fpsimd_traps_enabled(vcpu)))
return false;
/* Valid trap. Switch the context: */
--
2.29.2
* [PATCH v4 24/66] KVM: arm64: nv: Respect the virtual HCR_EL2.NV bit setting
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
From: Jintack Lim <jintack.lim@linaro.org>
Forward traps due to the HCR_EL2.NV bit to the virtual EL2 if they are
not coming from the virtual EL2 and the virtual HCR_EL2.NV bit is set.
In addition to EL2 register accesses, setting the NV bit also makes EL12
register accesses trap to EL2. To emulate this for the virtual EL2,
forward traps due to EL12 register accesses to the virtual EL2 if the
virtual HCR_EL2.NV bit is set.
This is for recursive nested virtualization.
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
[Moved code to emulate-nested.c]
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_arm.h | 1 +
arch/arm64/include/asm/kvm_nested.h | 2 ++
arch/arm64/kvm/emulate-nested.c | 27 +++++++++++++++++++++++++++
arch/arm64/kvm/handle_exit.c | 7 +++++++
arch/arm64/kvm/sys_regs.c | 24 ++++++++++++++++++++++++
5 files changed, 61 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 4f987bd8fcf3..a85d7ea646cd 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -14,6 +14,7 @@
/* Hyp Configuration Register (HCR) bits */
#define HCR_ATA (UL(1) << 56)
#define HCR_FWB (UL(1) << 46)
+#define HCR_NV (UL(1) << 42)
#define HCR_API (UL(1) << 41)
#define HCR_APK (UL(1) << 40)
#define HCR_TEA (UL(1) << 37)
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 4c2ac9650a3e..26cba7b4d743 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -62,5 +62,7 @@ static inline u64 translate_cnthctl_el2_to_cntkctl_el1(u64 cnthctl)
}
int handle_wfx_nested(struct kvm_vcpu *vcpu, bool is_wfe);
+extern bool forward_traps(struct kvm_vcpu *vcpu, u64 control_bit);
+extern bool forward_nv_traps(struct kvm_vcpu *vcpu);
#endif /* __ARM64_KVM_NESTED_H */
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index ee91bcd925d8..feb9b5eded96 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -25,11 +25,38 @@
#include "trace.h"
+bool forward_traps(struct kvm_vcpu *vcpu, u64 control_bit)
+{
+ bool control_bit_set;
+
+ if (!nested_virt_in_use(vcpu))
+ return false;
+
+ control_bit_set = __vcpu_sys_reg(vcpu, HCR_EL2) & control_bit;
+ if (!vcpu_mode_el2(vcpu) && control_bit_set) {
+ kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+ return true;
+ }
+ return false;
+}
+
+bool forward_nv_traps(struct kvm_vcpu *vcpu)
+{
+ return forward_traps(vcpu, HCR_NV);
+}
+
void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
{
u64 spsr, elr, mode;
bool direct_eret;
+ /*
+ * Forward this trap to the virtual EL2 if the virtual
+ * HCR_EL2.NV bit is set and this is coming from !EL2.
+ */
+ if (forward_nv_traps(vcpu))
+ return;
+
/*
* Going through the whole put/load motions is a waste of time
* if this is a VHE guest hypervisor returning to its own
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 3380c5a91b60..5f1e3989c4bc 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -64,6 +64,13 @@ static int handle_smc(struct kvm_vcpu *vcpu)
{
int ret;
+ /*
+ * Forward this trapped smc instruction to the virtual EL2 if
+ * the guest has asked for it.
+ */
+ if (forward_traps(vcpu, HCR_TSC))
+ return 1;
+
/*
* "If an SMC instruction executed at Non-secure EL1 is
* trapped to EL2 because HCR_EL2.TSC is 1, the exception is a
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index d5c4455713c4..98b6fe77b53e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -271,10 +271,19 @@ static u32 get_ccsidr(u32 csselr)
return ccsidr;
}
+static bool el12_reg(struct sys_reg_params *p)
+{
+ /* All *_EL12 registers have Op1=5. */
+ return (p->Op1 == 5);
+}
+
static bool access_rw(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
+ if (el12_reg(p) && forward_nv_traps(vcpu))
+ return false;
+
if (p->is_write)
vcpu_write_sys_reg(vcpu, p->regval, r->reg);
else
@@ -287,6 +296,9 @@ static bool access_sctlr_el2(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
+ if (el12_reg(p) && forward_nv_traps(vcpu))
+ return false;
+
if (p->is_write) {
u64 val = p->regval;
@@ -371,6 +383,9 @@ static bool access_vm_reg(struct kvm_vcpu *vcpu,
bool was_enabled = vcpu_has_cache_enabled(vcpu);
u64 val, mask, shift;
+ if (el12_reg(p) && forward_nv_traps(vcpu))
+ return false;
+
/* We don't expect TRVM on the host */
BUG_ON(!vcpu_mode_el2(vcpu) && !p->is_write);
@@ -1588,6 +1603,9 @@ static bool access_elr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
+ if (el12_reg(p) && forward_nv_traps(vcpu))
+ return false;
+
if (p->is_write)
vcpu_write_sys_reg(vcpu, p->regval, ELR_EL1);
else
@@ -1600,6 +1618,9 @@ static bool access_spsr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
+ if (el12_reg(p) && forward_nv_traps(vcpu))
+ return false;
+
if (p->is_write)
__vcpu_sys_reg(vcpu, SPSR_EL1) = p->regval;
else
@@ -1612,6 +1633,9 @@ static bool access_spsr_el2(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
+ if (el12_reg(p) && forward_nv_traps(vcpu))
+ return false;
+
if (p->is_write)
vcpu_write_sys_reg(vcpu, p->regval, SPSR_EL2);
else
--
2.29.2
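The core of forward_traps() is a three-way condition that is easy to
model in isolation. A minimal sketch, assuming the architectural
HCR_EL2 bit positions (TSC at bit 19, NV at bit 42); the function name
is invented:

#include <stdbool.h>
#include <stdint.h>

#define HCR_TSC	(1ULL << 19)
#define HCR_NV	(1ULL << 42)

/* A trap is forwarded to the virtual EL2 iff nested virt is in use,
 * the vcpu is *not* already in virtual EL2, and the relevant control
 * bit is set in the guest hypervisor's (virtual) HCR_EL2. */
static bool should_forward(bool nested, bool in_vel2,
			   uint64_t vhcr_el2, uint64_t control_bit)
{
	return nested && !in_vel2 && (vhcr_el2 & control_bit);
}

With this shape, the handle_smc() hunk above is just
should_forward(..., HCR_TSC) followed by kvm_inject_nested_sync().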
* [PATCH v4 25/66] KVM: arm64: nv: Respect virtual HCR_EL2.TVM and TRVM settings
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
From: Jintack Lim <jintack.lim@linaro.org>
Forward the EL1 virtual memory register traps to the virtual EL2 if they
are not coming from the virtual EL2 and the virtual HCR_EL2.TVM or TRVM
bit is set.
This is for recursive nested virtualization.
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 98b6fe77b53e..35bdfa0cb1ff 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -386,6 +386,13 @@ static bool access_vm_reg(struct kvm_vcpu *vcpu,
if (el12_reg(p) && forward_nv_traps(vcpu))
return false;
+ if (!el12_reg(p)) {
+ u64 bit = p->is_write ? HCR_TVM : HCR_TRVM;
+
+ if (forward_traps(vcpu, bit))
+ return false;
+ }
+
/* We don't expect TRVM on the host */
BUG_ON(!vcpu_mode_el2(vcpu) && !p->is_write);
--
2.29.2
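The only subtlety above is which HCR_EL2 bit applies to a given access
direction; a one-line sketch with the architectural bit positions (TVM
at bit 26, TRVM at bit 30):

#include <stdbool.h>
#include <stdint.h>

#define HCR_TVM		(1ULL << 26)	/* trap writes to EL1 VM regs */
#define HCR_TRVM	(1ULL << 30)	/* trap reads of EL1 VM regs */

/* Writes are forwarded under TVM, reads under TRVM. */
static uint64_t vm_reg_trap_bit(bool is_write)
{
	return is_write ? HCR_TVM : HCR_TRVM;
}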
* [PATCH v4 26/66] KVM: arm64: nv: Respect the virtual HCR_EL2.NV1 bit setting
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
From: Jintack Lim <jintack@cs.columbia.edu>
Forward ELR_EL1, SPSR_EL1 and VBAR_EL1 traps to the virtual EL2 if the
virtual HCR_EL2.NV1 bit is set.
This is for recursive nested virtualization.
Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_arm.h | 1 +
arch/arm64/kvm/sys_regs.c | 28 +++++++++++++++++++++++++++-
2 files changed, 28 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index a85d7ea646cd..c2ab4e1802cf 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -14,6 +14,7 @@
/* Hyp Configuration Register (HCR) bits */
#define HCR_ATA (UL(1) << 56)
#define HCR_FWB (UL(1) << 46)
+#define HCR_NV1 (UL(1) << 43)
#define HCR_NV (UL(1) << 42)
#define HCR_API (UL(1) << 41)
#define HCR_APK (UL(1) << 40)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 35bdfa0cb1ff..a577cdbacc17 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -292,6 +292,22 @@ static bool access_rw(struct kvm_vcpu *vcpu,
return true;
}
+/* This function supports recursive nested virtualization */
+static bool forward_nv1_traps(struct kvm_vcpu *vcpu, struct sys_reg_params *p)
+{
+ return forward_traps(vcpu, HCR_NV1);
+}
+
+static bool access_vbar_el1(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ if (forward_nv1_traps(vcpu, p))
+ return false;
+
+ return access_rw(vcpu, p, r);
+}
+
static bool access_sctlr_el2(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
@@ -1606,6 +1622,7 @@ static bool access_sp_el1(struct kvm_vcpu *vcpu,
return true;
}
+
static bool access_elr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
@@ -1613,6 +1630,9 @@ static bool access_elr(struct kvm_vcpu *vcpu,
if (el12_reg(p) && forward_nv_traps(vcpu))
return false;
+ if (!el12_reg(p) && forward_nv1_traps(vcpu, p))
+ return false;
+
if (p->is_write)
vcpu_write_sys_reg(vcpu, p->regval, ELR_EL1);
else
@@ -1628,6 +1648,9 @@ static bool access_spsr(struct kvm_vcpu *vcpu,
if (el12_reg(p) && forward_nv_traps(vcpu))
return false;
+ if (!el12_reg(p) && forward_nv1_traps(vcpu, p))
+ return false;
+
if (p->is_write)
__vcpu_sys_reg(vcpu, SPSR_EL1) = p->regval;
else
@@ -1643,6 +1666,9 @@ static bool access_spsr_el2(struct kvm_vcpu *vcpu,
if (el12_reg(p) && forward_nv_traps(vcpu))
return false;
+ if (!el12_reg(p) && forward_nv1_traps(vcpu, p))
+ return false;
+
if (p->is_write)
vcpu_write_sys_reg(vcpu, p->regval, SPSR_EL2);
else
@@ -1851,7 +1877,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_LORC_EL1), trap_loregion },
{ SYS_DESC(SYS_LORID_EL1), trap_loregion },
- { SYS_DESC(SYS_VBAR_EL1), access_rw, reset_val, VBAR_EL1, 0 },
+ { SYS_DESC(SYS_VBAR_EL1), access_vbar_el1, reset_val, VBAR_EL1, 0 },
{ SYS_DESC(SYS_DISR_EL1), NULL, reset_val, DISR_EL1, 0 },
{ SYS_DESC(SYS_ICC_IAR0_EL1), write_to_read_only },
--
2.29.2
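To keep the two forwarding checks above straight, this sketch names the
rule: EL12-encoded accesses are governed by the virtual NV bit, while
the EL1-encoded aliases this patch covers (ELR_EL1, SPSR_EL1, VBAR_EL1)
are governed by NV1. Bit positions are per the architecture and the
helper is illustrative only:

#include <stdbool.h>
#include <stdint.h>

#define HCR_NV	(1ULL << 42)
#define HCR_NV1	(1ULL << 43)

/* Pick the virtual HCR_EL2 control that decides whether this sysreg
 * access is reflected to the virtual EL2. */
static uint64_t forwarding_bit(bool el12_encoding)
{
	return el12_encoding ? HCR_NV : HCR_NV1;
}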
* [PATCH v4 27/66] KVM: arm64: nv: Emulate EL12 register accesses from the virtual EL2
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
From: Jintack Lim <jintack.lim@linaro.org>
With HCR_EL2.NV bit set, accesses to EL12 registers in the virtual EL2
trap to EL2. Handle those traps just like we do for EL1 registers.
One exception is CNTKCTL_EL12. We don't trap on CNTKCTL_EL1 for non-VHE
virtual EL2 because we don't have to. However, accessing CNTKCTL_EL12
will trap since it's one of the EL12 registers controlled by the
HCR_EL2.NV bit. Therefore, add a handler for it and don't treat it as a
non-trapping register when preparing a shadow context.
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index a577cdbacc17..a6ef065fe7f2 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2133,6 +2133,23 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_CNTVOFF_EL2), access_rw, reset_val, CNTVOFF_EL2, 0 },
{ SYS_DESC(SYS_CNTHCTL_EL2), access_rw, reset_val, CNTHCTL_EL2, 0 },
+ { SYS_DESC(SYS_SCTLR_EL12), access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 },
+ { SYS_DESC(SYS_CPACR_EL12), access_rw, reset_val, CPACR_EL1, 0 },
+ { SYS_DESC(SYS_TTBR0_EL12), access_vm_reg, reset_unknown, TTBR0_EL1 },
+ { SYS_DESC(SYS_TTBR1_EL12), access_vm_reg, reset_unknown, TTBR1_EL1 },
+ { SYS_DESC(SYS_TCR_EL12), access_vm_reg, reset_val, TCR_EL1, 0 },
+ { SYS_DESC(SYS_SPSR_EL12), access_spsr},
+ { SYS_DESC(SYS_ELR_EL12), access_elr},
+ { SYS_DESC(SYS_AFSR0_EL12), access_vm_reg, reset_unknown, AFSR0_EL1 },
+ { SYS_DESC(SYS_AFSR1_EL12), access_vm_reg, reset_unknown, AFSR1_EL1 },
+ { SYS_DESC(SYS_ESR_EL12), access_vm_reg, reset_unknown, ESR_EL1 },
+ { SYS_DESC(SYS_FAR_EL12), access_vm_reg, reset_unknown, FAR_EL1 },
+ { SYS_DESC(SYS_MAIR_EL12), access_vm_reg, reset_unknown, MAIR_EL1 },
+ { SYS_DESC(SYS_AMAIR_EL12), access_vm_reg, reset_amair_el1, AMAIR_EL1 },
+ { SYS_DESC(SYS_VBAR_EL12), access_rw, reset_val, VBAR_EL1, 0 },
+ { SYS_DESC(SYS_CONTEXTIDR_EL12), access_vm_reg, reset_val, CONTEXTIDR_EL1, 0 },
+ { SYS_DESC(SYS_CNTKCTL_EL12), access_rw, reset_val, CNTKCTL_EL1, 0 },
+
{ SYS_DESC(SYS_SP_EL2), NULL, reset_unknown, SP_EL2 },
};
--
2.29.2
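The reason the new descriptors can point straight at the EL1 shadow
registers is that *_EL12 encodings differ from their *_EL1 counterparts
only in Op1 (5 instead of 0). A sketch, assuming the field packing used
by the kernel's sys_reg() macro (Op0 at bit 19, Op1 at bit 16, CRn at
12, CRm at 8, Op2 at 5):

#include <stdbool.h>
#include <stdint.h>

#define OP1_SHIFT	16
#define OP1_MASK	(0x7u << OP1_SHIFT)

/* All *_EL12 registers carry Op1 == 5 (cf. el12_reg() above). */
static bool is_el12(uint32_t enc)
{
	return ((enc & OP1_MASK) >> OP1_SHIFT) == 5;
}

/* Rewriting Op1 from 5 to 0 yields the *_EL1 counterpart. */
static uint32_t el12_to_el1(uint32_t enc)
{
	return enc & ~OP1_MASK;
}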
* [PATCH v4 28/66] KVM: arm64: nv: Forward debug traps to the nested guest
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
On handling a debug trap, check whether we need to forward it to the
guest before handling it.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_nested.h | 2 ++
arch/arm64/kvm/emulate-nested.c | 9 +++++++--
arch/arm64/kvm/sys_regs.c | 3 +++
3 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 26cba7b4d743..07c15f51cf86 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -62,6 +62,8 @@ static inline u64 translate_cnthctl_el2_to_cntkctl_el1(u64 cnthctl)
}
int handle_wfx_nested(struct kvm_vcpu *vcpu, bool is_wfe);
+extern bool __forward_traps(struct kvm_vcpu *vcpu, unsigned int reg,
+ u64 control_bit);
extern bool forward_traps(struct kvm_vcpu *vcpu, u64 control_bit);
extern bool forward_nv_traps(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index feb9b5eded96..df4661515183 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -25,14 +25,14 @@
#include "trace.h"
-bool forward_traps(struct kvm_vcpu *vcpu, u64 control_bit)
+bool __forward_traps(struct kvm_vcpu *vcpu, unsigned int reg, u64 control_bit)
{
bool control_bit_set;
if (!nested_virt_in_use(vcpu))
return false;
- control_bit_set = __vcpu_sys_reg(vcpu, HCR_EL2) & control_bit;
+ control_bit_set = __vcpu_sys_reg(vcpu, reg) & control_bit;
if (!vcpu_mode_el2(vcpu) && control_bit_set) {
kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
return true;
@@ -40,6 +40,11 @@ bool forward_traps(struct kvm_vcpu *vcpu, u64 control_bit)
return false;
}
+bool forward_traps(struct kvm_vcpu *vcpu, u64 control_bit)
+{
+ return __forward_traps(vcpu, HCR_EL2, control_bit);
+}
+
bool forward_nv_traps(struct kvm_vcpu *vcpu)
{
return forward_traps(vcpu, HCR_NV);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index a6ef065fe7f2..2b8f3875faf2 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -607,6 +607,9 @@ static bool trap_debug_regs(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
+ if (__forward_traps(vcpu, MDCR_EL2, MDCR_EL2_TDA | MDCR_EL2_TDE))
+ return false;
+
access_rw(vcpu, p, r);
if (p->is_write)
vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
--
2.29.2
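With the trap-control register now a parameter, the same model covers
MDCR_EL2-controlled traps. A standalone sketch, with MDCR_EL2.TDE at
bit 8 and TDA at bit 9 per the architecture:

#include <stdbool.h>
#include <stdint.h>

#define MDCR_EL2_TDE	(1ULL << 8)
#define MDCR_EL2_TDA	(1ULL << 9)

/* A debug-register access from the nested guest is forwarded when the
 * vcpu is outside virtual EL2 and the guest hypervisor set TDA or TDE
 * in its (virtual) MDCR_EL2. */
static bool forward_debug_trap(bool nested, bool in_vel2, uint64_t vmdcr)
{
	return nested && !in_vel2 &&
	       (vmdcr & (MDCR_EL2_TDA | MDCR_EL2_TDE));
}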
* [PATCH v4 29/66] KVM: arm64: nv: Configure HCR_EL2 for nested virtualization
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
From: Jintack Lim <jintack.lim@linaro.org>
We enable nested virtualization by setting the HCR NV and NV1 bits.
When the virtual E2H bit is set, we can support EL2 register accesses
via EL1 registers from the virtual EL2 by doing trap-and-emulate. A
better alternative, however, is to allow the virtual EL2 to access the
EL2 register state without trapping. This is easily achieved by not
trapping EL1 registers, since those registers already hold the EL2
register state.
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/hyp/vhe/switch.c | 36 ++++++++++++++++++++++++++++++---
1 file changed, 33 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index a238f52955c5..79789850639b 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -37,9 +37,39 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
u64 hcr = vcpu->arch.hcr_el2;
u64 val;
- /* Trap VM sysreg accesses if an EL2 guest is not using VHE. */
- if (vcpu_mode_el2(vcpu) && !vcpu_el2_e2h_is_set(vcpu))
- hcr |= HCR_TVM | HCR_TRVM;
+ if (is_hyp_ctxt(vcpu)) {
+ hcr |= HCR_NV;
+
+ if (!vcpu_el2_e2h_is_set(vcpu)) {
+ /*
+ * For a guest hypervisor on v8.0, trap and emulate
+ * the EL1 virtual memory control register accesses.
+ */
+ hcr |= HCR_TVM | HCR_TRVM | HCR_NV1;
+ } else {
+ /*
+ * For a guest hypervisor on v8.1 (VHE), allow to
+ * access the EL1 virtual memory control registers
+ * natively. These accesses are to access EL2 register
+ * states.
+ * Note that we still need to respect the virtual
+ * HCR_EL2 state.
+ */
+ u64 vhcr_el2 = __vcpu_sys_reg(vcpu, HCR_EL2);
+
+ /*
+ * We already set TVM to handle set/way cache maint
+ * ops traps, this somewhat collides with the nested
+ * virt trapping for nVHE. So turn this off for now
+ * here, in the hope that VHE guests won't ever do this.
+ * TODO: find out whether it's worth supporting both
+ * cases at the same time.
+ */
+ hcr &= ~HCR_TVM;
+
+ hcr |= vhcr_el2 & (HCR_TVM | HCR_TRVM);
+ }
+ }
___activate_traps(vcpu, hcr);
--
2.29.2
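The E2H-dependent choices above condense into a small pure function; a
restatement sketch (bit values as in the architecture), handy for
eyeballing the cases:

#include <stdbool.h>
#include <stdint.h>

#define HCR_TVM		(1ULL << 26)
#define HCR_TRVM	(1ULL << 30)
#define HCR_NV		(1ULL << 42)
#define HCR_NV1		(1ULL << 43)

/* Restates __activate_traps()'s nested adjustments: an nVHE guest
 * hypervisor gets NV+NV1 plus full VM-reg trapping; a VHE one gets NV
 * and inherits TVM/TRVM from its own virtual HCR_EL2 (with the host's
 * TVM dropped, per the TODO in the patch). */
static uint64_t nv_adjust_hcr(uint64_t hcr, bool hyp_ctxt, bool e2h,
			      uint64_t vhcr_el2)
{
	if (!hyp_ctxt)
		return hcr;

	hcr |= HCR_NV;

	if (!e2h) {
		hcr |= HCR_TVM | HCR_TRVM | HCR_NV1;
	} else {
		hcr &= ~HCR_TVM;
		hcr |= vhcr_el2 & (HCR_TVM | HCR_TRVM);
	}

	return hcr;
}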
* [PATCH v4 30/66] KVM: arm64: nv: Only toggle cache for virtual EL2 when SCTLR_EL2 changes
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
From: Christoffer Dall <christoffer.dall@linaro.org>
So far we were flushing almost the entire universe whenever a VM would
load/unload the SCTLR_EL1 and the two versions of that register had
different MMU enabled settings. This turned out to be so slow that it
prevented forward progress for a nested VM, because a scheduler timer
tick interrupt would always be pending when we reached the nested VM.
To avoid this problem, we consider the SCTLR_EL2 when evaluating if
caches are on or off when entering virtual EL2 (because this is the
value that we end up shadowing onto the hardware EL1 register).
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_mmu.h | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 25ed956f9af1..0be00ec66e0f 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -115,6 +115,7 @@ alternative_cb_end
#include <asm/cache.h>
#include <asm/cacheflush.h>
#include <asm/mmu_context.h>
+#include <asm/kvm_emulate.h>
void kvm_update_va_mask(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst);
@@ -184,7 +185,10 @@ struct kvm;
static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
{
- return (vcpu_read_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
+ if (vcpu_mode_el2(vcpu))
+ return (__vcpu_sys_reg(vcpu, SCTLR_EL2) & 0b101) == 0b101;
+ else
+ return (vcpu_read_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
}
static inline void __clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size)
--
2.29.2
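For reference, the 0b101 mask above checks SCTLR_ELx.M (bit 0, MMU enable) together with SCTLR_ELx.C (bit 2, data cache enable); both must be set for the caches to be considered on. The same test written with named bits, as a standalone sketch:

/* SCTLR_ELx.M is bit 0 (MMU enable), SCTLR_ELx.C is bit 2 (D-cache enable) */
static inline bool sctlr_caches_enabled(u64 sctlr)
{
	const u64 mask = BIT(0) | BIT(2);	/* M and C */

	return (sctlr & mask) == mask;
}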
* [PATCH v4 31/66] KVM: arm64: nv: Filter out unsupported features from ID regs
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
As there are a number of features that we either can't support,
or don't want to support right away with NV, let's add some
basic filtering so that we don't advertise silly things to the
EL2 guest.
Whilst we are at it, advertise ARMv8.4-TTL as well as ARMv8.5-GTG.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_nested.h | 6 ++
arch/arm64/include/asm/sysreg.h | 4 +
arch/arm64/kvm/nested.c | 152 ++++++++++++++++++++++++++++
arch/arm64/kvm/sys_regs.c | 6 +-
arch/arm64/kvm/sys_regs.h | 2 +
5 files changed, 167 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 07c15f51cf86..026ddaad972c 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -67,4 +67,10 @@ extern bool __forward_traps(struct kvm_vcpu *vcpu, unsigned int reg,
extern bool forward_traps(struct kvm_vcpu *vcpu, u64 control_bit);
extern bool forward_nv_traps(struct kvm_vcpu *vcpu);
+struct sys_reg_params;
+struct sys_reg_desc;
+
+void access_nested_id_reg(struct kvm_vcpu *v, struct sys_reg_params *p,
+ const struct sys_reg_desc *r);
+
#endif /* __ARM64_KVM_NESTED_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 10c0d3a476e2..2704738d644a 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -828,6 +828,10 @@
#define ID_AA64PFR0_FP_SUPPORTED 0x0
#define ID_AA64PFR0_ASIMD_NI 0xf
#define ID_AA64PFR0_ASIMD_SUPPORTED 0x0
+#define ID_AA64PFR0_EL3_64BIT_ONLY 0x1
+#define ID_AA64PFR0_EL3_32BIT_64BIT 0x2
+#define ID_AA64PFR0_EL2_64BIT_ONLY 0x1
+#define ID_AA64PFR0_EL2_32BIT_64BIT 0x2
#define ID_AA64PFR0_EL1_64BIT_ONLY 0x1
#define ID_AA64PFR0_EL1_32BIT_64BIT 0x2
#define ID_AA64PFR0_EL0_64BIT_ONLY 0x1
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 42a96c8d2adc..99e1b97ae3ca 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -20,6 +20,10 @@
#include <linux/kvm_host.h>
#include <asm/kvm_emulate.h>
+#include <asm/kvm_nested.h>
+#include <asm/sysreg.h>
+
+#include "sys_regs.h"
/*
* Inject wfx to the virtual EL2 if this is not from the virtual EL2 and
@@ -38,3 +42,151 @@ int handle_wfx_nested(struct kvm_vcpu *vcpu, bool is_wfe)
return -EINVAL;
}
+
+/*
+ * Our emulated CPU doesn't support all the possible features. For the
+ * sake of simplicity (and probably mental sanity), wipe out a number
+ * of feature bits we don't intend to support for the time being.
+ * This list should get updated as new features get added to the NV
+ * support, and new extension to the architecture.
+ */
+void access_nested_id_reg(struct kvm_vcpu *v, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
+ (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
+ u64 val, tmp;
+
+ if (!nested_virt_in_use(v))
+ return;
+
+ val = p->regval;
+
+ switch (id) {
+ case SYS_ID_AA64ISAR0_EL1:
+ /* Support everything but O.S. and Range TLBIs */
+ val &= ~(FEATURE(ID_AA64ISAR0_TLB) |
+ GENMASK_ULL(27, 24) |
+ GENMASK_ULL(3, 0));
+ break;
+
+ case SYS_ID_AA64ISAR1_EL1:
+ /* Support everything but PtrAuth and Spec Invalidation */
+ val &= ~(GENMASK_ULL(63, 56) |
+ FEATURE(ID_AA64ISAR1_SPECRES) |
+ FEATURE(ID_AA64ISAR1_GPI) |
+ FEATURE(ID_AA64ISAR1_GPA) |
+ FEATURE(ID_AA64ISAR1_API) |
+ FEATURE(ID_AA64ISAR1_APA));
+ break;
+
+ case SYS_ID_AA64PFR0_EL1:
+ /* No AMU, MPAM, S-EL2, RAS or SVE */
+ val &= ~(GENMASK_ULL(55, 52) |
+ FEATURE(ID_AA64PFR0_AMU) |
+ FEATURE(ID_AA64PFR0_MPAM) |
+ FEATURE(ID_AA64PFR0_SEL2) |
+ FEATURE(ID_AA64PFR0_RAS) |
+ FEATURE(ID_AA64PFR0_SVE) |
+ FEATURE(ID_AA64PFR0_EL3) |
+ FEATURE(ID_AA64PFR0_EL2));
+ /* 64bit EL2/EL3 only */
+ val |= FIELD_PREP(FEATURE(ID_AA64PFR0_EL2), 0b0001);
+ val |= FIELD_PREP(FEATURE(ID_AA64PFR0_EL3), 0b0001);
+ break;
+
+ case SYS_ID_AA64PFR1_EL1:
+ /* Only support SSBS */
+ val &= FEATURE(ID_AA64PFR1_SSBS);
+ break;
+
+ case SYS_ID_AA64MMFR0_EL1:
+ /* Hide ECV, FGT, ExS, Secure Memory */
+ val &= ~(GENMASK_ULL(63, 43) |
+ FEATURE(ID_AA64MMFR0_TGRAN4_2) |
+ FEATURE(ID_AA64MMFR0_TGRAN16_2) |
+ FEATURE(ID_AA64MMFR0_TGRAN64_2) |
+ FEATURE(ID_AA64MMFR0_SNSMEM));
+
+ /* Disallow unsupported S2 page sizes */
+ switch (PAGE_SIZE) {
+ case SZ_64K:
+ val |= FIELD_PREP(FEATURE(ID_AA64MMFR0_TGRAN16_2), 0b0001);
+ /* Fall through */
+ case SZ_16K:
+ val |= FIELD_PREP(FEATURE(ID_AA64MMFR0_TGRAN4_2), 0b0001);
+ /* Fall through */
+ case SZ_4K:
+ /* Support everything */
+ break;
+ }
+ /* Advertize supported S2 page sizes */
+ switch (PAGE_SIZE) {
+ case SZ_4K:
+ val |= FIELD_PREP(FEATURE(ID_AA64MMFR0_TGRAN4_2), 0b0010);
+ /* Fall through */
+ case SZ_16K:
+ val |= FIELD_PREP(FEATURE(ID_AA64MMFR0_TGRAN16_2), 0b0010);
+ /* Fall through */
+ case SZ_64K:
+ val |= FIELD_PREP(FEATURE(ID_AA64MMFR0_TGRAN64_2), 0b0010);
+ break;
+ }
+ /* Cap PARange to 40bits */
+ tmp = FIELD_GET(FEATURE(ID_AA64MMFR0_PARANGE), val);
+ if (tmp > 0b0010) {
+ val &= ~FEATURE(ID_AA64MMFR0_PARANGE);
+ val |= FIELD_PREP(FEATURE(ID_AA64MMFR0_PARANGE), 0b0010);
+ }
+ break;
+
+ case SYS_ID_AA64MMFR1_EL1:
+ val &= (FEATURE(ID_AA64MMFR1_PAN) |
+ FEATURE(ID_AA64MMFR1_LOR) |
+ FEATURE(ID_AA64MMFR1_HPD) |
+ FEATURE(ID_AA64MMFR1_VHE) |
+ FEATURE(ID_AA64MMFR1_VMIDBITS));
+ break;
+
+ case SYS_ID_AA64MMFR2_EL1:
+ val &= ~(FEATURE(ID_AA64MMFR2_EVT) |
+ FEATURE(ID_AA64MMFR2_BBM) |
+ FEATURE(ID_AA64MMFR2_TTL) |
+ GENMASK_ULL(47, 44) |
+ FEATURE(ID_AA64MMFR2_ST) |
+ FEATURE(ID_AA64MMFR2_CCIDX) |
+ FEATURE(ID_AA64MMFR2_LVA));
+
+ /* Force TTL support */
+ val |= FIELD_PREP(FEATURE(ID_AA64MMFR2_TTL), 0b0001);
+ break;
+
+ case SYS_ID_AA64DFR0_EL1:
+ /* Only limited support for PMU, Debug, BPs and WPs */
+ val &= (FEATURE(ID_AA64DFR0_PMSVER) |
+ FEATURE(ID_AA64DFR0_WRPS) |
+ FEATURE(ID_AA64DFR0_BRPS) |
+ FEATURE(ID_AA64DFR0_DEBUGVER));
+
+ /* Cap PMU to ARMv8.1 */
+ tmp = FIELD_GET(FEATURE(ID_AA64DFR0_PMUVER), val);
+ if (tmp > 0b0100) {
+ val &= ~FEATURE(ID_AA64DFR0_PMUVER);
+ val |= FIELD_PREP(FEATURE(ID_AA64DFR0_PMUVER), 0b0100);
+ }
+ /* Cap Debug to ARMv8.1 */
+ tmp = FIELD_GET(FEATURE(ID_AA64DFR0_DEBUGVER), val);
+ if (tmp > 0b0111) {
+ val &= ~FEATURE(ID_AA64DFR0_DEBUGVER);
+ val |= FIELD_PREP(FEATURE(ID_AA64DFR0_DEBUGVER), 0b0111);
+ }
+ break;
+
+ default:
+ /* Unknown register, just wipe it clean */
+ val = 0;
+ break;
+ }
+
+ p->regval = val;
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2b8f3875faf2..6bd5e4084cee 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1304,8 +1304,6 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
return true;
}
-#define FEATURE(x) (GENMASK_ULL(x##_SHIFT + 3, x##_SHIFT))
-
/* Read a sanitised cpufeature ID register by sys_reg_desc */
static u64 read_id_reg(const struct kvm_vcpu *vcpu,
struct sys_reg_desc const *r, bool raz)
@@ -1389,8 +1387,10 @@ static bool access_id_reg(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *r)
{
bool raz = sysreg_visible_as_raz(vcpu, r);
+ bool ret = __access_id_reg(vcpu, p, r, raz);
- return __access_id_reg(vcpu, p, r, raz);
+ access_nested_id_reg(vcpu, p, r);
+ return ret;
}
static bool access_raz_id_reg(struct kvm_vcpu *vcpu,
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 9d0621417c2a..c6fbe3a7855e 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -170,4 +170,6 @@ const struct sys_reg_desc *find_reg_by_id(u64 id,
CRn(sys_reg_CRn(reg)), CRm(sys_reg_CRm(reg)), \
Op2(sys_reg_Op2(reg))
+#define FEATURE(x) (GENMASK_ULL(x##_SHIFT + 3, x##_SHIFT))
+
#endif /* __ARM64_KVM_SYS_REGS_LOCAL_H__ */
--
2.29.2
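The PARange, PMUVer and DebugVer cases above all follow the same capping idiom: read the 4-bit field, and rewrite it if it exceeds a ceiling. As a standalone sketch (id_field_cap() is a hypothetical helper; FEATURE(), FIELD_GET() and FIELD_PREP() behave as in the patch):

/* Cap a 4-bit ID register field at @limit, leaving smaller values alone */
static u64 id_field_cap(u64 val, u64 mask, u64 limit)
{
	if (FIELD_GET(mask, val) > limit) {
		val &= ~mask;
		val |= FIELD_PREP(mask, limit);
	}
	return val;
}

/* e.g. capping PARange to 40 bits, as done for ID_AA64MMFR0_EL1 above */
val = id_field_cap(val, FEATURE(ID_AA64MMFR0_PARANGE), 0b0010);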
* [PATCH v4 32/66] KVM: arm64: nv: Hide RAS from nested guests
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
We don't want to expose complicated features to guests until we have
a good grasp on the basic CPU emulation. So let's pretend that RAS
doesn't exist in a nested guest. We already hide the feature bits;
let's now make sure accesses to VDISR_EL2 will UNDEF.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 6bd5e4084cee..7353d5eaeaca 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2129,6 +2129,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_VBAR_EL2), access_rw, reset_val, VBAR_EL2, 0 },
{ SYS_DESC(SYS_RVBAR_EL2), access_rw, reset_val, RVBAR_EL2, 0 },
{ SYS_DESC(SYS_RMR_EL2), trap_undef },
+ { SYS_DESC(SYS_VDISR_EL2), trap_undef },
{ SYS_DESC(SYS_CONTEXTIDR_EL2), access_rw, reset_val, CONTEXTIDR_EL2, 0 },
{ SYS_DESC(SYS_TPIDR_EL2), access_rw, reset_val, TPIDR_EL2, 0 },
--
2.29.2
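trap_undef is the series' existing catch-all for registers we refuse to emulate: any guest access is turned into an Undefined Instruction exception. Roughly, its shape is (a simplified sketch of the sys_regs.c handler):

static bool trap_undef(struct kvm_vcpu *vcpu,
		       struct sys_reg_params *p,
		       const struct sys_reg_desc *r)
{
	/* Refuse the access altogether: inject an UNDEF into the guest */
	kvm_inject_undefined(vcpu);
	return false;
}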
* [PATCH v4 33/66] KVM: arm64: nv: Support multiple nested Stage-2 mmu structures
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
Add Stage-2 mmu data structures for virtual EL2 and for nested guests.
We don't yet populate shadow Stage-2 page tables, but we now have a
framework for getting to a shadow Stage-2 pgd.
We allocate twice as many Stage-2 mmu structures as vcpus, which is
sufficient for each vcpu to run two translation regimes without having
to flush the Stage-2 page tables.
Co-developed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 29 +++++
arch/arm64/include/asm/kvm_mmu.h | 9 ++
arch/arm64/include/asm/kvm_nested.h | 7 ++
arch/arm64/kvm/arm.c | 16 ++-
arch/arm64/kvm/mmu.c | 31 +++--
arch/arm64/kvm/nested.c | 183 ++++++++++++++++++++++++++++
6 files changed, 264 insertions(+), 11 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 1d82ad3c63b8..b7d6a829091c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -95,14 +95,43 @@ struct kvm_s2_mmu {
int __percpu *last_vcpu_ran;
struct kvm_arch *arch;
+
+ /*
+ * For a shadow stage-2 MMU, the virtual vttbr programmed by the guest
+ * hypervisor. Unused for kvm_arch->mmu. Set to 1 when the structure
+ * contains no valid information.
+ */
+ u64 vttbr;
+
+ /* true when this represents a nested context where virtual HCR_EL2.VM == 1 */
+ bool nested_stage2_enabled;
+
+ /*
+ * 0: Nobody is currently using this, check vttbr for validity
+ * >0: Somebody is actively using this.
+ */
+ atomic_t refcnt;
};
+static inline bool kvm_s2_mmu_valid(struct kvm_s2_mmu *mmu)
+{
+ return !(mmu->vttbr & 1);
+}
+
struct kvm_arch_memory_slot {
};
struct kvm_arch {
struct kvm_s2_mmu mmu;
+ /*
+ * Stage 2 paging state for VMs with nested virtualization, using a
+ * VMID.
+ */
+ struct kvm_s2_mmu *nested_mmus;
+ size_t nested_mmus_size;
+ int nested_mmus_next;
+
/* VTCR_EL2 value for this VM */
u64 vtcr;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 0be00ec66e0f..579980a8b05f 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -116,6 +116,7 @@ alternative_cb_end
#include <asm/cacheflush.h>
#include <asm/mmu_context.h>
#include <asm/kvm_emulate.h>
+#include <asm/kvm_nested.h>
void kvm_update_va_mask(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst);
@@ -159,6 +160,7 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size,
void **haddr);
void free_hyp_pgds(void);
+void kvm_unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size);
void stage2_unmap_vm(struct kvm *kvm);
int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu);
void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu);
@@ -298,5 +300,12 @@ static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
{
return container_of(mmu->arch, struct kvm, arch);
}
+
+static inline u64 get_vmid(u64 vttbr)
+{
+ return (vttbr & VTTBR_VMID_MASK(kvm_get_vmid_bits())) >>
+ VTTBR_VMID_SHIFT;
+}
+
#endif /* __ASSEMBLY__ */
#endif /* __ARM64_KVM_MMU_H__ */
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 026ddaad972c..473ecd1d60d0 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -61,6 +61,13 @@ static inline u64 translate_cnthctl_el2_to_cntkctl_el1(u64 cnthctl)
(cnthctl & (CNTHCTL_EVNTI | CNTHCTL_EVNTDIR | CNTHCTL_EVNTEN)));
}
+extern void kvm_init_nested(struct kvm *kvm);
+extern int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu);
+extern void kvm_init_nested_s2_mmu(struct kvm_s2_mmu *mmu);
+extern struct kvm_s2_mmu *lookup_s2_mmu(struct kvm *kvm, u64 vttbr, u64 hcr);
+extern void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu);
+extern void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu);
+
int handle_wfx_nested(struct kvm_vcpu *vcpu, bool is_wfe);
extern bool __forward_traps(struct kvm_vcpu *vcpu, unsigned int reg,
u64 control_bit);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 1cb39c0803a4..8cadfaa2a310 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -35,6 +35,7 @@
#include <asm/kvm_arm.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_mmu.h>
+#include <asm/kvm_nested.h>
#include <asm/kvm_emulate.h>
#include <asm/sections.h>
@@ -138,6 +139,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
if (ret)
return ret;
+ kvm_init_nested(kvm);
+
ret = create_hyp_mappings(kvm, kvm + 1, PAGE_HYP);
if (ret)
goto out_free_stage2_pgd;
@@ -384,6 +387,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
struct kvm_s2_mmu *mmu;
int *last_ran;
+ if (nested_virt_in_use(vcpu))
+ kvm_vcpu_load_hw_mmu(vcpu);
+
mmu = vcpu->arch.hw_mmu;
last_ran = this_cpu_ptr(mmu->last_vcpu_ran);
@@ -432,6 +438,9 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
kvm_vgic_put(vcpu);
kvm_vcpu_pmu_restore_host(vcpu);
+ if (nested_virt_in_use(vcpu))
+ kvm_vcpu_put_hw_mmu(vcpu);
+
vcpu->cpu = -1;
}
@@ -1032,8 +1041,13 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
vcpu->arch.target = phys_target;
+ /* Prepare for nested if required */
+ ret = kvm_vcpu_init_nested(vcpu);
+
/* Now we know what it is, we can reset it. */
- ret = kvm_reset_vcpu(vcpu);
+ if (!ret)
+ ret = kvm_reset_vcpu(vcpu);
+
if (ret) {
vcpu->arch.target = -1;
bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c5d1f3c87dbd..5c1a9966ff31 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -151,7 +151,7 @@ static void *kvm_host_va(phys_addr_t phys)
* does.
*/
/**
- * unmap_stage2_range -- Clear stage2 page table entries to unmap a range
+ * kvm_unmap_stage2_range -- Clear stage2 page table entries to unmap a range
* @mmu: The KVM stage-2 MMU pointer
* @start: The intermediate physical base address of the range to unmap
* @size: The size of the area to unmap
@@ -174,7 +174,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
may_block));
}
-static void unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
+void kvm_unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
{
__unmap_stage2_range(mmu, start, size, true);
}
@@ -448,7 +448,20 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
int cpu, err;
struct kvm_pgtable *pgt;
+ /*
+ * If we already have our page tables in place, and that the
+ * MMU context is the canonical one, we have a bug somewhere,
+ * as this is only supposed to ever happen once per VM.
+ *
+ * Otherwise, we're building nested page tables, and that's
+ * probably because userspace called KVM_ARM_VCPU_INIT more
+ * than once on the same vcpu. Since that's actually legal,
+ * don't kick a fuss and leave gracefully.
+ */
if (mmu->pgt != NULL) {
+ if (&kvm->arch.mmu != mmu)
+ return 0;
+
kvm_err("kvm_arch already initialized?\n");
return -EINVAL;
}
@@ -474,6 +487,9 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
mmu->pgt = pgt;
mmu->pgd_phys = __pa(pgt->pgd);
mmu->vmid.vmid_gen = 0;
+
+ kvm_init_nested_s2_mmu(mmu);
+
return 0;
out_destroy_pgtable:
@@ -519,7 +535,7 @@ static void stage2_unmap_memslot(struct kvm *kvm,
if (!(vma->vm_flags & VM_PFNMAP)) {
gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
- unmap_stage2_range(&kvm->arch.mmu, gpa, vm_end - vm_start);
+ kvm_unmap_stage2_range(&kvm->arch.mmu, gpa, vm_end - vm_start);
}
hva = vm_end;
} while (hva < reg_end);
@@ -1415,7 +1431,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
spin_lock(&kvm->mmu_lock);
if (ret)
- unmap_stage2_range(&kvm->arch.mmu, mem->guest_phys_addr, mem->memory_size);
+ kvm_unmap_stage2_range(&kvm->arch.mmu, mem->guest_phys_addr, mem->memory_size);
else if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
stage2_flush_memslot(kvm, memslot);
spin_unlock(&kvm->mmu_lock);
@@ -1432,11 +1448,6 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
{
}
-void kvm_arch_flush_shadow_all(struct kvm *kvm)
-{
- kvm_free_stage2_pgd(&kvm->arch.mmu);
-}
-
void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
struct kvm_memory_slot *slot)
{
@@ -1444,7 +1455,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
phys_addr_t size = slot->npages << PAGE_SHIFT;
spin_lock(&kvm->mmu_lock);
- unmap_stage2_range(&kvm->arch.mmu, gpa, size);
+ kvm_unmap_stage2_range(&kvm->arch.mmu, gpa, size);
spin_unlock(&kvm->mmu_lock);
}
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 99e1b97ae3ca..c33cc29756fa 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -19,12 +19,177 @@
#include <linux/kvm.h>
#include <linux/kvm_host.h>
+#include <asm/kvm_arm.h>
#include <asm/kvm_emulate.h>
+#include <asm/kvm_mmu.h>
#include <asm/kvm_nested.h>
#include <asm/sysreg.h>
#include "sys_regs.h"
+void kvm_init_nested(struct kvm *kvm)
+{
+ kvm->arch.nested_mmus = NULL;
+ kvm->arch.nested_mmus_size = 0;
+}
+
+int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = vcpu->kvm;
+ struct kvm_s2_mmu *tmp;
+ int num_mmus;
+ int ret = -ENOMEM;
+
+ if (!test_bit(KVM_ARM_VCPU_HAS_EL2, vcpu->arch.features))
+ return 0;
+
+ if (!cpus_have_final_cap(ARM64_HAS_NESTED_VIRT))
+ return -EINVAL;
+
+ mutex_lock(&kvm->lock);
+
+ /*
+ * Let's treat memory allocation failures as benign: If we fail to
+ * allocate anything, return an error and keep the allocated array
+ * alive. Userspace may try to recover by initializing the vcpu
+ * again, and there is no reason to affect the whole VM for this.
+ */
+ num_mmus = atomic_read(&kvm->online_vcpus) * 2;
+ tmp = krealloc(kvm->arch.nested_mmus,
+ num_mmus * sizeof(*kvm->arch.nested_mmus),
+ GFP_KERNEL | __GFP_ZERO);
+ if (tmp) {
+ if (kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 1]) ||
+ kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 2])) {
+ kvm_free_stage2_pgd(&tmp[num_mmus - 1]);
+ kvm_free_stage2_pgd(&tmp[num_mmus - 2]);
+ } else {
+ kvm->arch.nested_mmus_size = num_mmus;
+ ret = 0;
+ }
+
+ kvm->arch.nested_mmus = tmp;
+ }
+
+ mutex_unlock(&kvm->lock);
+ return ret;
+}
+
+/* Must be called with kvm->lock held */
+struct kvm_s2_mmu *lookup_s2_mmu(struct kvm *kvm, u64 vttbr, u64 hcr)
+{
+ bool nested_stage2_enabled = hcr & HCR_VM;
+ int i;
+
+ /* Don't consider the CnP bit for the vttbr match */
+ vttbr = vttbr & ~VTTBR_CNP_BIT;
+
+ /*
+ * Two possibilities when looking up a S2 MMU context:
+ *
+ * - either S2 is enabled in the guest, and we need a context that
+ * is S2-enabled and matches the full VTTBR (VMID+BADDR), which
+ * makes it safe from a TLB conflict perspective (a broken guest
+ * won't be able to generate them),
+ *
+ * - or S2 is disabled, and we need a context that is S2-disabled
+ * and matches the VMID only, as all TLBs are tagged by VMID even
+ * if S2 translation is enabled.
+ */
+ for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ if (!kvm_s2_mmu_valid(mmu))
+ continue;
+
+ if (nested_stage2_enabled &&
+ mmu->nested_stage2_enabled &&
+ vttbr == mmu->vttbr)
+ return mmu;
+
+ if (!nested_stage2_enabled &&
+ !mmu->nested_stage2_enabled &&
+ get_vmid(vttbr) == get_vmid(mmu->vttbr))
+ return mmu;
+ }
+ return NULL;
+}
+
+static struct kvm_s2_mmu *get_s2_mmu_nested(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = vcpu->kvm;
+ u64 vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
+ u64 hcr = vcpu_read_sys_reg(vcpu, HCR_EL2);
+ struct kvm_s2_mmu *s2_mmu;
+ int i;
+
+ s2_mmu = lookup_s2_mmu(kvm, vttbr, hcr);
+ if (s2_mmu)
+ goto out;
+
+ /*
+ * Make sure we don't always search from the same point, or we
+ * will always reuse a potentially active context, leaving
+ * free contexts unused.
+ */
+ for (i = kvm->arch.nested_mmus_next;
+ i < (kvm->arch.nested_mmus_size + kvm->arch.nested_mmus_next);
+ i++) {
+ s2_mmu = &kvm->arch.nested_mmus[i % kvm->arch.nested_mmus_size];
+
+ if (atomic_read(&s2_mmu->refcnt) == 0)
+ break;
+ }
+ BUG_ON(atomic_read(&s2_mmu->refcnt)); /* We have struct MMUs to spare */
+
+ /* Set the scene for the next search */
+ kvm->arch.nested_mmus_next = (i + 1) % kvm->arch.nested_mmus_size;
+
+ if (kvm_s2_mmu_valid(s2_mmu)) {
+ /* Clear the old state */
+ kvm_unmap_stage2_range(s2_mmu, 0, kvm_phys_size(kvm));
+ if (s2_mmu->vmid.vmid_gen)
+ kvm_call_hyp(__kvm_tlb_flush_vmid, s2_mmu);
+ }
+
+ /*
+ * The virtual VMID (modulo CnP) will be used as a key when matching
+ * an existing kvm_s2_mmu.
+ */
+ s2_mmu->vttbr = vttbr & ~VTTBR_CNP_BIT;
+ s2_mmu->nested_stage2_enabled = hcr & HCR_VM;
+
+out:
+ atomic_inc(&s2_mmu->refcnt);
+ return s2_mmu;
+}
+
+void kvm_init_nested_s2_mmu(struct kvm_s2_mmu *mmu)
+{
+ mmu->vttbr = 1;
+ mmu->nested_stage2_enabled = false;
+ atomic_set(&mmu->refcnt, 0);
+}
+
+void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu)
+{
+ if (is_hyp_ctxt(vcpu)) {
+ vcpu->arch.hw_mmu = &vcpu->kvm->arch.mmu;
+ } else {
+ spin_lock(&vcpu->kvm->mmu_lock);
+ vcpu->arch.hw_mmu = get_s2_mmu_nested(vcpu);
+ spin_unlock(&vcpu->kvm->mmu_lock);
+ }
+}
+
+void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu)
+{
+ if (vcpu->arch.hw_mmu != &vcpu->kvm->arch.mmu) {
+ atomic_dec(&vcpu->arch.hw_mmu->refcnt);
+ vcpu->arch.hw_mmu = NULL;
+ }
+}
+
/*
* Inject wfx to the virtual EL2 if this is not from the virtual EL2 and
* the virtual HCR_EL2.TWX is set. Otherwise, let the host hypervisor
@@ -43,6 +208,24 @@ int handle_wfx_nested(struct kvm_vcpu *vcpu, bool is_wfe)
return -EINVAL;
}
+void kvm_arch_flush_shadow_all(struct kvm *kvm)
+{
+ int i;
+
+ for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ WARN_ON(atomic_read(&mmu->refcnt));
+
+ if (!atomic_read(&mmu->refcnt))
+ kvm_free_stage2_pgd(mmu);
+ }
+ kfree(kvm->arch.nested_mmus);
+ kvm->arch.nested_mmus = NULL;
+ kvm->arch.nested_mmus_size = 0;
+ kvm_free_stage2_pgd(&kvm->arch.mmu);
+}
+
/*
* Our emulated CPU doesn't support all the possible features. For the
* sake of simplicity (and probably mental sanity), wipe out a number
--
2.29.2
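The VTTBR_EL2 value programmed by the guest hypervisor doubles as the lookup key here: the BADDR lives in the low bits and the VMID in bits [63:48]. A condensed sketch of the two matching rules implemented by lookup_s2_mmu() (s2_mmu_match() is hypothetical, and a 16-bit VMID is assumed for simplicity):

static bool s2_mmu_match(const struct kvm_s2_mmu *mmu, u64 vttbr, bool s2_on)
{
	/* VMID lives in VTTBR_EL2[63:48], assuming 16-bit VMIDs */
	const u64 vmid_mask = GENMASK_ULL(63, 48);

	if (s2_on)	/* S2 enabled: the full VMID+BADDR must match */
		return mmu->nested_stage2_enabled && mmu->vttbr == vttbr;

	/* S2 disabled: TLBs are tagged by VMID alone, so match the VMID */
	return !mmu->nested_stage2_enabled &&
	       ((mmu->vttbr ^ vttbr) & vmid_mask) == 0;
}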
vcpu->arch.target = phys_target;
+ /* Prepare for nested if required */
+ ret = kvm_vcpu_init_nested(vcpu);
+
/* Now we know what it is, we can reset it. */
- ret = kvm_reset_vcpu(vcpu);
+ if (!ret)
+ ret = kvm_reset_vcpu(vcpu);
+
if (ret) {
vcpu->arch.target = -1;
bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c5d1f3c87dbd..5c1a9966ff31 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -151,7 +151,7 @@ static void *kvm_host_va(phys_addr_t phys)
* does.
*/
/**
- * unmap_stage2_range -- Clear stage2 page table entries to unmap a range
+ * kvm_unmap_stage2_range -- Clear stage2 page table entries to unmap a range
* @mmu: The KVM stage-2 MMU pointer
* @start: The intermediate physical base address of the range to unmap
* @size: The size of the area to unmap
@@ -174,7 +174,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
may_block));
}
-static void unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
+void kvm_unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
{
__unmap_stage2_range(mmu, start, size, true);
}
@@ -448,7 +448,20 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
int cpu, err;
struct kvm_pgtable *pgt;
+ /*
+ * If we already have our page tables in place, and that the
+ * MMU context is the canonical one, we have a bug somewhere,
+ * as this is only supposed to ever happen once per VM.
+ *
+ * Otherwise, we're building nested page tables, and that's
+ * probably because userspace called KVM_ARM_VCPU_INIT more
+ * than once on the same vcpu. Since that's actually legal,
+ * don't kick a fuss and leave gracefully.
+ */
if (mmu->pgt != NULL) {
+ if (&kvm->arch.mmu != mmu)
+ return 0;
+
kvm_err("kvm_arch already initialized?\n");
return -EINVAL;
}
@@ -474,6 +487,9 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
mmu->pgt = pgt;
mmu->pgd_phys = __pa(pgt->pgd);
mmu->vmid.vmid_gen = 0;
+
+ kvm_init_nested_s2_mmu(mmu);
+
return 0;
out_destroy_pgtable:
@@ -519,7 +535,7 @@ static void stage2_unmap_memslot(struct kvm *kvm,
if (!(vma->vm_flags & VM_PFNMAP)) {
gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
- unmap_stage2_range(&kvm->arch.mmu, gpa, vm_end - vm_start);
+ kvm_unmap_stage2_range(&kvm->arch.mmu, gpa, vm_end - vm_start);
}
hva = vm_end;
} while (hva < reg_end);
@@ -1415,7 +1431,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
spin_lock(&kvm->mmu_lock);
if (ret)
- unmap_stage2_range(&kvm->arch.mmu, mem->guest_phys_addr, mem->memory_size);
+ kvm_unmap_stage2_range(&kvm->arch.mmu, mem->guest_phys_addr, mem->memory_size);
else if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
stage2_flush_memslot(kvm, memslot);
spin_unlock(&kvm->mmu_lock);
@@ -1432,11 +1448,6 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
{
}
-void kvm_arch_flush_shadow_all(struct kvm *kvm)
-{
- kvm_free_stage2_pgd(&kvm->arch.mmu);
-}
-
void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
struct kvm_memory_slot *slot)
{
@@ -1444,7 +1455,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
phys_addr_t size = slot->npages << PAGE_SHIFT;
spin_lock(&kvm->mmu_lock);
- unmap_stage2_range(&kvm->arch.mmu, gpa, size);
+ kvm_unmap_stage2_range(&kvm->arch.mmu, gpa, size);
spin_unlock(&kvm->mmu_lock);
}
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 99e1b97ae3ca..c33cc29756fa 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -19,12 +19,177 @@
#include <linux/kvm.h>
#include <linux/kvm_host.h>
+#include <asm/kvm_arm.h>
#include <asm/kvm_emulate.h>
+#include <asm/kvm_mmu.h>
#include <asm/kvm_nested.h>
#include <asm/sysreg.h>
#include "sys_regs.h"
+void kvm_init_nested(struct kvm *kvm)
+{
+ kvm->arch.nested_mmus = NULL;
+ kvm->arch.nested_mmus_size = 0;
+}
+
+int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = vcpu->kvm;
+ struct kvm_s2_mmu *tmp;
+ int num_mmus;
+ int ret = -ENOMEM;
+
+ if (!test_bit(KVM_ARM_VCPU_HAS_EL2, vcpu->arch.features))
+ return 0;
+
+ if (!cpus_have_final_cap(ARM64_HAS_NESTED_VIRT))
+ return -EINVAL;
+
+ mutex_lock(&kvm->lock);
+
+ /*
+ * Let's treat memory allocation failures as benign: If we fail to
+ * allocate anything, return an error and keep the allocated array
+ * alive. Userspace may try to recover by intializing the vcpu
+ * again, and there is no reason to affect the whole VM for this.
+ */
+ num_mmus = atomic_read(&kvm->online_vcpus) * 2;
+ tmp = krealloc(kvm->arch.nested_mmus,
+ num_mmus * sizeof(*kvm->arch.nested_mmus),
+ GFP_KERNEL | __GFP_ZERO);
+ if (tmp) {
+ if (kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 1]) ||
+ kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 2])) {
+ kvm_free_stage2_pgd(&tmp[num_mmus - 1]);
+ kvm_free_stage2_pgd(&tmp[num_mmus - 2]);
+ } else {
+ kvm->arch.nested_mmus_size = num_mmus;
+ ret = 0;
+ }
+
+ kvm->arch.nested_mmus = tmp;
+ }
+
+ mutex_unlock(&kvm->lock);
+ return ret;
+}
+
+/* Must be called with kvm->lock held */
+struct kvm_s2_mmu *lookup_s2_mmu(struct kvm *kvm, u64 vttbr, u64 hcr)
+{
+ bool nested_stage2_enabled = hcr & HCR_VM;
+ int i;
+
+ /* Don't consider the CnP bit for the vttbr match */
+ vttbr = vttbr & ~VTTBR_CNP_BIT;
+
+ /*
+ * Two possibilities when looking up a S2 MMU context:
+ *
+ * - either S2 is enabled in the guest, and we need a context that
+ * is S2-enabled and matches the full VTTBR (VMID+BADDR), which
+ * makes it safe from a TLB conflict perspective (a broken guest
+ * won't be able to generate them),
+ *
+ * - or S2 is disabled, and we need a context that is S2-disabled
+ * and matches the VMID only, as all TLBs are tagged by VMID even
+ * if S2 translation is enabled.
+ */
+ for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ if (!kvm_s2_mmu_valid(mmu))
+ continue;
+
+ if (nested_stage2_enabled &&
+ mmu->nested_stage2_enabled &&
+ vttbr == mmu->vttbr)
+ return mmu;
+
+ if (!nested_stage2_enabled &&
+ !mmu->nested_stage2_enabled &&
+ get_vmid(vttbr) == get_vmid(mmu->vttbr))
+ return mmu;
+ }
+ return NULL;
+}
+
+static struct kvm_s2_mmu *get_s2_mmu_nested(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = vcpu->kvm;
+ u64 vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
+ u64 hcr= vcpu_read_sys_reg(vcpu, HCR_EL2);
+ struct kvm_s2_mmu *s2_mmu;
+ int i;
+
+ s2_mmu = lookup_s2_mmu(kvm, vttbr, hcr);
+ if (s2_mmu)
+ goto out;
+
+ /*
+ * Make sure we don't always search from the same point, or we
+ * will always reuse a potentially active context, leaving
+ * free contexts unused.
+ */
+ for (i = kvm->arch.nested_mmus_next;
+ i < (kvm->arch.nested_mmus_size + kvm->arch.nested_mmus_next);
+ i++) {
+ s2_mmu = &kvm->arch.nested_mmus[i % kvm->arch.nested_mmus_size];
+
+ if (atomic_read(&s2_mmu->refcnt) == 0)
+ break;
+ }
+ BUG_ON(atomic_read(&s2_mmu->refcnt)); /* We have struct MMUs to spare */
+
+ /* Set the scene for the next search */
+ kvm->arch.nested_mmus_next = (i + 1) % kvm->arch.nested_mmus_size;
+
+ if (kvm_s2_mmu_valid(s2_mmu)) {
+ /* Clear the old state */
+ kvm_unmap_stage2_range(s2_mmu, 0, kvm_phys_size(kvm));
+ if (s2_mmu->vmid.vmid_gen)
+ kvm_call_hyp(__kvm_tlb_flush_vmid, s2_mmu);
+ }
+
+ /*
+ * The virtual VMID (modulo CnP) will be used as a key when matching
+ * an existing kvm_s2_mmu.
+ */
+ s2_mmu->vttbr = vttbr & ~VTTBR_CNP_BIT;
+ s2_mmu->nested_stage2_enabled = hcr & HCR_VM;
+
+out:
+ atomic_inc(&s2_mmu->refcnt);
+ return s2_mmu;
+}
+
+void kvm_init_nested_s2_mmu(struct kvm_s2_mmu *mmu)
+{
+ mmu->vttbr = 1;
+ mmu->nested_stage2_enabled = false;
+ atomic_set(&mmu->refcnt, 0);
+}
+
+void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu)
+{
+ if (is_hyp_ctxt(vcpu)) {
+ vcpu->arch.hw_mmu = &vcpu->kvm->arch.mmu;
+ } else {
+ spin_lock(&vcpu->kvm->mmu_lock);
+ vcpu->arch.hw_mmu = get_s2_mmu_nested(vcpu);
+ spin_unlock(&vcpu->kvm->mmu_lock);
+ }
+}
+
+void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu)
+{
+ if (vcpu->arch.hw_mmu != &vcpu->kvm->arch.mmu) {
+ atomic_dec(&vcpu->arch.hw_mmu->refcnt);
+ vcpu->arch.hw_mmu = NULL;
+ }
+}
+
/*
* Inject wfx to the virtual EL2 if this is not from the virtual EL2 and
* the virtual HCR_EL2.TWX is set. Otherwise, let the host hypervisor
@@ -43,6 +208,24 @@ int handle_wfx_nested(struct kvm_vcpu *vcpu, bool is_wfe)
return -EINVAL;
}
+void kvm_arch_flush_shadow_all(struct kvm *kvm)
+{
+ int i;
+
+ for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ WARN_ON(atomic_read(&mmu->refcnt));
+
+ if (!atomic_read(&mmu->refcnt))
+ kvm_free_stage2_pgd(mmu);
+ }
+ kfree(kvm->arch.nested_mmus);
+ kvm->arch.nested_mmus = NULL;
+ kvm->arch.nested_mmus_size = 0;
+ kvm_free_stage2_pgd(&kvm->arch.mmu);
+}
+
/*
* Our emulated CPU doesn't support all the possible features. For the
* sake of simplicity (and probably mental sanity), wipe out a number
--
2.29.2
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply related [flat|nested] 229+ messages in thread
* [PATCH v4 33/66] KVM: arm64: nv: Support multiple nested Stage-2 mmu structures
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm; +Cc: kernel-team, Andre Przywara
Add Stage-2 mmu data structures for virtual EL2 and for nested guests.
We don't yet populate shadow Stage-2 page tables, but we now have a
framework for getting to a shadow Stage-2 pgd.
We allocate twice the number of vcpus as Stage-2 mmu structures because
that's sufficient for each vcpu running two translation regimes without
having to flush the Stage-2 page tables.
Co-developed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 29 +++++
arch/arm64/include/asm/kvm_mmu.h | 9 ++
arch/arm64/include/asm/kvm_nested.h | 7 ++
arch/arm64/kvm/arm.c | 16 ++-
arch/arm64/kvm/mmu.c | 31 +++--
arch/arm64/kvm/nested.c | 183 ++++++++++++++++++++++++++++
6 files changed, 264 insertions(+), 11 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 1d82ad3c63b8..b7d6a829091c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -95,14 +95,43 @@ struct kvm_s2_mmu {
int __percpu *last_vcpu_ran;
struct kvm_arch *arch;
+
+ /*
+ * For a shadow stage-2 MMU, the virtual vttbr programmed by the guest
+ * hypervisor. Unused for kvm_arch->mmu. Set to 1 when the structure
+ * contains no valid information.
+ */
+ u64 vttbr;
+
+ /* true when this represents a nested context where virtual HCR_EL2.VM == 1 */
+ bool nested_stage2_enabled;
+
+ /*
+ * 0: Nobody is currently using this, check vttbr for validity
+ * >0: Somebody is actively using this.
+ */
+ atomic_t refcnt;
};
+static inline bool kvm_s2_mmu_valid(struct kvm_s2_mmu *mmu)
+{
+ return !(mmu->vttbr & 1);
+}
+
struct kvm_arch_memory_slot {
};
struct kvm_arch {
struct kvm_s2_mmu mmu;
+ /*
+ * Stage 2 paging stage for VMs with nested virtual using a virtual
+ * VMID.
+ */
+ struct kvm_s2_mmu *nested_mmus;
+ size_t nested_mmus_size;
+ int nested_mmus_next;
+
/* VTCR_EL2 value for this VM */
u64 vtcr;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 0be00ec66e0f..579980a8b05f 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -116,6 +116,7 @@ alternative_cb_end
#include <asm/cacheflush.h>
#include <asm/mmu_context.h>
#include <asm/kvm_emulate.h>
+#include <asm/kvm_nested.h>
void kvm_update_va_mask(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst);
@@ -159,6 +160,7 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size,
void **haddr);
void free_hyp_pgds(void);
+void kvm_unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size);
void stage2_unmap_vm(struct kvm *kvm);
int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu);
void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu);
@@ -298,5 +300,12 @@ static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
{
return container_of(mmu->arch, struct kvm, arch);
}
+
+static inline u64 get_vmid(u64 vttbr)
+{
+ return (vttbr & VTTBR_VMID_MASK(kvm_get_vmid_bits())) >>
+ VTTBR_VMID_SHIFT;
+}
+
#endif /* __ASSEMBLY__ */
#endif /* __ARM64_KVM_MMU_H__ */
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 026ddaad972c..473ecd1d60d0 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -61,6 +61,13 @@ static inline u64 translate_cnthctl_el2_to_cntkctl_el1(u64 cnthctl)
(cnthctl & (CNTHCTL_EVNTI | CNTHCTL_EVNTDIR | CNTHCTL_EVNTEN)));
}
+extern void kvm_init_nested(struct kvm *kvm);
+extern int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu);
+extern void kvm_init_nested_s2_mmu(struct kvm_s2_mmu *mmu);
+extern struct kvm_s2_mmu *lookup_s2_mmu(struct kvm *kvm, u64 vttbr, u64 hcr);
+extern void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu);
+extern void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu);
+
int handle_wfx_nested(struct kvm_vcpu *vcpu, bool is_wfe);
extern bool __forward_traps(struct kvm_vcpu *vcpu, unsigned int reg,
u64 control_bit);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 1cb39c0803a4..8cadfaa2a310 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -35,6 +35,7 @@
#include <asm/kvm_arm.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_mmu.h>
+#include <asm/kvm_nested.h>
#include <asm/kvm_emulate.h>
#include <asm/sections.h>
@@ -138,6 +139,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
if (ret)
return ret;
+ kvm_init_nested(kvm);
+
ret = create_hyp_mappings(kvm, kvm + 1, PAGE_HYP);
if (ret)
goto out_free_stage2_pgd;
@@ -384,6 +387,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
struct kvm_s2_mmu *mmu;
int *last_ran;
+ if (nested_virt_in_use(vcpu))
+ kvm_vcpu_load_hw_mmu(vcpu);
+
mmu = vcpu->arch.hw_mmu;
last_ran = this_cpu_ptr(mmu->last_vcpu_ran);
@@ -432,6 +438,9 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
kvm_vgic_put(vcpu);
kvm_vcpu_pmu_restore_host(vcpu);
+ if (nested_virt_in_use(vcpu))
+ kvm_vcpu_put_hw_mmu(vcpu);
+
vcpu->cpu = -1;
}
@@ -1032,8 +1041,13 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
vcpu->arch.target = phys_target;
+ /* Prepare for nested if required */
+ ret = kvm_vcpu_init_nested(vcpu);
+
/* Now we know what it is, we can reset it. */
- ret = kvm_reset_vcpu(vcpu);
+ if (!ret)
+ ret = kvm_reset_vcpu(vcpu);
+
if (ret) {
vcpu->arch.target = -1;
bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c5d1f3c87dbd..5c1a9966ff31 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -151,7 +151,7 @@ static void *kvm_host_va(phys_addr_t phys)
* does.
*/
/**
- * unmap_stage2_range -- Clear stage2 page table entries to unmap a range
+ * kvm_unmap_stage2_range -- Clear stage2 page table entries to unmap a range
* @mmu: The KVM stage-2 MMU pointer
* @start: The intermediate physical base address of the range to unmap
* @size: The size of the area to unmap
@@ -174,7 +174,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
may_block));
}
-static void unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
+void kvm_unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
{
__unmap_stage2_range(mmu, start, size, true);
}
@@ -448,7 +448,20 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
int cpu, err;
struct kvm_pgtable *pgt;
+ /*
+ * If we already have our page tables in place, and that the
+ * MMU context is the canonical one, we have a bug somewhere,
+ * as this is only supposed to ever happen once per VM.
+ *
+ * Otherwise, we're building nested page tables, and that's
+ * probably because userspace called KVM_ARM_VCPU_INIT more
+ * than once on the same vcpu. Since that's actually legal,
+ * don't kick a fuss and leave gracefully.
+ */
if (mmu->pgt != NULL) {
+ if (&kvm->arch.mmu != mmu)
+ return 0;
+
kvm_err("kvm_arch already initialized?\n");
return -EINVAL;
}
@@ -474,6 +487,9 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
mmu->pgt = pgt;
mmu->pgd_phys = __pa(pgt->pgd);
mmu->vmid.vmid_gen = 0;
+
+ kvm_init_nested_s2_mmu(mmu);
+
return 0;
out_destroy_pgtable:
@@ -519,7 +535,7 @@ static void stage2_unmap_memslot(struct kvm *kvm,
if (!(vma->vm_flags & VM_PFNMAP)) {
gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
- unmap_stage2_range(&kvm->arch.mmu, gpa, vm_end - vm_start);
+ kvm_unmap_stage2_range(&kvm->arch.mmu, gpa, vm_end - vm_start);
}
hva = vm_end;
} while (hva < reg_end);
@@ -1415,7 +1431,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
spin_lock(&kvm->mmu_lock);
if (ret)
- unmap_stage2_range(&kvm->arch.mmu, mem->guest_phys_addr, mem->memory_size);
+ kvm_unmap_stage2_range(&kvm->arch.mmu, mem->guest_phys_addr, mem->memory_size);
else if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
stage2_flush_memslot(kvm, memslot);
spin_unlock(&kvm->mmu_lock);
@@ -1432,11 +1448,6 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
{
}
-void kvm_arch_flush_shadow_all(struct kvm *kvm)
-{
- kvm_free_stage2_pgd(&kvm->arch.mmu);
-}
-
void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
struct kvm_memory_slot *slot)
{
@@ -1444,7 +1455,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
phys_addr_t size = slot->npages << PAGE_SHIFT;
spin_lock(&kvm->mmu_lock);
- unmap_stage2_range(&kvm->arch.mmu, gpa, size);
+ kvm_unmap_stage2_range(&kvm->arch.mmu, gpa, size);
spin_unlock(&kvm->mmu_lock);
}
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 99e1b97ae3ca..c33cc29756fa 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -19,12 +19,177 @@
#include <linux/kvm.h>
#include <linux/kvm_host.h>
+#include <asm/kvm_arm.h>
#include <asm/kvm_emulate.h>
+#include <asm/kvm_mmu.h>
#include <asm/kvm_nested.h>
#include <asm/sysreg.h>
#include "sys_regs.h"
+void kvm_init_nested(struct kvm *kvm)
+{
+ kvm->arch.nested_mmus = NULL;
+ kvm->arch.nested_mmus_size = 0;
+}
+
+int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = vcpu->kvm;
+ struct kvm_s2_mmu *tmp;
+ int num_mmus;
+ int ret = -ENOMEM;
+
+ if (!test_bit(KVM_ARM_VCPU_HAS_EL2, vcpu->arch.features))
+ return 0;
+
+ if (!cpus_have_final_cap(ARM64_HAS_NESTED_VIRT))
+ return -EINVAL;
+
+ mutex_lock(&kvm->lock);
+
+ /*
+ * Let's treat memory allocation failures as benign: If we fail to
+ * allocate anything, return an error and keep the allocated array
+ * alive. Userspace may try to recover by intializing the vcpu
+ * again, and there is no reason to affect the whole VM for this.
+ */
+ num_mmus = atomic_read(&kvm->online_vcpus) * 2;
+ tmp = krealloc(kvm->arch.nested_mmus,
+ num_mmus * sizeof(*kvm->arch.nested_mmus),
+ GFP_KERNEL | __GFP_ZERO);
+ if (tmp) {
+ if (kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 1]) ||
+ kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 2])) {
+ kvm_free_stage2_pgd(&tmp[num_mmus - 1]);
+ kvm_free_stage2_pgd(&tmp[num_mmus - 2]);
+ } else {
+ kvm->arch.nested_mmus_size = num_mmus;
+ ret = 0;
+ }
+
+ kvm->arch.nested_mmus = tmp;
+ }
+
+ mutex_unlock(&kvm->lock);
+ return ret;
+}
+
+/* Must be called with kvm->lock held */
+struct kvm_s2_mmu *lookup_s2_mmu(struct kvm *kvm, u64 vttbr, u64 hcr)
+{
+ bool nested_stage2_enabled = hcr & HCR_VM;
+ int i;
+
+ /* Don't consider the CnP bit for the vttbr match */
+ vttbr = vttbr & ~VTTBR_CNP_BIT;
+
+ /*
+ * Two possibilities when looking up a S2 MMU context:
+ *
+ * - either S2 is enabled in the guest, and we need a context that
+ * is S2-enabled and matches the full VTTBR (VMID+BADDR), which
+ * makes it safe from a TLB conflict perspective (a broken guest
+ * won't be able to generate them),
+ *
+ * - or S2 is disabled, and we need a context that is S2-disabled
+ * and matches the VMID only, as all TLBs are tagged by VMID even
+ * if S2 translation is enabled.
+ */
+ for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ if (!kvm_s2_mmu_valid(mmu))
+ continue;
+
+ if (nested_stage2_enabled &&
+ mmu->nested_stage2_enabled &&
+ vttbr == mmu->vttbr)
+ return mmu;
+
+ if (!nested_stage2_enabled &&
+ !mmu->nested_stage2_enabled &&
+ get_vmid(vttbr) == get_vmid(mmu->vttbr))
+ return mmu;
+ }
+ return NULL;
+}
+
+static struct kvm_s2_mmu *get_s2_mmu_nested(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = vcpu->kvm;
+ u64 vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
+ u64 hcr= vcpu_read_sys_reg(vcpu, HCR_EL2);
+ struct kvm_s2_mmu *s2_mmu;
+ int i;
+
+ s2_mmu = lookup_s2_mmu(kvm, vttbr, hcr);
+ if (s2_mmu)
+ goto out;
+
+ /*
+ * Make sure we don't always search from the same point, or we
+ * will always reuse a potentially active context, leaving
+ * free contexts unused.
+ */
+ for (i = kvm->arch.nested_mmus_next;
+ i < (kvm->arch.nested_mmus_size + kvm->arch.nested_mmus_next);
+ i++) {
+ s2_mmu = &kvm->arch.nested_mmus[i % kvm->arch.nested_mmus_size];
+
+ if (atomic_read(&s2_mmu->refcnt) == 0)
+ break;
+ }
+ BUG_ON(atomic_read(&s2_mmu->refcnt)); /* We have struct MMUs to spare */
+
+ /* Set the scene for the next search */
+ kvm->arch.nested_mmus_next = (i + 1) % kvm->arch.nested_mmus_size;
+
+ if (kvm_s2_mmu_valid(s2_mmu)) {
+ /* Clear the old state */
+ kvm_unmap_stage2_range(s2_mmu, 0, kvm_phys_size(kvm));
+ if (s2_mmu->vmid.vmid_gen)
+ kvm_call_hyp(__kvm_tlb_flush_vmid, s2_mmu);
+ }
+
+ /*
+ * The virtual VMID (modulo CnP) will be used as a key when matching
+ * an existing kvm_s2_mmu.
+ */
+ s2_mmu->vttbr = vttbr & ~VTTBR_CNP_BIT;
+ s2_mmu->nested_stage2_enabled = hcr & HCR_VM;
+
+out:
+ atomic_inc(&s2_mmu->refcnt);
+ return s2_mmu;
+}
+
+void kvm_init_nested_s2_mmu(struct kvm_s2_mmu *mmu)
+{
+ mmu->vttbr = 1;
+ mmu->nested_stage2_enabled = false;
+ atomic_set(&mmu->refcnt, 0);
+}
+
+void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu)
+{
+ if (is_hyp_ctxt(vcpu)) {
+ vcpu->arch.hw_mmu = &vcpu->kvm->arch.mmu;
+ } else {
+ spin_lock(&vcpu->kvm->mmu_lock);
+ vcpu->arch.hw_mmu = get_s2_mmu_nested(vcpu);
+ spin_unlock(&vcpu->kvm->mmu_lock);
+ }
+}
+
+void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu)
+{
+ if (vcpu->arch.hw_mmu != &vcpu->kvm->arch.mmu) {
+ atomic_dec(&vcpu->arch.hw_mmu->refcnt);
+ vcpu->arch.hw_mmu = NULL;
+ }
+}
+
/*
* Inject wfx to the virtual EL2 if this is not from the virtual EL2 and
* the virtual HCR_EL2.TWX is set. Otherwise, let the host hypervisor
@@ -43,6 +208,24 @@ int handle_wfx_nested(struct kvm_vcpu *vcpu, bool is_wfe)
return -EINVAL;
}
+void kvm_arch_flush_shadow_all(struct kvm *kvm)
+{
+ int i;
+
+ for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ WARN_ON(atomic_read(&mmu->refcnt));
+
+ if (!atomic_read(&mmu->refcnt))
+ kvm_free_stage2_pgd(mmu);
+ }
+ kfree(kvm->arch.nested_mmus);
+ kvm->arch.nested_mmus = NULL;
+ kvm->arch.nested_mmus_size = 0;
+ kvm_free_stage2_pgd(&kvm->arch.mmu);
+}
+
/*
* Our emulated CPU doesn't support all the possible features. For the
* sake of simplicity (and probably mental sanity), wipe out a number
--
2.29.2
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
^ permalink raw reply related [flat|nested] 229+ messages in thread
* [PATCH v4 34/66] KVM: arm64: nv: Implement nested Stage-2 page table walk logic
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Christoffer Dall, Jintack Lim
From: Christoffer Dall <christoffer.dall@linaro.org>
Based on the pseudo-code in the ARM ARM, implement a stage 2 software
page table walker.
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
[maz: heavily reworked for future ARMv8.4-TTL support]
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/esr.h | 1 +
arch/arm64/include/asm/kvm_arm.h | 2 +
arch/arm64/include/asm/kvm_nested.h | 13 ++
arch/arm64/kvm/nested.c | 267 ++++++++++++++++++++++++++++
4 files changed, 283 insertions(+)
diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 9e6bc2757a05..05388231adea 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -129,6 +129,7 @@
#define ESR_ELx_CM (UL(1) << ESR_ELx_CM_SHIFT)
/* ISS field definitions for exceptions taken in to Hyp */
+#define ESR_ELx_FSC_ADDRSZ (0x00)
#define ESR_ELx_CV (UL(1) << 24)
#define ESR_ELx_COND_SHIFT (20)
#define ESR_ELx_COND_MASK (UL(0xF) << ESR_ELx_COND_SHIFT)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index c2ab4e1802cf..eb0d00d8a431 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -264,6 +264,8 @@
#define VTTBR_VMID_SHIFT (UL(48))
#define VTTBR_VMID_MASK(size) (_AT(u64, (1 << size) - 1) << VTTBR_VMID_SHIFT)
+#define SCTLR_EE (UL(1) << 25)
+
/* Hyp System Trap Register */
#define HSTR_EL2_T(x) (1 << x)
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 473ecd1d60d0..b784d7891851 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -68,6 +68,19 @@ extern struct kvm_s2_mmu *lookup_s2_mmu(struct kvm *kvm, u64 vttbr, u64 hcr);
extern void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu);
extern void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu);
+struct kvm_s2_trans {
+ phys_addr_t output;
+ unsigned long block_size;
+ bool writable;
+ bool readable;
+ int level;
+ u32 esr;
+ u64 upper_attr;
+};
+
+extern int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
+ struct kvm_s2_trans *result);
+
int handle_wfx_nested(struct kvm_vcpu *vcpu, bool is_wfe);
extern bool __forward_traps(struct kvm_vcpu *vcpu, unsigned int reg,
u64 control_bit);
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index c33cc29756fa..1067b6422cf2 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -75,6 +75,273 @@ int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
return ret;
}
+struct s2_walk_info {
+ int (*read_desc)(phys_addr_t pa, u64 *desc, void *data);
+ void *data;
+ u64 baddr;
+ unsigned int max_pa_bits;
+ unsigned int pgshift;
+ unsigned int pgsize;
+ unsigned int ps;
+ unsigned int sl;
+ unsigned int t0sz;
+ bool be;
+ bool el1_aarch32;
+};
+
+static unsigned int ps_to_output_size(unsigned int ps)
+{
+ switch (ps) {
+ case 0: return 32;
+ case 1: return 36;
+ case 2: return 40;
+ case 3: return 42;
+ case 4: return 44;
+ case 5:
+ default:
+ return 48;
+ }
+}
+
+static u32 compute_fsc(int level, u32 fsc)
+{
+ return fsc | (level & 0x3);
+}
+
+static int check_base_s2_limits(struct s2_walk_info *wi,
+ int level, int input_size, int stride)
+{
+ int start_size;
+
+ /* Check translation limits */
+ switch (wi->pgsize) {
+ case SZ_64K:
+ if (level == 0 || (level == 1 && wi->max_pa_bits <= 42))
+ return -EFAULT;
+ break;
+ case SZ_16K:
+ if (level == 0 || (level == 1 && wi->max_pa_bits <= 40))
+ return -EFAULT;
+ break;
+ case SZ_4K:
+ if (level < 0 || (level == 0 && wi->max_pa_bits <= 42))
+ return -EFAULT;
+ break;
+ }
+
+ /* Check input size limits */
+ if (input_size > wi->max_pa_bits &&
+ (!wi->el1_aarch32 || input_size > 40))
+ return -EFAULT;
+
+ /* Check number of entries in starting level table */
+ start_size = input_size - ((3 - level) * stride + wi->pgshift);
+ if (start_size < 1 || start_size > stride + 4)
+ return -EFAULT;
+
+ return 0;
+}
+
+/* Check if output is within boundaries */
+static int check_output_size(struct s2_walk_info *wi, phys_addr_t output)
+{
+ unsigned int output_size = ps_to_output_size(wi->ps);
+
+ if (output_size > wi->max_pa_bits)
+ output_size = wi->max_pa_bits;
+
+ if (output_size != 48 && (output & GENMASK_ULL(47, output_size)))
+ return -1;
+
+ return 0;
+}
+
+/*
+ * This is essentially a C version of the pseudocode from the ARM ARM
+ * AArch64.TranslationTableWalk function. I strongly recommend looking at
+ * that pseudocode when trying to understand this.
+ *
+ * Must be called with the kvm->srcu read lock held
+ */
+static int walk_nested_s2_pgd(phys_addr_t ipa,
+ struct s2_walk_info *wi, struct kvm_s2_trans *out)
+{
+ int first_block_level, level, stride, input_size, base_lower_bound;
+ phys_addr_t base_addr;
+ unsigned int addr_top, addr_bottom;
+ u64 desc; /* page table entry */
+ int ret;
+ phys_addr_t paddr;
+
+ switch (wi->pgsize) {
+ case SZ_64K:
+ case SZ_16K:
+ level = 3 - wi->sl;
+ first_block_level = 2;
+ break;
+ case SZ_4K:
+ level = 2 - wi->sl;
+ first_block_level = 1;
+ break;
+ default:
+ /* GCC is braindead */
+ unreachable();
+ }
+
+ stride = wi->pgshift - 3;
+ input_size = 64 - wi->t0sz;
+ if (input_size > 48 || input_size < 25)
+ return -EFAULT;
+
+ ret = check_base_s2_limits(wi, level, input_size, stride);
+ if (WARN_ON(ret))
+ return ret;
+
+ base_lower_bound = 3 + input_size - ((3 - level) * stride +
+ wi->pgshift);
+ base_addr = wi->baddr & GENMASK_ULL(47, base_lower_bound);
+
+ if (check_output_size(wi, base_addr)) {
+ out->esr = compute_fsc(level, ESR_ELx_FSC_ADDRSZ);
+ return 1;
+ }
+
+ addr_top = input_size - 1;
+
+ while (1) {
+ phys_addr_t index;
+
+ addr_bottom = (3 - level) * stride + wi->pgshift;
+ index = (ipa & GENMASK_ULL(addr_top, addr_bottom))
+ >> (addr_bottom - 3);
+
+ paddr = base_addr | index;
+ ret = wi->read_desc(paddr, &desc, wi->data);
+ if (ret < 0)
+ return ret;
+
+ /*
+ * Handle reversed descriptors if endianness differs between the
+ * host and the guest hypervisor.
+ */
+ if (wi->be)
+ desc = be64_to_cpu(desc);
+ else
+ desc = le64_to_cpu(desc);
+
+ /* Check for valid descriptor at this point */
+ if (!(desc & 1) || ((desc & 3) == 1 && level == 3)) {
+ out->esr = compute_fsc(level, ESR_ELx_FSC_FAULT);
+ out->upper_attr = desc;
+ return 1;
+ }
+
+ /* We're at the final level or block translation level */
+ if ((desc & 3) == 1 || level == 3)
+ break;
+
+ if (check_output_size(wi, desc)) {
+ out->esr = compute_fsc(level, ESR_ELx_FSC_ADDRSZ);
+ out->upper_attr = desc;
+ return 1;
+ }
+
+ base_addr = desc & GENMASK_ULL(47, wi->pgshift);
+
+ level += 1;
+ addr_top = addr_bottom - 1;
+ }
+
+ if (level < first_block_level) {
+ out->esr = compute_fsc(level, ESR_ELx_FSC_FAULT);
+ out->upper_attr = desc;
+ return 1;
+ }
+
+ /*
+ * We don't use the contiguous bit in the stage-2 ptes, so skip the
+ * check for misprogramming of the contiguous bit.
+ */
+
+ if (check_output_size(wi, desc)) {
+ out->esr = compute_fsc(level, ESR_ELx_FSC_ADDRSZ);
+ out->upper_attr = desc;
+ return 1;
+ }
+
+ if (!(desc & BIT(10))) {
+ out->esr = compute_fsc(level, ESR_ELx_FSC_ACCESS);
+ out->upper_attr = desc;
+ return 1;
+ }
+
+ /* Calculate and return the result */
+ paddr = (desc & GENMASK_ULL(47, addr_bottom)) |
+ (ipa & GENMASK_ULL(addr_bottom - 1, 0));
+ out->output = paddr;
+ out->block_size = 1UL << ((3 - level) * stride + wi->pgshift);
+ out->readable = desc & (0b01 << 6);
+ out->writable = desc & (0b10 << 6);
+ out->level = level;
+ out->upper_attr = desc & GENMASK_ULL(63, 52);
+ return 0;
+}
+
+static int read_guest_s2_desc(phys_addr_t pa, u64 *desc, void *data)
+{
+ struct kvm_vcpu *vcpu = data;
+
+ return kvm_read_guest(vcpu->kvm, pa, desc, sizeof(*desc));
+}
+
+static void vtcr_to_walk_info(u64 vtcr, struct s2_walk_info *wi)
+{
+ wi->t0sz = vtcr & TCR_EL2_T0SZ_MASK;
+
+ switch (vtcr & VTCR_EL2_TG0_MASK) {
+ case VTCR_EL2_TG0_4K:
+ wi->pgshift = 12; break;
+ case VTCR_EL2_TG0_16K:
+ wi->pgshift = 14; break;
+ case VTCR_EL2_TG0_64K:
+ default:
+ wi->pgshift = 16; break;
+ }
+
+ wi->pgsize = 1UL << wi->pgshift;
+ wi->ps = (vtcr & VTCR_EL2_PS_MASK) >> VTCR_EL2_PS_SHIFT;
+ wi->sl = (vtcr & VTCR_EL2_SL0_MASK) >> VTCR_EL2_SL0_SHIFT;
+ wi->max_pa_bits = VTCR_EL2_IPA(vtcr);
+}
+
+int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
+ struct kvm_s2_trans *result)
+{
+ u64 vtcr = vcpu_read_sys_reg(vcpu, VTCR_EL2);
+ struct s2_walk_info wi;
+ int ret;
+
+ result->esr = 0;
+
+ if (!nested_virt_in_use(vcpu))
+ return 0;
+
+ wi.read_desc = read_guest_s2_desc;
+ wi.data = vcpu;
+ wi.baddr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
+
+ vtcr_to_walk_info(vtcr, &wi);
+
+ wi.be = vcpu_read_sys_reg(vcpu, SCTLR_EL2) & SCTLR_EE;
+ wi.el1_aarch32 = vcpu_mode_is_32bit(vcpu);
+
+ ret = walk_nested_s2_pgd(gipa, &wi, result);
+ if (ret)
+ result->esr |= (kvm_vcpu_get_esr(vcpu) & ~ESR_ELx_FSC);
+
+ return ret;
+}
+
/* Must be called with kvm->lock held */
struct kvm_s2_mmu *lookup_s2_mmu(struct kvm *kvm, u64 vttbr, u64 hcr)
{
--
2.29.2
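For readers following the loop in walk_nested_s2_pgd(), the per-level
address slicing can be reproduced stand-alone. The sketch below assumes a
4K granule (pgshift = 12, stride = 9) and T0SZ = 24, giving a 40-bit input
range that starts the walk at level 0; the GENMASK_ULL arithmetic mirrors
the walker, while the program itself and the sample IPA are invented for
illustration.

/* Per-level IA slicing for a 4K-granule stage-2 walk: level n
 * consumes IA bits [addr_top : (3 - n) * stride + pgshift].
 */
#include <stdint.h>
#include <stdio.h>

#define GENMASK_ULL(h, l) \
	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

int main(void)
{
	const unsigned int pgshift = 12, stride = 9;	/* 4K granule */
	const unsigned int t0sz = 24;			/* 40-bit IA  */
	unsigned int addr_top = 64 - t0sz - 1;		/* bit 39     */
	uint64_t ipa = 0x12345676000ULL & GENMASK_ULL(39, 0);

	for (int level = 0; level <= 3; level++) {
		unsigned int addr_bottom = (3 - level) * stride + pgshift;
		uint64_t index = (ipa & GENMASK_ULL(addr_top, addr_bottom))
				 >> (addr_bottom - 3);

		printf("level %d: IA[%u:%u] -> table offset 0x%llx\n",
		       level, addr_top, addr_bottom,
		       (unsigned long long)index);
		addr_top = addr_bottom - 1;
	}
	return 0;
}

Each printed value is a byte offset into that level's table: the index is
shifted down by (addr_bottom - 3) rather than addr_bottom, which multiplies
it by the 8-byte descriptor size and matches the paddr = base_addr | index
step in the walker.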
* [PATCH v4 35/66] KVM: arm64: nv: Handle shadow stage 2 page faults
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Christoffer Dall, Jintack Lim
If we are faulting on a shadow stage 2 translation, we first walk the
guest hypervisor's stage 2 page table to see if it has a mapping. If
not, we inject a stage 2 page fault to the virtual EL2. Otherwise, we
create a mapping in the shadow stage 2 page table.
Note that we have to deal with two IPAs when we get a shadow stage 2
page fault. One is the address we faulted on, and is in the L2 guest
phys space. The other is from the guest stage-2 page table walk, and is
in the L1 guest phys space. To differentiate them, we rename variables
so that fault_ipa is used for the former and ipa is used for the latter.
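As a rough illustration of the two address spaces (not the patch's code),
the lookup-then-map decision can be modelled stand-alone. Everything below
(the guest_s2 array, the page count, the use of values <= 0 as "unmapped")
is invented for the example; the patch itself walks the real guest stage-2
tables via kvm_walk_nested_s2().

/* Toy model: fault_ipa is an L2 IPA; walking the guest
 * hypervisor's stage-2 yields ipa, an L1 IPA, which is what
 * the shadow stage-2 is populated from.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define NPAGES		16

/* guest stage-2: L2 IPA page -> L1 IPA page; <= 0 means unmapped */
static int64_t guest_s2[NPAGES] = { [2] = 7, [5] = 9 };

static void handle_shadow_fault(uint64_t fault_ipa)
{
	uint64_t page = fault_ipa >> PAGE_SHIFT;

	if (page >= NPAGES || guest_s2[page] <= 0) {
		printf("0x%llx: inject stage 2 fault to virtual EL2\n",
		       (unsigned long long)fault_ipa);
		return;
	}

	uint64_t ipa = ((uint64_t)guest_s2[page] << PAGE_SHIFT) |
		       (fault_ipa & ((1ULL << PAGE_SHIFT) - 1));

	printf("0x%llx: map L1 IPA 0x%llx in the shadow stage 2\n",
	       (unsigned long long)fault_ipa, (unsigned long long)ipa);
}

int main(void)
{
	handle_shadow_fault(0x2040);	/* guest maps it: shadow it */
	handle_shadow_fault(0x3040);	/* guest doesn't: inject    */
	return 0;
}

The real code also propagates permissions and syndrome information from
the guest walk, as the kvm_s2_trans structure from the previous patch
shows.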
Co-developed-by: Christoffer Dall <christoffer.dall@linaro.org>
Co-developed-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
[maz: rewrote this multiple times...]
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_emulate.h | 6 ++
arch/arm64/include/asm/kvm_nested.h | 18 ++++++
arch/arm64/kvm/mmu.c | 93 ++++++++++++++++++++++------
arch/arm64/kvm/nested.c | 48 ++++++++++++++
4 files changed, 147 insertions(+), 18 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 12935f6a7aa9..1fb2edc923cc 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -591,4 +591,10 @@ static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
vcpu->arch.flags |= KVM_ARM64_INCREMENT_PC;
}
+static inline bool kvm_is_shadow_s2_fault(struct kvm_vcpu *vcpu)
+{
+ return (vcpu->arch.hw_mmu != &vcpu->kvm->arch.mmu &&
+ vcpu->arch.hw_mmu->nested_stage2_enabled);
+}
+
#endif /* __ARM64_KVM_EMULATE_H__ */
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index b784d7891851..4f93a5dab183 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -78,9 +78,27 @@ struct kvm_s2_trans {
u64 upper_attr;
};
+static inline phys_addr_t kvm_s2_trans_output(struct kvm_s2_trans *trans)
+{
+ return trans->output;
+}
+
+static inline unsigned long kvm_s2_trans_size(struct kvm_s2_trans *trans)
+{
+ return trans->block_size;
+}
+
+static inline u32 kvm_s2_trans_esr(struct kvm_s2_trans *trans)
+{
+ return trans->esr;
+}
+
extern int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
struct kvm_s2_trans *result);
+extern int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu,
+ struct kvm_s2_trans *trans);
+extern int kvm_inject_s2_fault(struct kvm_vcpu *vcpu, u64 esr_el2);
int handle_wfx_nested(struct kvm_vcpu *vcpu, bool is_wfe);
extern bool __forward_traps(struct kvm_vcpu *vcpu, unsigned int reg,
u64 control_bit);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 5c1a9966ff31..6b3753460293 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -796,7 +796,7 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
static unsigned long
transparent_hugepage_adjust(struct kvm_memory_slot *memslot,
unsigned long hva, kvm_pfn_t *pfnp,
- phys_addr_t *ipap)
+ phys_addr_t *ipap, phys_addr_t *fault_ipap)
{
kvm_pfn_t pfn = *pfnp;
@@ -825,6 +825,7 @@ transparent_hugepage_adjust(struct kvm_memory_slot *memslot,
* to PG_head and switch the pfn from a tail page to the head
* page accordingly.
*/
+ *fault_ipap &= PMD_MASK;
*ipap &= PMD_MASK;
kvm_release_pfn_clean(pfn);
pfn &= ~(PTRS_PER_PMD - 1);
@@ -839,14 +840,16 @@ transparent_hugepage_adjust(struct kvm_memory_slot *memslot,
}
static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
- struct kvm_memory_slot *memslot, unsigned long hva,
- unsigned long fault_status)
+ struct kvm_s2_trans *nested,
+ struct kvm_memory_slot *memslot,
+ unsigned long hva, unsigned long fault_status)
{
int ret = 0;
- bool write_fault, writable, force_pte = false;
+ bool write_fault, writable;
bool exec_fault;
bool device = false;
unsigned long mmu_seq;
+ phys_addr_t ipa = fault_ipa;
struct kvm *kvm = vcpu->kvm;
struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
struct vm_area_struct *vma;
@@ -858,6 +861,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
unsigned long vma_pagesize, fault_granule;
enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
struct kvm_pgtable *pgt;
+ unsigned long max_map_size = PUD_SIZE;
fault_granule = 1UL << ARM64_HW_PGTABLE_LEVEL_SHIFT(fault_level);
write_fault = kvm_is_write_fault(vcpu);
@@ -885,7 +889,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (logging_active ||
(vma->vm_flags & VM_PFNMAP)) {
- force_pte = true;
+ max_map_size = vma_pagesize = PAGE_SIZE;
vma_shift = PAGE_SHIFT;
}
@@ -905,7 +909,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
fallthrough;
case CONT_PTE_SHIFT:
vma_shift = PAGE_SHIFT;
- force_pte = true;
+ max_map_size = PAGE_SIZE;
fallthrough;
case PAGE_SHIFT:
break;
@@ -914,10 +918,25 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
}
vma_pagesize = 1UL << vma_shift;
+
+ if (kvm_is_shadow_s2_fault(vcpu)) {
+ ipa = kvm_s2_trans_output(nested);
+
+ /*
+ * If we're about to create a shadow stage 2 entry, then we
+ * can only create a block mapping if the guest stage 2 page
+ * table uses at least as big a mapping.
+ */
+ max_map_size = min(kvm_s2_trans_size(nested), max_map_size);
+ }
+
+ vma_pagesize = min(vma_pagesize, max_map_size);
+
if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
fault_ipa &= ~(vma_pagesize - 1);
- gfn = fault_ipa >> PAGE_SHIFT;
+ gfn = ipa >> PAGE_SHIFT;
+
mmap_read_unlock(current->mm);
/*
@@ -960,7 +979,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (kvm_is_device_pfn(pfn)) {
device = true;
- force_pte = true;
+ max_map_size = PAGE_SIZE;
} else if (logging_active && !write_fault) {
/*
* Only actually map the page as writable if this was a write
@@ -981,9 +1000,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
* If we are not forced to use page mapping, check if we are
* backed by a THP and thus use block mapping if possible.
*/
- if (vma_pagesize == PAGE_SIZE && !force_pte)
- vma_pagesize = transparent_hugepage_adjust(memslot, hva,
- &pfn, &fault_ipa);
+ if (vma_pagesize == PAGE_SIZE && max_map_size >= PMD_SIZE)
+ vma_pagesize = transparent_hugepage_adjust(memslot, hva, &pfn,
+ &ipa, &fault_ipa);
if (writable)
prot |= KVM_PGTABLE_PROT_W;
@@ -1059,8 +1078,10 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
{
unsigned long fault_status;
- phys_addr_t fault_ipa;
+ phys_addr_t fault_ipa; /* The address we faulted on */
+ phys_addr_t ipa; /* Always the IPA in the L1 guest phys space */
struct kvm_memory_slot *memslot;
+ struct kvm_s2_trans nested_trans;
unsigned long hva;
bool is_iabt, write_fault, writable;
gfn_t gfn;
@@ -1068,7 +1089,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
fault_status = kvm_vcpu_trap_get_fault_type(vcpu);
- fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
+ ipa = fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
is_iabt = kvm_vcpu_trap_is_iabt(vcpu);
/* Synchronous External Abort? */
@@ -1089,6 +1110,12 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
/* Check the stage-2 fault is trans. fault or write fault */
if (fault_status != FSC_FAULT && fault_status != FSC_PERM &&
fault_status != FSC_ACCESS) {
+ /*
+ * We must never see an address size fault on a shadow stage 2
+ * page table walk, because we would have injected an address
+ * size fault when we walked the nested s2 page table and not
+ * created the shadow entry.
+ */
kvm_err("Unsupported FSC: EC=%#x xFSC=%#lx ESR_EL2=%#lx\n",
kvm_vcpu_trap_get_class(vcpu),
(unsigned long)kvm_vcpu_trap_get_fault(vcpu),
@@ -1098,7 +1125,36 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
idx = srcu_read_lock(&vcpu->kvm->srcu);
- gfn = fault_ipa >> PAGE_SHIFT;
+ /*
+ * We may have faulted on a shadow stage 2 page table if we are
+ * running a nested guest. In this case, we have to resolve the L2
+ * IPA to the L1 IPA first, before knowing what kind of memory should
+ * back the L1 IPA.
+ *
+ * If the shadow stage 2 page table walk faults, then we simply inject
+ * this to the guest and carry on.
+ */
+ if (kvm_is_shadow_s2_fault(vcpu)) {
+ u32 esr;
+
+ ret = kvm_walk_nested_s2(vcpu, fault_ipa, &nested_trans);
+ esr = kvm_s2_trans_esr(&nested_trans);
+ if (esr)
+ kvm_inject_s2_fault(vcpu, esr);
+ if (ret)
+ goto out_unlock;
+
+ ret = kvm_s2_handle_perm_fault(vcpu, &nested_trans);
+ esr = kvm_s2_trans_esr(&nested_trans);
+ if (esr)
+ kvm_inject_s2_fault(vcpu, esr);
+ if (ret)
+ goto out_unlock;
+
+ ipa = kvm_s2_trans_output(&nested_trans);
+ }
+
+ gfn = ipa >> PAGE_SHIFT;
memslot = gfn_to_memslot(vcpu->kvm, gfn);
hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
write_fault = kvm_is_write_fault(vcpu);
@@ -1142,13 +1198,13 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
* faulting VA. This is always 12 bits, irrespective
* of the page size.
*/
- fault_ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
- ret = io_mem_abort(vcpu, fault_ipa);
+ ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
+ ret = io_mem_abort(vcpu, ipa);
goto out_unlock;
}
/* Userspace should not be able to register out-of-bounds IPAs */
- VM_BUG_ON(fault_ipa >= kvm_phys_size(vcpu->kvm));
+ VM_BUG_ON(ipa >= kvm_phys_size(vcpu->kvm));
if (fault_status == FSC_ACCESS) {
handle_access_fault(vcpu, fault_ipa);
@@ -1156,7 +1212,8 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
goto out_unlock;
}
- ret = user_mem_abort(vcpu, fault_ipa, memslot, hva, fault_status);
+ ret = user_mem_abort(vcpu, fault_ipa, &nested_trans,
+ memslot, hva, fault_status);
if (ret == 0)
ret = 1;
out:
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 1067b6422cf2..57f32768d04d 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -108,6 +108,15 @@ static u32 compute_fsc(int level, u32 fsc)
return fsc | (level & 0x3);
}
+static int esr_s2_fault(struct kvm_vcpu *vcpu, int level, u32 fsc)
+{
+ u32 esr;
+
+ esr = kvm_vcpu_get_esr(vcpu) & ~ESR_ELx_FSC;
+ esr |= compute_fsc(level, fsc);
+ return esr;
+}
+
static int check_base_s2_limits(struct s2_walk_info *wi,
int level, int input_size, int stride)
{
@@ -457,6 +466,45 @@ void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu)
}
}
+/*
+ * Returns non-zero if the permission fault is handled by injecting it
+ * into the guest hypervisor.
+ */
+int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu, struct kvm_s2_trans *trans)
+{
+ unsigned long fault_status = kvm_vcpu_trap_get_fault_type(vcpu);
+ bool forward_fault = false;
+
+ trans->esr = 0;
+
+ if (fault_status != FSC_PERM)
+ return 0;
+
+ if (kvm_vcpu_trap_is_iabt(vcpu)) {
+ forward_fault = (trans->upper_attr & BIT(54));
+ } else {
+ bool write_fault = kvm_is_write_fault(vcpu);
+
+ forward_fault = ((write_fault && !trans->writable) ||
+ (!write_fault && !trans->readable));
+ }
+
+ if (forward_fault) {
+ trans->esr = esr_s2_fault(vcpu, trans->level, ESR_ELx_FSC_PERM);
+ return 1;
+ }
+
+ return 0;
+}
+
+int kvm_inject_s2_fault(struct kvm_vcpu *vcpu, u64 esr_el2)
+{
+ vcpu_write_sys_reg(vcpu, vcpu->arch.fault.far_el2, FAR_EL2);
+ vcpu_write_sys_reg(vcpu, vcpu->arch.fault.hpfar_el2, HPFAR_EL2);
+
+ return kvm_inject_nested_sync(vcpu, esr_el2);
+}
+
/*
* Inject wfx to the virtual EL2 if this is not from the virtual EL2 and
* the virtual HCR_EL2.TWX is set. Otherwise, let the host hypervisor
--
2.29.2
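The max_map_size clamping is the subtle part of this patch: a shadow entry may
never cover more than both the host VMA and the guest's own stage 2 allow. A
minimal sketch of the clamping (shadow_map_size is an illustrative name, not
part of the patch):

static unsigned long shadow_map_size(unsigned long vma_pagesize,
				     unsigned long guest_s2_size)
{
	/* PUD_SIZE is the largest block mapping KVM will create */
	unsigned long max_map_size = min(guest_s2_size, PUD_SIZE);

	return min(vma_pagesize, max_map_size);
}

For example, a THP-backed VMA that could take a 2MB block over a region the
L1 maps with 4kB pages still results in a 4kB shadow entry.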
* [PATCH v4 36/66] KVM: arm64: nv: Restrict S2 RD/WR permissions to match the guest's
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team
When mapping a page in a shadow stage-2, special care must be
taken not to be more permissive than the guest's own stage-2
(i.e. a page must not be made writable or readable when the guest
hasn't granted that permission).
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_nested.h | 15 +++++++++++++++
arch/arm64/kvm/mmu.c | 14 +++++++++++++-
arch/arm64/kvm/nested.c | 2 +-
3 files changed, 29 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 4f93a5dab183..3f3d8e10bd99 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -93,6 +93,21 @@ static inline u32 kvm_s2_trans_esr(struct kvm_s2_trans *trans)
return trans->esr;
}
+static inline bool kvm_s2_trans_readable(struct kvm_s2_trans *trans)
+{
+ return trans->readable;
+}
+
+static inline bool kvm_s2_trans_writable(struct kvm_s2_trans *trans)
+{
+ return trans->writable;
+}
+
+static inline bool kvm_s2_trans_executable(struct kvm_s2_trans *trans)
+{
+ return !(trans->upper_attr & BIT(54));
+}
+
extern int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
struct kvm_s2_trans *result);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 6b3753460293..6db8fa8bc5a3 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -991,6 +991,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (exec_fault && device)
return -ENOEXEC;
+ /*
+ * Potentially reduce shadow S2 permissions to match the guest's own
+ * S2. For exec faults, we'd only reach this point if the guest
+ * actually allowed it (see kvm_s2_handle_perm_fault).
+ */
+ if (kvm_is_shadow_s2_fault(vcpu)) {
+ writable &= kvm_s2_trans_writable(nested);
+ if (!kvm_s2_trans_readable(nested))
+ prot &= ~KVM_PGTABLE_PROT_R;
+ }
+
spin_lock(&kvm->mmu_lock);
pgt = vcpu->arch.hw_mmu->pgt;
if (mmu_notifier_retry(kvm, mmu_seq))
@@ -1016,7 +1027,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (device)
prot |= KVM_PGTABLE_PROT_DEVICE;
- else if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
+ else if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC) &&
+ kvm_s2_trans_executable(nested))
prot |= KVM_PGTABLE_PROT_X;
/*
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 57f32768d04d..2e6a97e43396 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -481,7 +481,7 @@ int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu, struct kvm_s2_trans *trans)
return 0;
if (kvm_vcpu_trap_is_iabt(vcpu)) {
- forward_fault = (trans->upper_attr & BIT(54));
+ forward_fault = !kvm_s2_trans_executable(trans);
} else {
bool write_fault = kvm_is_write_fault(vcpu);
--
2.29.2
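Put differently, the effective shadow permissions are the intersection of
what the host grants and what the guest's stage 2 grants. A sketch of that
intersection (shadow_prot is an illustrative helper, not part of the patch):

static enum kvm_pgtable_prot shadow_prot(enum kvm_pgtable_prot host_prot,
					 struct kvm_s2_trans *nested)
{
	enum kvm_pgtable_prot prot = host_prot;

	/* drop any right the guest's own stage 2 does not grant */
	if (!kvm_s2_trans_readable(nested))
		prot &= ~KVM_PGTABLE_PROT_R;
	if (!kvm_s2_trans_writable(nested))
		prot &= ~KVM_PGTABLE_PROT_W;
	if (!kvm_s2_trans_executable(nested))
		prot &= ~KVM_PGTABLE_PROT_X;

	return prot;
}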
* [PATCH v4 37/66] KVM: arm64: nv: Unmap/flush shadow stage 2 page tables
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Christoffer Dall, Jintack Lim
From: Christoffer Dall <christoffer.dall@linaro.org>
Unmap/flush shadow stage 2 page tables for the nested VMs as well as the
stage 2 page table for the guest hypervisor.
Note: A bunch of the code in mmu.c relating to MMU notifiers is
currently dealt with in an extremely abrupt way, for example by clearing
out an entire shadow stage-2 table. This will be handled in a more
efficient way using the reverse mapping feature in a later version of
the patch series.
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_mmu.h | 3 +++
arch/arm64/include/asm/kvm_nested.h | 3 +++
arch/arm64/kvm/mmu.c | 29 ++++++++++++++++++---
arch/arm64/kvm/nested.c | 39 +++++++++++++++++++++++++++++
4 files changed, 70 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 579980a8b05f..eaec0366526d 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -158,6 +158,8 @@ int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
void __iomem **haddr);
int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size,
void **haddr);
+void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu,
+ phys_addr_t addr, phys_addr_t end);
void free_hyp_pgds(void);
void kvm_unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size);
@@ -166,6 +168,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu);
void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu);
int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
phys_addr_t pa, unsigned long size, bool writable);
+void kvm_stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end);
int kvm_handle_guest_abort(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 3f3d8e10bd99..2987806850f0 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -114,6 +114,9 @@ extern int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
extern int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu,
struct kvm_s2_trans *trans);
extern int kvm_inject_s2_fault(struct kvm_vcpu *vcpu, u64 esr_el2);
+extern void kvm_nested_s2_wp(struct kvm *kvm);
+extern void kvm_nested_s2_clear(struct kvm *kvm);
+extern void kvm_nested_s2_flush(struct kvm *kvm);
int handle_wfx_nested(struct kvm_vcpu *vcpu, bool is_wfe);
extern bool __forward_traps(struct kvm_vcpu *vcpu, unsigned int reg,
u64 control_bit);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 6db8fa8bc5a3..fddcbe200573 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -179,13 +179,20 @@ void kvm_unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
__unmap_stage2_range(mmu, start, size, true);
}
+void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu,
+ phys_addr_t addr, phys_addr_t end)
+{
+ stage2_apply_range_resched(kvm_s2_mmu_to_kvm(mmu), addr, end, kvm_pgtable_stage2_flush);
+}
+
static void stage2_flush_memslot(struct kvm *kvm,
struct kvm_memory_slot *memslot)
{
phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
phys_addr_t end = addr + PAGE_SIZE * memslot->npages;
+ struct kvm_s2_mmu *mmu = &kvm->arch.mmu;
- stage2_apply_range_resched(kvm, addr, end, kvm_pgtable_stage2_flush);
+ kvm_stage2_flush_range(mmu, addr, end);
}
/**
@@ -208,6 +215,8 @@ static void stage2_flush_vm(struct kvm *kvm)
kvm_for_each_memslot(memslot, slots)
stage2_flush_memslot(kvm, memslot);
+ kvm_nested_s2_flush(kvm);
+
spin_unlock(&kvm->mmu_lock);
srcu_read_unlock(&kvm->srcu, idx);
}
@@ -562,6 +571,8 @@ void stage2_unmap_vm(struct kvm *kvm)
kvm_for_each_memslot(memslot, slots)
stage2_unmap_memslot(kvm, memslot);
+ kvm_nested_s2_clear(kvm);
+
spin_unlock(&kvm->mmu_lock);
mmap_read_unlock(current->mm);
srcu_read_unlock(&kvm->srcu, idx);
@@ -636,7 +647,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
* @addr: Start address of range
* @end: End address of range
*/
-static void stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
+void kvm_stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
{
struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
stage2_apply_range_resched(kvm, addr, end, kvm_pgtable_stage2_wrprotect);
@@ -668,7 +679,8 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
spin_lock(&kvm->mmu_lock);
- stage2_wp_range(&kvm->arch.mmu, start, end);
+ kvm_stage2_wp_range(&kvm->arch.mmu, start, end);
+ kvm_nested_s2_wp(kvm);
spin_unlock(&kvm->mmu_lock);
kvm_flush_remote_tlbs(kvm);
}
@@ -692,7 +704,7 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
- stage2_wp_range(&kvm->arch.mmu, start, end);
+ kvm_stage2_wp_range(&kvm->arch.mmu, start, end);
}
/*
@@ -707,6 +719,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
gfn_t gfn_offset, unsigned long mask)
{
kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
+ kvm_nested_s2_wp(kvm);
}
static void clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size)
@@ -1247,6 +1260,7 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
(range->end - range->start) << PAGE_SHIFT,
range->may_block);
+ kvm_nested_s2_clear(kvm);
return 0;
}
@@ -1275,6 +1289,7 @@ bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
PAGE_SIZE, __pfn_to_phys(pfn),
KVM_PGTABLE_PROT_R, NULL);
+ kvm_nested_s2_clear(kvm);
return 0;
}
@@ -1293,6 +1308,11 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
range->start << PAGE_SHIFT);
pte = __pte(kpte);
return pte_valid(pte) && pte_young(pte);
+
+ /*
+ * TODO: Handle nested_mmu structures here using the reverse mapping in
+ * a later version of patch series.
+ */
}
bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
@@ -1525,6 +1545,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
spin_lock(&kvm->mmu_lock);
kvm_unmap_stage2_range(&kvm->arch.mmu, gpa, size);
+ kvm_nested_s2_clear(kvm);
spin_unlock(&kvm->mmu_lock);
}
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 2e6a97e43396..9aa4cefc954d 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -505,6 +505,45 @@ int kvm_inject_s2_fault(struct kvm_vcpu *vcpu, u64 esr_el2)
return kvm_inject_nested_sync(vcpu, esr_el2);
}
+/* expects kvm->mmu_lock to be held */
+void kvm_nested_s2_wp(struct kvm *kvm)
+{
+ int i;
+
+ for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ if (kvm_s2_mmu_valid(mmu))
+ kvm_stage2_wp_range(mmu, 0, kvm_phys_size(kvm));
+ }
+}
+
+/* expects kvm->mmu_lock to be held */
+void kvm_nested_s2_clear(struct kvm *kvm)
+{
+ int i;
+
+ for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ if (kvm_s2_mmu_valid(mmu))
+ kvm_unmap_stage2_range(mmu, 0, kvm_phys_size(kvm));
+ }
+}
+
+/* expects kvm->mmu_lock to be held */
+void kvm_nested_s2_flush(struct kvm *kvm)
+{
+ int i;
+
+ for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ if (kvm_s2_mmu_valid(mmu))
+ kvm_stage2_flush_range(mmu, 0, kvm_phys_size(kvm));
+ }
+}
+
/*
* Inject wfx to the virtual EL2 if this is not from the virtual EL2 and
* the virtual HCR_EL2.TWX is set. Otherwise, let the host hypervisor
--
2.29.2
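The three kvm_nested_s2_* helpers share a single iteration pattern; a generic
form would look like the sketch below (purely illustrative, and glossing over
the fact that kvm_unmap_stage2_range() takes a size rather than an end
address, so the clear path would need a thin wrapper):

/* expects kvm->mmu_lock to be held, like the helpers it generalizes */
static void kvm_nested_s2_apply(struct kvm *kvm,
				void (*fn)(struct kvm_s2_mmu *mmu,
					   phys_addr_t start, phys_addr_t end))
{
	int i;

	for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
		struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];

		if (kvm_s2_mmu_valid(mmu))
			fn(mmu, 0, kvm_phys_size(kvm));
	}
}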
^ permalink raw reply related [flat|nested] 229+ messages in thread
* [PATCH v4 37/66] KVM: arm64: nv: Unmap/flush shadow stage 2 page tables
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Christoffer Dall, Jintack Lim
From: Christoffer Dall <christoffer.dall@linaro.org>
Unmap/flush shadow stage 2 page tables for the nested VMs as well as the
stage 2 page table for the guest hypervisor.
Note: A bunch of the code in mmu.c relating to MMU notifiers is
currently dealt with in an extremely abrupt way, for example by clearing
out an entire shadow stage-2 table. This will be handled in a more
efficient way using the reverse mapping feature in a later version of
the patch series.
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_mmu.h | 3 +++
arch/arm64/include/asm/kvm_nested.h | 3 +++
arch/arm64/kvm/mmu.c | 29 ++++++++++++++++++---
arch/arm64/kvm/nested.c | 39 +++++++++++++++++++++++++++++
4 files changed, 70 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 579980a8b05f..eaec0366526d 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -158,6 +158,8 @@ int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
void __iomem **haddr);
int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size,
void **haddr);
+void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu,
+ phys_addr_t addr, phys_addr_t end);
void free_hyp_pgds(void);
void kvm_unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size);
@@ -166,6 +168,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu);
void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu);
int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
phys_addr_t pa, unsigned long size, bool writable);
+void kvm_stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end);
int kvm_handle_guest_abort(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 3f3d8e10bd99..2987806850f0 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -114,6 +114,9 @@ extern int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
extern int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu,
struct kvm_s2_trans *trans);
extern int kvm_inject_s2_fault(struct kvm_vcpu *vcpu, u64 esr_el2);
+extern void kvm_nested_s2_wp(struct kvm *kvm);
+extern void kvm_nested_s2_clear(struct kvm *kvm);
+extern void kvm_nested_s2_flush(struct kvm *kvm);
int handle_wfx_nested(struct kvm_vcpu *vcpu, bool is_wfe);
extern bool __forward_traps(struct kvm_vcpu *vcpu, unsigned int reg,
u64 control_bit);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 6db8fa8bc5a3..fddcbe200573 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -179,13 +179,20 @@ void kvm_unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
__unmap_stage2_range(mmu, start, size, true);
}
+void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu,
+ phys_addr_t addr, phys_addr_t end)
+{
+ stage2_apply_range_resched(kvm_s2_mmu_to_kvm(mmu), addr, end, kvm_pgtable_stage2_flush);
+}
+
static void stage2_flush_memslot(struct kvm *kvm,
struct kvm_memory_slot *memslot)
{
phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
phys_addr_t end = addr + PAGE_SIZE * memslot->npages;
+ struct kvm_s2_mmu *mmu = &kvm->arch.mmu;
- stage2_apply_range_resched(kvm, addr, end, kvm_pgtable_stage2_flush);
+ kvm_stage2_flush_range(mmu, addr, end);
}
/**
@@ -208,6 +215,8 @@ static void stage2_flush_vm(struct kvm *kvm)
kvm_for_each_memslot(memslot, slots)
stage2_flush_memslot(kvm, memslot);
+ kvm_nested_s2_flush(kvm);
+
spin_unlock(&kvm->mmu_lock);
srcu_read_unlock(&kvm->srcu, idx);
}
@@ -562,6 +571,8 @@ void stage2_unmap_vm(struct kvm *kvm)
kvm_for_each_memslot(memslot, slots)
stage2_unmap_memslot(kvm, memslot);
+ kvm_nested_s2_clear(kvm);
+
spin_unlock(&kvm->mmu_lock);
mmap_read_unlock(current->mm);
srcu_read_unlock(&kvm->srcu, idx);
@@ -636,7 +647,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
* @addr: Start address of range
* @end: End address of range
*/
-static void stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
+void kvm_stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
{
struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
stage2_apply_range_resched(kvm, addr, end, kvm_pgtable_stage2_wrprotect);
@@ -668,7 +679,8 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
spin_lock(&kvm->mmu_lock);
- stage2_wp_range(&kvm->arch.mmu, start, end);
+ kvm_stage2_wp_range(&kvm->arch.mmu, start, end);
+ kvm_nested_s2_wp(kvm);
spin_unlock(&kvm->mmu_lock);
kvm_flush_remote_tlbs(kvm);
}
@@ -692,7 +704,7 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
- stage2_wp_range(&kvm->arch.mmu, start, end);
+ kvm_stage2_wp_range(&kvm->arch.mmu, start, end);
}
/*
@@ -707,6 +719,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
gfn_t gfn_offset, unsigned long mask)
{
kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
+ kvm_nested_s2_wp(kvm);
}
static void clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size)
@@ -1247,6 +1260,7 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
(range->end - range->start) << PAGE_SHIFT,
range->may_block);
+ kvm_nested_s2_clear(kvm);
return 0;
}
@@ -1275,6 +1289,7 @@ bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
PAGE_SIZE, __pfn_to_phys(pfn),
KVM_PGTABLE_PROT_R, NULL);
+ kvm_nested_s2_clear(kvm);
return 0;
}
@@ -1293,6 +1308,11 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
range->start << PAGE_SHIFT);
pte = __pte(kpte);
return pte_valid(pte) && pte_young(pte);
+
+ /*
+ * TODO: Handle the nested_mmu structures here using the reverse mapping
+ * in a later version of the patch series.
+ */
}
bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
@@ -1525,6 +1545,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
spin_lock(&kvm->mmu_lock);
kvm_unmap_stage2_range(&kvm->arch.mmu, gpa, size);
+ kvm_nested_s2_clear(kvm);
spin_unlock(&kvm->mmu_lock);
}
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 2e6a97e43396..9aa4cefc954d 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -505,6 +505,45 @@ int kvm_inject_s2_fault(struct kvm_vcpu *vcpu, u64 esr_el2)
return kvm_inject_nested_sync(vcpu, esr_el2);
}
+/* expects kvm->mmu_lock to be held */
+void kvm_nested_s2_wp(struct kvm *kvm)
+{
+ int i;
+
+ for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ if (kvm_s2_mmu_valid(mmu))
+ kvm_stage2_wp_range(mmu, 0, kvm_phys_size(kvm));
+ }
+}
+
+/* expects kvm->mmu_lock to be held */
+void kvm_nested_s2_clear(struct kvm *kvm)
+{
+ int i;
+
+ for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ if (kvm_s2_mmu_valid(mmu))
+ kvm_unmap_stage2_range(mmu, 0, kvm_phys_size(kvm));
+ }
+}
+
+/* expects kvm->mmu_lock to be held */
+void kvm_nested_s2_flush(struct kvm *kvm)
+{
+ int i;
+
+ for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ if (kvm_s2_mmu_valid(mmu))
+ kvm_stage2_flush_range(mmu, 0, kvm_phys_size(kvm));
+ }
+}
+
/*
* Inject wfx to the virtual EL2 if this is not from the virtual EL2 and
* the virtual HCR_EL2.TWX is set. Otherwise, let the host hypervisor
--
2.29.2
* [PATCH v4 38/66] KVM: arm64: nv: Introduce sys_reg_desc.forward_trap
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
From: Jintack Lim <jintack.lim@linaro.org>
This introduces a function pointer, sys_reg_desc.forward_trap, used to
determine whether a trapped system instruction needs to be forwarded to
the virtual EL2. Implementations of forward_trap for the individual
system instructions will be added in later patches.
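For illustration, a forward_trap implementation is expected to test a
single trap-control bit in the virtual HCR_EL2. The sketch below is
modelled on the AT handling added later in this series and is not part
of this patch:

    /* Sketch: returns true if the guest hypervisor set HCR_EL2.AT, in
     * which case the trap is re-injected into virtual EL2 and the
     * host-side emulation must be skipped. */
    static bool forward_at_traps(struct kvm_vcpu *vcpu)
    {
            return forward_traps(vcpu, HCR_AT);
    }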
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 8 ++++++++
arch/arm64/kvm/sys_regs.h | 6 ++++++
2 files changed, 14 insertions(+)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 7353d5eaeaca..14d1aac58f26 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2532,6 +2532,14 @@ static void perform_access(struct kvm_vcpu *vcpu,
*/
BUG_ON(!r->access);
+ /*
+ * Forward this trap to the virtual EL2 if the guest hypervisor
+ * has configured a trap for the current instruction.
+ */
+ if (nested_virt_in_use(vcpu) && r->forward_trap
+ && unlikely(r->forward_trap(vcpu)))
+ return;
+
/* Skip instruction if instructed so */
if (likely(r->access(vcpu, params, r)))
kvm_incr_pc(vcpu);
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index c6fbe3a7855e..12227a3a6bb3 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -58,6 +58,12 @@ struct sys_reg_desc {
int (*set_user)(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
const struct kvm_one_reg *reg, void __user *uaddr);
+ /*
+ * Forward the trap to the virtual EL2 if the guest hypervisor
+ * has configured a trap for the current instruction.
+ */
+ bool (*forward_trap)(struct kvm_vcpu *vcpu);
+
/* Return mask of REG_* runtime visibility overrides */
unsigned int (*visibility)(const struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd);
--
2.29.2
* [PATCH v4 39/66] KVM: arm64: nv: Set a handler for the system instruction traps
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
From: Jintack Lim <jintack.lim@linaro.org>
When the HCR_EL2.NV bit is set, execution of the EL2 translation regime
address translation instructions and TLB maintenance instructions is
trapped to EL2. In addition, execution of the EL1 translation regime
address translation instructions and TLB maintenance instructions that
are only accessible from EL2 and above is trapped to EL2. In these
cases, ESR_EL2.EC will be set to 0x18.
Rework the system instruction emulation framework to handle potentially
all system instruction traps other than MSR/MRS instructions. Those
system instructions are the AT and TLBI instructions, controlled by the
HCR_EL2 NV, AT, and TTLB bits.
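For reference, the reworked handler dispatches on Op0 alone: system
instructions encode with Op0 == 1, while MRS/MSR register accesses use
Op0 == 2 or 3. A condensed sketch of the resulting logic (the local
variable is illustrative; the AT S1E1R encoding shown matches the
OP_AT_* definitions added by the next patch in this series):

    u32 at_s1e1r = sys_insn(1, 0, 7, 8, 0);    /* Op0 == 1: system instruction */

    if (params.Op0 == 1)
            ret = emulate_sys_instr(vcpu, &params);    /* AT, TLBI, DC, ... */
    else
            ret = emulate_sys_reg(vcpu, &params);      /* MRS/MSR */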
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
[maz: squashed two patches together, redispatched various bits around]
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 4 +--
arch/arm64/kvm/handle_exit.c | 2 +-
arch/arm64/kvm/sys_regs.c | 48 +++++++++++++++++++++++++------
3 files changed, 42 insertions(+), 12 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index b7d6a829091c..9a811778106c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -392,7 +392,7 @@ struct kvm_vcpu_arch {
/*
* Guest registers we preserve during guest debugging.
*
- * These shadow registers are updated by the kvm_handle_sys_reg
+ * These shadow registers are updated by the kvm_handle_sys
* trap handler if the guest accesses or updates them while we
* are using guest debug.
*/
@@ -712,7 +712,7 @@ int kvm_handle_cp14_32(struct kvm_vcpu *vcpu);
int kvm_handle_cp14_64(struct kvm_vcpu *vcpu);
int kvm_handle_cp15_32(struct kvm_vcpu *vcpu);
int kvm_handle_cp15_64(struct kvm_vcpu *vcpu);
-int kvm_handle_sys_reg(struct kvm_vcpu *vcpu);
+int kvm_handle_sys(struct kvm_vcpu *vcpu);
void kvm_reset_sys_regs(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 5f1e3989c4bc..b63653004fa8 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -248,7 +248,7 @@ static exit_handle_fn arm_exit_handlers[] = {
[ESR_ELx_EC_SMC32] = handle_smc,
[ESR_ELx_EC_HVC64] = handle_hvc,
[ESR_ELx_EC_SMC64] = handle_smc,
- [ESR_ELx_EC_SYS64] = kvm_handle_sys_reg,
+ [ESR_ELx_EC_SYS64] = kvm_handle_sys,
[ESR_ELx_EC_SVE] = handle_sve,
[ESR_ELx_EC_ERET] = kvm_handle_eret,
[ESR_ELx_EC_IABT_LOW] = kvm_handle_guest_abort,
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 14d1aac58f26..79f6c88425de 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1692,10 +1692,6 @@ static bool access_spsr_el2(struct kvm_vcpu *vcpu,
* more demanding guest...
*/
static const struct sys_reg_desc sys_reg_descs[] = {
- { SYS_DESC(SYS_DC_ISW), access_dcsw },
- { SYS_DESC(SYS_DC_CSW), access_dcsw },
- { SYS_DESC(SYS_DC_CISW), access_dcsw },
-
DBG_BCR_BVR_WCR_WVR_EL1(0),
DBG_BCR_BVR_WCR_WVR_EL1(1),
{ SYS_DESC(SYS_MDCCINT_EL1), trap_debug_regs, reset_val, MDCCINT_EL1, 0 },
@@ -2157,6 +2153,14 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_SP_EL2), NULL, reset_unknown, SP_EL2 },
};
+#define SYS_INSN_TO_DESC(insn, access_fn, forward_fn) \
+ { SYS_DESC((insn)), (access_fn), NULL, 0, 0, NULL, NULL, (forward_fn) }
+static struct sys_reg_desc sys_insn_descs[] = {
+ { SYS_DESC(SYS_DC_ISW), access_dcsw },
+ { SYS_DESC(SYS_DC_CSW), access_dcsw },
+ { SYS_DESC(SYS_DC_CISW), access_dcsw },
+};
+
static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
@@ -2728,6 +2732,24 @@ static int emulate_sys_reg(struct kvm_vcpu *vcpu,
return 1;
}
+static int emulate_sys_instr(struct kvm_vcpu *vcpu, struct sys_reg_params *p)
+{
+ const struct sys_reg_desc *r;
+
+ /* Search from the system instruction table. */
+ r = find_reg(p, sys_insn_descs, ARRAY_SIZE(sys_insn_descs));
+
+ if (likely(r)) {
+ perform_access(vcpu, p, r);
+ } else {
+ kvm_err("Unsupported guest sys instruction at: %lx\n",
+ *vcpu_pc(vcpu));
+ print_sys_reg_instr(p);
+ kvm_inject_undefined(vcpu);
+ }
+ return 1;
+}
+
/**
* kvm_reset_sys_regs - sets system registers to reset value
* @vcpu: The VCPU pointer
@@ -2745,10 +2767,11 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
}
/**
- * kvm_handle_sys_reg -- handles a mrs/msr trap on a guest sys_reg access
+ * kvm_handle_sys -- handles a system instruction or MRS/MSR instruction
+ * trap taken during guest execution
* @vcpu: The VCPU pointer
*/
-int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
+int kvm_handle_sys(struct kvm_vcpu *vcpu)
{
struct sys_reg_params params;
unsigned long esr = kvm_vcpu_get_esr(vcpu);
@@ -2765,10 +2788,16 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
params.regval = vcpu_get_reg(vcpu, Rt);
params.is_write = !(esr & 1);
- ret = emulate_sys_reg(vcpu, &params);
+ if (params.Op0 == 1) {
+ /* System instructions */
+ ret = emulate_sys_instr(vcpu, &params);
+ } else {
+ /* MRS/MSR instructions */
+ ret = emulate_sys_reg(vcpu, &params);
+ if (!params.is_write)
+ vcpu_set_reg(vcpu, Rt, params.regval);
+ }
- if (!params.is_write)
- vcpu_set_reg(vcpu, Rt, params.regval);
return ret;
}
@@ -3184,6 +3213,7 @@ void kvm_sys_reg_table_init(void)
BUG_ON(check_sysreg_table(cp15_regs, ARRAY_SIZE(cp15_regs), true));
BUG_ON(check_sysreg_table(cp15_64_regs, ARRAY_SIZE(cp15_64_regs), true));
BUG_ON(check_sysreg_table(invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs), false));
+ BUG_ON(check_sysreg_table(sys_insn_descs, ARRAY_SIZE(sys_insn_descs), false));
/* We abuse the reset function to overwrite the table itself. */
for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++)
--
2.29.2
* [PATCH v4 40/66] KVM: arm64: nv: Trap and emulate AT instructions from virtual EL2
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
From: Jintack Lim <jintack.lim@linaro.org>
When supporting nested virtualization, AT instructions executed by a
guest hypervisor must be trapped and emulated by the host hypervisor,
because untrapped AT instructions operating on S1E1 will use the wrong
translation regime (the one used to emulate virtual EL2 in EL1 instead
of virtual EL1) and AT instructions operating on S12 will not work from
EL1.
This patch does several things:
1. List and define all AT system instructions to emulate and document
the emulation design.
2. Implement the AT instruction handling logic in EL2. This will be used
to emulate AT instructions executed in the virtual EL2.
AT instruction emulation works by loading the proper processor
context, which depends on the trapped instruction and the virtual
HCR_EL2, into the EL1 virtual memory control registers and executing AT
instructions. Note that ctxt->hw_sys_regs is expected to hold the
proper processor context before calling the handling function
(__kvm_at_insn) implemented in this patch.
3. Emulate AT S1E[01] instructions by issuing the same instructions in
EL2. We set the physical EL1 registers, NV and NV1 bits as described in
the AT instruction emulation overview.
4. Emulate AT S12E[01] instructions in two steps: first, do the stage-1
translation by reusing the existing AT emulation functions; second, do
the stage-2 translation by walking the guest hypervisor's stage-2 page
table in software, and record the translation result in PAR_EL1 (see
the sketch after this list).
5. Emulate AT S1E2 instructions by issuing the corresponding S1E1
instructions in EL2. We set the physical EL1 registers and the HCR_EL2
register as described in the AT instruction emulation overview.
6. Forward system instruction traps to the virtual EL2 if the
corresponding virtual AT bit is set in the virtual HCR_EL2.
[ Much logic above has been reworked by Marc Zyngier ]
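As referenced in item 4 above, a condensed sketch of the S12E[01] flow
that handle_s12() implements in this patch (locals as in that function;
error handling and the PAR_EL1 fault encoding omitted):

    /* Stage 1 in hardware, stage 2 in software. */
    handle_s1e01(vcpu, p, r);                          /* HW AT walk */
    par = vcpu_read_sys_reg(vcpu, PAR_EL1);
    if (!(par & 1)) {                                  /* stage 1 succeeded */
            ipa = (par & GENMASK_ULL(47, 12)) | (va & GENMASK_ULL(11, 0));
            ret = kvm_walk_nested_s2(vcpu, ipa, &out); /* SW stage-2 walk */
    }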
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
---
arch/arm64/include/asm/kvm_arm.h | 2 +
arch/arm64/include/asm/kvm_asm.h | 2 +
arch/arm64/include/asm/sysreg.h | 17 +++
arch/arm64/kvm/Makefile | 2 +-
arch/arm64/kvm/at.c | 231 +++++++++++++++++++++++++++++++
arch/arm64/kvm/hyp/vhe/switch.c | 13 +-
arch/arm64/kvm/sys_regs.c | 201 ++++++++++++++++++++++++++-
7 files changed, 463 insertions(+), 5 deletions(-)
create mode 100644 arch/arm64/kvm/at.c
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index eb0d00d8a431..6a4a11fcc9df 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -14,6 +14,7 @@
/* Hyp Configuration Register (HCR) bits */
#define HCR_ATA (UL(1) << 56)
#define HCR_FWB (UL(1) << 46)
+#define HCR_AT (UL(1) << 44)
#define HCR_NV1 (UL(1) << 43)
#define HCR_NV (UL(1) << 42)
#define HCR_API (UL(1) << 41)
@@ -110,6 +111,7 @@
#define VTCR_EL2_TG0_16K TCR_TG0_16K
#define VTCR_EL2_TG0_64K TCR_TG0_64K
#define VTCR_EL2_SH0_MASK TCR_SH0_MASK
+#define VTCR_EL2_SH0_SHIFT TCR_SH0_SHIFT
#define VTCR_EL2_SH0_INNER TCR_SH0_INNER
#define VTCR_EL2_ORGN0_MASK TCR_ORGN0_MASK
#define VTCR_EL2_ORGN0_WBWA TCR_ORGN0_WBWA
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index cf8df032b9c3..c61d52c51e43 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -198,6 +198,8 @@ extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
extern void __kvm_timer_set_cntvoff(u64 cntvoff);
+extern void __kvm_at_s1e01(struct kvm_vcpu *vcpu, u32 op, u64 vaddr);
+extern void __kvm_at_s1e2(struct kvm_vcpu *vcpu, u32 op, u64 vaddr);
extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 2704738d644a..625c040e4f72 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -651,6 +651,23 @@
#define SYS_SP_EL2 sys_reg(3, 6, 4, 1, 0)
+/* AT instructions */
+#define AT_Op0 1
+#define AT_CRn 7
+
+#define OP_AT_S1E1R sys_insn(AT_Op0, 0, AT_CRn, 8, 0)
+#define OP_AT_S1E1W sys_insn(AT_Op0, 0, AT_CRn, 8, 1)
+#define OP_AT_S1E0R sys_insn(AT_Op0, 0, AT_CRn, 8, 2)
+#define OP_AT_S1E0W sys_insn(AT_Op0, 0, AT_CRn, 8, 3)
+#define OP_AT_S1E1RP sys_insn(AT_Op0, 0, AT_CRn, 9, 0)
+#define OP_AT_S1E1WP sys_insn(AT_Op0, 0, AT_CRn, 9, 1)
+#define OP_AT_S1E2R sys_insn(AT_Op0, 4, AT_CRn, 8, 0)
+#define OP_AT_S1E2W sys_insn(AT_Op0, 4, AT_CRn, 8, 1)
+#define OP_AT_S12E1R sys_insn(AT_Op0, 4, AT_CRn, 8, 4)
+#define OP_AT_S12E1W sys_insn(AT_Op0, 4, AT_CRn, 8, 5)
+#define OP_AT_S12E0R sys_insn(AT_Op0, 4, AT_CRn, 8, 6)
+#define OP_AT_S12E0W sys_insn(AT_Op0, 4, AT_CRn, 8, 7)
+
/* Common SCTLR_ELx flags. */
#define SCTLR_ELx_DSSBS (BIT(44))
#define SCTLR_ELx_ATA (BIT(43))
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 598526c064f7..464c7ace2fb2 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -16,7 +16,7 @@ kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
inject_fault.o va_layout.o handle_exit.o \
guest.o debug.o reset.o sys_regs.o \
vgic-sys-reg-v3.o fpsimd.o pmu.o \
- arch_timer.o trng.o emulate-nested.o nested.o \
+ arch_timer.o trng.o emulate-nested.o nested.o at.o \
vgic/vgic.o vgic/vgic-init.o \
vgic/vgic-irqfd.o vgic/vgic-v2.o \
vgic/vgic-v3.o vgic/vgic-v4.o \
diff --git a/arch/arm64/kvm/at.c b/arch/arm64/kvm/at.c
new file mode 100644
index 000000000000..c345ef98ca1e
--- /dev/null
+++ b/arch/arm64/kvm/at.c
@@ -0,0 +1,231 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2017 - Linaro Ltd
+ * Author: Jintack Lim <jintack.lim@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <asm/kvm_hyp.h>
+#include <asm/kvm_mmu.h>
+
+struct mmu_config {
+ u64 ttbr0;
+ u64 ttbr1;
+ u64 tcr;
+ u64 sctlr;
+ u64 vttbr;
+ u64 vtcr;
+ u64 hcr;
+};
+
+static void __mmu_config_save(struct mmu_config *config)
+{
+ config->ttbr0 = read_sysreg_el1(SYS_TTBR0);
+ config->ttbr1 = read_sysreg_el1(SYS_TTBR1);
+ config->tcr = read_sysreg_el1(SYS_TCR);
+ config->sctlr = read_sysreg_el1(SYS_SCTLR);
+ config->vttbr = read_sysreg(vttbr_el2);
+ config->vtcr = read_sysreg(vtcr_el2);
+ config->hcr = read_sysreg(hcr_el2);
+}
+
+static void __mmu_config_restore(struct mmu_config *config)
+{
+ write_sysreg_el1(config->ttbr0, SYS_TTBR0);
+ write_sysreg_el1(config->ttbr1, SYS_TTBR1);
+ write_sysreg_el1(config->tcr, SYS_TCR);
+ write_sysreg_el1(config->sctlr, SYS_SCTLR);
+ write_sysreg(config->vttbr, vttbr_el2);
+ write_sysreg(config->vtcr, vtcr_el2);
+ write_sysreg(config->hcr, hcr_el2);
+
+ isb();
+}
+
+void __kvm_at_s1e01(struct kvm_vcpu *vcpu, u32 op, u64 vaddr)
+{
+ struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt;
+ struct mmu_config config;
+ struct kvm_s2_mmu *mmu;
+
+ spin_lock(&vcpu->kvm->mmu_lock);
+
+ /*
+ * If HCR_EL2.{E2H,TGE} == {1,1}, the MMU context is already
+ * the right one (as we trapped from vEL2).
+ */
+ if (vcpu_el2_e2h_is_set(vcpu) && vcpu_el2_tge_is_set(vcpu))
+ goto skip_mmu_switch;
+
+ /*
+ * FIXME: Obtaining the S2 MMU for a guest is horribly
+ * racy, and we may not find it (evicted by another vcpu, for
+ * example).
+ */
+ mmu = lookup_s2_mmu(vcpu->kvm,
+ vcpu_read_sys_reg(vcpu, VTTBR_EL2),
+ vcpu_read_sys_reg(vcpu, HCR_EL2));
+
+ if (WARN_ON(!mmu))
+ goto out;
+
+ /* We've trapped, so everything is live on the CPU. */
+ __mmu_config_save(&config);
+
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TTBR0_EL1), SYS_TTBR0);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TTBR1_EL1), SYS_TTBR1);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TCR_EL1), SYS_TCR);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL1), SYS_SCTLR);
+ write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
+ /*
+ * REVISIT: do we need anything from the guest's VTCR_EL2? It
+ * looks like keeping the host's configuration is the right
+ * thing to do at this stage (and we could then avoid saving and
+ * restoring it). Keep the host's version for now.
+ */
+ write_sysreg((config.hcr & ~HCR_TGE) | HCR_VM, hcr_el2);
+
+ isb();
+
+skip_mmu_switch:
+
+ switch (op) {
+ case OP_AT_S1E1R:
+ case OP_AT_S1E1RP:
+ asm volatile("at s1e1r, %0" : : "r" (vaddr));
+ break;
+ case OP_AT_S1E1W:
+ case OP_AT_S1E1WP:
+ asm volatile("at s1e1w, %0" : : "r" (vaddr));
+ break;
+ case OP_AT_S1E0R:
+ asm volatile("at s1e0r, %0" : : "r" (vaddr));
+ break;
+ case OP_AT_S1E0W:
+ asm volatile("at s1e0w, %0" : : "r" (vaddr));
+ break;
+ default:
+ WARN_ON_ONCE(1);
+ break;
+ }
+
+ isb();
+
+ ctxt_sys_reg(ctxt, PAR_EL1) = read_sysreg(par_el1);
+
+ /*
+ * Failed? let's leave the building now.
+ *
+ * FIXME: how about a failed translation because the shadow S2
+ * wasn't populated? We may need to perform a SW PTW,
+ * populating our shadow S2 and retrying the instruction.
+ */
+ if (ctxt_sys_reg(ctxt, PAR_EL1) & 1)
+ goto nopan;
+
+ /* No PAN? No problem. */
+ if (!(*vcpu_cpsr(vcpu) & PSR_PAN_BIT))
+ goto nopan;
+
+ /*
+ * For PAN-involved AT operations, perform the same
+ * translation, using EL0 this time.
+ */
+ switch (op) {
+ case OP_AT_S1E1RP:
+ asm volatile("at s1e0r, %0" : : "r" (vaddr));
+ break;
+ case OP_AT_S1E1WP:
+ asm volatile("at s1e0w, %0" : : "r" (vaddr));
+ break;
+ default:
+ goto nopan;
+ }
+
+ /*
+ * If the EL0 translation has succeeded, we need to pretend
+ * the AT operation has failed, as the PAN setting forbids
+ * such a translation.
+ *
+ * FIXME: we hardcode a Level-3 permission fault. We really
+ * should return the real fault level.
+ */
+ if (!(read_sysreg(par_el1) & 1))
+ ctxt_sys_reg(ctxt, PAR_EL1) = 0x1f;
+
+nopan:
+ if (!(vcpu_el2_e2h_is_set(vcpu) && vcpu_el2_tge_is_set(vcpu)))
+ __mmu_config_restore(&config);
+
+out:
+ spin_unlock(&vcpu->kvm->mmu_lock);
+}
+
+void __kvm_at_s1e2(struct kvm_vcpu *vcpu, u32 op, u64 vaddr)
+{
+ struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt;
+ struct mmu_config config;
+ struct kvm_s2_mmu *mmu;
+ u64 val;
+
+ spin_lock(&vcpu->kvm->mmu_lock);
+
+ mmu = &vcpu->kvm->arch.mmu;
+
+ /* We've trapped, so everything is live on the CPU. */
+ __mmu_config_save(&config);
+
+ if (vcpu_el2_e2h_is_set(vcpu)) {
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TTBR0_EL2), SYS_TTBR0);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TTBR1_EL2), SYS_TTBR1);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TCR_EL2), SYS_TCR);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL2), SYS_SCTLR);
+
+ val = config.hcr;
+ } else {
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TTBR0_EL2), SYS_TTBR0);
+ val = translate_tcr_el2_to_tcr_el1(ctxt_sys_reg(ctxt, TCR_EL2));
+ write_sysreg_el1(val, SYS_TCR);
+ val = translate_sctlr_el2_to_sctlr_el1(ctxt_sys_reg(ctxt, SCTLR_EL2));
+ write_sysreg_el1(val, SYS_SCTLR);
+
+ val = config.hcr | HCR_NV | HCR_NV1;
+ }
+
+ write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
+ /* FIXME: write S2 MMU VTCR_EL2? */
+ write_sysreg((val & ~HCR_TGE) | HCR_VM, hcr_el2);
+
+ isb();
+
+ switch (op) {
+ case OP_AT_S1E2R:
+ asm volatile("at s1e1r, %0" : : "r" (vaddr));
+ break;
+ case OP_AT_S1E2W:
+ asm volatile("at s1e1w, %0" : : "r" (vaddr));
+ break;
+ default:
+ WARN_ON_ONCE(1);
+ break;
+ }
+
+ isb();
+
+ /* FIXME: handle failed translation due to shadow S2 */
+ ctxt_sys_reg(ctxt, PAR_EL1) = read_sysreg(par_el1);
+
+ __mmu_config_restore(&config);
+ spin_unlock(&vcpu->kvm->mmu_lock);
+}
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 79789850639b..7715b8254a76 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -43,9 +43,10 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
if (!vcpu_el2_e2h_is_set(vcpu)) {
/*
* For a guest hypervisor on v8.0, trap and emulate
- * the EL1 virtual memory control register accesses.
+ * the EL1 virtual memory control register accesses
+ * as well as the AT S1 operations.
*/
- hcr |= HCR_TVM | HCR_TRVM | HCR_NV1;
+ hcr |= HCR_TVM | HCR_TRVM | HCR_AT | HCR_NV1;
} else {
/*
* For a guest hypervisor on v8.1 (VHE), allow to
@@ -68,6 +69,14 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
hcr &= ~HCR_TVM;
hcr |= vhcr_el2 & (HCR_TVM | HCR_TRVM);
+
+ /*
+ * If we're using the EL1 translation regime
+ * (TGE clear), then ensure that AT S1 ops are
+ * trapped too.
+ */
+ if (!vcpu_el2_tge_is_set(vcpu))
+ hcr |= HCR_AT;
}
}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 79f6c88425de..9d9108e9fcf7 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1625,6 +1625,10 @@ static bool access_sp_el1(struct kvm_vcpu *vcpu,
return true;
}
+static bool forward_at_traps(struct kvm_vcpu *vcpu)
+{
+ return forward_traps(vcpu, HCR_AT);
+}
static bool access_elr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
@@ -2153,12 +2157,205 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_SP_EL2), NULL, reset_unknown, SP_EL2 },
};
-#define SYS_INSN_TO_DESC(insn, access_fn, forward_fn) \
- { SYS_DESC((insn)), (access_fn), NULL, 0, 0, NULL, NULL, (forward_fn) }
+static bool handle_s1e01(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ int sys_encoding = sys_insn(p->Op0, p->Op1, p->CRn, p->CRm, p->Op2);
+
+ __kvm_at_s1e01(vcpu, sys_encoding, p->regval);
+
+ return true;
+}
+
+static bool handle_s1e2(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ int sys_encoding = sys_insn(p->Op0, p->Op1, p->CRn, p->CRm, p->Op2);
+
+ __kvm_at_s1e2(vcpu, sys_encoding, p->regval);
+
+ return true;
+}
+
+static u64 setup_par_aborted(u32 esr)
+{
+ u64 par = 0;
+
+ /* S [9]: fault in the stage 2 translation */
+ par |= (1 << 9);
+ /* FST [6:1]: Fault status code */
+ par |= (esr << 1);
+ /* F [0]: translation is aborted */
+ par |= 1;
+
+ return par;
+}
+
+static u64 setup_par_completed(struct kvm_vcpu *vcpu, struct kvm_s2_trans *out)
+{
+ u64 par, vtcr_sh0;
+
+ /* F [0]: Translation is completed successfully */
+ par = 0;
+ /* ATTR [63:56] */
+ par |= out->upper_attr;
+ /* PA [47:12] */
+ par |= out->output & GENMASK_ULL(47, 12);
+ /* RES1 [11] */
+ par |= (1UL << 11);
+ /* SH [8:7]: Shareability attribute */
+ vtcr_sh0 = vcpu_read_sys_reg(vcpu, VTCR_EL2) & VTCR_EL2_SH0_MASK;
+ par |= (vtcr_sh0 >> VTCR_EL2_SH0_SHIFT) << 7;
+
+ return par;
+}
+
+static bool handle_s12(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r, bool write)
+{
+ u64 par, va;
+ u32 esr;
+ phys_addr_t ipa;
+ struct kvm_s2_trans out;
+ int ret;
+
+ /* Do the stage-1 translation */
+ handle_s1e01(vcpu, p, r);
+ par = vcpu_read_sys_reg(vcpu, PAR_EL1);
+ if (par & 1) {
+ /* The stage-1 translation aborted */
+ return true;
+ }
+
+ /* Do the stage-2 translation */
+ va = p->regval;
+ ipa = (par & GENMASK_ULL(47, 12)) | (va & GENMASK_ULL(11, 0));
+ out.esr = 0;
+ ret = kvm_walk_nested_s2(vcpu, ipa, &out);
+ if (ret < 0)
+ return false;
+
+ /* Check if the stage-2 PTW is aborted */
+ if (out.esr) {
+ esr = out.esr;
+ goto s2_trans_abort;
+ }
+
+ /* Check the access permission */
+ if ((!write && !out.readable) || (write && !out.writable)) {
+ esr = ESR_ELx_FSC_PERM;
+ esr |= out.level & 0x3;
+ goto s2_trans_abort;
+ }
+
+ vcpu_write_sys_reg(vcpu, setup_par_completed(vcpu, &out), PAR_EL1);
+ return true;
+
+s2_trans_abort:
+ vcpu_write_sys_reg(vcpu, setup_par_aborted(esr), PAR_EL1);
+ return true;
+}
+
+static bool handle_s12r(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ return handle_s12(vcpu, p, r, false);
+}
+
+static bool handle_s12w(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ return handle_s12(vcpu, p, r, true);
+}
+
+/*
+ * AT instruction emulation
+ *
+ * We emulate AT instructions executed in the virtual EL2.
+ * Basic strategy for the stage-1 translation emulation is to load proper
+ * context, which depends on the trapped instruction and the virtual HCR_EL2,
+ * to the EL1 virtual memory control registers and execute S1E[01] instructions
+ * in EL2. See below for more detail.
+ *
+ * For the stage-2 translation, which is necessary for S12E[01] emulation,
+ * we walk the guest hypervisor's stage-2 page table in software.
+ *
+ * The stage-1 translation emulations can be divided into two groups depending
+ * on the translation regime.
+ *
+ * 1. EL2 AT instructions: S1E2x
+ * +-----------------------------------------------------------------------+
+ * | | Setting for the emulation |
+ * | Virtual HCR_EL2.E2H on trap |-----------------------------------------+
+ * | | Phys EL1 regs | Phys NV, NV1 | Phys TGE |
+ * |-----------------------------------------------------------------------|
+ * | 0 | vEL2 | (1, 1) | 0 |
+ * | 1 | vEL2 | (0, 0) | 0 |
+ * +-----------------------------------------------------------------------+
+ *
+ * We emulate the EL2 AT instructions by loading virtual EL2 context
+ * to the EL1 virtual memory control registers and executing corresponding
+ * EL1 AT instructions.
+ *
+ * We set the physical NV and NV1 bits to use the EL2 page table format for
+ * a non-VHE guest hypervisor (i.e. HCR_EL2.E2H == 0). As a VHE guest
+ * hypervisor uses the EL1 page table format, we don't set those bits.
+ *
+ * We clear the physical TGE bit so as not to use the EL2 translation regime
+ * when the host uses the VHE feature.
+ *
+ *
+ * 2. EL0/EL1 AT instructions: S1E[01]x, S12E1x
+ * +----------------------------------------------------------------------+
+ * | Virtual HCR_EL2 on trap | Setting for the emulation |
+ * |----------------------------------------------------------------------+
+ * | (vE2H, vTGE) | (vNV, vNV1) | Phys EL1 regs | Phys NV, NV1 | Phys TGE |
+ * |----------------------------------------------------------------------|
+ * | (0, 0)* | (0, 0) | vEL1 | (0, 0) | 0 |
+ * | (0, 0) | (1, 1) | vEL1 | (1, 1) | 0 |
+ * | (1, 1) | (0, 0) | vEL2 | (0, 0) | 0 |
+ * | (1, 1) | (1, 1) | vEL2 | (1, 1) | 0 |
+ * +----------------------------------------------------------------------+
+ *
+ * *For (0, 0) in the 'Virtual HCR_EL2 on trap' column, it actually means
+ * (1, 1). Keep them (0, 0) just for readability.
+ *
+ * We set the physical EL1 virtual memory control registers depending on the
+ * (vE2H, vTGE) pair. When the pair is (0, 0), where AT instructions are
+ * supposed to use the EL0/EL1 translation regime, we load the EL1 registers
+ * with the virtual EL1 registers (i.e. the EL1 registers from the guest
+ * hypervisor's point of view). When the pair is (1, 1), however, AT
+ * instructions are defined to apply the EL2 translation regime. To emulate
+ * this behavior, we load the EL1 registers with the virtual EL2 context
+ * (i.e. the shadow registers).
+ *
+ * We respect the virtual NV and NV1 bit for the emulation. When those bits are
+ * set, it means that a guest hypervisor would like to use EL2 page table format
+ * for the EL1 translation regime. We emulate this by setting the physical
+ * NV and NV1 bits.
+ */
+
+#define SYS_INSN_TO_DESC(insn, access_fn, forward_fn) \
+ { SYS_DESC(OP_##insn), (access_fn), NULL, 0, 0, \
+ NULL, NULL, (forward_fn) }
static struct sys_reg_desc sys_insn_descs[] = {
{ SYS_DESC(SYS_DC_ISW), access_dcsw },
+
+ SYS_INSN_TO_DESC(AT_S1E1R, handle_s1e01, forward_at_traps),
+ SYS_INSN_TO_DESC(AT_S1E1W, handle_s1e01, forward_at_traps),
+ SYS_INSN_TO_DESC(AT_S1E0R, handle_s1e01, forward_at_traps),
+ SYS_INSN_TO_DESC(AT_S1E0W, handle_s1e01, forward_at_traps),
+ SYS_INSN_TO_DESC(AT_S1E1RP, handle_s1e01, forward_at_traps),
+ SYS_INSN_TO_DESC(AT_S1E1WP, handle_s1e01, forward_at_traps),
+
{ SYS_DESC(SYS_DC_CSW), access_dcsw },
{ SYS_DESC(SYS_DC_CISW), access_dcsw },
+
+ SYS_INSN_TO_DESC(AT_S1E2R, handle_s1e2, forward_nv_traps),
+ SYS_INSN_TO_DESC(AT_S1E2W, handle_s1e2, forward_nv_traps),
+ SYS_INSN_TO_DESC(AT_S12E1R, handle_s12r, forward_nv_traps),
+ SYS_INSN_TO_DESC(AT_S12E1W, handle_s12w, forward_nv_traps),
+ SYS_INSN_TO_DESC(AT_S12E0R, handle_s12r, forward_nv_traps),
+ SYS_INSN_TO_DESC(AT_S12E0W, handle_s12w, forward_nv_traps),
};
static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
--
2.29.2
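As a standalone illustration of the PAR_EL1 values that setup_par_completed()
and setup_par_aborted() build in this patch, here is a minimal user-space
model. The field positions (F[0], FST[6:1], S[9], RES1[11], SH[8:7],
PA[47:12]) follow the Arm ARM; the sample values in main() are hypothetical,
and the attribute bits [63:56] that the patch also fills in are omitted for
brevity.

#include <stdio.h>
#include <stdint.h>

#define PAR_F           (1ULL << 0)            /* 1: translation aborted */
#define PAR_FST_SHIFT   1                      /* FST[6:1]: fault status code */
#define PAR_S2          (1ULL << 9)            /* fault came from stage 2 */
#define PAR_RES1        (1ULL << 11)
#define PAR_SH_SHIFT    7                      /* SH[8:7]: shareability */
#define PAR_PA_MASK     0x0000fffffffff000ULL  /* PA[47:12] */

/* Mirrors setup_par_aborted(): F=1, S=1, FST from the stage-2 walk */
static uint64_t par_aborted(uint32_t fst)
{
        return PAR_F | ((uint64_t)fst << PAR_FST_SHIFT) | PAR_S2;
}

/* Mirrors setup_par_completed(): F=0, RES1, SH from VTCR_EL2.SH0, PA */
static uint64_t par_completed(uint64_t pa, unsigned int sh)
{
        return (pa & PAR_PA_MASK) | PAR_RES1 |
               ((uint64_t)(sh & 3) << PAR_SH_SHIFT);
}

int main(void)
{
        uint64_t par = par_completed(0x80123000ULL, 3);

        if (!(par & PAR_F))
                printf("PA = 0x%llx\n",
                       (unsigned long long)(par & PAR_PA_MASK));

        par = par_aborted(0x3d);        /* hypothetical fault status code */
        printf("aborted PAR_EL1 = 0x%llx\n", (unsigned long long)par);
        return 0;
}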
^ permalink raw reply related [flat|nested] 229+ messages in thread
* [PATCH v4 41/66] KVM: arm64: nv: Trap and emulate TLBI instructions from virtual EL2
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
When supporting nested virtualization, a guest hypervisor executing TLBI
instructions must be trapped and emulated by the host hypervisor, because
the guest hypervisor can only affect physical TLB entries relating to its
own execution environment (virtual EL2 in EL1), but not those of the
nested guests, as required by the semantics of the instructions. TLBI
instructions may also result in updates (invalidations) to the shadow
page tables.
This patch does several things.
1. List and define all TLBI system instructions to emulate (see the
encoding sketch after this list).
2. Emulate the TLBI ALLE2(IS) instruction executed in the virtual EL2. Since
we emulate virtual EL2 in EL1, we invalidate EL1&0 regime stage 1 TLB
entries after setting vttbr_el2 to the VMID of the virtual EL2.
3. Emulate the TLBI VAE2* instructions executed in the virtual EL2. Based on
the same principle as the TLBI ALLE2 instruction, we can simply emulate
those instructions by executing the corresponding VAE1* instructions with
the virtual EL2's VMID assigned by the host hypervisor.
Note that we are able to emulate TLBI ALLE2IS precisely by invalidating
only stage 1 TLB entries via the TLBI VMALLE1IS instruction, but to keep
it simple, we reuse the existing function, __kvm_tlb_flush_vmid(), which
invalidates both stage 1 and stage 2 TLB entries.
4. The TLBI ALLE1(IS) instruction invalidates all EL1&0 regime stage 1 and 2
TLB entries (on all PEs in the same Inner Shareable domain). To emulate
these instructions, we first need to clear all the mappings in the
shadow page tables, since executing those instructions implies a change
of the mappings in the stage 2 page tables maintained by the guest
hypervisor. We then need to invalidate all EL1&0 regime stage 1 and 2
TLB entries of all VMIDs, which are assigned by the host hypervisor, for
this VM.
5. Based on the same principle as TLBI ALLE1(IS) emulation, we clear the
mappings in the shadow stage-2 page tables and invalidate TLB entries.
But this time we do it only for the current VMID from the guest
hypervisor's perspective, not for all VMIDs.
6. Based on the same principle as TLBI ALLE1(IS) and TLBI VMALLS12E1(IS)
emulation, we clear the mappings in the shadow stage-2 page tables and
invalidate TLB entries. We do it only for one mapping for the current
VMID from the guest hypervisor's view.
7. Forward system instruction traps to the virtual EL2 if a
corresponding bit in the virtual HCR_EL2 is set.
8. Even though a guest hypervisor can execute TLBI instructions that are
accessible at EL1 without being trapped, that would be wrong: all those
TLBI instructions operate on the current VMID, and when running a guest
hypervisor the current VMID is the one assigned to the guest hypervisor
itself, not the one derived from the virtual vttbr_el2. Letting a guest
hypervisor execute those TLBI instructions would thus invalidate its own
TLB entries and leave stale TLB entries unhandled.
We therefore trap and emulate those TLBI instructions. The emulation is
simple: we find the shadow VMID mapped to the virtual vttbr_el2, set it in
the physical vttbr_el2, then execute the same instruction in EL2.
We don't set the HCR_EL2.TTLB bit yet.
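The encoding sketch referenced in item 1 above: a minimal, compilable model
of how the OP_TLBI_* (and OP_AT_*) constants are packed and how a trapped
instruction is matched against them. The field shifts mirror
arch/arm64/include/asm/sysreg.h; the sample trap in main() is hypothetical.

#include <stdio.h>

/* Field shifts as used by sys_insn()/sys_reg() in asm/sysreg.h */
#define Op0_shift 19
#define Op1_shift 16
#define CRn_shift 12
#define CRm_shift  8
#define Op2_shift  5

#define sys_insn(op0, op1, crn, crm, op2)                 \
        (((op0) << Op0_shift) | ((op1) << Op1_shift) |    \
         ((crn) << CRn_shift) | ((crm) << CRm_shift) |    \
         ((op2) << Op2_shift))

/* TLBI encodings, as in the sysreg.h hunk below */
#define OP_TLBI_VAE1IS  sys_insn(1, 0, 8, 3, 1)
#define OP_TLBI_VAE2IS  sys_insn(1, 4, 8, 3, 1)

int main(void)
{
        /* Pretend these fields were decoded from ESR_EL2 on a trap */
        unsigned int op0 = 1, op1 = 4, crn = 8, crm = 3, op2 = 1;
        unsigned int enc = sys_insn(op0, op1, crn, crm, op2);

        if (enc == OP_TLBI_VAE2IS)
                printf("trapped TLBI VAE2IS (0x%x): emulate with vae1is\n",
                       enc);
        return 0;
}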
[ Changes performed by Marc Zyngier:
The TLBI handling code more or less directly execute the same
instruction that has been trapped (with an EL2->EL1 conversion
in the case of an EL2 TLBI), but that's unfortunately not enough:
- TLBIs must be upgraded to the Inner Shareable domain to account
for vcpu migration, just like we already have with HCR_EL2.FB.
- The DSB instruction that synchronises these must thus be on
the Inner Shareable domain as well.
- Prior to executing the TLBI, we need another DSB ISHST to make
sure that the update to the page tables is now visible.
Ordering of system instructions fixed
- The current TLB invalidation code is pretty buggy, as it assumes a
page mapping. On the contrary, it is likely that TLB invalidation
will cover more than a single page, and the size should be decided
by the guest's configuration (and not the host's).
Since we don't cache the guest mapping sizes in the shadow PT yet,
let's assume the worst case (a block mapping) and invalidate that
(see the range sketch below).
Take this opportunity to fix the decoding of the parameter (it
isn't a straight IPA).
- In general, we always emulate local TLB invalidations as upgraded
to the Inner Shareable domain so that we can easily deal with vcpu
migration (the conversion is sketched after these notes). This is
consistent with the fact that we set HCR_EL2.FB when running
non-nested VMs.
So let's emulate TLBI ALLE2 as ALLE2IS.
]
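The range sketch referenced in the note above: a standalone model of the
worst-case (block mapping) invalidation size derived from the guest's
VTCR_EL2.TG0, mirroring the computation in handle_ipas2e1is() below. The
TG0 encodings and block sizes follow the architecture; main() is
illustrative only.

#include <stdio.h>

#define SZ_32M  (32UL << 20)
#define SZ_512M (512UL << 20)
#define SZ_1G   (1UL << 30)

/* VTCR_EL2.TG0 granule encodings: 0b00 = 4K, 0b01 = 64K, 0b10 = 16K */
static unsigned long max_invalidation_size(unsigned int tg0)
{
        switch (tg0) {
        case 0: return SZ_1G;    /* 4K granule: level-1 block */
        case 2: return SZ_32M;   /* 16K granule: level-2 block */
        case 1: return SZ_512M;  /* 64K granule: level-2 block */
        default: return 0;       /* reserved encoding */
        }
}

int main(void)
{
        printf("4K granule -> invalidate up to %lu bytes\n",
               max_invalidation_size(0));
        return 0;
}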
[ Changes performed by Christoffer Dall:
Sometimes when we are invalidating the TLB for a certain S2 MMU
context, this context can also have EL2 context associated with it
and we have to invalidate this too.
]
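The conversion referenced in the notes above, as a compact standalone
sketch: a trapped vEL2 TLBI is downgraded to its EL1 equivalent and
upgraded to the Inner Shareable domain. Only the mapping policy is
modelled here; the real work (VMID switching, DSBs, issuing the TLBI) is
done by __kvm_tlb_vae2is() and __kvm_tlb_el1_instr() in the patch below.

#include <stdio.h>

/* Same packing as the sysreg.h hunk below */
#define sys_insn(op0, op1, crn, crm, op2)                 \
        (((op0) << 19) | ((op1) << 16) | ((crn) << 12) |  \
         ((crm) << 8) | ((op2) << 5))
#define tlbi_insn_el2(CRm, Op2) sys_insn(1, 4, 8, (CRm), (Op2))

#define OP_TLBI_ALLE2IS tlbi_insn_el2(3, 0)
#define OP_TLBI_VAE2IS  tlbi_insn_el2(3, 1)
#define OP_TLBI_VALE2IS tlbi_insn_el2(3, 5)
#define OP_TLBI_ALLE2   tlbi_insn_el2(7, 0)
#define OP_TLBI_VAE2    tlbi_insn_el2(7, 1)
#define OP_TLBI_VALE2   tlbi_insn_el2(7, 5)

/*
 * Map a trapped vEL2 TLBI onto the EL1 operation actually executed,
 * unconditionally upgraded to the Inner Shareable domain so that vcpu
 * migration is handled, mirroring what HCR_EL2.FB gives non-nested VMs.
 */
static const char *el2_tlbi_to_el1is(unsigned int enc)
{
        switch (enc) {
        case OP_TLBI_ALLE2:
        case OP_TLBI_ALLE2IS:
                return "vmalle1is";     /* flush the vEL2 VMID */
        case OP_TLBI_VAE2:
        case OP_TLBI_VAE2IS:
                return "vae1is";
        case OP_TLBI_VALE2:
        case OP_TLBI_VALE2IS:
                return "vale1is";
        default:
                return NULL;            /* handled by another path */
        }
}

int main(void)
{
        printf("TLBI VAE2 -> tlbi %s\n", el2_tlbi_to_el1is(OP_TLBI_VAE2));
        return 0;
}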
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_asm.h | 2 +
arch/arm64/include/asm/sysreg.h | 36 ++++++
arch/arm64/kvm/hyp/vhe/switch.c | 8 +-
arch/arm64/kvm/hyp/vhe/tlb.c | 81 ++++++++++++
arch/arm64/kvm/mmu.c | 18 ++-
arch/arm64/kvm/sys_regs.c | 212 +++++++++++++++++++++++++++++++
6 files changed, 352 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index c61d52c51e43..22cab61ed15c 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -196,6 +196,8 @@ extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu);
extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
int level);
extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
+extern void __kvm_tlb_vae2is(struct kvm_s2_mmu *mmu, u64 va, u64 sys_encoding);
+extern void __kvm_tlb_el1_instr(struct kvm_s2_mmu *mmu, u64 val, u64 sys_encoding);
extern void __kvm_timer_set_cntvoff(u64 cntvoff);
extern void __kvm_at_s1e01(struct kvm_vcpu *vcpu, u32 op, u64 vaddr);
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 625c040e4f72..d5724eccfc5d 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -668,6 +668,42 @@
#define OP_AT_S12E0R sys_insn(AT_Op0, 4, AT_CRn, 8, 6)
#define OP_AT_S12E0W sys_insn(AT_Op0, 4, AT_CRn, 8, 7)
+/* TLBI instructions */
+#define TLBI_Op0 1
+#define TLBI_Op1_EL1 0 /* Accessible from EL1 or higher */
+#define TLBI_Op1_EL2 4 /* Accessible from EL2 or higher */
+#define TLBI_CRn 8
+#define tlbi_insn_el1(CRm, Op2) sys_insn(TLBI_Op0, TLBI_Op1_EL1, TLBI_CRn, (CRm), (Op2))
+#define tlbi_insn_el2(CRm, Op2) sys_insn(TLBI_Op0, TLBI_Op1_EL2, TLBI_CRn, (CRm), (Op2))
+
+#define OP_TLBI_VMALLE1IS tlbi_insn_el1(3, 0)
+#define OP_TLBI_VAE1IS tlbi_insn_el1(3, 1)
+#define OP_TLBI_ASIDE1IS tlbi_insn_el1(3, 2)
+#define OP_TLBI_VAAE1IS tlbi_insn_el1(3, 3)
+#define OP_TLBI_VALE1IS tlbi_insn_el1(3, 5)
+#define OP_TLBI_VAALE1IS tlbi_insn_el1(3, 7)
+#define OP_TLBI_VMALLE1 tlbi_insn_el1(7, 0)
+#define OP_TLBI_VAE1 tlbi_insn_el1(7, 1)
+#define OP_TLBI_ASIDE1 tlbi_insn_el1(7, 2)
+#define OP_TLBI_VAAE1 tlbi_insn_el1(7, 3)
+#define OP_TLBI_VALE1 tlbi_insn_el1(7, 5)
+#define OP_TLBI_VAALE1 tlbi_insn_el1(7, 7)
+
+#define OP_TLBI_IPAS2E1IS tlbi_insn_el2(0, 1)
+#define OP_TLBI_IPAS2LE1IS tlbi_insn_el2(0, 5)
+#define OP_TLBI_ALLE2IS tlbi_insn_el2(3, 0)
+#define OP_TLBI_VAE2IS tlbi_insn_el2(3, 1)
+#define OP_TLBI_ALLE1IS tlbi_insn_el2(3, 4)
+#define OP_TLBI_VALE2IS tlbi_insn_el2(3, 5)
+#define OP_TLBI_VMALLS12E1IS tlbi_insn_el2(3, 6)
+#define OP_TLBI_IPAS2E1 tlbi_insn_el2(4, 1)
+#define OP_TLBI_IPAS2LE1 tlbi_insn_el2(4, 5)
+#define OP_TLBI_ALLE2 tlbi_insn_el2(7, 0)
+#define OP_TLBI_VAE2 tlbi_insn_el2(7, 1)
+#define OP_TLBI_ALLE1 tlbi_insn_el2(7, 4)
+#define OP_TLBI_VALE2 tlbi_insn_el2(7, 5)
+#define OP_TLBI_VMALLS12E1 tlbi_insn_el2(7, 6)
+
/* Common SCTLR_ELx flags. */
#define SCTLR_ELx_DSSBS (BIT(44))
#define SCTLR_ELx_ATA (BIT(43))
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 7715b8254a76..b53de6863972 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -46,7 +46,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
* the EL1 virtual memory control register accesses
* as well as the AT S1 operations.
*/
- hcr |= HCR_TVM | HCR_TRVM | HCR_AT | HCR_NV1;
+ hcr |= HCR_TVM | HCR_TRVM | HCR_AT | HCR_TTLB | HCR_NV1;
} else {
/*
* For a guest hypervisor on v8.1 (VHE), allow to
@@ -72,11 +72,11 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
/*
* If we're using the EL1 translation regime
- * (TGE clear), then ensure that AT S1 ops are
- * trapped too.
+ * (TGE clear), then ensure that AT S1 and
+ * TLBI E1 ops are trapped too.
*/
if (!vcpu_el2_tge_is_set(vcpu))
- hcr |= HCR_AT;
+ hcr |= HCR_AT | HCR_TTLB;
}
}
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index 66f17349f0c3..001ac9f2f1b5 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -161,3 +161,84 @@ void __kvm_flush_vm_context(void)
dsb(ish);
}
+
+void __kvm_tlb_vae2is(struct kvm_s2_mmu *mmu, u64 va, u64 sys_encoding)
+{
+ struct tlb_inv_context cxt;
+
+ dsb(ishst);
+
+ /* Switch to requested VMID */
+ __tlb_switch_to_guest(mmu, &cxt);
+
+ /*
+ * Execute the EL1 version of the trapped TLBI VAE2* instruction, forcing
+ * an upgrade to the Inner Shareable domain in order to
+ * perform the invalidation on all CPUs.
+ */
+ switch (sys_encoding) {
+ case OP_TLBI_VAE2:
+ case OP_TLBI_VAE2IS:
+ __tlbi(vae1is, va);
+ break;
+ case OP_TLBI_VALE2:
+ case OP_TLBI_VALE2IS:
+ __tlbi(vale1is, va);
+ break;
+ default:
+ break;
+ }
+ dsb(ish);
+ isb();
+
+ __tlb_switch_to_host(&cxt);
+}
+
+void __kvm_tlb_el1_instr(struct kvm_s2_mmu *mmu, u64 val, u64 sys_encoding)
+{
+ struct tlb_inv_context cxt;
+
+ dsb(ishst);
+
+ /* Switch to requested VMID */
+ __tlb_switch_to_guest(mmu, &cxt);
+
+ /*
+ * Execute the same instruction as the guest hypervisor did,
+ * expanding the scope of local TLB invalidations to the Inner
+ * Shareable domain so that it takes place on all CPUs. This
+ * is equivalent to having HCR_EL2.FB set.
+ */
+ switch (sys_encoding) {
+ case OP_TLBI_VMALLE1:
+ case OP_TLBI_VMALLE1IS:
+ __tlbi(vmalle1is);
+ break;
+ case OP_TLBI_VAE1:
+ case OP_TLBI_VAE1IS:
+ __tlbi(vae1is, val);
+ break;
+ case OP_TLBI_ASIDE1:
+ case OP_TLBI_ASIDE1IS:
+ __tlbi(aside1is, val);
+ break;
+ case OP_TLBI_VAAE1:
+ case OP_TLBI_VAAE1IS:
+ __tlbi(vaae1is, val);
+ break;
+ case OP_TLBI_VALE1:
+ case OP_TLBI_VALE1IS:
+ __tlbi(vale1is, val);
+ break;
+ case OP_TLBI_VAALE1:
+ case OP_TLBI_VAALE1IS:
+ __tlbi(vaale1is, val);
+ break;
+ default:
+ break;
+ }
+ dsb(ish);
+ isb();
+
+ __tlb_switch_to_host(&cxt);
+}
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index fddcbe200573..91cadd8ed57e 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -80,7 +80,23 @@ static bool memslot_is_logging(struct kvm_memory_slot *memslot)
*/
void kvm_flush_remote_tlbs(struct kvm *kvm)
{
- kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
+ struct kvm_s2_mmu *mmu = &kvm->arch.mmu;
+
+ if (!kvm->arch.nested_mmus_size) {
+ /*
+ * For a normal (i.e. non-nested) guest, flush entries for the
+ * given VMID.
+ */
+ kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
+ } else {
+ /*
+ * When supporting nested virtualization, we can have multiple
+ * VMIDs in play for each VCPU in the VM, so it's really not
+ * worth it to try to quiesce the system and flush all the
+ * VMIDs that may be in use, instead just nuke the whole thing.
+ */
+ kvm_call_hyp(__kvm_flush_vm_context);
+ }
}
static bool kvm_is_device_pfn(unsigned long pfn)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 9d9108e9fcf7..01df2c16c23b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1630,6 +1630,11 @@ static bool forward_at_traps(struct kvm_vcpu *vcpu)
return forward_traps(vcpu, HCR_AT);
}
+static bool forward_ttlb_traps(struct kvm_vcpu *vcpu)
+{
+ return forward_traps(vcpu, HCR_TTLB);
+}
+
static bool access_elr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
@@ -2268,6 +2273,185 @@ static bool handle_s12w(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
return handle_s12(vcpu, p, r, true);
}
+static bool handle_alle2is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ /*
+ * To emulate invalidating all EL2 regime stage 1 TLB entries for all
+ * PEs, executing TLBI VMALLE1IS is enough. But we reuse the existing
+ * interface for simplicity; invalidating stage 2 entries doesn't
+ * affect correctness.
+ */
+ __kvm_tlb_flush_vmid(&vcpu->kvm->arch.mmu);
+ return true;
+}
+
+static bool handle_vae2is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ int sys_encoding = sys_insn(p->Op0, p->Op1, p->CRn, p->CRm, p->Op2);
+
+ /*
+ * Based on the same principle as TLBI ALLE2 instruction
+ * emulation, we emulate TLBI VAE2* instructions by executing
+ * corresponding TLBI VAE1* instructions with the virtual
+ * EL2's VMID assigned by the host hypervisor.
+ */
+ __kvm_tlb_vae2is(&vcpu->kvm->arch.mmu, p->regval, sys_encoding);
+ return true;
+}
+
+static bool handle_alle1is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ struct kvm_s2_mmu *mmu = &vcpu->kvm->arch.mmu;
+
+	spin_lock(&vcpu->kvm->mmu_lock);
+
+ /*
+ * Clear all mappings in the shadow page tables and invalidate the stage
+ * 1 and 2 TLB entries via kvm_tlb_flush_vmid_ipa().
+ */
+ kvm_nested_s2_clear(vcpu->kvm);
+
+ if (mmu->vmid.vmid_gen) {
+ /*
+		 * Invalidate the stage 1 and 2 TLB entries for the VM's own
+		 * EL1&0 context, but only if it has a valid VMID (i.e. it
+		 * has ever run).
+ */
+ __kvm_tlb_flush_vmid(mmu);
+ }
+
+ spin_unlock(&vcpu->kvm->mmu_lock);
+
+ return true;
+}
+
+static bool handle_vmalls12e1is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ u64 vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
+ struct kvm_s2_mmu *mmu;
+
+ spin_lock(&vcpu->kvm->mmu_lock);
+
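+	/*
+	 * The same VTTBR may be tracked by two shadow S2 MMU contexts,
+	 * one with virtual HCR_EL2.VM set and one with it clear; unmap
+	 * whichever of the two exists.
+	 */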
+ mmu = lookup_s2_mmu(vcpu->kvm, vttbr, HCR_VM);
+ if (mmu)
+ kvm_unmap_stage2_range(mmu, 0, kvm_phys_size(vcpu->kvm));
+
+ mmu = lookup_s2_mmu(vcpu->kvm, vttbr, 0);
+ if (mmu)
+ kvm_unmap_stage2_range(mmu, 0, kvm_phys_size(vcpu->kvm));
+
+ spin_unlock(&vcpu->kvm->mmu_lock);
+
+ return true;
+}
+
+static bool handle_ipas2e1is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ u64 vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
+ u64 vtcr = vcpu_read_sys_reg(vcpu, VTCR_EL2);
+ struct kvm_s2_mmu *mmu;
+ u64 base_addr;
+ int max_size;
+
+ /*
+ * We drop a number of things from the supplied value:
+ *
+ * - NS bit: we're non-secure only.
+ *
+ * - TTL field: We already have the granule size from the
+ * VTCR_EL2.TG0 field, and the level is only relevant to the
+ * guest's S2PT.
+ *
+ * - IPA[51:48]: We don't support 52bit IPA just yet...
+ *
+	 * And of course, shift the remaining field up to form an actual
+	 * address.
+ */
+ base_addr = (p->regval & GENMASK_ULL(35, 0)) << 12;
+
+ /* Compute the maximum extent of the invalidation */
+	switch (vtcr & VTCR_EL2_TG0_MASK) {
+ case VTCR_EL2_TG0_4K:
+ max_size = SZ_1G;
+ break;
+ case VTCR_EL2_TG0_16K:
+ max_size = SZ_32M;
+ break;
+ case VTCR_EL2_TG0_64K:
+ /*
+ * No, we do not support 52bit IPA in nested yet. Once
+ * we do, this should be 4TB.
+ */
+ /* FIXME: remove the 52bit PA support from the IDregs */
+ max_size = SZ_512M;
+ break;
+ default:
+ BUG();
+ }
+
+ spin_lock(&vcpu->kvm->mmu_lock);
+
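+	/*
+	 * As for VMALLS12E1IS above, the VTTBR may match a shadow S2
+	 * context with virtual HCR_EL2.VM either set or clear; unmap the
+	 * range from both.
+	 */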
+ mmu = lookup_s2_mmu(vcpu->kvm, vttbr, HCR_VM);
+ if (mmu)
+ kvm_unmap_stage2_range(mmu, base_addr, max_size);
+
+ mmu = lookup_s2_mmu(vcpu->kvm, vttbr, 0);
+ if (mmu)
+ kvm_unmap_stage2_range(mmu, base_addr, max_size);
+
+ spin_unlock(&vcpu->kvm->mmu_lock);
+
+ return true;
+}
+
+static bool handle_tlbi_el1(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ u32 sys_encoding = sys_insn(p->Op0, p->Op1, p->CRn, p->CRm, p->Op2);
+
+ /*
+	 * If we're here, this is because we've trapped on an EL1 TLBI
+ * instruction that affects the EL1 translation regime while
+ * we're running in a context that doesn't allow us to let the
+ * HW do its thing (aka vEL2):
+ *
+ * - HCR_EL2.E2H == 0 : a non-VHE guest
+ * - HCR_EL2.{E2H,TGE} == { 1, 0 } : a VHE guest in guest mode
+ *
+ * We don't expect these helpers to ever be called when running
+ * in a vEL1 context.
+ */
+
+ WARN_ON(!vcpu_mode_el2(vcpu));
+
+ mutex_lock(&vcpu->kvm->lock);
+
+ if ((__vcpu_sys_reg(vcpu, HCR_EL2) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
+ u64 virtual_vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
+ struct kvm_s2_mmu *mmu;
+
+ mmu = lookup_s2_mmu(vcpu->kvm, virtual_vttbr, HCR_VM);
+ if (mmu)
+ __kvm_tlb_el1_instr(mmu, p->regval, sys_encoding);
+
+ mmu = lookup_s2_mmu(vcpu->kvm, virtual_vttbr, 0);
+ if (mmu)
+ __kvm_tlb_el1_instr(mmu, p->regval, sys_encoding);
+ } else {
+ /*
+ * ARMv8.4-NV allows the guest to change TGE behind
+ * our back, so we always trap EL1 TLBIs from vEL2...
+ */
+ __kvm_tlb_el1_instr(&vcpu->kvm->arch.mmu, p->regval, sys_encoding);
+ }
+
+ mutex_unlock(&vcpu->kvm->lock);
+
+ return true;
+}
+
/*
* AT instruction emulation
*
@@ -2350,12 +2534,40 @@ static struct sys_reg_desc sys_insn_descs[] = {
{ SYS_DESC(SYS_DC_CSW), access_dcsw },
{ SYS_DESC(SYS_DC_CISW), access_dcsw },
+ SYS_INSN_TO_DESC(TLBI_VMALLE1IS, handle_tlbi_el1, forward_ttlb_traps),
+ SYS_INSN_TO_DESC(TLBI_VAE1IS, handle_tlbi_el1, forward_ttlb_traps),
+ SYS_INSN_TO_DESC(TLBI_ASIDE1IS, handle_tlbi_el1, forward_ttlb_traps),
+ SYS_INSN_TO_DESC(TLBI_VAAE1IS, handle_tlbi_el1, forward_ttlb_traps),
+ SYS_INSN_TO_DESC(TLBI_VALE1IS, handle_tlbi_el1, forward_ttlb_traps),
+ SYS_INSN_TO_DESC(TLBI_VAALE1IS, handle_tlbi_el1, forward_ttlb_traps),
+ SYS_INSN_TO_DESC(TLBI_VMALLE1, handle_tlbi_el1, forward_ttlb_traps),
+ SYS_INSN_TO_DESC(TLBI_VAE1, handle_tlbi_el1, forward_ttlb_traps),
+ SYS_INSN_TO_DESC(TLBI_ASIDE1, handle_tlbi_el1, forward_ttlb_traps),
+ SYS_INSN_TO_DESC(TLBI_VAAE1, handle_tlbi_el1, forward_ttlb_traps),
+ SYS_INSN_TO_DESC(TLBI_VALE1, handle_tlbi_el1, forward_ttlb_traps),
+ SYS_INSN_TO_DESC(TLBI_VAALE1, handle_tlbi_el1, forward_ttlb_traps),
+
SYS_INSN_TO_DESC(AT_S1E2R, handle_s1e2, forward_nv_traps),
SYS_INSN_TO_DESC(AT_S1E2W, handle_s1e2, forward_nv_traps),
SYS_INSN_TO_DESC(AT_S12E1R, handle_s12r, forward_nv_traps),
SYS_INSN_TO_DESC(AT_S12E1W, handle_s12w, forward_nv_traps),
SYS_INSN_TO_DESC(AT_S12E0R, handle_s12r, forward_nv_traps),
SYS_INSN_TO_DESC(AT_S12E0W, handle_s12w, forward_nv_traps),
+
+ SYS_INSN_TO_DESC(TLBI_IPAS2E1IS, handle_ipas2e1is, forward_nv_traps),
+ SYS_INSN_TO_DESC(TLBI_IPAS2LE1IS, handle_ipas2e1is, forward_nv_traps),
+ SYS_INSN_TO_DESC(TLBI_ALLE2IS, handle_alle2is, forward_nv_traps),
+ SYS_INSN_TO_DESC(TLBI_VAE2IS, handle_vae2is, forward_nv_traps),
+ SYS_INSN_TO_DESC(TLBI_ALLE1IS, handle_alle1is, forward_nv_traps),
+ SYS_INSN_TO_DESC(TLBI_VALE2IS, handle_vae2is, forward_nv_traps),
+ SYS_INSN_TO_DESC(TLBI_VMALLS12E1IS, handle_vmalls12e1is, forward_nv_traps),
+ SYS_INSN_TO_DESC(TLBI_IPAS2E1, handle_ipas2e1is, forward_nv_traps),
+ SYS_INSN_TO_DESC(TLBI_IPAS2LE1, handle_ipas2e1is, forward_nv_traps),
+ SYS_INSN_TO_DESC(TLBI_ALLE2, handle_alle2is, forward_nv_traps),
+ SYS_INSN_TO_DESC(TLBI_VAE2, handle_vae2is, forward_nv_traps),
+ SYS_INSN_TO_DESC(TLBI_ALLE1, handle_alle1is, forward_nv_traps),
+ SYS_INSN_TO_DESC(TLBI_VALE2, handle_vae2is, forward_nv_traps),
+ SYS_INSN_TO_DESC(TLBI_VMALLS12E1, handle_vmalls12e1is, forward_nv_traps),
};
static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
--
2.29.2
* [PATCH v4 41/66] KVM: arm64: nv: Trap and emulate TLBI instructions from virtual EL2
@ 2021-05-10 16:58 ` Marc Zyngier
0 siblings, 0 replies; 229+ messages in thread
From: Marc Zyngier @ 2021-05-10 16:58 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: Andre Przywara, Christoffer Dall, Jintack Lim, Haibo Xu,
James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team,
Jintack Lim
When supporting nested virtualization, TLBI instructions executed by a
guest hypervisor must be trapped and emulated by the host hypervisor:
the guest hypervisor can only affect the physical TLB entries relating
to its own execution environment (virtual EL2, which actually runs at
EL1), but not those of its nested guests, as the semantics of the
instructions require. TLBI instructions might also result in updates
(invalidations) to the shadow stage 2 page tables.
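For instance (purely illustrative, using the kernel's TLBI accessors):
if a guest hypervisor running at virtual EL2 could issue the following
without being trapped:

	dsb(nshst);
	__tlbi(vmalle1);	/* intends: nested guest's EL1&0 entries */
	dsb(nsh);

it would actually operate on the VMID the host currently has programmed
for the guest hypervisor itself, since virtual EL2 runs at EL1; neither
the nested guest's TLB entries nor the corresponding shadow stage 2
state would be affected.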
This patch does several things.
1. List and define all TLBI system instructions to emulate.
2. Emulate the TLBI ALLE2(IS) instruction executed in virtual EL2. Since
we emulate virtual EL2 at EL1, we invalidate EL1&0 regime stage 1 TLB
entries after setting vttbr_el2 to the VMID assigned to the virtual EL2.
3. Emulate the TLBI VAE2* instructions executed in virtual EL2. Based on
the same principle as the TLBI ALLE2 emulation, we can simply emulate
those instructions by executing the corresponding VAE1* instructions
with the VMID assigned to the virtual EL2 by the host hypervisor.
Note that we could emulate TLBI ALLE2IS precisely by invalidating only
stage 1 TLB entries via the TLBI VMALLE1IS instruction, but to keep it
simple we reuse the existing function, __kvm_tlb_flush_vmid(), which
invalidates both stage 1 and stage 2 TLB entries.
4. TLBI ALLE1(IS) instruction invalidates all EL1&0 regime stage 1 and 2
TLB entries (on all PEs in the same Inner Shareable domain). To emulate
these instructions, we first need to clear all the mappings in the
shadow page tables, since executing those instructions implies a change
of the mappings in the stage 2 page tables maintained by the guest
hypervisor. We then need to invalidate all EL1&0 regime stage 1 and 2
TLB entries of all VMIDs, which are assigned by the host hypervisor, for
this VM.
5. Emulate the TLBI VMALLS12E1(IS) instruction on the same principle as
the TLBI ALLE1(IS) emulation: we clear the mappings in the shadow
stage 2 page tables and invalidate TLB entries, but this time only for
the current VMID from the guest hypervisor's perspective, not for all
VMIDs.
6. Emulate the TLBI IPAS2E1(IS) instruction on the same principle as the
TLBI ALLE1(IS) and TLBI VMALLS12E1(IS) emulation: we clear the mappings
in the shadow stage 2 page tables and invalidate TLB entries, but only
for a single mapping of the current VMID from the guest hypervisor's
view.
7. Forward system instruction traps to the virtual EL2 if a
corresponding bit in the virtual HCR_EL2 is set.
8. Even though a guest hypervisor can execute the TLBI instructions that
are accessible at EL1 without being trapped, doing so is wrong: all
those TLBI instructions operate on the current VMID, and when running a
guest hypervisor the current VMID is the guest hypervisor's own, not the
one derived from the virtual vttbr_el2. So letting a guest hypervisor
execute those TLBI instructions would invalidate its own TLB entries and
leave stale TLB entries unhandled.
Therefore we trap and emulate those TLBI instructions. The emulation is
simple: we find the shadow VMID mapped to the virtual vttbr_el2, set it
in the physical vttbr_el2, then execute the same instruction in EL2, as
sketched below.
We don't set the HCR_EL2.TTLB bit yet.
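To illustrate what that boils down to, a trapped TLBI VAE2 ends up being
emulated roughly as follows, executed while the shadow VMID of the
virtual EL2 is loaded (a condensed sketch of the vhe/tlb.c helper added
by this patch; the switch to and from the guest's translation context is
elided):

	dsb(ishst);		/* page-table updates visible first */
	__tlbi(vae1is, va);	/* EL2->EL1 conversion, upgraded to IS */
	dsb(ish);
	isb();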
[ Changes performed by Marc Zyngier:
The TLBI handling code more or less directly executes the same
instruction that has been trapped (with an EL2->EL1 conversion
in the case of an EL2 TLBI), but that's unfortunately not enough:
- TLBIs must be upgraded to the Inner Shareable domain to account
for vcpu migration, just like we already have with HCR_EL2.FB.
- The DSB instruction that synchronises these must thus be on
the Inner Shareable domain as well.
- Prior to executing the TLBI, we need another DSB ISHST to make
sure that the update to the page tables is now visible.
Ordering of system instructions fixed
- The current TLB invalidation code is pretty buggy, as it assumes a
page mapping. On the contrary, it is likely that the TLB invalidation
will cover more than a single page, and the size should be decided
by the guest's configuration (and not the host's).
Since we don't cache the guest mapping sizes in the shadow PT yet,
let's assume the worst case (a block mapping) and invalidate that.
Take this opportunity to fix the decoding of the parameter (it
isn't a straight IPA); see the worked example after these notes.
- In general, we always emulate local TLB invalidations as if they
were upgraded to the Inner Shareable domain so that we can easily
deal with vcpu migration. This is consistent with the fact that
we set HCR_EL2.FB when running non-nested VMs.
So let's emulate TLBI ALLE2 as ALLE2IS.
]
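To make the fixed decoding concrete: for TLBI IPAS2E1(IS), Xt carries
IPA[47:12] in bits [35:0], so the handler rebuilds the base address and
then unmaps a worst-case block around it. A hedged example, with a
purely illustrative register value:

	u64 xt   = 0x80000;				/* trapped Xt value */
	u64 base = (xt & GENMASK_ULL(35, 0)) << 12;	/* IPA 0x80000000 */
	/* with a 4K granule (VTCR_EL2.TG0), unmap SZ_1G of shadow S2 */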
[ Changes performed by Christoffer Dall:
Sometimes, when we are invalidating the TLB for a certain S2 MMU
context, this context can also have an EL2 context associated with it,
and we have to invalidate that one too.
]
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_asm.h | 2 +
arch/arm64/include/asm/sysreg.h | 36 ++++++
arch/arm64/kvm/hyp/vhe/switch.c | 8 +-
arch/arm64/kvm/hyp/vhe/tlb.c | 81 ++++++++++++
arch/arm64/kvm/mmu.c | 18 ++-
arch/arm64/kvm/sys_regs.c | 212 +++++++++++++++++++++++++++++++
6 files changed, 352 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index c61d52c51e43..22cab61ed15c 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -196,6 +196,8 @@ extern void __kv