* [PATCH 0/8] KVM: arm64: Host EL2 entry improvements
From: Marc Zyngier @ 2020-10-26  9:51 UTC
  To: linux-arm-kernel, kvmarm, kvm
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, Andrew Scull,
	Will Deacon, Quentin Perret, kernel-team

This small series reworks various bits of the host EL2 entry code,
following Andrew's extensive rework that moved from direct function
calls to an SMCCC implementation.

The first 3 patches are plain bug fixes, and candidates for immediate
merge into 5.10.

The following 2 patches allow the use of direct function pointers at
EL2, something we can't currently do (other than with PC-relative
addressing). This requires a helper to translate pointers at runtime,
but the result is neat enough, and allows the host HVC handling to be
rewritten in a more maintainable way.

A similar patch removes the direct use of kimage_voffset, which we
won't be able to trust for much longer.

The last two patches are just cleanups and optimisations.

Patches on top of 5.10-rc1.

Marc Zyngier (8):
  KVM: arm64: Don't corrupt tpidr_el2 on failed HVC call
  KVM: arm64: Remove leftover kern_hyp_va() in nVHE TLB invalidation
  KVM: arm64: Drop useless PAN setting on host EL1 to EL2 transition
  KVM: arm64: Add kimg_hyp_va() helper
  KVM: arm64: Turn host HVC handling into a dispatch table
  KVM: arm64: Patch kimage_voffset instead of loading the EL1 value
  KVM: arm64: Simplify __kvm_enable_ssbs()
  KVM: arm64: Avoid repetitive stack access on host EL1 to EL2 exception

 arch/arm64/include/asm/kvm_asm.h    |   2 -
 arch/arm64/include/asm/kvm_mmu.h    |  16 ++
 arch/arm64/include/asm/sysreg.h     |   1 +
 arch/arm64/kernel/image-vars.h      |   5 +-
 arch/arm64/kvm/hyp/nvhe/host.S      |  14 +-
 arch/arm64/kvm/hyp/nvhe/hyp-init.S  |  23 ++-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c  | 231 +++++++++++++++++-----------
 arch/arm64/kvm/hyp/nvhe/sysreg-sr.c |  11 --
 arch/arm64/kvm/hyp/nvhe/tlb.c       |   1 -
 arch/arm64/kvm/va_layout.c          |  56 +++++++
 10 files changed, 236 insertions(+), 124 deletions(-)

-- 
2.28.0


* [PATCH 1/8] KVM: arm64: Don't corrupt tpidr_el2 on failed HVC call
From: Marc Zyngier @ 2020-10-26  9:51 UTC
  To: linux-arm-kernel, kvmarm, kvm
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, Andrew Scull,
	Will Deacon, Quentin Perret, kernel-team

The hyp-init code starts by stashing a register in TPIDR_EL2
in order to free a register. This happens whether the HVC
call is legal or not.

Although nothing bad seems to come of it, it feels odd to
alter the EL2 state for a call that eventually returns an
error.

Instead, use the fact that we know exactly which bits of the
__kvm_hyp_init call are non-zero to perform the check with
a series of EOR/ROR instructions, combined with a build-time
check that the value is the one we expect.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/hyp/nvhe/hyp-init.S | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
index 47224dc62c51..b11a9d7db677 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
@@ -57,16 +57,25 @@ __do_hyp_init:
 	cmp	x0, #HVC_STUB_HCALL_NR
 	b.lo	__kvm_handle_stub_hvc
 
-	/* Set tpidr_el2 for use by HYP to free a register */
-	msr	tpidr_el2, x2
-
-	mov	x2, #KVM_HOST_SMCCC_FUNC(__kvm_hyp_init)
-	cmp	x0, x2
-	b.eq	1f
+	// We only actively check bits [24:31], and everything
+	// else has to be zero, which we check at build time.
+#if (KVM_HOST_SMCCC_FUNC(__kvm_hyp_init) & 0xFFFFFFFF00FFFFFF)
+#error Unexpected __KVM_HOST_SMCCC_FUNC___kvm_hyp_init value
+#endif
+
+	ror	x0, x0, #24
+	eor	x0, x0, #((KVM_HOST_SMCCC_FUNC(__kvm_hyp_init) >> 24) & 0xF)
+	ror	x0, x0, #4
+	eor	x0, x0, #((KVM_HOST_SMCCC_FUNC(__kvm_hyp_init) >> 28) & 0xF)
+	cbz	x0, 1f
 	mov	x0, #SMCCC_RET_NOT_SUPPORTED
 	eret
 
-1:	phys_to_ttbr x0, x1
+1:
+	/* Set tpidr_el2 for use by HYP to free a register */
+	msr	tpidr_el2, x2
+
+	phys_to_ttbr x0, x1
 alternative_if ARM64_HAS_CNP
 	orr	x0, x0, #TTBR_CNP_BIT
 alternative_else_nop_endif
-- 
2.28.0
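
For reference, here is a minimal C sketch of what the ROR/EOR sequence
above computes (the helper names are made up for illustration): the
register ends up as zero if and only if it held exactly the expected
function ID, since the build-time check guarantees that only bits
[31:24] of that ID can be non-zero.

	#include <stdbool.h>
	#include <stdint.h>

	static inline uint64_t ror64(uint64_t v, unsigned int shift)
	{
		return (v >> shift) | (v << (64 - shift));
	}

	/* Hypothetical equivalent of the cmp-free check in __do_hyp_init */
	static bool is_hyp_init_call(uint64_t x0, uint64_t expected)
	{
		x0 = ror64(x0, 24);		/* bits [27:24] -> [3:0] */
		x0 ^= (expected >> 24) & 0xf;	/* clear them on a match */
		x0 = ror64(x0, 4);		/* bits [31:28] -> [3:0] */
		x0 ^= (expected >> 28) & 0xf;	/* clear them on a match */

		/* every other bit must already have been zero */
		return x0 == 0;
	}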


* [PATCH 2/8] KVM: arm64: Remove leftover kern_hyp_va() in nVHE TLB invalidation
From: Marc Zyngier @ 2020-10-26  9:51 UTC
  To: linux-arm-kernel, kvmarm, kvm
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, Andrew Scull,
	Will Deacon, Quentin Perret, kernel-team

The new calling convention says that pointers coming from the SMCCC
interface are turned into their HYP version in the host HVC handler.
However, there is still a stray kern_hyp_va() in the TLB invalidation
code, which could result in a corrupted pointer.

Drop the spurious conversion.

Fixes: a071261d9318 ("KVM: arm64: nVHE: Fix pointers during SMCCC convertion")
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/hyp/nvhe/tlb.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index 39ca71ab8866..fbde89a2c6e8 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -128,7 +128,6 @@ void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
 	struct tlb_inv_context cxt;
 
 	/* Switch to requested VMID */
-	mmu = kern_hyp_va(mmu);
 	__tlb_switch_to_guest(mmu, &cxt);
 
 	__tlbi(vmalle1);
-- 
2.28.0
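
To make the convention concrete: after this fix, the one and only
kernel-to-HYP pointer conversion happens in the host HVC handler (the
same pattern patch 5 keeps in its dispatch-table rewrite), and the
callee uses the pointer as-is:

	/* host HVC handler: convert the kernel pointer exactly once */
	struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)host_ctxt->regs.regs[1];

	__kvm_tlb_flush_local_vmid(kern_hyp_va(mmu));

	/* callee: mmu is already a HYP VA, no further conversion */
	void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
	{
		struct tlb_inv_context cxt;

		__tlb_switch_to_guest(mmu, &cxt);
		...
	}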


* [PATCH 3/8] KVM: arm64: Drop useless PAN setting on host EL1 to EL2 transition
From: Marc Zyngier @ 2020-10-26  9:51 UTC
  To: linux-arm-kernel, kvmarm, kvm
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, Andrew Scull,
	Will Deacon, Quentin Perret, kernel-team

Setting PSTATE.PAN when entering EL2 on nVHE doesn't make much
sense, as this bit only means something for translation regimes
that include EL0. That's obviously not the case for the nVHE EL2
translation regime, so let's drop this setting.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/hyp/nvhe/host.S | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index ff9a0f547b9f..ed27f06a31ba 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -17,8 +17,6 @@ SYM_FUNC_START(__host_exit)
 
 	get_host_ctxt	x0, x1
 
-	ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN)
-
 	/* Store the host regs x2 and x3 */
 	stp	x2, x3,   [x0, #CPU_XREG_OFFSET(2)]
 
-- 
2.28.0


* [PATCH 4/8] KVM: arm64: Add kimg_hyp_va() helper
From: Marc Zyngier @ 2020-10-26  9:51 UTC
  To: linux-arm-kernel, kvmarm, kvm
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, Andrew Scull,
	Will Deacon, Quentin Perret, kernel-team

KVM/arm64 is so far unable to deal with function pointers, as the
compiler will generate the kernel's runtime VA, and not the linear
mapping address, meaning that kern_hyp_va() will give the wrong result.

So far we have been able to use PC-relative addressing instead, but
that's not always easy to use, and it prevents the implementation of
things such as mapping an index to a pointer.

To allow this, provide a new helper that computes the required
translation from the kernel image to the HYP VA space.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_mmu.h | 16 ++++++++++
 arch/arm64/kvm/va_layout.c       | 50 ++++++++++++++++++++++++++++++++
 2 files changed, 66 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 331394306cce..e0d50e614bd9 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -98,6 +98,22 @@ static __always_inline unsigned long __kern_hyp_va(unsigned long v)
 
 #define kern_hyp_va(v) 	((typeof(v))(__kern_hyp_va((unsigned long)(v))))
 
+static __always_inline unsigned long __kimg_hyp_va(unsigned long v)
+{
+	unsigned long offset;
+
+	asm volatile(ALTERNATIVE_CB("movz %0, #0\n"
+				    "movk %0, #0, lsl #16\n"
+				    "movk %0, #0, lsl #32\n"
+				    "movk %0, #0, lsl #48\n",
+				    kvm_update_kimg_phys_offset)
+		     : "=r" (offset));
+
+	return __kern_hyp_va((v - offset) | PAGE_OFFSET);
+}
+
+#define kimg_hyp_va(v) 	((typeof(v))(__kimg_hyp_va((unsigned long)(v))))
+
 /*
  * We currently support using a VM-specified IPA size. For backward
  * compatibility, the default IPA size is fixed to 40bits.
diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index e0404bcab019..1d00d2cb93fd 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -11,6 +11,7 @@
 #include <asm/debug-monitors.h>
 #include <asm/insn.h>
 #include <asm/kvm_mmu.h>
+#include <asm/memory.h>
 
 /*
  * The LSB of the HYP VA tag
@@ -201,3 +202,52 @@ void kvm_patch_vector_branch(struct alt_instr *alt,
 					   AARCH64_INSN_BRANCH_NOLINK);
 	*updptr++ = cpu_to_le32(insn);
 }
+
+static void generate_mov_q(u64 val, __le32 *origptr, __le32 *updptr, int nr_inst)
+{
+	u32 insn, oinsn, rd;
+
+	BUG_ON(nr_inst != 4);
+
+	/* Compute target register */
+	oinsn = le32_to_cpu(*origptr);
+	rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, oinsn);
+
+	/* movz rd, #(val & 0xffff) */
+	insn = aarch64_insn_gen_movewide(rd,
+					 (u16)val,
+					 0,
+					 AARCH64_INSN_VARIANT_64BIT,
+					 AARCH64_INSN_MOVEWIDE_ZERO);
+	*updptr++ = cpu_to_le32(insn);
+
+	/* movk rd, #((val >> 16) & 0xffff), lsl #16 */
+	insn = aarch64_insn_gen_movewide(rd,
+					 (u16)(val >> 16),
+					 16,
+					 AARCH64_INSN_VARIANT_64BIT,
+					 AARCH64_INSN_MOVEWIDE_KEEP);
+	*updptr++ = cpu_to_le32(insn);
+
+	/* movk rd, #((val >> 32) & 0xffff), lsl #32 */
+	insn = aarch64_insn_gen_movewide(rd,
+					 (u16)(val >> 32),
+					 32,
+					 AARCH64_INSN_VARIANT_64BIT,
+					 AARCH64_INSN_MOVEWIDE_KEEP);
+	*updptr++ = cpu_to_le32(insn);
+
+	/* movk rd, #((val >> 48) & 0xffff), lsl #48 */
+	insn = aarch64_insn_gen_movewide(rd,
+					 (u16)(val >> 48),
+					 48,
+					 AARCH64_INSN_VARIANT_64BIT,
+					 AARCH64_INSN_MOVEWIDE_KEEP);
+	*updptr++ = cpu_to_le32(insn);
+}
+
+void kvm_update_kimg_phys_offset(struct alt_instr *alt,
+				 __le32 *origptr, __le32 *updptr, int nr_inst)
+{
+	generate_mov_q(kimage_voffset + PHYS_OFFSET, origptr, updptr, nr_inst);
+}
-- 
2.28.0
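
The translation chain the helper implements can be spelled out as a
short sketch (illustrative only; it assumes the usual arm64 layout
definitions, where PA = kimg VA - kimage_voffset and the linear map
VA is (PA - PHYS_OFFSET) | PAGE_OFFSET):

	/* the constant patched in by kvm_update_kimg_phys_offset */
	unsigned long patched_offset = kimage_voffset + PHYS_OFFSET;

	unsigned long kimg_to_hyp(unsigned long v)
	{
		/* kimg VA -> PA -> linear map VA, folded into one step */
		unsigned long kva = (v - patched_offset) | PAGE_OFFSET;

		/* linear map VA -> HYP VA */
		return __kern_hyp_va(kva);
	}

Since the constant is only known at boot, the alternative callback
patches it into the movz/movk sequence at runtime, one 16-bit chunk
per instruction.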


* [PATCH 5/8] KVM: arm64: Turn host HVC handling into a dispatch table
From: Marc Zyngier @ 2020-10-26  9:51 UTC
  To: linux-arm-kernel, kvmarm, kvm
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, Andrew Scull,
	Will Deacon, Quentin Perret, kernel-team

Now that we can use function pointers, use a dispatch table to call
the individual HVC handlers, leading to more maintainable code.

Further improvements include helpers to declare the mapping of
local variables to values passed in the host context.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kernel/image-vars.h     |   1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 227 +++++++++++++++++------------
 2 files changed, 134 insertions(+), 94 deletions(-)

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 61684a500914..b5b0fdd1043c 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -64,6 +64,7 @@ __efistub__ctype		= _ctype;
 /* Alternative callbacks for init-time patching of nVHE hyp code. */
 KVM_NVHE_ALIAS(kvm_patch_vector_branch);
 KVM_NVHE_ALIAS(kvm_update_va_mask);
+KVM_NVHE_ALIAS(kvm_update_kimg_phys_offset);
 
 /* Global kernel state accessed by nVHE hyp code. */
 KVM_NVHE_ALIAS(kvm_vgic_global_state);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index e2eafe2c93af..2af8a5e902af 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -12,106 +12,145 @@
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
 
-#include <kvm/arm_hypercalls.h>
-
-static void handle_host_hcall(unsigned long func_id,
-			      struct kvm_cpu_context *host_ctxt)
-{
-	unsigned long ret = 0;
-
-	switch (func_id) {
-	case KVM_HOST_SMCCC_FUNC(__kvm_vcpu_run): {
-		unsigned long r1 = host_ctxt->regs.regs[1];
-		struct kvm_vcpu *vcpu = (struct kvm_vcpu *)r1;
-
-		ret = __kvm_vcpu_run(kern_hyp_va(vcpu));
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__kvm_flush_vm_context):
-		__kvm_flush_vm_context();
-		break;
-	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_vmid_ipa): {
-		unsigned long r1 = host_ctxt->regs.regs[1];
-		struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)r1;
-		phys_addr_t ipa = host_ctxt->regs.regs[2];
-		int level = host_ctxt->regs.regs[3];
-
-		__kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_vmid): {
-		unsigned long r1 = host_ctxt->regs.regs[1];
-		struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)r1;
-
-		__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_local_vmid): {
-		unsigned long r1 = host_ctxt->regs.regs[1];
-		struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)r1;
-
-		__kvm_tlb_flush_local_vmid(kern_hyp_va(mmu));
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__kvm_timer_set_cntvoff): {
-		u64 cntvoff = host_ctxt->regs.regs[1];
-
-		__kvm_timer_set_cntvoff(cntvoff);
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__kvm_enable_ssbs):
-		__kvm_enable_ssbs();
-		break;
-	case KVM_HOST_SMCCC_FUNC(__vgic_v3_get_ich_vtr_el2):
-		ret = __vgic_v3_get_ich_vtr_el2();
-		break;
-	case KVM_HOST_SMCCC_FUNC(__vgic_v3_read_vmcr):
-		ret = __vgic_v3_read_vmcr();
-		break;
-	case KVM_HOST_SMCCC_FUNC(__vgic_v3_write_vmcr): {
-		u32 vmcr = host_ctxt->regs.regs[1];
-
-		__vgic_v3_write_vmcr(vmcr);
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__vgic_v3_init_lrs):
-		__vgic_v3_init_lrs();
-		break;
-	case KVM_HOST_SMCCC_FUNC(__kvm_get_mdcr_el2):
-		ret = __kvm_get_mdcr_el2();
-		break;
-	case KVM_HOST_SMCCC_FUNC(__vgic_v3_save_aprs): {
-		unsigned long r1 = host_ctxt->regs.regs[1];
-		struct vgic_v3_cpu_if *cpu_if = (struct vgic_v3_cpu_if *)r1;
-
-		__vgic_v3_save_aprs(kern_hyp_va(cpu_if));
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__vgic_v3_restore_aprs): {
-		unsigned long r1 = host_ctxt->regs.regs[1];
-		struct vgic_v3_cpu_if *cpu_if = (struct vgic_v3_cpu_if *)r1;
-
-		__vgic_v3_restore_aprs(kern_hyp_va(cpu_if));
-		break;
-	}
-	default:
-		/* Invalid host HVC. */
-		host_ctxt->regs.regs[0] = SMCCC_RET_NOT_SUPPORTED;
-		return;
-	}
-
-	host_ctxt->regs.regs[0] = SMCCC_RET_SUCCESS;
-	host_ctxt->regs.regs[1] = ret;
+#define cpu_reg(ctxt, r)	(ctxt)->regs.regs[r]
+#define DECLARE_REG(type, name, ctxt, reg)	\
+				type name = (type)cpu_reg(ctxt, (reg))
+
+static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
+
+	cpu_reg(host_ctxt, 1) =  __kvm_vcpu_run(kern_hyp_va(vcpu));
+}
+
+static void handle___kvm_flush_vm_context(struct kvm_cpu_context *host_ctxt)
+{
+	__kvm_flush_vm_context();
+}
+
+static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+	DECLARE_REG(phys_addr_t, ipa, host_ctxt, 2);
+	DECLARE_REG(int, level, host_ctxt, 3);
+
+	__kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
+}
+
+static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+
+	__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
+}
+
+static void handle___kvm_tlb_flush_local_vmid(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+
+	__kvm_tlb_flush_local_vmid(kern_hyp_va(mmu));
+}
+
+static void handle___kvm_timer_set_cntvoff(struct kvm_cpu_context *host_ctxt)
+{
+	__kvm_timer_set_cntvoff(cpu_reg(host_ctxt, 1));
+}
+
+static void handle___kvm_enable_ssbs(struct kvm_cpu_context *host_ctxt)
+{
+	__kvm_enable_ssbs();
+}
+
+static void handle___vgic_v3_get_ich_vtr_el2(struct kvm_cpu_context *host_ctxt)
+{
+	cpu_reg(host_ctxt, 1) = __vgic_v3_get_ich_vtr_el2();
+}
+
+static void handle___vgic_v3_read_vmcr(struct kvm_cpu_context *host_ctxt)
+{
+	cpu_reg(host_ctxt, 1) = __vgic_v3_read_vmcr();
+}
+
+static void handle___vgic_v3_write_vmcr(struct kvm_cpu_context *host_ctxt)
+{
+	__vgic_v3_write_vmcr(cpu_reg(host_ctxt, 1));
+}
+
+static void handle___vgic_v3_init_lrs(struct kvm_cpu_context *host_ctxt)
+{
+	__vgic_v3_init_lrs();
+}
+
+static void handle___kvm_get_mdcr_el2(struct kvm_cpu_context *host_ctxt)
+{
+	cpu_reg(host_ctxt, 1) = __kvm_get_mdcr_el2();
+}
+
+static void handle___vgic_v3_save_aprs(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct vgic_v3_cpu_if *, cpu_if, host_ctxt, 1);
+
+	__vgic_v3_save_aprs(kern_hyp_va(cpu_if));
+}
+
+static void handle___vgic_v3_restore_aprs(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct vgic_v3_cpu_if *, cpu_if, host_ctxt, 1);
+
+	__vgic_v3_restore_aprs(kern_hyp_va(cpu_if));
+}
+
+#define HANDLE_FUNC(x)	[__KVM_HOST_SMCCC_FUNC_##x] = handle_##x
+
+typedef void (*hcall_t)(struct kvm_cpu_context *);
+
+static const hcall_t host_hcall[] = {
+	HANDLE_FUNC(__kvm_vcpu_run),
+	HANDLE_FUNC(__kvm_flush_vm_context),
+	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
+	HANDLE_FUNC(__kvm_tlb_flush_vmid),
+	HANDLE_FUNC(__kvm_tlb_flush_local_vmid),
+	HANDLE_FUNC(__kvm_timer_set_cntvoff),
+	HANDLE_FUNC(__kvm_enable_ssbs),
+	HANDLE_FUNC(__vgic_v3_get_ich_vtr_el2),
+	HANDLE_FUNC(__vgic_v3_read_vmcr),
+	HANDLE_FUNC(__vgic_v3_write_vmcr),
+	HANDLE_FUNC(__vgic_v3_init_lrs),
+	HANDLE_FUNC(__kvm_get_mdcr_el2),
+	HANDLE_FUNC(__vgic_v3_save_aprs),
+	HANDLE_FUNC(__vgic_v3_restore_aprs),
+};
+
+static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(unsigned long, id, host_ctxt, 0);
+	unsigned long ret = SMCCC_RET_NOT_SUPPORTED;
+	hcall_t hcall;
+
+	id -= KVM_HOST_SMCCC_ID(0);
+
+	if (unlikely(id >= ARRAY_SIZE(host_hcall)))
+		goto inval;
+
+	hcall = host_hcall[id];
+	if (unlikely(!hcall))
+		goto inval;
+
+	hcall = kimg_hyp_va(hcall);
+
+	hcall(host_ctxt);
+	ret = SMCCC_RET_SUCCESS;
+
+inval:
+	cpu_reg(host_ctxt, 0) = ret;
 }
 
 void handle_trap(struct kvm_cpu_context *host_ctxt)
 {
 	u64 esr = read_sysreg_el2(SYS_ESR);
-	unsigned long func_id;
 
-	if (ESR_ELx_EC(esr) != ESR_ELx_EC_HVC64)
+	if (unlikely(ESR_ELx_EC(esr) != ESR_ELx_EC_HVC64))
 		hyp_panic();
 
-	func_id = host_ctxt->regs.regs[0];
-	handle_host_hcall(func_id, host_ctxt);
+	handle_host_hcall(host_ctxt);
 }
-- 
2.28.0
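
With this scheme, adding a hypercall boils down to writing a small
handler and adding one table entry. A hypothetical example (neither
__kvm_do_thing nor its arguments exist in this series):

	/* unpack the arguments from the host context... */
	static void handle___kvm_do_thing(struct kvm_cpu_context *host_ctxt)
	{
		DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
		DECLARE_REG(u64, val, host_ctxt, 2);

		/* ...and return the result in x1 */
		cpu_reg(host_ctxt, 1) = __kvm_do_thing(kern_hyp_va(mmu), val);
	}

	static const hcall_t host_hcall[] = {
		...
		HANDLE_FUNC(__kvm_do_thing),
	};

Note that the table is indexed by the __KVM_HOST_SMCCC_FUNC_* enum
value, and that handle_host_hcall() converts the function pointer with
kimg_hyp_va() before calling it, since the table holds kernel image
addresses.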


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 5/8] KVM: arm64: Turn host HVC handling into a dispatch table
@ 2020-10-26  9:51   ` Marc Zyngier
  0 siblings, 0 replies; 45+ messages in thread
From: Marc Zyngier @ 2020-10-26  9:51 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm; +Cc: kernel-team, Will Deacon

Now that we can use function pointer, use a dispatch table to call
the individual HVC handlers, leading to more maintainable code.

Further improvements include helpers to declare the mapping of
local variables to values passed in the host context.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kernel/image-vars.h     |   1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 227 +++++++++++++++++------------
 2 files changed, 134 insertions(+), 94 deletions(-)

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 61684a500914..b5b0fdd1043c 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -64,6 +64,7 @@ __efistub__ctype		= _ctype;
 /* Alternative callbacks for init-time patching of nVHE hyp code. */
 KVM_NVHE_ALIAS(kvm_patch_vector_branch);
 KVM_NVHE_ALIAS(kvm_update_va_mask);
+KVM_NVHE_ALIAS(kvm_update_kimg_phys_offset);
 
 /* Global kernel state accessed by nVHE hyp code. */
 KVM_NVHE_ALIAS(kvm_vgic_global_state);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index e2eafe2c93af..2af8a5e902af 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -12,106 +12,145 @@
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
 
-#include <kvm/arm_hypercalls.h>
-
-static void handle_host_hcall(unsigned long func_id,
-			      struct kvm_cpu_context *host_ctxt)
-{
-	unsigned long ret = 0;
-
-	switch (func_id) {
-	case KVM_HOST_SMCCC_FUNC(__kvm_vcpu_run): {
-		unsigned long r1 = host_ctxt->regs.regs[1];
-		struct kvm_vcpu *vcpu = (struct kvm_vcpu *)r1;
-
-		ret = __kvm_vcpu_run(kern_hyp_va(vcpu));
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__kvm_flush_vm_context):
-		__kvm_flush_vm_context();
-		break;
-	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_vmid_ipa): {
-		unsigned long r1 = host_ctxt->regs.regs[1];
-		struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)r1;
-		phys_addr_t ipa = host_ctxt->regs.regs[2];
-		int level = host_ctxt->regs.regs[3];
-
-		__kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_vmid): {
-		unsigned long r1 = host_ctxt->regs.regs[1];
-		struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)r1;
-
-		__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_local_vmid): {
-		unsigned long r1 = host_ctxt->regs.regs[1];
-		struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)r1;
-
-		__kvm_tlb_flush_local_vmid(kern_hyp_va(mmu));
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__kvm_timer_set_cntvoff): {
-		u64 cntvoff = host_ctxt->regs.regs[1];
-
-		__kvm_timer_set_cntvoff(cntvoff);
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__kvm_enable_ssbs):
-		__kvm_enable_ssbs();
-		break;
-	case KVM_HOST_SMCCC_FUNC(__vgic_v3_get_ich_vtr_el2):
-		ret = __vgic_v3_get_ich_vtr_el2();
-		break;
-	case KVM_HOST_SMCCC_FUNC(__vgic_v3_read_vmcr):
-		ret = __vgic_v3_read_vmcr();
-		break;
-	case KVM_HOST_SMCCC_FUNC(__vgic_v3_write_vmcr): {
-		u32 vmcr = host_ctxt->regs.regs[1];
-
-		__vgic_v3_write_vmcr(vmcr);
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__vgic_v3_init_lrs):
-		__vgic_v3_init_lrs();
-		break;
-	case KVM_HOST_SMCCC_FUNC(__kvm_get_mdcr_el2):
-		ret = __kvm_get_mdcr_el2();
-		break;
-	case KVM_HOST_SMCCC_FUNC(__vgic_v3_save_aprs): {
-		unsigned long r1 = host_ctxt->regs.regs[1];
-		struct vgic_v3_cpu_if *cpu_if = (struct vgic_v3_cpu_if *)r1;
-
-		__vgic_v3_save_aprs(kern_hyp_va(cpu_if));
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__vgic_v3_restore_aprs): {
-		unsigned long r1 = host_ctxt->regs.regs[1];
-		struct vgic_v3_cpu_if *cpu_if = (struct vgic_v3_cpu_if *)r1;
-
-		__vgic_v3_restore_aprs(kern_hyp_va(cpu_if));
-		break;
-	}
-	default:
-		/* Invalid host HVC. */
-		host_ctxt->regs.regs[0] = SMCCC_RET_NOT_SUPPORTED;
-		return;
-	}
-
-	host_ctxt->regs.regs[0] = SMCCC_RET_SUCCESS;
-	host_ctxt->regs.regs[1] = ret;
+#define cpu_reg(ctxt, r)	(ctxt)->regs.regs[r]
+#define DECLARE_REG(type, name, ctxt, reg)	\
+				type name = (type)cpu_reg(ctxt, (reg))
+
+static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
+
+	cpu_reg(host_ctxt, 1) =  __kvm_vcpu_run(kern_hyp_va(vcpu));
+}
+
+static void handle___kvm_flush_vm_context(struct kvm_cpu_context *host_ctxt)
+{
+	__kvm_flush_vm_context();
+}
+
+static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+	DECLARE_REG(phys_addr_t, ipa, host_ctxt, 2);
+	DECLARE_REG(int, level, host_ctxt, 3);
+
+	__kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
+}
+
+static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+
+	__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
+}
+
+static void handle___kvm_tlb_flush_local_vmid(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+
+	__kvm_tlb_flush_local_vmid(kern_hyp_va(mmu));
+}
+
+static void handle___kvm_timer_set_cntvoff(struct kvm_cpu_context *host_ctxt)
+{
+	__kvm_timer_set_cntvoff(cpu_reg(host_ctxt, 1));
+}
+
+static void handle___kvm_enable_ssbs(struct kvm_cpu_context *host_ctxt)
+{
+	__kvm_enable_ssbs();
+}
+
+static void handle___vgic_v3_get_ich_vtr_el2(struct kvm_cpu_context *host_ctxt)
+{
+	cpu_reg(host_ctxt, 1) = __vgic_v3_get_ich_vtr_el2();
+}
+
+static void handle___vgic_v3_read_vmcr(struct kvm_cpu_context *host_ctxt)
+{
+	cpu_reg(host_ctxt, 1) = __vgic_v3_read_vmcr();
+}
+
+static void handle___vgic_v3_write_vmcr(struct kvm_cpu_context *host_ctxt)
+{
+	__vgic_v3_write_vmcr(cpu_reg(host_ctxt, 1));
+}
+
+static void handle___vgic_v3_init_lrs(struct kvm_cpu_context *host_ctxt)
+{
+	__vgic_v3_init_lrs();
+}
+
+static void handle___kvm_get_mdcr_el2(struct kvm_cpu_context *host_ctxt)
+{
+	cpu_reg(host_ctxt, 1) = __kvm_get_mdcr_el2();
+}
+
+static void handle___vgic_v3_save_aprs(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct vgic_v3_cpu_if *, cpu_if, host_ctxt, 1);
+
+	__vgic_v3_save_aprs(kern_hyp_va(cpu_if));
+}
+
+static void handle___vgic_v3_restore_aprs(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct vgic_v3_cpu_if *, cpu_if, host_ctxt, 1);
+
+	__vgic_v3_restore_aprs(kern_hyp_va(cpu_if));
+}
+
+#define HANDLE_FUNC(x)	[__KVM_HOST_SMCCC_FUNC_##x] = handle_##x
+
+typedef void (*hcall_t)(struct kvm_cpu_context *);
+
+static const hcall_t host_hcall[] = {
+	HANDLE_FUNC(__kvm_vcpu_run),
+	HANDLE_FUNC(__kvm_flush_vm_context),
+	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
+	HANDLE_FUNC(__kvm_tlb_flush_vmid),
+	HANDLE_FUNC(__kvm_tlb_flush_local_vmid),
+	HANDLE_FUNC(__kvm_timer_set_cntvoff),
+	HANDLE_FUNC(__kvm_enable_ssbs),
+	HANDLE_FUNC(__vgic_v3_get_ich_vtr_el2),
+	HANDLE_FUNC(__vgic_v3_read_vmcr),
+	HANDLE_FUNC(__vgic_v3_write_vmcr),
+	HANDLE_FUNC(__vgic_v3_init_lrs),
+	HANDLE_FUNC(__kvm_get_mdcr_el2),
+	HANDLE_FUNC(__vgic_v3_save_aprs),
+	HANDLE_FUNC(__vgic_v3_restore_aprs),
+};
+
+static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(unsigned long, id, host_ctxt, 0);
+	unsigned long ret = SMCCC_RET_NOT_SUPPORTED;
+	hcall_t hcall;
+
+	id -= KVM_HOST_SMCCC_ID(0);
+
+	if (unlikely(id >= ARRAY_SIZE(host_hcall)))
+		goto inval;
+
+	hcall = host_hcall[id];
+	if (unlikely(!hcall))
+		goto inval;
+
+	hcall = kimg_hyp_va(hcall);
+
+	hcall(host_ctxt);
+	ret = SMCCC_RET_SUCCESS;
+
+inval:
+	cpu_reg(host_ctxt, 0) = ret;
 }
 
 void handle_trap(struct kvm_cpu_context *host_ctxt)
 {
 	u64 esr = read_sysreg_el2(SYS_ESR);
-	unsigned long func_id;
 
-	if (ESR_ELx_EC(esr) != ESR_ELx_EC_HVC64)
+	if (unlikely(ESR_ELx_EC(esr) != ESR_ELx_EC_HVC64))
 		hyp_panic();
 
-	func_id = host_ctxt->regs.regs[0];
-	handle_host_hcall(func_id, host_ctxt);
+	handle_host_hcall(host_ctxt);
 }
-- 
2.28.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 5/8] KVM: arm64: Turn host HVC handling into a dispatch table
@ 2020-10-26  9:51   ` Marc Zyngier
  0 siblings, 0 replies; 45+ messages in thread
From: Marc Zyngier @ 2020-10-26  9:51 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm
  Cc: Quentin Perret, kernel-team, Suzuki K Poulose, James Morse,
	Andrew Scull, Will Deacon, Julien Thierry

Now that we can use function pointer, use a dispatch table to call
the individual HVC handlers, leading to more maintainable code.

Further improvements include helpers to declare the mapping of
local variables to values passed in the host context.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kernel/image-vars.h     |   1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 227 +++++++++++++++++------------
 2 files changed, 134 insertions(+), 94 deletions(-)

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 61684a500914..b5b0fdd1043c 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -64,6 +64,7 @@ __efistub__ctype		= _ctype;
 /* Alternative callbacks for init-time patching of nVHE hyp code. */
 KVM_NVHE_ALIAS(kvm_patch_vector_branch);
 KVM_NVHE_ALIAS(kvm_update_va_mask);
+KVM_NVHE_ALIAS(kvm_update_kimg_phys_offset);
 
 /* Global kernel state accessed by nVHE hyp code. */
 KVM_NVHE_ALIAS(kvm_vgic_global_state);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index e2eafe2c93af..2af8a5e902af 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -12,106 +12,145 @@
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
 
-#include <kvm/arm_hypercalls.h>
-
-static void handle_host_hcall(unsigned long func_id,
-			      struct kvm_cpu_context *host_ctxt)
-{
-	unsigned long ret = 0;
-
-	switch (func_id) {
-	case KVM_HOST_SMCCC_FUNC(__kvm_vcpu_run): {
-		unsigned long r1 = host_ctxt->regs.regs[1];
-		struct kvm_vcpu *vcpu = (struct kvm_vcpu *)r1;
-
-		ret = __kvm_vcpu_run(kern_hyp_va(vcpu));
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__kvm_flush_vm_context):
-		__kvm_flush_vm_context();
-		break;
-	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_vmid_ipa): {
-		unsigned long r1 = host_ctxt->regs.regs[1];
-		struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)r1;
-		phys_addr_t ipa = host_ctxt->regs.regs[2];
-		int level = host_ctxt->regs.regs[3];
-
-		__kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_vmid): {
-		unsigned long r1 = host_ctxt->regs.regs[1];
-		struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)r1;
-
-		__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_local_vmid): {
-		unsigned long r1 = host_ctxt->regs.regs[1];
-		struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)r1;
-
-		__kvm_tlb_flush_local_vmid(kern_hyp_va(mmu));
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__kvm_timer_set_cntvoff): {
-		u64 cntvoff = host_ctxt->regs.regs[1];
-
-		__kvm_timer_set_cntvoff(cntvoff);
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__kvm_enable_ssbs):
-		__kvm_enable_ssbs();
-		break;
-	case KVM_HOST_SMCCC_FUNC(__vgic_v3_get_ich_vtr_el2):
-		ret = __vgic_v3_get_ich_vtr_el2();
-		break;
-	case KVM_HOST_SMCCC_FUNC(__vgic_v3_read_vmcr):
-		ret = __vgic_v3_read_vmcr();
-		break;
-	case KVM_HOST_SMCCC_FUNC(__vgic_v3_write_vmcr): {
-		u32 vmcr = host_ctxt->regs.regs[1];
-
-		__vgic_v3_write_vmcr(vmcr);
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__vgic_v3_init_lrs):
-		__vgic_v3_init_lrs();
-		break;
-	case KVM_HOST_SMCCC_FUNC(__kvm_get_mdcr_el2):
-		ret = __kvm_get_mdcr_el2();
-		break;
-	case KVM_HOST_SMCCC_FUNC(__vgic_v3_save_aprs): {
-		unsigned long r1 = host_ctxt->regs.regs[1];
-		struct vgic_v3_cpu_if *cpu_if = (struct vgic_v3_cpu_if *)r1;
-
-		__vgic_v3_save_aprs(kern_hyp_va(cpu_if));
-		break;
-	}
-	case KVM_HOST_SMCCC_FUNC(__vgic_v3_restore_aprs): {
-		unsigned long r1 = host_ctxt->regs.regs[1];
-		struct vgic_v3_cpu_if *cpu_if = (struct vgic_v3_cpu_if *)r1;
-
-		__vgic_v3_restore_aprs(kern_hyp_va(cpu_if));
-		break;
-	}
-	default:
-		/* Invalid host HVC. */
-		host_ctxt->regs.regs[0] = SMCCC_RET_NOT_SUPPORTED;
-		return;
-	}
-
-	host_ctxt->regs.regs[0] = SMCCC_RET_SUCCESS;
-	host_ctxt->regs.regs[1] = ret;
+#define cpu_reg(ctxt, r)	(ctxt)->regs.regs[r]
+#define DECLARE_REG(type, name, ctxt, reg)	\
+				type name = (type)cpu_reg(ctxt, (reg))
+
+static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
+
+	cpu_reg(host_ctxt, 1) =  __kvm_vcpu_run(kern_hyp_va(vcpu));
+}
+
+static void handle___kvm_flush_vm_context(struct kvm_cpu_context *host_ctxt)
+{
+	__kvm_flush_vm_context();
+}
+
+static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+	DECLARE_REG(phys_addr_t, ipa, host_ctxt, 2);
+	DECLARE_REG(int, level, host_ctxt, 3);
+
+	__kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
+}
+
+static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+
+	__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
+}
+
+static void handle___kvm_tlb_flush_local_vmid(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+
+	__kvm_tlb_flush_local_vmid(kern_hyp_va(mmu));
+}
+
+static void handle___kvm_timer_set_cntvoff(struct kvm_cpu_context *host_ctxt)
+{
+	__kvm_timer_set_cntvoff(cpu_reg(host_ctxt, 1));
+}
+
+static void handle___kvm_enable_ssbs(struct kvm_cpu_context *host_ctxt)
+{
+	__kvm_enable_ssbs();
+}
+
+static void handle___vgic_v3_get_ich_vtr_el2(struct kvm_cpu_context *host_ctxt)
+{
+	cpu_reg(host_ctxt, 1) = __vgic_v3_get_ich_vtr_el2();
+}
+
+static void handle___vgic_v3_read_vmcr(struct kvm_cpu_context *host_ctxt)
+{
+	cpu_reg(host_ctxt, 1) = __vgic_v3_read_vmcr();
+}
+
+static void handle___vgic_v3_write_vmcr(struct kvm_cpu_context *host_ctxt)
+{
+	__vgic_v3_write_vmcr(cpu_reg(host_ctxt, 1));
+}
+
+static void handle___vgic_v3_init_lrs(struct kvm_cpu_context *host_ctxt)
+{
+	__vgic_v3_init_lrs();
+}
+
+static void handle___kvm_get_mdcr_el2(struct kvm_cpu_context *host_ctxt)
+{
+	cpu_reg(host_ctxt, 1) = __kvm_get_mdcr_el2();
+}
+
+static void handle___vgic_v3_save_aprs(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct vgic_v3_cpu_if *, cpu_if, host_ctxt, 1);
+
+	__vgic_v3_save_aprs(kern_hyp_va(cpu_if));
+}
+
+static void handle___vgic_v3_restore_aprs(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct vgic_v3_cpu_if *, cpu_if, host_ctxt, 1);
+
+	__vgic_v3_restore_aprs(kern_hyp_va(cpu_if));
+}
+
+#define HANDLE_FUNC(x)	[__KVM_HOST_SMCCC_FUNC_##x] = handle_##x
+
+typedef void (*hcall_t)(struct kvm_cpu_context *);
+
+static const hcall_t host_hcall[] = {
+	HANDLE_FUNC(__kvm_vcpu_run),
+	HANDLE_FUNC(__kvm_flush_vm_context),
+	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
+	HANDLE_FUNC(__kvm_tlb_flush_vmid),
+	HANDLE_FUNC(__kvm_tlb_flush_local_vmid),
+	HANDLE_FUNC(__kvm_timer_set_cntvoff),
+	HANDLE_FUNC(__kvm_enable_ssbs),
+	HANDLE_FUNC(__vgic_v3_get_ich_vtr_el2),
+	HANDLE_FUNC(__vgic_v3_read_vmcr),
+	HANDLE_FUNC(__vgic_v3_write_vmcr),
+	HANDLE_FUNC(__vgic_v3_init_lrs),
+	HANDLE_FUNC(__kvm_get_mdcr_el2),
+	HANDLE_FUNC(__vgic_v3_save_aprs),
+	HANDLE_FUNC(__vgic_v3_restore_aprs),
+};
+
+static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(unsigned long, id, host_ctxt, 0);
+	unsigned long ret = SMCCC_RET_NOT_SUPPORTED;
+	hcall_t hcall;
+
+	id -= KVM_HOST_SMCCC_ID(0);
+
+	if (unlikely(id >= ARRAY_SIZE(host_hcall)))
+		goto inval;
+
+	hcall = host_hcall[id];
+	if (unlikely(!hcall))
+		goto inval;
+
+	hcall = kimg_hyp_va(hcall);
+
+	hcall(host_ctxt);
+	ret = SMCCC_RET_SUCCESS;
+
+inval:
+	cpu_reg(host_ctxt, 0) = ret;
 }
 
 void handle_trap(struct kvm_cpu_context *host_ctxt)
 {
 	u64 esr = read_sysreg_el2(SYS_ESR);
-	unsigned long func_id;
 
-	if (ESR_ELx_EC(esr) != ESR_ELx_EC_HVC64)
+	if (unlikely(ESR_ELx_EC(esr) != ESR_ELx_EC_HVC64))
 		hyp_panic();
 
-	func_id = host_ctxt->regs.regs[0];
-	handle_host_hcall(func_id, host_ctxt);
+	handle_host_hcall(host_ctxt);
 }
-- 
2.28.0
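
As an aside, the dispatch pattern the patch adopts is easy to model in
isolation. Below is a minimal, self-contained C sketch of the same
id-to-handler table, including a DECLARE_REG-style accessor; the context
layout, the example hcall and the ids are simplified stand-ins, not the
kernel's actual definitions.

#include <stdint.h>
#include <stdio.h>

#define SMCCC_RET_SUCCESS	0UL
#define SMCCC_RET_NOT_SUPPORTED	(~0UL)		/* i.e. -1 */

struct cpu_context { uint64_t regs[4]; };

#define cpu_reg(ctxt, r)	(ctxt)->regs[r]
#define DECLARE_REG(type, name, ctxt, reg)	\
			type name = (type)cpu_reg(ctxt, (reg))

typedef void (*hcall_t)(struct cpu_context *);

/* Toy hcall: returns the sum of its two arguments in reg 1. */
static void handle_add(struct cpu_context *host_ctxt)
{
	DECLARE_REG(uint64_t, a, host_ctxt, 1);
	DECLARE_REG(uint64_t, b, host_ctxt, 2);

	cpu_reg(host_ctxt, 1) = a + b;
}

static const hcall_t host_hcall[] = {
	[0] = handle_add,	/* any gap would be a NULL entry */
};

static void dispatch(struct cpu_context *host_ctxt)
{
	DECLARE_REG(uint64_t, id, host_ctxt, 0);
	uint64_t ret = SMCCC_RET_NOT_SUPPORTED;
	hcall_t hcall;

	if (id >= sizeof(host_hcall) / sizeof(host_hcall[0]))
		goto inval;

	hcall = host_hcall[id];
	if (!hcall)
		goto inval;

	hcall(host_ctxt);
	ret = SMCCC_RET_SUCCESS;
inval:
	cpu_reg(host_ctxt, 0) = ret;
}

int main(void)
{
	struct cpu_context ctxt = { .regs = { 0, 2, 3, 0 } };

	dispatch(&ctxt);	/* prints: status=0 result=5 */
	printf("status=%lu result=%lu\n",
	       (unsigned long)ctxt.regs[0], (unsigned long)ctxt.regs[1]);
	return 0;
}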


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 6/8] KVM: arm64: Patch kimage_voffset instead of loading the EL1 value
@ 2020-10-26  9:51   ` Marc Zyngier
  0 siblings, 0 replies; 45+ messages in thread
From: Marc Zyngier @ 2020-10-26  9:51 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, Andrew Scull,
	Will Deacon, Quentin Perret, kernel-team

Directly using the kimage_voffset variable is fine for now, but
will become more problematic as we start distrusting EL1.

Instead, patch the kimage_voffset into the HYP text, ensuring
we don't have to load an untrusted value later on.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kernel/image-vars.h | 4 +---
 arch/arm64/kvm/hyp/nvhe/host.S | 7 ++++++-
 arch/arm64/kvm/va_layout.c     | 6 ++++++
 3 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index b5b0fdd1043c..259c704a548a 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -65,13 +65,11 @@ __efistub__ctype		= _ctype;
 KVM_NVHE_ALIAS(kvm_patch_vector_branch);
 KVM_NVHE_ALIAS(kvm_update_va_mask);
 KVM_NVHE_ALIAS(kvm_update_kimg_phys_offset);
+KVM_NVHE_ALIAS(kvm_get_kimage_voffset);
 
 /* Global kernel state accessed by nVHE hyp code. */
 KVM_NVHE_ALIAS(kvm_vgic_global_state);
 
-/* Kernel constant needed to compute idmap addresses. */
-KVM_NVHE_ALIAS(kimage_voffset);
-
 /* Kernel symbols used to call panic() from nVHE hyp code (via ERET). */
 KVM_NVHE_ALIAS(__hyp_panic_string);
 KVM_NVHE_ALIAS(panic);
diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index ed27f06a31ba..e2d316d13180 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -115,7 +115,12 @@ SYM_FUNC_END(__hyp_do_panic)
 	 * Preserve x0-x4, which may contain stub parameters.
 	 */
 	ldr	x5, =__kvm_handle_stub_hvc
-	ldr_l	x6, kimage_voffset
+alternative_cb kvm_get_kimage_voffset
+	movz	x6, #0
+	movk	x6, #0, lsl #16
+	movk	x6, #0, lsl #32
+	movk	x6, #0, lsl #48
+alternative_cb_end
 
 	/* x5 = __pa(x5) */
 	sub	x5, x5, x6
diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index 1d00d2cb93fd..d61117805de0 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -251,3 +251,9 @@ void kvm_update_kimg_phys_offset(struct alt_instr *alt,
 {
 	generate_mov_q(kimage_voffset + PHYS_OFFSET, origptr, updptr, nr_inst);
 }
+
+void kvm_get_kimage_voffset(struct alt_instr *alt,
+			    __le32 *origptr, __le32 *updptr, int nr_inst)
+{
+	generate_mov_q(kimage_voffset, origptr, updptr, nr_inst);
+}
-- 
2.28.0
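
For illustration, the four instructions in the patched sequence each
carry one 16-bit slice of the 64-bit constant. A trivial sketch of the
split (the voffset value below is made up):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t voffset = 0xfffffe0010000000ULL;	/* hypothetical */
	int shift;

	for (shift = 0; shift < 64; shift += 16)
		printf("mov%c\tx6, #0x%04x, lsl #%d\n",
		       shift ? 'k' : 'z',
		       (unsigned int)((voffset >> shift) & 0xffff),
		       shift);
	return 0;
}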


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 7/8] KVM: arm64: Simplify __kvm_enable_ssbs()
@ 2020-10-26  9:51   ` Marc Zyngier
  0 siblings, 0 replies; 45+ messages in thread
From: Marc Zyngier @ 2020-10-26  9:51 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, Andrew Scull,
	Will Deacon, Quentin Perret, kernel-team

Move the setting of SSBS directly into the HVC handler, using
the C helpers rather than the inline assembly code.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_asm.h    |  2 --
 arch/arm64/include/asm/sysreg.h     |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c  |  6 +++++-
 arch/arm64/kvm/hyp/nvhe/sysreg-sr.c | 11 -----------
 4 files changed, 6 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 54387ccd1ab2..a542c422a036 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -189,8 +189,6 @@ extern void __kvm_timer_set_cntvoff(u64 cntvoff);
 
 extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
 
-extern void __kvm_enable_ssbs(void);
-
 extern u64 __vgic_v3_get_ich_vtr_el2(void);
 extern u64 __vgic_v3_read_vmcr(void);
 extern void __vgic_v3_write_vmcr(u32 vmcr);
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index d52c1b3ce589..c9423f36e05c 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -461,6 +461,7 @@
 
 #define SYS_PMCCFILTR_EL0		sys_reg(3, 3, 14, 15, 7)
 
+#define SYS_SCTLR_EL2			sys_reg(3, 4, 1, 0, 0)
 #define SYS_ZCR_EL2			sys_reg(3, 4, 1, 2, 0)
 #define SYS_DACR32_EL2			sys_reg(3, 4, 3, 0, 0)
 #define SYS_SPSR_EL2			sys_reg(3, 4, 4, 0, 0)
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 2af8a5e902af..5125e934da22 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -58,7 +58,11 @@ static void handle___kvm_timer_set_cntvoff(struct kvm_cpu_context *host_ctxt)
 
 static void handle___kvm_enable_ssbs(struct kvm_cpu_context *host_ctxt)
 {
-	__kvm_enable_ssbs();
+	u64 tmp;
+
+	tmp = read_sysreg_el2(SYS_SCTLR);
+	tmp |= SCTLR_ELx_DSSBS;
+	write_sysreg_el2(tmp, SYS_SCTLR);
 }
 
 static void handle___vgic_v3_get_ich_vtr_el2(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c
index 88a25fc8fcd3..29305022bc04 100644
--- a/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c
@@ -33,14 +33,3 @@ void __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt)
 	__sysreg_restore_user_state(ctxt);
 	__sysreg_restore_el2_return_state(ctxt);
 }
-
-void __kvm_enable_ssbs(void)
-{
-	u64 tmp;
-
-	asm volatile(
-	"mrs	%0, sctlr_el2\n"
-	"orr	%0, %0, %1\n"
-	"msr	sctlr_el2, %0"
-	: "=&r" (tmp) : "L" (SCTLR_ELx_DSSBS));
-}
-- 
2.28.0
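
The new SYS_SCTLR_EL2 definition is the standard op0/op1/CRn/CRm/op2
packing used for all system register encodings. A small sketch of the
packing, with the field shifts matching the kernel's sysreg.h at the
time:

#include <stdint.h>
#include <stdio.h>

static uint32_t sys_reg(uint32_t op0, uint32_t op1, uint32_t crn,
			uint32_t crm, uint32_t op2)
{
	/* Field positions match the MRS/MSR instruction encoding. */
	return (op0 << 19) | (op1 << 16) | (crn << 12) |
	       (crm << 8) | (op2 << 5);
}

int main(void)
{
	/* SCTLR_EL2 is S3_4_C1_C0_0, hence sys_reg(3, 4, 1, 0, 0). */
	printf("SYS_SCTLR_EL2 = %#x\n", sys_reg(3, 4, 1, 0, 0));
	return 0;
}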


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 8/8] KVM: arm64: Avoid repetitive stack access on host EL1 to EL2 exception
@ 2020-10-26  9:51   ` Marc Zyngier
  0 siblings, 0 replies; 45+ messages in thread
From: Marc Zyngier @ 2020-10-26  9:51 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, Andrew Scull,
	Will Deacon, Quentin Perret, kernel-team

Registers x0/x1 get repeatedly pushed and popped during a host
HVC call. Instead, leave the registers on the stack, trading
a store instruction on the fast path for an add on the slow path.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/hyp/nvhe/host.S | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index e2d316d13180..7b69f9ff8da0 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -13,8 +13,6 @@
 	.text
 
 SYM_FUNC_START(__host_exit)
-	stp	x0, x1, [sp, #-16]!
-
 	get_host_ctxt	x0, x1
 
 	/* Store the host regs x2 and x3 */
@@ -99,13 +97,14 @@ SYM_FUNC_END(__hyp_do_panic)
 	mrs	x0, esr_el2
 	lsr	x0, x0, #ESR_ELx_EC_SHIFT
 	cmp	x0, #ESR_ELx_EC_HVC64
-	ldp	x0, x1, [sp], #16
+	ldp	x0, x1, [sp]		// Don't fixup the stack yet
 	b.ne	__host_exit
 
 	/* Check for a stub HVC call */
 	cmp	x0, #HVC_STUB_HCALL_NR
 	b.hs	__host_exit
 
+	add	sp, sp, #16
 	/*
 	 * Compute the idmap address of __kvm_handle_stub_hvc and
 	 * jump there. Since we use kimage_voffset, do not use the
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* Re: [PATCH 3/8] KVM: arm64: Drop useless PAN setting on host EL1 to EL2 transition
@ 2020-10-26 10:48     ` Vladimir Murzin
  0 siblings, 0 replies; 45+ messages in thread
From: Vladimir Murzin @ 2020-10-26 10:48 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm; +Cc: kernel-team, Will Deacon

On 10/26/20 9:51 AM, Marc Zyngier wrote:
> Setting PSTATE.PAN when entering EL2 on nVHE doesn't make much
> sense as this bit only means something for translation regimes
> that include EL0. This obviously isn't the case in the nVHE case,
> so let's drop this setting.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/hyp/nvhe/host.S | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
> index ff9a0f547b9f..ed27f06a31ba 100644
> --- a/arch/arm64/kvm/hyp/nvhe/host.S
> +++ b/arch/arm64/kvm/hyp/nvhe/host.S
> @@ -17,8 +17,6 @@ SYM_FUNC_START(__host_exit)
>  
>  	get_host_ctxt	x0, x1
>  
> -	ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN)
> -
>  	/* Store the host regs x2 and x3 */
>  	stp	x2, x3,   [x0, #CPU_XREG_OFFSET(2)]
>  
> 

It was originally introduced in cb96408da4e1 (arm64: KVM: VHE: reset PSTATE.PAN on entry to EL2)
and indeed only applies to VHE (I even remember some attempts to put it under CONFIG_ARM64_VHE).

So, if it helps:
 
    Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>

Cheers
Vladimir

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 1/8] KVM: arm64: Don't corrupt tpidr_el2 on failed HVC call
@ 2020-10-26 14:36     ` Quentin Perret
  0 siblings, 0 replies; 45+ messages in thread
From: Quentin Perret @ 2020-10-26 14:36 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, kvmarm, kvm, James Morse, Julien Thierry,
	Suzuki K Poulose, Andrew Scull, Will Deacon, kernel-team

On Monday 26 Oct 2020 at 09:51:09 (+0000), Marc Zyngier wrote:
> The hyp-init code starts by stashing a register in TPIDR_EL2
> in order to free a register. This happens no matter if the
> HVC call is legal or not.
> 
> Although nothing wrong seems to come out of it, it feels odd
> to alter the EL2 state for something that eventually returns
> an error.
> 
> Instead, use the fact that we know exactly which bits of the
> __kvm_hyp_init call are non-zero to perform the check with
> a series of EOR/ROR instructions, combined with a build-time
> check that the value is the one we expect.

Alternatively, could we make __kvm_hyp_init non-SMCCC compliant? While I
understand how it makes sense to be compliant for 'proper' HVCs, this
one really is an odd one that only makes sense in a very transient state.
That would let us define our own convention, saying x0-x18 can
be clobbered as on any function call, which eradicates the issue Andrew
tried to avoid with this tpidr_el2 trick.

Thoughts?

Thanks,
Quentin
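
For reference, the EOR/ROR check described in the quoted commit message
can be modelled in C: XOR-ing the expected constant into the register 16
bits at a time, rotating between steps, yields zero exactly when the
register held the expected value, since every step is invertible. The
function id below is made up:

#include <stdint.h>
#include <stdio.h>

static uint64_t ror64(uint64_t v, unsigned int n)
{
	return (v >> n) | (v << (64 - n));
}

/* Returns 1 iff x equals the expected build-time constant. */
static int check_id(uint64_t x)
{
	const uint64_t expected = 0xc4000012ULL;	/* hypothetical id */
	int shift;

	for (shift = 0; shift < 64; shift += 16) {
		x ^= (expected >> shift) & 0xffff;
		x = ror64(x, 16);
	}
	/* Four 16-bit rotations add up to a full turn. */
	return x == 0;
}

int main(void)
{
	printf("%d %d\n", check_id(0xc4000012ULL),
	       check_id(0xdeadbeefULL));	/* prints: 1 0 */
	return 0;
}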

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 2/8] KVM: arm64: Remove leftover kern_hyp_va() in nVHE TLB invalidation
@ 2020-11-02 13:30     ` Alexandru Elisei
  0 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-11-02 13:30 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm; +Cc: kernel-team, Will Deacon

Hi Marc,

On 10/26/20 9:51 AM, Marc Zyngier wrote:
> The new calling convention says that pointers coming from the SMCCC
> interface are turned into their HYP version in the host HVC handler.
> However, there is still a stray kern_hyp_va() in the TLB invalidation
> code, which could result in a corrupted pointer.
>
> Drop the spurious conversion.
>
> Fixes: a071261d9318 ("KVM: arm64: nVHE: Fix pointers during SMCCC convertion")
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/hyp/nvhe/tlb.c | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
> index 39ca71ab8866..fbde89a2c6e8 100644
> --- a/arch/arm64/kvm/hyp/nvhe/tlb.c
> +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
> @@ -128,7 +128,6 @@ void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
>  	struct tlb_inv_context cxt;
>  
>  	/* Switch to requested VMID */
> -	mmu = kern_hyp_va(mmu);
>  	__tlb_switch_to_guest(mmu, &cxt);
>  
>  	__tlbi(vmalle1);

Looks fine to me; handle_host_hcall() already does the required
transformation when handling the __kvm_tlb_flush_local_vmid function id:

	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_local_vmid): {
		unsigned long r1 = host_ctxt->regs.regs[1];
		struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)r1;

		__kvm_tlb_flush_local_vmid(kern_hyp_va(mmu));
		break;
	}

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
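
To see why a stray second conversion corrupts the pointer: the nVHE
kern_hyp_va masks off the kernel VA bits and then folds in a randomized
tag by addition, so applying it twice does not give the same result as
applying it once. A toy model (mask, tag and address are all made up):

#include <stdint.h>
#include <stdio.h>

static uint64_t toy_kern_hyp_va(uint64_t va)
{
	const uint64_t va_mask = (1ULL << 48) - 1;	/* assumed */
	const uint64_t tag     = 0x123ULL << 36;	/* assumed random */

	return (va & va_mask) + tag;
}

int main(void)
{
	uint64_t kva = 0xffff000012345678ULL;

	/* The two results differ: the conversion is not idempotent. */
	printf("once:  %#llx\n", (unsigned long long)toy_kern_hyp_va(kva));
	printf("twice: %#llx\n",
	       (unsigned long long)toy_kern_hyp_va(toy_kern_hyp_va(kva)));
	return 0;
}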


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 5/8] KVM: arm64: Turn host HVC handling into a dispatch table
@ 2020-11-02 14:19     ` Alexandru Elisei
  0 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-11-02 14:19 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm; +Cc: kernel-team, Will Deacon

Hi Marc,

On 10/26/20 9:51 AM, Marc Zyngier wrote:
> Now that we can use function pointers, use a dispatch table to call
> the individual HVC handlers, leading to more maintainable code.
>
> Further improvements include helpers to declare the mapping of
> local variables to values passed in the host context.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kernel/image-vars.h     |   1 +
>  arch/arm64/kvm/hyp/nvhe/hyp-main.c | 227 +++++++++++++++++------------
>  2 files changed, 134 insertions(+), 94 deletions(-)
>
> diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
> index 61684a500914..b5b0fdd1043c 100644
> --- a/arch/arm64/kernel/image-vars.h
> +++ b/arch/arm64/kernel/image-vars.h
> @@ -64,6 +64,7 @@ __efistub__ctype		= _ctype;
>  /* Alternative callbacks for init-time patching of nVHE hyp code. */
>  KVM_NVHE_ALIAS(kvm_patch_vector_branch);
>  KVM_NVHE_ALIAS(kvm_update_va_mask);
> +KVM_NVHE_ALIAS(kvm_update_kimg_phys_offset);
>  
>  /* Global kernel state accessed by nVHE hyp code. */
>  KVM_NVHE_ALIAS(kvm_vgic_global_state);
> diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> index e2eafe2c93af..2af8a5e902af 100644
> --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> @@ -12,106 +12,145 @@
>  #include <asm/kvm_hyp.h>
>  #include <asm/kvm_mmu.h>
>  
> -#include <kvm/arm_hypercalls.h>
> -
> -static void handle_host_hcall(unsigned long func_id,
> -			      struct kvm_cpu_context *host_ctxt)
> -{
> -	unsigned long ret = 0;
> -
> -	switch (func_id) {
> -	case KVM_HOST_SMCCC_FUNC(__kvm_vcpu_run): {
> -		unsigned long r1 = host_ctxt->regs.regs[1];
> -		struct kvm_vcpu *vcpu = (struct kvm_vcpu *)r1;
> -
> -		ret = __kvm_vcpu_run(kern_hyp_va(vcpu));
> -		break;
> -	}
> -	case KVM_HOST_SMCCC_FUNC(__kvm_flush_vm_context):
> -		__kvm_flush_vm_context();
> -		break;
> -	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_vmid_ipa): {
> -		unsigned long r1 = host_ctxt->regs.regs[1];
> -		struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)r1;
> -		phys_addr_t ipa = host_ctxt->regs.regs[2];
> -		int level = host_ctxt->regs.regs[3];
> -
> -		__kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
> -		break;
> -	}
> -	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_vmid): {
> -		unsigned long r1 = host_ctxt->regs.regs[1];
> -		struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)r1;
> -
> -		__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
> -		break;
> -	}
> -	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_local_vmid): {
> -		unsigned long r1 = host_ctxt->regs.regs[1];
> -		struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)r1;
> -
> -		__kvm_tlb_flush_local_vmid(kern_hyp_va(mmu));
> -		break;
> -	}
> -	case KVM_HOST_SMCCC_FUNC(__kvm_timer_set_cntvoff): {
> -		u64 cntvoff = host_ctxt->regs.regs[1];
> -
> -		__kvm_timer_set_cntvoff(cntvoff);
> -		break;
> -	}
> -	case KVM_HOST_SMCCC_FUNC(__kvm_enable_ssbs):
> -		__kvm_enable_ssbs();
> -		break;
> -	case KVM_HOST_SMCCC_FUNC(__vgic_v3_get_ich_vtr_el2):
> -		ret = __vgic_v3_get_ich_vtr_el2();
> -		break;
> -	case KVM_HOST_SMCCC_FUNC(__vgic_v3_read_vmcr):
> -		ret = __vgic_v3_read_vmcr();
> -		break;
> -	case KVM_HOST_SMCCC_FUNC(__vgic_v3_write_vmcr): {
> -		u32 vmcr = host_ctxt->regs.regs[1];
> -
> -		__vgic_v3_write_vmcr(vmcr);
> -		break;
> -	}
> -	case KVM_HOST_SMCCC_FUNC(__vgic_v3_init_lrs):
> -		__vgic_v3_init_lrs();
> -		break;
> -	case KVM_HOST_SMCCC_FUNC(__kvm_get_mdcr_el2):
> -		ret = __kvm_get_mdcr_el2();
> -		break;
> -	case KVM_HOST_SMCCC_FUNC(__vgic_v3_save_aprs): {
> -		unsigned long r1 = host_ctxt->regs.regs[1];
> -		struct vgic_v3_cpu_if *cpu_if = (struct vgic_v3_cpu_if *)r1;
> -
> -		__vgic_v3_save_aprs(kern_hyp_va(cpu_if));
> -		break;
> -	}
> -	case KVM_HOST_SMCCC_FUNC(__vgic_v3_restore_aprs): {
> -		unsigned long r1 = host_ctxt->regs.regs[1];
> -		struct vgic_v3_cpu_if *cpu_if = (struct vgic_v3_cpu_if *)r1;
> -
> -		__vgic_v3_restore_aprs(kern_hyp_va(cpu_if));
> -		break;
> -	}
> -	default:
> -		/* Invalid host HVC. */
> -		host_ctxt->regs.regs[0] = SMCCC_RET_NOT_SUPPORTED;
> -		return;
> -	}
> -
> -	host_ctxt->regs.regs[0] = SMCCC_RET_SUCCESS;
> -	host_ctxt->regs.regs[1] = ret;
> +#define cpu_reg(ctxt, r)	(ctxt)->regs.regs[r]
> +#define DECLARE_REG(type, name, ctxt, reg)	\
> +				type name = (type)cpu_reg(ctxt, (reg))
> +
> +static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
> +{
> +	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
> +
> +	cpu_reg(host_ctxt, 1) =  __kvm_vcpu_run(kern_hyp_va(vcpu));
> +}
> +
> +static void handle___kvm_flush_vm_context(struct kvm_cpu_context *host_ctxt)
> +{
> +	__kvm_flush_vm_context();
> +}
> +
> +static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt)
> +{
> +	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
> +	DECLARE_REG(phys_addr_t, ipa, host_ctxt, 2);
> +	DECLARE_REG(int, level, host_ctxt, 3);
> +
> +	__kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
> +}
> +
> +static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
> +{
> +	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
> +
> +	__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
> +}
> +
> +static void handle___kvm_tlb_flush_local_vmid(struct kvm_cpu_context *host_ctxt)
> +{
> +	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
> +
> +	__kvm_tlb_flush_local_vmid(kern_hyp_va(mmu));
> +}
> +
> +static void handle___kvm_timer_set_cntvoff(struct kvm_cpu_context *host_ctxt)
> +{
> +	__kvm_timer_set_cntvoff(cpu_reg(host_ctxt, 1));
> +}
> +
> +static void handle___kvm_enable_ssbs(struct kvm_cpu_context *host_ctxt)
> +{
> +	__kvm_enable_ssbs();
> +}
> +
> +static void handle___vgic_v3_get_ich_vtr_el2(struct kvm_cpu_context *host_ctxt)
> +{
> +	cpu_reg(host_ctxt, 1) = __vgic_v3_get_ich_vtr_el2();
> +}
> +
> +static void handle___vgic_v3_read_vmcr(struct kvm_cpu_context *host_ctxt)
> +{
> +	cpu_reg(host_ctxt, 1) = __vgic_v3_read_vmcr();
> +}
> +
> +static void handle___vgic_v3_write_vmcr(struct kvm_cpu_context *host_ctxt)
> +{
> +	__vgic_v3_write_vmcr(cpu_reg(host_ctxt, 1));
> +}
> +
> +static void handle___vgic_v3_init_lrs(struct kvm_cpu_context *host_ctxt)
> +{
> +	__vgic_v3_init_lrs();
> +}
> +
> +static void handle___kvm_get_mdcr_el2(struct kvm_cpu_context *host_ctxt)
> +{
> +	cpu_reg(host_ctxt, 1) = __kvm_get_mdcr_el2();
> +}
> +
> +static void handle___vgic_v3_save_aprs(struct kvm_cpu_context *host_ctxt)
> +{
> +	DECLARE_REG(struct vgic_v3_cpu_if *, cpu_if, host_ctxt, 1);
> +
> +	__vgic_v3_save_aprs(kern_hyp_va(cpu_if));
> +}
> +
> +static void handle___vgic_v3_restore_aprs(struct kvm_cpu_context *host_ctxt)
> +{
> +	DECLARE_REG(struct vgic_v3_cpu_if *, cpu_if, host_ctxt, 1);
> +
> +	__vgic_v3_restore_aprs(kern_hyp_va(cpu_if));
> +}
> +
> +#define HANDLE_FUNC(x)	[__KVM_HOST_SMCCC_FUNC_##x] = handle_##x
> +
> +typedef void (*hcall_t)(struct kvm_cpu_context *);
> +
> +static const hcall_t host_hcall[] = {
> +	HANDLE_FUNC(__kvm_vcpu_run),
> +	HANDLE_FUNC(__kvm_flush_vm_context),
> +	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
> +	HANDLE_FUNC(__kvm_tlb_flush_vmid),
> +	HANDLE_FUNC(__kvm_tlb_flush_local_vmid),
> +	HANDLE_FUNC(__kvm_timer_set_cntvoff),
> +	HANDLE_FUNC(__kvm_enable_ssbs),
> +	HANDLE_FUNC(__vgic_v3_get_ich_vtr_el2),
> +	HANDLE_FUNC(__vgic_v3_read_vmcr),
> +	HANDLE_FUNC(__vgic_v3_write_vmcr),
> +	HANDLE_FUNC(__vgic_v3_init_lrs),
> +	HANDLE_FUNC(__kvm_get_mdcr_el2),
> +	HANDLE_FUNC(__vgic_v3_save_aprs),
> +	HANDLE_FUNC(__vgic_v3_restore_aprs),
> +};
> +
> +static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
> +{
> +	DECLARE_REG(unsigned long, id, host_ctxt, 0);
> +	unsigned long ret = SMCCC_RET_NOT_SUPPORTED;
> +	hcall_t hcall;
> +
> +	id -= KVM_HOST_SMCCC_ID(0);
> +
> +	if (unlikely(id >= ARRAY_SIZE(host_hcall)))
> +		goto inval;
> +
> +	hcall = host_hcall[id];
> +	if (unlikely(!hcall))
> +		goto inval;
> +
> +	hcall = kimg_hyp_va(hcall);
> +
> +	hcall(host_ctxt);
> +	ret = SMCCC_RET_SUCCESS;
> +
> +inval:
> +	cpu_reg(host_ctxt, 0) = ret;
>  }
>  
>  void handle_trap(struct kvm_cpu_context *host_ctxt)
>  {
>  	u64 esr = read_sysreg_el2(SYS_ESR);
> -	unsigned long func_id;
>  
> -	if (ESR_ELx_EC(esr) != ESR_ELx_EC_HVC64)
> +	if (unlikely(ESR_ELx_EC(esr) != ESR_ELx_EC_HVC64))
>  		hyp_panic();
>  
> -	func_id = host_ctxt->regs.regs[0];
> -	handle_host_hcall(func_id, host_ctxt);
> +	handle_host_hcall(host_ctxt);
>  }

I checked that the function handlers match the cases in the old switch
statement. kvm_asm.h places no restriction on the function ids being
consecutive, but handle_host_hcall() checks that hcall is not NULL, so
we're safe against gaps in the table, or against function ids disappearing
in the future.

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
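
On the sparse-id point: designated array initializers zero any slot
that isn't explicitly named, so a hole in the function ids shows up as
a NULL entry that the !hcall check rejects. A standalone demonstration:

#include <stdio.h>

typedef void (*fn_t)(void);

static void f0(void) { }
static void f2(void) { }

static const fn_t table[] = {
	[0] = f0,
	[2] = f2,	/* index 1 is implicitly NULL */
};

int main(void)
{
	unsigned int i;

	for (i = 0; i < sizeof(table) / sizeof(table[0]); i++)
		printf("table[%u]: %s\n", i, table[i] ? "set" : "NULL");
	return 0;
}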


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 5/8] KVM: arm64: Turn host HVC handling into a dispatch table
@ 2020-11-02 14:19     ` Alexandru Elisei
  0 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-11-02 14:19 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm; +Cc: Will Deacon, kernel-team

Hi Marc,

On 10/26/20 9:51 AM, Marc Zyngier wrote:
> Now that we can use function pointer, use a dispatch table to call
> the individual HVC handlers, leading to more maintainable code.
>
> Further improvements include helpers to declare the mapping of
> local variables to values passed in the host context.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kernel/image-vars.h     |   1 +
>  arch/arm64/kvm/hyp/nvhe/hyp-main.c | 227 +++++++++++++++++------------
>  2 files changed, 134 insertions(+), 94 deletions(-)
>
> diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
> index 61684a500914..b5b0fdd1043c 100644
> --- a/arch/arm64/kernel/image-vars.h
> +++ b/arch/arm64/kernel/image-vars.h
> @@ -64,6 +64,7 @@ __efistub__ctype		= _ctype;
>  /* Alternative callbacks for init-time patching of nVHE hyp code. */
>  KVM_NVHE_ALIAS(kvm_patch_vector_branch);
>  KVM_NVHE_ALIAS(kvm_update_va_mask);
> +KVM_NVHE_ALIAS(kvm_update_kimg_phys_offset);
>  
>  /* Global kernel state accessed by nVHE hyp code. */
>  KVM_NVHE_ALIAS(kvm_vgic_global_state);
> diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> index e2eafe2c93af..2af8a5e902af 100644
> --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> @@ -12,106 +12,145 @@
>  #include <asm/kvm_hyp.h>
>  #include <asm/kvm_mmu.h>
>  
> -#include <kvm/arm_hypercalls.h>
> -
> -static void handle_host_hcall(unsigned long func_id,
> -			      struct kvm_cpu_context *host_ctxt)
> -{
> -	unsigned long ret = 0;
> -
> -	switch (func_id) {
> -	case KVM_HOST_SMCCC_FUNC(__kvm_vcpu_run): {
> -		unsigned long r1 = host_ctxt->regs.regs[1];
> -		struct kvm_vcpu *vcpu = (struct kvm_vcpu *)r1;
> -
> -		ret = __kvm_vcpu_run(kern_hyp_va(vcpu));
> -		break;
> -	}
> -	case KVM_HOST_SMCCC_FUNC(__kvm_flush_vm_context):
> -		__kvm_flush_vm_context();
> -		break;
> -	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_vmid_ipa): {
> -		unsigned long r1 = host_ctxt->regs.regs[1];
> -		struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)r1;
> -		phys_addr_t ipa = host_ctxt->regs.regs[2];
> -		int level = host_ctxt->regs.regs[3];
> -
> -		__kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
> -		break;
> -	}
> -	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_vmid): {
> -		unsigned long r1 = host_ctxt->regs.regs[1];
> -		struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)r1;
> -
> -		__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
> -		break;
> -	}
> -	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_local_vmid): {
> -		unsigned long r1 = host_ctxt->regs.regs[1];
> -		struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)r1;
> -
> -		__kvm_tlb_flush_local_vmid(kern_hyp_va(mmu));
> -		break;
> -	}
> -	case KVM_HOST_SMCCC_FUNC(__kvm_timer_set_cntvoff): {
> -		u64 cntvoff = host_ctxt->regs.regs[1];
> -
> -		__kvm_timer_set_cntvoff(cntvoff);
> -		break;
> -	}
> -	case KVM_HOST_SMCCC_FUNC(__kvm_enable_ssbs):
> -		__kvm_enable_ssbs();
> -		break;
> -	case KVM_HOST_SMCCC_FUNC(__vgic_v3_get_ich_vtr_el2):
> -		ret = __vgic_v3_get_ich_vtr_el2();
> -		break;
> -	case KVM_HOST_SMCCC_FUNC(__vgic_v3_read_vmcr):
> -		ret = __vgic_v3_read_vmcr();
> -		break;
> -	case KVM_HOST_SMCCC_FUNC(__vgic_v3_write_vmcr): {
> -		u32 vmcr = host_ctxt->regs.regs[1];
> -
> -		__vgic_v3_write_vmcr(vmcr);
> -		break;
> -	}
> -	case KVM_HOST_SMCCC_FUNC(__vgic_v3_init_lrs):
> -		__vgic_v3_init_lrs();
> -		break;
> -	case KVM_HOST_SMCCC_FUNC(__kvm_get_mdcr_el2):
> -		ret = __kvm_get_mdcr_el2();
> -		break;
> -	case KVM_HOST_SMCCC_FUNC(__vgic_v3_save_aprs): {
> -		unsigned long r1 = host_ctxt->regs.regs[1];
> -		struct vgic_v3_cpu_if *cpu_if = (struct vgic_v3_cpu_if *)r1;
> -
> -		__vgic_v3_save_aprs(kern_hyp_va(cpu_if));
> -		break;
> -	}
> -	case KVM_HOST_SMCCC_FUNC(__vgic_v3_restore_aprs): {
> -		unsigned long r1 = host_ctxt->regs.regs[1];
> -		struct vgic_v3_cpu_if *cpu_if = (struct vgic_v3_cpu_if *)r1;
> -
> -		__vgic_v3_restore_aprs(kern_hyp_va(cpu_if));
> -		break;
> -	}
> -	default:
> -		/* Invalid host HVC. */
> -		host_ctxt->regs.regs[0] = SMCCC_RET_NOT_SUPPORTED;
> -		return;
> -	}
> -
> -	host_ctxt->regs.regs[0] = SMCCC_RET_SUCCESS;
> -	host_ctxt->regs.regs[1] = ret;
> +#define cpu_reg(ctxt, r)	(ctxt)->regs.regs[r]
> +#define DECLARE_REG(type, name, ctxt, reg)	\
> +				type name = (type)cpu_reg(ctxt, (reg))
> +
> +static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
> +{
> +	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
> +
> +	cpu_reg(host_ctxt, 1) =  __kvm_vcpu_run(kern_hyp_va(vcpu));
> +}
> +
> +static void handle___kvm_flush_vm_context(struct kvm_cpu_context *host_ctxt)
> +{
> +	__kvm_flush_vm_context();
> +}
> +
> +static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt)
> +{
> +	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
> +	DECLARE_REG(phys_addr_t, ipa, host_ctxt, 2);
> +	DECLARE_REG(int, level, host_ctxt, 3);
> +
> +	__kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
> +}
> +
> +static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
> +{
> +	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
> +
> +	__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
> +}
> +
> +static void handle___kvm_tlb_flush_local_vmid(struct kvm_cpu_context *host_ctxt)
> +{
> +	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
> +
> +	__kvm_tlb_flush_local_vmid(kern_hyp_va(mmu));
> +}
> +
> +static void handle___kvm_timer_set_cntvoff(struct kvm_cpu_context *host_ctxt)
> +{
> +	__kvm_timer_set_cntvoff(cpu_reg(host_ctxt, 1));
> +}
> +
> +static void handle___kvm_enable_ssbs(struct kvm_cpu_context *host_ctxt)
> +{
> +	__kvm_enable_ssbs();
> +}
> +
> +static void handle___vgic_v3_get_ich_vtr_el2(struct kvm_cpu_context *host_ctxt)
> +{
> +	cpu_reg(host_ctxt, 1) = __vgic_v3_get_ich_vtr_el2();
> +}
> +
> +static void handle___vgic_v3_read_vmcr(struct kvm_cpu_context *host_ctxt)
> +{
> +	cpu_reg(host_ctxt, 1) = __vgic_v3_read_vmcr();
> +}
> +
> +static void handle___vgic_v3_write_vmcr(struct kvm_cpu_context *host_ctxt)
> +{
> +	__vgic_v3_write_vmcr(cpu_reg(host_ctxt, 1));
> +}
> +
> +static void handle___vgic_v3_init_lrs(struct kvm_cpu_context *host_ctxt)
> +{
> +	__vgic_v3_init_lrs();
> +}
> +
> +static void handle___kvm_get_mdcr_el2(struct kvm_cpu_context *host_ctxt)
> +{
> +	cpu_reg(host_ctxt, 1) = __kvm_get_mdcr_el2();
> +}
> +
> +static void handle___vgic_v3_save_aprs(struct kvm_cpu_context *host_ctxt)
> +{
> +	DECLARE_REG(struct vgic_v3_cpu_if *, cpu_if, host_ctxt, 1);
> +
> +	__vgic_v3_save_aprs(kern_hyp_va(cpu_if));
> +}
> +
> +static void handle___vgic_v3_restore_aprs(struct kvm_cpu_context *host_ctxt)
> +{
> +	DECLARE_REG(struct vgic_v3_cpu_if *, cpu_if, host_ctxt, 1);
> +
> +	__vgic_v3_restore_aprs(kern_hyp_va(cpu_if));
> +}
> +
> +#define HANDLE_FUNC(x)	[__KVM_HOST_SMCCC_FUNC_##x] = handle_##x
> +
> +typedef void (*hcall_t)(struct kvm_cpu_context *);
> +
> +static const hcall_t host_hcall[] = {
> +	HANDLE_FUNC(__kvm_vcpu_run),
> +	HANDLE_FUNC(__kvm_flush_vm_context),
> +	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
> +	HANDLE_FUNC(__kvm_tlb_flush_vmid),
> +	HANDLE_FUNC(__kvm_tlb_flush_local_vmid),
> +	HANDLE_FUNC(__kvm_timer_set_cntvoff),
> +	HANDLE_FUNC(__kvm_enable_ssbs),
> +	HANDLE_FUNC(__vgic_v3_get_ich_vtr_el2),
> +	HANDLE_FUNC(__vgic_v3_read_vmcr),
> +	HANDLE_FUNC(__vgic_v3_write_vmcr),
> +	HANDLE_FUNC(__vgic_v3_init_lrs),
> +	HANDLE_FUNC(__kvm_get_mdcr_el2),
> +	HANDLE_FUNC(__vgic_v3_save_aprs),
> +	HANDLE_FUNC(__vgic_v3_restore_aprs),
> +};
> +
> +static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
> +{
> +	DECLARE_REG(unsigned long, id, host_ctxt, 0);
> +	unsigned long ret = SMCCC_RET_NOT_SUPPORTED;
> +	hcall_t hcall;
> +
> +	id -= KVM_HOST_SMCCC_ID(0);
> +
> +	if (unlikely(id >= ARRAY_SIZE(host_hcall)))
> +		goto inval;
> +
> +	hcall = host_hcall[id];
> +	if (unlikely(!hcall))
> +		goto inval;
> +
> +	hcall = kimg_hyp_va(hcall);
> +
> +	hcall(host_ctxt);
> +	ret = SMCCC_RET_SUCCESS;
> +
> +inval:
> +	cpu_reg(host_ctxt, 0) = ret;
>  }
>  
>  void handle_trap(struct kvm_cpu_context *host_ctxt)
>  {
>  	u64 esr = read_sysreg_el2(SYS_ESR);
> -	unsigned long func_id;
>  
> -	if (ESR_ELx_EC(esr) != ESR_ELx_EC_HVC64)
> +	if (unlikely(ESR_ELx_EC(esr) != ESR_ELx_EC_HVC64))
>  		hyp_panic();
>  
> -	func_id = host_ctxt->regs.regs[0];
> -	handle_host_hcall(func_id, host_ctxt);
> +	handle_host_hcall(host_ctxt);
>  }

I checked that the function handlers match the cases in the old switch
statement. Nothing in kvm_asm.h stops the function ids from being
non-consecutive, but handle_host_hcall() checks that hcall is not NULL, so
we're safe against gaps in the table, or against function ids disappearing in
the future.
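
To illustrate the gap handling: with designated initializers, a missing id
simply leaves a NULL slot that the dispatcher rejects. A minimal, untested
userspace sketch of the same pattern (all names below are made up for the
example, they are not the kernel's):

/* sketch.c - build with: gcc -Wall sketch.c */
#include <stdio.h>

struct ctx { unsigned long regs[4]; };

/* same shape as the patch's helpers */
#define cpu_reg(ctxt, r)	(ctxt)->regs[r]
#define DECLARE_REG(type, name, ctxt, reg)	\
				type name = (type)cpu_reg(ctxt, (reg))

typedef void (*hcall_t)(struct ctx *);

static void handle_func0(struct ctx *ctxt)
{
	DECLARE_REG(unsigned long, val, ctxt, 1);

	printf("func0(%lu)\n", val);
}

static void handle_func2(struct ctx *ctxt)
{
	printf("func2\n");
}

/* id 1 is intentionally absent: its slot is implicitly NULL */
static const hcall_t table[] = {
	[0] = handle_func0,
	[2] = handle_func2,
};

static void dispatch(struct ctx *ctxt, unsigned long id)
{
	if (id >= sizeof(table) / sizeof(table[0]) || !table[id]) {
		printf("id %lu: not supported\n", id);	/* SMCCC_RET_NOT_SUPPORTED */
		return;
	}
	table[id](ctxt);
}

int main(void)
{
	struct ctx c = { .regs = { 0, 42 } };

	dispatch(&c, 0);	/* func0(42) */
	dispatch(&c, 1);	/* hits the NULL hole */
	dispatch(&c, 2);	/* func2 */
	dispatch(&c, 3);	/* out of range */
	return 0;
}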

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>

* Re: [PATCH 7/8] KVM: arm64: Simplify __kvm_enable_ssbs()
@ 2020-11-02 15:30     ` Alexandru Elisei
  0 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-11-02 15:30 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm; +Cc: kernel-team, Will Deacon

Hi Marc,

On 10/26/20 9:51 AM, Marc Zyngier wrote:
> Move the setting of SSBS directly into the HVC handler, using
> the C helpers rather than the inline assembly code.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_asm.h    |  2 --
>  arch/arm64/include/asm/sysreg.h     |  1 +
>  arch/arm64/kvm/hyp/nvhe/hyp-main.c  |  6 +++++-
>  arch/arm64/kvm/hyp/nvhe/sysreg-sr.c | 11 -----------
>  4 files changed, 6 insertions(+), 14 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index 54387ccd1ab2..a542c422a036 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -189,8 +189,6 @@ extern void __kvm_timer_set_cntvoff(u64 cntvoff);
>  
>  extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
>  
> -extern void __kvm_enable_ssbs(void);
> -
>  extern u64 __vgic_v3_get_ich_vtr_el2(void);
>  extern u64 __vgic_v3_read_vmcr(void);
>  extern void __vgic_v3_write_vmcr(u32 vmcr);
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index d52c1b3ce589..c9423f36e05c 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -461,6 +461,7 @@
>  
>  #define SYS_PMCCFILTR_EL0		sys_reg(3, 3, 14, 15, 7)
>  
> +#define SYS_SCTLR_EL2			sys_reg(3, 4, 1, 0, 0)
>  #define SYS_ZCR_EL2			sys_reg(3, 4, 1, 2, 0)
>  #define SYS_DACR32_EL2			sys_reg(3, 4, 3, 0, 0)
>  #define SYS_SPSR_EL2			sys_reg(3, 4, 4, 0, 0)
> diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> index 2af8a5e902af..5125e934da22 100644
> --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> @@ -58,7 +58,11 @@ static void handle___kvm_timer_set_cntvoff(struct kvm_cpu_context *host_ctxt)
>  
>  static void handle___kvm_enable_ssbs(struct kvm_cpu_context *host_ctxt)
>  {
> -	__kvm_enable_ssbs();
> +	u64 tmp;
> +
> +	tmp = read_sysreg_el2(SYS_SCTLR);
> +	tmp |= SCTLR_ELx_DSSBS;
> +	write_sysreg_el2(tmp, SYS_SCTLR);

This looks identical to the inline assembly version to me:

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>

Thanks,

Alex

>  }
>  
>  static void handle___vgic_v3_get_ich_vtr_el2(struct kvm_cpu_context *host_ctxt)
> diff --git a/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c
> index 88a25fc8fcd3..29305022bc04 100644
> --- a/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c
> +++ b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c
> @@ -33,14 +33,3 @@ void __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt)
>  	__sysreg_restore_user_state(ctxt);
>  	__sysreg_restore_el2_return_state(ctxt);
>  }
> -
> -void __kvm_enable_ssbs(void)
> -{
> -	u64 tmp;
> -
> -	asm volatile(
> -	"mrs	%0, sctlr_el2\n"
> -	"orr	%0, %0, %1\n"
> -	"msr	sctlr_el2, %0"
> -	: "=&r" (tmp) : "L" (SCTLR_ELx_DSSBS));
> -}
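
P.S. As an aside, the new SYS_SCTLR_EL2 definition is just the standard
sys_reg() encoding for SCTLR_EL2 (op0=3, op1=4, CRn=1, CRm=0, op2=0, i.e.
s3_4_c1_c0_0). A quick standalone sketch of how the fields pack, assuming the
shift values used by the kernel's sysreg.h (double-check them against your
tree):

/* encode.c - build with: gcc -Wall encode.c */
#include <stdio.h>

#define Op0_shift	19
#define Op1_shift	16
#define CRn_shift	12
#define CRm_shift	8
#define Op2_shift	5

#define sys_reg(op0, op1, crn, crm, op2)		\
	(((op0) << Op0_shift) | ((op1) << Op1_shift) |	\
	 ((crn) << CRn_shift) | ((crm) << CRm_shift) |	\
	 ((op2) << Op2_shift))

int main(void)
{
	/* SCTLR_EL2: op0=3, op1=4, CRn=1, CRm=0, op2=0 */
	printf("SYS_SCTLR_EL2 = %#x\n", sys_reg(3, 4, 1, 0, 0));
	return 0;
}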

* Re: [PATCH 8/8] KVM: arm64: Avoid repetitive stack access on host EL1 to EL2 exception
@ 2020-11-02 16:28     ` Alexandru Elisei
  0 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-11-02 16:28 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm; +Cc: kernel-team, Will Deacon

Hi Marc,

On 10/26/20 9:51 AM, Marc Zyngier wrote:
> Registers x0/x1 get repeatedly pushed and popped during a host
> HVC call. Instead, leave the registers on the stack, trading
> a store instruction on the fast path for an add on the slow path.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/hyp/nvhe/host.S | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
> index e2d316d13180..7b69f9ff8da0 100644
> --- a/arch/arm64/kvm/hyp/nvhe/host.S
> +++ b/arch/arm64/kvm/hyp/nvhe/host.S
> @@ -13,8 +13,6 @@
>  	.text
>  
>  SYM_FUNC_START(__host_exit)
> -	stp	x0, x1, [sp, #-16]!
> -
>  	get_host_ctxt	x0, x1
>  
>  	/* Store the host regs x2 and x3 */
> @@ -99,13 +97,14 @@ SYM_FUNC_END(__hyp_do_panic)
>  	mrs	x0, esr_el2
>  	lsr	x0, x0, #ESR_ELx_EC_SHIFT
>  	cmp	x0, #ESR_ELx_EC_HVC64
> -	ldp	x0, x1, [sp], #16
> +	ldp	x0, x1, [sp]		// Don't fixup the stack yet

If I understand get_host_ctxt correctly, it clobbers x0 and x1, and it is the
first thing that __host_exit does. I think the values of x0 and x1 are only
needed in host_el1_sync_vect: x0 to compare against HVC_STUB_HCALL_NR below,
and x1 for the call to __kvm_handle_stub_hvc. We could restore x0 just before
the comparison with HVC_STUB_HCALL_NR, after the first branch to __host_exit,
to make it clear that it is not used by __host_exit. Not really important, but
it might make the code a bit easier to follow (it looks a bit odd to have x0
and x1 clobbered immediately after they are restored from the stack).

Whichever way you prefer, the code looks correct to me: __host_exit assumes
that x0 and x1 are at the top of the stack when it saves them, and the ADD in
host_el1_sync_vect (when the code doesn't branch to __host_exit) makes sure
the stack pointer is as expected:

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>

Thanks,

Alex

>  	b.ne	__host_exit
>  
>  	/* Check for a stub HVC call */
>  	cmp	x0, #HVC_STUB_HCALL_NR
>  	b.hs	__host_exit
>  
> +	add	sp, sp, #16
>  	/*
>  	 * Compute the idmap address of __kvm_handle_stub_hvc and
>  	 * jump there. Since we use kimage_voffset, do not use the

end of thread, other threads:[~2020-11-02 16:28 UTC | newest]

Thread overview: 45+ messages (download: mbox.gz / follow: Atom feed)
2020-10-26  9:51 [PATCH 0/8] KVM: arm64: Host EL2 entry improvements Marc Zyngier
2020-10-26  9:51 ` [PATCH 1/8] KVM: arm64: Don't corrupt tpidr_el2 on failed HVC call Marc Zyngier
2020-10-26 14:36   ` Quentin Perret
2020-10-26  9:51 ` [PATCH 2/8] KVM: arm64: Remove leftover kern_hyp_va() in nVHE TLB invalidation Marc Zyngier
2020-11-02 13:30   ` Alexandru Elisei
2020-10-26  9:51 ` [PATCH 3/8] KVM: arm64: Drop useless PAN setting on host EL1 to EL2 transition Marc Zyngier
2020-10-26 10:48   ` Vladimir Murzin
2020-10-26  9:51 ` [PATCH 4/8] KVM: arm64: Add kimg_hyp_va() helper Marc Zyngier
2020-10-26  9:51 ` [PATCH 5/8] KVM: arm64: Turn host HVC handling into a dispatch table Marc Zyngier
2020-11-02 14:19   ` Alexandru Elisei
2020-10-26  9:51 ` [PATCH 6/8] KVM: arm64: Patch kimage_voffset instead of loading the EL1 value Marc Zyngier
2020-10-26  9:51 ` [PATCH 7/8] KVM: arm64: Simplify __kvm_enable_ssbs() Marc Zyngier
2020-11-02 15:30   ` Alexandru Elisei
2020-10-26  9:51 ` [PATCH 8/8] KVM: arm64: Avoid repetitive stack access on host EL1 to EL2 exception Marc Zyngier
2020-11-02 16:28   ` Alexandru Elisei