* [PATCH 0/3] kvm: arm64: Pointer Authentication handling fixes
@ 2020-06-04 13:33 ` Marc Zyngier
From: Marc Zyngier @ 2020-06-04 13:33 UTC (permalink / raw)
  To: kvm, kvmarm, linux-arm-kernel
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, Will Deacon,
	Catalin Marinas, Mark Rutland, kernel-team

I recently discovered that the Pointer Authentication (PtrAuth)
handling code in KVM is busted, and has been for a while. The main
issue is that we save the host's keys from a preemptible
context. Things will go wrong at some point.

In order to address this, the first patch moves the saving of the
host's keys to vcpu_load(). It is done eagerly, which is a bore, but
is at least safe. This is definitely stable material.

The following two patches add an optimisation and a fix for a
corner case: we handle key saving and HCR massaging as a fixup, much
like the FPSIMD code. This subsequently allows us to deal with the
ugly case of a guest enabling PtrAuth despite it not being advertised,
resulting in PAC instructions UNDEF'ing when they should be NOPs.
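
For reference, the "HCR massaging" boils down to setting
HCR_EL2.{API,APK}, which stops the PtrAuth instruction and key
register traps. Patch 2 does this live from the fixup path, roughly:

	val = read_sysreg(hcr_el2);
	val |= (HCR_API | HCR_APK);	/* stop trapping PtrAuth */
	write_sysreg(val, hcr_el2);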

This has been very lightly tested on a model.

Marc Zyngier (3):
  KVM: arm64: Save the host's PtrAuth keys in non-preemptible context
  KVM: arm64: Handle PtrAuth traps early
  KVM: arm64: Enforce PtrAuth being disabled if not advertized

 arch/arm64/include/asm/kvm_emulate.h |  6 ---
 arch/arm64/kvm/arm.c                 |  3 +-
 arch/arm64/kvm/handle_exit.c         | 38 ---------------
 arch/arm64/kvm/hyp/switch.c          | 73 ++++++++++++++++++++++++++++
 arch/arm64/kvm/sys_regs.c            | 13 ++---
 5 files changed, 80 insertions(+), 53 deletions(-)

-- 
2.26.2


* [PATCH 1/3] KVM: arm64: Save the host's PtrAuth keys in non-preemptible context
  2020-06-04 13:33 ` Marc Zyngier
@ 2020-06-04 13:33   ` Marc Zyngier
From: Marc Zyngier @ 2020-06-04 13:33 UTC (permalink / raw)
  To: kvm, kvmarm, linux-arm-kernel
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, Will Deacon,
	Catalin Marinas, Mark Rutland, kernel-team, stable

When using the PtrAuth feature in a guest, we need to save the host's
keys before allowing the guest to program them. For that, we dump
them in a per-CPU data structure (the so-called host context).

But both call sites that do this are in preemptible context,
which may end up in disaster should the vcpu thread get preempted
before reentering the guest.

Instead, save the keys eagerly on each vcpu_load(). This has an
increased overhead, but is at least safe.
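
For illustration, the __ptrauth_save_key() macro added below expands,
for the APIA key, to two plain system register reads into the host
context (modulo the wrapping statement expression):

	regs[APIAKEYLO_EL1] = read_sysreg_s(SYS_APIAKEYLO_EL1);
	regs[APIAKEYHI_EL1] = read_sysreg_s(SYS_APIAKEYHI_EL1);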

Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_emulate.h |  6 ------
 arch/arm64/kvm/arm.c                 | 18 +++++++++++++++++-
 arch/arm64/kvm/handle_exit.c         | 19 ++-----------------
 3 files changed, 19 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index a30b4eec7cb4..977843e4d5fb 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -112,12 +112,6 @@ static inline void vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
 	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
 }
 
-static inline void vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
-{
-	if (vcpu_has_ptrauth(vcpu))
-		vcpu_ptrauth_disable(vcpu);
-}
-
 static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu)
 {
 	return vcpu->arch.vsesr_el2;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index d6988401c22a..152049c5055d 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -337,6 +337,12 @@ void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
 	preempt_enable();
 }
 
+#define __ptrauth_save_key(regs, key)						\
+({										\
+	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
+	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
+})
+
 void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	int *last_ran;
@@ -370,7 +376,17 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	else
 		vcpu_set_wfx_traps(vcpu);
 
-	vcpu_ptrauth_setup_lazy(vcpu);
+	if (vcpu_has_ptrauth(vcpu)) {
+		struct kvm_cpu_context *ctxt = vcpu->arch.host_cpu_context;
+
+		__ptrauth_save_key(ctxt->sys_regs, APIA);
+		__ptrauth_save_key(ctxt->sys_regs, APIB);
+		__ptrauth_save_key(ctxt->sys_regs, APDA);
+		__ptrauth_save_key(ctxt->sys_regs, APDB);
+		__ptrauth_save_key(ctxt->sys_regs, APGA);
+
+		vcpu_ptrauth_disable(vcpu);
+	}
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index eb194696ef62..065251efa2e6 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -162,31 +162,16 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	return 1;
 }
 
-#define __ptrauth_save_key(regs, key)						\
-({										\
-	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
-	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
-})
-
 /*
  * Handle the guest trying to use a ptrauth instruction, or trying to access a
  * ptrauth register.
  */
 void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
 {
-	struct kvm_cpu_context *ctxt;
-
-	if (vcpu_has_ptrauth(vcpu)) {
+	if (vcpu_has_ptrauth(vcpu))
 		vcpu_ptrauth_enable(vcpu);
-		ctxt = vcpu->arch.host_cpu_context;
-		__ptrauth_save_key(ctxt->sys_regs, APIA);
-		__ptrauth_save_key(ctxt->sys_regs, APIB);
-		__ptrauth_save_key(ctxt->sys_regs, APDA);
-		__ptrauth_save_key(ctxt->sys_regs, APDB);
-		__ptrauth_save_key(ctxt->sys_regs, APGA);
-	} else {
+	else
 		kvm_inject_undefined(vcpu);
-	}
 }
 
 /*
-- 
2.26.2


* [PATCH 2/3] KVM: arm64: Handle PtrAuth traps early
  2020-06-04 13:33 ` Marc Zyngier
@ 2020-06-04 13:33   ` Marc Zyngier
From: Marc Zyngier @ 2020-06-04 13:33 UTC (permalink / raw)
  To: kvm, kvmarm, linux-arm-kernel
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, Will Deacon,
	Catalin Marinas, Mark Rutland, kernel-team

The current way we deal with PtrAuth is a bit heavy handed:

- We forcefully save the host's keys on each vcpu_load()
- Handling the PtrAuth trap forces us to go all the way back
  to the exit handling code just to set the HCR bits

A better approach is to handle it the same way we deal with the
FPSIMD registers:

- On vcpu_load() disable PtrAuth for the guest
- On first use, save the host's keys and enable PtrAuth in the
  guest

Crucially, this can happen as a fixup, which is done very early
on exit. We can then reenter the guest immediately without
leaving the hypervisor role.
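
A minimal sketch of the resulting exit path, as wired up in the hyp
switch code by this patch (only __hyp_handle_ptrauth() is new here):

	/* in fixup_guest_exit(), before committing to a full exit */
	if (__hyp_handle_fpsimd(vcpu))
		return true;		/* FPSIMD first use: reenter guest */

	if (__hyp_handle_ptrauth(vcpu))
		return true;		/* PtrAuth first use: reenter guest */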

Another benefit is that it simplifies the rest of the host handling:
exiting all the way to the host means that the only possible
outcome for this trap is to inject an UNDEF.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/arm.c         | 17 +----------
 arch/arm64/kvm/handle_exit.c | 17 ++---------
 arch/arm64/kvm/hyp/switch.c  | 59 ++++++++++++++++++++++++++++++++++++
 arch/arm64/kvm/sys_regs.c    | 13 +++-----
 4 files changed, 68 insertions(+), 38 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 152049c5055d..14b747266607 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -337,12 +337,6 @@ void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
 	preempt_enable();
 }
 
-#define __ptrauth_save_key(regs, key)						\
-({										\
-	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
-	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
-})
-
 void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	int *last_ran;
@@ -376,17 +370,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	else
 		vcpu_set_wfx_traps(vcpu);
 
-	if (vcpu_has_ptrauth(vcpu)) {
-		struct kvm_cpu_context *ctxt = vcpu->arch.host_cpu_context;
-
-		__ptrauth_save_key(ctxt->sys_regs, APIA);
-		__ptrauth_save_key(ctxt->sys_regs, APIB);
-		__ptrauth_save_key(ctxt->sys_regs, APDA);
-		__ptrauth_save_key(ctxt->sys_regs, APDB);
-		__ptrauth_save_key(ctxt->sys_regs, APGA);
-
+	if (vcpu_has_ptrauth(vcpu))
 		vcpu_ptrauth_disable(vcpu);
-	}
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 065251efa2e6..5a02d4c90559 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -162,25 +162,14 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	return 1;
 }
 
-/*
- * Handle the guest trying to use a ptrauth instruction, or trying to access a
- * ptrauth register.
- */
-void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
-{
-	if (vcpu_has_ptrauth(vcpu))
-		vcpu_ptrauth_enable(vcpu);
-	else
-		kvm_inject_undefined(vcpu);
-}
-
 /*
  * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
- * a NOP).
+ * a NOP). If we get here, it is that we didn't fixup ptrauth on exit, and all
+ * that we can do is give the guest an UNDEF.
  */
 static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	kvm_arm_vcpu_ptrauth_trap(vcpu);
+	kvm_inject_undefined(vcpu);
 	return 1;
 }
 
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index c07a45643cd4..2a50b3771c3b 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -490,6 +490,62 @@ static bool __hyp_text handle_tx2_tvm(struct kvm_vcpu *vcpu)
 	return true;
 }
 
+#define __ptrauth_save_key(regs, key)						\
+({										\
+	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
+	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
+})
+
+static bool __hyp_text __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
+{
+	u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_hsr(vcpu));
+	u32 ec = kvm_vcpu_trap_get_class(vcpu);
+	struct kvm_cpu_context *ctxt;
+	u64 val;
+
+	if (!vcpu_has_ptrauth(vcpu))
+		return false;
+
+	switch (ec) {
+	case ESR_ELx_EC_PAC:
+		break;
+	case ESR_ELx_EC_SYS64:
+		switch (sysreg) {
+		case SYS_APIAKEYLO_EL1:
+		case SYS_APIAKEYHI_EL1:
+		case SYS_APIBKEYLO_EL1:
+		case SYS_APIBKEYHI_EL1:
+		case SYS_APDAKEYLO_EL1:
+		case SYS_APDAKEYHI_EL1:
+		case SYS_APDBKEYLO_EL1:
+		case SYS_APDBKEYHI_EL1:
+		case SYS_APGAKEYLO_EL1:
+		case SYS_APGAKEYHI_EL1:
+			break;
+		default:
+			return false;
+		}
+		break;
+	default:
+		return false;
+	}
+
+	ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+	__ptrauth_save_key(ctxt->sys_regs, APIA);
+	__ptrauth_save_key(ctxt->sys_regs, APIB);
+	__ptrauth_save_key(ctxt->sys_regs, APDA);
+	__ptrauth_save_key(ctxt->sys_regs, APDB);
+	__ptrauth_save_key(ctxt->sys_regs, APGA);
+
+	vcpu_ptrauth_enable(vcpu);
+
+	val = read_sysreg(hcr_el2);
+	val |= (HCR_API | HCR_APK);
+	write_sysreg(val, hcr_el2);
+
+	return true;
+}
+
 /*
  * Return true when we were able to fixup the guest exit and should return to
  * the guest, false when we should restore the host state and return to the
@@ -524,6 +580,9 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 	if (__hyp_handle_fpsimd(vcpu))
 		return true;
 
+	if (__hyp_handle_ptrauth(vcpu))
+		return true;
+
 	if (!__populate_fault_info(vcpu))
 		return true;
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index ad1d57501d6d..564995084cf8 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1034,16 +1034,13 @@ static bool trap_ptrauth(struct kvm_vcpu *vcpu,
 			 struct sys_reg_params *p,
 			 const struct sys_reg_desc *rd)
 {
-	kvm_arm_vcpu_ptrauth_trap(vcpu);
-
 	/*
-	 * Return false for both cases as we never skip the trapped
-	 * instruction:
-	 *
-	 * - Either we re-execute the same key register access instruction
-	 *   after enabling ptrauth.
-	 * - Or an UNDEF is injected as ptrauth is not supported/enabled.
+	 * If we land here, that is because we didn't fixup the access on exit
+	 * by allowing the PtrAuth sysregs. The only way this happens is when
+	 * the guest does not have PtrAuth support enabled.
 	 */
+	kvm_inject_undefined(vcpu);
+
 	return false;
 }
 
-- 
2.26.2


* [PATCH 3/3] KVM: arm64: Enforce PtrAuth being disabled if not advertized
  2020-06-04 13:33 ` Marc Zyngier
@ 2020-06-04 13:33   ` Marc Zyngier
From: Marc Zyngier @ 2020-06-04 13:33 UTC (permalink / raw)
  To: kvm, kvmarm, linux-arm-kernel
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, Will Deacon,
	Catalin Marinas, Mark Rutland, kernel-team

Even if we don't expose PtrAuth to a guest, the guest can still
write to its SCTLR_EL1 register and set the En{I,D}{A,B} bits
and execute PtrAuth instructions from the NOP space. This has
the effect of trapping to EL2, and we currently inject an UNDEF.
This is definitely the wrong thing to do, as the architecture says
that these instructions should behave as NOPs.

Instead, we can simply reset the offending SCTLR_EL1 bits to
zero, and resume the guest. It can still observe the SCTLR bits
being set and then being cleared by magic, but that's much better
than delivering an unexpected extension.
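
From the guest's point of view, the sequence is roughly the following
(a sketch, not code from this patch):

	/* guest EL1, with PtrAuth not advertised by KVM */
	val = read_sysreg(sctlr_el1);
	write_sysreg(val | SCTLR_ELx_ENIA, sctlr_el1);
	isb();
	asm volatile("paciasp");	/* no longer a NOP: traps to EL2 */
	/* resumes here; EnIA has been cleared, so this is a NOP again */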

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/handle_exit.c | 12 ------------
 arch/arm64/kvm/hyp/switch.c  | 18 ++++++++++++++++--
 2 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 5a02d4c90559..98d8adf6f865 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -162,17 +162,6 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	return 1;
 }
 
-/*
- * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
- * a NOP). If we get here, it is that we didn't fixup ptrauth on exit, and all
- * that we can do is give the guest an UNDEF.
- */
-static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
-{
-	kvm_inject_undefined(vcpu);
-	return 1;
-}
-
 static exit_handle_fn arm_exit_handlers[] = {
 	[0 ... ESR_ELx_EC_MAX]	= kvm_handle_unknown_ec,
 	[ESR_ELx_EC_WFx]	= kvm_handle_wfx,
@@ -195,7 +184,6 @@ static exit_handle_fn arm_exit_handlers[] = {
 	[ESR_ELx_EC_BKPT32]	= kvm_handle_guest_debug,
 	[ESR_ELx_EC_BRK64]	= kvm_handle_guest_debug,
 	[ESR_ELx_EC_FP_ASIMD]	= handle_no_fpsimd,
-	[ESR_ELx_EC_PAC]	= kvm_handle_ptrauth,
 };
 
 static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 2a50b3771c3b..fc09c3dfa466 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -503,8 +503,22 @@ static bool __hyp_text __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
 	struct kvm_cpu_context *ctxt;
 	u64 val;
 
-	if (!vcpu_has_ptrauth(vcpu))
-		return false;
+	if (!vcpu_has_ptrauth(vcpu)) {
+		if (ec != ESR_ELx_EC_PAC)
+			return false;
+
+		/*
+		 * Interesting situation: the guest has enabled PtrAuth,
+		 * despite KVM not advertising it. Fix SCTLR_EL1 on behalf
+		 * of the guest (the bits should behave as RES0 anyway).
+		 */
+		val = read_sysreg_el1(SYS_SCTLR);
+		val &= ~(SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
+			 SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
+		write_sysreg_el1(val, SYS_SCTLR);
+
+		return true;
+	}
 
 	switch (ec) {
 	case ESR_ELx_EC_PAC:
-- 
2.26.2


* Re: [PATCH 1/3] KVM: arm64: Save the host's PtrAuth keys in non-preemptible context
  2020-06-04 13:33   ` Marc Zyngier
@ 2020-06-04 15:04     ` Mark Rutland
From: Mark Rutland @ 2020-06-04 15:04 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvm, kvmarm, linux-arm-kernel, James Morse, Julien Thierry,
	Suzuki K Poulose, Will Deacon, Catalin Marinas, kernel-team,
	stable

On Thu, Jun 04, 2020 at 02:33:52PM +0100, Marc Zyngier wrote:
> When using the PtrAuth feature in a guest, we need to save the host's
> keys before allowing the guest to program them. For that, we dump
> them in a per-CPU data structure (the so-called host context).
> 
> But both call sites that do this are in preemptible context,
> which may end up in disaster should the vcpu thread get preempted
> before reentering the guest.

Yuck!

> Instead, save the keys eagerly on each vcpu_load(). This has an
> increased overhead, but is at least safe.
> 
> Cc: stable@vger.kernel.org
> Signed-off-by: Marc Zyngier <maz@kernel.org>

This looks sound to me given kvm_arch_vcpu_load() is surrounded with
get_cpu() .. put_cpu() and gets called again when the thread is
scheduled back in after being preempted.
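
For reference, the relevant generic path looked roughly like this at
the time (a trimmed sketch of virt/kvm/kvm_main.c):

	void vcpu_load(struct kvm_vcpu *vcpu)
	{
		int cpu = get_cpu();	/* disables preemption */

		preempt_notifier_register(&vcpu->preempt_notifier);
		kvm_arch_vcpu_load(vcpu, cpu);	/* keys saved here */
		put_cpu();		/* reenables preemption */
	}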

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Thanks,
Mark.

> ---
>  arch/arm64/include/asm/kvm_emulate.h |  6 ------
>  arch/arm64/kvm/arm.c                 | 18 +++++++++++++++++-
>  arch/arm64/kvm/handle_exit.c         | 19 ++-----------------
>  3 files changed, 19 insertions(+), 24 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index a30b4eec7cb4..977843e4d5fb 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -112,12 +112,6 @@ static inline void vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
>  	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
>  }
>  
> -static inline void vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
> -{
> -	if (vcpu_has_ptrauth(vcpu))
> -		vcpu_ptrauth_disable(vcpu);
> -}
> -
>  static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu)
>  {
>  	return vcpu->arch.vsesr_el2;
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index d6988401c22a..152049c5055d 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -337,6 +337,12 @@ void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
>  	preempt_enable();
>  }
>  
> +#define __ptrauth_save_key(regs, key)						\
> +({										\
> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
> +})
> +
>  void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  {
>  	int *last_ran;
> @@ -370,7 +376,17 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  	else
>  		vcpu_set_wfx_traps(vcpu);
>  
> -	vcpu_ptrauth_setup_lazy(vcpu);
> +	if (vcpu_has_ptrauth(vcpu)) {
> +		struct kvm_cpu_context *ctxt = vcpu->arch.host_cpu_context;
> +
> +		__ptrauth_save_key(ctxt->sys_regs, APIA);
> +		__ptrauth_save_key(ctxt->sys_regs, APIB);
> +		__ptrauth_save_key(ctxt->sys_regs, APDA);
> +		__ptrauth_save_key(ctxt->sys_regs, APDB);
> +		__ptrauth_save_key(ctxt->sys_regs, APGA);
> +
> +		vcpu_ptrauth_disable(vcpu);
> +	}
>  }
>  
>  void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index eb194696ef62..065251efa2e6 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -162,31 +162,16 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	return 1;
>  }
>  
> -#define __ptrauth_save_key(regs, key)						\
> -({										\
> -	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
> -	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
> -})
> -
>  /*
>   * Handle the guest trying to use a ptrauth instruction, or trying to access a
>   * ptrauth register.
>   */
>  void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
>  {
> -	struct kvm_cpu_context *ctxt;
> -
> -	if (vcpu_has_ptrauth(vcpu)) {
> +	if (vcpu_has_ptrauth(vcpu))
>  		vcpu_ptrauth_enable(vcpu);
> -		ctxt = vcpu->arch.host_cpu_context;
> -		__ptrauth_save_key(ctxt->sys_regs, APIA);
> -		__ptrauth_save_key(ctxt->sys_regs, APIB);
> -		__ptrauth_save_key(ctxt->sys_regs, APDA);
> -		__ptrauth_save_key(ctxt->sys_regs, APDB);
> -		__ptrauth_save_key(ctxt->sys_regs, APGA);
> -	} else {
> +	else
>  		kvm_inject_undefined(vcpu);
> -	}
>  }
>  
>  /*
> -- 
> 2.26.2
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 1/3] KVM: arm64: Save the host's PtrAuth keys in non-preemptible context
@ 2020-06-04 15:04     ` Mark Rutland
  0 siblings, 0 replies; 24+ messages in thread
From: Mark Rutland @ 2020-06-04 15:04 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kernel-team, kvm, Catalin Marinas, stable, Will Deacon, kvmarm,
	linux-arm-kernel

On Thu, Jun 04, 2020 at 02:33:52PM +0100, Marc Zyngier wrote:
> When using the PtrAuth feature in a guest, we need to save the host's
> keys before allowing the guest to program them. For that, we dump
> them in a per-CPU data structure (the so called host context).
> 
> But both call sites that do this are in preemptible context,
> which may end up in disaster should the vcpu thread get preempted
> before reentering the guest.

Yuck!

> Instead, save the keys eagerly on each vcpu_load(). This has an
> increased overhead, but is at least safe.
> 
> Cc: stable@vger.kernel.org
> Signed-off-by: Marc Zyngier <maz@kernel.org>

This looks sound to me given kvm_arch_vcpu_load() is surrounded with
get_cpu() .. put_cpu() and gets called when the thread is preempted.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Thanks,
Mark.

> ---
>  arch/arm64/include/asm/kvm_emulate.h |  6 ------
>  arch/arm64/kvm/arm.c                 | 18 +++++++++++++++++-
>  arch/arm64/kvm/handle_exit.c         | 19 ++-----------------
>  3 files changed, 19 insertions(+), 24 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index a30b4eec7cb4..977843e4d5fb 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -112,12 +112,6 @@ static inline void vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
>  	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
>  }
>  
> -static inline void vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
> -{
> -	if (vcpu_has_ptrauth(vcpu))
> -		vcpu_ptrauth_disable(vcpu);
> -}
> -
>  static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu)
>  {
>  	return vcpu->arch.vsesr_el2;
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index d6988401c22a..152049c5055d 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -337,6 +337,12 @@ void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
>  	preempt_enable();
>  }
>  
> +#define __ptrauth_save_key(regs, key)						\
> +({										\
> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
> +})
> +
>  void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  {
>  	int *last_ran;
> @@ -370,7 +376,17 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  	else
>  		vcpu_set_wfx_traps(vcpu);
>  
> -	vcpu_ptrauth_setup_lazy(vcpu);
> +	if (vcpu_has_ptrauth(vcpu)) {
> +		struct kvm_cpu_context *ctxt = vcpu->arch.host_cpu_context;
> +
> +		__ptrauth_save_key(ctxt->sys_regs, APIA);
> +		__ptrauth_save_key(ctxt->sys_regs, APIB);
> +		__ptrauth_save_key(ctxt->sys_regs, APDA);
> +		__ptrauth_save_key(ctxt->sys_regs, APDB);
> +		__ptrauth_save_key(ctxt->sys_regs, APGA);
> +
> +		vcpu_ptrauth_disable(vcpu);
> +	}
>  }
>  
>  void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index eb194696ef62..065251efa2e6 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -162,31 +162,16 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	return 1;
>  }
>  
> -#define __ptrauth_save_key(regs, key)						\
> -({										\
> -	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
> -	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
> -})
> -
>  /*
>   * Handle the guest trying to use a ptrauth instruction, or trying to access a
>   * ptrauth register.
>   */
>  void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
>  {
> -	struct kvm_cpu_context *ctxt;
> -
> -	if (vcpu_has_ptrauth(vcpu)) {
> +	if (vcpu_has_ptrauth(vcpu))
>  		vcpu_ptrauth_enable(vcpu);
> -		ctxt = vcpu->arch.host_cpu_context;
> -		__ptrauth_save_key(ctxt->sys_regs, APIA);
> -		__ptrauth_save_key(ctxt->sys_regs, APIB);
> -		__ptrauth_save_key(ctxt->sys_regs, APDA);
> -		__ptrauth_save_key(ctxt->sys_regs, APDB);
> -		__ptrauth_save_key(ctxt->sys_regs, APGA);
> -	} else {
> +	else
>  		kvm_inject_undefined(vcpu);
> -	}
>  }
>  
>  /*
> -- 
> 2.26.2
> 
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 2/3] KVM: arm64: Handle PtrAuth traps early
  2020-06-04 13:33   ` Marc Zyngier
@ 2020-06-04 15:23     ` Mark Rutland
  0 siblings, 0 replies; 24+ messages in thread
From: Mark Rutland @ 2020-06-04 15:23 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvm, kvmarm, linux-arm-kernel, James Morse, Julien Thierry,
	Suzuki K Poulose, Will Deacon, Catalin Marinas, kernel-team

On Thu, Jun 04, 2020 at 02:33:53PM +0100, Marc Zyngier wrote:
> The current way we deal with PtrAuth is a bit heavy-handed:
> 
> - We forcefully save the host's keys on each vcpu_load()
> - Handling the PtrAuth trap forces us to go all the way back
>   to the exit handling code to just set the HCR bits
> 
> A better approach would be to handle it the same way we deal with
> the FPSIMD registers:
> 
> - On vcpu_load() disable PtrAuth for the guest
> - On first use, save the host's keys, enable PtrAuth in the
>   guest
> 
> Crucially, this can happen as a fixup, which is done very early
> on exit. We can then reenter the guest immediately without
> leaving the hypervisor role.
> 
> It also simplifies the rest of the host handling:
> exiting all the way to the host means that the only possible
> outcome for this trap is to inject an UNDEF.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/arm.c         | 17 +----------
>  arch/arm64/kvm/handle_exit.c | 17 ++---------
>  arch/arm64/kvm/hyp/switch.c  | 59 ++++++++++++++++++++++++++++++++++++
>  arch/arm64/kvm/sys_regs.c    | 13 +++-----
>  4 files changed, 68 insertions(+), 38 deletions(-)

[...]

> +static bool __hyp_text __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
> +{
> +	u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_hsr(vcpu));
> +	u32 ec = kvm_vcpu_trap_get_class(vcpu);
> +	struct kvm_cpu_context *ctxt;
> +	u64 val;
> +
> +	if (!vcpu_has_ptrauth(vcpu))
> +		return false;
> +
> +	switch (ec) {
> +	case ESR_ELx_EC_PAC:
> +		break;
> +	case ESR_ELx_EC_SYS64:
> +		switch (sysreg) {
> +		case SYS_APIAKEYLO_EL1:
> +		case SYS_APIAKEYHI_EL1:
> +		case SYS_APIBKEYLO_EL1:
> +		case SYS_APIBKEYHI_EL1:
> +		case SYS_APDAKEYLO_EL1:
> +		case SYS_APDAKEYHI_EL1:
> +		case SYS_APDBKEYLO_EL1:
> +		case SYS_APDBKEYHI_EL1:
> +		case SYS_APGAKEYLO_EL1:
> +		case SYS_APGAKEYHI_EL1:
> +			break;
> +		default:
> +			return false;
> +		}
> +		break;
> +	default:
> +		return false;
> +	}

The ESR triage looks correct, but I think it might be clearer if split
out into a helper, since you can avoid the default cases with direct
returns, and you can avoid the nested switch, e.g.

static inline bool __hyp_text esr_is_ptrauth_trap(u32 esr)
{
	u32 ec = ESR_ELx_EC(esr);

	if (ec == ESR_ELx_EC_PAC)
		return true;

	if (ec != ESR_ELx_EC_SYS64)
		return false;
	
	switch (esr_sys64_to_sysreg(esr)) {
	case SYS_APIAKEYLO_EL1:
	case SYS_APIAKEYHI_EL1:
	case SYS_APIBKEYLO_EL1:
	case SYS_APIBKEYHI_EL1:
	case SYS_APDAKEYLO_EL1:
	case SYS_APDAKEYHI_EL1:
	case SYS_APDBKEYLO_EL1:
	case SYS_APDBKEYHI_EL1:
	case SYS_APGAKEYLO_EL1:
	case SYS_APGAKEYHI_EL1:
		return true;
	}

	return false;
}
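
With that helper, the triage at the top of __hyp_handle_ptrauth()
could then collapse to something like (again, just a sketch):

	if (!vcpu_has_ptrauth(vcpu) ||
	    !esr_is_ptrauth_trap(kvm_vcpu_get_hsr(vcpu)))
		return false;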


> +
> +	ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
> +	__ptrauth_save_key(ctxt->sys_regs, APIA);
> +	__ptrauth_save_key(ctxt->sys_regs, APIB);
> +	__ptrauth_save_key(ctxt->sys_regs, APDA);
> +	__ptrauth_save_key(ctxt->sys_regs, APDB);
> +	__ptrauth_save_key(ctxt->sys_regs, APGA);
> +
> +	vcpu_ptrauth_enable(vcpu);
> +
> +	val = read_sysreg(hcr_el2);
> +	val |= (HCR_API | HCR_APK);
> +	write_sysreg(val, hcr_el2);
> +
> +	return true;
> +}
> +
>  /*
>   * Return true when we were able to fixup the guest exit and should return to
>   * the guest, false when we should restore the host state and return to the
> @@ -524,6 +580,9 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
>  	if (__hyp_handle_fpsimd(vcpu))
>  		return true;
>  
> +	if (__hyp_handle_ptrauth(vcpu))
> +		return true;
> +
>  	if (!__populate_fault_info(vcpu))
>  		return true;
>  
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index ad1d57501d6d..564995084cf8 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1034,16 +1034,13 @@ static bool trap_ptrauth(struct kvm_vcpu *vcpu,
>  			 struct sys_reg_params *p,
>  			 const struct sys_reg_desc *rd)
>  {
> -	kvm_arm_vcpu_ptrauth_trap(vcpu);
> -
>  	/*
> -	 * Return false for both cases as we never skip the trapped
> -	 * instruction:
> -	 *
> -	 * - Either we re-execute the same key register access instruction
> -	 *   after enabling ptrauth.
> -	 * - Or an UNDEF is injected as ptrauth is not supported/enabled.
> +	 * If we land here, that is because we didn't fix up the access on exit
> +	 * by allowing the PtrAuth sysregs. The only way this happens is when
> +	 * the guest does not have PtrAuth support enabled.
>  	 */
> +	kvm_inject_undefined(vcpu);
> +
>  	return false;
>  }
>  
> -- 
> 2.26.2
> 

Regardless of the suggestion above, this looks sound to me. I agree that
it's much nicer to handle this in hyp, and AFAICT the context switch
should do the right thing, so:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Thanks,
Mark.

* Re: [PATCH 3/3] KVM: arm64: Enforce PtrAuth being disabled if not advertized
  2020-06-04 13:33   ` Marc Zyngier
@ 2020-06-04 15:39     ` Mark Rutland
  0 siblings, 0 replies; 24+ messages in thread
From: Mark Rutland @ 2020-06-04 15:39 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvm, kvmarm, linux-arm-kernel, James Morse, Julien Thierry,
	Suzuki K Poulose, Will Deacon, Catalin Marinas, kernel-team

Hi Marc,

On Thu, Jun 04, 2020 at 02:33:54PM +0100, Marc Zyngier wrote:
> Even if we don't expose PtrAuth to a guest, the guest can still
> write to its SCTLR_EL1 register and set the En{I,D}{A,B} bits
> and execute PtrAuth instructions from the NOP space. This has
> the effect of trapping to EL2, and we currently inject an UNDEF.

I think it's worth noting that this is an ill-behaved guest, as those
bits are RES0 when pointer authentication isn't implemented.
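
Concretely, the sequence we're talking about is the guest doing
something like this at EL1 with no PtrAuth advertised (illustrative
only, not taken from the patch):

	/* EnIA is RES0 here, so setting it is already a SW bug */
	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA);
	isb();
	/* HINT-space instruction; now traps to EL2 as ESR_ELx_EC_PAC */
	asm volatile("paciasp");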

The rationale for RES0/RES1 bits is that new HW can rely on old SW
programming them with the 0/1 as appropriate, and that old SW that does
not do so may encounter behaviour which from its PoV is UNPREDICTABLE.
The SW side of the contract is that you must program them as 0/1 unless
you know they're allocated with a specific meaning.

With that in mind I think the current behaviour is legitimate: from the
guest's PoV it's the same as there being a distinct extension which it
is not aware of where the En{I,D}{A,B} bits means "trap some HINTs to
EL1".

I don't think that we should attempt to work around broken software here
unless we absolutely have to, as it only adds complexity for no real
gain.

Thanks,
Mark.

> This is definitely the wrong thing to do, as the architecture says
> that these instructions should behave as NOPs.
> 
> Instead, we can simply reset the offending SCTLR_EL1 bits to
> zero, and resume the guest. It can still observe the SCTLR bits
> being set and then being cleared by magic, but that's much better
> than delivering an unexpected extension.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/handle_exit.c | 12 ------------
>  arch/arm64/kvm/hyp/switch.c  | 18 ++++++++++++++++--
>  2 files changed, 16 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 5a02d4c90559..98d8adf6f865 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -162,17 +162,6 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	return 1;
>  }
>  
> -/*
> - * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
> - * a NOP). If we get here, it is that we didn't fixup ptrauth on exit, and all
> - * that we can do is give the guest an UNDEF.
> - */
> -static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
> -{
> -	kvm_inject_undefined(vcpu);
> -	return 1;
> -}
> -
>  static exit_handle_fn arm_exit_handlers[] = {
>  	[0 ... ESR_ELx_EC_MAX]	= kvm_handle_unknown_ec,
>  	[ESR_ELx_EC_WFx]	= kvm_handle_wfx,
> @@ -195,7 +184,6 @@ static exit_handle_fn arm_exit_handlers[] = {
>  	[ESR_ELx_EC_BKPT32]	= kvm_handle_guest_debug,
>  	[ESR_ELx_EC_BRK64]	= kvm_handle_guest_debug,
>  	[ESR_ELx_EC_FP_ASIMD]	= handle_no_fpsimd,
> -	[ESR_ELx_EC_PAC]	= kvm_handle_ptrauth,
>  };
>  
>  static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index 2a50b3771c3b..fc09c3dfa466 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -503,8 +503,22 @@ static bool __hyp_text __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
>  	struct kvm_cpu_context *ctxt;
>  	u64 val;
>  
> -	if (!vcpu_has_ptrauth(vcpu))
> -		return false;
> +	if (!vcpu_has_ptrauth(vcpu)) {
> +		if (ec != ESR_ELx_EC_PAC)
> +			return false;
> +
> +		/*
> +		 * Interesting situation: the guest has enabled PtrAuth,
> +		 * despite KVM not advertising it. Fix SCTLR_EL1 on behalf
> +		 * of the guest (the bits should behave as RES0 anyway).
> +		 */
> +		val = read_sysreg_el1(SYS_SCTLR);
> +		val &= ~(SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
> +			 SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
> +		write_sysreg_el1(val, SYS_SCTLR);
> +
> +		return true;
> +	}
>  
>  	switch (ec) {
>  	case ESR_ELx_EC_PAC:
> -- 
> 2.26.2
> 

* Re: [PATCH 3/3] KVM: arm64: Enforce PtrAuth being disabled if not advertized
  2020-06-04 15:39     ` Mark Rutland
@ 2020-06-09  7:38       ` Marc Zyngier
  0 siblings, 0 replies; 24+ messages in thread
From: Marc Zyngier @ 2020-06-09  7:38 UTC (permalink / raw)
  To: Mark Rutland
  Cc: kvm, kvmarm, linux-arm-kernel, James Morse, Julien Thierry,
	Suzuki K Poulose, Will Deacon, Catalin Marinas, kernel-team

Hi Mark,

On 2020-06-04 16:39, Mark Rutland wrote:
> Hi Marc,
> 
> On Thu, Jun 04, 2020 at 02:33:54PM +0100, Marc Zyngier wrote:
>> Even if we don't expose PtrAuth to a guest, the guest can still
>> write to its SCTLR_EL1 register and set the En{I,D}{A,B} bits
>> and execute PtrAuth instructions from the NOP space. This has
>> the effect of trapping to EL2, and we currently inject an UNDEF.
> 
> I think it's worth noting that this is an ill-behaved guest, as those
> bits are RES0 when pointer authentication isn't implemented.
> 
> The rationale for RES0/RES1 bits is that new HW can rely on old SW
> programming them with the 0/1 as appropriate, and that old SW that does
> not do so may encounter behaviour which from its PoV is UNPREDICTABLE.
> The SW side of the contract is that you must program them as 0/1 unless
> you know they're allocated with a specific meaning.
> 
> With that in mind I think the current behaviour is legitimate: from the
> guest's PoV it's the same as there being a distinct extension which it
> is not aware of where the En{I,D}{A,B} bits means "trap some HINTs to
> EL1".
> 
> I don't think that we should attempt to work around broken software
> here unless we absolutely have to, as it only adds complexity for no
> real gain.

Fair enough. I was worried about the behaviour difference between HW
without PtrAuth and a guest to which PtrAuth is not advertised.
Ideally, they should behave the same, but the architecture feels a bit
brittle here.

Anyway, I'll drop this patch, and hopefully no guest will play this
game (they'll know pretty quickly about the issue anyway).

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...

* Re: [PATCH 3/3] KVM: arm64: Enforce PtrAuth being disabled if not advertized
@ 2020-06-09  7:38       ` Marc Zyngier
  0 siblings, 0 replies; 24+ messages in thread
From: Marc Zyngier @ 2020-06-09  7:38 UTC (permalink / raw)
  To: Mark Rutland
  Cc: kernel-team, kvm, Catalin Marinas, Will Deacon, kvmarm, linux-arm-kernel

Hi Mark,

On 2020-06-04 16:39, Mark Rutland wrote:
> Hi Marc,
> 
> On Thu, Jun 04, 2020 at 02:33:54PM +0100, Marc Zyngier wrote:
>> Even if we don't expose PtrAuth to a guest, the guest can still
>> write to its SCTIRLE_1 register and set the En{I,D}{A,B} bits
>> and execute PtrAuth instructions from the NOP space. This has
>> the effect of trapping to EL2, and we currently inject an UNDEF.
> 
> I think it's worth noting that this is an ill-behaved guest, as those
> bits are RES0 when pointer authentication isn't implemented.
> 
> The rationale for RES0/RES1 bits is that new HW can rely on old SW
> programming them with the 0/1 as appropriate, and that old SW that does
> not do so may encounter behaviour which from its PoV is UNPREDICTABLE.
> The SW side of the contract is that you must program them as 0/1 unless
> you know they're allocated with a specific meaning.
> 
> With that in mind I think the current behaviour is legitimate: from the
> guest's PoV it's the same as there being a distinct extension which it
> is not aware of where the En{I,D}{A,B} bits means "trap some HINTs to
> EL1".
> 
> I don't think that we should attempt to work around broken software 
> here
> unless we absolutely have to, as it only adds complexity for no real
> gain.

Fair enough. I was worried of the behaviour difference between HW 
without
PtrAuth and a guest with HW not advertised. Ideally, they should have
the same behaviour, but the architecture feels a bit brittle here.

Anyway, I'll drop this patch, and hopefully no guest will play this
game (they'll know pretty quickly about the issue anyway).

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 3/3] KVM: arm64: Enforce PtrAuth being disabled if not advertized
@ 2020-06-09  7:38       ` Marc Zyngier
  0 siblings, 0 replies; 24+ messages in thread
From: Marc Zyngier @ 2020-06-09  7:38 UTC (permalink / raw)
  To: Mark Rutland
  Cc: kernel-team, kvm, Suzuki K Poulose, Catalin Marinas, James Morse,
	Julien Thierry, Will Deacon, kvmarm, linux-arm-kernel

Hi Mark,

On 2020-06-04 16:39, Mark Rutland wrote:
> Hi Marc,
> 
> On Thu, Jun 04, 2020 at 02:33:54PM +0100, Marc Zyngier wrote:
>> Even if we don't expose PtrAuth to a guest, the guest can still
>> write to its SCTIRLE_1 register and set the En{I,D}{A,B} bits
>> and execute PtrAuth instructions from the NOP space. This has
>> the effect of trapping to EL2, and we currently inject an UNDEF.
> 
> I think it's worth noting that this is an ill-behaved guest, as those
> bits are RES0 when pointer authentication isn't implemented.
> 
> The rationale for RES0/RES1 bits is that new HW can rely on old SW
> programming them with the 0/1 as appropriate, and that old SW that does
> not do so may encounter behaviour which from its PoV is UNPREDICTABLE.
> The SW side of the contract is that you must program them as 0/1 unless
> you know they're allocated with a specific meaning.
> 
> With that in mind I think the current behaviour is legitimate: from the
> guest's PoV it's the same as there being a distinct extension which it
> is not aware of where the En{I,D}{A,B} bits means "trap some HINTs to
> EL1".
> 
> I don't think that we should attempt to work around broken software 
> here
> unless we absolutely have to, as it only adds complexity for no real
> gain.

Fair enough. I was worried of the behaviour difference between HW 
without
PtrAuth and a guest with HW not advertised. Ideally, they should have
the same behaviour, but the architecture feels a bit brittle here.

Anyway, I'll drop this patch, and hopefully no guest will play this
game (they'll know pretty quickly about the issue anyway).

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread

Thread overview: 24+ messages
2020-06-04 13:33 [PATCH 0/3] kvm: arm64: Pointer Authentication handling fixes Marc Zyngier
2020-06-04 13:33 ` [PATCH 1/3] KVM: arm64: Save the host's PtrAuth keys in non-preemptible context Marc Zyngier
2020-06-04 15:04   ` Mark Rutland
2020-06-04 13:33 ` [PATCH 2/3] KVM: arm64: Handle PtrAuth traps early Marc Zyngier
2020-06-04 15:23   ` Mark Rutland
2020-06-04 13:33 ` [PATCH 3/3] KVM: arm64: Enforce PtrAuth being disabled if not advertized Marc Zyngier
2020-06-04 15:39   ` Mark Rutland
2020-06-09  7:38     ` Marc Zyngier
