* [PATCH v3 0/7] Add ARMv8.3 pointer authentication for kvm guest
@ 2018-10-17 10:47 Amit Daniel Kachhap
  2018-10-17 10:47 ` [PATCH v3 1/7] arm64/kvm: preserve host HCR_EL2 value Amit Daniel Kachhap
                   ` (6 more replies)
  0 siblings, 7 replies; 13+ messages in thread
From: Amit Daniel Kachhap @ 2018-10-17 10:47 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap

Hi,

This patch series adds pointer authentication support for KVM guests and
is based on top of Linux 4.19-rc7 and the generic pointer authentication
patch series [1]. The first two patches in this series were originally posted
by Mark Rutland [2,3], and those postings contain some history of this work.

Extension Overview:
=============================================

The ARMv8.3 pointer authentication extension adds functionality to detect
modification of pointer values, mitigating certain classes of attack such as
stack smashing, and making return-oriented programming (ROP) attacks harder.

The extension introduces the concept of a pointer authentication code (PAC),
which is stored in some upper bits of pointers. Each PAC is derived from the
original pointer, another 64-bit value (e.g. the stack pointer), and a secret
128-bit key.

New instructions are added which can be used to:

* Insert a PAC into a pointer
* Strip a PAC from a pointer
* Authenticate and strip a PAC from a pointer
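
As a concrete illustration of the return-address case, a function
prologue/epilogue using these instructions might look like the following
hand-written sketch (paciasp/autiasp are APIAKey-based instructions in the
HINT space, so they execute as NOPs on CPUs without the extension):

foo:
	paciasp				// sign the link register (x30), with SP as the modifier
	stp	x29, x30, [sp, #-16]!	// spill the signed return address
	//	... function body ...
	ldp	x29, x30, [sp], #16
	autiasp				// authenticate x30 and strip the PAC; a corrupted
					// return address becomes a faulting pointer
	ret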

The detailed description of ARMv8.3 pointer authentication support in
userspace/kernel can be found in Kristina's generic pointer authentication
patch series[1].

KVM guest work:
==============================================

If pointer authentication is enabled for a KVM guest, the new PAC instructions
do not trap to EL2. If it is not enabled, instructions in the HINT space may be
ignored (executed as NOPs), while the others can trap to EL2 as illegal
instructions. Since each guest vcpu runs as a host thread, a key (only APIAKey
for now) is already initialised for it and is used by PAC; this key is
exchanged whenever a world switch between host and guest happens, as sketched
below.
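
A sketch of that exchange for APIAKey (the actual implementation, which
covers all five key pairs, is in patch 2, arch/arm64/kvm/hyp/ptrauth-sr.c):

	/* Save the outgoing (host) key and install the guest's copy. */
	host_ctxt->sys_regs[APIAKEYLO_EL1] = read_sysreg_s(SYS_APIAKEYLO_EL1);
	host_ctxt->sys_regs[APIAKEYHI_EL1] = read_sysreg_s(SYS_APIAKEYHI_EL1);
	write_sysreg_s(guest_ctxt->sys_regs[APIAKEYLO_EL1], SYS_APIAKEYLO_EL1);
	write_sysreg_s(guest_ctxt->sys_regs[APIAKEYHI_EL1], SYS_APIAKEYHI_EL1);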

Christoffer Dall made some review comments on the original series [2,3], and
this series tries to address them. The original series enabled pointer
authentication for both host userspace and KVM guests; that work has since
been split, and this series contains only the KVM guest support.

Changes since v2 [2,3]:
* Allow the host and guest to have different HCR_EL2 settings, rather than
  only the constant values HCR_HOST_VHE_FLAGS or HCR_HOST_NVHE_FLAGS.
* Optimise the reading of HCR_EL2 in the host/guest switch by fetching it once
  during KVM initialisation and reusing it later.
* Context switch the pointer authentication keys when switching between guest
  and host. Pointer authentication was enabled lazily in the earlier series [2];
  that approach is dropped here to keep things simple, and can be revisited if
  it turns out to cause a significant performance issue.
* Added a userspace option to choose pointer authentication.
* Based on the userspace option, the ptrauth cpufeature will be visible.
* Based on the userspace option, the ptrauth key registers will be accessible.
* Added a small document on how to enable pointer authentication from the
  userspace KVM API; a minimal usage sketch follows below.
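
For reference, a hypothetical userspace sequence to request the feature could
look like this (error handling elided; kvm_fd, vm_fd and vcpu_fd are assumed
to be already-open KVM system, VM and vcpu file descriptors, and the fragment
needs <sys/ioctl.h> and <linux/kvm.h>):

	struct kvm_vcpu_init init;

	/* Fills in the preferred target and zeroes init.features. */
	ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init);
	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PTRAUTH) > 0)
		init.features[0] |= 1UL << KVM_ARM_VCPU_PTRAUTH;
	ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);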

Looking for feedback and comments.

Thanks,
Amit

[1]: https://patchwork.kernel.org/cover/10627655/
[2]: https://lore.kernel.org/lkml/20171127163806.31435-11-mark.rutland@arm.com/
[3]: https://lore.kernel.org/lkml/20171127163806.31435-10-mark.rutland@arm.com/


Linux (4.19-rc7 based):

Amit Daniel Kachhap (4):
  arm64/kvm: add a userspace option to enable pointer authentication
  arm64/kvm: enable pointer authentication cpufeature conditionally
  arm64/kvm: control accessibility of ptrauth key registers
  arm64: docs: document KVM support of pointer authentication

Mark Rutland (2):
  arm64/kvm: preserve host HCR_EL2 value
  arm64/kvm: context-switch ptrauth registers

 Documentation/arm64/pointer-authentication.txt |  8 ++-
 Documentation/virtual/kvm/api.txt              |  2 +
 arch/arm/include/asm/kvm_host.h                |  8 +++
 arch/arm64/include/asm/cpufeature.h            |  6 ++
 arch/arm64/include/asm/kvm_asm.h               |  2 +
 arch/arm64/include/asm/kvm_host.h              | 45 ++++++++++++-
 arch/arm64/include/asm/kvm_hyp.h               |  7 ++
 arch/arm64/include/uapi/asm/kvm.h              |  1 +
 arch/arm64/kernel/traps.c                      |  1 +
 arch/arm64/kvm/handle_exit.c                   | 24 ++++---
 arch/arm64/kvm/hyp/Makefile                    |  1 +
 arch/arm64/kvm/hyp/ptrauth-sr.c                | 89 +++++++++++++++++++++++++
 arch/arm64/kvm/hyp/switch.c                    | 23 +++++--
 arch/arm64/kvm/hyp/sysreg-sr.c                 | 11 ++++
 arch/arm64/kvm/hyp/tlb.c                       |  6 +-
 arch/arm64/kvm/reset.c                         |  3 +
 arch/arm64/kvm/sys_regs.c                      | 91 ++++++++++++++++++++------
 include/uapi/linux/kvm.h                       |  1 +
 virt/kvm/arm/arm.c                             |  5 ++
 19 files changed, 295 insertions(+), 39 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c

kvmtool:

Repo: git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git
Amit Daniel Kachhap (1):
  arm/kvm: arm64: Add a vcpu feature for pointer authentication

 arm/aarch32/include/kvm/kvm-cpu-arch.h    | 2 ++
 arm/aarch64/include/asm/kvm.h             | 3 +++
 arm/aarch64/include/kvm/kvm-arch.h        | 1 +
 arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h    | 2 ++
 arm/aarch64/kvm-cpu.c                     | 5 +++++
 arm/include/arm-common/kvm-config-arch.h  | 1 +
 arm/kvm-cpu.c                             | 7 +++++++
 include/linux/kvm.h                       | 1 +
 9 files changed, 25 insertions(+), 1 deletion(-)
-- 
2.7.4



* [PATCH v3 1/7] arm64/kvm: preserve host HCR_EL2 value
  2018-10-17 10:47 [PATCH v3 0/7] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
@ 2018-10-17 10:47 ` Amit Daniel Kachhap
  2018-10-17 10:47 ` [PATCH v3 2/7] arm64/kvm: context-switch ptrauth registers Amit Daniel Kachhap
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Amit Daniel Kachhap @ 2018-10-17 10:47 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Mark Rutland,
	Amit Daniel Kachhap

From: Mark Rutland <mark.rutland@arm.com>

When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
is a constant value. This works today, as the host HCR_EL2 value is
always the same, but this will get in the way of supporting extensions
that require HCR_EL2 bits to be set conditionally for the host.

To allow such features to work without KVM having to explicitly handle
every possible host feature combination, this patch has KVM save/restore
the host HCR_EL2 value when switching to/from a guest. The register is
saved once during CPU hypervisor initialisation and restored after each
switch back from the guest.

To fetch HCR_EL2 during KVM initialisation, a hyp call is made using
kvm_call_hyp; this is needed for the nVHE case, where the register has to
be read at EL2.

For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
to toggle the TGE bit with an RMW sequence, as we already do in
__tlb_switch_to_guest_vhe().

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm/include/asm/kvm_host.h   |  2 ++
 arch/arm64/include/asm/kvm_asm.h  |  2 ++
 arch/arm64/include/asm/kvm_host.h | 14 ++++++++++++--
 arch/arm64/kvm/hyp/switch.c       | 15 +++++++++------
 arch/arm64/kvm/hyp/sysreg-sr.c    | 11 +++++++++++
 arch/arm64/kvm/hyp/tlb.c          |  6 +++++-
 virt/kvm/arm/arm.c                |  2 ++
 7 files changed, 43 insertions(+), 9 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 3ad482d..dd32934 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -273,6 +273,8 @@ static inline void __cpu_init_stage2(void)
 	kvm_call_hyp(__init_stage2_translation);
 }
 
+static inline void __cpu_copy_host_registers(void) {}
+
 static inline int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext)
 {
 	return 0;
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 102b5a5..181c388 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -74,6 +74,8 @@ extern u32 __kvm_get_mdcr_el2(void);
 
 extern u32 __init_stage2_translation(void);
 
+extern u64 __read_hyp_hcr_el2(void);
+
 /* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
 #define __hyp_this_cpu_ptr(sym)						\
 	({								\
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 3d6d733..0c0e243 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -194,13 +194,17 @@ enum vcpu_sysreg {
 
 #define NR_COPRO_REGS	(NR_SYS_REGS * 2)
 
+struct kvm_cpu_init_host_regs {
+	u64 hcr_el2;
+};
+
 struct kvm_cpu_context {
 	struct kvm_regs	gp_regs;
 	union {
 		u64 sys_regs[NR_SYS_REGS];
 		u32 copro[NR_COPRO_REGS];
 	};
-
+	struct kvm_cpu_init_host_regs init_regs;
 	struct kvm_vcpu *__hyp_running_vcpu;
 };
 
@@ -209,7 +213,7 @@ typedef struct kvm_cpu_context kvm_cpu_context_t;
 struct kvm_vcpu_arch {
 	struct kvm_cpu_context ctxt;
 
-	/* HYP configuration */
+	/* Guest HYP configuration */
 	u64 hcr_el2;
 	u32 mdcr_el2;
 
@@ -448,6 +452,12 @@ static inline void __cpu_init_stage2(void)
 		  "PARange is %d bits, unsupported configuration!", parange);
 }
 
+static inline void __cpu_copy_host_registers(void)
+{
+	kvm_cpu_context_t *host_cxt = this_cpu_ptr(&kvm_host_cpu_state);
+	host_cxt->init_regs.hcr_el2 = kvm_call_hyp(__read_hyp_hcr_el2);
+}
+
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index a1c32c1..fa7dab9 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -139,15 +139,15 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 		__activate_traps_nvhe(vcpu);
 }
 
-static void deactivate_traps_vhe(void)
+static void deactivate_traps_vhe(struct kvm_cpu_context *host_ctxt)
 {
 	extern char vectors[];	/* kernel exception vectors */
-	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
+	write_sysreg(host_ctxt->init_regs.hcr_el2, hcr_el2);
 	write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1);
 	write_sysreg(vectors, vbar_el1);
 }
 
-static void __hyp_text __deactivate_traps_nvhe(void)
+static void __hyp_text __deactivate_traps_nvhe(struct kvm_cpu_context *host_ctxt)
 {
 	u64 mdcr_el2 = read_sysreg(mdcr_el2);
 
@@ -157,12 +157,15 @@ static void __hyp_text __deactivate_traps_nvhe(void)
 	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
 
 	write_sysreg(mdcr_el2, mdcr_el2);
-	write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
+	write_sysreg(host_ctxt->init_regs.hcr_el2, hcr_el2);
 	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
 }
 
 static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *host_ctxt;
+
+	host_ctxt = vcpu->arch.host_cpu_context;
 	/*
 	 * If we pended a virtual abort, preserve it until it gets
 	 * cleared. See D1.14.3 (Virtual Interrupts) for details, but
@@ -173,9 +176,9 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
 		vcpu->arch.hcr_el2 = read_sysreg(hcr_el2);
 
 	if (has_vhe())
-		deactivate_traps_vhe();
+		deactivate_traps_vhe(host_ctxt);
 	else
-		__deactivate_traps_nvhe();
+		__deactivate_traps_nvhe(host_ctxt);
 }
 
 void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index 9ce2239..26b1c63 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -288,3 +288,14 @@ void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu)
 
 	vcpu->arch.sysregs_loaded_on_cpu = false;
 }
+
+/**
+ * __read_hyp_hcr_el2 - Returns the HCR_EL2 register value
+ *
+ * This function is invoked via kvm_call_hyp, allowing callers at EL1 to
+ * fetch the register value from EL2 in the nVHE case.
+ */
+u64 __hyp_text __read_hyp_hcr_el2(void)
+{
+	return read_sysreg(hcr_el2);
+}
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index 131c777..04fc75a 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -50,12 +50,16 @@ static hyp_alternate_select(__tlb_switch_to_guest,
 
 static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm)
 {
+	u64 val;
+
 	/*
 	 * We're done with the TLB operation, let's restore the host's
 	 * view of HCR_EL2.
 	 */
 	write_sysreg(0, vttbr_el2);
-	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
+	val = read_sysreg(hcr_el2);
+	val |= HCR_TGE;
+	write_sysreg(val, hcr_el2);
 }
 
 static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm)
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index c92053b..b3d1ee4 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1322,6 +1322,8 @@ static void cpu_hyp_reinit(void)
 
 	if (vgic_present)
 		kvm_vgic_init_cpu_hardware();
+
+	__cpu_copy_host_registers();
 }
 
 static void _kvm_arch_hardware_enable(void *discard)
-- 
2.7.4



* [PATCH v3 2/7] arm64/kvm: context-switch ptrauth registers
  2018-10-17 10:47 [PATCH v3 0/7] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
  2018-10-17 10:47 ` [PATCH v3 1/7] arm64/kvm: preserve host HCR_EL2 value Amit Daniel Kachhap
@ 2018-10-17 10:47 ` Amit Daniel Kachhap
  2018-11-02  8:37   ` Christoffer Dall
  2018-10-17 10:47 ` [PATCH v3 3/7] arm64/kvm: add a userspace option to enable pointer authentication Amit Daniel Kachhap
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 13+ messages in thread
From: Amit Daniel Kachhap @ 2018-10-17 10:47 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Mark Rutland,
	Amit Daniel Kachhap

From: Mark Rutland <mark.rutland@arm.com>

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work.

When we schedule a vcpu, we enable guest usage of pointer
authentication instructions and accesses to the keys. After these are
enabled, we allow context-switching the keys.

Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). When the guest is scheduled on a
physical CPU lacking the feature, these attempts will result in an UNDEF
being taken by the guest.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm/include/asm/kvm_host.h     |  2 +
 arch/arm64/include/asm/cpufeature.h |  6 +++
 arch/arm64/include/asm/kvm_host.h   | 29 +++++++++++++++
 arch/arm64/include/asm/kvm_hyp.h    |  7 ++++
 arch/arm64/kernel/traps.c           |  1 +
 arch/arm64/kvm/handle_exit.c        | 24 +++++++-----
 arch/arm64/kvm/hyp/Makefile         |  1 +
 arch/arm64/kvm/hyp/ptrauth-sr.c     | 73 +++++++++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/switch.c         |  8 ++++
 arch/arm64/kvm/sys_regs.c           | 40 ++++++++++++++++----
 virt/kvm/arm/arm.c                  |  3 ++
 11 files changed, 177 insertions(+), 17 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index dd32934..0ad3c3f 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -351,6 +351,8 @@ static inline int kvm_arm_have_ssbd(void)
 
 static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arm_vcpu_ptrauth_start(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arm_vcpu_ptrauth_stop(struct kvm_vcpu *vcpu) {}
 
 #define __KVM_HAVE_ARCH_VM_ALLOC
 struct kvm *kvm_arch_alloc_vm(void);
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index af4ca92..967977f 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -517,6 +517,12 @@ static inline bool system_supports_sve(void)
 		cpus_have_const_cap(ARM64_SVE);
 }
 
+static inline bool system_supports_ptrauth(void)
+{
+	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
+		cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
+}
+
 #define ARM64_SSBD_UNKNOWN		-1
 #define ARM64_SSBD_FORCE_DISABLE	0
 #define ARM64_SSBD_KERNEL		1
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 0c0e243..ecba6d5 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -144,6 +144,18 @@ enum vcpu_sysreg {
 	PMSWINC_EL0,	/* Software Increment Register */
 	PMUSERENR_EL0,	/* User Enable Register */
 
+	/* Pointer Authentication Registers */
+	APIAKEYLO_EL1,
+	APIAKEYHI_EL1,
+	APIBKEYLO_EL1,
+	APIBKEYHI_EL1,
+	APDAKEYLO_EL1,
+	APDAKEYHI_EL1,
+	APDBKEYLO_EL1,
+	APDBKEYHI_EL1,
+	APGAKEYLO_EL1,
+	APGAKEYHI_EL1,
+
 	/* 32bit specific registers. Keep them at the end of the range */
 	DACR32_EL2,	/* Domain Access Control Register */
 	IFSR32_EL2,	/* Instruction Fault Status Register */
@@ -426,6 +438,23 @@ static inline bool kvm_arch_check_sve_has_vhe(void)
 		return true;
 }
 
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+static inline void kvm_arm_vcpu_ptrauth_start(struct kvm_vcpu *vcpu)
+{
+	/* Enable ptrauth for the vcpu */
+	if (system_supports_ptrauth())
+		kvm_arm_vcpu_ptrauth_enable(vcpu);
+}
+static inline void kvm_arm_vcpu_ptrauth_stop(struct kvm_vcpu *vcpu)
+{
+	/* Disable ptrauth for the vcpu */
+	if (system_supports_ptrauth())
+		kvm_arm_vcpu_ptrauth_disable(vcpu);
+}
+
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
+
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 384c343..3f4b844 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -152,6 +152,13 @@ bool __fpsimd_enabled(void);
 void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
 void deactivate_traps_vhe_put(void);
 
+void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
+			       struct kvm_cpu_context *host_ctxt,
+			       struct kvm_cpu_context *guest_ctxt);
+void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
+			      struct kvm_cpu_context *host_ctxt,
+			      struct kvm_cpu_context *guest_ctxt);
+
 u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
 void __noreturn __hyp_do_panic(unsigned long, ...);
 
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 039e9ff..8c7ff96 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -557,6 +557,7 @@ static const char *esr_class_str[] = {
 	[ESR_ELx_EC_CP14_LS]		= "CP14 LDC/STC",
 	[ESR_ELx_EC_FP_ASIMD]		= "ASIMD",
 	[ESR_ELx_EC_CP10_ID]		= "CP10 MRC/VMRS",
+	[ESR_ELx_EC_PAC]		= "Pointer authentication trap",
 	[ESR_ELx_EC_CP14_64]		= "CP14 MCRR/MRRC",
 	[ESR_ELx_EC_ILL]		= "PSTATE.IL",
 	[ESR_ELx_EC_SVC32]		= "SVC (AArch32)",
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 53759b3..798158d 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -174,19 +174,25 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
 }
 
 /*
+ * Handle the guest trying to use a ptrauth instruction, or trying to access a
+ * ptrauth register. This trap should not occur, since we enable ptrauth when
+ * the vcpu is scheduled; the handler is kept as a backstop for any corner case.
+ */
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
+{
+	if (system_supports_ptrauth())
+		kvm_arm_vcpu_ptrauth_enable(vcpu);
+	else
+		kvm_inject_undefined(vcpu);
+}
+
+/*
  * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
- * a NOP).
+ * a NOP), or guest EL1 access to a ptrauth register.
  */
 static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	/*
-	 * We don't currently support ptrauth in a guest, and we mask the ID
-	 * registers to prevent well-behaved guests from trying to make use of
-	 * it.
-	 *
-	 * Inject an UNDEF, as if the feature really isn't present.
-	 */
-	kvm_inject_undefined(vcpu);
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
 	return 1;
 }
 
diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 2fabc2d..85ddb7f 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -20,6 +20,7 @@ obj-$(CONFIG_KVM_ARM_HOST) += fpsimd.o
 obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
 obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
 obj-$(CONFIG_KVM_ARM_HOST) += s2-setup.o
+obj-$(CONFIG_KVM_ARM_HOST) += ptrauth-sr.o
 
 # KVM code is run at a different exception code with a different map, so
 # compiler instrumentation that inserts callbacks or checks into the code may
diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
new file mode 100644
index 0000000..6e96908
--- /dev/null
+++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
@@ -0,0 +1,73 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * arch/arm64/kvm/hyp/ptrauth-sr.c: Guest/host ptrauth save/restore
+ *
+ * Copyright 2018 Arm Limited
+ * Author: Mark Rutland <mark.rutland@arm.com>
+ *         Amit Daniel Kachhap <amit.kachhap@arm.com>
+ */
+#include <linux/compiler.h>
+#include <linux/kvm_host.h>
+
+#include <asm/cpucaps.h>
+#include <asm/cpufeature.h>
+#include <asm/kvm_asm.h>
+#include <asm/kvm_hyp.h>
+#include <asm/pointer_auth.h>
+
+static __always_inline bool __hyp_text __ptrauth_is_enabled(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.hcr_el2 & (HCR_API | HCR_APK);
+}
+
+#define __ptrauth_save_key(regs, key)						\
+({										\
+	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
+	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
+})
+
+static __always_inline void __hyp_text __ptrauth_save_state(struct kvm_cpu_context *ctxt)
+{
+	__ptrauth_save_key(ctxt->sys_regs, APIA);
+	__ptrauth_save_key(ctxt->sys_regs, APIB);
+	__ptrauth_save_key(ctxt->sys_regs, APDA);
+	__ptrauth_save_key(ctxt->sys_regs, APDB);
+	__ptrauth_save_key(ctxt->sys_regs, APGA);
+}
+
+#define __ptrauth_restore_key(regs, key) 					\
+({										\
+	write_sysreg_s(regs[key ## KEYLO_EL1], SYS_ ## key ## KEYLO_EL1);	\
+	write_sysreg_s(regs[key ## KEYHI_EL1], SYS_ ## key ## KEYHI_EL1);	\
+})
+
+static __always_inline void __hyp_text __ptrauth_restore_state(struct kvm_cpu_context *ctxt)
+{
+	__ptrauth_restore_key(ctxt->sys_regs, APIA);
+	__ptrauth_restore_key(ctxt->sys_regs, APIB);
+	__ptrauth_restore_key(ctxt->sys_regs, APDA);
+	__ptrauth_restore_key(ctxt->sys_regs, APDB);
+	__ptrauth_restore_key(ctxt->sys_regs, APGA);
+}
+
+void __no_ptrauth __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
+					  struct kvm_cpu_context *host_ctxt,
+					  struct kvm_cpu_context *guest_ctxt)
+{
+	if (!__ptrauth_is_enabled(vcpu))
+		return;
+
+	__ptrauth_save_state(host_ctxt);
+	__ptrauth_restore_state(guest_ctxt);
+}
+
+void __no_ptrauth __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
+					 struct kvm_cpu_context *host_ctxt,
+					 struct kvm_cpu_context *guest_ctxt)
+{
+	if (!__ptrauth_is_enabled(vcpu))
+		return;
+
+	__ptrauth_save_state(guest_ctxt);
+	__ptrauth_restore_state(host_ctxt);
+}
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index fa7dab9..714ee5b 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -508,6 +508,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	sysreg_restore_guest_state_vhe(guest_ctxt);
 	__debug_switch_to_guest(vcpu);
 
+	__ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt);
+
 	__set_guest_arch_workaround_state(vcpu);
 
 	do {
@@ -519,6 +521,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 
 	__set_host_arch_workaround_state(vcpu);
 
+	__ptrauth_switch_to_host(vcpu, host_ctxt, guest_ctxt);
+
 	sysreg_save_guest_state_vhe(guest_ctxt);
 
 	__deactivate_traps(vcpu);
@@ -562,6 +566,8 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 	__sysreg_restore_state_nvhe(guest_ctxt);
 	__debug_switch_to_guest(vcpu);
 
+	__ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt);
+
 	__set_guest_arch_workaround_state(vcpu);
 
 	do {
@@ -573,6 +579,8 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 
 	__set_host_arch_workaround_state(vcpu);
 
+	__ptrauth_switch_to_host(vcpu, host_ctxt, guest_ctxt);
+
 	__sysreg_save_state_nvhe(guest_ctxt);
 	__sysreg32_save_state(vcpu);
 	__timer_disable_traps(vcpu);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 1ca592d..6af6c7d 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -986,6 +986,32 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
 	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
 
+
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
+}
+
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
+}
+
+static bool trap_ptrauth(struct kvm_vcpu *vcpu,
+			 struct sys_reg_params *p,
+			 const struct sys_reg_desc *rd)
+{
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
+	return false;
+}
+
+#define __PTRAUTH_KEY(k)						\
+	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k }
+
+#define PTRAUTH_KEY(k)							\
+	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
+	__PTRAUTH_KEY(k ## KEYHI_EL1)
+
 static bool access_cntp_tval(struct kvm_vcpu *vcpu,
 		struct sys_reg_params *p,
 		const struct sys_reg_desc *r)
@@ -1040,14 +1066,6 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
 			kvm_debug("SVE unsupported for guests, suppressing\n");
 
 		val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
-	} else if (id == SYS_ID_AA64ISAR1_EL1) {
-		const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
-					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
-					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
-					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
-		if (val & ptrauth_mask)
-			kvm_debug("ptrauth unsupported for guests, suppressing\n");
-		val &= ~ptrauth_mask;
 	} else if (id == SYS_ID_AA64MMFR1_EL1) {
 		if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
 			kvm_debug("LORegions unsupported for guests, suppressing\n");
@@ -1316,6 +1334,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
 	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
 
+	PTRAUTH_KEY(APIA),
+	PTRAUTH_KEY(APIB),
+	PTRAUTH_KEY(APDA),
+	PTRAUTH_KEY(APDB),
+	PTRAUTH_KEY(APGA),
+
 	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
 	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
 	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index b3d1ee4..71efc60 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -386,10 +386,13 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		vcpu_clear_wfe_traps(vcpu);
 	else
 		vcpu_set_wfe_traps(vcpu);
+
+	kvm_arm_vcpu_ptrauth_start(vcpu);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
+	kvm_arm_vcpu_ptrauth_stop(vcpu);
 	kvm_arch_vcpu_put_fp(vcpu);
 	kvm_vcpu_put_sysregs(vcpu);
 	kvm_timer_vcpu_put(vcpu);
-- 
2.7.4



* [PATCH v3 3/7] arm64/kvm: add a userspace option to enable pointer authentication
  2018-10-17 10:47 [PATCH v3 0/7] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
  2018-10-17 10:47 ` [PATCH v3 1/7] arm64/kvm: preserve host HCR_EL2 value Amit Daniel Kachhap
  2018-10-17 10:47 ` [PATCH v3 2/7] arm64/kvm: context-switch ptrauth registers Amit Daniel Kachhap
@ 2018-10-17 10:47 ` Amit Daniel Kachhap
  2018-10-17 10:47 ` [PATCH v3 4/7] arm64/kvm: enable pointer authentication cpufeature conditionally Amit Daniel Kachhap
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Amit Daniel Kachhap @ 2018-10-17 10:47 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland

This feature flag allows the handling of pointer authentication
instructions in a KVM guest; when the flag is not set, those instructions
are treated as undefined. The existing vcpu API KVM_ARM_VCPU_INIT is used
to supply this parameter instead of creating a new API.

A new register is not created to pass this parameter via the
SET/GET_ONE_REG interface, as supplying just a flag (KVM_ARM_VCPU_PTRAUTH)
is enough to select this feature.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 Documentation/virtual/kvm/api.txt |  2 ++
 arch/arm/include/asm/kvm_host.h   |  4 ++++
 arch/arm64/include/asm/kvm_host.h |  8 ++++----
 arch/arm64/include/uapi/asm/kvm.h |  1 +
 arch/arm64/kvm/handle_exit.c      |  2 +-
 arch/arm64/kvm/hyp/ptrauth-sr.c   | 16 ++++++++++++++++
 arch/arm64/kvm/reset.c            |  3 +++
 include/uapi/linux/kvm.h          |  1 +
 8 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 647f941..45f3b94 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2585,6 +2585,8 @@ Possible features:
 	  Depends on KVM_CAP_ARM_PSCI_0_2.
 	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
 	  Depends on KVM_CAP_ARM_PMU_V3.
+	- KVM_ARM_VCPU_PTRAUTH: Emulate pointer authentication for the CPU.
+	  Depends on KVM_CAP_ARM_PTRAUTH (arm64 only).
 
 
 4.83 KVM_ARM_PREFERRED_TARGET
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 0ad3c3f..4543afc 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -353,6 +353,10 @@ static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_vcpu_ptrauth_start(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_vcpu_ptrauth_stop(struct kvm_vcpu *vcpu) {}
+static inline bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
 
 #define __KVM_HAVE_ARCH_VM_ALLOC
 struct kvm *kvm_arch_alloc_vm(void);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ecba6d5..e909d96 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -43,7 +43,7 @@
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
-#define KVM_VCPU_MAX_FEATURES 4
+#define KVM_VCPU_MAX_FEATURES 5
 
 #define KVM_REQ_SLEEP \
 	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
@@ -440,19 +440,19 @@ static inline bool kvm_arch_check_sve_has_vhe(void)
 
 void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
 void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu);
 static inline void kvm_arm_vcpu_ptrauth_start(struct kvm_vcpu *vcpu)
 {
 	/* Enable ptrauth for the vcpu */
-	if (system_supports_ptrauth())
+	if (system_supports_ptrauth() && kvm_arm_vcpu_ptrauth_allowed(vcpu))
 		kvm_arm_vcpu_ptrauth_enable(vcpu);
 }
 static inline void kvm_arm_vcpu_ptrauth_stop(struct kvm_vcpu *vcpu)
 {
 	/* Disable ptrauth for the vcpu */
-	if (system_supports_ptrauth())
+	if (system_supports_ptrauth() && kvm_arm_vcpu_ptrauth_allowed(vcpu))
 		kvm_arm_vcpu_ptrauth_disable(vcpu);
 }
-
 void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
 
 static inline void kvm_arch_hardware_unsetup(void) {}
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 97c3478..5f82ca1 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -102,6 +102,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_PTRAUTH		4 /* VCPU uses address authentication */
 
 struct kvm_vcpu_init {
 	__u32 target;
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 798158d..95eb4e6 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -180,7 +180,7 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
  */
 void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
 {
-	if (system_supports_ptrauth())
+	if (system_supports_ptrauth() && kvm_arm_vcpu_ptrauth_allowed(vcpu))
 		kvm_arm_vcpu_ptrauth_enable(vcpu);
 	else
 		kvm_inject_undefined(vcpu);
diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
index 6e96908..3c718f5 100644
--- a/arch/arm64/kvm/hyp/ptrauth-sr.c
+++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
@@ -71,3 +71,19 @@ void __no_ptrauth __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
 	__ptrauth_save_state(guest_ctxt);
 	__ptrauth_restore_state(host_ctxt);
 }
+
+/**
+ * kvm_arm_vcpu_ptrauth_allowed - checks if ptrauth feature is present in vcpu
+ *
+ * @vcpu: The VCPU pointer
+ *
+ * This function checks whether ptrauth should be enabled in the guest, as
+ * configured through the KVM userspace API.
+ */
+bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu)
+{
+	if (test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features))
+		return true;
+	else
+		return false;
+}
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index e37c78b..e32d666 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -85,6 +85,9 @@ int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_VCPU_EVENTS:
 		r = 1;
 		break;
+	case KVM_CAP_ARM_PTRAUTH:
+		r = system_supports_ptrauth();
+		break;
 	default:
 		r = 0;
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 251be35..d886878 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -953,6 +953,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_NESTED_STATE 157
 #define KVM_CAP_ARM_INJECT_SERROR_ESR 158
 #define KVM_CAP_MSR_PLATFORM_INFO 159
+#define KVM_CAP_ARM_PTRAUTH 160
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.7.4



* [PATCH v3 4/7] arm64/kvm: enable pointer authentication cpufeature conditionally
  2018-10-17 10:47 [PATCH v3 0/7] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
                   ` (2 preceding siblings ...)
  2018-10-17 10:47 ` [PATCH v3 3/7] arm64/kvm: add a userspace option to enable pointer authentication Amit Daniel Kachhap
@ 2018-10-17 10:47 ` Amit Daniel Kachhap
  2018-10-17 10:47 ` [PATCH v3 5/7] arm64/kvm: control accessibility of ptrauth key registers Amit Daniel Kachhap
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Amit Daniel Kachhap @ 2018-10-17 10:47 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland

According to the userspace setting, the pointer authentication cpufeature
is now exposed to, or hidden from, guests.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/kvm/sys_regs.c | 33 +++++++++++++++++++++------------
 1 file changed, 21 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 6af6c7d..ce6144a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1055,7 +1055,7 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
 }
 
 /* Read a sanitised cpufeature ID register by sys_reg_desc */
-static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
+static u64 read_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_desc const *r, bool raz)
 {
 	u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
 			 (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
@@ -1066,6 +1066,15 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
 			kvm_debug("SVE unsupported for guests, suppressing\n");
 
 		val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
+	} else if (id == SYS_ID_AA64ISAR1_EL1) {
+		const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
+					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
+					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
+					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
+		if (!kvm_arm_vcpu_ptrauth_allowed(vcpu)) {
+			kvm_debug("ptrauth unsupported for guests, suppressing\n");
+			val &= ~ptrauth_mask;
+		}
 	} else if (id == SYS_ID_AA64MMFR1_EL1) {
 		if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
 			kvm_debug("LORegions unsupported for guests, suppressing\n");
@@ -1086,7 +1095,7 @@ static bool __access_id_reg(struct kvm_vcpu *vcpu,
 	if (p->is_write)
 		return write_to_read_only(vcpu, p, r);
 
-	p->regval = read_id_reg(r, raz);
+	p->regval = read_id_reg(vcpu, r, raz);
 	return true;
 }
 
@@ -1115,17 +1124,17 @@ static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
  * are stored, and for set_id_reg() we don't allow the effective value
  * to be changed.
  */
-static int __get_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
-			bool raz)
+static int __get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+			void __user *uaddr, bool raz)
 {
 	const u64 id = sys_reg_to_index(rd);
-	const u64 val = read_id_reg(rd, raz);
+	const u64 val = read_id_reg(vcpu, rd, raz);
 
 	return reg_to_user(uaddr, &val, id);
 }
 
-static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
-			bool raz)
+static int __set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+			void __user *uaddr, bool raz)
 {
 	const u64 id = sys_reg_to_index(rd);
 	int err;
@@ -1136,7 +1145,7 @@ static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
 		return err;
 
 	/* This is what we mean by invariant: you can't change it. */
-	if (val != read_id_reg(rd, raz))
+	if (val != read_id_reg(vcpu, rd, raz))
 		return -EINVAL;
 
 	return 0;
@@ -1145,25 +1154,25 @@ static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
 static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		      const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __get_id_reg(rd, uaddr, false);
+	return __get_id_reg(vcpu, rd, uaddr, false);
 }
 
 static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		      const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __set_id_reg(rd, uaddr, false);
+	return __set_id_reg(vcpu, rd, uaddr, false);
 }
 
 static int get_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 			  const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __get_id_reg(rd, uaddr, true);
+	return __get_id_reg(vcpu, rd, uaddr, true);
 }
 
 static int set_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 			  const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __set_id_reg(rd, uaddr, true);
+	return __set_id_reg(vcpu, rd, uaddr, true);
 }
 
 /* sys_reg_desc initialiser for known cpufeature ID registers */
-- 
2.7.4



* [PATCH v3 5/7] arm64/kvm: control accessibility of ptrauth key registers
  2018-10-17 10:47 [PATCH v3 0/7] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
                   ` (3 preceding siblings ...)
  2018-10-17 10:47 ` [PATCH v3 4/7] arm64/kvm: enable pointer authentication cpufeature conditionally Amit Daniel Kachhap
@ 2018-10-17 10:47 ` Amit Daniel Kachhap
  2018-10-17 10:47 ` [PATCH v3 6/7] arm64: docs: document KVM support of pointer authentication Amit Daniel Kachhap
  2018-10-17 10:48 ` [PATCH v3 7/7] arm/kvm: arm64: Add a vcpu feature for " Amit Daniel Kachhap
  6 siblings, 0 replies; 13+ messages in thread
From: Amit Daniel Kachhap @ 2018-10-17 10:47 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland

The ptrauth key registers are conditionally present in the guest system
register list, based on the user-specified KVM_ARM_VCPU_PTRAUTH flag.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/kvm/sys_regs.c | 42 +++++++++++++++++++++++++++++++-----------
 1 file changed, 31 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index ce6144a..09302b2 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1343,12 +1343,6 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
 	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
 
-	PTRAUTH_KEY(APIA),
-	PTRAUTH_KEY(APIB),
-	PTRAUTH_KEY(APDA),
-	PTRAUTH_KEY(APDB),
-	PTRAUTH_KEY(APGA),
-
 	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
 	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
 	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
@@ -1500,6 +1494,14 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x70 },
 };
 
+static const struct sys_reg_desc ptrauth_reg_descs[] = {
+	PTRAUTH_KEY(APIA),
+	PTRAUTH_KEY(APIB),
+	PTRAUTH_KEY(APDA),
+	PTRAUTH_KEY(APDB),
+	PTRAUTH_KEY(APGA),
+};
+
 static bool trap_dbgidr(struct kvm_vcpu *vcpu,
 			struct sys_reg_params *p,
 			const struct sys_reg_desc *r)
@@ -2100,6 +2102,8 @@ static int emulate_sys_reg(struct kvm_vcpu *vcpu,
 	r = find_reg(params, table, num);
 	if (!r)
 		r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+	if (!r && kvm_arm_vcpu_ptrauth_allowed(vcpu))
+		r = find_reg(params, ptrauth_reg_descs, ARRAY_SIZE(ptrauth_reg_descs));
 
 	if (likely(r)) {
 		perform_access(vcpu, params, r);
@@ -2213,6 +2217,8 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
 	r = find_reg_by_id(id, &params, table, num);
 	if (!r)
 		r = find_reg(&params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+	if (!r && kvm_arm_vcpu_ptrauth_allowed(vcpu))
+		r = find_reg(&params, ptrauth_reg_descs, ARRAY_SIZE(ptrauth_reg_descs));
 
 	/* Not saved in the sys_reg array and not otherwise accessible? */
 	if (r && !(r->reg || r->get_user))
@@ -2494,18 +2500,22 @@ static int walk_one_sys_reg(const struct sys_reg_desc *rd,
 }
 
 /* Assumed ordered tables, see kvm_sys_reg_table_init. */
-static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
+static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind,
+			const struct sys_reg_desc *desc, unsigned int len)
 {
 	const struct sys_reg_desc *i1, *i2, *end1, *end2;
 	unsigned int total = 0;
 	size_t num;
 	int err;
 
+	if (desc == ptrauth_reg_descs && !kvm_arm_vcpu_ptrauth_allowed(vcpu))
+		return total;
+
 	/* We check for duplicates here, to allow arch-specific overrides. */
 	i1 = get_target_table(vcpu->arch.target, true, &num);
 	end1 = i1 + num;
-	i2 = sys_reg_descs;
-	end2 = sys_reg_descs + ARRAY_SIZE(sys_reg_descs);
+	i2 = desc;
+	end2 = desc + len;
 
 	BUG_ON(i1 == end1 || i2 == end2);
 
@@ -2533,7 +2543,10 @@ unsigned long kvm_arm_num_sys_reg_descs(struct kvm_vcpu *vcpu)
 {
 	return ARRAY_SIZE(invariant_sys_regs)
 		+ num_demux_regs()
-		+ walk_sys_regs(vcpu, (u64 __user *)NULL);
+		+ walk_sys_regs(vcpu, (u64 __user *)NULL, sys_reg_descs,
+				ARRAY_SIZE(sys_reg_descs))
+		+ walk_sys_regs(vcpu, (u64 __user *)NULL, ptrauth_reg_descs,
+				ARRAY_SIZE(ptrauth_reg_descs));
 }
 
 int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
@@ -2548,7 +2561,12 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 		uindices++;
 	}
 
-	err = walk_sys_regs(vcpu, uindices);
+	err = walk_sys_regs(vcpu, uindices, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+	if (err < 0)
+		return err;
+	uindices += err;
+
+	err = walk_sys_regs(vcpu, uindices, ptrauth_reg_descs, ARRAY_SIZE(ptrauth_reg_descs));
 	if (err < 0)
 		return err;
 	uindices += err;
@@ -2582,6 +2600,7 @@ void kvm_sys_reg_table_init(void)
 	BUG_ON(check_sysreg_table(cp15_regs, ARRAY_SIZE(cp15_regs)));
 	BUG_ON(check_sysreg_table(cp15_64_regs, ARRAY_SIZE(cp15_64_regs)));
 	BUG_ON(check_sysreg_table(invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs)));
+	BUG_ON(check_sysreg_table(ptrauth_reg_descs, ARRAY_SIZE(ptrauth_reg_descs)));
 
 	/* We abuse the reset function to overwrite the table itself. */
 	for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++)
@@ -2623,6 +2642,7 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
 
 	/* Generic chip reset first (so target could override). */
 	reset_sys_reg_descs(vcpu, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+	reset_sys_reg_descs(vcpu, ptrauth_reg_descs, ARRAY_SIZE(ptrauth_reg_descs));
 
 	table = get_target_table(vcpu->arch.target, true, &num);
 	reset_sys_reg_descs(vcpu, table, num);
-- 
2.7.4



* [PATCH v3 6/7] arm64: docs: document KVM support of pointer authentication
  2018-10-17 10:47 [PATCH v3 0/7] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
                   ` (4 preceding siblings ...)
  2018-10-17 10:47 ` [PATCH v3 5/7] arm64/kvm: control accessibility of ptrauth key registers Amit Daniel Kachhap
@ 2018-10-17 10:47 ` Amit Daniel Kachhap
  2018-11-02  8:39   ` Christoffer Dall
  2018-10-17 10:48 ` [PATCH v3 7/7] arm/kvm: arm64: Add a vcpu feature for " Amit Daniel Kachhap
  6 siblings, 1 reply; 13+ messages in thread
From: Amit Daniel Kachhap @ 2018-10-17 10:47 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland

The documentation is updated to help in using pointer authentication
for KVM guests.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 Documentation/arm64/pointer-authentication.txt | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
index 8a9cb57..b00d735 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -78,7 +78,13 @@ to TTBR1 addresses (e.g. kernel pointers).
 Virtualization
 --------------
 
-Pointer authentication is not currently supported in KVM guests. KVM
+Pointer authentication is enabled in a KVM guest when the virtual machine is
+created with a flag requesting that this feature be enabled. Without
+this flag, pointer authentication is not enabled in KVM guests and KVM
 will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
 the feature will result in an UNDEFINED exception being injected into
 the guest.
+
+The flag to enable this feature is KVM_ARM_VCPU_PTRAUTH and should be
+passed via the KVM_ARM_VCPU_INIT API. The pointer authentication key
+registers are hidden from userspace if this feature is not enabled.
-- 
2.7.4



* [PATCH v3 7/7] arm/kvm: arm64: Add a vcpu feature for pointer authentication
  2018-10-17 10:47 [PATCH v3 0/7] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
                   ` (5 preceding siblings ...)
  2018-10-17 10:47 ` [PATCH v3 6/7] arm64: docs: document KVM support of pointer authentication Amit Daniel Kachhap
@ 2018-10-17 10:48 ` Amit Daniel Kachhap
  6 siblings, 0 replies; 13+ messages in thread
From: Amit Daniel Kachhap @ 2018-10-17 10:48 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland

This is a runtime feature and can be enabled with the --ptrauth option.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arm/aarch32/include/kvm/kvm-cpu-arch.h    | 2 ++
 arm/aarch64/include/asm/kvm.h             | 3 +++
 arm/aarch64/include/kvm/kvm-arch.h        | 1 +
 arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h    | 2 ++
 arm/aarch64/kvm-cpu.c                     | 5 +++++
 arm/include/arm-common/kvm-config-arch.h  | 1 +
 arm/kvm-cpu.c                             | 7 +++++++
 include/linux/kvm.h                       | 1 +
 9 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index d28ea67..5779767 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -13,4 +13,6 @@
 #define ARM_CPU_ID		0, 0, 0
 #define ARM_CPU_ID_MPIDR	5
 
+static inline unsigned int kvm__cpu_ptrauth_get_feature(void) { return 0; }
+
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index c286035..0fd183d 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -98,6 +98,9 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
 
+/* CPU uses address authentication and A key */
+#define KVM_ARM_VCPU_PTRAUTH		4
+
 struct kvm_vcpu_init {
 	__u32 target;
 	__u32 features[7];
diff --git a/arm/aarch64/include/kvm/kvm-arch.h b/arm/aarch64/include/kvm/kvm-arch.h
index 9de623a..bd566cb 100644
--- a/arm/aarch64/include/kvm/kvm-arch.h
+++ b/arm/aarch64/include/kvm/kvm-arch.h
@@ -11,4 +11,5 @@
 
 #include "arm-common/kvm-arch.h"
 
+
 #endif /* KVM__KVM_ARCH_H */
diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
index 04be43d..2074684 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -8,7 +8,9 @@
 			"Create PMUv3 device"),				\
 	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
 			"Specify random seed for Kernel Address Space "	\
-			"Layout Randomization (KASLR)"),
+			"Layout Randomization (KASLR)"),		\
+	OPT_BOOLEAN('\0', "ptrauth", &(cfg)->has_ptrauth,		\
+			"Enable address authentication"),
 
 #include "arm-common/kvm-config-arch.h"
 
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index a9d8563..f7b64b7 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -17,4 +17,6 @@
 #define ARM_CPU_CTRL		3, 0, 1, 0
 #define ARM_CPU_CTRL_SCTLR_EL1	0
 
+unsigned int kvm__cpu_ptrauth_get_feature(void);
+
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/kvm-cpu.c b/arm/aarch64/kvm-cpu.c
index 1b29374..10da2cb 100644
--- a/arm/aarch64/kvm-cpu.c
+++ b/arm/aarch64/kvm-cpu.c
@@ -123,6 +123,11 @@ void kvm_cpu__reset_vcpu(struct kvm_cpu *vcpu)
 		return reset_vcpu_aarch64(vcpu);
 }
 
+unsigned int kvm__cpu_ptrauth_get_feature(void)
+{
+	return (1UL << KVM_ARM_VCPU_PTRAUTH);
+}
+
 int kvm_cpu__get_endianness(struct kvm_cpu *vcpu)
 {
 	struct kvm_one_reg reg;
diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
index 6a196f1..eb872db 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/kvm-config-arch.h
@@ -10,6 +10,7 @@ struct kvm_config_arch {
 	bool		aarch32_guest;
 	bool		has_pmuv3;
 	u64		kaslr_seed;
+	bool		has_ptrauth;
 	enum irqchip_type irqchip;
 };
 
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 7780251..5afd727 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -68,6 +68,13 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
 		vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
 	}
 
+	/* Set KVM_ARM_VCPU_PTRAUTH if available */
+	if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH)) {
+		if (kvm->cfg.arch.has_ptrauth)
+			vcpu_init.features[0] |=
+					kvm__cpu_ptrauth_get_feature();
+	}
+
 	/*
 	 * If the preferred target ioctl is successful then
 	 * use preferred target else try each and every target type
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index f51d508..ffd8f5c 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -883,6 +883,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_PPC_MMU_RADIX 134
 #define KVM_CAP_PPC_MMU_HASH_V3 135
 #define KVM_CAP_IMMEDIATE_EXIT 136
+#define KVM_CAP_ARM_PTRAUTH 160
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.7.4



* Re: [PATCH v3 2/7] arm64/kvm: context-switch ptrauth registers
  2018-10-17 10:47 ` [PATCH v3 2/7] arm64/kvm: context-switch ptrauth registers Amit Daniel Kachhap
@ 2018-11-02  8:37   ` Christoffer Dall
  2018-11-12 22:32     ` Catalin Marinas
  0 siblings, 1 reply; 13+ messages in thread
From: Christoffer Dall @ 2018-11-02  8:37 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Mark Rutland

On Wed, Oct 17, 2018 at 04:17:55PM +0530, Amit Daniel Kachhap wrote:
> From: Mark Rutland <mark.rutland@arm.com>
> 
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work.
> 
> When we schedule a vcpu, we enable guest usage of pointer
> authentication instructions and accesses to the keys. After these are
> enabled, we allow context-switching the keys.
> 
> Pointer authentication consists of address authentication and generic
> authentication, and CPUs in a system might have varied support for
> either. Where support for either feature is not uniform, it is hidden
> from guests via ID register emulation, as a result of the cpufeature
> framework in the host.
> 
> Unfortunately, address authentication and generic authentication cannot
> be trapped separately, as the architecture provides a single EL2 trap
> covering both. If we wish to expose one without the other, we cannot
> prevent a (badly-written) guest from intermittently using a feature
> which is not uniformly supported (when scheduled on a physical CPU which
> supports the relevant feature). When the guest is scheduled on a
> physical CPU lacking the feature, these attempts will result in an UNDEF
> being taken by the guest.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
>  arch/arm/include/asm/kvm_host.h     |  2 +
>  arch/arm64/include/asm/cpufeature.h |  6 +++
>  arch/arm64/include/asm/kvm_host.h   | 29 +++++++++++++++
>  arch/arm64/include/asm/kvm_hyp.h    |  7 ++++
>  arch/arm64/kernel/traps.c           |  1 +
>  arch/arm64/kvm/handle_exit.c        | 24 +++++++-----
>  arch/arm64/kvm/hyp/Makefile         |  1 +
>  arch/arm64/kvm/hyp/ptrauth-sr.c     | 73 +++++++++++++++++++++++++++++++++++++
>  arch/arm64/kvm/hyp/switch.c         |  8 ++++
>  arch/arm64/kvm/sys_regs.c           | 40 ++++++++++++++++----
>  virt/kvm/arm/arm.c                  |  3 ++
>  11 files changed, 177 insertions(+), 17 deletions(-)
>  create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c
> 

[...]

> diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
> new file mode 100644
> index 0000000..6e96908
> --- /dev/null
> +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
> @@ -0,0 +1,73 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * arch/arm64/kvm/hyp/ptrauth-sr.c: Guest/host ptrauth save/restore
> + *
> + * Copyright 2018 Arm Limited
> + * Author: Mark Rutland <mark.rutland@arm.com>
> + *         Amit Daniel Kachhap <amit.kachhap@arm.com>
> + */
> +#include <linux/compiler.h>
> +#include <linux/kvm_host.h>
> +
> +#include <asm/cpucaps.h>
> +#include <asm/cpufeature.h>
> +#include <asm/kvm_asm.h>
> +#include <asm/kvm_hyp.h>
> +#include <asm/pointer_auth.h>
> +
> +static __always_inline bool __hyp_text __ptrauth_is_enabled(struct kvm_vcpu *vcpu)
> +{
> +	return vcpu->arch.hcr_el2 & (HCR_API | HCR_APK);
> +}
> +
> +#define __ptrauth_save_key(regs, key)						\
> +({										\
> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
> +})
> +
> +static __always_inline void __hyp_text __ptrauth_save_state(struct kvm_cpu_context *ctxt)
> +{
> +	__ptrauth_save_key(ctxt->sys_regs, APIA);
> +	__ptrauth_save_key(ctxt->sys_regs, APIB);
> +	__ptrauth_save_key(ctxt->sys_regs, APDA);
> +	__ptrauth_save_key(ctxt->sys_regs, APDB);
> +	__ptrauth_save_key(ctxt->sys_regs, APGA);
> +}
> +
> +#define __ptrauth_restore_key(regs, key) 					\
> +({										\
> +	write_sysreg_s(regs[key ## KEYLO_EL1], SYS_ ## key ## KEYLO_EL1);	\
> +	write_sysreg_s(regs[key ## KEYHI_EL1], SYS_ ## key ## KEYHI_EL1);	\
> +})
> +
> +static __always_inline void __hyp_text __ptrauth_restore_state(struct kvm_cpu_context *ctxt)
> +{
> +	__ptrauth_restore_key(ctxt->sys_regs, APIA);
> +	__ptrauth_restore_key(ctxt->sys_regs, APIB);
> +	__ptrauth_restore_key(ctxt->sys_regs, APDA);
> +	__ptrauth_restore_key(ctxt->sys_regs, APDB);
> +	__ptrauth_restore_key(ctxt->sys_regs, APGA);
> +}
> +
> +void __no_ptrauth __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
> +					  struct kvm_cpu_context *host_ctxt,
> +					  struct kvm_cpu_context *guest_ctxt)
> +{
> +	if (!__ptrauth_is_enabled(vcpu))
> +		return;
> +
> +	__ptrauth_save_state(host_ctxt);
> +	__ptrauth_restore_state(guest_ctxt);
> +}
> +
> +void __no_ptrauth __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
> +					 struct kvm_cpu_context *host_ctxt,
> +					 struct kvm_cpu_context *guest_ctxt)
> +{
> +	if (!__ptrauth_is_enabled(vcpu))
> +		return;
> +
> +	__ptrauth_save_state(guest_ctxt);
> +	__ptrauth_restore_state(host_ctxt);
> +}
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index fa7dab9..714ee5b 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -508,6 +508,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>  	sysreg_restore_guest_state_vhe(guest_ctxt);
>  	__debug_switch_to_guest(vcpu);
>  
> +	__ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt);
> +
>  	__set_guest_arch_workaround_state(vcpu);
>  
>  	do {
> @@ -519,6 +521,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>  
>  	__set_host_arch_workaround_state(vcpu);
>  
> +	__ptrauth_switch_to_host(vcpu, host_ctxt, guest_ctxt);
> +
>  	sysreg_save_guest_state_vhe(guest_ctxt);
>  
>  	__deactivate_traps(vcpu);
> @@ -562,6 +566,8 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
>  	__sysreg_restore_state_nvhe(guest_ctxt);
>  	__debug_switch_to_guest(vcpu);
>  
> +	__ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt);
> +
>  	__set_guest_arch_workaround_state(vcpu);
>  
>  	do {
> @@ -573,6 +579,8 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
>  
>  	__set_host_arch_workaround_state(vcpu);
>  
> +	__ptrauth_switch_to_host(vcpu, host_ctxt, guest_ctxt);
> +
>  	__sysreg_save_state_nvhe(guest_ctxt);
>  	__sysreg32_save_state(vcpu);
>  	__timer_disable_traps(vcpu);

Two questions:

 - Can we limit all ptrauth functionality to VHE systems so that we
   don't need to touch the non-VHE path and so that we don't need any of
   the __hyp_text stuff?

 - Can we move all the save/restore logic to vcpu load/put as long as
   the host kernel itself isn't using ptrauth, and if the host kernel at
   some point begins to use ptrauth, can we have a hook to save/restore
   at that time (similar to what we do for FPSIMD) to avoid this
   overhead on every switch?


Thanks,

    Christoffer
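
A rough sketch of the load/put variant asked about above, reusing the
save/restore helpers from ptrauth-sr.c; the hook names and the
kvm_host_cpu_state lookup are illustrative assumptions, not code from
this series:

static void vcpu_ptrauth_load(struct kvm_vcpu *vcpu)
{
	/* Move guest keys in at load time, not on every world switch. */
	if (!__ptrauth_is_enabled(vcpu))
		return;

	__ptrauth_save_state(this_cpu_ptr(&kvm_host_cpu_state));
	__ptrauth_restore_state(&vcpu->arch.ctxt);
}

static void vcpu_ptrauth_put(struct kvm_vcpu *vcpu)
{
	/* Mirror image on put: stash guest keys, restore host keys. */
	if (!__ptrauth_is_enabled(vcpu))
		return;

	__ptrauth_save_state(&vcpu->arch.ctxt);
	__ptrauth_restore_state(this_cpu_ptr(&kvm_host_cpu_state));
}

This only works if nothing between load and put (including the host
kernel itself) touches the key registers, which is exactly the caveat
raised in the second question.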


* Re: [PATCH v3 6/7] arm64: docs: document KVM support of pointer authentication
  2018-10-17 10:47 ` [PATCH v3 6/7] arm64: docs: document KVM support of pointer authentication Amit Daniel Kachhap
@ 2018-11-02  8:39   ` Christoffer Dall
  0 siblings, 0 replies; 13+ messages in thread
From: Christoffer Dall @ 2018-11-02  8:39 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Mark Rutland

On Wed, Oct 17, 2018 at 04:17:59PM +0530, Amit Daniel Kachhap wrote:
> The documentation is updated to help in using pointer authentication
> for KVM guests.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
>  Documentation/arm64/pointer-authentication.txt | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
> index 8a9cb57..b00d735 100644
> --- a/Documentation/arm64/pointer-authentication.txt
> +++ b/Documentation/arm64/pointer-authentication.txt
> @@ -78,7 +78,13 @@ to TTBR1 addresses (e.g. kernel pointers).
>  Virtualization
>  --------------
>  
> -Pointer authentication is not currently supported in KVM guests. KVM
> +Pointer authentication is enabled in a KVM guest when the virtual
> +machine is created with a flag requesting this feature. Without this
> +flag, pointer authentication is not enabled in KVM guests and KVM
>  will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
>  the feature will result in an UNDEFINED exception being injected into
>  the guest.
> +
> +The flag to enable this feature is KVM_ARM_VCPU_PTRAUTH, passed via
> +the KVM_ARM_VCPU_INIT API. The pointer authentication key registers
> +are hidden from userspace if this feature is not enabled.
> -- 
> 2.7.4
> 

I think this is placed in the wrong file.

Any information about the KVM API should go in
Documentation/virtual/kvm/api.txt.

The only information about KVM that belongs in this file would be
whether a host running a VM with ptrauth can affect the host's ptrauth
state (if that applies).
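
For illustration, the kind of api.txt entry that suggests; the wording
below is a sketch, not text from the series:

	KVM_ARM_VCPU_PTRAUTH: Enable ARMv8.3 pointer authentication
	for this vcpu. Requires KVM_CAP_ARM_PTRAUTH. When not set,
	the ptrauth fields of ID_AA64ISAR1_EL1 and the key registers
	are hidden from the guest and from userspace.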


Thanks,

    Christoffer


* Re: [PATCH v3 2/7] arm64/kvm: context-switch ptrauth registers
  2018-11-02  8:37   ` Christoffer Dall
@ 2018-11-12 22:32     ` Catalin Marinas
  2018-11-13 13:44       ` Christoffer Dall
  0 siblings, 1 reply; 13+ messages in thread
From: Catalin Marinas @ 2018-11-12 22:32 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Amit Daniel Kachhap, Mark Rutland, Andrew Jones, Marc Zyngier,
	Will Deacon, linux-kernel, Kristina Martsenko, kvmarm,
	Ramana Radhakrishnan, Dave Martin, linux-arm-kernel

On Fri, Nov 02, 2018 at 09:37:25AM +0100, Christoffer Dall wrote:
> On Wed, Oct 17, 2018 at 04:17:55PM +0530, Amit Daniel Kachhap wrote:
> > [...]
[...] 
> Two questions:
> 
>  - Can we limit all ptrauth functionality to VHE systems so that we
>    don't need to touch the non-VHE path and so that we don't need any of
>    the __hyp_text stuff?

I would say yes. ARMv8.3 implies v8.1, so we can enable ptrauth only when
VHE is built into the kernel and present in the CPU implementation.

>  - Can we move all the save/restore logic to vcpu load/put as long as
>    the host kernel itself isn't using ptrauth, and if the host kernel at
>    some point begins to use ptrauth, can we have a hook to save/restore
>    at that time (similar to what we do for FPSIMD) to avoid this
>    overhead on every switch?

We will probably enable ptrauth for the kernel as well fairly soon, so I
don't think we should base the KVM design on the no-ptrauth-in-kernel
use-case.

-- 
Catalin
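
Concretely, the gate described above could look something like the
check below; has_vhe() and cpus_have_const_cap() are existing kernel
helpers, while the ARM64_HAS_ADDRESS_AUTH capability name is assumed
from the in-flight ptrauth series:

/* Sketch: only expose ptrauth to guests on VHE-capable hosts. */
static inline bool kvm_supports_ptrauth(void)
{
	return IS_ENABLED(CONFIG_ARM64_VHE) && has_vhe() &&
	       cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
}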


* Re: [PATCH v3 2/7] arm64/kvm: context-switch ptrauth registers
  2018-11-12 22:32     ` Catalin Marinas
@ 2018-11-13 13:44       ` Christoffer Dall
  2018-11-15 14:33         ` Amit Daniel Kachhap
  0 siblings, 1 reply; 13+ messages in thread
From: Christoffer Dall @ 2018-11-13 13:44 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Amit Daniel Kachhap, Mark Rutland, Andrew Jones, Marc Zyngier,
	Will Deacon, linux-kernel, Kristina Martsenko, kvmarm,
	Ramana Radhakrishnan, Dave Martin, linux-arm-kernel

On Mon, Nov 12, 2018 at 10:32:12PM +0000, Catalin Marinas wrote:
> On Fri, Nov 02, 2018 at 09:37:25AM +0100, Christoffer Dall wrote:
> > On Wed, Oct 17, 2018 at 04:17:55PM +0530, Amit Daniel Kachhap wrote:
> > > [...]
> [...] 
> > Two questions:
> > 
> >  - Can we limit all ptrauth functionality to VHE systems so that we
> >    don't need to touch the non-VHE path and so that we don't need any of
> >    the __hyp_text stuff?
> 
> I would say yes. ARMv8.3 implies v8.1, so we can enable ptrauth only when
> VHE is built into the kernel and present in the CPU implementation.
> 

Sounds good.

> >  - Can we move all the save/restore logic to vcpu load/put as long as
> >    the host kernel itself isn't using ptrauth, and if the host kernel at
> >    some point begins to use ptrauth, can we have a hook to save/restore
> >    at that time (similar to what we do for FPSIMD) to avoid this
> >    overhead on every switch?
> 
> We will probably enable ptrauth for the kernel as well fairly soon, so I
> don't think we should base the KVM design on the no-ptrauth-in-kernel
> use-case.
> 

I assume in this case ptrauth will be used for all of the kernel,
including most of the KVM code?

In that case, I wonder if we always need to context-switch ptrauth
configuration state or if we can be lazy until the guest actually uses
the feature?


Thanks,

    Christoffer


* Re: [PATCH v3 2/7] arm64/kvm: context-switch ptrauth registers
  2018-11-13 13:44       ` Christoffer Dall
@ 2018-11-15 14:33         ` Amit Daniel Kachhap
  0 siblings, 0 replies; 13+ messages in thread
From: Amit Daniel Kachhap @ 2018-11-15 14:33 UTC (permalink / raw)
  To: christoffer.dall
  Cc: Catalin Marinas, Mark Rutland, drjones, marc.zyngier,
	Will Deacon, linux-kernel, kristina.martsenko, kvmarm,
	ramana.radhakrishnan, Dave Martin, LAK

On Tue, Nov 13, 2018 at 7:16 PM Christoffer Dall
<christoffer.dall@arm.com> wrote:
>
> On Mon, Nov 12, 2018 at 10:32:12PM +0000, Catalin Marinas wrote:
> > On Fri, Nov 02, 2018 at 09:37:25AM +0100, Christoffer Dall wrote:
> > > On Wed, Oct 17, 2018 at 04:17:55PM +0530, Amit Daniel Kachhap wrote:
> > > > [...]
> > [...]
> > > Two questions:
> > >
> > >  - Can we limit all ptrauth functionality to VHE systems so that we
> > >    don't need to touch the non-VHE path and so that we don't need any of
> > >    the __hyp_text stuff?
> >
> > I would say yes. ARMv8.3 implies v8.1, so we can enable ptrauth only when
> > VHE is built into the kernel and present in the CPU implementation.
> >
>
> Sounds good.
>
> > >  - Can we move all the save/restore logic to vcpu load/put as long as
> > >    the host kernel itself isn't using ptrauth, and if the host kernel at
> > >    some point begins to use ptrauth, can we have a hook to save/restore
> > >    at that time (similar to what we do for FPSIMD) to avoid this
> > >    overhead on every switch?
> >
> > We will probably enable ptrauth for the kernel as well fairly soon, so I
> > don't think we should base the KVM design on the no-ptrauth-in-kernel
> > use-case.
> >
>
> I assume in this case ptrauth will be used for all of the kernel,
> including most of the KVM code?
>
> In that case, I wonder if we always need to context-switch ptrauth
> configuration state or if we can be lazy until the guest actually uses
> the feature?

Sorry for the delayed reply. Lazy switching is possible and was
present in Mark Rutland's earlier v2 version. However, I removed it
from v3 to keep things simple, since v3 adds a mandatory user option
to enable ptrauth. That said, both can coexist, at the cost of a
single trap if the guest always uses ptrauth.

Thanks,
Amit
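
To make that trade-off concrete, a sketch of the lazy variant: leave
HCR_EL2.{API,APK} clear so the first ptrauth instruction traps, then
enable the feature and load the guest keys from the trap handler. The
handler name and its wiring into handle_exit are illustrative, not the
v2 code as posted:

static bool kvm_handle_ptrauth_trap(struct kvm_vcpu *vcpu)
{
	/* A guest without the vcpu feature keeps taking an UNDEF. */
	if (!test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features))
		return false;

	/* Stop trapping and load the guest keys once. */
	vcpu->arch.hcr_el2 |= HCR_API | HCR_APK;
	write_sysreg(vcpu->arch.hcr_el2, hcr_el2);
	__ptrauth_restore_state(&vcpu->arch.ctxt);

	/* Returning true re-executes the trapped instruction. */
	return true;
}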


end of thread, newest message: 2018-11-15 14:34 UTC

Thread overview: 13+ messages
2018-10-17 10:47 [PATCH v3 0/7] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
2018-10-17 10:47 ` [PATCH v3 1/7] arm64/kvm: preserve host HCR_EL2 value Amit Daniel Kachhap
2018-10-17 10:47 ` [PATCH v3 2/7] arm64/kvm: context-switch ptrauth registers Amit Daniel Kachhap
2018-11-02  8:37   ` Christoffer Dall
2018-11-12 22:32     ` Catalin Marinas
2018-11-13 13:44       ` Christoffer Dall
2018-11-15 14:33         ` Amit Daniel Kachhap
2018-10-17 10:47 ` [PATCH v3 3/7] arm64/kvm: add a userspace option to enable pointer authentication Amit Daniel Kachhap
2018-10-17 10:47 ` [PATCH v3 4/7] arm64/kvm: enable pointer authentication cpufeature conditionally Amit Daniel Kachhap
2018-10-17 10:47 ` [PATCH v3 5/7] arm64/kvm: control accessibility of ptrauth key registers Amit Daniel Kachhap
2018-10-17 10:47 ` [PATCH v3 6/7] arm64: docs: document KVM support of pointer authentication Amit Daniel Kachhap
2018-11-02  8:39   ` Christoffer Dall
2018-10-17 10:48 ` [PATCH v3 7/7] arm/kvm: arm64: Add a vcpu feature for " Amit Daniel Kachhap
