* [PATCH v4 0/6] Add ARMv8.3 pointer authentication for kvm guest
@ 2018-12-18  7:56 Amit Daniel Kachhap
  2018-12-18  7:56 ` [PATCH v4 1/6] arm64/kvm: preserve host HCR_EL2 value Amit Daniel Kachhap
                   ` (5 more replies)
From: Amit Daniel Kachhap @ 2018-12-18  7:56 UTC
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap

Hi,

This patch series adds pointer authentication support for KVM guests and
is based on top of Linux 4.20-rc5 and the generic pointer authentication
patch series[1]. The first two patches in this series were originally
posted by Mark Rutland[2,3], and those postings contain some history of
this work.

Extension Overview:
=============================================

The ARMv8.3 pointer authentication extension adds functionality to detect
modification of pointer values, mitigating certain classes of attack such as
stack smashing, and making return oriented programming attacks harder.

The extension introduces the concept of a pointer authentication code (PAC),
which is stored in some upper bits of pointers. Each PAC is derived from the
original pointer, another 64-bit value (e.g. the stack pointer), and a secret
128-bit key.

New instructions are added which can be used to:

* Insert a PAC into a pointer
* Strip a PAC from a pointer
* Authenticate and strip a PAC from a pointer
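
To illustrate (not part of this series): a minimal userspace sketch using
these instructions, assuming an ARMv8.3 CPU, a toolchain invoked with
-march=armv8.3-a, and the ptrauth keys already set up by the kernel as in
[1]. In real use the modifier is typically the stack pointer for return
addresses.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        void *ptr = (void *)main;
        uint64_t mod = 0;       /* 64-bit modifier; e.g. SP in practice */

        /* Insert a PAC into the pointer, using the APIA key */
        asm volatile("pacia %0, %1" : "+r"(ptr) : "r"(mod));
        printf("signed:   %p\n", ptr);

        /* Authenticate and strip the PAC (pointer is corrupted on failure) */
        asm volatile("autia %0, %1" : "+r"(ptr) : "r"(mod));
        printf("restored: %p\n", ptr);

        /* Strip a PAC without authenticating (a no-op here, ptr is clean) */
        asm volatile("xpaci %0" : "+r"(ptr));
        return 0;
}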

The detailed description of ARMv8.3 pointer authentication support in
userspace/kernel can be found in Kristina's generic pointer authentication
patch series[1].

KVM guest work:
==============================================

If pointer authentication is enabled for KVM guests, the new PAC instructions
will not trap to EL2. If it is not enabled, they may either be ignored (those
in the HINT space) or trapped to EL2 as illegal instructions. Since each KVM
guest vcpu runs as a host thread, it has ptrauth keys initialised for it,
which are then used for PAC. These keys are context-switched when a world
switch happens between host and guest.
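
For reference, the two HCR_EL2 bits that gate guest ptrauth (sketched from
mainline's arch/arm64/include/asm/kvm_arm.h; double-check against the tree):

#define HCR_API         (UL(1) << 41)   /* EL1 PAC instructions do not trap */
#define HCR_APK         (UL(1) << 40)   /* EL1 key register accesses do not trap */

With both bits set for the guest, PAC instructions and key accesses run
without trapping; with them clear, they trap to EL2 and KVM can react, e.g.
by enabling the feature lazily as this series does.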

Christoffer Dall made some review comments on the original series[2,3,4],
and this patch series tries to address them. The original series enabled
pointer authentication for both userspace and KVM guests; that work has
since been bifurcated, and this series contains only the KVM guest support.

Changes since v3 [4]:
* Use pointer authentication only when VHE is present, as ARMv8.3 implies
  the ARMv8.1 features are also present.
* Re-added the lazy context handling of ptrauth instructions from the v2
  version.
* Added more details in the Documentation.
* Rebased onto the new version of the generic ptrauth patches [1].

Changes since v2 [2,3]:
* Allow the host and guest to have different HCR_EL2 settings, not just the
  constant values HCR_HOST_VHE_FLAGS or HCR_HOST_NVHE_FLAGS.
* Optimise the reading of HCR_EL2 in the host/guest switch by fetching it
  once during KVM initialisation and using the stored value later.
* Context-switch the pointer authentication keys when switching between
  guest and host. Pointer authentication was previously enabled lazily[2];
  that has been removed to keep things simple, but can be revisited later
  if there is a significant performance issue.
* Added a userspace option to choose pointer authentication.
* Based on the userspace option, ptrauth cpufeature will be visible.
* Based on the userspace option, ptrauth key registers will be accessible.
* A short document is added on how to enable pointer authentication from
  the userspace KVM API.

Looking for feedback and comments.

Thanks,
Amit

[1]: https://lkml.org/lkml/2018/12/7/666
[2]: https://lore.kernel.org/lkml/20171127163806.31435-11-mark.rutland@arm.com/
[3]: https://lore.kernel.org/lkml/20171127163806.31435-10-mark.rutland@arm.com/
[4]: https://lkml.org/lkml/2018/10/17/594


Linux (4.20-rc5 based):


Amit Daniel Kachhap (5):
  arm64/kvm: preserve host HCR_EL2 value
  arm64/kvm: context-switch ptrauth registers
  arm64/kvm: add a userspace option to enable pointer authentication
  arm64/kvm: enable pointer authentication cpufeature conditionally
  arm64/kvm: control accessibility of ptrauth key registers

 Documentation/arm64/pointer-authentication.txt | 13 ++--
 Documentation/virtual/kvm/api.txt              |  4 ++
 arch/arm/include/asm/kvm_host.h                |  7 ++
 arch/arm64/include/asm/cpufeature.h            |  6 ++
 arch/arm64/include/asm/kvm_asm.h               |  2 +
 arch/arm64/include/asm/kvm_host.h              | 41 +++++++++++-
 arch/arm64/include/asm/kvm_hyp.h               |  7 ++
 arch/arm64/include/uapi/asm/kvm.h              |  1 +
 arch/arm64/kernel/traps.c                      |  1 +
 arch/arm64/kvm/handle_exit.c                   | 24 ++++---
 arch/arm64/kvm/hyp/Makefile                    |  1 +
 arch/arm64/kvm/hyp/ptrauth-sr.c                | 89 +++++++++++++++++++++++++
 arch/arm64/kvm/hyp/switch.c                    | 19 ++++--
 arch/arm64/kvm/hyp/sysreg-sr.c                 | 11 ++++
 arch/arm64/kvm/hyp/tlb.c                       |  6 +-
 arch/arm64/kvm/reset.c                         |  3 +
 arch/arm64/kvm/sys_regs.c                      | 91 ++++++++++++++++++++------
 include/uapi/linux/kvm.h                       |  1 +
 virt/kvm/arm/arm.c                             |  4 ++
 19 files changed, 289 insertions(+), 42 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c

kvmtool:

Repo: git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git
Amit Daniel Kachhap (1):
  arm/kvm: arm64: Add a vcpu feature for pointer authentication

 arm/aarch32/include/kvm/kvm-cpu-arch.h    | 2 ++
 arm/aarch64/include/asm/kvm.h             | 3 +++
 arm/aarch64/include/kvm/kvm-arch.h        | 1 +
 arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h    | 2 ++
 arm/aarch64/kvm-cpu.c                     | 5 +++++
 arm/include/arm-common/kvm-config-arch.h  | 1 +
 arm/kvm-cpu.c                             | 7 +++++++
 include/linux/kvm.h                       | 1 +
 9 files changed, 25 insertions(+), 1 deletion(-)
-- 
2.7.4



* [PATCH v4 1/6] arm64/kvm: preserve host HCR_EL2 value
  2018-12-18  7:56 [PATCH v4 0/6] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
@ 2018-12-18  7:56 ` Amit Daniel Kachhap
  2019-01-04 17:50   ` James Morse
  2018-12-18  7:56 ` [PATCH v4 2/6] arm64/kvm: context-switch ptrauth registers Amit Daniel Kachhap
                   ` (4 subsequent siblings)
From: Amit Daniel Kachhap @ 2018-12-18  7:56 UTC
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland

When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
is a constant value. This works today, as the host HCR_EL2 value is
always the same, but this will get in the way of supporting extensions
that require HCR_EL2 bits to be set conditionally for the host.

To allow such features to work without KVM having to explicitly handle
every possible host feature combination, this patch has KVM save/restore
the host HCR when switching to/from a guest HCR. The host value of the
register is saved once during per-cpu hypervisor initialisation and is
restored after each switch away from a guest.

To fetch HCR_EL2 during KVM initialisation, a hyp call is made using
kvm_call_hyp; this is needed for the NVHE case.

For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
to toggle the TGE bit with a RMW sequence, as we already do in
__tlb_switch_to_guest_vhe().

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm/include/asm/kvm_host.h   |  2 ++
 arch/arm64/include/asm/kvm_asm.h  |  2 ++
 arch/arm64/include/asm/kvm_host.h | 14 ++++++++++++--
 arch/arm64/kvm/hyp/switch.c       | 15 +++++++++------
 arch/arm64/kvm/hyp/sysreg-sr.c    | 11 +++++++++++
 arch/arm64/kvm/hyp/tlb.c          |  6 +++++-
 virt/kvm/arm/arm.c                |  2 ++
 7 files changed, 43 insertions(+), 9 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 5ca5d9a..0f012c8 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -273,6 +273,8 @@ static inline void __cpu_init_stage2(void)
 	kvm_call_hyp(__init_stage2_translation);
 }
 
+static inline void __cpu_copy_host_registers(void) {}
+
 static inline int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 {
 	return 0;
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index aea01a0..25ac9fa 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -73,6 +73,8 @@ extern void __vgic_v3_init_lrs(void);
 
 extern u32 __kvm_get_mdcr_el2(void);
 
+extern u64 __read_hyp_hcr_el2(void);
+
 /* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
 #define __hyp_this_cpu_ptr(sym)						\
 	({								\
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 52fbc82..1b9eed9 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -196,13 +196,17 @@ enum vcpu_sysreg {
 
 #define NR_COPRO_REGS	(NR_SYS_REGS * 2)
 
+struct kvm_cpu_init_host_regs {
+	u64 hcr_el2;
+};
+
 struct kvm_cpu_context {
 	struct kvm_regs	gp_regs;
 	union {
 		u64 sys_regs[NR_SYS_REGS];
 		u32 copro[NR_COPRO_REGS];
 	};
-
+	struct kvm_cpu_init_host_regs init_regs;
 	struct kvm_vcpu *__hyp_running_vcpu;
 };
 
@@ -211,7 +215,7 @@ typedef struct kvm_cpu_context kvm_cpu_context_t;
 struct kvm_vcpu_arch {
 	struct kvm_cpu_context ctxt;
 
-	/* HYP configuration */
+	/* Guest HYP configuration */
 	u64 hcr_el2;
 	u32 mdcr_el2;
 
@@ -455,6 +459,12 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 
 static inline void __cpu_init_stage2(void) {}
 
+static inline void __cpu_copy_host_registers(void)
+{
+	kvm_cpu_context_t *host_cxt = this_cpu_ptr(&kvm_host_cpu_state);
+	host_cxt->init_regs.hcr_el2 = kvm_call_hyp(__read_hyp_hcr_el2);
+}
+
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index f6e02cc..85a2a5c 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -139,15 +139,15 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 		__activate_traps_nvhe(vcpu);
 }
 
-static void deactivate_traps_vhe(void)
+static void deactivate_traps_vhe(struct kvm_cpu_context *host_ctxt)
 {
 	extern char vectors[];	/* kernel exception vectors */
-	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
+	write_sysreg(host_ctxt->init_regs.hcr_el2, hcr_el2);
 	write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1);
 	write_sysreg(vectors, vbar_el1);
 }
 
-static void __hyp_text __deactivate_traps_nvhe(void)
+static void __hyp_text __deactivate_traps_nvhe(struct kvm_cpu_context *host_ctxt)
 {
 	u64 mdcr_el2 = read_sysreg(mdcr_el2);
 
@@ -157,12 +157,15 @@ static void __hyp_text __deactivate_traps_nvhe(void)
 	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
 
 	write_sysreg(mdcr_el2, mdcr_el2);
-	write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
+	write_sysreg(host_ctxt->init_regs.hcr_el2, hcr_el2);
 	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
 }
 
 static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *host_ctxt;
+
+	host_ctxt = vcpu->arch.host_cpu_context;
 	/*
 	 * If we pended a virtual abort, preserve it until it gets
 	 * cleared. See D1.14.3 (Virtual Interrupts) for details, but
@@ -173,9 +176,9 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
 		vcpu->arch.hcr_el2 = read_sysreg(hcr_el2);
 
 	if (has_vhe())
-		deactivate_traps_vhe();
+		deactivate_traps_vhe(host_ctxt);
 	else
-		__deactivate_traps_nvhe();
+		__deactivate_traps_nvhe(host_ctxt);
 }
 
 void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index 68d6f7c..29ef34d 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -316,3 +316,14 @@ void __hyp_text __kvm_enable_ssbs(void)
 	"msr	sctlr_el2, %0"
 	: "=&r" (tmp) : "L" (SCTLR_ELx_DSSBS));
 }
+
+/**
+ * __read_hyp_hcr_el2 - Returns hcr_el2 register value
+ *
+ * This function acts as a function handler parameter for kvm_call_hyp and
+ * may be called from EL1 exception level to fetch the register value.
+ */
+u64 __hyp_text __read_hyp_hcr_el2(void)
+{
+	return read_sysreg(hcr_el2);
+}
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index 4dbd9c6..ad6282d 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -50,12 +50,16 @@ static hyp_alternate_select(__tlb_switch_to_guest,
 
 static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm)
 {
+	u64 val;
+
 	/*
 	 * We're done with the TLB operation, let's restore the host's
 	 * view of HCR_EL2.
 	 */
 	write_sysreg(0, vttbr_el2);
-	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
+	val = read_sysreg(hcr_el2);
+	val |= HCR_TGE;
+	write_sysreg(val, hcr_el2);
 }
 
 static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm)
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 2377497..feda456 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1318,6 +1318,8 @@ static void cpu_hyp_reinit(void)
 
 	if (vgic_present)
 		kvm_vgic_init_cpu_hardware();
+
+	__cpu_copy_host_registers();
 }
 
 static void _kvm_arch_hardware_enable(void *discard)
-- 
2.7.4



* [PATCH v4 2/6] arm64/kvm: context-switch ptrauth registers
  2018-12-18  7:56 [PATCH v4 0/6] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
  2018-12-18  7:56 ` [PATCH v4 1/6] arm64/kvm: preserve host HCR_EL2 value Amit Daniel Kachhap
@ 2018-12-18  7:56 ` Amit Daniel Kachhap
  2019-01-04 17:57   ` James Morse
  2018-12-18  7:56 ` [PATCH v4 3/6] arm64/kvm: add a userspace option to enable pointer authentication Amit Daniel Kachhap
                   ` (3 subsequent siblings)
From: Amit Daniel Kachhap @ 2018-12-18  7:56 UTC
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

The pointer authentication feature is only enabled when VHE is built
into the kernel and present in the CPU implementation, so only VHE code
paths are modified.

When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys, and enable the feature. The next time the
vcpu is scheduled out/in, we start again.

Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). When the guest is scheduled on a
physical CPU lacking the feature, these attempts will result in an UNDEF
being taken by the guest.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm/include/asm/kvm_host.h     |  1 +
 arch/arm64/include/asm/cpufeature.h |  6 +++
 arch/arm64/include/asm/kvm_host.h   | 24 ++++++++++++
 arch/arm64/include/asm/kvm_hyp.h    |  7 ++++
 arch/arm64/kernel/traps.c           |  1 +
 arch/arm64/kvm/handle_exit.c        | 24 +++++++-----
 arch/arm64/kvm/hyp/Makefile         |  1 +
 arch/arm64/kvm/hyp/ptrauth-sr.c     | 73 +++++++++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/switch.c         |  4 ++
 arch/arm64/kvm/sys_regs.c           | 40 ++++++++++++++++----
 virt/kvm/arm/arm.c                  |  2 +
 11 files changed, 166 insertions(+), 17 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 0f012c8..02d9bfc 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -351,6 +351,7 @@ static inline int kvm_arm_have_ssbd(void)
 
 static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arm_vcpu_ptrauth_config(struct kvm_vcpu *vcpu) {}
 
 #define __KVM_HAVE_ARCH_VM_ALLOC
 struct kvm *kvm_arch_alloc_vm(void);
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 1c8393f..ac7d496 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -526,6 +526,12 @@ static inline bool system_supports_generic_auth(void)
 		cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
 }
 
+static inline bool system_supports_ptrauth(void)
+{
+	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
+		cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
+}
+
 #define ARM64_SSBD_UNKNOWN		-1
 #define ARM64_SSBD_FORCE_DISABLE	0
 #define ARM64_SSBD_KERNEL		1
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 1b9eed9..629712d 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -146,6 +146,18 @@ enum vcpu_sysreg {
 	PMSWINC_EL0,	/* Software Increment Register */
 	PMUSERENR_EL0,	/* User Enable Register */
 
+	/* Pointer Authentication Registers */
+	APIAKEYLO_EL1,
+	APIAKEYHI_EL1,
+	APIBKEYLO_EL1,
+	APIBKEYHI_EL1,
+	APDAKEYLO_EL1,
+	APDAKEYHI_EL1,
+	APDBKEYLO_EL1,
+	APDBKEYHI_EL1,
+	APGAKEYLO_EL1,
+	APGAKEYHI_EL1,
+
 	/* 32bit specific registers. Keep them at the end of the range */
 	DACR32_EL2,	/* Domain Access Control Register */
 	IFSR32_EL2,	/* Instruction Fault Status Register */
@@ -439,6 +451,18 @@ static inline bool kvm_arch_check_sve_has_vhe(void)
 		return true;
 }
 
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+
+static inline void kvm_arm_vcpu_ptrauth_config(struct kvm_vcpu *vcpu)
+{
+	/* Disable ptrauth and use it in a lazy context via traps */
+	if (has_vhe() && system_supports_ptrauth())
+		kvm_arm_vcpu_ptrauth_disable(vcpu);
+}
+
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
+
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 23aca66..ebffd44 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -152,6 +152,13 @@ bool __fpsimd_enabled(void);
 void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
 void deactivate_traps_vhe_put(void);
 
+void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
+			       struct kvm_cpu_context *host_ctxt,
+			       struct kvm_cpu_context *guest_ctxt);
+void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
+			      struct kvm_cpu_context *host_ctxt,
+			      struct kvm_cpu_context *guest_ctxt);
+
 u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
 void __noreturn __hyp_do_panic(unsigned long, ...);
 
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 5f4d9ac..90e18de 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -748,6 +748,7 @@ static const char *esr_class_str[] = {
 	[ESR_ELx_EC_CP14_LS]		= "CP14 LDC/STC",
 	[ESR_ELx_EC_FP_ASIMD]		= "ASIMD",
 	[ESR_ELx_EC_CP10_ID]		= "CP10 MRC/VMRS",
+	[ESR_ELx_EC_PAC]		= "Pointer authentication trap",
 	[ESR_ELx_EC_CP14_64]		= "CP14 MCRR/MRRC",
 	[ESR_ELx_EC_ILL]		= "PSTATE.IL",
 	[ESR_ELx_EC_SVC32]		= "SVC (AArch32)",
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index ab35929..5c47a8f47 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -174,19 +174,25 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
 }
 
 /*
+ * Handle the guest trying to use a ptrauth instruction, or trying to access a
+ * ptrauth register. This trap should not normally occur, as we enable ptrauth
+ * during vcpu scheduling itself, but it is kept here as a safeguard.
+ */
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
+{
+	if (system_supports_ptrauth())
+		kvm_arm_vcpu_ptrauth_enable(vcpu);
+	else
+		kvm_inject_undefined(vcpu);
+}
+
+/*
  * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
- * a NOP).
+ * a NOP), or guest EL1 access to a ptrauth register.
  */
 static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	/*
-	 * We don't currently support ptrauth in a guest, and we mask the ID
-	 * registers to prevent well-behaved guests from trying to make use of
-	 * it.
-	 *
-	 * Inject an UNDEF, as if the feature really isn't present.
-	 */
-	kvm_inject_undefined(vcpu);
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
 	return 1;
 }
 
diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 82d1904..17cec99 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -19,6 +19,7 @@ obj-$(CONFIG_KVM_ARM_HOST) += switch.o
 obj-$(CONFIG_KVM_ARM_HOST) += fpsimd.o
 obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
 obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
+obj-$(CONFIG_KVM_ARM_HOST) += ptrauth-sr.o
 
 # KVM code is run at a different exception code with a different map, so
 # compiler instrumentation that inserts callbacks or checks into the code may
diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
new file mode 100644
index 0000000..1bfaf74
--- /dev/null
+++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
@@ -0,0 +1,73 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * arch/arm64/kvm/hyp/ptrauth-sr.c: Guest/host ptrauth save/restore
+ *
+ * Copyright 2018 Arm Limited
+ * Author: Mark Rutland <mark.rutland@arm.com>
+ *         Amit Daniel Kachhap <amit.kachhap@arm.com>
+ */
+#include <linux/compiler.h>
+#include <linux/kvm_host.h>
+
+#include <asm/cpucaps.h>
+#include <asm/cpufeature.h>
+#include <asm/kvm_asm.h>
+#include <asm/kvm_hyp.h>
+#include <asm/pointer_auth.h>
+
+static __always_inline bool __hyp_text __ptrauth_is_enabled(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.hcr_el2 & (HCR_API | HCR_APK);
+}
+
+#define __ptrauth_save_key(regs, key)						\
+({										\
+	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
+	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
+})
+
+static __always_inline void __hyp_text __ptrauth_save_state(struct kvm_cpu_context *ctxt)
+{
+	__ptrauth_save_key(ctxt->sys_regs, APIA);
+	__ptrauth_save_key(ctxt->sys_regs, APIB);
+	__ptrauth_save_key(ctxt->sys_regs, APDA);
+	__ptrauth_save_key(ctxt->sys_regs, APDB);
+	__ptrauth_save_key(ctxt->sys_regs, APGA);
+}
+
+#define __ptrauth_restore_key(regs, key) 					\
+({										\
+	write_sysreg_s(regs[key ## KEYLO_EL1], SYS_ ## key ## KEYLO_EL1);	\
+	write_sysreg_s(regs[key ## KEYHI_EL1], SYS_ ## key ## KEYHI_EL1);	\
+})
+
+static __always_inline void __hyp_text __ptrauth_restore_state(struct kvm_cpu_context *ctxt)
+{
+	__ptrauth_restore_key(ctxt->sys_regs, APIA);
+	__ptrauth_restore_key(ctxt->sys_regs, APIB);
+	__ptrauth_restore_key(ctxt->sys_regs, APDA);
+	__ptrauth_restore_key(ctxt->sys_regs, APDB);
+	__ptrauth_restore_key(ctxt->sys_regs, APGA);
+}
+
+void __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
+					  struct kvm_cpu_context *host_ctxt,
+					  struct kvm_cpu_context *guest_ctxt)
+{
+	if (!__ptrauth_is_enabled(vcpu))
+		return;
+
+	__ptrauth_save_state(host_ctxt);
+	__ptrauth_restore_state(guest_ctxt);
+}
+
+void __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
+					 struct kvm_cpu_context *host_ctxt,
+					 struct kvm_cpu_context *guest_ctxt)
+{
+	if (!__ptrauth_is_enabled(vcpu))
+		return;
+
+	__ptrauth_save_state(guest_ctxt);
+	__ptrauth_restore_state(host_ctxt);
+}
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 85a2a5c..6c57552 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -508,6 +508,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	sysreg_restore_guest_state_vhe(guest_ctxt);
 	__debug_switch_to_guest(vcpu);
 
+	__ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt);
+
 	__set_guest_arch_workaround_state(vcpu);
 
 	do {
@@ -519,6 +521,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 
 	__set_host_arch_workaround_state(vcpu);
 
+	__ptrauth_switch_to_host(vcpu, host_ctxt, guest_ctxt);
+
 	sysreg_save_guest_state_vhe(guest_ctxt);
 
 	__deactivate_traps(vcpu);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 1ca592d..6af6c7d 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -986,6 +986,32 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
 	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
 
+
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
+}
+
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
+}
+
+static bool trap_ptrauth(struct kvm_vcpu *vcpu,
+			 struct sys_reg_params *p,
+			 const struct sys_reg_desc *rd)
+{
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
+	return false;
+}
+
+#define __PTRAUTH_KEY(k)						\
+	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k }
+
+#define PTRAUTH_KEY(k)							\
+	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
+	__PTRAUTH_KEY(k ## KEYHI_EL1)
+
 static bool access_cntp_tval(struct kvm_vcpu *vcpu,
 		struct sys_reg_params *p,
 		const struct sys_reg_desc *r)
@@ -1040,14 +1066,6 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
 			kvm_debug("SVE unsupported for guests, suppressing\n");
 
 		val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
-	} else if (id == SYS_ID_AA64ISAR1_EL1) {
-		const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
-					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
-					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
-					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
-		if (val & ptrauth_mask)
-			kvm_debug("ptrauth unsupported for guests, suppressing\n");
-		val &= ~ptrauth_mask;
 	} else if (id == SYS_ID_AA64MMFR1_EL1) {
 		if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
 			kvm_debug("LORegions unsupported for guests, suppressing\n");
@@ -1316,6 +1334,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
 	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
 
+	PTRAUTH_KEY(APIA),
+	PTRAUTH_KEY(APIB),
+	PTRAUTH_KEY(APDA),
+	PTRAUTH_KEY(APDB),
+	PTRAUTH_KEY(APGA),
+
 	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
 	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
 	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index feda456..1f3b6ed 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -388,6 +388,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		vcpu_clear_wfe_traps(vcpu);
 	else
 		vcpu_set_wfe_traps(vcpu);
+
+	kvm_arm_vcpu_ptrauth_config(vcpu);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
-- 
2.7.4



* [PATCH v4 3/6] arm64/kvm: add a userspace option to enable pointer authentication
  2018-12-18  7:56 [PATCH v4 0/6] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
  2018-12-18  7:56 ` [PATCH v4 1/6] arm64/kvm: preserve host HCR_EL2 value Amit Daniel Kachhap
  2018-12-18  7:56 ` [PATCH v4 2/6] arm64/kvm: context-switch ptrauth registers Amit Daniel Kachhap
@ 2018-12-18  7:56 ` Amit Daniel Kachhap
  2019-01-04 17:57   ` James Morse
  2018-12-18  7:56 ` [PATCH v4 4/6] arm64/kvm: enable pointer authentication cpufeature conditionally Amit Daniel Kachhap
                   ` (2 subsequent siblings)
From: Amit Daniel Kachhap @ 2018-12-18  7:56 UTC
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland

This feature allows userspace to enable the handling of pointer
authentication instructions in a KVM guest; when it is not set, those
instructions are treated as undefined. It uses the existing vcpu API
KVM_ARM_VCPU_INIT to supply this parameter instead of creating a new API.

A new register is not created to pass this parameter via the
SET/GET_ONE_REG interface, as just a flag (KVM_ARM_VCPU_PTRAUTH)
supplied at vcpu init is enough to select this feature.
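
As an illustration (a userspace sketch, not part of this patch), a VMM
would request the feature roughly as follows, using the existing
KVM_ARM_VCPU_INIT flow together with the capability and flag added here:

#include <sys/ioctl.h>
#include <linux/kvm.h>

int vcpu_init_with_ptrauth(int kvm_fd, int vm_fd, int vcpu_fd)
{
        struct kvm_vcpu_init init;

        if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init) < 0)
                return -1;

        /* KVM_CAP_ARM_PTRAUTH is introduced by this series */
        if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PTRAUTH) > 0)
                init.features[0] |= 1U << KVM_ARM_VCPU_PTRAUTH;

        return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
}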

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 Documentation/arm64/pointer-authentication.txt |  9 +++++----
 Documentation/virtual/kvm/api.txt              |  4 ++++
 arch/arm/include/asm/kvm_host.h                |  4 ++++
 arch/arm64/include/asm/kvm_host.h              |  7 ++++---
 arch/arm64/include/uapi/asm/kvm.h              |  1 +
 arch/arm64/kvm/handle_exit.c                   |  2 +-
 arch/arm64/kvm/hyp/ptrauth-sr.c                | 16 ++++++++++++++++
 arch/arm64/kvm/reset.c                         |  3 +++
 include/uapi/linux/kvm.h                       |  1 +
 9 files changed, 39 insertions(+), 8 deletions(-)

diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
index 5baca42..8c0f338 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -87,7 +87,8 @@ used to get and set the keys for a thread.
 Virtualization
 --------------
 
-Pointer authentication is not currently supported in KVM guests. KVM
-will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
-the feature will result in an UNDEFINED exception being injected into
-the guest.
+Pointer authentication is enabled in a KVM guest when the virtual machine is
+created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting this feature
+to be enabled. Without this flag, pointer authentication is not enabled
+in KVM guests and attempted use of the feature will result in an UNDEFINED
+exception being injected into the guest.
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index cd209f7..e20583a 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2634,6 +2634,10 @@ Possible features:
 	  Depends on KVM_CAP_ARM_PSCI_0_2.
 	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
 	  Depends on KVM_CAP_ARM_PMU_V3.
+	- KVM_ARM_VCPU_PTRAUTH: Emulate Pointer authentication for the CPU.
+	  Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
+	  set, then the KVM guest allows the execution of pointer authentication
+	  instructions; if not set, they are treated as undefined.
 
 
 4.83 KVM_ARM_PREFERRED_TARGET
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 02d9bfc..62a85d9 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -352,6 +352,10 @@ static inline int kvm_arm_have_ssbd(void)
 static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_vcpu_ptrauth_config(struct kvm_vcpu *vcpu) {}
+static inline bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
 
 #define __KVM_HAVE_ARCH_VM_ALLOC
 struct kvm *kvm_arch_alloc_vm(void);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 629712d..f853a95 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -43,7 +43,7 @@
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
-#define KVM_VCPU_MAX_FEATURES 4
+#define KVM_VCPU_MAX_FEATURES 5
 
 #define KVM_REQ_SLEEP \
 	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
@@ -453,14 +453,15 @@ static inline bool kvm_arch_check_sve_has_vhe(void)
 
 void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
 void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu);
 
 static inline void kvm_arm_vcpu_ptrauth_config(struct kvm_vcpu *vcpu)
 {
 	/* Disable ptrauth and use it in a lazy context via traps */
-	if (has_vhe() && system_supports_ptrauth())
+	if (has_vhe() && system_supports_ptrauth()
+			&& kvm_arm_vcpu_ptrauth_allowed(vcpu))
 		kvm_arm_vcpu_ptrauth_disable(vcpu);
 }
-
 void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
 
 static inline void kvm_arch_hardware_unsetup(void) {}
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 97c3478..5f82ca1 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -102,6 +102,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_PTRAUTH		4 /* VCPU uses address authentication */
 
 struct kvm_vcpu_init {
 	__u32 target;
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 5c47a8f47..3394028 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -180,7 +180,7 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
  */
 void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
 {
-	if (system_supports_ptrauth())
+	if (system_supports_ptrauth() && kvm_arm_vcpu_ptrauth_allowed(vcpu))
 		kvm_arm_vcpu_ptrauth_enable(vcpu);
 	else
 		kvm_inject_undefined(vcpu);
diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
index 1bfaf74..03999bb 100644
--- a/arch/arm64/kvm/hyp/ptrauth-sr.c
+++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
@@ -71,3 +71,19 @@ void __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
 	__ptrauth_save_state(guest_ctxt);
 	__ptrauth_restore_state(host_ctxt);
 }
+
+/**
+ * kvm_arm_vcpu_ptrauth_allowed - checks if ptrauth feature is present in vcpu
+ *
+ * @vcpu: The VCPU pointer
+ *
+ * This function will be used to enable/disable ptrauth in guest as configured
+ * by the KVM userspace API.
+ */
+bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu)
+{
+	if (test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features))
+		return true;
+	else
+		return false;
+}
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index b72a3dd..4656070 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -91,6 +91,9 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_VM_IPA_SIZE:
 		r = kvm_ipa_limit;
 		break;
+	case KVM_CAP_ARM_PTRAUTH:
+		r = system_supports_ptrauth();
+		break;
 	default:
 		r = 0;
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 2b7a652..fa48660 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -975,6 +975,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_HYPERV_ENLIGHTENED_VMCS 163
 #define KVM_CAP_EXCEPTION_PAYLOAD 164
 #define KVM_CAP_ARM_VM_IPA_SIZE 165
+#define KVM_CAP_ARM_PTRAUTH 166
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.7.4



* [PATCH v4 4/6] arm64/kvm: enable pointer authentication cpufeature conditionally
  2018-12-18  7:56 [PATCH v4 0/6] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
                   ` (2 preceding siblings ...)
  2018-12-18  7:56 ` [PATCH v4 3/6] arm64/kvm: add a userspace option to enable pointer authentication Amit Daniel Kachhap
@ 2018-12-18  7:56 ` Amit Daniel Kachhap
  2019-01-04 17:58   ` James Morse
  2018-12-18  7:56 ` [PATCH v4 5/6] arm64/kvm: control accessibility of ptrauth key registers Amit Daniel Kachhap
  2018-12-18  7:56 ` [PATCH v4 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication Amit Daniel Kachhap
From: Amit Daniel Kachhap @ 2018-12-18  7:56 UTC
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland

Based on the userspace setting (the KVM_ARM_VCPU_PTRAUTH vcpu feature),
the pointer authentication cpufeature is either exposed to or hidden from
guests via ID register emulation.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 Documentation/arm64/pointer-authentication.txt |  3 +++
 arch/arm64/kvm/sys_regs.c                      | 33 ++++++++++++++++----------
 2 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
index 8c0f338..a65dca2 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -92,3 +92,6 @@ created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting this feature
 to be enabled. Without this flag, pointer authentication is not enabled
 in KVM guests and attempted use of the feature will result in an UNDEFINED
 exception being injected into the guest.
+
+Additionally, when KVM_ARM_VCPU_PTRAUTH is not set then KVM will mask the
+feature bits from ID_AA64ISAR1_EL1.
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 6af6c7d..ce6144a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1055,7 +1055,7 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
 }
 
 /* Read a sanitised cpufeature ID register by sys_reg_desc */
-static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
+static u64 read_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_desc const *r, bool raz)
 {
 	u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
 			 (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
@@ -1066,6 +1066,15 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
 			kvm_debug("SVE unsupported for guests, suppressing\n");
 
 		val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
+	} else if (id == SYS_ID_AA64ISAR1_EL1) {
+		const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
+					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
+					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
+					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
+		if (!kvm_arm_vcpu_ptrauth_allowed(vcpu)) {
+			kvm_debug("ptrauth unsupported for guests, suppressing\n");
+			val &= ~ptrauth_mask;
+		}
 	} else if (id == SYS_ID_AA64MMFR1_EL1) {
 		if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
 			kvm_debug("LORegions unsupported for guests, suppressing\n");
@@ -1086,7 +1095,7 @@ static bool __access_id_reg(struct kvm_vcpu *vcpu,
 	if (p->is_write)
 		return write_to_read_only(vcpu, p, r);
 
-	p->regval = read_id_reg(r, raz);
+	p->regval = read_id_reg(vcpu, r, raz);
 	return true;
 }
 
@@ -1115,17 +1124,17 @@ static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
  * are stored, and for set_id_reg() we don't allow the effective value
  * to be changed.
  */
-static int __get_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
-			bool raz)
+static int __get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+			void __user *uaddr, bool raz)
 {
 	const u64 id = sys_reg_to_index(rd);
-	const u64 val = read_id_reg(rd, raz);
+	const u64 val = read_id_reg(vcpu, rd, raz);
 
 	return reg_to_user(uaddr, &val, id);
 }
 
-static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
-			bool raz)
+static int __set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+			void __user *uaddr, bool raz)
 {
 	const u64 id = sys_reg_to_index(rd);
 	int err;
@@ -1136,7 +1145,7 @@ static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
 		return err;
 
 	/* This is what we mean by invariant: you can't change it. */
-	if (val != read_id_reg(rd, raz))
+	if (val != read_id_reg(vcpu, rd, raz))
 		return -EINVAL;
 
 	return 0;
@@ -1145,25 +1154,25 @@ static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
 static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		      const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __get_id_reg(rd, uaddr, false);
+	return __get_id_reg(vcpu, rd, uaddr, false);
 }
 
 static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		      const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __set_id_reg(rd, uaddr, false);
+	return __set_id_reg(vcpu, rd, uaddr, false);
 }
 
 static int get_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 			  const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __get_id_reg(rd, uaddr, true);
+	return __get_id_reg(vcpu, rd, uaddr, true);
 }
 
 static int set_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 			  const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __set_id_reg(rd, uaddr, true);
+	return __set_id_reg(vcpu, rd, uaddr, true);
 }
 
 /* sys_reg_desc initialiser for known cpufeature ID registers */
-- 
2.7.4



* [PATCH v4 5/6] arm64/kvm: control accessibility of ptrauth key registers
  2018-12-18  7:56 [PATCH v4 0/6] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
                   ` (3 preceding siblings ...)
  2018-12-18  7:56 ` [PATCH v4 4/6] arm64/kvm: enable pointer authentication cpufeature conditionally Amit Daniel Kachhap
@ 2018-12-18  7:56 ` Amit Daniel Kachhap
  2018-12-18  7:56 ` [PATCH v4 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication Amit Daniel Kachhap
From: Amit Daniel Kachhap @ 2018-12-18  7:56 UTC
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland

Based on the user-specified flag KVM_ARM_VCPU_PTRAUTH, the ptrauth key
registers are conditionally made present in the guest system register
list.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 Documentation/arm64/pointer-authentication.txt |  3 +-
 arch/arm64/kvm/sys_regs.c                      | 42 +++++++++++++++++++-------
 2 files changed, 33 insertions(+), 12 deletions(-)

diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
index a65dca2..729055a 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -94,4 +94,5 @@ in KVM guests and attempted use of the feature will result in an UNDEFINED
 exception being injected into the guest.
 
 Additionally, when KVM_ARM_VCPU_PTRAUTH is not set then KVM will mask the
-feature bits from ID_AA64ISAR1_EL1.
+feature bits from ID_AA64ISAR1_EL1 and pointer authentication key registers
+are hidden from userspace.
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index ce6144a..09302b2 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1343,12 +1343,6 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
 	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
 
-	PTRAUTH_KEY(APIA),
-	PTRAUTH_KEY(APIB),
-	PTRAUTH_KEY(APDA),
-	PTRAUTH_KEY(APDB),
-	PTRAUTH_KEY(APGA),
-
 	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
 	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
 	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
@@ -1500,6 +1494,14 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x70 },
 };
 
+static const struct sys_reg_desc ptrauth_reg_descs[] = {
+	PTRAUTH_KEY(APIA),
+	PTRAUTH_KEY(APIB),
+	PTRAUTH_KEY(APDA),
+	PTRAUTH_KEY(APDB),
+	PTRAUTH_KEY(APGA),
+};
+
 static bool trap_dbgidr(struct kvm_vcpu *vcpu,
 			struct sys_reg_params *p,
 			const struct sys_reg_desc *r)
@@ -2100,6 +2102,8 @@ static int emulate_sys_reg(struct kvm_vcpu *vcpu,
 	r = find_reg(params, table, num);
 	if (!r)
 		r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+	if (!r && kvm_arm_vcpu_ptrauth_allowed(vcpu))
+		r = find_reg(params, ptrauth_reg_descs, ARRAY_SIZE(ptrauth_reg_descs));
 
 	if (likely(r)) {
 		perform_access(vcpu, params, r);
@@ -2213,6 +2217,8 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
 	r = find_reg_by_id(id, &params, table, num);
 	if (!r)
 		r = find_reg(&params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+	if (!r && kvm_arm_vcpu_ptrauth_allowed(vcpu))
+		r = find_reg(&params, ptrauth_reg_descs, ARRAY_SIZE(ptrauth_reg_descs));
 
 	/* Not saved in the sys_reg array and not otherwise accessible? */
 	if (r && !(r->reg || r->get_user))
@@ -2494,18 +2500,22 @@ static int walk_one_sys_reg(const struct sys_reg_desc *rd,
 }
 
 /* Assumed ordered tables, see kvm_sys_reg_table_init. */
-static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
+static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind,
+			const struct sys_reg_desc *desc, unsigned int len)
 {
 	const struct sys_reg_desc *i1, *i2, *end1, *end2;
 	unsigned int total = 0;
 	size_t num;
 	int err;
 
+	if (desc == ptrauth_reg_descs && !kvm_arm_vcpu_ptrauth_allowed(vcpu))
+		return total;
+
 	/* We check for duplicates here, to allow arch-specific overrides. */
 	i1 = get_target_table(vcpu->arch.target, true, &num);
 	end1 = i1 + num;
-	i2 = sys_reg_descs;
-	end2 = sys_reg_descs + ARRAY_SIZE(sys_reg_descs);
+	i2 = desc;
+	end2 = desc + len;
 
 	BUG_ON(i1 == end1 || i2 == end2);
 
@@ -2533,7 +2543,10 @@ unsigned long kvm_arm_num_sys_reg_descs(struct kvm_vcpu *vcpu)
 {
 	return ARRAY_SIZE(invariant_sys_regs)
 		+ num_demux_regs()
-		+ walk_sys_regs(vcpu, (u64 __user *)NULL);
+		+ walk_sys_regs(vcpu, (u64 __user *)NULL, sys_reg_descs,
+				ARRAY_SIZE(sys_reg_descs))
+		+ walk_sys_regs(vcpu, (u64 __user *)NULL, ptrauth_reg_descs,
+				ARRAY_SIZE(ptrauth_reg_descs));
 }
 
 int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
@@ -2548,7 +2561,12 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 		uindices++;
 	}
 
-	err = walk_sys_regs(vcpu, uindices);
+	err = walk_sys_regs(vcpu, uindices, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+	if (err < 0)
+		return err;
+	uindices += err;
+
+	err = walk_sys_regs(vcpu, uindices, ptrauth_reg_descs, ARRAY_SIZE(ptrauth_reg_descs));
 	if (err < 0)
 		return err;
 	uindices += err;
@@ -2582,6 +2600,7 @@ void kvm_sys_reg_table_init(void)
 	BUG_ON(check_sysreg_table(cp15_regs, ARRAY_SIZE(cp15_regs)));
 	BUG_ON(check_sysreg_table(cp15_64_regs, ARRAY_SIZE(cp15_64_regs)));
 	BUG_ON(check_sysreg_table(invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs)));
+	BUG_ON(check_sysreg_table(ptrauth_reg_descs, ARRAY_SIZE(ptrauth_reg_descs)));
 
 	/* We abuse the reset function to overwrite the table itself. */
 	for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++)
@@ -2623,6 +2642,7 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
 
 	/* Generic chip reset first (so target could override). */
 	reset_sys_reg_descs(vcpu, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+	reset_sys_reg_descs(vcpu, ptrauth_reg_descs, ARRAY_SIZE(ptrauth_reg_descs));
 
 	table = get_target_table(vcpu->arch.target, true, &num);
 	reset_sys_reg_descs(vcpu, table, num);
-- 
2.7.4



* [PATCH v4 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication
  2018-12-18  7:56 [PATCH v4 0/6] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
                   ` (4 preceding siblings ...)
  2018-12-18  7:56 ` [PATCH v4 5/6] arm64/kvm: control accessibility of ptrauth key registers Amit Daniel Kachhap
@ 2018-12-18  7:56 ` Amit Daniel Kachhap
  2019-01-04 17:59   ` James Morse
From: Amit Daniel Kachhap @ 2018-12-18  7:56 UTC
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland

This is a runtime feature and can be enabled with the --ptrauth option.
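
For example (assuming a host kernel with the ptrauth series applied and an
arm64 guest kernel Image):

  $ lkvm run -k Image --ptrauth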

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arm/aarch32/include/kvm/kvm-cpu-arch.h    | 2 ++
 arm/aarch64/include/asm/kvm.h             | 3 +++
 arm/aarch64/include/kvm/kvm-arch.h        | 1 +
 arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h    | 2 ++
 arm/aarch64/kvm-cpu.c                     | 5 +++++
 arm/include/arm-common/kvm-config-arch.h  | 1 +
 arm/kvm-cpu.c                             | 7 +++++++
 include/linux/kvm.h                       | 1 +
 9 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index d28ea67..5779767 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -13,4 +13,6 @@
 #define ARM_CPU_ID		0, 0, 0
 #define ARM_CPU_ID_MPIDR	5
 
+static inline unsigned int kvm__cpu_ptrauth_get_feature(void) { return 0; }
+
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index c286035..0fd183d 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -98,6 +98,9 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
 
+/* CPU uses address authentication and A key */
+#define KVM_ARM_VCPU_PTRAUTH		4
+
 struct kvm_vcpu_init {
 	__u32 target;
 	__u32 features[7];
diff --git a/arm/aarch64/include/kvm/kvm-arch.h b/arm/aarch64/include/kvm/kvm-arch.h
index 9de623a..bd566cb 100644
--- a/arm/aarch64/include/kvm/kvm-arch.h
+++ b/arm/aarch64/include/kvm/kvm-arch.h
@@ -11,4 +11,5 @@
 
 #include "arm-common/kvm-arch.h"
 
+
 #endif /* KVM__KVM_ARCH_H */
diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
index 04be43d..2074684 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -8,7 +8,9 @@
 			"Create PMUv3 device"),				\
 	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
 			"Specify random seed for Kernel Address Space "	\
-			"Layout Randomization (KASLR)"),
+			"Layout Randomization (KASLR)"),		\
+	OPT_BOOLEAN('\0', "ptrauth", &(cfg)->has_ptrauth,		\
+			"Enable address authentication"),
 
 #include "arm-common/kvm-config-arch.h"
 
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index a9d8563..f7b64b7 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -17,4 +17,6 @@
 #define ARM_CPU_CTRL		3, 0, 1, 0
 #define ARM_CPU_CTRL_SCTLR_EL1	0
 
+unsigned int kvm__cpu_ptrauth_get_feature(void);
+
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/kvm-cpu.c b/arm/aarch64/kvm-cpu.c
index 1b29374..10da2cb 100644
--- a/arm/aarch64/kvm-cpu.c
+++ b/arm/aarch64/kvm-cpu.c
@@ -123,6 +123,11 @@ void kvm_cpu__reset_vcpu(struct kvm_cpu *vcpu)
 		return reset_vcpu_aarch64(vcpu);
 }
 
+unsigned int kvm__cpu_ptrauth_get_feature(void)
+{
+	return (1UL << KVM_ARM_VCPU_PTRAUTH);
+}
+
 int kvm_cpu__get_endianness(struct kvm_cpu *vcpu)
 {
 	struct kvm_one_reg reg;
diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
index 6a196f1..eb872db 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/kvm-config-arch.h
@@ -10,6 +10,7 @@ struct kvm_config_arch {
 	bool		aarch32_guest;
 	bool		has_pmuv3;
 	u64		kaslr_seed;
+	bool		has_ptrauth;
 	enum irqchip_type irqchip;
 };
 
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 7780251..5afd727 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -68,6 +68,13 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
 		vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
 	}
 
+	/* Set KVM_ARM_VCPU_PTRAUTH if available */
+	if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH)) {
+		if (kvm->cfg.arch.has_ptrauth)
+			vcpu_init.features[0] |=
+					kvm__cpu_ptrauth_get_feature();
+	}
+
 	/*
 	 * If the preferred target ioctl is successful then
 	 * use preferred target else try each and every target type
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index f51d508..ffd8f5c 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -883,6 +883,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_PPC_MMU_RADIX 134
 #define KVM_CAP_PPC_MMU_HASH_V3 135
 #define KVM_CAP_IMMEDIATE_EXIT 136
+#define KVM_CAP_ARM_PTRAUTH 166
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.7.4



* Re: [PATCH v4 1/6] arm64/kvm: preserve host HCR_EL2 value
  2018-12-18  7:56 ` [PATCH v4 1/6] arm64/kvm: preserve host HCR_EL2 value Amit Daniel Kachhap
@ 2019-01-04 17:50   ` James Morse
  2019-01-08  5:16     ` Amit Daniel Kachhap
From: James Morse @ 2019-01-04 17:50 UTC
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, Dave Martin,
	linux-kernel

Hi Amit,

On 18/12/2018 07:56, Amit Daniel Kachhap wrote:
> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
> is a constant value. This works today, as the host HCR_EL2 value is
> always the same, but this will get in the way of supporting extensions
> that require HCR_EL2 bits to be set conditionally for the host.
> 
> To allow such features to work without KVM having to explicitly handle
> every possible host feature combination, this patch has KVM save/restore
> the host HCR when switching to/from a guest HCR. The host value of the
> register is saved once during per-cpu hypervisor initialisation and is
> restored after each switch away from a guest.
> 
> To fetch HCR_EL2 during KVM initialisation, a hyp call is made using
> kvm_call_hyp; this is needed for the NVHE case.
> 
> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
> to toggle the TGE bit with a RMW sequence, as we already do in
> __tlb_switch_to_guest_vhe().


> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index aea01a0..25ac9fa 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -73,6 +73,8 @@ extern void __vgic_v3_init_lrs(void);
>  
>  extern u32 __kvm_get_mdcr_el2(void);
>  
> +extern u64 __read_hyp_hcr_el2(void);

How come this isn't __kvm_get_hcr_el2() like mdcr?


> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 52fbc82..1b9eed9 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -196,13 +196,17 @@ enum vcpu_sysreg {
>  
>  #define NR_COPRO_REGS	(NR_SYS_REGS * 2)
>  
> +struct kvm_cpu_init_host_regs {
> +	u64 hcr_el2;
> +};
> +
>  struct kvm_cpu_context {
>  	struct kvm_regs	gp_regs;
>  	union {
>  		u64 sys_regs[NR_SYS_REGS];
>  		u32 copro[NR_COPRO_REGS];
>  	};
> -
> +	struct kvm_cpu_init_host_regs init_regs;
>  	struct kvm_vcpu *__hyp_running_vcpu;
>  };

Hmm, so we grow every vcpu's struct kvm_cpu_context with some host-only registers...


> @@ -211,7 +215,7 @@ typedef struct kvm_cpu_context kvm_cpu_context_t;
>  struct kvm_vcpu_arch {
>  	struct kvm_cpu_context ctxt;
>  
> -	/* HYP configuration */
> +	/* Guest HYP configuration */
>  	u64 hcr_el2;
>  	u32 mdcr_el2;

... but they aren't actually host-only.


I think it would be tidier to move these two into struct kvm_cpu_context (not as
some init_host state), as both host and vcpu's have these values.
You could then add the mdcr_el2 stashing to your __cpu_copy_host_registers()
too. This way they both work in the same way, otherwise one is per-cpu, the
other is in a special bit of only the host's kvm_cpu_context.
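
(A minimal sketch of the layout being suggested here; illustrative only, and
__kvm_get_hcr_el2() is the rename proposed above, which doesn't exist yet:)

	struct kvm_cpu_context {
		struct kvm_regs	gp_regs;
		union {
			u64 sys_regs[NR_SYS_REGS];
			u32 copro[NR_COPRO_REGS];
		};

		/* HYP configuration, valid for both host and vcpu contexts */
		u64 hcr_el2;
		u32 mdcr_el2;

		struct kvm_vcpu *__hyp_running_vcpu;
	};

	static void __cpu_copy_host_registers(struct kvm_cpu_context *host_ctxt)
	{
		/* These registers are only readable at EL2, so go via hyp;
		 * on VHE the hyp call degenerates to a direct function call. */
		host_ctxt->hcr_el2 = kvm_call_hyp(__kvm_get_hcr_el2);
		host_ctxt->mdcr_el2 = kvm_call_hyp(__kvm_get_mdcr_el2);
	}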


> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index f6e02cc..85a2a5c 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -139,15 +139,15 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
>  		__activate_traps_nvhe(vcpu);
>  }
>  
> -static void deactivate_traps_vhe(void)
> +static void deactivate_traps_vhe(struct kvm_cpu_context *host_ctxt)
>  {
>  	extern char vectors[];	/* kernel exception vectors */
> -	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
> +	write_sysreg(host_ctxt->init_regs.hcr_el2, hcr_el2);
>  	write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1);
>  	write_sysreg(vectors, vbar_el1);
>  }
>  
> -static void __hyp_text __deactivate_traps_nvhe(void)
> +static void __hyp_text __deactivate_traps_nvhe(struct kvm_cpu_context *host_ctxt)
>  {
>  	u64 mdcr_el2 = read_sysreg(mdcr_el2);
>  
> @@ -157,12 +157,15 @@ static void __hyp_text __deactivate_traps_nvhe(void)
>  	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
>  
>  	write_sysreg(mdcr_el2, mdcr_el2);

Strangely we try to rebuild the host's mdcr value here. If we had the host mdcr
value in host_ctxt we could restore it directly.


> -	write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
> +	write_sysreg(host_ctxt->init_regs.hcr_el2, hcr_el2);
>  	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
>  }

>  static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
>  {
> +	struct kvm_cpu_context *host_ctxt;
> +
> +	host_ctxt = vcpu->arch.host_cpu_context;
>  	/*
>  	 * If we pended a virtual abort, preserve it until it gets
>  	 * cleared. See D1.14.3 (Virtual Interrupts) for details, but
> @@ -173,9 +176,9 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
>  		vcpu->arch.hcr_el2 = read_sysreg(hcr_el2);
>  
>  	if (has_vhe())
> -		deactivate_traps_vhe();
> +		deactivate_traps_vhe(host_ctxt);
>  	else
> -		__deactivate_traps_nvhe();
> +		__deactivate_traps_nvhe(host_ctxt);
>  }

(Alternatively, each of these deactivate_traps() calls could retrieve the
host_ctxt directly, as it's a per-cpu variable; but as we have the struct vcpu
here, this is probably better.)
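
(For reference, a rough sketch of that per-cpu alternative; this assumes
kvm_host_cpu_state is given a DECLARE_PER_CPU visible to hyp code:)

	struct kvm_cpu_context *host_ctxt;

	/* Works in both VHE and nVHE hyp contexts */
	host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);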


Thanks,

James

* Re: [PATCH v4 2/6] arm64/kvm: context-switch ptrauth registers
  2018-12-18  7:56 ` [PATCH v4 2/6] arm64/kvm: context-switch ptrauth registers Amit Daniel Kachhap
@ 2019-01-04 17:57   ` James Morse
  2019-01-09 10:01     ` Amit Daniel Kachhap
  0 siblings, 1 reply; 18+ messages in thread
From: James Morse @ 2019-01-04 17:57 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Christoffer Dall, Marc Zyngier,
	Catalin Marinas, Will Deacon, Andrew Jones, Dave Martin,
	Ramana Radhakrishnan, kvmarm, Kristina Martsenko, linux-kernel,
	Mark Rutland

Hi Amit,

On 18/12/2018 07:56, Amit Daniel Kachhap wrote:
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work, with
> a semi-lazy context switch of the pointer auth state.
> 
> Pointer authentication feature is only enabled when VHE is built
> into the kernel and present into CPU implementation so only VHE code
> paths are modified.
> 
> When we schedule a vcpu, we disable guest usage of pointer
> authentication instructions and accesses to the keys. While these are
> disabled, we avoid context-switching the keys. When we trap the guest
> trying to use pointer authentication functionality, we change to eagerly
> context-switching the keys, and enable the feature. The next time the
> vcpu is scheduled out/in, we start again.

Why start again?
Taking the trap at all suggests the guest knows about pointer-auth; if it uses
it, it's probably in some context-switch code. It would be good to avoid
trapping that.


> Pointer authentication consists of address authentication and generic
> authentication, and CPUs in a system might have varied support for
> either. Where support for either feature is not uniform, it is hidden
> from guests via ID register emulation, as a result of the cpufeature
> framework in the host.
> 
> Unfortunately, address authentication and generic authentication cannot
> be trapped separately, as the architecture provides a single EL2 trap
> covering both. If we wish to expose one without the other, we cannot
> prevent a (badly-written) guest from intermittently using a feature
> which is not uniformly supported (when scheduled on a physical CPU which
> supports the relevant feature).

Yuck!


> When the guest is scheduled on a
> physical CPU lacking the feature, these attemts will result in an UNDEF

(attempts)

> being taken by the guest.

Can we only expose the feature to a guest if both address and generic
authentication are supported? (and hide it from the id register otherwise).

This avoids having to know if this cpu supports address/generic when the system
doesn't. We would need to scrub the values to avoid guest-values being left
loaded when something else is running.


> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 1c8393f..ac7d496 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -526,6 +526,12 @@ static inline bool system_supports_generic_auth(void)
>  		cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
>  }
>  
> +static inline bool system_supports_ptrauth(void)
> +{
> +	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
> +		cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
> +}

(mainline has a system_supports_address_auth(), I guess this will be folded
around a bit during the rebase).

kvm_supports_ptrauth() that checks the two architected algorithm caps?
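
(A sketch of such a helper, assuming the cap names the mainline cpufeature
series uses for the two architected algorithms:)

	static inline bool kvm_supports_ptrauth(void)
	{
		/* Only advertise ptrauth if both architected algorithms,
		 * address and generic, are present. */
		return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
		       cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) &&
		       cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH);
	}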


> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index ab35929..5c47a8f47 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -174,19 +174,25 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  }
>  
>  /*
> + * Handle the guest trying to use a ptrauth instruction, or trying to access a
> + * ptrauth register. This trap should not occur as we enable ptrauth during
> + * vcpu schedule itself but is anyway kept here for any unfortunate scenario.

... so we don't need this? Or, if it ever runs, it's indicative of a bug?


> + */
> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
> +{
> +	if (system_supports_ptrauth())
> +		kvm_arm_vcpu_ptrauth_enable(vcpu);
> +	else
> +		kvm_inject_undefined(vcpu);
> +}
> +
> +/*
>   * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
> - * a NOP).
> + * a NOP), or guest EL1 access to a ptrauth register.
>   */
>  static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  {
> -	/*
> -	 * We don't currently support ptrauth in a guest, and we mask the ID
> -	 * registers to prevent well-behaved guests from trying to make use of
> -	 * it.
> -	 *
> -	 * Inject an UNDEF, as if the feature really isn't present.
> -	 */
> -	kvm_inject_undefined(vcpu);
> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
>  	return 1;
>  }

> diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
> new file mode 100644
> index 0000000..1bfaf74
> --- /dev/null
> +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
> @@ -0,0 +1,73 @@

> +{
> +	return vcpu->arch.hcr_el2 & (HCR_API | HCR_APK);
> +}

(If you include the IS_ENABLED() in here too, the compiler can work out that
pointer-auth was compiled out and prune all the unused static functions.)
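
(i.e. something like this, so a CONFIG_ARM64_PTR_AUTH=n build can discard the
whole file's static functions; only a sketch:)

	static __always_inline bool __hyp_text
	__ptrauth_is_enabled(struct kvm_vcpu *vcpu)
	{
		return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
		       (vcpu->arch.hcr_el2 & (HCR_API | HCR_APK));
	}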


> +#define __ptrauth_save_key(regs, key)						\
> +({										\
> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
> +})

(shame we can't re-use the arch code's macros for these)


> +static __always_inline void __hyp_text __ptrauth_save_state(struct kvm_cpu_context *ctxt)
> +{
> +	__ptrauth_save_key(ctxt->sys_regs, APIA);
> +	__ptrauth_save_key(ctxt->sys_regs, APIB);
> +	__ptrauth_save_key(ctxt->sys_regs, APDA);
> +	__ptrauth_save_key(ctxt->sys_regs, APDB);

> +	__ptrauth_save_key(ctxt->sys_regs, APGA);

I thought the generic authentication bits had a separate ID-register-field/cap?
But you only checked address-auth above. Is it possible the CPU doesn't have
this register?

(but I think we should only support this if both generic&address are supported)


> +}

> +void __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
> +					  struct kvm_cpu_context *host_ctxt,
> +					  struct kvm_cpu_context *guest_ctxt)
> +{
> +	if (!__ptrauth_is_enabled(vcpu))
> +		return;
> +
> +	__ptrauth_save_state(host_ctxt);
> +	__ptrauth_restore_state(guest_ctxt);
> +}
> +
> +void __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
> +					 struct kvm_cpu_context *host_ctxt,
> +					 struct kvm_cpu_context *guest_ctxt)
> +{
> +	if (!__ptrauth_is_enabled(vcpu))
> +		return;
> +
> +	__ptrauth_save_state(guest_ctxt);
> +	__ptrauth_restore_state(host_ctxt);
> +}


> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index 85a2a5c..6c57552 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -508,6 +508,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>  	sysreg_restore_guest_state_vhe(guest_ctxt);
>  	__debug_switch_to_guest(vcpu);
>  
> +	__ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt);
> +
>  	__set_guest_arch_workaround_state(vcpu);
>  
>  	do {
> @@ -519,6 +521,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>  
>  	__set_host_arch_workaround_state(vcpu);
>  
> +	__ptrauth_switch_to_host(vcpu, host_ctxt, guest_ctxt);
> +
>  	sysreg_save_guest_state_vhe(guest_ctxt);
>  
>  	__deactivate_traps(vcpu);

This is deep in the world-switch code, meaning we save/restore 10 sysregs for
every entry/exit. As the kernel isn't using pointer-auth, wouldn't it be better
to defer the save/restore to the preempt notifier using kvm_arch_vcpu_load()?

I've seen the discussion that the kernel may start using this 'soon':
lore.kernel.org/r/20181112223212.5o4ipc5kt5ziuupt@localhost

(does anyone know when soon is?)

... but it doesn't today, and save/restoring these in C becomes impossible once
it does. The arch code saves/restores in C too, though, so we aren't inventing
a new problem.


probable-tangent: {
If the kernel starts using ptrauth, it will need to switch these registers in
the entry asm, and in __cpu_switch_to()... there will be new asm macros to do this.

Can we make it so that the same series that does that, can easily do KVM too?

If we use the preempt-notifier stuff, the GET/SET_ONE_REG code knows that today
the ptrauth values are always loaded on the cpu, so it doesn't need to look in
the sys_reg_descs[] array.
This means the preempt notifier could save/restore them to a struct ptrauth_keys
in struct kvm_vcpu_arch. This lets us use the arch codes __ptrauth_key_install()
and ptrauth_keys_switch() helpers today, instead of duplicating them because the
layout is slightly different.

Once these become asm macros, the same structure is already in use, so we can
(cough) probably feed it a different base pointer for guest-entry/ret_to_user.
guest-exit would almost be the same as kernel_enter; it's only the
save-guest-values that would be specific to kvm.

(oh, and the GET/SET_ONE_REG code would need to know to look elsewhere for the
guest values, but it looks like your patch 5 is already doing some of this).

}
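
(A rough sketch of the preempt-notifier approach described above; the
ptrauth_keys member in struct kvm_vcpu_arch is an assumption, while
ptrauth_keys_switch() and thread.keys_user come from the arch ptrauth series:)

	#include <asm/pointer_auth.h>

	/* Called from kvm_arch_vcpu_load(): install the guest's keys once
	 * per schedule-in, so the world-switch path never has to touch
	 * the key registers. */
	static void vcpu_load_ptrauth_keys(struct kvm_vcpu *vcpu)
	{
		if (kvm_supports_ptrauth())
			ptrauth_keys_switch(&vcpu->arch.ptrauth_keys);
	}

(kvm_arch_vcpu_put() would do the mirror image: save the live key registers
back into vcpu->arch.ptrauth_keys, then reinstall the host thread's keys_user.)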


> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 1ca592d..6af6c7d 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -986,6 +986,32 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
>  	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
>  
> +
> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
> +{
> +	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
> +}
> +
> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
> +{
> +	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
> +}
> +

> @@ -1040,14 +1066,6 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
>  			kvm_debug("SVE unsupported for guests, suppressing\n");
>  
>  		val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
> -	} else if (id == SYS_ID_AA64ISAR1_EL1) {
> -		const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
> -					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> -					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> -					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);

Don't we still want to remove the imp-def bits? And the APA/GPA bits if we
decide there is incomplete support?
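
(i.e. keep something like the following masking even when ptrauth is exposed,
assuming only the architected algorithms are advertised:)

	/* Hide the IMP DEF algorithm fields from the guest */
	const u64 ptrauth_impdef_mask = (0xfUL << ID_AA64ISAR1_API_SHIFT) |
					(0xfUL << ID_AA64ISAR1_GPI_SHIFT);

	val &= ~ptrauth_impdef_mask;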


> -		if (val & ptrauth_mask)
> -			kvm_debug("ptrauth unsupported for guests, suppressing\n");
> -		val &= ~ptrauth_mask;
>  	} else if (id == SYS_ID_AA64MMFR1_EL1) {
>  		if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
>  			kvm_debug("LORegions unsupported for guests, suppressing\n");


> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index feda456..1f3b6ed 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -388,6 +388,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  		vcpu_clear_wfe_traps(vcpu);
>  	else
>  		vcpu_set_wfe_traps(vcpu);
> +
> +	kvm_arm_vcpu_ptrauth_config(vcpu);

(doesn't this unconfigure ptrauth?!)

>  }

Thanks,

James

* Re: [PATCH v4 3/6] arm64/kvm: add a userspace option to enable pointer authentication
  2018-12-18  7:56 ` [PATCH v4 3/6] arm64/kvm: add a userspace option to enable pointer authentication Amit Daniel Kachhap
@ 2019-01-04 17:57   ` James Morse
  2019-01-09 10:13     ` Amit Daniel Kachhap
  0 siblings, 1 reply; 18+ messages in thread
From: James Morse @ 2019-01-04 17:57 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, Dave Martin,
	linux-kernel

Hi Amit,

On 18/12/2018 07:56, Amit Daniel Kachhap wrote:
> This feature will allow the KVM guest to allow the handling of
> pointer authentication instructions or to treat them as undefined
> if not set. It uses the existing vcpu API KVM_ARM_VCPU_INIT to
> supply this parameter instead of creating a new API.
> 
> A new register is not created to pass this parameter via
> SET/GET_ONE_REG interface as just a flag (KVM_ARM_VCPU_PTRAUTH)
> supplied is enough to select this feature.

What is the motivation for doing this? Doesn't ptrauth 'just' need turning on?
It doesn't need additional setup to be useable, or rely on some qemu support to
work properly. There isn't any hidden state that can't be migrated in the usual way.
Is it just because we don't want to commit to the ABI?


> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
> index 5baca42..8c0f338 100644
> --- a/Documentation/arm64/pointer-authentication.txt
> +++ b/Documentation/arm64/pointer-authentication.txt
> @@ -87,7 +87,8 @@ used to get and set the keys for a thread.
>  Virtualization
>  --------------
>  
> -Pointer authentication is not currently supported in KVM guests. KVM
> -will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
> -the feature will result in an UNDEFINED exception being injected into
> -the guest.
> +Pointer authentication is enabled in KVM guest when virtual machine is
> +created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting this feature
> +to be enabled. Without this flag, pointer authentication is not enabled
> +in KVM guests and attempted use of the feature will result in an UNDEFINED
> +exception being injected into the guest.

> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index cd209f7..e20583a 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -2634,6 +2634,10 @@ Possible features:
>  	  Depends on KVM_CAP_ARM_PSCI_0_2.
>  	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
>  	  Depends on KVM_CAP_ARM_PMU_V3.
> +	- KVM_ARM_VCPU_PTRAUTH: Emulate Pointer authentication for the CPU.
> +	  Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
> +	  set, then the KVM guest allows the execution of pointer authentication
> +	  instructions or treats them as undefined if not set.

> diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
> index 1bfaf74..03999bb 100644
> --- a/arch/arm64/kvm/hyp/ptrauth-sr.c
> +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
> @@ -71,3 +71,19 @@ void __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
>  	__ptrauth_save_state(guest_ctxt);
>  	__ptrauth_restore_state(host_ctxt);
>  }
> +
> +/**
> + * kvm_arm_vcpu_ptrauth_allowed - checks if ptrauth feature is present in vcpu
> + *
> + * @vcpu: The VCPU pointer
> + *
> + * This function will be used to enable/disable ptrauth in guest as configured
> + * by the KVM userspace API.
> + */
> +bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu)
> +{
> +	if (test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features))
> +		return true;
> +	else
> +		return false;
> +}

Can't you return the result of test_bit() directly?
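
(i.e. simply:)

	bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu)
	{
		return test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features);
	}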


Thanks,

James

* Re: [PATCH v4 4/6] arm64/kvm: enable pointer authentication cpufeature conditionally
  2018-12-18  7:56 ` [PATCH v4 4/6] arm64/kvm: enable pointer authentication cpufeature conditionally Amit Daniel Kachhap
@ 2019-01-04 17:58   ` James Morse
  2019-01-09 10:16     ` Amit Daniel Kachhap
  2019-01-28  7:02     ` Amit Daniel Kachhap
  0 siblings, 2 replies; 18+ messages in thread
From: James Morse @ 2019-01-04 17:58 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, Dave Martin,
	linux-kernel

Hi Amit,

On 18/12/2018 07:56, Amit Daniel Kachhap wrote:
> According to userspace settings, pointer authentication cpufeature
> is enabled/disabled from guests.

This reads like the guest is changing something in the host. Isn't this hiding
the id-register values from the guest?


> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 6af6c7d..ce6144a 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1066,6 +1066,15 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
>  			kvm_debug("SVE unsupported for guests, suppressing\n");
>  
>  		val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
> +	} else if (id == SYS_ID_AA64ISAR1_EL1) {
> +		const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
> +					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> +					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> +					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> +		if (!kvm_arm_vcpu_ptrauth_allowed(vcpu)) {
> +			kvm_debug("ptrauth unsupported for guests, suppressing\n");
> +			val &= ~ptrauth_mask;
> +		}

I think this hunk should have been in the previous patch, as otherwise it's a
bisection oddity.

Could you merge this hunk with the previous patch, and move the mechanical bits
that pass vcpu around to a prior preparatory patch.

(I'm still unsure if we need to hide this as a user-controlled policy)


Thanks,

James

* Re: [PATCH v4 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication
  2018-12-18  7:56 ` [PATCH v4 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication Amit Daniel Kachhap
@ 2019-01-04 17:59   ` James Morse
  0 siblings, 0 replies; 18+ messages in thread
From: James Morse @ 2019-01-04 17:59 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, Dave Martin,
	linux-kernel

Hi Amit,

On 18/12/2018 07:56, Amit Daniel Kachhap wrote:
> This is a runtime feature and can be enabled by --ptrauth option.

It took me a little while to realise this was a kvmtool patch. Could you put
something in the patch subject to make this obvious? e.g.:
https://lists.cs.columbia.edu/pipermail/kvmarm/2018-September/thread.html#32734


Thanks,

James

* Re: [PATCH v4 1/6] arm64/kvm: preserve host HCR_EL2 value
  2019-01-04 17:50   ` James Morse
@ 2019-01-08  5:16     ` Amit Daniel Kachhap
  0 siblings, 0 replies; 18+ messages in thread
From: Amit Daniel Kachhap @ 2019-01-08  5:16 UTC (permalink / raw)
  To: James Morse
  Cc: LAK, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, Dave Martin,
	linux-kernel

Hi,

On Sat, Jan 5, 2019 at 12:05 AM James Morse <james.morse@arm.com> wrote:
>
> Hi Amit,
>
> On 18/12/2018 07:56, Amit Daniel Kachhap wrote:
> > When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
> > is a constant value. This works today, as the host HCR_EL2 value is
> > always the same, but this will get in the way of supporting extensions
> > that require HCR_EL2 bits to be set conditionally for the host.
> >
> > To allow such features to work without KVM having to explicitly handle
> > every possible host feature combination, this patch has KVM save/restore
> > the host HCR when switching to/from a guest HCR. The saving of the
> > register is done once during cpu hypervisor initialization state and is
> > just restored after switch from guest.
> >
> > For fetching HCR_EL2 during kvm initilisation, a hyp call is made using
>
> (initialisation)
>
>
> > kvm_call_hyp and is helpful in NHVE case.
> >
> > For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
> > to toggle the TGE bit with a RMW sequence, as we already do in
> > __tlb_switch_to_guest_vhe().
>
>
> > diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> > index aea01a0..25ac9fa 100644
> > --- a/arch/arm64/include/asm/kvm_asm.h
> > +++ b/arch/arm64/include/asm/kvm_asm.h
> > @@ -73,6 +73,8 @@ extern void __vgic_v3_init_lrs(void);
> >
> >  extern u32 __kvm_get_mdcr_el2(void);
> >
> > +extern u64 __read_hyp_hcr_el2(void);
>
> How come this isn't __kvm_get_hcr_el2() like mdcr?
Yes, I will rename it like that.
>
>
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 52fbc82..1b9eed9 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -196,13 +196,17 @@ enum vcpu_sysreg {
> >
> >  #define NR_COPRO_REGS        (NR_SYS_REGS * 2)
> >
> > +struct kvm_cpu_init_host_regs {
> > +     u64 hcr_el2;
> > +};
> > +
> >  struct kvm_cpu_context {
> >       struct kvm_regs gp_regs;
> >       union {
> >               u64 sys_regs[NR_SYS_REGS];
> >               u32 copro[NR_COPRO_REGS];
> >       };
> > -
> > +     struct kvm_cpu_init_host_regs init_regs;
> >       struct kvm_vcpu *__hyp_running_vcpu;
> >  };
>
> Hmm, so we grow every vcpu's struct kvm_cpu_context with some host-only registers...
>
>
> > @@ -211,7 +215,7 @@ typedef struct kvm_cpu_context kvm_cpu_context_t;
> >  struct kvm_vcpu_arch {
> >       struct kvm_cpu_context ctxt;
> >
> > -     /* HYP configuration */
> > +     /* Guest HYP configuration */
> >       u64 hcr_el2;
> >       u32 mdcr_el2;
>
> ... but they aren't actually host-only.
>
>
> I think it would be tidier to move these two into struct kvm_cpu_context (not as
> some init_host state), as both host and vcpu's have these values.
> You could then add the mdcr_el2 stashing to your __cpu_copy_host_registers()
> too. This way they both work in the same way, otherwise one is per-cpu, the
> other is in a special bit of only the host's kvm_cpu_context.
>
Your suggestion looks doable. I will implement it in the next iteration.
>
> > diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> > index f6e02cc..85a2a5c 100644
> > --- a/arch/arm64/kvm/hyp/switch.c
> > +++ b/arch/arm64/kvm/hyp/switch.c
> > @@ -139,15 +139,15 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
> >               __activate_traps_nvhe(vcpu);
> >  }
> >
> > -static void deactivate_traps_vhe(void)
> > +static void deactivate_traps_vhe(struct kvm_cpu_context *host_ctxt)
> >  {
> >       extern char vectors[];  /* kernel exception vectors */
> > -     write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
> > +     write_sysreg(host_ctxt->init_regs.hcr_el2, hcr_el2);
> >       write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1);
> >       write_sysreg(vectors, vbar_el1);
> >  }
> >
> > -static void __hyp_text __deactivate_traps_nvhe(void)
> > +static void __hyp_text __deactivate_traps_nvhe(struct kvm_cpu_context *host_ctxt)
> >  {
> >       u64 mdcr_el2 = read_sysreg(mdcr_el2);
> >
> > @@ -157,12 +157,15 @@ static void __hyp_text __deactivate_traps_nvhe(void)
> >       mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
> >
> >       write_sysreg(mdcr_el2, mdcr_el2);
>
> Strangely we try to rebuild the host's mdcr value here. If we had the host mdcr
> value in host_ctxt we could restore it directly.
Yes. I will check whether the initial host value is the same as the value
calculated here.
>
>
> > -     write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
> > +     write_sysreg(host_ctxt->init_regs.hcr_el2, hcr_el2);
> >       write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
> >  }
>
> >  static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
> >  {
> > +     struct kvm_cpu_context *host_ctxt;
> > +
> > +     host_ctxt = vcpu->arch.host_cpu_context;
> >       /*
> >        * If we pended a virtual abort, preserve it until it gets
> >        * cleared. See D1.14.3 (Virtual Interrupts) for details, but
> > @@ -173,9 +176,9 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
> >               vcpu->arch.hcr_el2 = read_sysreg(hcr_el2);
> >
> >       if (has_vhe())
> > -             deactivate_traps_vhe();
> > +             deactivate_traps_vhe(host_ctxt);
> >       else
> > -             __deactivate_traps_nvhe();
> > +             __deactivate_traps_nvhe(host_ctxt);
> >  }
>
> (Alternatively each of these deactivate_traps() calls could retrieve the
> host_ctxt directly as its a per-cpu variable, but as we have the struct vcpu
> here, this is probably better.)
>
>
> Thanks,
>
> James

//Amit

* Re: [PATCH v4 2/6] arm64/kvm: context-switch ptrauth registers
  2019-01-04 17:57   ` James Morse
@ 2019-01-09 10:01     ` Amit Daniel Kachhap
  0 siblings, 0 replies; 18+ messages in thread
From: Amit Daniel Kachhap @ 2019-01-09 10:01 UTC (permalink / raw)
  To: James Morse
  Cc: LAK, Christoffer Dall, Marc Zyngier, Catalin Marinas,
	Will Deacon, Andrew Jones, Dave Martin, Ramana Radhakrishnan,
	kvmarm, Kristina Martsenko, linux-kernel, Mark Rutland

Hi,

On Sat, Jan 5, 2019 at 12:05 AM James Morse <james.morse@arm.com> wrote:
>
> Hi Amit,
>
> On 18/12/2018 07:56, Amit Daniel Kachhap wrote:
> > When pointer authentication is supported, a guest may wish to use it.
> > This patch adds the necessary KVM infrastructure for this to work, with
> > a semi-lazy context switch of the pointer auth state.
> >
> > Pointer authentication feature is only enabled when VHE is built
> > into the kernel and present into CPU implementation so only VHE code
> > paths are modified.
> >
> > When we schedule a vcpu, we disable guest usage of pointer
> > authentication instructions and accesses to the keys. While these are
> > disabled, we avoid context-switching the keys. When we trap the guest
> > trying to use pointer authentication functionality, we change to eagerly
> > context-switching the keys, and enable the feature. The next time the
> > vcpu is scheduled out/in, we start again.
>
> Why start again?
> Taking the trap at all suggests the guest knows about pointer-auth, if it uses
> it, its probably in some context switch code. It would be good to avoid trapping
> that.
This is a pointer to the earlier discussion:
https://lore.kernel.org/lkml/20180409125818.GE10904@cbox/.
There was some agreement there to flip the ptrauth status on every vcpu
load/unload, as this caters to both ptrauth-enabled and ptrauth-disabled
applications.
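
(A sketch of that flip, building on this patch's enable/disable helpers; the
_reset name follows the rename agreed later in this thread:)

	void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)
	{
		/* Re-arm the ptrauth trap on every vcpu load; the guest's
		 * first use of a ptrauth instruction then traps and
		 * re-enables the feature via kvm_arm_vcpu_ptrauth_trap(). */
		if (kvm_arm_vcpu_ptrauth_allowed(vcpu))
			kvm_arm_vcpu_ptrauth_disable(vcpu);
	}
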
>
>
> > Pointer authentication consists of address authentication and generic
> > authentication, and CPUs in a system might have varied support for
> > either. Where support for either feature is not uniform, it is hidden
> > from guests via ID register emulation, as a result of the cpufeature
> > framework in the host.
> >
> > Unfortunately, address authentication and generic authentication cannot
> > be trapped separately, as the architecture provides a single EL2 trap
> > covering both. If we wish to expose one without the other, we cannot
> > prevent a (badly-written) guest from intermittently using a feature
> > which is not uniformly supported (when scheduled on a physical CPU which
> > supports the relevant feature).
>
> Yuck!
:)
>
>
> > When the guest is scheduled on a
> > physical CPU lacking the feature, these attemts will result in an UNDEF
>
> (attempts)
ok.
>
> > being taken by the guest.
>
> Can we only expose the feature to a guest if both address and generic
> authentication are supported? (and hide it from the id register otherwise).
>
> This avoids having to know if this cpu supports address/generic when the system
> doesn't. We would need to scrub the values to avoid guest-values being left
> loaded when something else is running.
Yes, it can be done. Currently it is done indirectly via the userspace
option. I agree with you on this.
>
>
> > diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> > index 1c8393f..ac7d496 100644
> > --- a/arch/arm64/include/asm/cpufeature.h
> > +++ b/arch/arm64/include/asm/cpufeature.h
> > @@ -526,6 +526,12 @@ static inline bool system_supports_generic_auth(void)
> >               cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
> >  }
> >
> > +static inline bool system_supports_ptrauth(void)
> > +{
> > +     return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
> > +             cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
> > +}
>
> (mainline has a system_supports_address_auth(), I guess this will be folded
> around a bit during the rebase).
>
> system_supports_ptrauth() that checks the two architected algorithm caps?
Yes, I will add it.
>
>
> > diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> > index ab35929..5c47a8f47 100644
> > --- a/arch/arm64/kvm/handle_exit.c
> > +++ b/arch/arm64/kvm/handle_exit.c
> > @@ -174,19 +174,25 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
> >  }
> >
> >  /*
> > + * Handle the guest trying to use a ptrauth instruction, or trying to access a
> > + * ptrauth register. This trap should not occur as we enable ptrauth during
> > + * vcpu schedule itself but is anyway kept here for any unfortunate scenario.
>
> ... so we don't need this? Or if it ever runs its indicative of a bug?
Sorry, this comment is confusing; it is a leftover from the V3 version.
>
>
> > + */
> > +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
> > +{
> > +     if (system_supports_ptrauth())
> > +             kvm_arm_vcpu_ptrauth_enable(vcpu);
> > +     else
> > +             kvm_inject_undefined(vcpu);
> > +}
> > +
> > +/*
> >   * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
> > - * a NOP).
> > + * a NOP), or guest EL1 access to a ptrauth register.
> >   */
> >  static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
> >  {
> > -     /*
> > -      * We don't currently support ptrauth in a guest, and we mask the ID
> > -      * registers to prevent well-behaved guests from trying to make use of
> > -      * it.
> > -      *
> > -      * Inject an UNDEF, as if the feature really isn't present.
> > -      */
> > -     kvm_inject_undefined(vcpu);
> > +     kvm_arm_vcpu_ptrauth_trap(vcpu);
> >       return 1;
> >  }
>
> > diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
> > new file mode 100644
> > index 0000000..1bfaf74
> > --- /dev/null
> > +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
> > @@ -0,0 +1,73 @@
>
> > +{
> > +     return vcpu->arch.hcr_el2 & (HCR_API | HCR_APK);
> > +}
>
> (If you include the the IS_ENABLED() in here too, the compiler can work out
> pointer-auth was compiled out and prune all the unused static functions.)
ok.
>
>
> > +#define __ptrauth_save_key(regs, key)                                                \
> > +({                                                                           \
> > +     regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);       \
> > +     regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);       \
> > +})
>
> (shame we can't re-use the arch code's macros for these)
>
>
> > +static __always_inline void __hyp_text __ptrauth_save_state(struct kvm_cpu_context *ctxt)
> > +{
> > +     __ptrauth_save_key(ctxt->sys_regs, APIA);
> > +     __ptrauth_save_key(ctxt->sys_regs, APIB);
> > +     __ptrauth_save_key(ctxt->sys_regs, APDA);
> > +     __ptrauth_save_key(ctxt->sys_regs, APDB);
>
> > +     __ptrauth_save_key(ctxt->sys_regs, APGA);
>
> I thought the generic authentication bits had a separate ID-register-field/cap?
> But you only checked address-auth above. Is it possible the CPU doesn't have
> this register?
Yes, it may be possible; the generic ptrauth userspace patches by
Kristina have a separate check for the generic key. I will add a
similar check here.
>
> (but I think we should only support this if both generic&address are supported)
>
>
> > +}
>
> > +void __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
> > +                                       struct kvm_cpu_context *host_ctxt,
> > +                                       struct kvm_cpu_context *guest_ctxt)
> > +{
> > +     if (!__ptrauth_is_enabled(vcpu))
> > +             return;
> > +
> > +     __ptrauth_save_state(host_ctxt);
> > +     __ptrauth_restore_state(guest_ctxt);
> > +}
> > +
> > +void __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
> > +                                      struct kvm_cpu_context *host_ctxt,
> > +                                      struct kvm_cpu_context *guest_ctxt)
> > +{
> > +     if (!__ptrauth_is_enabled(vcpu))
> > +             return;
> > +
> > +     __ptrauth_save_state(guest_ctxt);
> > +     __ptrauth_restore_state(host_ctxt);
> > +}
>
>
> > diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> > index 85a2a5c..6c57552 100644
> > --- a/arch/arm64/kvm/hyp/switch.c
> > +++ b/arch/arm64/kvm/hyp/switch.c
> > @@ -508,6 +508,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
> >       sysreg_restore_guest_state_vhe(guest_ctxt);
> >       __debug_switch_to_guest(vcpu);
> >
> > +     __ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt);
> > +
> >       __set_guest_arch_workaround_state(vcpu);
> >
> >       do {
> > @@ -519,6 +521,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
> >
> >       __set_host_arch_workaround_state(vcpu);
> >
> > +     __ptrauth_switch_to_host(vcpu, host_ctxt, guest_ctxt);
> > +
> >       sysreg_save_guest_state_vhe(guest_ctxt);
> >
> >       __deactivate_traps(vcpu);
>
> This is deep in the world-switch code, meaning we save/restore 10 sysregs for
> every entry/exit. As the kernel isn't using pointer-auth, wouldn't it be better
> to defer the save/restore to the preempt notifier using kvm_arch_vcpu_load()?
>
> I've seen the discussion that the kernel may start using this 'soon':
> lore.kernel.org/r/20181112223212.5o4ipc5kt5ziuupt@localhost
>
> (does anyone know when soon is?)
One version is already posted and the next version is being worked on.
>
> ... but it doesn't today, and save/restoring these in C becomes impossible when
> that happens, which the arch code is doing too, so we aren't inventing a new
> problem.
Saving and restoring the keys in a C function is still possible with a GCC
function attribute such as sign-return-address=none. I have left the switch
function as it is for now; only this change would be needed to enable it for
kernel ptrauth.
>
>
> probable-tangent: {
> If the kernel starts using ptrauth, it will need to switch these registers in
> the entry asm, and in __cpu_switch_to()... there will be new asm macro's to do this.
>
> Can we make it so that the same series that does that, can easily do KVM too?
Yes it should be possible.
>
> If we use the preempt-notifier stuff the GET/SET_ONE_REG code knows that today
> the ptrauth values are always loaded on the cpu, it doesn't need to look in the
> sys_reg_descs[] array.
> This means the preempt notifier could save/restore them to a struct ptrauth_keys
> in struct kvm_vcpu_arch. This lets us use the arch codes __ptrauth_key_install()
> and ptrauth_keys_switch() helpers today, instead of duplicating them because the
> layout is slightly different.
Since a vcpu runs as a thread, keys are already allocated for it by the
current user ptrauth patches, and in the guest switch I can see that the
keys are present in the guest context. Do you see an advantage in
allocating the keys in the way described above?
>
> Once these become asm macros, the same structure is already in use, so we can
> (cough) probably feed it a different base pointer for guest-entry/ret_to_user.
> guest-exit would almost be the same as kernel_enter, its only the
> save-guest-values that would be specific to kvm.
Yes, reusing it will be better.
>
> (oh, and the GET/SET_ONE_REG code would need to know to look elsewhere for the
> guest values, but it looks like your patch 5 is already doing some of this).
ok.
>
> }
>
>
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 1ca592d..6af6c7d 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -986,6 +986,32 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> >       { SYS_DESC(SYS_PMEVTYPERn_EL0(n)),                                      \
> >         access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
> >
> > +
> > +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
> > +{
> > +     vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
> > +}
> > +
> > +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
> > +{
> > +     vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
> > +}
> > +
>
> > @@ -1040,14 +1066,6 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
> >                       kvm_debug("SVE unsupported for guests, suppressing\n");
> >
> >               val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
> > -     } else if (id == SYS_ID_AA64ISAR1_EL1) {
> > -             const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
> > -                                      (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> > -                                      (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> > -                                      (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
>
> Don't we still want to remove the imp-def bits? and the APA/GPA bits if we
> decide there is incomplete support.
I think it was decided to enable all the features even when they are not
supported. I cannot find the link, though.
>
>
> > -             if (val & ptrauth_mask)
> > -                     kvm_debug("ptrauth unsupported for guests, suppressing\n");
> > -             val &= ~ptrauth_mask;
> >       } else if (id == SYS_ID_AA64MMFR1_EL1) {
> >               if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
> >                       kvm_debug("LORegions unsupported for guests, suppressing\n");
>
>
> > diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> > index feda456..1f3b6ed 100644
> > --- a/virt/kvm/arm/arm.c
> > +++ b/virt/kvm/arm/arm.c
> > @@ -388,6 +388,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> >               vcpu_clear_wfe_traps(vcpu);
> >       else
> >               vcpu_set_wfe_traps(vcpu);
> > +
> > +     kvm_arm_vcpu_ptrauth_config(vcpu);
>
> (doesn't this unconfigure ptrauth?!)
OK, I will change the name to something like kvm_arm_vcpu_ptrauth_reset.
>
> >  }
>
> Thanks,
>
> James

//Amit

* Re: [PATCH v4 3/6] arm64/kvm: add a userspace option to enable pointer authentication
  2019-01-04 17:57   ` James Morse
@ 2019-01-09 10:13     ` Amit Daniel Kachhap
  2019-01-31 16:19       ` James Morse
  0 siblings, 1 reply; 18+ messages in thread
From: Amit Daniel Kachhap @ 2019-01-09 10:13 UTC (permalink / raw)
  To: James Morse
  Cc: LAK, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, Dave Martin,
	linux-kernel

Hi,

On Sat, Jan 5, 2019 at 12:05 AM James Morse <james.morse@arm.com> wrote:
>
> Hi Amit,
>
> On 18/12/2018 07:56, Amit Daniel Kachhap wrote:
> > This feature will allow the KVM guest to allow the handling of
> > pointer authentication instructions or to treat them as undefined
> > if not set. It uses the existing vcpu API KVM_ARM_VCPU_INIT to
> > supply this parameter instead of creating a new API.
> >
> > A new register is not created to pass this parameter via
> > SET/GET_ONE_REG interface as just a flag (KVM_ARM_VCPU_PTRAUTH)
> > supplied is enough to select this feature.
>
> What is the motivation for doing this? Doesn't ptrauth 'just' need turning on?
> It doesn't need additional setup to be useable, or rely on some qemu support to
> work properly. There isn't any hidden state that can't be migrated in the usual way.
> Is it just because we don't want to commit to the ABI?
This allows migration of a guest to systems that do not support pointer
authentication, and it hides the extra ptrauth registers.
Basically this suggestion was given by Christoffer
(https://lore.kernel.org/lkml/20180206123847.GY21802@cbox/).
I don't have a strong reservation about having this option; it can be
dropped if it doesn't make sense.
>
>
> > diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
> > index 5baca42..8c0f338 100644
> > --- a/Documentation/arm64/pointer-authentication.txt
> > +++ b/Documentation/arm64/pointer-authentication.txt
> > @@ -87,7 +87,8 @@ used to get and set the keys for a thread.
> >  Virtualization
> >  --------------
> >
> > -Pointer authentication is not currently supported in KVM guests. KVM
> > -will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
> > -the feature will result in an UNDEFINED exception being injected into
> > -the guest.
> > +Pointer authentication is enabled in KVM guest when virtual machine is
> > +created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting this feature
> > +to be enabled. Without this flag, pointer authentication is not enabled
> > +in KVM guests and attempted use of the feature will result in an UNDEFINED
> > +exception being injected into the guest.
>
> > diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> > index cd209f7..e20583a 100644
> > --- a/Documentation/virtual/kvm/api.txt
> > +++ b/Documentation/virtual/kvm/api.txt
> > @@ -2634,6 +2634,10 @@ Possible features:
> >         Depends on KVM_CAP_ARM_PSCI_0_2.
> >       - KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
> >         Depends on KVM_CAP_ARM_PMU_V3.
> > +     - KVM_ARM_VCPU_PTRAUTH: Emulate Pointer authentication for the CPU.
> > +       Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
> > +       set, then the KVM guest allows the execution of pointer authentication
> > +       instructions or treats them as undefined if not set.
>
> > diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
> > index 1bfaf74..03999bb 100644
> > --- a/arch/arm64/kvm/hyp/ptrauth-sr.c
> > +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
> > @@ -71,3 +71,19 @@ void __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
> >       __ptrauth_save_state(guest_ctxt);
> >       __ptrauth_restore_state(host_ctxt);
> >  }
> > +
> > +/**
> > + * kvm_arm_vcpu_ptrauth_allowed - checks if ptrauth feature is present in vcpu
> > + *
> > + * @vcpu: The VCPU pointer
> > + *
> > + * This function will be used to enable/disable ptrauth in guest as configured
> > + * by the KVM userspace API.
> > + */
> > +bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu)
> > +{
> > +     if (test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features))
> > +             return true;
> > +     else
> > +             return false;
> > +}
>
> Can't you return the result of test_bit() directly?
ok.
>
>
> Thanks,
>
> James

//Amit

* Re: [PATCH v4 4/6] arm64/kvm: enable pointer authentication cpufeature conditionally
  2019-01-04 17:58   ` James Morse
@ 2019-01-09 10:16     ` Amit Daniel Kachhap
  2019-01-28  7:02     ` Amit Daniel Kachhap
  1 sibling, 0 replies; 18+ messages in thread
From: Amit Daniel Kachhap @ 2019-01-09 10:16 UTC (permalink / raw)
  To: James Morse
  Cc: LAK, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, Dave Martin,
	linux-kernel

Hi,

On Fri, Jan 4, 2019 at 11:32 PM James Morse <james.morse@arm.com> wrote:
>
> Hi Amit,
>
> On 18/12/2018 07:56, Amit Daniel Kachhap wrote:
> > According to userspace settings, pointer authentication cpufeature
> > is enabled/disabled from guests.
>
> This reads like the guest is changing something in the host. Isn't this hiding
> the id-register values from the guest?
>
>
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 6af6c7d..ce6144a 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -1066,6 +1066,15 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
> >                       kvm_debug("SVE unsupported for guests, suppressing\n");
> >
> >               val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
> > +     } else if (id == SYS_ID_AA64ISAR1_EL1) {
> > +             const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
> > +                                      (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> > +                                      (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> > +                                      (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> > +             if (!kvm_arm_vcpu_ptrauth_allowed(vcpu)) {
> > +                     kvm_debug("ptrauth unsupported for guests, suppressing\n");
> > +                     val &= ~ptrauth_mask;
> > +             }
>
> I think this hunk should have been in the previous patch as otherwise its a
> bisection oddity.
>
> Could you merge this hunk with the previous patch, and move the mechanical bits
> that pass vcpu around to a prior preparatory patch.
Yes will do.
>
> (I'm still unsure if we need to hide this as a user-controlled policy)
>
>
> Thanks,
>
> James
//Amit

* Re: [PATCH v4 4/6] arm64/kvm: enable pointer authentication cpufeature conditionally
  2019-01-04 17:58   ` James Morse
  2019-01-09 10:16     ` Amit Daniel Kachhap
@ 2019-01-28  7:02     ` Amit Daniel Kachhap
  1 sibling, 0 replies; 18+ messages in thread
From: Amit Daniel Kachhap @ 2019-01-28  7:02 UTC (permalink / raw)
  To: James Morse
  Cc: LAK, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, Dave Martin,
	linux-kernel

Hi James,
On Fri, Jan 4, 2019 at 11:32 PM James Morse <james.morse@arm.com> wrote:
>
> Hi Amit,
>
> On 18/12/2018 07:56, Amit Daniel Kachhap wrote:
> > According to userspace settings, pointer authentication cpufeature
> > is enabled/disabled from guests.
>
> This reads like the guest is changing something in the host. Isn't this hiding
> the id-register values from the guest?
I dropped this patch altogether in the V5 series; now only the key
registers are masked if userspace disables the feature.

Thanks,
Amit Daniel
>
>
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 6af6c7d..ce6144a 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -1066,6 +1066,15 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
> >                       kvm_debug("SVE unsupported for guests, suppressing\n");
> >
> >               val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
> > +     } else if (id == SYS_ID_AA64ISAR1_EL1) {
> > +             const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
> > +                                      (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> > +                                      (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> > +                                      (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> > +             if (!kvm_arm_vcpu_ptrauth_allowed(vcpu)) {
> > +                     kvm_debug("ptrauth unsupported for guests, suppressing\n");
> > +                     val &= ~ptrauth_mask;
> > +             }
>
> I think this hunk should have been in the previous patch as otherwise its a
> bisection oddity.
>
> Could you merge this hunk with the previous patch, and move the mechanical bits
> that pass vcpu around to a prior preparatory patch.
>
> (I'm still unsure if we need to hide this as a user-controlled policy)
>
>
> Thanks,
>
> James

* Re: [PATCH v4 3/6] arm64/kvm: add a userspace option to enable pointer authentication
  2019-01-09 10:13     ` Amit Daniel Kachhap
@ 2019-01-31 16:19       ` James Morse
  0 siblings, 0 replies; 18+ messages in thread
From: James Morse @ 2019-01-31 16:19 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: LAK, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, Dave Martin,
	linux-kernel

Hi Amit,

On 09/01/2019 10:13, Amit Daniel Kachhap wrote:
> On Sat, Jan 5, 2019 at 12:05 AM James Morse <james.morse@arm.com> wrote:
>> On 18/12/2018 07:56, Amit Daniel Kachhap wrote:
>>> This feature will allow the KVM guest to allow the handling of
>>> pointer authentication instructions or to treat them as undefined
>>> if not set. It uses the existing vcpu API KVM_ARM_VCPU_INIT to
>>> supply this parameter instead of creating a new API.
>>>
>>> A new register is not created to pass this parameter via
>>> SET/GET_ONE_REG interface as just a flag (KVM_ARM_VCPU_PTRAUTH)
>>> supplied is enough to select this feature.
>>
>> What is the motivation for doing this? Doesn't ptrauth 'just' need turning on?
>> It doesn't need additional setup to be useable, or rely on some qemu support to
>> work properly. There isn't any hidden state that can't be migrated in the usual way.
>> Is it just because we don't want to commit to the ABI?

> This allows migration of guest to non pointer authenticated supported
> systems and hides the extra ptrauth registers.

The MIDR already has to match, so the hardware must be the same. I guess this
lets us hide the new feature so old guests can migrate to a new kernel without
a write to the ID registers failing.


> Basically this suggestion was given by Christoffer
> (https://lore.kernel.org/lkml/20180206123847.GY21802@cbox/).

Ah, Christoffer asked for it, that's reason enough!


Thanks,

James

end of thread (newest: 2019-01-31 16:19 UTC)

Thread overview: 18+ messages
2018-12-18  7:56 [PATCH v4 0/6] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
2018-12-18  7:56 ` [PATCH v4 1/6] arm64/kvm: preserve host HCR_EL2 value Amit Daniel Kachhap
2019-01-04 17:50   ` James Morse
2019-01-08  5:16     ` Amit Daniel Kachhap
2018-12-18  7:56 ` [PATCH v4 2/6] arm64/kvm: context-switch ptrauth registers Amit Daniel Kachhap
2019-01-04 17:57   ` James Morse
2019-01-09 10:01     ` Amit Daniel Kachhap
2018-12-18  7:56 ` [PATCH v4 3/6] arm64/kvm: add a userspace option to enable pointer authentication Amit Daniel Kachhap
2019-01-04 17:57   ` James Morse
2019-01-09 10:13     ` Amit Daniel Kachhap
2019-01-31 16:19       ` James Morse
2018-12-18  7:56 ` [PATCH v4 4/6] arm64/kvm: enable pointer authentication cpufeature conditionally Amit Daniel Kachhap
2019-01-04 17:58   ` James Morse
2019-01-09 10:16     ` Amit Daniel Kachhap
2019-01-28  7:02     ` Amit Daniel Kachhap
2018-12-18  7:56 ` [PATCH v4 5/6] arm64/kvm: control accessibility of ptrauth key registers Amit Daniel Kachhap
2018-12-18  7:56 ` [PATCH v4 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication Amit Daniel Kachhap
2019-01-04 17:59   ` James Morse
