* [PATCHv2 00/12] ARMv8.3 pointer authentication userspace support
@ 2017-11-27 16:37 Mark Rutland
  2017-11-27 16:37 ` [PATCHv2 01/12] asm-generic: mm_hooks: allow hooks to be overridden individually Mark Rutland
                   ` (11 more replies)
  0 siblings, 12 replies; 26+ messages in thread
From: Mark Rutland @ 2017-11-27 16:37 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, cdall, kvmarm, linux-arch, marc.zyngier,
	mark.rutland, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

This series adds support for the ARMv8.3 pointer authentication extension,
enabling userspace return address protection with recent versions of GCC.

Since RFC [1]:
* Make the KVM context switch semi-lazy
* Rebase to v4.13-rc1
* Improve pointer authentication documentation
* Add hwcap documentation
* Various minor cleanups

Since v1 [2]:
* Rebase to v4.15-rc1
* Settle on per-process keys
* Strip PACs when unwinding userspace
* Don't expose an XPAC hwcap (this is implied by ID registers)
* Leave APIB, APDA, APDB, and APGA keys unsupported for now
* Support IMP DEF algorithms
* Rely on KVM ID register emulation
* Various cleanups

While there are use-cases for keys other than APIAKey, the only software that
I'm aware of with pointer authentication support is GCC, which only makes use
of APIAKey. I'm happy to add support for other keys as users appear.

I've pushed the series to the arm64/pointer-auth branch [3] of my linux tree.
I've also pushed out a necessary bootwrapper patch to the pointer-auth branch
[4] of my bootwrapper repo.


Extension Overview
==================

The ARMv8.3 pointer authentication extension adds functionality to detect
modification of pointer values, mitigating certain classes of attack such as
stack smashing, and making return-oriented programming attacks harder.

The extension introduces the concept of a pointer authentication code (PAC),
which is stored in some upper bits of pointers. Each PAC is derived from the
original pointer, another 64-bit value (e.g. the stack pointer), and a secret
128-bit key.
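
Conceptually, the PAC behaves like a keyed MAC over the pointer and the
modifier, truncated into the otherwise-unused upper pointer bits. A rough
sketch (the helper name and PAC_BITS are invented here; the architected
algorithm is QARMA-based, and the exact bit positions depend on the VA
configuration):

	/* Illustrative pseudocode only, not the architected definition. */
	u64 insert_pac(u64 ptr, u64 modifier, u64 key_lo, u64 key_hi)
	{
		u64 mac = keyed_mac(ptr, modifier, key_lo, key_hi);

		return (ptr & ~PAC_BITS) | (mac & PAC_BITS);
	}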

New instructions are added which can be used to:

* Insert a PAC into a pointer
* Strip a PAC from a pointer
* Authenticate and strip a PAC from a pointer

If authentication succeeds, the code is removed, yielding the original pointer.
If authentication fails, bits are set in the pointer such that it is guaranteed
to cause a fault if used.

These instructions can make use of four keys:

* APIAKey (A.K.A. Instruction A key)
* APIBKey (A.K.A. Instruction B key)
* APDAKey (A.K.A. Data A key)
* APDBKey (A.K.A. Data B key)

A subset of these instruction encodings have been allocated from the HINT
space, and will operate as NOPs on any ARMv8-A parts which do not feature the
extension (or if purposefully disabled by the kernel). Software using only this
subset of the instructions should function correctly on all ARMv8-A parts.
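
For instance, building with GCC's -msign-return-address option emits
prologues and epilogues along the following lines, using only HINT-space
encodings (an illustrative listing, not actual compiler output):

	foo:
		paciasp			// sign LR with APIAKey, SP as modifier
		stp	x29, x30, [sp, #-16]!
		...			// function body
		ldp	x29, x30, [sp], #16
		autiasp			// authenticate LR; a bad PAC poisons it
		ret			// faults if authentication failed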

Additionally, instructions are added to authenticate small blocks of memory in
similar fashion, using APGAKey (A.K.A. Generic key).


This Series
===========

This series enables the use of instructions using APIAKey, which is initialised
and maintained per-process (shared by all threads). This series does not add
support for APIBKey, APDAKey, APDBKey, nor APGAKey.

I've given this some basic testing with a homebrew test suite. Ideally,
we'd add some tests to the kernel source tree.
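
For reference, a minimal userspace round-trip check looks something like the
below. This is just a sketch (not the homebrew suite itself), and assumes a
toolchain accepting -march=armv8.3-a and hardware implementing the extension;
note that pacia/autia are not HINT-space encodings:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t ptr = (uint64_t)main;
		uint64_t p = ptr;

		/* Sign with APIAKey (zero modifier), then authenticate-and-strip. */
		asm volatile("pacia %0, %1" : "+r" (p) : "r" (0UL));
		asm volatile("autia %0, %1" : "+r" (p) : "r" (0UL));

		printf("round-trip %s\n", p == ptr ? "ok" : "broken");
		return 0;
	}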

I've added some basic KVM support, which relies on the recently introduced ID
register emulation to hide mismatched support from guests.

Thanks,
Mark.

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2017-April/498941.html
[2] https://lkml.kernel.org/r/1500480092-28480-1-git-send-email-mark.rutland@arm.com
[3] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git arm64/pointer-auth
[4] git://git.kernel.org/pub/scm/linux/kernel/git/mark/boot-wrapper-aarch64.git pointer-auth


Mark Rutland (12):
  asm-generic: mm_hooks: allow hooks to be overridden individually
  arm64: add pointer authentication register bits
  arm64/cpufeature: add ARMv8.3 id_aa64isar1 bits
  arm64/cpufeature: detect pointer authentication
  arm64: Don't trap host pointer auth use to EL2
  arm64: add basic pointer authentication support
  arm64: expose user PAC bit positions via ptrace
  arm64: perf: strip PAC when unwinding userspace
  arm64/kvm: preserve host HCR_EL2 value
  arm64/kvm: context-switch ptrauth registers
  arm64: enable pointer authentication
  arm64: docs: document pointer authentication

 Documentation/arm64/booting.txt                |   8 ++
 Documentation/arm64/elf_hwcaps.txt             |   6 ++
 Documentation/arm64/pointer-authentication.txt |  85 ++++++++++++++++++++
 arch/arm64/Kconfig                             |  23 ++++++
 arch/arm64/include/asm/cpucaps.h               |   8 +-
 arch/arm64/include/asm/esr.h                   |   3 +-
 arch/arm64/include/asm/kvm_arm.h               |   3 +-
 arch/arm64/include/asm/kvm_host.h              |  28 ++++++-
 arch/arm64/include/asm/kvm_hyp.h               |   7 ++
 arch/arm64/include/asm/mmu.h                   |   5 ++
 arch/arm64/include/asm/mmu_context.h           |  25 +++++-
 arch/arm64/include/asm/pointer_auth.h          | 104 +++++++++++++++++++++++++
 arch/arm64/include/asm/sysreg.h                |  30 +++++++
 arch/arm64/include/uapi/asm/hwcap.h            |   1 +
 arch/arm64/include/uapi/asm/ptrace.h           |   7 ++
 arch/arm64/kernel/cpufeature.c                 | 103 ++++++++++++++++++++++++
 arch/arm64/kernel/cpuinfo.c                    |   1 +
 arch/arm64/kernel/head.S                       |  19 ++++-
 arch/arm64/kernel/perf_callchain.c             |   5 +-
 arch/arm64/kernel/ptrace.c                     |  38 +++++++++
 arch/arm64/kvm/handle_exit.c                   |  21 +++++
 arch/arm64/kvm/hyp/Makefile                    |   1 +
 arch/arm64/kvm/hyp/ptrauth-sr.c                |  91 ++++++++++++++++++++++
 arch/arm64/kvm/hyp/switch.c                    |   9 ++-
 arch/arm64/kvm/hyp/tlb.c                       |   6 +-
 arch/arm64/kvm/sys_regs.c                      |  32 ++++++++
 include/asm-generic/mm_hooks.h                 |  11 +++
 include/uapi/linux/elf.h                       |   1 +
 28 files changed, 668 insertions(+), 13 deletions(-)
 create mode 100644 Documentation/arm64/pointer-authentication.txt
 create mode 100644 arch/arm64/include/asm/pointer_auth.h
 create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c

-- 
2.11.0

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCHv2 01/12] asm-generic: mm_hooks: allow hooks to be overridden individually
  2017-11-27 16:37 [PATCHv2 00/12] ARMv8.3 pointer authentication userspace support Mark Rutland
@ 2017-11-27 16:37 ` Mark Rutland
  2017-11-27 16:37 ` [PATCHv2 02/12] arm64: add pointer authentication register bits Mark Rutland
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2017-11-27 16:37 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, cdall, kvmarm, linux-arch, marc.zyngier,
	mark.rutland, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

Currently, an architecture must either implement all of the mm hooks
itself, or use all of those provided by the asm-generic implementation.
When an architecture only needs to override a single hook, it must copy
the stub implementations from the asm-generic version.

To avoid this repetition, allow each hook to be overridden individually,
by placing each under an #ifndef block. As architectures providing their
own hooks can't include this file today, this shouldn't adversely affect
any existing hooks.
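
For example, an architecture wanting to override only arch_dup_mmap can
define it (and its override marker) before pulling in the generic stubs, as
a later patch in this series does for arm64:

	/* in <asm/mmu_context.h> */
	static inline void arch_dup_mmap(struct mm_struct *oldmm,
					 struct mm_struct *mm)
	{
		/* architecture-specific work here */
	}
	#define arch_dup_mmap arch_dup_mmap

	#include <asm-generic/mm_hooks.h>	/* supplies the remaining stubs */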

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: linux-arch@vger.kernel.org
---
 include/asm-generic/mm_hooks.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/asm-generic/mm_hooks.h b/include/asm-generic/mm_hooks.h
index ea189d88a3cc..34edf0850d49 100644
--- a/include/asm-generic/mm_hooks.h
+++ b/include/asm-generic/mm_hooks.h
@@ -7,30 +7,41 @@
 #ifndef _ASM_GENERIC_MM_HOOKS_H
 #define _ASM_GENERIC_MM_HOOKS_H
 
+#ifndef arch_dup_mmap
 static inline void arch_dup_mmap(struct mm_struct *oldmm,
 				 struct mm_struct *mm)
 {
 }
+#endif
 
+#ifndef arch_exit_mmap
 static inline void arch_exit_mmap(struct mm_struct *mm)
 {
 }
+#endif
 
+#ifndef arch_unmap
 static inline void arch_unmap(struct mm_struct *mm,
 			struct vm_area_struct *vma,
 			unsigned long start, unsigned long end)
 {
 }
+#endif
 
+#ifndef arch_bprm_mm_init
 static inline void arch_bprm_mm_init(struct mm_struct *mm,
 				     struct vm_area_struct *vma)
 {
 }
+#endif
 
+#ifndef arch_vma_access_permitted
 static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
 		bool write, bool execute, bool foreign)
 {
 	/* by default, allow everything */
 	return true;
 }
+#endif
+
 #endif	/* _ASM_GENERIC_MM_HOOKS_H */
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCHv2 02/12] arm64: add pointer authentication register bits
  2017-11-27 16:37 [PATCHv2 00/12] ARMv8.3 pointer authentication userspace support Mark Rutland
  2017-11-27 16:37 ` [PATCHv2 01/12] asm-generic: mm_hooks: allow hooks to be overridden individually Mark Rutland
@ 2017-11-27 16:37 ` Mark Rutland
  2017-11-27 16:37 ` [PATCHv2 03/12] arm64/cpufeature: add ARMv8.3 id_aa64isar1 bits Mark Rutland
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2017-11-27 16:37 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, cdall, kvmarm, linux-arch, marc.zyngier,
	mark.rutland, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

The ARMv8.3 pointer authentication extension adds:

* New fields in ID_AA64ISAR1 to report the presence of pointer
  authentication functionality.

* New control bits in SCTLR_ELx to enable this functionality.

* New system registers to hold the keys necessary for this
  functionality.

* A new ESR_ELx.EC code used when the new instructions are affected by
  configurable traps

This patch adds the relevant definitions to <asm/sysreg.h> and
<asm/esr.h> for these, to be used by subsequent patches.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/esr.h    |  3 ++-
 arch/arm64/include/asm/sysreg.h | 30 ++++++++++++++++++++++++++++++
 2 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 014d7d8edcf9..5c628fa31cec 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -30,7 +30,8 @@
 #define ESR_ELx_EC_CP14_LS	(0x06)
 #define ESR_ELx_EC_FP_ASIMD	(0x07)
 #define ESR_ELx_EC_CP10_ID	(0x08)
-/* Unallocated EC: 0x09 - 0x0B */
+#define ESR_ELx_EC_PAC		(0x09)
+/* Unallocated EC: 0x0A - 0x0B */
 #define ESR_ELx_EC_CP14_64	(0x0C)
 /* Unallocated EC: 0x0d */
 #define ESR_ELx_EC_ILL		(0x0E)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 08cc88574659..a67cfedfa3af 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -170,6 +170,19 @@
 #define SYS_TTBR1_EL1			sys_reg(3, 0, 2, 0, 1)
 #define SYS_TCR_EL1			sys_reg(3, 0, 2, 0, 2)
 
+#define SYS_APIAKEYLO_EL1		sys_reg(3, 0, 2, 1, 0)
+#define SYS_APIAKEYHI_EL1		sys_reg(3, 0, 2, 1, 1)
+#define SYS_APIBKEYLO_EL1		sys_reg(3, 0, 2, 1, 2)
+#define SYS_APIBKEYHI_EL1		sys_reg(3, 0, 2, 1, 3)
+
+#define SYS_APDAKEYLO_EL1		sys_reg(3, 0, 2, 2, 0)
+#define SYS_APDAKEYHI_EL1		sys_reg(3, 0, 2, 2, 1)
+#define SYS_APDBKEYLO_EL1		sys_reg(3, 0, 2, 2, 2)
+#define SYS_APDBKEYHI_EL1		sys_reg(3, 0, 2, 2, 3)
+
+#define SYS_APGAKEYLO_EL1		sys_reg(3, 0, 2, 3, 0)
+#define SYS_APGAKEYHI_EL1		sys_reg(3, 0, 2, 3, 1)
+
 #define SYS_ICC_PMR_EL1			sys_reg(3, 0, 4, 6, 0)
 
 #define SYS_AFSR0_EL1			sys_reg(3, 0, 5, 1, 0)
@@ -397,7 +410,11 @@
 #define SYS_ICH_LR15_EL2		__SYS__LR8_EL2(7)
 
 /* Common SCTLR_ELx flags. */
+#define SCTLR_ELx_ENIA	(1 << 31)
+#define SCTLR_ELx_ENIB	(1 << 30)
+#define SCTLR_ELx_ENDA	(1 << 27)
 #define SCTLR_ELx_EE    (1 << 25)
+#define SCTLR_ELx_ENDB	(1 << 13)
 #define SCTLR_ELx_I	(1 << 12)
 #define SCTLR_ELx_SA	(1 << 3)
 #define SCTLR_ELx_C	(1 << 2)
@@ -431,11 +448,24 @@
 #define ID_AA64ISAR0_AES_SHIFT		4
 
 /* id_aa64isar1 */
+#define ID_AA64ISAR1_GPI_SHIFT		28
+#define ID_AA64ISAR1_GPA_SHIFT		24
 #define ID_AA64ISAR1_LRCPC_SHIFT	20
 #define ID_AA64ISAR1_FCMA_SHIFT		16
 #define ID_AA64ISAR1_JSCVT_SHIFT	12
+#define ID_AA64ISAR1_API_SHIFT		8
+#define ID_AA64ISAR1_APA_SHIFT		4
 #define ID_AA64ISAR1_DPB_SHIFT		0
 
+#define ID_AA64ISAR1_APA_NI		0x0
+#define ID_AA64ISAR1_APA_ARCHITECTED	0x1
+#define ID_AA64ISAR1_API_NI		0x0
+#define ID_AA64ISAR1_API_IMP_DEF	0x1
+#define ID_AA64ISAR1_GPA_NI		0x0
+#define ID_AA64ISAR1_GPA_ARCHITECTED	0x1
+#define ID_AA64ISAR1_GPI_NI		0x0
+#define ID_AA64ISAR1_GPI_IMP_DEF	0x1
+
 /* id_aa64pfr0 */
 #define ID_AA64PFR0_SVE_SHIFT		32
 #define ID_AA64PFR0_GIC_SHIFT		24
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCHv2 03/12] arm64/cpufeature: add ARMv8.3 id_aa64isar1 bits
  2017-11-27 16:37 [PATCHv2 00/12] ARMv8.3 pointer authentication userspace support Mark Rutland
  2017-11-27 16:37 ` [PATCHv2 01/12] asm-generic: mm_hooks: allow hooks to be overridden individually Mark Rutland
  2017-11-27 16:37 ` [PATCHv2 02/12] arm64: add pointer authentication register bits Mark Rutland
@ 2017-11-27 16:37 ` Mark Rutland
  2017-11-27 16:37 ` [PATCHv2 04/12] arm64/cpufeature: detect pointer authentication Mark Rutland
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2017-11-27 16:37 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, cdall, kvmarm, linux-arch, marc.zyngier,
	mark.rutland, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

From ARMv8.3 onwards, ID_AA64ISAR1 is no longer entirely RES0, and now
has four fields describing the presence of pointer authentication
functionality:

* APA - address authentication present, using an architected algorithm
* API - address authentication present, using an IMP DEF algorithm
* GPA - generic authentication present, using an architected algorithm
* GPI - generic authentication present, using an IMP DEF algorithm

This patch adds the requisite definitions so that we can identify the
presence of this functionality.

For the time being, the features are hidden from both KVM guests and
userspace. As marking them with FTR_HIDDEN only hides them from
userspace, they are also protected with ifdeffery on
CONFIG_ARM64_POINTER_AUTHENTICATION.
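
For reference, a subsequent patch in this series consumes these definitions
roughly as follows:

	u64 isar1 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR1_EL1);
	bool apa = cpuid_feature_extract_unsigned_field(isar1,
					ID_AA64ISAR1_APA_SHIFT) > 0;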

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/kernel/cpufeature.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c5ba0097887f..1883cdffcdf7 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -137,9 +137,17 @@ static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_GPI_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_GPA_SHIFT, 4, 0),
+#endif
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
+#endif
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCHv2 04/12] arm64/cpufeature: detect pointer authentication
  2017-11-27 16:37 [PATCHv2 00/12] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (2 preceding siblings ...)
  2017-11-27 16:37 ` [PATCHv2 03/12] arm64/cpufeature: add ARMv8.3 id_aa64isar1 bits Mark Rutland
@ 2017-11-27 16:37 ` Mark Rutland
  2017-11-27 16:37 ` [PATCHv2 05/12] arm64: Don't trap host pointer auth use to EL2 Mark Rutland
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2017-11-27 16:37 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, cdall, kvmarm, linux-arch, marc.zyngier,
	mark.rutland, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

So that we can dynamically handle the presence of pointer authentication
functionality, wire up probing code in cpufeature.c.

It is assumed that if all CPUs support an IMP DEF algorithm, the same
algorithm is used across all CPUs.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/cpucaps.h |  8 +++-
 arch/arm64/kernel/cpufeature.c   | 82 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 89 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 2ff7c5e8efab..d2830ce5c1c7 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -41,7 +41,13 @@
 #define ARM64_WORKAROUND_CAVIUM_30115		20
 #define ARM64_HAS_DCPOP				21
 #define ARM64_SVE				22
+#define ARM64_HAS_ADDRESS_AUTH_ARCH		23
+#define ARM64_HAS_ADDRESS_AUTH_IMP_DEF		24
+#define ARM64_HAS_ADDRESS_AUTH			25
+#define ARM64_HAS_GENERIC_AUTH_ARCH		26
+#define ARM64_HAS_GENERIC_AUTH_IMP_DEF		27
+#define ARM64_HAS_GENERIC_AUTH			28
 
-#define ARM64_NCAPS				23
+#define ARM64_NCAPS				29
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 1883cdffcdf7..babd4c173092 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -853,6 +853,36 @@ static bool has_no_fpsimd(const struct arm64_cpu_capabilities *entry, int __unus
 					ID_AA64PFR0_FP_SHIFT) < 0;
 }
 
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
+			     int __unused)
+{
+	u64 isar1 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR1_EL1);
+	bool api, apa;
+
+	apa = cpuid_feature_extract_unsigned_field(isar1,
+					ID_AA64ISAR1_APA_SHIFT) > 0;
+	api = cpuid_feature_extract_unsigned_field(isar1,
+					ID_AA64ISAR1_API_SHIFT) > 0;
+
+	return apa || api;
+}
+
+static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
+			     int __unused)
+{
+	u64 isar1 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR1_EL1);
+	bool gpi, gpa;
+
+	gpa = cpuid_feature_extract_unsigned_field(isar1,
+					ID_AA64ISAR1_GPA_SHIFT) > 0;
+	gpi = cpuid_feature_extract_unsigned_field(isar1,
+					ID_AA64ISAR1_GPI_SHIFT) > 0;
+
+	return gpa || gpi;
+}
+#endif /* CONFIG_ARM64_POINTER_AUTHENTICATION */
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "GIC system register CPU interface",
@@ -970,6 +1000,58 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.enable = sve_kernel_enable,
 	},
 #endif /* CONFIG_ARM64_SVE */
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+	{
+		.desc = "Address authentication (architected algorithm)",
+		.capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
+		.def_scope = SCOPE_SYSTEM,
+		.sys_reg = SYS_ID_AA64ISAR1_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64ISAR1_APA_SHIFT,
+		.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
+		.matches = has_cpuid_feature,
+	},
+	{
+		.desc = "Address authentication (IMP DEF algorithm)",
+		.capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF,
+		.def_scope = SCOPE_SYSTEM,
+		.sys_reg = SYS_ID_AA64ISAR1_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64ISAR1_API_SHIFT,
+		.min_field_value = ID_AA64ISAR1_API_IMP_DEF,
+		.matches = has_cpuid_feature,
+	},
+	{
+		.capability = ARM64_HAS_ADDRESS_AUTH,
+		.def_scope = SCOPE_SYSTEM,
+		.matches = has_address_auth,
+	},
+	{
+		.desc = "Generic authentication (architected algorithm)",
+		.capability = ARM64_HAS_GENERIC_AUTH_ARCH,
+		.def_scope = SCOPE_SYSTEM,
+		.sys_reg = SYS_ID_AA64ISAR1_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64ISAR1_GPA_SHIFT,
+		.min_field_value = ID_AA64ISAR1_GPA_ARCHITECTED,
+		.matches = has_cpuid_feature
+	},
+	{
+		.desc = "Generic authentication (IMP DEF algorithm)",
+		.capability = ARM64_HAS_GENERIC_AUTH_IMP_DEF,
+		.def_scope = SCOPE_SYSTEM,
+		.sys_reg = SYS_ID_AA64ISAR1_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64ISAR1_GPI_SHIFT,
+		.min_field_value = ID_AA64ISAR1_GPI_IMP_DEF,
+		.matches = has_cpuid_feature
+	},
+	{
+		.capability = ARM64_HAS_GENERIC_AUTH,
+		.def_scope = SCOPE_SYSTEM,
+		.matches = has_generic_auth,
+	},
+#endif /* CONFIG_ARM64_POINTER_AUTHENTICATION */
 	{},
 };
 
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCHv2 05/12] arm64: Don't trap host pointer auth use to EL2
  2017-11-27 16:37 [PATCHv2 00/12] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (3 preceding siblings ...)
  2017-11-27 16:37 ` [PATCHv2 04/12] arm64/cpufeature: detect pointer authentication Mark Rutland
@ 2017-11-27 16:37 ` Mark Rutland
  2018-02-06 12:39   ` Christoffer Dall
  2017-11-27 16:38 ` [PATCHv2 06/12] arm64: add basic pointer authentication support Mark Rutland
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 26+ messages in thread
From: Mark Rutland @ 2017-11-27 16:37 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, cdall, kvmarm, linux-arch, marc.zyngier,
	mark.rutland, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

To allow EL0 (and/or EL1) to use pointer authentication functionality,
we must ensure that pointer authentication instructions and accesses to
pointer authentication keys are not trapped to EL2 (where we will not be
able to handle them).

This patch ensures that HCR_EL2 is configured appropriately when the
kernel is booted at EL2. For non-VHE kernels we set HCR_EL2.{API,APK},
ensuring that EL1 can access keys and permit EL0 use of instructions.
For VHE kernels, EL2 access is controlled by EL3, and we need not set
anything.

This does not enable support for KVM guests, since KVM manages HCR_EL2
itself.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoffer Dall <cdall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/include/asm/kvm_arm.h |  2 ++
 arch/arm64/kernel/head.S         | 19 +++++++++++++++++--
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 7f069ff37f06..62854d5d1d3b 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -23,6 +23,8 @@
 #include <asm/types.h>
 
 /* Hyp Configuration Register (HCR) bits */
+#define HCR_API		(UL(1) << 41)
+#define HCR_APK		(UL(1) << 40)
 #define HCR_E2H		(UL(1) << 34)
 #define HCR_ID		(UL(1) << 33)
 #define HCR_CD		(UL(1) << 32)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 67e86a0f57ac..06a96e9af26b 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -415,10 +415,25 @@ CPU_LE(	bic	x0, x0, #(1 << 25)	)	// Clear the EE bit for EL2
 
 	/* Hyp configuration. */
 	mov	x0, #HCR_RW			// 64-bit EL1
-	cbz	x2, set_hcr
+	cbz	x2, 1f
 	orr	x0, x0, #HCR_TGE		// Enable Host Extensions
 	orr	x0, x0, #HCR_E2H
-set_hcr:
+1:
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+	/*
+	 * Disable pointer authentication traps to EL2. The HCR_EL2.{APK,API}
+	 * bits exist iff at least one authentication mechanism is implemented.
+	 */
+	mrs	x1, id_aa64isar1_el1
+	mov_q	x3, ((0xf << ID_AA64ISAR1_GPI_SHIFT) | \
+		     (0xf << ID_AA64ISAR1_GPA_SHIFT) | \
+		     (0xf << ID_AA64ISAR1_API_SHIFT) | \
+		     (0xf << ID_AA64ISAR1_APA_SHIFT))
+	and	x1, x1, x3
+	cbz	x1, 1f
+	orr	x0, x0, #(HCR_APK | HCR_API)
+1:
+#endif
 	msr	hcr_el2, x0
 	isb
 
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCHv2 06/12] arm64: add basic pointer authentication support
  2017-11-27 16:37 [PATCHv2 00/12] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (4 preceding siblings ...)
  2017-11-27 16:37 ` [PATCHv2 05/12] arm64: Don't trap host pointer auth use to EL2 Mark Rutland
@ 2017-11-27 16:38 ` Mark Rutland
  2018-05-22 19:06   ` Adam Wallis
  2017-11-27 16:38 ` [PATCHv2 07/12] arm64: expose user PAC bit positions via ptrace Mark Rutland
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 26+ messages in thread
From: Mark Rutland @ 2017-11-27 16:38 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, cdall, kvmarm, linux-arch, marc.zyngier,
	mark.rutland, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

This patch adds basic support for pointer authentication, allowing
userspace to make use of APIAKey. The kernel maintains an APIAKey value
for each process (shared by all threads within), which is initialised to
a random value at exec() time.

To describe that address authentication instructions are available, the
ID_AA64ISAR1.{APA,API} fields are exposed to userspace. A new hwcap,
APIA, is added to describe that the kernel manages APIAKey.

Instructions using other keys (APIBKey, APDAKey, APDBKey) are disabled,
and will behave as NOPs. These may be made use of in future patches.

No support is added for the generic key (APGAKey), though this cannot be
trapped or made to behave as a NOP. Its presence is not advertised with
a hwcap.
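
With this in place, userspace can probe for a kernel-managed APIAKey via the
hwcap, e.g. (a sketch):

	#include <sys/auxv.h>
	#include <asm/hwcap.h>		/* HWCAP_APIA, added by this patch */

	static int have_apia(void)
	{
		return !!(getauxval(AT_HWCAP) & HWCAP_APIA);
	}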

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/mmu.h          |  5 ++
 arch/arm64/include/asm/mmu_context.h  | 25 +++++++++-
 arch/arm64/include/asm/pointer_auth.h | 89 +++++++++++++++++++++++++++++++++++
 arch/arm64/include/uapi/asm/hwcap.h   |  1 +
 arch/arm64/kernel/cpufeature.c        | 17 ++++++-
 arch/arm64/kernel/cpuinfo.c           |  1 +
 6 files changed, 134 insertions(+), 4 deletions(-)
 create mode 100644 arch/arm64/include/asm/pointer_auth.h

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 0d34bf0a89c7..2bcdf7f923ba 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -16,12 +16,17 @@
 #ifndef __ASM_MMU_H
 #define __ASM_MMU_H
 
+#include <asm/pointer_auth.h>
+
 #define MMCF_AARCH32	0x1	/* mm context flag for AArch32 executables */
 
 typedef struct {
 	atomic64_t	id;
 	void		*vdso;
 	unsigned long	flags;
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+	struct ptrauth_keys	ptrauth_keys;
+#endif
 } mm_context_t;
 
 /*
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 3257895a9b5e..06757a537bd7 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -31,7 +31,6 @@
 #include <asm/cacheflush.h>
 #include <asm/cpufeature.h>
 #include <asm/proc-fns.h>
-#include <asm-generic/mm_hooks.h>
 #include <asm/cputype.h>
 #include <asm/pgtable.h>
 #include <asm/sysreg.h>
@@ -154,7 +153,14 @@ static inline void cpu_replace_ttbr1(pgd_t *pgd)
 #define destroy_context(mm)		do { } while(0)
 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu);
 
-#define init_new_context(tsk,mm)	({ atomic64_set(&(mm)->context.id, 0); 0; })
+static inline int init_new_context(struct task_struct *tsk,
+			struct mm_struct *mm)
+{
+	atomic64_set(&mm->context.id, 0);
+	mm_ctx_ptrauth_init(&mm->context);
+
+	return 0;
+}
 
 /*
  * This is called when "tsk" is about to enter lazy TLB mode.
@@ -200,6 +206,8 @@ static inline void __switch_mm(struct mm_struct *next)
 		return;
 	}
 
+	mm_ctx_ptrauth_switch(&next->context);
+
 	check_and_switch_context(next, cpu);
 }
 
@@ -226,6 +234,19 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 
 void verify_cpu_asid_bits(void);
 
+static inline void arch_dup_mmap(struct mm_struct *oldmm,
+				 struct mm_struct *mm)
+{
+	mm_ctx_ptrauth_dup(&oldmm->context, &mm->context);
+}
+#define arch_dup_mmap arch_dup_mmap
+
+/*
+ * We need to override arch_dup_mmap before including the generic hooks, which
+ * are otherwise sufficient for us.
+ */
+#include <asm-generic/mm_hooks.h>
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* !__ASM_MMU_CONTEXT_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
new file mode 100644
index 000000000000..964da0c3dc48
--- /dev/null
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -0,0 +1,89 @@
+/*
+ * Copyright (C) 2016 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_POINTER_AUTH_H
+#define __ASM_POINTER_AUTH_H
+
+#include <linux/random.h>
+
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
+
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+/*
+ * Each key is a 128-bit quantity which is split across a pair of 64-bit
+ * registers (Lo and Hi).
+ */
+struct ptrauth_key {
+	unsigned long lo, hi;
+};
+
+/*
+ * We give each process its own instruction A key (APIAKey), which is shared by
+ * all threads. This is inherited upon fork(), and reinitialised upon exec*().
+ * All other keys are currently unused, with APIBKey, APDAKey, and APDBKey
+ * instructions behaving as NOPs.
+ */
+struct ptrauth_keys {
+	struct ptrauth_key apia;
+};
+
+static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
+{
+	if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+		return;
+
+	get_random_bytes(keys, sizeof(*keys));
+}
+
+#define __ptrauth_key_install(k, v)			\
+do {							\
+	write_sysreg_s(v.lo, SYS_ ## k ## KEYLO_EL1);	\
+	write_sysreg_s(v.hi, SYS_ ## k ## KEYHI_EL1);	\
+} while (0)
+
+static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
+{
+	if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+		return;
+
+	__ptrauth_key_install(APIA, keys->apia);
+}
+
+static inline void ptrauth_keys_dup(struct ptrauth_keys *old,
+				    struct ptrauth_keys *new)
+{
+	if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+		return;
+
+	*new = *old;
+}
+
+#define mm_ctx_ptrauth_init(ctx) \
+	ptrauth_keys_init(&(ctx)->ptrauth_keys)
+
+#define mm_ctx_ptrauth_switch(ctx) \
+	ptrauth_keys_switch(&(ctx)->ptrauth_keys)
+
+#define mm_ctx_ptrauth_dup(oldctx, newctx) \
+	ptrauth_keys_dup(&(oldctx)->ptrauth_keys, &(newctx)->ptrauth_keys)
+
+#else
+#define mm_ctx_ptrauth_init(ctx)
+#define mm_ctx_ptrauth_switch(ctx)
+#define mm_ctx_ptrauth_dup(oldctx, newctx)
+#endif
+
+#endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
index cda76fa8b9b2..20daa89d839c 100644
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -43,5 +43,6 @@
 #define HWCAP_ASIMDDP		(1 << 20)
 #define HWCAP_SHA512		(1 << 21)
 #define HWCAP_SVE		(1 << 22)
+#define HWCAP_APIA		(1 << 23)
 
 #endif /* _UAPI__ASM_HWCAP_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index babd4c173092..9df232d16845 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -145,8 +145,8 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
 #ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
 #endif
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
 	ARM64_FTR_END,
@@ -832,6 +832,15 @@ static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused
 	return is_kernel_in_hyp_mode();
 }
 
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+static int cpu_enable_address_auth(void *__unused)
+{
+	config_sctlr_el1(0, SCTLR_ELx_ENIA);
+
+	return 0;
+}
+#endif /* CONFIG_ARM64_POINTER_AUTHENTICATION */
+
 static bool hyp_offset_low(const struct arm64_cpu_capabilities *entry,
 			   int __unused)
 {
@@ -1025,6 +1034,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.capability = ARM64_HAS_ADDRESS_AUTH,
 		.def_scope = SCOPE_SYSTEM,
 		.matches = has_address_auth,
+		.enable = cpu_enable_address_auth,
 	},
 	{
 		.desc = "Generic authentication (architected algorithm)",
@@ -1092,6 +1102,9 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
 #ifdef CONFIG_ARM64_SVE
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, HWCAP_SVE),
 #endif
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_APA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_APIA),
+#endif
 	{},
 };
 
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 1e2554543506..88db0328c366 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -76,6 +76,7 @@ static const char *const hwcap_str[] = {
 	"asimddp",
 	"sha512",
 	"sve",
+	"apia",
 	NULL
 };
 
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCHv2 07/12] arm64: expose user PAC bit positions via ptrace
  2017-11-27 16:37 [PATCHv2 00/12] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (5 preceding siblings ...)
  2017-11-27 16:38 ` [PATCHv2 06/12] arm64: add basic pointer authentication support Mark Rutland
@ 2017-11-27 16:38 ` Mark Rutland
  2017-11-27 16:38 ` [PATCHv2 08/12] arm64: perf: strip PAC when unwinding userspace Mark Rutland
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2017-11-27 16:38 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, cdall, kvmarm, linux-arch, marc.zyngier,
	mark.rutland, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

When pointer authentication is in use, data/instruction pointers have a
number of PAC bits inserted into them. The number and position of these
bits depends on the configured TCR_ELx.TxSZ and whether tagging is
enabled. ARMv8.3 allows tagging to differ for instruction and data
pointers.

For userspace debuggers to unwind the stack and/or to follow pointer
chains, they need to be able to remove the PAC bits before attempting to
use a pointer.

This patch adds a new structure with masks describing the location of
the PAC bits in userspace instruction and data pointers (i.e. those
addressable via TTBR0), which userspace can query via PTRACE_GETREGSET.
By clearing these bits from pointers, userspace can acquire the PAC-less
versions.

This new regset is exposed when the kernel is built with (user) pointer
authentication support, and the feature is enabled. Otherwise, it is
hidden.
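
A debugger would query the masks along these lines (a sketch, with error
handling elided):

	#include <stdint.h>
	#include <sys/ptrace.h>
	#include <sys/types.h>
	#include <sys/uio.h>
	#include <linux/elf.h>		/* NT_ARM_PAC_MASK */
	#include <asm/ptrace.h>		/* struct user_pac_mask */

	static uint64_t strip_insn_pac(pid_t pid, uint64_t ptr)
	{
		struct user_pac_mask masks;
		struct iovec iov = { .iov_base = &masks, .iov_len = sizeof(masks) };

		if (ptrace(PTRACE_GETREGSET, pid, NT_ARM_PAC_MASK, &iov))
			return ptr;	/* regset hidden: no PAC bits in use */

		return ptr & ~masks.insn_mask;
	}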

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yao Qi <yao.qi@arm.com>
---
 arch/arm64/include/asm/pointer_auth.h |  8 ++++++++
 arch/arm64/include/uapi/asm/ptrace.h  |  7 +++++++
 arch/arm64/kernel/ptrace.c            | 38 +++++++++++++++++++++++++++++++++++
 include/uapi/linux/elf.h              |  1 +
 4 files changed, 54 insertions(+)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 964da0c3dc48..b08ebec4b806 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -16,9 +16,11 @@
 #ifndef __ASM_POINTER_AUTH_H
 #define __ASM_POINTER_AUTH_H
 
+#include <linux/bitops.h>
 #include <linux/random.h>
 
 #include <asm/cpufeature.h>
+#include <asm/memory.h>
 #include <asm/sysreg.h>
 
 #ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
@@ -71,6 +73,12 @@ static inline void ptrauth_keys_dup(struct ptrauth_keys *old,
 	*new = *old;
 }
 
+/*
+ * The EL0 pointer bits used by a pointer authentication code.
+ * This assumes TBI0 is enabled; otherwise, bits 63:56 would also apply.
+ */
+#define ptrauth_pac_mask() 	GENMASK(54, VA_BITS)
+
 #define mm_ctx_ptrauth_init(ctx) \
 	ptrauth_keys_init(&(ctx)->ptrauth_keys)
 
diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index 98c4ce55d9c3..4994d718771a 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -228,6 +228,13 @@ struct user_sve_header {
 		  SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq, flags)	\
 		: SVE_PT_FPSIMD_OFFSET + SVE_PT_FPSIMD_SIZE(vq, flags))
 
+/* pointer authentication masks (NT_ARM_PAC_MASK) */
+
+struct user_pac_mask {
+	__u64		data_mask;
+	__u64		insn_mask;
+};
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _UAPI__ASM_PTRACE_H */
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 7c44658b316d..6901dce44c8d 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -44,6 +44,7 @@
 #include <asm/cpufeature.h>
 #include <asm/debug-monitors.h>
 #include <asm/pgtable.h>
+#include <asm/pointer_auth.h>
 #include <asm/stacktrace.h>
 #include <asm/syscall.h>
 #include <asm/traps.h>
@@ -951,6 +952,30 @@ static int sve_set(struct task_struct *target,
 
 #endif /* CONFIG_ARM64_SVE */
 
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+static int pac_mask_get(struct task_struct *target,
+			const struct user_regset *regset,
+			unsigned int pos, unsigned int count,
+			void *kbuf, void __user *ubuf)
+{
+	/*
+	 * The PAC bits can differ across data and instruction pointers
+	 * depending on TCR_EL1.TBID*, which we may make use of in future, so
+	 * we expose separate masks.
+	 */
+	unsigned long mask = ptrauth_pac_mask();
+	struct user_pac_mask uregs = {
+		.data_mask = mask,
+		.insn_mask = mask,
+	};
+
+	if (!cpus_have_cap(ARM64_HAS_ADDRESS_AUTH))
+		return -EINVAL;
+
+	return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &uregs, 0, -1);
+}
+#endif /* CONFIG_ARM64_POINTER_AUTHENTICATION */
+
 enum aarch64_regset {
 	REGSET_GPR,
 	REGSET_FPR,
@@ -963,6 +988,9 @@ enum aarch64_regset {
 #ifdef CONFIG_ARM64_SVE
 	REGSET_SVE,
 #endif
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+	REGSET_PAC_MASK,
+#endif
 };
 
 static const struct user_regset aarch64_regsets[] = {
@@ -1032,6 +1060,16 @@ static const struct user_regset aarch64_regsets[] = {
 		.get_size = sve_get_size,
 	},
 #endif
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+	[REGSET_PAC_MASK] = {
+		.core_note_type = NT_ARM_PAC_MASK,
+		.n = sizeof(struct user_pac_mask) / sizeof(u64),
+		.size = sizeof(u64),
+		.align = sizeof(u64),
+		.get = pac_mask_get,
+		/* this cannot be set dynamically */
+	},
+#endif
 };
 
 static const struct user_regset_view user_aarch64_view = {
diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
index bb6836986200..4ca58005e04a 100644
--- a/include/uapi/linux/elf.h
+++ b/include/uapi/linux/elf.h
@@ -419,6 +419,7 @@ typedef struct elf64_shdr {
 #define NT_ARM_HW_WATCH	0x403		/* ARM hardware watchpoint registers */
 #define NT_ARM_SYSTEM_CALL	0x404	/* ARM system call number */
 #define NT_ARM_SVE	0x405		/* ARM Scalable Vector Extension registers */
+#define NT_ARM_PAC_MASK		0x406	/* ARM pointer authentication code masks */
 #define NT_METAG_CBUF	0x500		/* Metag catch buffer registers */
 #define NT_METAG_RPIPE	0x501		/* Metag read pipeline state */
 #define NT_METAG_TLS	0x502		/* Metag TLS pointer */
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCHv2 08/12] arm64: perf: strip PAC when unwinding userspace
  2017-11-27 16:37 [PATCHv2 00/12] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (6 preceding siblings ...)
  2017-11-27 16:38 ` [PATCHv2 07/12] arm64: expose user PAC bit positions via ptrace Mark Rutland
@ 2017-11-27 16:38 ` Mark Rutland
  2017-11-27 16:38 ` [PATCHv2 09/12] arm64/kvm: preserve host HCR_EL2 value Mark Rutland
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2017-11-27 16:38 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, cdall, kvmarm, linux-arch, marc.zyngier,
	mark.rutland, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

When the kernel is unwinding userspace callchains, we can't expect that
the userspace consumer of these callchains has the data necessary to
strip the PAC from the stored LR.

This patch has the kernel strip the PAC from user stackframes when the
in-kernel unwinder is used. This only affects the LR value, and not the
FP.

This only affects the in-kernel unwinder. When userspace performs
unwinding, it is up to userspace to strip PACs as necessary (which can
be determined from DWARF information).

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yao Qi <yao.qi@arm.com>
---
 arch/arm64/include/asm/pointer_auth.h | 7 +++++++
 arch/arm64/kernel/perf_callchain.c    | 5 ++++-
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index b08ebec4b806..07788ce755bc 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -79,6 +79,12 @@ static inline void ptrauth_keys_dup(struct ptrauth_keys *old,
  */
 #define ptrauth_pac_mask() 	GENMASK(54, VA_BITS)
 
+/* Only valid for EL0 TTBR0 instruction pointers */
+static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
+{
+	return ptr & ~ptrauth_pac_mask();
+}
+
 #define mm_ctx_ptrauth_init(ctx) \
 	ptrauth_keys_init(&(ctx)->ptrauth_keys)
 
@@ -89,6 +95,7 @@ static inline void ptrauth_keys_dup(struct ptrauth_keys *old,
 	ptrauth_keys_dup(&(oldctx)->ptrauth_keys, &(newctx)->ptrauth_keys)
 
 #else
+#define ptrauth_strip_insn_pac(lr)	(lr)
 #define mm_ctx_ptrauth_init(ctx)
 #define mm_ctx_ptrauth_switch(ctx)
 #define mm_ctx_ptrauth_dup(oldctx, newctx)
diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
index bcafd7dcfe8b..928204f6ab08 100644
--- a/arch/arm64/kernel/perf_callchain.c
+++ b/arch/arm64/kernel/perf_callchain.c
@@ -35,6 +35,7 @@ user_backtrace(struct frame_tail __user *tail,
 {
 	struct frame_tail buftail;
 	unsigned long err;
+	unsigned long lr;
 
 	/* Also check accessibility of one struct frame_tail beyond */
 	if (!access_ok(VERIFY_READ, tail, sizeof(buftail)))
@@ -47,7 +48,9 @@ user_backtrace(struct frame_tail __user *tail,
 	if (err)
 		return NULL;
 
-	perf_callchain_store(entry, buftail.lr);
+	lr = ptrauth_strip_insn_pac(buftail.lr);
+
+	perf_callchain_store(entry, lr);
 
 	/*
 	 * Frame pointers should strictly progress back up the stack
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCHv2 09/12] arm64/kvm: preserve host HCR_EL2 value
  2017-11-27 16:37 [PATCHv2 00/12] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (7 preceding siblings ...)
  2017-11-27 16:38 ` [PATCHv2 08/12] arm64: perf: strip PAC when unwinding userspace Mark Rutland
@ 2017-11-27 16:38 ` Mark Rutland
  2018-02-06 12:39   ` Christoffer Dall
  2017-11-27 16:38 ` [PATCHv2 10/12] arm64/kvm: context-switch ptrauth registers Mark Rutland
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 26+ messages in thread
From: Mark Rutland @ 2017-11-27 16:38 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, cdall, kvmarm, linux-arch, marc.zyngier,
	mark.rutland, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
is a constant value. This works today, as the host HCR_EL2 value is
always the same, but this will get in the way of supporting extensions
that require HCR_EL2 bits to be set conditionally for the host.

To allow such features to work without KVM having to explicitly handle
every possible host feature combination, this patch has KVM save/restore
the host HCR when switching to/from a guest HCR.

For __{activate,deactivate}_traps(), the HCR save/restore is made common
across the !VHE and VHE paths. As the host and guest HCR values must
have E2H set when VHE is in use, register redirection should always be
in effect at EL2, and this change should not adversely affect the VHE
code.

For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
to toggle the TGE bit with a RMW sequence, as we already do in
__tlb_switch_to_guest_vhe().

The now unused HCR_HOST_VHE_FLAGS definition is removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/include/asm/kvm_arm.h  | 1 -
 arch/arm64/include/asm/kvm_host.h | 5 ++++-
 arch/arm64/kvm/hyp/switch.c       | 5 +++--
 arch/arm64/kvm/hyp/tlb.c          | 6 +++++-
 4 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 62854d5d1d3b..aa02b05430e8 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -84,7 +84,6 @@
 			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW)
 #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
 #define HCR_INT_OVERRIDE   (HCR_FMO | HCR_IMO)
-#define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
 
 /* TCR_EL2 Registers bits */
 #define TCR_EL2_RES1		((1 << 31) | (1 << 23))
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 674912d7a571..39184aa3e2f2 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -199,10 +199,13 @@ typedef struct kvm_cpu_context kvm_cpu_context_t;
 struct kvm_vcpu_arch {
 	struct kvm_cpu_context ctxt;
 
-	/* HYP configuration */
+	/* Guest HYP configuration */
 	u64 hcr_el2;
 	u32 mdcr_el2;
 
+	/* Host HYP configuration */
+	u64 host_hcr_el2;
+
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
 
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 525c01f48867..2205f0be3ced 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -71,6 +71,8 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 {
 	u64 val;
 
+	vcpu->arch.host_hcr_el2 = read_sysreg(hcr_el2);
+
 	/*
 	 * We are about to set CPTR_EL2.TFP to trap all floating point
 	 * register accesses to EL2, however, the ARM ARM clearly states that
@@ -116,7 +118,6 @@ static void __hyp_text __deactivate_traps_vhe(void)
 		    MDCR_EL2_TPMS;
 
 	write_sysreg(mdcr_el2, mdcr_el2);
-	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
 	write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1);
 	write_sysreg(vectors, vbar_el1);
 }
@@ -129,7 +130,6 @@ static void __hyp_text __deactivate_traps_nvhe(void)
 	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
 
 	write_sysreg(mdcr_el2, mdcr_el2);
-	write_sysreg(HCR_RW, hcr_el2);
 	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
 }
 
@@ -151,6 +151,7 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
 	__deactivate_traps_arch()();
 	write_sysreg(0, hstr_el2);
 	write_sysreg(0, pmuserenr_el0);
+	write_sysreg(vcpu->arch.host_hcr_el2, hcr_el2);
 }
 
 static void __hyp_text __activate_vm(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index 73464a96c365..c2b0680efa2c 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -49,12 +49,16 @@ static hyp_alternate_select(__tlb_switch_to_guest,
 
 static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm)
 {
+	u64 val;
+
 	/*
 	 * We're done with the TLB operation, let's restore the host's
 	 * view of HCR_EL2.
 	 */
 	write_sysreg(0, vttbr_el2);
-	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
+	val = read_sysreg(hcr_el2);
+	val |= HCR_TGE;
+	write_sysreg(val, hcr_el2);
 }
 
 static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm)
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCHv2 10/12] arm64/kvm: context-switch ptrauth registers
  2017-11-27 16:37 [PATCHv2 00/12] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (8 preceding siblings ...)
  2017-11-27 16:38 ` [PATCHv2 09/12] arm64/kvm: preserve host HCR_EL2 value Mark Rutland
@ 2017-11-27 16:38 ` Mark Rutland
  2018-02-06 12:38   ` Christoffer Dall
  2017-11-27 16:38 ` [PATCHv2 11/12] arm64: enable pointer authentication Mark Rutland
  2017-11-27 16:38 ` [PATCHv2 12/12] arm64: docs: document " Mark Rutland
  11 siblings, 1 reply; 26+ messages in thread
From: Mark Rutland @ 2017-11-27 16:38 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, cdall, kvmarm, linux-arch, marc.zyngier,
	mark.rutland, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys, and enable the feature. The next time the
vcpu is scheduled out/in, we start again.

Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). When the guest is scheduled on a
physical CPU lacking the feature, these attempts will result in an UNDEF
being taken by the guest.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoffer Dall <cdall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/include/asm/kvm_host.h | 23 +++++++++-
 arch/arm64/include/asm/kvm_hyp.h  |  7 +++
 arch/arm64/kvm/handle_exit.c      | 21 +++++++++
 arch/arm64/kvm/hyp/Makefile       |  1 +
 arch/arm64/kvm/hyp/ptrauth-sr.c   | 91 +++++++++++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/switch.c       |  4 ++
 arch/arm64/kvm/sys_regs.c         | 32 ++++++++++++++
 7 files changed, 178 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 39184aa3e2f2..2fc21a2a75a7 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -136,6 +136,18 @@ enum vcpu_sysreg {
 	PMSWINC_EL0,	/* Software Increment Register */
 	PMUSERENR_EL0,	/* User Enable Register */
 
+	/* Pointer Authentication Registers */
+	APIAKEYLO_EL1,
+	APIAKEYHI_EL1,
+	APIBKEYLO_EL1,
+	APIBKEYHI_EL1,
+	APDAKEYLO_EL1,
+	APDAKEYHI_EL1,
+	APDBKEYLO_EL1,
+	APDBKEYHI_EL1,
+	APGAKEYLO_EL1,
+	APGAKEYHI_EL1,
+
 	/* 32bit specific registers. Keep them at the end of the range */
 	DACR32_EL2,	/* Domain Access Control Register */
 	IFSR32_EL2,	/* Instruction Fault Status Register */
@@ -363,10 +375,19 @@ static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
 	__kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr);
 }
 
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
+
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
-static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
+
+static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
+{
+	kvm_arm_vcpu_ptrauth_disable(vcpu);
+}
+
 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
 
 void kvm_arm_init_debug(void);
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 08d3bb66c8b7..d0dd924cb175 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -152,6 +152,13 @@ void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
 void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
 bool __fpsimd_enabled(void);
 
+void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
+			       struct kvm_cpu_context *host_ctxt,
+			       struct kvm_cpu_context *guest_ctxt);
+void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
+			      struct kvm_cpu_context *host_ctxt,
+			      struct kvm_cpu_context *guest_ctxt);
+
 u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
 void __noreturn __hyp_do_panic(unsigned long, ...);
 
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index b71247995469..d9aff3c86551 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -136,6 +136,26 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	return ret;
 }
 
+/*
+ * Handle the guest trying to use a ptrauth instruction, or trying to access a
+ * ptrauth register.
+ */
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
+{
+	if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH) ||
+	    cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
+		kvm_arm_vcpu_ptrauth_enable(vcpu);
+	} else {
+		kvm_inject_undefined(vcpu);
+	}
+}
+
+static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
+	return 1;
+}
+
 static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
 	u32 hsr = kvm_vcpu_get_hsr(vcpu);
@@ -176,6 +196,7 @@ static exit_handle_fn arm_exit_handlers[] = {
 	[ESR_ELx_EC_BKPT32]	= kvm_handle_guest_debug,
 	[ESR_ELx_EC_BRK64]	= kvm_handle_guest_debug,
 	[ESR_ELx_EC_FP_ASIMD]	= handle_no_fpsimd,
+	[ESR_ELx_EC_PAC]	= kvm_handle_ptrauth,
 };
 
 static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index f04400d494b7..2c2c3bd90cc0 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -19,6 +19,7 @@ obj-$(CONFIG_KVM_ARM_HOST) += fpsimd.o
 obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
 obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
 obj-$(CONFIG_KVM_ARM_HOST) += s2-setup.o
+obj-$(CONFIG_KVM_ARM_HOST) += ptrauth-sr.o
 
 # KVM code is run at a different exception code with a different map, so
 # compiler instrumentation that inserts callbacks or checks into the code may
diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
new file mode 100644
index 000000000000..2784fb373296
--- /dev/null
+++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
@@ -0,0 +1,91 @@
+/*
+ * Copyright (C) 2017 ARM Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/compiler.h>
+#include <linux/kvm_host.h>
+
+#include <asm/cpucaps.h>
+#include <asm/cpufeature.h>
+#include <asm/kvm_asm.h>
+#include <asm/kvm_hyp.h>
+
+static bool __hyp_text __ptrauth_is_enabled(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.hcr_el2 & (HCR_API | HCR_APK);
+}
+
+#define __ptrauth_save_key(regs, key)						\
+({										\
+	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
+	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
+})
+
+static void __hyp_text __ptrauth_save_state(struct kvm_cpu_context *ctxt)
+{
+	if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH)) {
+		__ptrauth_save_key(ctxt->sys_regs, APIA);
+		__ptrauth_save_key(ctxt->sys_regs, APIB);
+		__ptrauth_save_key(ctxt->sys_regs, APDA);
+		__ptrauth_save_key(ctxt->sys_regs, APDB);
+	}
+
+	if (cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
+		__ptrauth_save_key(ctxt->sys_regs, APGA);
+	}
+}
+
+#define __ptrauth_restore_key(regs, key) 					\
+({										\
+	write_sysreg_s(regs[key ## KEYLO_EL1], SYS_ ## key ## KEYLO_EL1);	\
+	write_sysreg_s(regs[key ## KEYHI_EL1], SYS_ ## key ## KEYHI_EL1);	\
+})
+
+static void __hyp_text __ptrauth_restore_state(struct kvm_cpu_context *ctxt)
+{
+
+	if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH)) {
+		__ptrauth_restore_key(ctxt->sys_regs, APIA);
+		__ptrauth_restore_key(ctxt->sys_regs, APIB);
+		__ptrauth_restore_key(ctxt->sys_regs, APDA);
+		__ptrauth_restore_key(ctxt->sys_regs, APDB);
+	}
+
+	if (cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
+		__ptrauth_restore_key(ctxt->sys_regs, APGA);
+	}
+}
+
+void __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
+					  struct kvm_cpu_context *host_ctxt,
+					  struct kvm_cpu_context *guest_ctxt)
+{
+	if (!__ptrauth_is_enabled(vcpu))
+		return;
+
+	__ptrauth_save_state(host_ctxt);
+	__ptrauth_restore_state(guest_ctxt);
+}
+
+void __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
+					 struct kvm_cpu_context *host_ctxt,
+					 struct kvm_cpu_context *guest_ctxt)
+{
+	if (!__ptrauth_is_enabled(vcpu))
+		return;
+
+	__ptrauth_save_state(guest_ctxt);
+	__ptrauth_restore_state(host_ctxt);
+}
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 2205f0be3ced..d9be2762ac1a 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -315,6 +315,8 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__sysreg_restore_guest_state(guest_ctxt);
 	__debug_restore_state(vcpu, kern_hyp_va(vcpu->arch.debug_ptr), guest_ctxt);
 
+	__ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt);
+
 	/* Jump in the fire! */
 again:
 	exit_code = __guest_enter(vcpu, host_ctxt);
@@ -373,6 +375,8 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	fp_enabled = __fpsimd_enabled();
 
+	__ptrauth_switch_to_host(vcpu, host_ctxt, guest_ctxt);
+
 	__sysreg_save_guest_state(guest_ctxt);
 	__sysreg32_save_state(vcpu);
 	__timer_disable_traps(vcpu);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 1830ebc227d1..5fe3b2588bec 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -838,6 +838,32 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
 	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
 
+
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
+}
+
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
+}
+
+static bool trap_ptrauth(struct kvm_vcpu *vcpu,
+			 struct sys_reg_params *p,
+			 const struct sys_reg_desc *rd)
+{
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
+	return false;
+}
+
+#define __PTRAUTH_KEY(k)						\
+	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k }
+
+#define PTRAUTH_KEY(k)							\
+	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
+	__PTRAUTH_KEY(k ## KEYHI_EL1)
+
 static bool access_cntp_tval(struct kvm_vcpu *vcpu,
 		struct sys_reg_params *p,
 		const struct sys_reg_desc *r)
@@ -1156,6 +1182,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
 	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
 
+	PTRAUTH_KEY(APIA),
+	PTRAUTH_KEY(APIB),
+	PTRAUTH_KEY(APDA),
+	PTRAUTH_KEY(APDB),
+	PTRAUTH_KEY(APGA),
+
 	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
 	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
 	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCHv2 11/12] arm64: enable pointer authentication
  2017-11-27 16:37 [PATCHv2 00/12] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (9 preceding siblings ...)
  2017-11-27 16:38 ` [PATCHv2 10/12] arm64/kvm: context-switch ptrauth registers Mark Rutland
@ 2017-11-27 16:38 ` Mark Rutland
  2017-11-27 16:38 ` [PATCHv2 12/12] arm64: docs: document " Mark Rutland
  11 siblings, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2017-11-27 16:38 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, cdall, kvmarm, linux-arch, marc.zyngier,
	mark.rutland, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

Now that all the necessary bits are in place for userspace / KVM guest
pointer authentication, add the necessary Kconfig logic to allow this to
be enabled.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/Kconfig | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a93339f5178f..f7cb4ca8a6fc 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1013,6 +1013,29 @@ config ARM64_PMEM
 
 endmenu
 
+menu "ARMv8.3 architectural features"
+
+config ARM64_POINTER_AUTHENTICATION
+	bool "Enable support for pointer authentication"
+	default y
+	help
+	  Pointer authentication (part of the ARMv8.3 Extensions) provides
+	  instructions for signing and authenticating pointers against secret
+	  keys, which can be used to mitigate Return Oriented Programming (ROP)
+	  and other attacks.
+
+	  This option enables these instructions at EL0 (i.e. for userspace).
+
+	  Choosing this option will cause the kernel to initialise secret keys
+	  for each process at exec() time, with these keys being
+	  context-switched along with the process.
+
+	  The feature is detected at runtime. If the feature is not present in
+	  hardware it will not be advertised to userspace nor will it be
+	  enabled.
+
+endmenu
+
 config ARM64_SVE
 	bool "ARM Scalable Vector Extension support"
 	default y
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCHv2 12/12] arm64: docs: document pointer authentication
  2017-11-27 16:37 [PATCHv2 00/12] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (10 preceding siblings ...)
  2017-11-27 16:38 ` [PATCHv2 11/12] arm64: enable pointer authentication Mark Rutland
@ 2017-11-27 16:38 ` Mark Rutland
  2017-11-28 15:07   ` Andrew Jones
  11 siblings, 1 reply; 26+ messages in thread
From: Mark Rutland @ 2017-11-27 16:38 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, cdall, kvmarm, linux-arch, marc.zyngier,
	mark.rutland, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

Now that we've added code to support pointer authentication, add some
documentation so that people can figure out if/how to use it.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yao Qi <yao.qi@arm.com>
---
 Documentation/arm64/booting.txt                |  8 +++
 Documentation/arm64/elf_hwcaps.txt             |  6 ++
 Documentation/arm64/pointer-authentication.txt | 85 ++++++++++++++++++++++++++
 3 files changed, 99 insertions(+)
 create mode 100644 Documentation/arm64/pointer-authentication.txt

diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
index 8d0df62c3fe0..8df9f4658d6f 100644
--- a/Documentation/arm64/booting.txt
+++ b/Documentation/arm64/booting.txt
@@ -205,6 +205,14 @@ Before jumping into the kernel, the following conditions must be met:
     ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b0.
   - The DT or ACPI tables must describe a GICv2 interrupt controller.
 
+  For CPUs with pointer authentication functionality:
+  - If EL3 is present:
+    SCR_EL3.APK (bit 16) must be initialised to 0b1
+    SCR_EL3.API (bit 17) must be initialised to 0b1
+  - If the kernel is entered at EL1:
+    HCR_EL2.APK (bit 40) must be initialised to 0b1
+    HCR_EL2.API (bit 41) must be initialised to 0b1
+
 The requirements described above for CPU mode, caches, MMUs, architected
 timers, coherency and system registers apply to all CPUs.  All CPUs must
 enter the kernel in the same exception level.
diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
index 89edba12a9e0..6cf40e419a9d 100644
--- a/Documentation/arm64/elf_hwcaps.txt
+++ b/Documentation/arm64/elf_hwcaps.txt
@@ -158,3 +158,9 @@ HWCAP_SHA512
 HWCAP_SVE
 
     Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001.
+
+HWCAP_APIA
+
+    EL0 AddPac and Auth functionality using APIAKey_EL1 is enabled, as
+    described by Documentation/arm64/pointer-authentication.txt.
+
diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
new file mode 100644
index 000000000000..e9b5c6bdb84f
--- /dev/null
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -0,0 +1,85 @@
+Pointer authentication in AArch64 Linux
+=======================================
+
+Author: Mark Rutland <mark.rutland@arm.com>
+Date: 2017-07-19
+
+This document briefly describes the provision of pointer authentication
+functionality in AArch64 Linux.
+
+
+Architecture overview
+---------------------
+
+The ARMv8.3 Pointer Authentication extension adds primitives that can be
+used to mitigate certain classes of attack where an attacker can corrupt
+the contents of some memory (e.g. the stack).
+
+The extension uses a Pointer Authentication Code (PAC) to determine
+whether pointers have been modified unexpectedly. A PAC is derived from
+a pointer, another value (such as the stack pointer), and a secret key
+held in system registers.
+
+The extension adds instructions to insert a valid PAC into a pointer,
+and to verify/remove the PAC from a pointer. The PAC occupies a number
+of high-order bits of the pointer, which varies depending on the
+configured virtual address size and whether pointer tagging is in use.
+
+A subset of these instructions have been allocated from the HINT
+encoding space. In the absence of the extension (or when disabled),
+these instructions behave as NOPs. Applications and libraries using
+these instructions operate correctly regardless of the presence of the
+extension.
+
+
+Basic support
+-------------
+
+When CONFIG_ARM64_POINTER_AUTHENTICATION is selected, and relevant HW
+support is present, the kernel will assign a random APIAKey value to
+each process at exec*() time. This key is shared by all threads within
+the process, and the key is preserved across fork(). Presence of
+functionality using APIAKey is advertised via HWCAP_APIA.
+
+Recent versions of GCC can compile code with APIAKey-based return
+address protection when passed the -msign-return-address option. This
+uses instructions in the HINT space, and such code can run on systems
+without the pointer authentication extension.
+
+The remaining instruction and data keys (APIBKey, APDAKey, APDBKey) are
+reserved for future use, and instructions using these keys must not be
+used by software until a purpose and scope for their use has been
+decided. To enable future software using these keys to function on
+contemporary kernels, where possible, instructions using these keys are
+made to behave as NOPs.
+
+The generic key (APGAKey) is currently unsupported. Instructions using
+the generic key must not be used by software.
+
+
+Debugging
+---------
+
+When CONFIG_ARM64_POINTER_AUTHENTICATION is selected, and relevant HW
+support is present, the kernel will expose the position of TTBR0 PAC
+bits in the NT_ARM_PAC_MASK regset (struct user_pac_mask), which
+userspace can acquire via PTRACE_GETREGSET.
+
+Separate masks are exposed for data pointers and instruction pointers,
+as the set of PAC bits can vary between the two. Debuggers should not
+expect that HWCAP_APIA implies the presence (or non-presence) of this
+regset -- in future the kernel may support the use of APIBKey, APDAKey,
+and/or APDBKey, even in the absence of APIAKey.
+
+Note that the masks apply to TTBR0 addresses, and are not valid to apply
+to TTBR1 addresses (e.g. kernel pointers).
+
+
+Virtualization
+--------------
+
+When CONFIG_ARM64_POINTER_AUTHENTICATION is selected, and uniform HW
+support is present, KVM will context switch all keys used by vCPUs.
+Otherwise, the feature is disabled. When disabled, accesses to keys, or
+use of instructions enabled within the guest will trap to EL2, and an
+UNDEFINED exception will be injected into the guest.
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread
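
As a concrete illustration of the Debugging section above, a debugger
could fetch the PAC masks roughly as follows (a sketch: struct
user_pac_mask mirrors the layout proposed by this series, the fallback
NT_ARM_PAC_MASK value is an assumption, and error handling is elided):

  #include <stdio.h>
  #include <stdint.h>
  #include <sys/ptrace.h>
  #include <sys/types.h>
  #include <sys/uio.h>
  #include <elf.h>

  #ifndef NT_ARM_PAC_MASK
  #define NT_ARM_PAC_MASK 0x406   /* assumed note type; use the uapi header */
  #endif

  /* Layout proposed by this series for the NT_ARM_PAC_MASK regset. */
  struct user_pac_mask {
          uint64_t data_mask;
          uint64_t insn_mask;
  };

  static void dump_pac_masks(pid_t pid)
  {
          struct user_pac_mask masks;
          struct iovec iov = { .iov_base = &masks, .iov_len = sizeof(masks) };

          /* The masks are valid for TTBR0 (userspace) addresses only. */
          if (ptrace(PTRACE_GETREGSET, pid, (void *)NT_ARM_PAC_MASK, &iov) == 0)
                  printf("data mask %#llx, insn mask %#llx\n",
                         (unsigned long long)masks.data_mask,
                         (unsigned long long)masks.insn_mask);
  }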

* Re: [PATCHv2 12/12] arm64: docs: document pointer authentication
  2017-11-27 16:38 ` [PATCHv2 12/12] arm64: docs: document " Mark Rutland
@ 2017-11-28 15:07   ` Andrew Jones
  2017-12-04 12:39     ` Mark Rutland
  0 siblings, 1 reply; 26+ messages in thread
From: Andrew Jones @ 2017-11-28 15:07 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-arch, cdall, arnd, marc.zyngier,
	catalin.marinas, yao.qi, kernel-hardening, will.deacon,
	linux-kernel, awallis, kvmarm

Hi Mark,

On Mon, Nov 27, 2017 at 04:38:06PM +0000, Mark Rutland wrote:
> Now that we've added code to support pointer authentication, add some
> documentation so that people can figure out if/how to use it.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Yao Qi <yao.qi@arm.com>
> ---
>  Documentation/arm64/booting.txt                |  8 +++
>  Documentation/arm64/elf_hwcaps.txt             |  6 ++
>  Documentation/arm64/pointer-authentication.txt | 85 ++++++++++++++++++++++++++
>  3 files changed, 99 insertions(+)
>  create mode 100644 Documentation/arm64/pointer-authentication.txt
> 
> diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
> index 8d0df62c3fe0..8df9f4658d6f 100644
> --- a/Documentation/arm64/booting.txt
> +++ b/Documentation/arm64/booting.txt
> @@ -205,6 +205,14 @@ Before jumping into the kernel, the following conditions must be met:
>      ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b0.
>    - The DT or ACPI tables must describe a GICv2 interrupt controller.
>  
> +  For CPUs with pointer authentication functionality:
> +  - If EL3 is present:
> +    SCR_EL3.APK (bit 16) must be initialised to 0b1
> +    SCR_EL3.API (bit 17) must be initialised to 0b1
> +  - If the kernel is entered at EL1:
> +    HCR_EL2.APK (bit 40) must be initialised to 0b1
> +    HCR_EL2.API (bit 41) must be initialised to 0b1
> +
>  The requirements described above for CPU mode, caches, MMUs, architected
>  timers, coherency and system registers apply to all CPUs.  All CPUs must
>  enter the kernel in the same exception level.
> diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
> index 89edba12a9e0..6cf40e419a9d 100644
> --- a/Documentation/arm64/elf_hwcaps.txt
> +++ b/Documentation/arm64/elf_hwcaps.txt
> @@ -158,3 +158,9 @@ HWCAP_SHA512
>  HWCAP_SVE
>  
>      Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001.
> +
> +HWCAP_APIA
> +
> +    EL0 AddPac and Auth functionality using APIAKey_EL1 is enabled, as
> +    described by Documentation/arm64/pointer-authentication.txt.
> +
> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
> new file mode 100644
> index 000000000000..e9b5c6bdb84f
> --- /dev/null
> +++ b/Documentation/arm64/pointer-authentication.txt
> @@ -0,0 +1,85 @@
> +Pointer authentication in AArch64 Linux
> +=======================================
> +
> +Author: Mark Rutland <mark.rutland@arm.com>
> +Date: 2017-07-19
> +
> +This document briefly describes the provision of pointer authentication
> +functionality in AArch64 Linux.
> +
> +
> +Architecture overview
> +---------------------
> +
> +The ARMv8.3 Pointer Authentication extension adds primitives that can be
> +used to mitigate certain classes of attack where an attacker can corrupt
> +the contents of some memory (e.g. the stack).
> +
> +The extension uses a Pointer Authentication Code (PAC) to determine
> +whether pointers have been modified unexpectedly. A PAC is derived from
> +a pointer, another value (such as the stack pointer), and a secret key
> +held in system registers.
> +
> +The extension adds instructions to insert a valid PAC into a pointer,
> +and to verify/remove the PAC from a pointer. The PAC occupies a number
> +of high-order bits of the pointer, which varies depending on the
> +configured virtual address size and whether pointer tagging is in use.
> +
> +A subset of these instructions have been allocated from the HINT
> +encoding space. In the absence of the extension (or when disabled),
> +these instructions behave as NOPs. Applications and libraries using
> +these instructions operate correctly regardless of the presence of the
> +extension.

Correctly, but obviously without the additional security. So I assume
applications that demand this security are expected to probe for it
themselves, presumably by checking the HWCAP. Is that correct?

> +
> +
> +Basic support
> +-------------
> +
> +When CONFIG_ARM64_POINTER_AUTHENTICATION is selected, and relevant HW
> +support is present, the kernel will assign a random APIAKey value to
> +each process at exec*() time. This key is shared by all threads within
> +the process, and the key is preserved across fork(). Presence of
> +functionality using APIAKey is advertised via HWCAP_APIA.
> +
> +Recent versions of GCC can compile code with APIAKey-based return
> +address protection when passed the -msign-return-address option. This
> +uses instructions in the HINT space, and such code can run on systems
> +without the pointer authentication extension.
> +
> +The remaining instruction and data keys (APIBKey, APDAKey, APDBKey) are
> +reserved for future use, and instructions using these keys must not be
> +used by software until a purpose and scope for their use has been
> +decided. To enable future software using these keys to function on
> +contemporary kernels, where possible, instructions using these keys are
> +made to behave as NOPs.
> +
> +The generic key (APGAKey) is currently unsupported. Instructions using
> +the generic key must not be used by software.
> +
> +
> +Debugging
> +---------
> +
> +When CONFIG_ARM64_POINTER_AUTHENTICATION is selected, and relevant HW
> +support is present, the kernel will expose the position of TTBR0 PAC
> +bits in the NT_ARM_PAC_MASK regset (struct user_pac_mask), which
> +userspace can acquire via PTRACE_GETREGSET.
> +
> +Separate masks are exposed for data pointers and instruction pointers,
> +as the set of PAC bits can vary between the two. Debuggers should not
> +expect that HWCAP_APIA implies the presence (or non-presence) of this
> +regset -- in future the kernel may support the use of APIBKey, APDAKey,
> +and/or APDBKey, even in the absence of APIAKey.
> +
> +Note that the masks apply to TTBR0 addresses, and are not valid to apply
> +to TTBR1 addresses (e.g. kernel pointers).
> +
> +
> +Virtualization
> +--------------
> +
> +When CONFIG_ARM64_POINTER_AUTHENTICATION is selected, and uniform HW
> +support is present, KVM will context switch all keys used by vCPUs.
> +Otherwise, the feature is disabled. When disabled, accesses to keys, or
> +use of instructions enabled within the guest will trap to EL2, and an
> +UNDEFINED exception will be injected into the guest.

If host applications will just run, with the instructions behaving like
NOPs, when the extension is either not present or not enabled, then
shouldn't guest applications also just run? I.e. instead of injecting
UNDEF, just treat the instructions as NOPs. Or did I misunderstand the
trapping? Does use of the instructions at EL0 trap to EL1 or EL2?

Thanks,
drew

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 12/12] arm64: docs: document pointer authentication
  2017-11-28 15:07   ` Andrew Jones
@ 2017-12-04 12:39     ` Mark Rutland
  2017-12-04 12:49       ` Andrew Jones
  0 siblings, 1 reply; 26+ messages in thread
From: Mark Rutland @ 2017-12-04 12:39 UTC (permalink / raw)
  To: Andrew Jones
  Cc: linux-arm-kernel, linux-arch, cdall, arnd, marc.zyngier,
	catalin.marinas, yao.qi, kernel-hardening, will.deacon,
	linux-kernel, awallis, kvmarm

On Tue, Nov 28, 2017 at 04:07:26PM +0100, Andrew Jones wrote:
> Hi Mark,

Hi Drew,

> On Mon, Nov 27, 2017 at 04:38:06PM +0000, Mark Rutland wrote:
> > +Architecture overview
> > +---------------------
> > +
> > +The ARMv8.3 Pointer Authentication extension adds primitives that can be
> > +used to mitigate certain classes of attack where an attacker can corrupt
> > +the contents of some memory (e.g. the stack).
> > +
> > +The extension uses a Pointer Authentication Code (PAC) to determine
> > +whether pointers have been modified unexpectedly. A PAC is derived from
> > +a pointer, another value (such as the stack pointer), and a secret key
> > +held in system registers.
> > +
> > +The extension adds instructions to insert a valid PAC into a pointer,
> > +and to verify/remove the PAC from a pointer. The PAC occupies a number
> > +of high-order bits of the pointer, which varies depending on the
> > +configured virtual address size and whether pointer tagging is in use.
> > +
> > +A subset of these instructions have been allocated from the HINT
> > +encoding space. In the absence of the extension (or when disabled),
> > +these instructions behave as NOPs. Applications and libraries using
> > +these instructions operate correctly regardless of the presence of the
> > +extension.
> 
> Correctly, but obviously without the additional security. So I assume
> applications that demand this security are expected to probe for it
> themselves, presumably by checking the HWCAP. Is that correct?

Yes. Applications which wish to mandate pointer authentication
(presumably using instructions outside of the HINT space) must check the
relevant HWCAP first.
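
For illustration, such a userspace probe might look like the sketch
below (HWCAP_APIA is the name this series proposes; the fallback bit
value here is only an assumption, and real code should take the
definition from the uapi asm/hwcap.h):

  #include <stdio.h>
  #include <sys/auxv.h>

  #ifndef HWCAP_APIA
  #define HWCAP_APIA      (1 << 23)       /* assumed bit; see asm/hwcap.h */
  #endif

  int main(void)
  {
          /* The kernel advertises APIAKey support via the auxiliary vector. */
          if (getauxval(AT_HWCAP) & HWCAP_APIA)
                  printf("APIAKey pointer authentication available\n");
          else
                  printf("no ptrauth: HINT-space instructions are NOPs\n");
          return 0;
  }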

[...]

> > +Virtualization
> > +--------------
> > +
> > +When CONFIG_ARM64_POINTER_AUTHENTICATION is selected, and uniform HW
> > +support is present, KVM will context switch all keys used by vCPUs.
> > +Otherwise, the feature is disabled. When disabled, accesses to keys, or
> > +use of instructions enabled within the guest will trap to EL2, and an
> > +UNDEFINED exception will be injected into the guest.
> 
> If host applications will just run, with the instructions behaving like
> NOPs, when the extension is either not present or not enabled, then
> shouldn't guest applications also just run?

The enabled/disabled wording is probably the confusing bit here.

At EL1 we have conditional enables for instructions using
AP{I,D}{A,B}Key, which behave as NOPs when disabled.

At EL2 we have a single conditional trap for all instructions using
pointer authentication, that traps to EL2 when instructions are not
NOP'd by EL1.

So "disabled by EL2" is actually "trapped by EL2", and "disabled by EL1"
is "NOP'd by EL1".

> I.e. instead of injecting UNDEF, just treat the instructions as NOPs.
> Or did I misunderstand the trapping?

I think the documentation explained it poorly. Did the above help?

> Does use of the instructions at EL0 trap to EL1 or EL2?

If disabled by EL1, the instructions behave as NOPs (regardless of the
EL2 traps).

If enabled by EL1, but trapped by EL2, the instructions trap to EL2.

If enabled by EL1, and not trapped by EL2, the instructions work as
usual.
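
As a rough sketch of the above in C (SCTLR_EL1.EnIA and HCR_EL2.API are
the real control bits for APIAKey instructions; the enum and helper are
purely illustrative):

  #include <stdbool.h>

  enum pac_behaviour { PAC_NOP, PAC_TRAP_TO_EL2, PAC_EXECUTE };

  /* Effective behaviour of an EL0 AddPac/Auth instruction. */
  static enum pac_behaviour pac_insn_behaviour(bool sctlr_el1_enia,
                                               bool hcr_el2_api)
  {
          if (!sctlr_el1_enia)
                  return PAC_NOP;         /* "disabled by EL1" */
          if (!hcr_el2_api)
                  return PAC_TRAP_TO_EL2; /* KVM sees ESR_ELx_EC_PAC */
          return PAC_EXECUTE;             /* sign/authenticate as usual */
  }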

I'll see if I can document this better.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 12/12] arm64: docs: document pointer authentication
  2017-12-04 12:39     ` Mark Rutland
@ 2017-12-04 12:49       ` Andrew Jones
  0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2017-12-04 12:49 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-arch, cdall, arnd, marc.zyngier,
	catalin.marinas, yao.qi, kernel-hardening, will.deacon,
	linux-kernel, awallis, kvmarm

On Mon, Dec 04, 2017 at 12:39:33PM +0000, Mark Rutland wrote:
> On Tue, Nov 28, 2017 at 04:07:26PM +0100, Andrew Jones wrote:
> > Hi Mark,
> 
> Hi Drew,
> 
> > On Mon, Nov 27, 2017 at 04:38:06PM +0000, Mark Rutland wrote:
> > > +Architecture overview
> > > +---------------------
> > > +
> > > +The ARMv8.3 Pointer Authentication extension adds primitives that can be
> > > +used to mitigate certain classes of attack where an attacker can corrupt
> > > +the contents of some memory (e.g. the stack).
> > > +
> > > +The extension uses a Pointer Authentication Code (PAC) to determine
> > > +whether pointers have been modified unexpectedly. A PAC is derived from
> > > +a pointer, another value (such as the stack pointer), and a secret key
> > > +held in system registers.
> > > +
> > > +The extension adds instructions to insert a valid PAC into a pointer,
> > > +and to verify/remove the PAC from a pointer. The PAC occupies a number
> > > +of high-order bits of the pointer, which varies depending on the
> > > +configured virtual address size and whether pointer tagging is in use.
> > > +
> > > +A subset of these instructions have been allocated from the HINT
> > > +encoding space. In the absence of the extension (or when disabled),
> > > +these instructions behave as NOPs. Applications and libraries using
> > > +these instructions operate correctly regardless of the presence of the
> > > +extension.
> > 
> > Correctly, but obviously without the additional security. So I assume
> > applications that demand this security are expected to probe for it
> > themselves, presumably by checking the HWCAP. Is that correct?
> 
> Yes. Applications which wish to mandate pointer authentication
> (presumably using instructions outside of the HINT space) must check the
> relevant HWCAP first.
> 
> [...]
> 
> > > +Virtualization
> > > +--------------
> > > +
> > > +When CONFIG_ARM64_POINTER_AUTHENTICATION is selected, and uniform HW
> > > +support is present, KVM will context switch all keys used by vCPUs.
> > > +Otherwise, the feature is disabled. When disabled, accesses to keys, or
> > > +use of instructions enabled within the guest will trap to EL2, and an
> > > +UNDEFINED exception will be injected into the guest.
> > 
> > If host applications will just run, with the instructions behaving like
> > NOPs, when the extension is either not present or not enabled, then
> > shouldn't guest applications also just run?
> 
> The enabled/disabled wording is probably the confusing bit here.
> 
> At EL1 we have conditional enables for instructions using
> AP{I,D}{A,B}Key, which behave as NOPs when disabled.
> 
> At EL2 we have a single conditional trap for all instructions using
> pointer authentication, that traps to EL2 when instructions are not
> NOP'd by EL1.
> 
> So "disabled by EL2" is actually "trapped by EL2", and "disabled by EL1"
> is "NOP'd by EL1".
> 
> > I.e. instead of injecting UNDEF, just treat the instructions as NOPs.
> > Or did I misunderstand the trapping?
> 
> I think the documentation explained it poorly. Did the above help?

Yes, both the above and the below have helped me understand. Thanks for
the clarification!

drew

> 
> > Does use of the instructions at EL0 trap to EL1 or EL2?
> 
> If disabled by EL1, the instructions behave as NOPs (regardless of the
> EL2 traps).
> 
> If enabled by EL1, but trapped by EL2, the instructions trap to EL2.
> 
> If enabled by EL1, and not trapped by EL2, the instructions work as
> usual.
> 
> I'll see if I can document this better.
> 
> Thanks,
> Mark.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 10/12] arm64/kvm: context-switch ptrauth registers
  2017-11-27 16:38 ` [PATCHv2 10/12] arm64/kvm: context-switch ptrauth registers Mark Rutland
@ 2018-02-06 12:38   ` Christoffer Dall
  2018-03-09 14:28     ` Mark Rutland
  0 siblings, 1 reply; 26+ messages in thread
From: Christoffer Dall @ 2018-02-06 12:38 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, arnd, catalin.marinas, cdall, kvmarm,
	linux-arch, marc.zyngier, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

On Mon, Nov 27, 2017 at 04:38:04PM +0000, Mark Rutland wrote:
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work, with
> a semi-lazy context switch of the pointer auth state.
> 
> When we schedule a vcpu, 

That's not quite what the code does; the code only does this when we
schedule back a preempted or blocked vcpu thread.

> we disable guest usage of pointer
> authentication instructions and accesses to the keys. While these are
> disabled, we avoid context-switching the keys. When we trap the guest
> trying to use pointer authentication functionality, we change to eagerly
> context-switching the keys, and enable the feature. The next time the
> vcpu is scheduled out/in, we start again.
> 
> Pointer authentication consists of address authentication and generic
> authentication, and CPUs in a system might have varied support for
> either. Where support for either feature is not uniform, it is hidden
> from guests via ID register emulation, as a result of the cpufeature
> framework in the host.
> 
> Unfortunately, address authentication and generic authentication cannot
> be trapped separately, as the architecture provides a single EL2 trap
> covering both. If we wish to expose one without the other, we cannot
> prevent a (badly-written) guest from intermittently using a feature
> which is not uniformly supported (when scheduled on a physical CPU which
> supports the relevant feature). 

We could choose to always trap and emulate in software in this case,
couldn't we?  (not saying we should though).

Also, this patch doesn't let userspace decide if we should hide or
expose the feature to guests, and will expose new system registers to
userspace.  That means that on hardware supporting pointer
authentication, with this patch, it's not possible to migrate to a
machine which doesn't have the support.  That's probably a good thing
(false sense of security etc.), but I wonder if we should have a
mechanism for userspace to ask for pointer authentication in the guest
and only if that's enabled, do we expose the feature to the guest and in
the system register list to user space as well?
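
For instance, that could be keyed off an opt-in vcpu feature bit, along
the lines of this sketch (KVM_ARM_VCPU_PTRAUTH is a hypothetical flag
name, not something this series defines):

  /* Expose ptrauth to a guest only when userspace opted in for the VM. */
  static bool kvm_vcpu_has_ptrauth(const struct kvm_vcpu *vcpu)
  {
          return test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features) &&
                 cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH) &&
                 cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
  }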

> When the guest is scheduled on a
> physical CPU lacking the feature, these atetmps will result in an UNDEF

attempts

> being taken by the guest.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Christoffer Dall <cdall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
>  arch/arm64/include/asm/kvm_host.h | 23 +++++++++-
>  arch/arm64/include/asm/kvm_hyp.h  |  7 +++
>  arch/arm64/kvm/handle_exit.c      | 21 +++++++++
>  arch/arm64/kvm/hyp/Makefile       |  1 +
>  arch/arm64/kvm/hyp/ptrauth-sr.c   | 91 +++++++++++++++++++++++++++++++++++++++
>  arch/arm64/kvm/hyp/switch.c       |  4 ++
>  arch/arm64/kvm/sys_regs.c         | 32 ++++++++++++++
>  7 files changed, 178 insertions(+), 1 deletion(-)
>  create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 39184aa3e2f2..2fc21a2a75a7 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -136,6 +136,18 @@ enum vcpu_sysreg {
>  	PMSWINC_EL0,	/* Software Increment Register */
>  	PMUSERENR_EL0,	/* User Enable Register */
>  
> +	/* Pointer Authentication Registers */
> +	APIAKEYLO_EL1,
> +	APIAKEYHI_EL1,
> +	APIBKEYLO_EL1,
> +	APIBKEYHI_EL1,
> +	APDAKEYLO_EL1,
> +	APDAKEYHI_EL1,
> +	APDBKEYLO_EL1,
> +	APDBKEYHI_EL1,
> +	APGAKEYLO_EL1,
> +	APGAKEYHI_EL1,
> +
>  	/* 32bit specific registers. Keep them at the end of the range */
>  	DACR32_EL2,	/* Domain Access Control Register */
>  	IFSR32_EL2,	/* Instruction Fault Status Register */
> @@ -363,10 +375,19 @@ static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
>  	__kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr);
>  }
>  
> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
> +
>  static inline void kvm_arch_hardware_unsetup(void) {}
>  static inline void kvm_arch_sync_events(struct kvm *kvm) {}
>  static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
> -static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
> +
> +static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
> +{
> +	kvm_arm_vcpu_ptrauth_disable(vcpu);
> +}
> +

I still find this decision to begin trapping again quite arbitrary, and
would at least prefer this to be in vcpu_load (which would make the
behavior match the commit text as well).

My expectation would be that if a guest is running software with pointer
authentication enabled, then it's likely to either keep using the
feature, or not use it at all, so I would make this a one-time flag.


Thanks,
-Christoffer

>  static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
>  
>  void kvm_arm_init_debug(void);
> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> index 08d3bb66c8b7..d0dd924cb175 100644
> --- a/arch/arm64/include/asm/kvm_hyp.h
> +++ b/arch/arm64/include/asm/kvm_hyp.h
> @@ -152,6 +152,13 @@ void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
>  void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
>  bool __fpsimd_enabled(void);
>  
> +void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
> +			       struct kvm_cpu_context *host_ctxt,
> +			       struct kvm_cpu_context *guest_ctxt);
> +void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
> +			      struct kvm_cpu_context *host_ctxt,
> +			      struct kvm_cpu_context *guest_ctxt);
> +
>  u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
>  void __noreturn __hyp_do_panic(unsigned long, ...);
>  
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index b71247995469..d9aff3c86551 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -136,6 +136,26 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	return ret;
>  }
>  
> +/*
> + * Handle the guest trying to use a ptrauth instruction, or trying to access a
> + * ptrauth register.
> + */
> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
> +{
> +	if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH) ||
> +	    cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
> +		kvm_arm_vcpu_ptrauth_enable(vcpu);
> +	} else {
> +		kvm_inject_undefined(vcpu);
> +	}
> +}
> +
> +static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +{
> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
> +	return 1;
> +}
> +
>  static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  {
>  	u32 hsr = kvm_vcpu_get_hsr(vcpu);
> @@ -176,6 +196,7 @@ static exit_handle_fn arm_exit_handlers[] = {
>  	[ESR_ELx_EC_BKPT32]	= kvm_handle_guest_debug,
>  	[ESR_ELx_EC_BRK64]	= kvm_handle_guest_debug,
>  	[ESR_ELx_EC_FP_ASIMD]	= handle_no_fpsimd,
> +	[ESR_ELx_EC_PAC]	= kvm_handle_ptrauth,
>  };
>  
>  static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
> diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
> index f04400d494b7..2c2c3bd90cc0 100644
> --- a/arch/arm64/kvm/hyp/Makefile
> +++ b/arch/arm64/kvm/hyp/Makefile
> @@ -19,6 +19,7 @@ obj-$(CONFIG_KVM_ARM_HOST) += fpsimd.o
>  obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
>  obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
>  obj-$(CONFIG_KVM_ARM_HOST) += s2-setup.o
> +obj-$(CONFIG_KVM_ARM_HOST) += ptrauth-sr.o
>  
>  # KVM code is run at a different exception code with a different map, so
>  # compiler instrumentation that inserts callbacks or checks into the code may
> diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
> new file mode 100644
> index 000000000000..2784fb373296
> --- /dev/null
> +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
> @@ -0,0 +1,91 @@
> +/*
> + * Copyright (C) 2017 ARM Ltd
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <linux/compiler.h>
> +#include <linux/kvm_host.h>
> +
> +#include <asm/cpucaps.h>
> +#include <asm/cpufeature.h>
> +#include <asm/kvm_asm.h>
> +#include <asm/kvm_hyp.h>
> +
> +static bool __hyp_text __ptrauth_is_enabled(struct kvm_vcpu *vcpu)
> +{
> +	return vcpu->arch.hcr_el2 & (HCR_API | HCR_APK);
> +}
> +
> +#define __ptrauth_save_key(regs, key)						\
> +({										\
> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
> +})
> +
> +static void __hyp_text __ptrauth_save_state(struct kvm_cpu_context *ctxt)
> +{
> +	if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH)) {
> +		__ptrauth_save_key(ctxt->sys_regs, APIA);
> +		__ptrauth_save_key(ctxt->sys_regs, APIB);
> +		__ptrauth_save_key(ctxt->sys_regs, APDA);
> +		__ptrauth_save_key(ctxt->sys_regs, APDB);
> +	}
> +
> +	if (cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
> +		__ptrauth_save_key(ctxt->sys_regs, APGA);
> +	}
> +}
> +
> +#define __ptrauth_restore_key(regs, key) 					\
> +({										\
> +	write_sysreg_s(regs[key ## KEYLO_EL1], SYS_ ## key ## KEYLO_EL1);	\
> +	write_sysreg_s(regs[key ## KEYHI_EL1], SYS_ ## key ## KEYHI_EL1);	\
> +})
> +
> +static void __hyp_text __ptrauth_restore_state(struct kvm_cpu_context *ctxt)
> +{
> +
> +	if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH)) {
> +		__ptrauth_restore_key(ctxt->sys_regs, APIA);
> +		__ptrauth_restore_key(ctxt->sys_regs, APIB);
> +		__ptrauth_restore_key(ctxt->sys_regs, APDA);
> +		__ptrauth_restore_key(ctxt->sys_regs, APDB);
> +	}
> +
> +	if (cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
> +		__ptrauth_restore_key(ctxt->sys_regs, APGA);
> +	}
> +}
> +
> +void __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
> +					  struct kvm_cpu_context *host_ctxt,
> +					  struct kvm_cpu_context *guest_ctxt)
> +{
> +	if (!__ptrauth_is_enabled(vcpu))
> +		return;
> +
> +	__ptrauth_save_state(host_ctxt);
> +	__ptrauth_restore_state(guest_ctxt);
> +}
> +
> +void __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
> +					 struct kvm_cpu_context *host_ctxt,
> +					 struct kvm_cpu_context *guest_ctxt)
> +{
> +	if (!__ptrauth_is_enabled(vcpu))
> +		return;
> +
> +	__ptrauth_save_state(guest_ctxt);
> +	__ptrauth_restore_state(host_ctxt);
> +}
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index 2205f0be3ced..d9be2762ac1a 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -315,6 +315,8 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
>  	__sysreg_restore_guest_state(guest_ctxt);
>  	__debug_restore_state(vcpu, kern_hyp_va(vcpu->arch.debug_ptr), guest_ctxt);
>  
> +	__ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt);
> +
>  	/* Jump in the fire! */
>  again:
>  	exit_code = __guest_enter(vcpu, host_ctxt);
> @@ -373,6 +375,8 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
>  
>  	fp_enabled = __fpsimd_enabled();
>  
> +	__ptrauth_switch_to_host(vcpu, host_ctxt, guest_ctxt);
> +
>  	__sysreg_save_guest_state(guest_ctxt);
>  	__sysreg32_save_state(vcpu);
>  	__timer_disable_traps(vcpu);
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 1830ebc227d1..5fe3b2588bec 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -838,6 +838,32 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
>  	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
>  
> +
> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
> +{
> +	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
> +}
> +
> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
> +{
> +	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
> +}
> +
> +static bool trap_ptrauth(struct kvm_vcpu *vcpu,
> +			 struct sys_reg_params *p,
> +			 const struct sys_reg_desc *rd)
> +{
> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
> +	return false;
> +}
> +
> +#define __PTRAUTH_KEY(k)						\
> +	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k }
> +
> +#define PTRAUTH_KEY(k)							\
> +	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
> +	__PTRAUTH_KEY(k ## KEYHI_EL1)
> +
>  static bool access_cntp_tval(struct kvm_vcpu *vcpu,
>  		struct sys_reg_params *p,
>  		const struct sys_reg_desc *r)
> @@ -1156,6 +1182,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
>  	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
>  
> +	PTRAUTH_KEY(APIA),
> +	PTRAUTH_KEY(APIB),
> +	PTRAUTH_KEY(APDA),
> +	PTRAUTH_KEY(APDB),
> +	PTRAUTH_KEY(APGA),
> +
>  	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
>  	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
>  	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
> -- 
> 2.11.0
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 05/12] arm64: Don't trap host pointer auth use to EL2
  2017-11-27 16:37 ` [PATCHv2 05/12] arm64: Don't trap host pointer auth use to EL2 Mark Rutland
@ 2018-02-06 12:39   ` Christoffer Dall
  2018-02-12 16:00     ` Mark Rutland
  0 siblings, 1 reply; 26+ messages in thread
From: Christoffer Dall @ 2018-02-06 12:39 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, arnd, catalin.marinas, cdall, kvmarm,
	linux-arch, marc.zyngier, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

Hi Mark,

On Mon, Nov 27, 2017 at 04:37:59PM +0000, Mark Rutland wrote:
> To allow EL0 (and/or EL1) to use pointer authentication functionality,
> we must ensure that pointer authentication instructions and accesses to
> pointer authentication keys are not trapped to EL2 (where we will not be
> able to handle them).

...on non-VHE systems, presumably?

> 
> This patch ensures that HCR_EL2 is configured appropriately when the
> kernel is booted at EL2. For non-VHE kernels we set HCR_EL2.{API,APK},
> ensuring that EL1 can access keys and permit EL0 use of instructions.
> For VHE kernels, EL2 access is controlled by EL3, and we need not set
> anything.


for VHE kernels host EL0 (TGE && E2H) is unaffected by these settings,
and it doesn't matter how we configure HCR_EL2.{API,APK}.

(Because you do actually set these bits when the features are present if
I read the code correctly).


> 
> This does not enable support for KVM guests, since KVM manages HCR_EL2
> itself.

        (...when running VMs.)


Besides the nits:

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>

> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Christoffer Dall <cdall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
>  arch/arm64/include/asm/kvm_arm.h |  2 ++
>  arch/arm64/kernel/head.S         | 19 +++++++++++++++++--
>  2 files changed, 19 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index 7f069ff37f06..62854d5d1d3b 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -23,6 +23,8 @@
>  #include <asm/types.h>
>  
>  /* Hyp Configuration Register (HCR) bits */
> +#define HCR_API		(UL(1) << 41)
> +#define HCR_APK		(UL(1) << 40)
>  #define HCR_E2H		(UL(1) << 34)
>  #define HCR_ID		(UL(1) << 33)
>  #define HCR_CD		(UL(1) << 32)
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 67e86a0f57ac..06a96e9af26b 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -415,10 +415,25 @@ CPU_LE(	bic	x0, x0, #(1 << 25)	)	// Clear the EE bit for EL2
>  
>  	/* Hyp configuration. */
>  	mov	x0, #HCR_RW			// 64-bit EL1
> -	cbz	x2, set_hcr
> +	cbz	x2, 1f
>  	orr	x0, x0, #HCR_TGE		// Enable Host Extensions
>  	orr	x0, x0, #HCR_E2H
> -set_hcr:
> +1:
> +#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
> +	/*
> +	 * Disable pointer authentication traps to EL2. The HCR_EL2.{APK,API}
> +	 * bits exist iff at least one authentication mechanism is implemented.
> +	 */
> +	mrs	x1, id_aa64isar1_el1
> +	mov_q	x3, ((0xf << ID_AA64ISAR1_GPI_SHIFT) | \
> +		     (0xf << ID_AA64ISAR1_GPA_SHIFT) | \
> +		     (0xf << ID_AA64ISAR1_API_SHIFT) | \
> +		     (0xf << ID_AA64ISAR1_APA_SHIFT))
> +	and	x1, x1, x3
> +	cbz	x1, 1f
> +	orr	x0, x0, #(HCR_APK | HCR_API)
> +1:
> +#endif
>  	msr	hcr_el2, x0
>  	isb
>  
> -- 
> 2.11.0
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 09/12] arm64/kvm: preserve host HCR_EL2 value
  2017-11-27 16:38 ` [PATCHv2 09/12] arm64/kvm: preserve host HCR_EL2 value Mark Rutland
@ 2018-02-06 12:39   ` Christoffer Dall
  2018-04-09 14:57     ` Mark Rutland
  0 siblings, 1 reply; 26+ messages in thread
From: Christoffer Dall @ 2018-02-06 12:39 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, arnd, catalin.marinas, cdall, kvmarm,
	linux-arch, marc.zyngier, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

On Mon, Nov 27, 2017 at 04:38:03PM +0000, Mark Rutland wrote:
> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
> is a constant value. This works today, as the host HCR_EL2 value is
> always the same, but this will get in the way of supporting extensions
> that require HCR_EL2 bits to be set conditionally for the host.
> 
> To allow such features to work without KVM having to explicitly handle
> every possible host feature combination, this patch has KVM save/restore
> the host HCR when switching to/from a guest HCR.
> 
> For __{activate,deactivate}_traps(), the HCR save/restore is made common
> across the !VHE and VHE paths. As the host and guest HCR values must
> have E2H set when VHE is in use, register redirection should always be
> in effect at EL2, and this change should not adversely affect the VHE
> code.
> 
> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
> to toggle the TGE bit with a RMW sequence, as we already do in
> __tlb_switch_to_guest_vhe().
> 
> The now unused HCR_HOST_VHE_FLAGS definition is removed.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Reviewed-by: Christoffer Dall <cdall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
>  arch/arm64/include/asm/kvm_arm.h  | 1 -
>  arch/arm64/include/asm/kvm_host.h | 5 ++++-
>  arch/arm64/kvm/hyp/switch.c       | 5 +++--
>  arch/arm64/kvm/hyp/tlb.c          | 6 +++++-
>  4 files changed, 12 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index 62854d5d1d3b..aa02b05430e8 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -84,7 +84,6 @@
>  			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW)
>  #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
>  #define HCR_INT_OVERRIDE   (HCR_FMO | HCR_IMO)
> -#define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
>  
>  /* TCR_EL2 Registers bits */
>  #define TCR_EL2_RES1		((1 << 31) | (1 << 23))
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 674912d7a571..39184aa3e2f2 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -199,10 +199,13 @@ typedef struct kvm_cpu_context kvm_cpu_context_t;
>  struct kvm_vcpu_arch {
>  	struct kvm_cpu_context ctxt;
>  
> -	/* HYP configuration */
> +	/* Guest HYP configuration */
>  	u64 hcr_el2;
>  	u32 mdcr_el2;
>  
> +	/* Host HYP configuration */
> +	u64 host_hcr_el2;
> +
>  	/* Exception Information */
>  	struct kvm_vcpu_fault_info fault;
>  
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index 525c01f48867..2205f0be3ced 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -71,6 +71,8 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
>  {
>  	u64 val;
>  
> +	vcpu->arch.host_hcr_el2 = read_sysreg(hcr_el2);
> +

Looking back at this, it seems excessive to switch this at every
round-trip.  I think it should be possible to have this as a single
global (or per-CPU) variable that gets restored directly when returning
from the VM.
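
Something along these lines, perhaps (a minimal sketch with illustrative
names; on !VHE the per-CPU access would also need the usual hyp VA
translation):

  static DEFINE_PER_CPU(u64, host_hcr_el2);

  /* Entry path: stash the host value once per physical CPU. */
  static void __hyp_text __save_host_hcr(void)
  {
          __this_cpu_write(host_hcr_el2, read_sysreg(hcr_el2));
  }

  /* Exit path: restore it directly; no vcpu state involved. */
  static void __hyp_text __restore_host_hcr(void)
  {
          write_sysreg(__this_cpu_read(host_hcr_el2), hcr_el2);
  }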

Thanks,
-Christoffer

>  	/*
>  	 * We are about to set CPTR_EL2.TFP to trap all floating point
>  	 * register accesses to EL2, however, the ARM ARM clearly states that
> @@ -116,7 +118,6 @@ static void __hyp_text __deactivate_traps_vhe(void)
>  		    MDCR_EL2_TPMS;
>  
>  	write_sysreg(mdcr_el2, mdcr_el2);
> -	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
>  	write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1);
>  	write_sysreg(vectors, vbar_el1);
>  }
> @@ -129,7 +130,6 @@ static void __hyp_text __deactivate_traps_nvhe(void)
>  	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
>  
>  	write_sysreg(mdcr_el2, mdcr_el2);
> -	write_sysreg(HCR_RW, hcr_el2);
>  	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
>  }
>  
> @@ -151,6 +151,7 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
>  	__deactivate_traps_arch()();
>  	write_sysreg(0, hstr_el2);
>  	write_sysreg(0, pmuserenr_el0);
> +	write_sysreg(vcpu->arch.host_hcr_el2, hcr_el2);
>  }
>  
>  static void __hyp_text __activate_vm(struct kvm_vcpu *vcpu)
> diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
> index 73464a96c365..c2b0680efa2c 100644
> --- a/arch/arm64/kvm/hyp/tlb.c
> +++ b/arch/arm64/kvm/hyp/tlb.c
> @@ -49,12 +49,16 @@ static hyp_alternate_select(__tlb_switch_to_guest,
>  
>  static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm)
>  {
> +	u64 val;
> +
>  	/*
>  	 * We're done with the TLB operation, let's restore the host's
>  	 * view of HCR_EL2.
>  	 */
>  	write_sysreg(0, vttbr_el2);
> -	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
> +	val = read_sysreg(hcr_el2);
> +	val |= HCR_TGE;
> +	write_sysreg(val, hcr_el2);
>  }
>  
>  static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm)
> -- 
> 2.11.0
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 05/12] arm64: Don't trap host pointer auth use to EL2
  2018-02-06 12:39   ` Christoffer Dall
@ 2018-02-12 16:00     ` Mark Rutland
  0 siblings, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2018-02-12 16:00 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: linux-arm-kernel, arnd, catalin.marinas, cdall, kvmarm,
	linux-arch, marc.zyngier, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

On Tue, Feb 06, 2018 at 01:39:06PM +0100, Christoffer Dall wrote:
> Hi Mark,
> 
> On Mon, Nov 27, 2017 at 04:37:59PM +0000, Mark Rutland wrote:
> > To allow EL0 (and/or EL1) to use pointer authentication functionality,
> > we must ensure that pointer authentication instructions and accesses to
> > pointer authentication keys are not trapped to EL2 (where we will not be
> > able to handle them).
> 
> ...on non-VHE systems, presumably?

For EL0 usage, we don't want to trap even in the absence of VHE, so I'll
drop the bit in brackets entirely.

> > This patch ensures that HCR_EL2 is configured appropriately when the
> > kernel is booted at EL2. For non-VHE kernels we set HCR_EL2.{API,APK},
> > ensuring that EL1 can access keys and permit EL0 use of instructions.
> > For VHE kernels, EL2 access is controlled by EL3, and we need not set
> > anything.
> 
> 
> for VHE kernels host EL0 (TGE && E2H) is unaffected by these settings,
> and it doesn't matter how we configure HCR_EL2.{API,APK}.
> 
> (Because you do actually set these bits when the features are present if
> I read the code correctly).

Ah, true. I've taken your proposed wording.

> > This does not enable support for KVM guests, since KVM manages HCR_EL2
> > itself.
> 
>         (...when running VMs.)
> 
> 
> Besides the nits:
> 
> Acked-by: Christoffer Dall <christoffer.dall@linaro.org>

Cheers!

Mark.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 10/12] arm64/kvm: context-switch ptrauth registers
  2018-02-06 12:38   ` Christoffer Dall
@ 2018-03-09 14:28     ` Mark Rutland
  2018-04-09 12:58       ` Christoffer Dall
  0 siblings, 1 reply; 26+ messages in thread
From: Mark Rutland @ 2018-03-09 14:28 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: linux-arm-kernel, arnd, catalin.marinas, cdall, kvmarm,
	linux-arch, marc.zyngier, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

On Tue, Feb 06, 2018 at 01:38:47PM +0100, Christoffer Dall wrote:
> On Mon, Nov 27, 2017 at 04:38:04PM +0000, Mark Rutland wrote:
> > When pointer authentication is supported, a guest may wish to use it.
> > This patch adds the necessary KVM infrastructure for this to work, with
> > a semi-lazy context switch of the pointer auth state.
> > 
> > When we schedule a vcpu, 
> 
> That's not quite what the code does; the code only does this when we
> schedule back a preempted or blocked vcpu thread.

Does that only leave the case of the vCPU being scheduled for the first
time? Or am I missing something else?

[...]

> > we disable guest usage of pointer
> > authentication instructions and accesses to the keys. While these are
> > disabled, we avoid context-switching the keys. When we trap the guest
> > trying to use pointer authentication functionality, we change to eagerly
> > context-switching the keys, and enable the feature. The next time the
> > vcpu is scheduled out/in, we start again.
> > 
> > Pointer authentication consists of address authentication and generic
> > authentication, and CPUs in a system might have varied support for
> > either. Where support for either feature is not uniform, it is hidden
> > from guests via ID register emulation, as a result of the cpufeature
> > framework in the host.
> > 
> > Unfortunately, address authentication and generic authentication cannot
> > be trapped separately, as the architecture provides a single EL2 trap
> > covering both. If we wish to expose one without the other, we cannot
> > prevent a (badly-written) guest from intermittently using a feature
> > which is not uniformly supported (when scheduled on a physical CPU which
> > supports the relevant feature). 
> 
> We could choose to always trap and emulate in software in this case,
> couldn't we?  (not saying we should though).

Practically speaking, we cannot. Emulating the feature would be so
detrimental to performance as to render the feature useless --  every
function prologue/epilogue in a guest would end up trapping to hyp.

Additionally, if the algorithm is IMP DEF, we simply don't know how to
emulate it.
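
For concreteness, the semi-lazy flow described above amounts to roughly
the following (a sketch: kvm_arm_vcpu_ptrauth_disable() exists in the
patch, while the enable/trap helpers and the per-vcpu hcr_el2 field are
illustrative):

static void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
{
	/* Clear API/APK so guest key accesses and ptrauth use trap. */
	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
}

static void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
{
	/* Stop trapping; the keys are eagerly switched from here on. */
	vcpu->arch.hcr_el2 |= HCR_API | HCR_APK;
}

/* Called from the hyp trap handler on the guest's first ptrauth use. */
static void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
{
	kvm_arm_vcpu_ptrauth_enable(vcpu);
	/* ...load this vcpu's keys into the key registers... */
}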

[...]

> Also, this patch doesn't let userspace decide if we should hide or
> expose the feature to guests, and will expose new system registers to
> userspace.  That means that on hardware supporting pointer
> authentication, with this patch, it's not possible to migrate to a
> machine which doesn't have the support.  That's probably a good thing
> (false sense of security etc.), but I wonder if we should have a
> mechanism for userspace to ask for pointer authentication in the guest
> and only if that's enabled, do we expose the feature to the guest and in
> the system register list to user space as well?

Making this an opt-in makes sense to me, given it affects migration.
I'll take a look at what we do for SVE.
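
Something like the below, perhaps (entirely hypothetical until I've
seen how SVE ends up doing it; the feature bit and capability names
here are made up):

#define KVM_ARM_VCPU_PTRAUTH	4	/* hypothetical vcpu feature bit */

static bool kvm_vcpu_has_ptrauth(struct kvm_vcpu *vcpu)
{
	/*
	 * Expose the feature (and its sysregs) only when userspace
	 * opted in at KVM_ARM_VCPU_INIT time and the host has it.
	 */
	return test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features) &&
	       system_supports_address_auth();
}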

[...]

> > When the guest is scheduled on a
> > physical CPU lacking the feature, these atetmps will result in an UNDEF
> 
> attempts

Whoops. Fixed.

[...]

> > +static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
> > +{
> > +	kvm_arm_vcpu_ptrauth_disable(vcpu);
> > +}
> > +
> 
> I still find this decision to begin trapping again quite arbitrary, and
> would at least prefer this to be in vcpu_load (which would make the
> behavior match the commit text as well).

Sure, done.

> My expectation would be that if a guest is running software with pointer
> authentication enabled, then it's likely to either keep using the
> feature, or not use it at all, so I would make this a one-time flag.

I think it's likely that some applications will use ptrauth while others
do not. Even if the guest OS supports ptrauth, KVM may repeatedly preempt
an application that doesn't use it, and we'd win in that case.

There are also some rarer cases, like kexec in a guest from a
ptrauth-aware kernel to a ptrauth-oblivious one.

I don't have strong feelings either way, and I have no data.

Thanks,
Mark.


* Re: [PATCHv2 10/12] arm64/kvm: context-switch ptrauth registers
  2018-03-09 14:28     ` Mark Rutland
@ 2018-04-09 12:58       ` Christoffer Dall
  2018-04-09 14:37         ` Mark Rutland
  0 siblings, 1 reply; 26+ messages in thread
From: Christoffer Dall @ 2018-04-09 12:58 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Christoffer Dall, linux-arch, cdall, arnd, marc.zyngier,
	catalin.marinas, yao.qi, kernel-hardening, will.deacon,
	linux-kernel, awallis, kvmarm, linux-arm-kernel

Hi Mark,

[Sorry for late reply]

On Fri, Mar 09, 2018 at 02:28:38PM +0000, Mark Rutland wrote:
> On Tue, Feb 06, 2018 at 01:38:47PM +0100, Christoffer Dall wrote:
> > On Mon, Nov 27, 2017 at 04:38:04PM +0000, Mark Rutland wrote:
> > > When pointer authentication is supported, a guest may wish to use it.
> > > This patch adds the necessary KVM infrastructure for this to work, with
> > > a semi-lazy context switch of the pointer auth state.
> > > 
> > > When we schedule a vcpu, 
> > 
> > That's not quite what the code does; it only does this when we
> > schedule back a preempted or blocked vcpu thread.
> 
> Does that only leave the case of the vCPU being scheduled for the first
> time? Or am I missing something else?
> 
> [...]

In the current patch, you're only calling kvm_arm_vcpu_ptrauth_disable()
from kvm_arch_sched_in() which is only called on the preempt notifier
path, which leaves out every time we enter the guest from userspace and
therefore also the initial run of the vCPU (assuming there's no
preemption in the kernel prior to running the first time).

vcpu_load() takes care of all the cases.
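
Concretely, something like this (a sketch; the placement of the hook is
the point, not the exact lines):

void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
	/* ...existing load logic... */

	/*
	 * This runs both on preempt-notifier sched-in and on every
	 * entry from userspace via the vcpu ioctls, so it covers the
	 * initial KVM_RUN as well.
	 */
	kvm_arm_vcpu_ptrauth_disable(vcpu);
}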

> 

[...]

> > 
> > I still find this decision to begin trapping again quite arbitrary, and
> > would at least prefer this to be in vcpu_load (which would make the
> > behavior match the commit text as well).
> 
> Sure, done.
> 
> > My expectation would be that if a guest is running software with pointer
> > authentication enabled, then it's likely to either keep using the
> > feature, or not use it at all, so I would make this a one-time flag.
> 
> I think it's likely that some applications will use ptrauth while others
> do not. Even if the guest OS supports ptrauth, KVM may repeatedly preempt
> an application that doesn't use it, and we'd win in that case.
> 
> There are also some rarer cases, like kexec in a guest from a
> ptrauth-aware kernel to a ptrauth-oblivious one.
> 
> I don't have strong feelings either way, and I have no data.
> 

I think your intuition sounds sane; let's reset the flag on every
vcpu_load, and we can always revisit once we have hardware and data, if
someone reports a performance issue.

Thanks,
-Christoffer


* Re: [PATCHv2 10/12] arm64/kvm: context-switch ptrauth registers
  2018-04-09 12:58       ` Christoffer Dall
@ 2018-04-09 14:37         ` Mark Rutland
  0 siblings, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2018-04-09 14:37 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Christoffer Dall, linux-arch, cdall, arnd, marc.zyngier,
	catalin.marinas, yao.qi, kernel-hardening, will.deacon,
	linux-kernel, awallis, kvmarm, linux-arm-kernel

On Mon, Apr 09, 2018 at 02:58:18PM +0200, Christoffer Dall wrote:
> Hi Mark,
> 
> [Sorry for late reply]
> 
> On Fri, Mar 09, 2018 at 02:28:38PM +0000, Mark Rutland wrote:
> > On Tue, Feb 06, 2018 at 01:38:47PM +0100, Christoffer Dall wrote:
> > > On Mon, Nov 27, 2017 at 04:38:04PM +0000, Mark Rutland wrote:
> > > > When pointer authentication is supported, a guest may wish to use it.
> > > > This patch adds the necessary KVM infrastructure for this to work, with
> > > > a semi-lazy context switch of the pointer auth state.
> > > > 
> > > > When we schedule a vcpu, 
> > > 
> > > That's not quite what the code does; it only does this when we
> > > schedule back a preempted or blocked vcpu thread.
> > 
> > Does that only leave the case of the vCPU being scheduled for the first
> > time? Or am I missing something else?
> > 
> > [...]
> 
> In the current patch, you're only calling kvm_arm_vcpu_ptrauth_disable()
> from kvm_arch_sched_in() which is only called on the preempt notifier
> path, which leaves out every time we enter the guest from userspace and
> therefore also the initial run of the vCPU (assuming there's no
> preemption in the kernel prior to running the first time).
> 
> vcpu_load() takes care of all the cases.

I see.

> > > I still find this decision to begin trapping again quite arbitrary, and
> > > would at least prefer this to be in vcpu_load (which would make the
> > > behavior match the commit text as well).
> > 
> > Sure, done.
> > 
> > > My expectation would be that if a guest is running software with pointer
> > > authentication enabled, then it's likely to either keep using the
> > > feature, or not use it at all, so I would make this a one-time flag.
> > 
> > I think it's likely that some applications will use ptrauth while others
> > do not. Even if the guest OS supports ptrauth, KVM may repeatedly preempt
> > an application that doesn't use it, and we'd win in that case.
> > 
> > There are also some rarer cases, like kexec in a guest from a
> > ptrauth-aware kernel to a ptrauth-oblivious one.
> > 
> > I don't have strong feelings either way, and I have no data.
> 
> I think your intuition sounds sane; let's reset the flag on every
> vcpu_load, and we can always revisit once we have hardware and data, if
> someone reports a performance issue.

Cool. I've switched to vcpu_load() locally, and will use that in v3.

Thanks,
Mark.


* Re: [PATCHv2 09/12] arm64/kvm: preserve host HCR_EL2 value
  2018-02-06 12:39   ` Christoffer Dall
@ 2018-04-09 14:57     ` Mark Rutland
  2018-04-09 19:03       ` Christoffer Dall
  0 siblings, 1 reply; 26+ messages in thread
From: Mark Rutland @ 2018-04-09 14:57 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: linux-arm-kernel, arnd, catalin.marinas, kvmarm, linux-arch,
	marc.zyngier, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

On Tue, Feb 06, 2018 at 01:39:15PM +0100, Christoffer Dall wrote:
> On Mon, Nov 27, 2017 at 04:38:03PM +0000, Mark Rutland wrote:
> > diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> > index 525c01f48867..2205f0be3ced 100644
> > --- a/arch/arm64/kvm/hyp/switch.c
> > +++ b/arch/arm64/kvm/hyp/switch.c
> > @@ -71,6 +71,8 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
> >  {
> >  	u64 val;
> >  
> > +	vcpu->arch.host_hcr_el2 = read_sysreg(hcr_el2);
> > +
> 
> Looking back at this, it seems excessive to switch this at every
> round-trip.  I think it should be possible to have this as a single
> global (or per-CPU) variable that gets restored directly when returning
> from the VM.

I suspect this needs to be per-cpu, to account for heterogeneous
systems.

I guess if we move hcr_el2 into kvm_cpu_context, that gives us a
per-vcpu copy for guests, and a per-cpu copy for the host (in the global
kvm_host_cpu_state).

I'll have a look at how gnarly that turns out. I'm not sure how we can
initialise that sanely for the !VHE case to match whatever el2_setup
did.
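
The idea would be roughly the following (a sketch; field placement and
the existing members are illustrative):

struct kvm_cpu_context {
	struct kvm_regs	gp_regs;
	u64		sys_regs[NR_SYS_REGS];
	u32		copro[NR_COPRO_REGS];
	u64		hcr_el2;	/* guest value in each vcpu, host
					 * value in kvm_host_cpu_state */
};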

Thanks,
Mark.


* Re: [PATCHv2 09/12] arm64/kvm: preserve host HCR_EL2 value
  2018-04-09 14:57     ` Mark Rutland
@ 2018-04-09 19:03       ` Christoffer Dall
  0 siblings, 0 replies; 26+ messages in thread
From: Christoffer Dall @ 2018-04-09 19:03 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, arnd, catalin.marinas, kvmarm, linux-arch,
	marc.zyngier, suzuki.poulose, will.deacon, yao.qi,
	kernel-hardening, linux-kernel, awallis

On Mon, Apr 09, 2018 at 03:57:09PM +0100, Mark Rutland wrote:
> On Tue, Feb 06, 2018 at 01:39:15PM +0100, Christoffer Dall wrote:
> > On Mon, Nov 27, 2017 at 04:38:03PM +0000, Mark Rutland wrote:
> > > diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> > > index 525c01f48867..2205f0be3ced 100644
> > > --- a/arch/arm64/kvm/hyp/switch.c
> > > +++ b/arch/arm64/kvm/hyp/switch.c
> > > @@ -71,6 +71,8 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
> > >  {
> > >  	u64 val;
> > >  
> > > +	vcpu->arch.host_hcr_el2 = read_sysreg(hcr_el2);
> > > +
> > 
> > Looking back at this, it seems excessive to switch this at every
> > round-trip.  I think it should be possible to have this as a single
> > global (or per-CPU) variable that gets restored directly when returning
> > from the VM.
> 
> I suspect this needs to be per-cpu, to account for heterogeneous
> systems.
> 
> I guess if we move hcr_el2 into kvm_cpu_context, that gives us a
> per-vcpu copy for guests, and a per-cpu copy for the host (in the global
> kvm_host_cpu_state).
> 
> I'll have a look at how gnarly that turns out. I'm not sure how we can
> initialise that sanely for the !VHE case to match whatever el2_setup
> did.

There's no harm in jumping down to EL2 to read a register during the
initialization phase.  All it requires is an annotation of the callee
function and a kvm_call_hyp(); it's actually quite fast unless you
start saving/restoring a bunch of additional system registers.  See how
we call __kvm_set_tpidr_el2() for example.
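
E.g. something like this (a sketch; the function name is made up, but
the mechanism is the same as for __kvm_set_tpidr_el2()):

/* Runs at EL2 on !VHE; cheap, as no other state is saved/restored. */
u64 __hyp_text __kvm_get_hcr_el2(void)
{
	return read_sysreg(hcr_el2);
}

/* ...then during init, once per cpu: */
host_ctxt->hcr_el2 = kvm_call_hyp(__kvm_get_hcr_el2);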

Thanks,
-Christoffer


* Re: [PATCHv2 06/12] arm64: add basic pointer authentication support
  2017-11-27 16:38 ` [PATCHv2 06/12] arm64: add basic pointer authentication support Mark Rutland
@ 2018-05-22 19:06   ` Adam Wallis
  0 siblings, 0 replies; 26+ messages in thread
From: Adam Wallis @ 2018-05-22 19:06 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel
  Cc: arnd, catalin.marinas, cdall, kvmarm, linux-arch, marc.zyngier,
	suzuki.poulose, will.deacon, yao.qi, kernel-hardening,
	linux-kernel

On 11/27/2017 11:38 AM, Mark Rutland wrote:
> This patch adds basic support for pointer authentication, allowing
> userspace to make use of APIAKey. The kernel maintains an APIAKey value
> for each process (shared by all threads within), which is initialised to
> a random value at exec() time.
> 
> To describe that address authentication instructions are available, the
> ID_AA64ISAR0.{APA,API} fields are exposed to userspace. A new hwcap,
> APIA, is added to describe that the kernel manages APIAKey.
> 
> Instructions using other keys (APIBKey, APDAKey, APDBKey) are disabled,
> and will behave as NOPs. Support for these may be added in future patches.
> 
> No support is added for the generic key (APGAKey), though this cannot be
> trapped or made to behave as a NOP. Its presence is not advertised with
> a hwcap.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/include/asm/mmu.h          |  5 ++
>  arch/arm64/include/asm/mmu_context.h  | 25 +++++++++-
>  arch/arm64/include/asm/pointer_auth.h | 89 +++++++++++++++++++++++++++++++++++
>  arch/arm64/include/uapi/asm/hwcap.h   |  1 +
>  arch/arm64/kernel/cpufeature.c        | 17 ++++++-
>  arch/arm64/kernel/cpuinfo.c           |  1 +
>  6 files changed, 134 insertions(+), 4 deletions(-)
>  create mode 100644 arch/arm64/include/asm/pointer_auth.h

Mark, I was able to verify that a buffer overflow exploit results in a segfault
with these PAC patches. When I compile the same binary with
"-msign-return-address=none" (i.e. with return address signing disabled), I am
able to successfully overflow the stack and execute malicious code.
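
The test is shaped roughly like the toy below (illustrative, not my
actual test case). Built with -msign-return-address=all, GCC signs x30
in the prologue and authenticates it in the epilogue, so the corrupted
return address faults instead of being followed:

#include <string.h>

void victim(const char *input)
{
	char buf[16];

	/* Classic unbounded copy: a long input overflows buf and
	 * overwrites the saved return address on the stack. */
	strcpy(buf, input);
}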

Thanks
Adam

Tested-by: Adam Wallis <awallis@codeaurora.org>


-- 
Adam Wallis
Qualcomm Datacenter Technologies as an affiliate of Qualcomm Technologies, Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.


end of thread, other threads:[~2018-05-22 19:06 UTC | newest]

Thread overview: 26+ messages
2017-11-27 16:37 [PATCHv2 00/12] ARMv8.3 pointer authentication userspace support Mark Rutland
2017-11-27 16:37 ` [PATCHv2 01/12] asm-generic: mm_hooks: allow hooks to be overridden individually Mark Rutland
2017-11-27 16:37 ` [PATCHv2 02/12] arm64: add pointer authentication register bits Mark Rutland
2017-11-27 16:37 ` [PATCHv2 03/12] arm64/cpufeature: add ARMv8.3 id_aa64isar1 bits Mark Rutland
2017-11-27 16:37 ` [PATCHv2 04/12] arm64/cpufeature: detect pointer authentication Mark Rutland
2017-11-27 16:37 ` [PATCHv2 05/12] arm64: Don't trap host pointer auth use to EL2 Mark Rutland
2018-02-06 12:39   ` Christoffer Dall
2018-02-12 16:00     ` Mark Rutland
2017-11-27 16:38 ` [PATCHv2 06/12] arm64: add basic pointer authentication support Mark Rutland
2018-05-22 19:06   ` Adam Wallis
2017-11-27 16:38 ` [PATCHv2 07/12] arm64: expose user PAC bit positions via ptrace Mark Rutland
2017-11-27 16:38 ` [PATCHv2 08/12] arm64: perf: strip PAC when unwinding userspace Mark Rutland
2017-11-27 16:38 ` [PATCHv2 09/12] arm64/kvm: preserve host HCR_EL2 value Mark Rutland
2018-02-06 12:39   ` Christoffer Dall
2018-04-09 14:57     ` Mark Rutland
2018-04-09 19:03       ` Christoffer Dall
2017-11-27 16:38 ` [PATCHv2 10/12] arm64/kvm: context-switch ptrauth registers Mark Rutland
2018-02-06 12:38   ` Christoffer Dall
2018-03-09 14:28     ` Mark Rutland
2018-04-09 12:58       ` Christoffer Dall
2018-04-09 14:37         ` Mark Rutland
2017-11-27 16:38 ` [PATCHv2 11/12] arm64: enable pointer authentication Mark Rutland
2017-11-27 16:38 ` [PATCHv2 12/12] arm64: docs: document " Mark Rutland
2017-11-28 15:07   ` Andrew Jones
2017-12-04 12:39     ` Mark Rutland
2017-12-04 12:49       ` Andrew Jones
