linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH 00/11] ARMv8.3 pointer authentication userspace support
@ 2017-07-19 16:01 Mark Rutland
  2017-07-19 16:01 ` [PATCH 01/11] arm64: docs: describe ELF hwcaps Mark Rutland
                   ` (14 more replies)
  0 siblings, 15 replies; 35+ messages in thread
From: Mark Rutland @ 2017-07-19 16:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, christoffer.dall, Dave.Martin, jiong.wang,
	kvmarm, linux-arch, marc.zyngier, mark.rutland, suzuki.poulose,
	will.deacon, yao.qi, linux-kernel, kernel-hardening

This series adds support for the ARMv8.3 pointer authentication extension.

Since RFC [1]:
* Make the KVM context switch (semi-lazy)
* Rebase to v4.13-rc1
* Improve pointer authentication documentation
* Add hwcap documentation
* Various minor cleanups

I've pushed the series to the arm64/pointer-auth branch [2] of my linux tree.
I've also pushed out a necessary bootwrapper patch to the pointer-auth branch
[3] of my bootwrapper repo.


Extension Overview 
==================

The ARMv8.3 pointer authentication extension adds functionality to detect
modification of pointer values, mitigating certain classes of attack such as
stack smashing, and making return oriented programming attacks harder

The extension introduces the concept of a pointer authentication code (PAC),
which is stored in some upper bits of pointers. Each PAC is derived from the
original pointer, another 64-bit value (e.g. the stack pointer), and a secret
128-bit key.

New instructions are added which can be used to:

* Insert a PAC into a pointer
* Strip a PAC from a pointer
* Authenticate strip a PAC from a pointer

If authentication succeeds, the code is removed, yielding the original pointer.
If authentication fails, bits are set in the pointer such that it is guaranteed
to cause a fault if used.

These instructions can make use of four keys:

* APIAKey (A.K.A. Instruction A key)
* APIBKey (A.K.A. Instruction B key)
* APDAKey (A.K.A. Data A key)
* APDBKey (A.K.A. Data B Key)

A subset of these instruction encodings have been allocated from the HINT
space, and will operate as NOPs on any ARMv8 parts which do not feature the
extension (or if purposefully disabled by the kernel). Software using only this
subset of the instructions should function correctly on all ARMv8-A parts.

Additionally, instructions are added to authenticate small blocks of memory in
similar fashion, using APGAKey (A.K.A. Generic key).


This Series
===========

This series enables the use of instructions using APIAKey, which is initialised
and maintained per-process (shared by all threads). This series does not add
support for APIBKey, APDAKey, APDBKey, nor APGAKey. The series only supports
the use of an architected algorithm.

I've given this some basic testing with a homebrew test suite. More ideally,
we'd add some tests to the kernel source tree.

I've added some basic KVM support, but this doesn't cater for systems with
mismatched support. Looking forward, we'll need ID register emulation in KVM so
that we can hide features from guests to cater for such cases.


Open questions
==============

* Should keys be per-thread rather than per-process?

  My understanding is that glibc can't (currently) handle threads having
  different keys, but it might be that another libc would prefer per-thread
  keys. If desired, we could add a mechanism for a thread to re-initialize its
  keys without an exec*().

* Do we need a separate hwcap for XPAC* instructions?

  Library code performing stack unwinding may need to interoperate with other
  code which may or may not be using pointer authentication. It may be
  desirable to use XPAC* rather than attempting authentication and/or acquiring
  the PAC masks via ptrace.

  As we may expose APIBKey (potentially separately from APIAKey) in future,
  HWCAP_APIA cannot be used to determine when these instruction can/should be
  used.

* Should we expose a per-process data key now, to go with the insn key?

  I don't currently have a use-case for this.

* Should we expose generic authentication (i.e. APGAKey)?

  I don't currently have a use-case for this.

* Should the kernel remove PACs when unwinding user stacks?

  This is simple to do, but it's arguably placing a policy in the kernel as to
  what we expect user stacks to look like. Regardless, userspace will have to
  perform this when unwinding with DWARF.

Thanks,
Mark.

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2017-April/498941.html
[2] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git arm64/pointer-auth
[3] git://git.kernel.org/pub/scm/linux/kernel/git/mark/boot-wrapper-aarch64.git pointer-auth

Mark Rutland (11):
  arm64: docs: describe ELF hwcaps
  asm-generic: mm_hooks: allow hooks to be overridden individually
  arm64: add pointer authentication register bits
  arm64/cpufeature: add ARMv8.3 id_aa64isar1 bits
  arm64/cpufeature: detect pointer authentication
  arm64: Don't trap host pointer auth use to EL2
  arm64: add basic pointer authentication support
  arm64: expose user PAC bit positions via ptrace
  arm64/kvm: preserve host HCR_EL2 value
  arm64/kvm: context-switch ptrauth registers
  arm64: docs: document pointer authentication

 Documentation/arm64/booting.txt                |   8 ++
 Documentation/arm64/elf_hwcaps.txt             | 138 +++++++++++++++++++++++++
 Documentation/arm64/pointer-authentication.txt |  85 +++++++++++++++
 arch/arm64/Kconfig                             |  23 +++++
 arch/arm64/include/asm/cpucaps.h               |   4 +-
 arch/arm64/include/asm/esr.h                   |   3 +-
 arch/arm64/include/asm/kvm_arm.h               |   3 +-
 arch/arm64/include/asm/kvm_host.h              |  28 ++++-
 arch/arm64/include/asm/kvm_hyp.h               |   7 ++
 arch/arm64/include/asm/mmu.h                   |   5 +
 arch/arm64/include/asm/mmu_context.h           |  25 ++++-
 arch/arm64/include/asm/pointer_auth.h          |  97 +++++++++++++++++
 arch/arm64/include/asm/sysreg.h                |  30 ++++++
 arch/arm64/include/uapi/asm/hwcap.h            |   1 +
 arch/arm64/include/uapi/asm/ptrace.h           |   5 +
 arch/arm64/kernel/cpufeature.c                 |  39 ++++++-
 arch/arm64/kernel/cpuinfo.c                    |   1 +
 arch/arm64/kernel/head.S                       |  19 +++-
 arch/arm64/kernel/ptrace.c                     |  39 +++++++
 arch/arm64/kvm/handle_exit.c                   |  21 ++++
 arch/arm64/kvm/hyp/Makefile                    |   1 +
 arch/arm64/kvm/hyp/ptrauth-sr.c                |  91 ++++++++++++++++
 arch/arm64/kvm/hyp/switch.c                    |   9 +-
 arch/arm64/kvm/hyp/tlb.c                       |   6 +-
 arch/arm64/kvm/sys_regs.c                      |  32 ++++++
 include/asm-generic/mm_hooks.h                 |  11 ++
 include/uapi/linux/elf.h                       |   1 +
 27 files changed, 719 insertions(+), 13 deletions(-)
 create mode 100644 Documentation/arm64/elf_hwcaps.txt
 create mode 100644 Documentation/arm64/pointer-authentication.txt
 create mode 100644 arch/arm64/include/asm/pointer_auth.h
 create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c

-- 
1.9.1

^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH 01/11] arm64: docs: describe ELF hwcaps
  2017-07-19 16:01 [PATCH 00/11] ARMv8.3 pointer authentication userspace support Mark Rutland
@ 2017-07-19 16:01 ` Mark Rutland
  2017-07-21 17:05   ` Dave Martin
                     ` (2 more replies)
  2017-07-19 16:01 ` [PATCH 02/11] asm-generic: mm_hooks: allow hooks to be overridden individually Mark Rutland
                   ` (13 subsequent siblings)
  14 siblings, 3 replies; 35+ messages in thread
From: Mark Rutland @ 2017-07-19 16:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, christoffer.dall, Dave.Martin, jiong.wang,
	kvmarm, linux-arch, marc.zyngier, mark.rutland, suzuki.poulose,
	will.deacon, yao.qi, linux-kernel, kernel-hardening

We don't document our ELF hwcaps, leaving developers to interpret them
according to hearsay, guesswork, or (in exceptional cases) inspection of
the current kernel code.

This is less than optimal, and it would be far better if we had some
definitive description of each of the ELF hwcaps that developers could
refer to.

This patch adds a document describing the (native) arm64 ELF hwcaps.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <Dave.Martin@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 Documentation/arm64/elf_hwcaps.txt | 133 +++++++++++++++++++++++++++++++++++++
 1 file changed, 133 insertions(+)
 create mode 100644 Documentation/arm64/elf_hwcaps.txt

diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
new file mode 100644
index 0000000..7bc2921
--- /dev/null
+++ b/Documentation/arm64/elf_hwcaps.txt
@@ -0,0 +1,133 @@
+ARM64 ELF hwcaps
+================
+
+This document describes the usage and semantics of the arm64 ELF hwcaps.
+
+
+1. Introduction
+---------------
+
+Some hardware or software features are only available on some CPU
+implementations, and/or with certain kernel configurations, but have no
+architected discovery mechanism available to userspace code at EL0. The
+kernel exposes the presence of these features to userspace through a set
+of flags called hwcaps, exposed in the auxilliary vector.
+
+Userspace software can test for features by acquiring the AT_HWCAP entry
+of the auxilliary vector, and testing whether the relevant flags are
+set, e.g.
+
+bool floating_point_is_present(void)
+{
+	unsigned long hwcaps = getauxval(AT_HWCAP);
+	if (hwcaps & HWCAP_FP)
+		return true;
+
+	return false;
+}
+
+Where software relies on a feature described by a hwcap, it should check
+the relevant hwcap flag to verify that the feature is present before
+attempting to make use of the feature.
+
+Features cannot be probed reliably through other means. When a feature
+is not available, attempting to use it may result in unpredictable
+behaviour, and is not guaranteed to result in any reliable indication
+that the feature is unavailable, such as a SIGILL.
+
+
+2. Interpretation of hwcaps
+---------------------------
+
+The majority of hwcaps are intended to indicate the presence of features
+which are described by architected ID registers inaccessible to
+userspace code at EL0. These hwcaps are defined in terms of ID register
+fields, and should be interpreted with reference to the definition of
+these fields in the ARM Architecture Reference Manual (ARM ARM).
+
+Such hwcaps are described below in the form:
+
+    Functionality implied by idreg.field == val.
+
+Such hwcaps indicate the availability of functionality that the ARM ARM
+defines as being present when idreg.field has value val, but do not
+indicate that idreg.field is precisely equal to val, nor do they
+indicate the absence of functionality implied by other values of
+idreg.field.
+
+Other hwcaps may indicate the presence of features which cannot be
+described by ID registers alone. These may be described without
+reference to ID registers, and may refer to other documentation.
+
+
+3. The hwcaps exposed in AT_HWCAP
+---------------------------------
+
+HWCAP_FP
+
+    Functionality implied by ID_AA64PFR0_EL1.FP == 0b0000.
+
+HWCAP_ASIMD
+
+    Functionality implied by ID_AA64PFR0_EL1.AdvSIMD == 0b0000.
+
+HWCAP_EVTSTRM
+
+    The generic timer is configured to generate events at a frequency of
+    approximately 100KHz.
+
+HWCAP_AES
+
+    Functionality implied by ID_AA64ISAR1_EL1.AES == 0b0001.
+
+HWCAP_PMULL
+
+    Functionality implied by ID_AA64ISAR1_EL1.AES == 0b0010.
+
+HWCAP_SHA1
+
+    Functionality implied by ID_AA64ISAR0_EL1.SHA1 == 0b0001.
+
+HWCAP_SHA2
+
+    Functionality implied by ID_AA64ISAR0_EL1.SHA2 == 0b0001.
+
+HWCAP_CRC32
+
+    Functionality implied by ID_AA64ISAR0_EL1.CRC32 == 0b0001.
+
+HWCAP_ATOMICS
+
+    Functionality implied by ID_AA64ISAR0_EL1.Atomic == 0b0010.
+
+HWCAP_FPHP
+
+    Functionality implied by ID_AA64PFR0_EL1.FP == 0b0001.
+
+HWCAP_ASIMDHP
+
+    Functionality implied by ID_AA64PFR0_EL1.AdvSIMD == 0b0001.
+
+HWCAP_CPUID
+
+    EL0 access to certain ID registers is available, to the extent
+    described by Documentation/arm64/cpu-feature-registers.txt.
+
+    These ID registers may imply the availability of features.
+
+HWCAP_ASIMDRDM
+
+    Functionality implied by ID_AA64ISAR0_EL1.RDM == 0b0001.
+
+HWCAP_JSCVT
+
+    Functionality implied by ID_AA64ISAR1_EL1.JSCVT == 0b0001.
+
+HWCAP_FCMA
+
+    Functionality implied by ID_AA64ISAR1_EL1.FCMA == 0b0001.
+
+HWCAP_LRCPC
+
+    Functionality implied by ID_AA64ISAR1_EL1.LRCPC == 0b0001.
+
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 02/11] asm-generic: mm_hooks: allow hooks to be overridden individually
  2017-07-19 16:01 [PATCH 00/11] ARMv8.3 pointer authentication userspace support Mark Rutland
  2017-07-19 16:01 ` [PATCH 01/11] arm64: docs: describe ELF hwcaps Mark Rutland
@ 2017-07-19 16:01 ` Mark Rutland
  2017-07-19 16:01 ` [PATCH 03/11] arm64: add pointer authentication register bits Mark Rutland
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 35+ messages in thread
From: Mark Rutland @ 2017-07-19 16:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, christoffer.dall, Dave.Martin, jiong.wang,
	kvmarm, linux-arch, marc.zyngier, mark.rutland, suzuki.poulose,
	will.deacon, yao.qi, linux-kernel, kernel-hardening

Currently, an architecture must either implement all of the mm hooks
itself, or use all of those provided by the asm-generic implementation.
When an architecture only needs to override a single hook, it must copy
the stub implementations from the asm-generic version.

To avoid this repetition, allow each hook to be overridden indiviually,
by placing each under an #ifndef block. As architectures providing their
own hooks can't include this file today, this shouldn't adversely affect
any existing hooks.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: linux-arch@vger.kernel.org
---
 include/asm-generic/mm_hooks.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/asm-generic/mm_hooks.h b/include/asm-generic/mm_hooks.h
index 41e5b67..b4bf1b8 100644
--- a/include/asm-generic/mm_hooks.h
+++ b/include/asm-generic/mm_hooks.h
@@ -6,30 +6,41 @@
 #ifndef _ASM_GENERIC_MM_HOOKS_H
 #define _ASM_GENERIC_MM_HOOKS_H
 
+#ifndef arch_dup_mmap
 static inline void arch_dup_mmap(struct mm_struct *oldmm,
 				 struct mm_struct *mm)
 {
 }
+#endif
 
+#ifndef arch_exit_mmap
 static inline void arch_exit_mmap(struct mm_struct *mm)
 {
 }
+#endif
 
+#ifndef arch_unmap
 static inline void arch_unmap(struct mm_struct *mm,
 			struct vm_area_struct *vma,
 			unsigned long start, unsigned long end)
 {
 }
+#endif
 
+#ifndef arch_bprm_mm_init
 static inline void arch_bprm_mm_init(struct mm_struct *mm,
 				     struct vm_area_struct *vma)
 {
 }
+#endif
 
+#ifndef arch_vma_access_permitted
 static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
 		bool write, bool execute, bool foreign)
 {
 	/* by default, allow everything */
 	return true;
 }
+#endif
+
 #endif	/* _ASM_GENERIC_MM_HOOKS_H */
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 03/11] arm64: add pointer authentication register bits
  2017-07-19 16:01 [PATCH 00/11] ARMv8.3 pointer authentication userspace support Mark Rutland
  2017-07-19 16:01 ` [PATCH 01/11] arm64: docs: describe ELF hwcaps Mark Rutland
  2017-07-19 16:01 ` [PATCH 02/11] asm-generic: mm_hooks: allow hooks to be overridden individually Mark Rutland
@ 2017-07-19 16:01 ` Mark Rutland
  2017-07-19 16:01 ` [PATCH 04/11] arm64/cpufeature: add ARMv8.3 id_aa64isar1 bits Mark Rutland
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 35+ messages in thread
From: Mark Rutland @ 2017-07-19 16:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, christoffer.dall, Dave.Martin, jiong.wang,
	kvmarm, linux-arch, marc.zyngier, mark.rutland, suzuki.poulose,
	will.deacon, yao.qi, linux-kernel, kernel-hardening

The ARMv8.3 pointer authentication extension adds:

* New fields in ID_AA64ISAR1 to report the presence of pointer
  authentication functionality.

* New control bits in SCTLR_ELx to enable this functionality.

* New system registers to hold the keys necessary for this
  functionality.

* A new ESR_ELx.EC code used when the new instructions are affected by
  configurable traps

This patch adds the relevant definitions to <asm/sysreg.h> and
<asm/esr.h> for these, to be used by subsequent patches.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/esr.h    |  3 ++-
 arch/arm64/include/asm/sysreg.h | 30 ++++++++++++++++++++++++++++++
 2 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 8cabd57..f6ebc1f 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -30,7 +30,8 @@
 #define ESR_ELx_EC_CP14_LS	(0x06)
 #define ESR_ELx_EC_FP_ASIMD	(0x07)
 #define ESR_ELx_EC_CP10_ID	(0x08)
-/* Unallocated EC: 0x09 - 0x0B */
+#define ESR_ELx_EC_PAC		(0x09)
+/* Unallocated EC: 0x0A - 0x0B */
 #define ESR_ELx_EC_CP14_64	(0x0C)
 /* Unallocated EC: 0x0d */
 #define ESR_ELx_EC_ILL		(0x0E)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 16e44fa..d902cc5 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -164,6 +164,19 @@
 #define SYS_TTBR1_EL1			sys_reg(3, 0, 2, 0, 1)
 #define SYS_TCR_EL1			sys_reg(3, 0, 2, 0, 2)
 
+#define SYS_APIAKEYLO_EL1		sys_reg(3, 0, 2, 1, 0)
+#define SYS_APIAKEYHI_EL1		sys_reg(3, 0, 2, 1, 1)
+#define SYS_APIBKEYLO_EL1		sys_reg(3, 0, 2, 1, 2)
+#define SYS_APIBKEYHI_EL1		sys_reg(3, 0, 2, 1, 3)
+
+#define SYS_APDAKEYLO_EL1		sys_reg(3, 0, 2, 2, 0)
+#define SYS_APDAKEYHI_EL1		sys_reg(3, 0, 2, 2, 1)
+#define SYS_APDBKEYLO_EL1		sys_reg(3, 0, 2, 2, 2)
+#define SYS_APDBKEYHI_EL1		sys_reg(3, 0, 2, 2, 3)
+
+#define SYS_APGAKEYLO_EL1		sys_reg(3, 0, 2, 3, 0)
+#define SYS_APGAKEYHI_EL1		sys_reg(3, 0, 2, 3, 1)
+
 #define SYS_ICC_PMR_EL1			sys_reg(3, 0, 4, 6, 0)
 
 #define SYS_AFSR0_EL1			sys_reg(3, 0, 5, 1, 0)
@@ -296,7 +309,11 @@
 #define SYS_ICH_LR15_EL2		__SYS__LR8_EL2(7)
 
 /* Common SCTLR_ELx flags. */
+#define SCTLR_ELx_ENIA	(1 << 31)
+#define SCTLR_ELx_ENIB	(1 << 30)
+#define SCTLR_ELx_ENDA	(1 << 27)
 #define SCTLR_ELx_EE    (1 << 25)
+#define SCTLR_ELx_ENDB	(1 << 13)
 #define SCTLR_ELx_I	(1 << 12)
 #define SCTLR_ELx_SA	(1 << 3)
 #define SCTLR_ELx_C	(1 << 2)
@@ -326,9 +343,22 @@
 #define ID_AA64ISAR0_AES_SHIFT		4
 
 /* id_aa64isar1 */
+#define ID_AA64ISAR1_GPI_SHIFT		28
+#define ID_AA64ISAR1_GPA_SHIFT		24
 #define ID_AA64ISAR1_LRCPC_SHIFT	20
 #define ID_AA64ISAR1_FCMA_SHIFT		16
 #define ID_AA64ISAR1_JSCVT_SHIFT	12
+#define ID_AA64ISAR1_APA_SHIFT		4
+#define ID_AA64ISAR1_API_SHIFT		8
+
+#define ID_AA64ISAR1_APA_NI		0x0
+#define ID_AA64ISAR1_APA_ARCHITECTED	0x1
+#define ID_AA64ISAR1_API_NI		0x0
+#define ID_AA64ISAR1_API_IMP_DEF	0x1
+#define ID_AA64ISAR1_GPA_NI		0x0
+#define ID_AA64ISAR1_GPA_ARCHITECTED	0x1
+#define ID_AA64ISAR1_GPI_NI		0x0
+#define ID_AA64ISAR1_GPI_IMP_DEF	0x1
 
 /* id_aa64pfr0 */
 #define ID_AA64PFR0_GIC_SHIFT		24
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 04/11] arm64/cpufeature: add ARMv8.3 id_aa64isar1 bits
  2017-07-19 16:01 [PATCH 00/11] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (2 preceding siblings ...)
  2017-07-19 16:01 ` [PATCH 03/11] arm64: add pointer authentication register bits Mark Rutland
@ 2017-07-19 16:01 ` Mark Rutland
  2017-07-24 10:54   ` Suzuki K Poulose
  2017-07-19 16:01 ` [PATCH 05/11] arm64/cpufeature: detect pointer authentication Mark Rutland
                   ` (10 subsequent siblings)
  14 siblings, 1 reply; 35+ messages in thread
From: Mark Rutland @ 2017-07-19 16:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, christoffer.dall, Dave.Martin, jiong.wang,
	kvmarm, linux-arch, marc.zyngier, mark.rutland, suzuki.poulose,
	will.deacon, yao.qi, linux-kernel, kernel-hardening

>From ARMv8.3 onwards, ID_AA64ISAR1 is no longer entirely RES0, and now
has four fields describing the presence of pointer authentication
functionality:

* APA - address authentication present, using an architected algorithm
* API - address authentication present, using an IMP DEF algorithm
* GPA - generic authentication present, using an architected algorithm
* GPI - generic authentication present, using an IMP DEF algoithm

This patch adds the requisite definitions so that we can identify the
presence of this functionality. For the timebeing, the features are
hidden from userspace.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/kernel/cpufeature.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9f9e0064..b23ad83 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -120,7 +120,11 @@ static int __init register_cpu_hwcaps_dumper(void)
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
-	ARM64_FTR_END,
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_GPI_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_GPA_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
+	ARM64_FTR_END
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 05/11] arm64/cpufeature: detect pointer authentication
  2017-07-19 16:01 [PATCH 00/11] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (3 preceding siblings ...)
  2017-07-19 16:01 ` [PATCH 04/11] arm64/cpufeature: add ARMv8.3 id_aa64isar1 bits Mark Rutland
@ 2017-07-19 16:01 ` Mark Rutland
  2017-07-19 16:01 ` [PATCH 06/11] arm64: Don't trap host pointer auth use to EL2 Mark Rutland
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 35+ messages in thread
From: Mark Rutland @ 2017-07-19 16:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, christoffer.dall, Dave.Martin, jiong.wang,
	kvmarm, linux-arch, marc.zyngier, mark.rutland, suzuki.poulose,
	will.deacon, yao.qi, linux-kernel, kernel-hardening

So that we can dynamically handle the presence of pointer authentication
functionality, wire up probing code in cpufeature.c.

Currently, this only detects the presence of an architected algorithm.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/cpucaps.h |  4 +++-
 arch/arm64/kernel/cpufeature.c   | 22 ++++++++++++++++++++++
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 8d2272c..903b458 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -39,7 +39,9 @@
 #define ARM64_WORKAROUND_QCOM_FALKOR_E1003	18
 #define ARM64_WORKAROUND_858921			19
 #define ARM64_WORKAROUND_CAVIUM_30115		20
+#define ARM64_HAS_ADDRESS_AUTH			21
+#define ARM64_HAS_GENERIC_AUTH			22
 
-#define ARM64_NCAPS				21
+#define ARM64_NCAPS				23
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index b23ad83..4016b1e7 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -892,6 +892,28 @@ static bool has_no_fpsimd(const struct arm64_cpu_capabilities *entry, int __unus
 		.min_field_value = 0,
 		.matches = has_no_fpsimd,
 	},
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+	{
+		.desc = "Address authentication (architected algorithm)",
+		.capability = ARM64_HAS_ADDRESS_AUTH,
+		.def_scope = SCOPE_SYSTEM,
+		.sys_reg = SYS_ID_AA64ISAR1_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64ISAR1_APA_SHIFT,
+		.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
+		.matches = has_cpuid_feature,
+	},
+	{
+		.desc = "Generic authentication (architected algorithm)",
+		.capability = ARM64_HAS_GENERIC_AUTH,
+		.def_scope = SCOPE_SYSTEM,
+		.sys_reg = SYS_ID_AA64ISAR1_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64ISAR1_GPA_SHIFT,
+		.min_field_value = ID_AA64ISAR1_GPA_ARCHITECTED,
+		.matches = has_cpuid_feature
+	},
+#endif /* CONFIG_ARM64_POINTER_AUTHENTICATION */
 	{},
 };
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 06/11] arm64: Don't trap host pointer auth use to EL2
  2017-07-19 16:01 [PATCH 00/11] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (4 preceding siblings ...)
  2017-07-19 16:01 ` [PATCH 05/11] arm64/cpufeature: detect pointer authentication Mark Rutland
@ 2017-07-19 16:01 ` Mark Rutland
  2017-07-24 10:37   ` Dave Martin
  2017-07-19 16:01 ` [PATCH 07/11] arm64: add basic pointer authentication support Mark Rutland
                   ` (8 subsequent siblings)
  14 siblings, 1 reply; 35+ messages in thread
From: Mark Rutland @ 2017-07-19 16:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, christoffer.dall, Dave.Martin, jiong.wang,
	kvmarm, linux-arch, marc.zyngier, mark.rutland, suzuki.poulose,
	will.deacon, yao.qi, linux-kernel, kernel-hardening

To allow EL0 (and/or EL1) to use pointer authentication functionality,
we must ensure that pointer authentication instructions and accesses to
pointer authentication keys are not trapped to EL2 (where we will not be
able to handle them).

This patch ensures that HCR_EL2 is configured appropriately when the
kernel is booted at EL2. For non-VHE kernels we set HCR_EL2.{API,APK},
ensuring that EL1 can access keys and permit EL0 use of instructions.
For VHE kernels, EL2 access is controlled by EL3, and we need not set
anything.

This does not enable support for KVM guests, since KVM manages HCR_EL2
itself.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/include/asm/kvm_arm.h |  2 ++
 arch/arm64/kernel/head.S         | 19 +++++++++++++++++--
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 61d694c..c1267e8 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -23,6 +23,8 @@
 #include <asm/types.h>
 
 /* Hyp Configuration Register (HCR) bits */
+#define HCR_API		(UL(1) << 41)
+#define HCR_APK		(UL(1) << 40)
 #define HCR_E2H		(UL(1) << 34)
 #define HCR_ID		(UL(1) << 33)
 #define HCR_CD		(UL(1) << 32)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 973df7d..8b8e8d7 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -412,10 +412,25 @@ CPU_LE(	bic	x0, x0, #(1 << 25)	)	// Clear the EE bit for EL2
 
 	/* Hyp configuration. */
 	mov	x0, #HCR_RW			// 64-bit EL1
-	cbz	x2, set_hcr
+	cbz	x2, 1f
 	orr	x0, x0, #HCR_TGE		// Enable Host Extensions
 	orr	x0, x0, #HCR_E2H
-set_hcr:
+1:
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+	/*
+	 * Disable pointer authentication traps to EL2. The HCR_EL2.{APK,API}
+	 * bits exist iff at least one authentication mechanism is implemented.
+	 */
+	mrs	x1, id_aa64isar1_el1
+	mov_q	x3, ((0xf << ID_AA64ISAR1_GPI_SHIFT) | \
+		     (0xf << ID_AA64ISAR1_GPA_SHIFT) | \
+		     (0xf << ID_AA64ISAR1_API_SHIFT) | \
+		     (0xf << ID_AA64ISAR1_APA_SHIFT))
+	and	x1, x1, x3
+	cbz	x1, 1f
+	orr	x0, x0, #(HCR_APK | HCR_API)
+1:
+#endif
 	msr	hcr_el2, x0
 	isb
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 07/11] arm64: add basic pointer authentication support
  2017-07-19 16:01 [PATCH 00/11] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (5 preceding siblings ...)
  2017-07-19 16:01 ` [PATCH 06/11] arm64: Don't trap host pointer auth use to EL2 Mark Rutland
@ 2017-07-19 16:01 ` Mark Rutland
  2017-07-25 15:26   ` Dave Martin
  2017-08-11  7:46   ` Yao Qi
  2017-07-19 16:01 ` [PATCH 08/11] arm64: expose user PAC bit positions via ptrace Mark Rutland
                   ` (7 subsequent siblings)
  14 siblings, 2 replies; 35+ messages in thread
From: Mark Rutland @ 2017-07-19 16:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, christoffer.dall, Dave.Martin, jiong.wang,
	kvmarm, linux-arch, marc.zyngier, mark.rutland, suzuki.poulose,
	will.deacon, yao.qi, linux-kernel, kernel-hardening

This patch adds basic support for pointer authentication, allowing
userspace to make use of APIAKey. The kernel maintains an APIAKey value
for each process (shared by all threads within), which is initialised to
a random value at exec() time.

Instructions using other keys (APIBKey, APDAKey, APDBKey) are disabled,
and will behave as NOPs. These may be made use of in future patches.

No support is added for the generic key (APGAKey), though this cannot be
trapped or made to behave as a NOP. Its presence is not advertised with
a hwcap.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/Kconfig                    | 23 +++++++++
 arch/arm64/include/asm/mmu.h          |  5 ++
 arch/arm64/include/asm/mmu_context.h  | 25 +++++++++-
 arch/arm64/include/asm/pointer_auth.h | 89 +++++++++++++++++++++++++++++++++++
 arch/arm64/include/uapi/asm/hwcap.h   |  1 +
 arch/arm64/kernel/cpufeature.c        | 11 +++++
 arch/arm64/kernel/cpuinfo.c           |  1 +
 7 files changed, 153 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/include/asm/pointer_auth.h

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index dfd9086..15a9931 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -962,6 +962,29 @@ config ARM64_UAO
 
 endmenu
 
+menu "ARMv8.3 architectural features"
+
+config ARM64_POINTER_AUTHENTICATION
+	bool "Enable support for pointer authentication"
+	default y
+	help
+	  Pointer authentication (part of the ARMv8.3 Extensions) provides
+	  instructions for signing and authenticating pointers against secret
+	  keys, which can be used to mitigate Return Oriented Programming (ROP)
+	  and other attacks.
+
+	  This option enables these instructions at EL0 (i.e. for userspace).
+
+	  Choosing this option will cause the kernel to initialise secret keys
+	  for each process at exec() time, with these keys being
+	  context-switched along with the process.
+
+	  The feature is detected at runtime. If the feature is not present in
+	  hardware it will not be advertised to userspace nor will it be
+	  enabled.
+
+endmenu
+
 config ARM64_MODULE_CMODEL_LARGE
 	bool
 
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 5468c83..6a848f3 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -16,10 +16,15 @@
 #ifndef __ASM_MMU_H
 #define __ASM_MMU_H
 
+#include <asm/pointer_auth.h>
+
 typedef struct {
 	atomic64_t	id;
 	void		*vdso;
 	unsigned long	flags;
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+	struct ptrauth_keys	ptrauth_keys;
+#endif
 } mm_context_t;
 
 /*
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 3257895a..06757a5 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -31,7 +31,6 @@
 #include <asm/cacheflush.h>
 #include <asm/cpufeature.h>
 #include <asm/proc-fns.h>
-#include <asm-generic/mm_hooks.h>
 #include <asm/cputype.h>
 #include <asm/pgtable.h>
 #include <asm/sysreg.h>
@@ -154,7 +153,14 @@ static inline void cpu_replace_ttbr1(pgd_t *pgd)
 #define destroy_context(mm)		do { } while(0)
 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu);
 
-#define init_new_context(tsk,mm)	({ atomic64_set(&(mm)->context.id, 0); 0; })
+static inline int init_new_context(struct task_struct *tsk,
+			struct mm_struct *mm)
+{
+	atomic64_set(&mm->context.id, 0);
+	mm_ctx_ptrauth_init(&mm->context);
+
+	return 0;
+}
 
 /*
  * This is called when "tsk" is about to enter lazy TLB mode.
@@ -200,6 +206,8 @@ static inline void __switch_mm(struct mm_struct *next)
 		return;
 	}
 
+	mm_ctx_ptrauth_switch(&next->context);
+
 	check_and_switch_context(next, cpu);
 }
 
@@ -226,6 +234,19 @@ static inline void __switch_mm(struct mm_struct *next)
 
 void verify_cpu_asid_bits(void);
 
+static inline void arch_dup_mmap(struct mm_struct *oldmm,
+				 struct mm_struct *mm)
+{
+	mm_ctx_ptrauth_dup(&oldmm->context, &mm->context);
+}
+#define arch_dup_mmap arch_dup_mmap
+
+/*
+ * We need to override arch_dup_mmap before including the generic hooks, which
+ * are otherwise sufficient for us.
+ */
+#include <asm-generic/mm_hooks.h>
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* !__ASM_MMU_CONTEXT_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
new file mode 100644
index 0000000..964da0c
--- /dev/null
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -0,0 +1,89 @@
+/*
+ * Copyright (C) 2016 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_POINTER_AUTH_H
+#define __ASM_POINTER_AUTH_H
+
+#include <linux/random.h>
+
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
+
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+/*
+ * Each key is a 128-bit quantity which is split accross a pair of 64-bit
+ * registers (Lo and Hi).
+ */
+struct ptrauth_key {
+	unsigned long lo, hi;
+};
+
+/*
+ * We give each process its own instruction A key (APIAKey), which is shared by
+ * all threads. This is inherited upon fork(), and reinitialised upon exec*().
+ * All other keys are currently unused, with APIBKey, APDAKey, and APBAKey
+ * instructions behaving as NOPs.
+ */
+struct ptrauth_keys {
+	struct ptrauth_key apia;
+};
+
+static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
+{
+	if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+		return;
+
+	get_random_bytes(keys, sizeof(*keys));
+}
+
+#define __ptrauth_key_install(k, v)			\
+do {							\
+	write_sysreg_s(v.lo, SYS_ ## k ## KEYLO_EL1);	\
+	write_sysreg_s(v.hi, SYS_ ## k ## KEYHI_EL1);	\
+} while (0)
+
+static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
+{
+	if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+		return;
+
+	__ptrauth_key_install(APIA, keys->apia);
+}
+
+static inline void ptrauth_keys_dup(struct ptrauth_keys *old,
+				    struct ptrauth_keys *new)
+{
+	if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+		return;
+
+	*new = *old;
+}
+
+#define mm_ctx_ptrauth_init(ctx) \
+	ptrauth_keys_init(&(ctx)->ptrauth_keys)
+
+#define mm_ctx_ptrauth_switch(ctx) \
+	ptrauth_keys_switch(&(ctx)->ptrauth_keys)
+
+#define mm_ctx_ptrauth_dup(oldctx, newctx) \
+	ptrauth_keys_dup(&(oldctx)->ptrauth_keys, &(newctx)->ptrauth_keys)
+
+#else
+#define mm_ctx_ptrauth_init(ctx)
+#define mm_ctx_ptrauth_switch(ctx)
+#define mm_ctx_ptrauth_dup(oldctx, newctx)
+#endif
+
+#endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
index 4e187ce..0481c73 100644
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -35,5 +35,6 @@
 #define HWCAP_JSCVT		(1 << 13)
 #define HWCAP_FCMA		(1 << 14)
 #define HWCAP_LRCPC		(1 << 15)
+#define HWCAP_APIA		(1 << 16)
 
 #endif /* _UAPI__ASM_HWCAP_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 4016b1e7..7e2885e 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -778,6 +778,15 @@ static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused
 	return is_kernel_in_hyp_mode();
 }
 
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+static int cpu_enable_address_auth(void *__unused)
+{
+	config_sctlr_el1(0, SCTLR_ELx_ENIA);
+
+	return 0;
+}
+#endif /* CONFIG_ARM64_POINTER_AUTHENTICATION */
+
 static bool hyp_offset_low(const struct arm64_cpu_capabilities *entry,
 			   int __unused)
 {
@@ -902,6 +911,7 @@ static bool has_no_fpsimd(const struct arm64_cpu_capabilities *entry, int __unus
 		.field_pos = ID_AA64ISAR1_APA_SHIFT,
 		.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
 		.matches = has_cpuid_feature,
+		.enable = cpu_enable_address_auth,
 	},
 	{
 		.desc = "Generic authentication (architected algorithm)",
@@ -945,6 +955,7 @@ static bool has_no_fpsimd(const struct arm64_cpu_capabilities *entry, int __unus
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_JSCVT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_JSCVT),
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FCMA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_FCMA),
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_LRCPC),
+	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_APA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_APIA),
 	{},
 };
 
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index f495ee5..b5bd2d3 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -68,6 +68,7 @@
 	"jscvt",
 	"fcma",
 	"lrcpc",
+	"apia",
 	NULL
 };
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 08/11] arm64: expose user PAC bit positions via ptrace
  2017-07-19 16:01 [PATCH 00/11] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (6 preceding siblings ...)
  2017-07-19 16:01 ` [PATCH 07/11] arm64: add basic pointer authentication support Mark Rutland
@ 2017-07-19 16:01 ` Mark Rutland
  2017-07-19 16:01 ` [PATCH 09/11] arm64/kvm: preserve host HCR_EL2 value Mark Rutland
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 35+ messages in thread
From: Mark Rutland @ 2017-07-19 16:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, christoffer.dall, Dave.Martin, jiong.wang,
	kvmarm, linux-arch, marc.zyngier, mark.rutland, suzuki.poulose,
	will.deacon, yao.qi, linux-kernel, kernel-hardening

When pointer authentication is in use, data/instruction pointers have a
number of PAC bits inserted into them. The number and position of these
bits depends on the configured TCR_ELx.TxSZ and whether tagging is
enabled. ARMv8.3 allows tagging to differ for instruction and data
pointers.

For userspace debuggers to unwind the stack and/or to follow pointer
chains, they need to be able to remove the PAC bits before attempting to
use a pointer.

This patch adds a new structure with masks describing the location of
the PAC bits in userspace instruction and data pointers (i.e. those
addressable via TTBR0), which userspace can query via PTRACE_GETREGSET.
By clearing these bits from pointers, userspace can acquire the PAC-less
versions.

This new regset is exposed when the kernel is built with (user) pointer
authentication support, and the feature is enabled. Otherwise, it is
hidden.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jiong Wang <jiong.wang@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yao Qi <yao.qi@arm.com>
---
 arch/arm64/include/asm/pointer_auth.h |  8 +++++++
 arch/arm64/include/uapi/asm/ptrace.h  |  5 +++++
 arch/arm64/kernel/ptrace.c            | 39 +++++++++++++++++++++++++++++++++++
 include/uapi/linux/elf.h              |  1 +
 4 files changed, 53 insertions(+)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 964da0c..ae72c7c 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -16,9 +16,11 @@
 #ifndef __ASM_POINTER_AUTH_H
 #define __ASM_POINTER_AUTH_H
 
+#include <linux/bitops.h>
 #include <linux/random.h>
 
 #include <asm/cpufeature.h>
+#include <asm/memory.h>
 #include <asm/sysreg.h>
 
 #ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
@@ -71,6 +73,12 @@ static inline void ptrauth_keys_dup(struct ptrauth_keys *old,
 	*new = *old;
 }
 
+/*
+ * The pointer bits used by a pointer authentication code.
+ * If we were to use tagged pointers, bits 63:56 would also apply.
+ */
+#define ptrauth_pac_mask() 	GENMASK(54, VA_BITS)
+
 #define mm_ctx_ptrauth_init(ctx) \
 	ptrauth_keys_init(&(ctx)->ptrauth_keys)
 
diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index d1ff83d..5092fbf 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -90,6 +90,11 @@ struct user_hwdebug_state {
 	}		dbg_regs[16];
 };
 
+struct user_pac_mask {
+	__u64		data_mask;
+	__u64		insn_mask;
+};
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _UAPI__ASM_PTRACE_H */
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 1b38c01..fae9d50 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -40,8 +40,10 @@
 #include <linux/elf.h>
 
 #include <asm/compat.h>
+#include <asm/cpufeature.h>
 #include <asm/debug-monitors.h>
 #include <asm/pgtable.h>
+#include <asm/pointer_auth.h>
 #include <asm/syscall.h>
 #include <asm/traps.h>
 #include <asm/system_misc.h>
@@ -701,6 +703,30 @@ static int system_call_set(struct task_struct *target,
 	return ret;
 }
 
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+static int pac_mask_get(struct task_struct *target,
+			const struct user_regset *regset,
+			unsigned int pos, unsigned int count,
+			void *kbuf, void __user *ubuf)
+{
+	/*
+	 * The PAC bits can differ across data and instruction pointers
+	 * depending on TCR_EL1.TBID*, which we may make use of in future, so
+	 * we expose separate masks.
+	 */
+	unsigned long mask = ptrauth_pac_mask();
+	struct user_pac_mask uregs = {
+		.data_mask = mask,
+		.insn_mask = mask,
+	};
+
+	if (!cpus_have_cap(ARM64_HAS_ADDRESS_AUTH))
+		return -EINVAL;
+
+	return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &uregs, 0, -1);
+}
+#endif
+
 enum aarch64_regset {
 	REGSET_GPR,
 	REGSET_FPR,
@@ -710,6 +736,9 @@ enum aarch64_regset {
 	REGSET_HW_WATCH,
 #endif
 	REGSET_SYSTEM_CALL,
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+	REGSET_PAC_MASK,
+#endif
 };
 
 static const struct user_regset aarch64_regsets[] = {
@@ -767,6 +796,16 @@ enum aarch64_regset {
 		.get = system_call_get,
 		.set = system_call_set,
 	},
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+	[REGSET_PAC_MASK] = {
+		.core_note_type = NT_ARM_PAC_MASK,
+		.n = sizeof(struct user_pac_mask) / sizeof(u64),
+		.size = sizeof(u64),
+		.align = sizeof(u64),
+		.get = pac_mask_get,
+		/* this cannot be set dynamically */
+	},
+#endif
 };
 
 static const struct user_regset_view user_aarch64_view = {
diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
index b5280db..60652f1 100644
--- a/include/uapi/linux/elf.h
+++ b/include/uapi/linux/elf.h
@@ -416,6 +416,7 @@
 #define NT_ARM_HW_BREAK	0x402		/* ARM hardware breakpoint registers */
 #define NT_ARM_HW_WATCH	0x403		/* ARM hardware watchpoint registers */
 #define NT_ARM_SYSTEM_CALL	0x404	/* ARM system call number */
+#define NT_ARM_PAC_MASK		0x406	/* ARM pointer authentication code masks */
 #define NT_METAG_CBUF	0x500		/* Metag catch buffer registers */
 #define NT_METAG_RPIPE	0x501		/* Metag read pipeline state */
 #define NT_METAG_TLS	0x502		/* Metag TLS pointer */
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 09/11] arm64/kvm: preserve host HCR_EL2 value
  2017-07-19 16:01 [PATCH 00/11] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (7 preceding siblings ...)
  2017-07-19 16:01 ` [PATCH 08/11] arm64: expose user PAC bit positions via ptrace Mark Rutland
@ 2017-07-19 16:01 ` Mark Rutland
  2017-08-01 11:00   ` Christoffer Dall
  2017-07-19 16:01 ` [PATCH 10/11] arm64/kvm: context-switch ptrauth registers Mark Rutland
                   ` (5 subsequent siblings)
  14 siblings, 1 reply; 35+ messages in thread
From: Mark Rutland @ 2017-07-19 16:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, christoffer.dall, Dave.Martin, jiong.wang,
	kvmarm, linux-arch, marc.zyngier, mark.rutland, suzuki.poulose,
	will.deacon, yao.qi, linux-kernel, kernel-hardening

When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
is a constant value. This works today, as the host HCR_EL2 value is
always the same, but this will get in the way of supporting extensions
that require HCR_EL2 bits to be set conditionally for the host.

To allow such features to work without KVM having to explicitly handle
every possible host feature combination, this patch has KVM save/restore
the host HCR when switching to/from a guest HCR.

For __{activate,deactivate}_traps(), the HCR save/restore is made common
across the !VHE and VHE paths. As the host and guest HCR values must
have E2H set when VHE is in use, register redirection should always be
in effect at EL2, and this change should not adversely affect the VHE
code.

For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
to toggle the TGE bit with a RMW sequence, as we already do in
__tlb_switch_to_guest_vhe().

The now unused HCR_HOST_VHE_FLAGS definition is removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/include/asm/kvm_arm.h  | 1 -
 arch/arm64/include/asm/kvm_host.h | 5 ++++-
 arch/arm64/kvm/hyp/switch.c       | 5 +++--
 arch/arm64/kvm/hyp/tlb.c          | 6 +++++-
 4 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index c1267e8..7b9c898 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -84,7 +84,6 @@
 			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW)
 #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
 #define HCR_INT_OVERRIDE   (HCR_FMO | HCR_IMO)
-#define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
 
 /* TCR_EL2 Registers bits */
 #define TCR_EL2_RES1		((1 << 31) | (1 << 23))
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index d686300..0d7c3dd 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -198,10 +198,13 @@ struct kvm_cpu_context {
 struct kvm_vcpu_arch {
 	struct kvm_cpu_context ctxt;
 
-	/* HYP configuration */
+	/* Guest HYP configuration */
 	u64 hcr_el2;
 	u32 mdcr_el2;
 
+	/* Host HYP configuration */
+	u64 host_hcr_el2;
+
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
 
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 945e79c..6108813 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -71,6 +71,8 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 {
 	u64 val;
 
+	vcpu->arch.host_hcr_el2 = read_sysreg(hcr_el2);
+
 	/*
 	 * We are about to set CPTR_EL2.TFP to trap all floating point
 	 * register accesses to EL2, however, the ARM ARM clearly states that
@@ -110,7 +112,6 @@ static void __hyp_text __deactivate_traps_vhe(void)
 		    MDCR_EL2_TPMS;
 
 	write_sysreg(mdcr_el2, mdcr_el2);
-	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
 	write_sysreg(CPACR_EL1_FPEN, cpacr_el1);
 	write_sysreg(vectors, vbar_el1);
 }
@@ -123,7 +124,6 @@ static void __hyp_text __deactivate_traps_nvhe(void)
 	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
 
 	write_sysreg(mdcr_el2, mdcr_el2);
-	write_sysreg(HCR_RW, hcr_el2);
 	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
 }
 
@@ -145,6 +145,7 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
 	__deactivate_traps_arch()();
 	write_sysreg(0, hstr_el2);
 	write_sysreg(0, pmuserenr_el0);
+	write_sysreg(vcpu->arch.host_hcr_el2, hcr_el2);
 }
 
 static void __hyp_text __activate_vm(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index 73464a9..c2b0680 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -49,12 +49,16 @@ static hyp_alternate_select(__tlb_switch_to_guest,
 
 static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm)
 {
+	u64 val;
+
 	/*
 	 * We're done with the TLB operation, let's restore the host's
 	 * view of HCR_EL2.
 	 */
 	write_sysreg(0, vttbr_el2);
-	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
+	val = read_sysreg(hcr_el2);
+	val |= HCR_TGE;
+	write_sysreg(val, hcr_el2);
 }
 
 static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm)
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 10/11] arm64/kvm: context-switch ptrauth registers
  2017-07-19 16:01 [PATCH 00/11] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (8 preceding siblings ...)
  2017-07-19 16:01 ` [PATCH 09/11] arm64/kvm: preserve host HCR_EL2 value Mark Rutland
@ 2017-07-19 16:01 ` Mark Rutland
  2017-08-01 11:00   ` Christoffer Dall
  2017-07-19 16:01 ` [PATCH 11/11] arm64: docs: document pointer authentication Mark Rutland
                   ` (4 subsequent siblings)
  14 siblings, 1 reply; 35+ messages in thread
From: Mark Rutland @ 2017-07-19 16:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, christoffer.dall, Dave.Martin, jiong.wang,
	kvmarm, linux-arch, marc.zyngier, mark.rutland, suzuki.poulose,
	will.deacon, yao.qi, linux-kernel, kernel-hardening

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys, and enable the feature. The next time the
vcpu is scheduled out/in, we start again.

This is sufficient for systems with uniform pointer authentication
support. For systems with mismatched support, it will be necessary to
hide the feature from the guest's view of the ID registers.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/include/asm/kvm_host.h | 23 +++++++++-
 arch/arm64/include/asm/kvm_hyp.h  |  7 +++
 arch/arm64/kvm/handle_exit.c      | 21 +++++++++
 arch/arm64/kvm/hyp/Makefile       |  1 +
 arch/arm64/kvm/hyp/ptrauth-sr.c   | 91 +++++++++++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/switch.c       |  4 ++
 arch/arm64/kvm/sys_regs.c         | 32 ++++++++++++++
 7 files changed, 178 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 0d7c3dd..f97defa 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -135,6 +135,18 @@ enum vcpu_sysreg {
 	PMSWINC_EL0,	/* Software Increment Register */
 	PMUSERENR_EL0,	/* User Enable Register */
 
+	/* Pointer Authentication Registers */
+	APIAKEYLO_EL1,
+	APIAKEYHI_EL1,
+	APIBKEYLO_EL1,
+	APIBKEYHI_EL1,
+	APDAKEYLO_EL1,
+	APDAKEYHI_EL1,
+	APDBKEYLO_EL1,
+	APDBKEYHI_EL1,
+	APGAKEYLO_EL1,
+	APGAKEYHI_EL1,
+
 	/* 32bit specific registers. Keep them at the end of the range */
 	DACR32_EL2,	/* Domain Access Control Register */
 	IFSR32_EL2,	/* Instruction Fault Status Register */
@@ -368,10 +380,19 @@ static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
 	__kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr);
 }
 
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
+
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
-static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
+
+static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
+{
+	kvm_arm_vcpu_ptrauth_disable(vcpu);
+}
+
 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
 
 void kvm_arm_init_debug(void);
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 4572a9b..3093f35 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -152,6 +152,13 @@ void __debug_restore_state(struct kvm_vcpu *vcpu,
 void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
 bool __fpsimd_enabled(void);
 
+void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
+			       struct kvm_cpu_context *host_ctxt,
+			       struct kvm_cpu_context *guest_ctxt);
+void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
+			      struct kvm_cpu_context *host_ctxt,
+			      struct kvm_cpu_context *guest_ctxt);
+
 u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
 void __noreturn __hyp_do_panic(unsigned long, ...);
 
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 17d8a16..9fc561f 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -136,6 +136,26 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	return ret;
 }
 
+/*
+ * Handle the guest trying to use a ptrauth instruction, or trying to access a
+ * ptrauth register.
+ */
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
+{
+	if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH) &&
+	    cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
+		kvm_arm_vcpu_ptrauth_enable(vcpu);
+	} else {
+		kvm_inject_undefined(vcpu);
+	}
+}
+
+static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
+	return 1;
+}
+
 static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
 	u32 hsr = kvm_vcpu_get_hsr(vcpu);
@@ -168,6 +188,7 @@ static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	[ESR_ELx_EC_BKPT32]	= kvm_handle_guest_debug,
 	[ESR_ELx_EC_BRK64]	= kvm_handle_guest_debug,
 	[ESR_ELx_EC_FP_ASIMD]	= handle_no_fpsimd,
+	[ESR_ELx_EC_PAC]	= kvm_handle_ptrauth,
 };
 
 static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 14c4e3b..91f2100 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -18,6 +18,7 @@ obj-$(CONFIG_KVM_ARM_HOST) += fpsimd.o
 obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
 obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
 obj-$(CONFIG_KVM_ARM_HOST) += s2-setup.o
+obj-$(CONFIG_KVM_ARM_HOST) += ptrauth-sr.o
 
 # KVM code is run at a different exception code with a different map, so
 # compiler instrumentation that inserts callbacks or checks into the code may
diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
new file mode 100644
index 0000000..2784fb3
--- /dev/null
+++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
@@ -0,0 +1,91 @@
+/*
+ * Copyright (C) 2017 ARM Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/compiler.h>
+#include <linux/kvm_host.h>
+
+#include <asm/cpucaps.h>
+#include <asm/cpufeature.h>
+#include <asm/kvm_asm.h>
+#include <asm/kvm_hyp.h>
+
+static bool __hyp_text __ptrauth_is_enabled(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.hcr_el2 & (HCR_API | HCR_APK);
+}
+
+#define __ptrauth_save_key(regs, key)						\
+({										\
+	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
+	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
+})
+
+static void __hyp_text __ptrauth_save_state(struct kvm_cpu_context *ctxt)
+{
+	if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH)) {
+		__ptrauth_save_key(ctxt->sys_regs, APIA);
+		__ptrauth_save_key(ctxt->sys_regs, APIB);
+		__ptrauth_save_key(ctxt->sys_regs, APDA);
+		__ptrauth_save_key(ctxt->sys_regs, APDB);
+	}
+
+	if (cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
+		__ptrauth_save_key(ctxt->sys_regs, APGA);
+	}
+}
+
+#define __ptrauth_restore_key(regs, key) 					\
+({										\
+	write_sysreg_s(regs[key ## KEYLO_EL1], SYS_ ## key ## KEYLO_EL1);	\
+	write_sysreg_s(regs[key ## KEYHI_EL1], SYS_ ## key ## KEYHI_EL1);	\
+})
+
+static void __hyp_text __ptrauth_restore_state(struct kvm_cpu_context *ctxt)
+{
+
+	if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH)) {
+		__ptrauth_restore_key(ctxt->sys_regs, APIA);
+		__ptrauth_restore_key(ctxt->sys_regs, APIB);
+		__ptrauth_restore_key(ctxt->sys_regs, APDA);
+		__ptrauth_restore_key(ctxt->sys_regs, APDB);
+	}
+
+	if (cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
+		__ptrauth_restore_key(ctxt->sys_regs, APGA);
+	}
+}
+
+void __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
+					  struct kvm_cpu_context *host_ctxt,
+					  struct kvm_cpu_context *guest_ctxt)
+{
+	if (!__ptrauth_is_enabled(vcpu))
+		return;
+
+	__ptrauth_save_state(host_ctxt);
+	__ptrauth_restore_state(guest_ctxt);
+}
+
+void __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
+					 struct kvm_cpu_context *host_ctxt,
+					 struct kvm_cpu_context *guest_ctxt)
+{
+	if (!__ptrauth_is_enabled(vcpu))
+		return;
+
+	__ptrauth_save_state(guest_ctxt);
+	__ptrauth_restore_state(host_ctxt);
+}
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 6108813..df609d9 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -309,6 +309,8 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__sysreg_restore_guest_state(guest_ctxt);
 	__debug_restore_state(vcpu, kern_hyp_va(vcpu->arch.debug_ptr), guest_ctxt);
 
+	__ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt);
+
 	/* Jump in the fire! */
 again:
 	exit_code = __guest_enter(vcpu, host_ctxt);
@@ -367,6 +369,8 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	fp_enabled = __fpsimd_enabled();
 
+	__ptrauth_switch_to_host(vcpu, host_ctxt, guest_ctxt);
+
 	__sysreg_save_guest_state(guest_ctxt);
 	__sysreg32_save_state(vcpu);
 	__timer_save_state(vcpu);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 7786288..0f7d9ae 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -837,6 +837,32 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
 	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
 
+
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
+}
+
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
+}
+
+static bool trap_ptrauth(struct kvm_vcpu *vcpu,
+			 struct sys_reg_params *p,
+			 const struct sys_reg_desc *rd)
+{
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
+	return false;
+}
+
+#define __PTRAUTH_KEY(k)						\
+	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k }
+
+#define PTRAUTH_KEY(k)							\
+	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
+	__PTRAUTH_KEY(k ## KEYHI_EL1)
+
 static bool access_cntp_tval(struct kvm_vcpu *vcpu,
 		struct sys_reg_params *p,
 		const struct sys_reg_desc *r)
@@ -950,6 +976,12 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
 	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
 	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
 
+	PTRAUTH_KEY(APIA),
+	PTRAUTH_KEY(APIB),
+	PTRAUTH_KEY(APDA),
+	PTRAUTH_KEY(APDB),
+	PTRAUTH_KEY(APGA),
+
 	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
 	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
 	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 11/11] arm64: docs: document pointer authentication
  2017-07-19 16:01 [PATCH 00/11] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (9 preceding siblings ...)
  2017-07-19 16:01 ` [PATCH 10/11] arm64/kvm: context-switch ptrauth registers Mark Rutland
@ 2017-07-19 16:01 ` Mark Rutland
  2017-07-21 17:05 ` [PATCH 00/11] ARMv8.3 pointer authentication userspace support Dave Martin
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 35+ messages in thread
From: Mark Rutland @ 2017-07-19 16:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: arnd, catalin.marinas, christoffer.dall, Dave.Martin, jiong.wang,
	kvmarm, linux-arch, marc.zyngier, mark.rutland, suzuki.poulose,
	will.deacon, yao.qi, linux-kernel, kernel-hardening

Now that we've added code to support pointer authentication, add some
documentation so that people can figure out if/how to use it.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jiong Wang <jiong.wang@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yao Qi <yao.qi@arm.com>
---
 Documentation/arm64/booting.txt                |  8 +++
 Documentation/arm64/elf_hwcaps.txt             |  5 ++
 Documentation/arm64/pointer-authentication.txt | 85 ++++++++++++++++++++++++++
 3 files changed, 98 insertions(+)
 create mode 100644 Documentation/arm64/pointer-authentication.txt

diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
index 8d0df62..8df9f46 100644
--- a/Documentation/arm64/booting.txt
+++ b/Documentation/arm64/booting.txt
@@ -205,6 +205,14 @@ Before jumping into the kernel, the following conditions must be met:
     ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b0.
   - The DT or ACPI tables must describe a GICv2 interrupt controller.
 
+  For CPUs with pointer authentication functionality:
+  - If EL3 is present:
+    SCR_EL3.APK (bit 16) must be initialised to 0b1
+    SCR_EL3.API (bit 17) must be initialised to 0b1
+  - If the kernel is entered at EL1:
+    HCR_EL2.APK (bit 40) must be initialised to 0b1
+    HCR_EL2.API (bit 41) must be initialised to 0b1
+
 The requirements described above for CPU mode, caches, MMUs, architected
 timers, coherency and system registers apply to all CPUs.  All CPUs must
 enter the kernel in the same exception level.
diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
index 7bc2921..91c4441 100644
--- a/Documentation/arm64/elf_hwcaps.txt
+++ b/Documentation/arm64/elf_hwcaps.txt
@@ -131,3 +131,8 @@ HWCAP_LRCPC
 
     Functionality implied by ID_AA64ISAR1_EL1.LRCPC == 0b0001.
 
+HWCAP_APIA
+
+    EL0 AddPac and Auth functionality using APIAKey_EL1 is enabled, as
+    described by Documentation/arm64/pointer-authentication.txt.
+
diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
new file mode 100644
index 0000000..e9b5c6b
--- /dev/null
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -0,0 +1,85 @@
+Pointer authentication in AArch64 Linux
+=======================================
+
+Author: Mark Rutland <mark.rutland@arm.com>
+Date: 2017-07-19
+
+This document briefly describes the provision of pointer authentication
+functionality in AArch64 Linux.
+
+
+Architecture overview
+---------------------
+
+The ARMv8.3 Pointer Authentication extension adds primitives that can be
+used to mitigate certain classes of attack where an attacker can corrupt
+the contents of some memory (e.g. the stack).
+
+The extension uses a Pointer Authentication Code (PAC) to determine
+whether pointers have been modified unexpectedly. A PAC is derived from
+a pointer, another value (such as the stack pointer), and a secret key
+held in system registers.
+
+The extension adds instructions to insert a valid PAC into a pointer,
+and to verify/remove the PAC from a pointer. The PAC occupies a number
+of high-order bits of the pointer, which varies dependent on the
+configured virtual address size and whether pointer tagging is in use.
+
+A subset of these instructions have been allocated from the HINT
+encoding space. In the absence of the extension (or when disabled),
+these instructions behave as NOPs. Applications and libraries using
+these instructions operate correctly regardless of the presence of the
+extension.
+
+
+Basic support
+-------------
+
+When CONFIG_ARM64_POINTER_AUTHENTICATION is selected, and relevant HW
+support is present, the kernel will assign a random APIAKey value to
+each process at exec*() time. This key is shared by all threads within
+the process, and the key is preserved across fork(). Presence of
+functionality using APIAKey is advertised via HWCAP_APIA.
+
+Recent versions of GCC can compile code with APIAKey-based return
+address protection when passed the -msign-return-address option. This
+uses instructions in the HINT space, and such code can run on systems
+without the pointer authentication extension.
+
+The remaining instruction and data keys (APIBKey, APDAKey, APDBKey) are
+reserved for future use, and instructions using these keys must not be
+used by software until a purpose and scope for their use has been
+decided. To enable future software using these keys to function on
+contemporary kernels, where possible, instructions using these keys are
+made to behave as NOPs.
+
+The generic key (APGAKey) is currently unsupported. Instructions using
+the generic key must not be used by software.
+
+
+Debugging
+---------
+
+When CONFIG_ARM64_POINTER_AUTHENTICATION is selected, and relevant HW
+support is present, the kernel will expose the position of TTBR0 PAC
+bits in the NT_ARM_PAC_MASK regset (struct user_pac_mask), which
+userspace can acqure via PTRACE_GETREGSET.
+
+Separate masks are exposed for data pointers and instruction pointers,
+as the set of PAC bits can vary between the two. Debuggers should not
+expect that HWCAP_APIA implies the presence (or non-presence) of this
+regset -- in future the kernel may support the use of APIBKey, APDAKey,
+and/or APBAKey, even in the absence of APIAKey.
+
+Note that the masks apply to TTBR0 addresses, and are not valid to apply
+to TTBR1 addresses (e.g. kernel pointers).
+
+
+Virtualization
+--------------
+
+When CONFIG_ARM64_POINTER_AUTHENTICATION is selected, and uniform HW
+support is present, KVM will context switch all keys used by vCPUs.
+Otherwise, the feature is disabled. When disabled, accesses to keys, or
+use of instructions enabled within the guest will trap to EL2, and an
+UNDEFINED exception will be injected into the guest.
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* Re: [PATCH 00/11] ARMv8.3 pointer authentication userspace support
  2017-07-19 16:01 [PATCH 00/11] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (10 preceding siblings ...)
  2017-07-19 16:01 ` [PATCH 11/11] arm64: docs: document pointer authentication Mark Rutland
@ 2017-07-21 17:05 ` Dave Martin
  2017-07-25 12:06   ` Mark Rutland
  2017-07-24 11:52 ` Yao Qi
                   ` (2 subsequent siblings)
  14 siblings, 1 reply; 35+ messages in thread
From: Dave Martin @ 2017-07-21 17:05 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-arch, arnd, jiong.wang, marc.zyngier,
	catalin.marinas, yao.qi, suzuki.poulose, will.deacon,
	linux-kernel, kernel-hardening, kvmarm, christoffer.dall

On Wed, Jul 19, 2017 at 05:01:21PM +0100, Mark Rutland wrote:
> This series adds support for the ARMv8.3 pointer authentication extension.
> 
> Since RFC [1]:
> * Make the KVM context switch (semi-lazy)
> * Rebase to v4.13-rc1
> * Improve pointer authentication documentation
> * Add hwcap documentation
> * Various minor cleanups
> 
> I've pushed the series to the arm64/pointer-auth branch [2] of my linux tree.
> I've also pushed out a necessary bootwrapper patch to the pointer-auth branch
> [3] of my bootwrapper repo.

[...]

> Open questions
> ==============
> 
> * Should keys be per-thread rather than per-process?
> 
>   My understanding is that glibc can't (currently) handle threads having
>   different keys, but it might be that another libc would prefer per-thread

Can you elaborate?

It's not valid to do a function return from one thread to another.

>   keys. If desired, we could add a mechanism for a thread to re-initialize its
>   keys without an exec*().

Switching from per-process to per-thread keys would be an ABI break
unless it's opt-in.

> * Do we need a separate hwcap for XPAC* instructions?
> 
>   Library code performing stack unwinding may need to interoperate with other
>   code which may or may not be using pointer authentication. It may be
>   desirable to use XPAC* rather than attempting authentication and/or acquiring
>   the PAC masks via ptrace.
> 
>   As we may expose APIBKey (potentially separately from APIAKey) in future,
>   HWCAP_APIA cannot be used to determine when these instruction can/should be
>   used.

Can the availability of XPAC* be determined from the feature regs, or
is that insufficient?

This is a little different from the keys, which the kernel must
provision / allow to be set in order for them to be useful.

> * Should we expose a per-process data key now, to go with the insn key?
> 
>   I don't currently have a use-case for this.
> 
> * Should we expose generic authentication (i.e. APGAKey)?
> 
>   I don't currently have a use-case for this.

I guess there's no ABI impact for adding these later, so maybe it's not
urgent if nobody shouts.

> * Should the kernel remove PACs when unwinding user stacks?
> 
>   This is simple to do, but it's arguably placing a policy in the kernel as to
>   what we expect user stacks to look like. Regardless, userspace will have to
>   perform this when unwinding with DWARF.

Not sure.  This is arguably not more gross than related things the
kernel already does, and may be inefficient for userspace to do e.g.,
when capturing perf backtraces.  Still gross though.

Side question: do you know whether there will be DWARF / ELF annotations
for this?  Since ptr auth is a compile-time option, it is plausible that
an attribute could be added to indicate that an image uses it.

Cheers
---Dave

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 01/11] arm64: docs: describe ELF hwcaps
  2017-07-19 16:01 ` [PATCH 01/11] arm64: docs: describe ELF hwcaps Mark Rutland
@ 2017-07-21 17:05   ` Dave Martin
  2017-07-24 10:47     ` Suzuki K Poulose
  2017-08-03 14:49   ` Catalin Marinas
  2017-08-03 17:57   ` Kees Cook
  2 siblings, 1 reply; 35+ messages in thread
From: Dave Martin @ 2017-07-21 17:05 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-arch, arnd, jiong.wang, marc.zyngier,
	catalin.marinas, yao.qi, suzuki.poulose, will.deacon,
	linux-kernel, kernel-hardening, kvmarm, christoffer.dall

On Wed, Jul 19, 2017 at 05:01:22PM +0100, Mark Rutland wrote:
> We don't document our ELF hwcaps, leaving developers to interpret them
> according to hearsay, guesswork, or (in exceptional cases) inspection of
> the current kernel code.
> 
> This is less than optimal, and it would be far better if we had some
> definitive description of each of the ELF hwcaps that developers could
> refer to.
> 
> This patch adds a document describing the (native) arm64 ELF hwcaps.

Minor nit: what do the hwcaps have to do with ELF really?  Can we just
call them "hwcaps"?

I'm not sure of the history here.

> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Dave Martin <Dave.Martin@arm.com>
> Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  Documentation/arm64/elf_hwcaps.txt | 133 +++++++++++++++++++++++++++++++++++++
>  1 file changed, 133 insertions(+)
>  create mode 100644 Documentation/arm64/elf_hwcaps.txt
> 
> diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
> new file mode 100644
> index 0000000..7bc2921
> --- /dev/null
> +++ b/Documentation/arm64/elf_hwcaps.txt
> @@ -0,0 +1,133 @@
> +ARM64 ELF hwcaps
> +================
> +
> +This document describes the usage and semantics of the arm64 ELF hwcaps.
> +
> +
> +1. Introduction
> +---------------
> +
> +Some hardware or software features are only available on some CPU
> +implementations, and/or with certain kernel configurations, but have no
> +architected discovery mechanism available to userspace code at EL0. The
> +kernel exposes the presence of these features to userspace through a set
> +of flags called hwcaps, exposed in the auxilliary vector.
> +
> +Userspace software can test for features by acquiring the AT_HWCAP entry
> +of the auxilliary vector, and testing whether the relevant flags are
> +set, e.g.
> +
> +bool floating_point_is_present(void)
> +{
> +	unsigned long hwcaps = getauxval(AT_HWCAP);
> +	if (hwcaps & HWCAP_FP)
> +		return true;
> +
> +	return false;
> +}
> +
> +Where software relies on a feature described by a hwcap, it should check
> +the relevant hwcap flag to verify that the feature is present before
> +attempting to make use of the feature.
> +
> +Features cannot be probed reliably through other means. When a feature
> +is not available, attempting to use it may result in unpredictable

This says that features cannot be probed reliably via the (emulated) ID
registers available with HWCAP_CPUID.

So, what use is the ID register emulation?

For each of hwcaps and cpuid, a particular feature may be reported as
present (y), absent (n), or not described at all (x):

hwcap>    x n y
cpuid:
   x      N N Y
   n      N N
   y      Y   Y

I've filled in the straightforward cases, where software may (Y) or must
not (N) use the feature.

In the cases left blank, hwcap and cpuid disagree.

Are we confident that should never be observed -- i.e., it's a kernel
bug if seen?  If so, we can fill Ys in there.  But we need to be clear
about cases where the hwcap doesn't mean exactly the same as the
corresponding CPUID feature.  The hwcap may tell software it can assume
that certain kernel ABI extensions related to that CPU feature are
available for example.

This also affects how HWCAP_CPUID is described below.

[...]

Cheers
---Dave

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 06/11] arm64: Don't trap host pointer auth use to EL2
  2017-07-19 16:01 ` [PATCH 06/11] arm64: Don't trap host pointer auth use to EL2 Mark Rutland
@ 2017-07-24 10:37   ` Dave Martin
  0 siblings, 0 replies; 35+ messages in thread
From: Dave Martin @ 2017-07-24 10:37 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-arch, arnd, jiong.wang, marc.zyngier,
	catalin.marinas, yao.qi, suzuki.poulose, will.deacon,
	linux-kernel, kernel-hardening, kvmarm, christoffer.dall

On Wed, Jul 19, 2017 at 05:01:27PM +0100, Mark Rutland wrote:
> To allow EL0 (and/or EL1) to use pointer authentication functionality,
> we must ensure that pointer authentication instructions and accesses to
> pointer authentication keys are not trapped to EL2 (where we will not be
> able to handle them).
> 
> This patch ensures that HCR_EL2 is configured appropriately when the
> kernel is booted at EL2. For non-VHE kernels we set HCR_EL2.{API,APK},
> ensuring that EL1 can access keys and permit EL0 use of instructions.
> For VHE kernels, EL2 access is controlled by EL3, and we need not set
> anything.
> 
> This does not enable support for KVM guests, since KVM manages HCR_EL2
> itself.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
>  arch/arm64/include/asm/kvm_arm.h |  2 ++
>  arch/arm64/kernel/head.S         | 19 +++++++++++++++++--
>  2 files changed, 19 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index 61d694c..c1267e8 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -23,6 +23,8 @@
>  #include <asm/types.h>
>  
>  /* Hyp Configuration Register (HCR) bits */
> +#define HCR_API		(UL(1) << 41)
> +#define HCR_APK		(UL(1) << 40)
>  #define HCR_E2H		(UL(1) << 34)
>  #define HCR_ID		(UL(1) << 33)
>  #define HCR_CD		(UL(1) << 32)
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 973df7d..8b8e8d7 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -412,10 +412,25 @@ CPU_LE(	bic	x0, x0, #(1 << 25)	)	// Clear the EE bit for EL2
>  
>  	/* Hyp configuration. */
>  	mov	x0, #HCR_RW			// 64-bit EL1
> -	cbz	x2, set_hcr
> +	cbz	x2, 1f

Can we keep the label name here?  It still seems appropriate.

>  	orr	x0, x0, #HCR_TGE		// Enable Host Extensions
>  	orr	x0, x0, #HCR_E2H
> -set_hcr:
> +1:
> +#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
> +	/*
> +	 * Disable pointer authentication traps to EL2. The HCR_EL2.{APK,API}
> +	 * bits exist iff at least one authentication mechanism is implemented.
> +	 */
> +	mrs	x1, id_aa64isar1_el1
> +	mov_q	x3, ((0xf << ID_AA64ISAR1_GPI_SHIFT) | \
> +		     (0xf << ID_AA64ISAR1_GPA_SHIFT) | \
> +		     (0xf << ID_AA64ISAR1_API_SHIFT) | \
> +		     (0xf << ID_AA64ISAR1_APA_SHIFT))

Redundant outer (), I think -- mov_q protects its argument.

> +	and	x1, x1, x3
> +	cbz	x1, 1f

tst + b.eq?

> +	orr	x0, x0, #(HCR_APK | HCR_API)
> +1:
> +#endif
>  	msr	hcr_el2, x0
>  	isb

Cheers
---Dave

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 01/11] arm64: docs: describe ELF hwcaps
  2017-07-21 17:05   ` Dave Martin
@ 2017-07-24 10:47     ` Suzuki K Poulose
  0 siblings, 0 replies; 35+ messages in thread
From: Suzuki K Poulose @ 2017-07-24 10:47 UTC (permalink / raw)
  To: Dave Martin, Mark Rutland
  Cc: linux-arm-kernel, linux-arch, arnd, jiong.wang, marc.zyngier,
	catalin.marinas, yao.qi, will.deacon, linux-kernel,
	kernel-hardening, kvmarm, christoffer.dall

On 21/07/17 18:05, Dave Martin wrote:
> On Wed, Jul 19, 2017 at 05:01:22PM +0100, Mark Rutland wrote:
>> We don't document our ELF hwcaps, leaving developers to interpret them
>> according to hearsay, guesswork, or (in exceptional cases) inspection of
>> the current kernel code.
>>
>> This is less than optimal, and it would be far better if we had some
>> definitive description of each of the ELF hwcaps that developers could
>> refer to.
>>
>> This patch adds a document describing the (native) arm64 ELF hwcaps.
>
> Minor nit: what do the hwcaps have to do with ELF really?  Can we just
> call them "hwcaps"?
>
> I'm not sure of the history here.

Dave,

This is something that I requested to avoid confusing it with the CPU hwcaps,
(stored in the variable named hwcaps), we maintain in the kernel for capabilities.
Though, what we describe in this document is not specific to ELF format as
you have mentioned, it is easy to get confused with the former.

>
>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Dave Martin <Dave.Martin@arm.com>
>> Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> ---
>>  Documentation/arm64/elf_hwcaps.txt | 133 +++++++++++++++++++++++++++++++++++++
>>  1 file changed, 133 insertions(+)
>>  create mode 100644 Documentation/arm64/elf_hwcaps.txt
>>
>> diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
>> new file mode 100644
>> index 0000000..7bc2921
>> --- /dev/null
>> +++ b/Documentation/arm64/elf_hwcaps.txt
>> @@ -0,0 +1,133 @@
>> +ARM64 ELF hwcaps
>> +================
>> +
>> +This document describes the usage and semantics of the arm64 ELF hwcaps.
>> +
>> +
>> +1. Introduction
>> +---------------
>> +
>> +Some hardware or software features are only available on some CPU
>> +implementations, and/or with certain kernel configurations, but have no
>> +architected discovery mechanism available to userspace code at EL0. The
>> +kernel exposes the presence of these features to userspace through a set
>> +of flags called hwcaps, exposed in the auxilliary vector.
>> +
>> +Userspace software can test for features by acquiring the AT_HWCAP entry
>> +of the auxilliary vector, and testing whether the relevant flags are
>> +set, e.g.
>> +
>> +bool floating_point_is_present(void)
>> +{
>> +	unsigned long hwcaps = getauxval(AT_HWCAP);
>> +	if (hwcaps & HWCAP_FP)
>> +		return true;
>> +
>> +	return false;
>> +}
>> +
>> +Where software relies on a feature described by a hwcap, it should check
>> +the relevant hwcap flag to verify that the feature is present before
>> +attempting to make use of the feature.
>> +
>> +Features cannot be probed reliably through other means. When a feature
>> +is not available, attempting to use it may result in unpredictable
>
> This says that features cannot be probed reliably via the (emulated) ID
> registers available with HWCAP_CPUID.
>
> So, what use is the ID register emulation?
>
> For each of hwcaps and cpuid, a particular feature may be reported as
> present (y), absent (n), or not described at all (x):
>
> hwcap>    x n y
> cpuid:
>    x      N N Y
>    n      N N
>    y      Y   Y
>
> I've filled in the straightforward cases, where software may (Y) or must
> not (N) use the feature.
>
> In the cases left blank, hwcap and cpuid disagree.
>
> Are we confident that should never be observed -- i.e., it's a kernel
> bug if seen?  If so, we can fill Ys in there.  But we need to be clear

CPUID=> n and HWCAP=> Y is Kernel bug, indeed, as both HWCAP and CPUID uses the
same underlying data for the decision. However, the other case is a bit complicated,
depending on whether we decide to continue adding (we do add at the moment) "new HWCAP"
bits for the newer features.
If we do add the new HWCAPs, that could possible create conflicts on older kernels
which doesn't know about the feature, while the CPUID could expose the feature.
If we don't add the HWCAP, there is no reliable way for the userspace to decide if
the kernel supports the feature or not.

So, in either case, we need to cap the value exposed by CPUID to the "kernel supported
feature level" to avoid problems like this.

Thoughts ?

Suzuki



> about cases where the hwcap doesn't mean exactly the same as the
> corresponding CPUID feature.  The hwcap may tell software it can assume
> that certain kernel ABI extensions related to that CPU feature are
> available for example.
>
> This also affects how HWCAP_CPUID is described below.
>
> [...]
>
> Cheers
> ---Dave
>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 04/11] arm64/cpufeature: add ARMv8.3 id_aa64isar1 bits
  2017-07-19 16:01 ` [PATCH 04/11] arm64/cpufeature: add ARMv8.3 id_aa64isar1 bits Mark Rutland
@ 2017-07-24 10:54   ` Suzuki K Poulose
  0 siblings, 0 replies; 35+ messages in thread
From: Suzuki K Poulose @ 2017-07-24 10:54 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel
  Cc: arnd, catalin.marinas, christoffer.dall, Dave.Martin, jiong.wang,
	kvmarm, linux-arch, marc.zyngier, will.deacon, yao.qi,
	linux-kernel, kernel-hardening

On 19/07/17 17:01, Mark Rutland wrote:
> From ARMv8.3 onwards, ID_AA64ISAR1 is no longer entirely RES0, and now
> has four fields describing the presence of pointer authentication
> functionality:
>
> * APA - address authentication present, using an architected algorithm
> * API - address authentication present, using an IMP DEF algorithm
> * GPA - generic authentication present, using an architected algorithm
> * GPI - generic authentication present, using an IMP DEF algoithm
>
> This patch adds the requisite definitions so that we can identify the
> presence of this functionality. For the timebeing, the features are
> hidden from userspace.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
>  arch/arm64/kernel/cpufeature.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 9f9e0064..b23ad83 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -120,7 +120,11 @@ static int __init register_cpu_hwcaps_dumper(void)
>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
> -	ARM64_FTR_END,
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_GPI_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_GPA_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> +	ARM64_FTR_END
>  };

minor nit: Could we keep the fields in the order of their positions in the register ?

With that,

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 00/11] ARMv8.3 pointer authentication userspace support
  2017-07-19 16:01 [PATCH 00/11] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (11 preceding siblings ...)
  2017-07-21 17:05 ` [PATCH 00/11] ARMv8.3 pointer authentication userspace support Dave Martin
@ 2017-07-24 11:52 ` Yao Qi
  2017-07-25 11:32 ` Yao Qi
  2017-07-25 14:12 ` [kernel-hardening] " Li Kun
  14 siblings, 0 replies; 35+ messages in thread
From: Yao Qi @ 2017-07-24 11:52 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel
  Cc: arnd, catalin.marinas, christoffer.dall, Dave.Martin, jiong.wang,
	kvmarm, linux-arch, marc.zyngier, suzuki.poulose, will.deacon,
	linux-kernel, kernel-hardening

On 19/07/17 17:01, Mark Rutland wrote:
> * Should the kernel remove PACs when unwinding user stacks?
>
>   This is simple to do, but it's arguably placing a policy in the kernel as to
>   what we expect user stacks to look like. Regardless, userspace will have to
>   perform this when unwinding with DWARF.

I am not sure what do you mean.  Do you mean stripping a PAC from a
pointer during unwinding, so that user space can unwind the program
without being aware of PAC?  Can kernel remove PAC from all instruction
pointers?  Note that user space debugger may try to unwind in coredump,
so if the contents dumped into coredump including PAC bits, debuggers
still have to be aware of PAC (unless kernel can remove all PAC bits
during coredump too).

IMO, kernel needs to tell the truth to reflect the process/task state
to the user space, and leave the user space to consume.
IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 00/11] ARMv8.3 pointer authentication userspace support
  2017-07-19 16:01 [PATCH 00/11] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (12 preceding siblings ...)
  2017-07-24 11:52 ` Yao Qi
@ 2017-07-25 11:32 ` Yao Qi
  2017-07-25 16:01   ` Mark Rutland
  2017-07-25 14:12 ` [kernel-hardening] " Li Kun
  14 siblings, 1 reply; 35+ messages in thread
From: Yao Qi @ 2017-07-25 11:32 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel
  Cc: arnd, catalin.marinas, christoffer.dall, Dave.Martin, jiong.wang,
	kvmarm, linux-arch, marc.zyngier, suzuki.poulose, will.deacon,
	linux-kernel, kernel-hardening

On 19/07/17 17:01, Mark Rutland wrote:
> If authentication fails, bits are set in the pointer such that it is guaranteed
> to cause a fault if used.

How does user space know the fault is caused by authentication fail?
When GDB is debugging a program, and it failed in pointer
authentication, I assume GDB only knows that the program receives signal
SIGSEGV, but how does GDB or user know why does the program get SIGSEGV?
IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 00/11] ARMv8.3 pointer authentication userspace support
  2017-07-21 17:05 ` [PATCH 00/11] ARMv8.3 pointer authentication userspace support Dave Martin
@ 2017-07-25 12:06   ` Mark Rutland
  2017-07-25 14:00     ` Jiong Wang
  2017-08-11 11:29     ` Mark Rutland
  0 siblings, 2 replies; 35+ messages in thread
From: Mark Rutland @ 2017-07-25 12:06 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, linux-arch, arnd, jiong.wang, marc.zyngier,
	catalin.marinas, yao.qi, suzuki.poulose, will.deacon,
	linux-kernel, kernel-hardening, kvmarm, christoffer.dall

On Fri, Jul 21, 2017 at 06:05:09PM +0100, Dave Martin wrote:
> On Wed, Jul 19, 2017 at 05:01:21PM +0100, Mark Rutland wrote:
> > This series adds support for the ARMv8.3 pointer authentication extension.

> > Open questions
> > ==============
> > 
> > * Should keys be per-thread rather than per-process?
> > 
> >   My understanding is that glibc can't (currently) handle threads having
> >   different keys, but it might be that another libc would prefer per-thread
> 
> Can you elaborate?
> 
> It's not valid to do a function return from one thread to another.

Regardless of whether it's valid per the C spec or POSIX, some people
use {set,get}context and {set,long}jmp in this manner (IIRC, QEMU does
this), and my understanding is that similar tricks are in use in the
bowels of glibc.

Otherwise, my preference would be to have per-thread keys from day one.

> >   keys. If desired, we could add a mechanism for a thread to re-initialize its
> >   keys without an exec*().
> 
> Switching from per-process to per-thread keys would be an ABI break
> unless it's opt-in.

Indeed, which is why I suggested an opt-in mechanism.

> > * Do we need a separate hwcap for XPAC* instructions?
> > 
> >   Library code performing stack unwinding may need to interoperate with other
> >   code which may or may not be using pointer authentication. It may be
> >   desirable to use XPAC* rather than attempting authentication and/or acquiring
> >   the PAC masks via ptrace.
> > 
> >   As we may expose APIBKey (potentially separately from APIAKey) in future,
> >   HWCAP_APIA cannot be used to determine when these instruction can/should be
> >   used.
> 
> Can the availability of XPAC* be determined from the feature regs, or
> is that insufficient?

I'd forgotten to fix up the id reg emulation to enable this, but this
should be sufficient when wired up.

The painful part is that there are several fields which can imply XPAC*
are present, so userspace has to remember to check all of those fields.

[...]

> > * Should the kernel remove PACs when unwinding user stacks?
> > 
> >   This is simple to do, but it's arguably placing a policy in the kernel as to
> >   what we expect user stacks to look like. Regardless, userspace will have to
> >   perform this when unwinding with DWARF.
> 
> Not sure.  This is arguably not more gross than related things the
> kernel already does, and may be inefficient for userspace to do e.g.,
> when capturing perf backtraces.  Still gross though.
> 
> Side question: do you know whether there will be DWARF / ELF annotations
> for this?  Since ptr auth is a compile-time option, it is plausible that
> an attribute could be added to indicate that an image uses it.

Jiong, Yao, can you answer this?

I think that there's DWARF metadata for unwinding, but I'm not sure
there's an ELF annotation on an image.

Note that you may link with libraries which may or may not use pointer
auth, so a single process can have a mixture of code using pointer auth,
and code which does not.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 00/11] ARMv8.3 pointer authentication userspace support
  2017-07-25 12:06   ` Mark Rutland
@ 2017-07-25 14:00     ` Jiong Wang
  2017-08-11 11:29     ` Mark Rutland
  1 sibling, 0 replies; 35+ messages in thread
From: Jiong Wang @ 2017-07-25 14:00 UTC (permalink / raw)
  To: Dave P Martin
  Cc: linux-arm-kernel, linux-arch, arnd, Marc Zyngier,
	Catalin Marinas, Yao Qi, Suzuki Poulose, Will Deacon,
	linux-kernel, kernel-hardening, kvmarm, christoffer.dall,
	Mark Rutland

> > * Should the kernel remove PACs when unwinding user stacks?
> >
> >   This is simple to do, but it's arguably placing a policy in the kernel as to
> >   what we expect user stacks to look like. Regardless, userspace will have to
> >   perform this when unwinding with DWARF.
>>
>> Not sure.  This is arguably not more gross than related things the
>> kernel already does, and may be inefficient for userspace to do e.g.,
>> when capturing perf backtraces.  Still gross though.
>>
>> Side question: do you know whether there will be DWARF / ELF annotations
>> for this?  Since ptr auth is a compile-time option, it is plausible that
>> an attribute could be added to indicate that an image uses it.

> Jiong, Yao, can you answer this?
>
> I think that there's DWARF metadata for unwinding, but I'm not sure
> there's an ELF annotation on an image.
>
> Note that you may link with libraries which may or may not use pointer
> auth, so a single process can have a mixture of code using pointer auth,
> and code which does not.

Yes, there is new DWARF frame information for pointer authentication to describe
the signing status at instruction level.  There is no ELF annotation on an image.

As the use cases of pointer authentication extension in GCC are about return
address signing.  The DWARF extension is mostly around describing signed LR so
the unwinder could have a way to figure out the original value of it to continue
unwinding.

In general, whenever return address, i.e. LR register, is mangled or restored by
hardware instruction, compiler (or assembly writer) is expected to generate a
DW_CFA_AARCH64_negate_ra_state CFA instruction.

For DWARF unwinder, during unwinding, whenever a DW_CFA_AARCH64_negate_ra_state
is hit, the unwinder toggle the LR signing status and kept it in bit zero (lsb)
of a new DWARF register AARCH64_DWARF_PAUTH_RA_STATE whose value must be honored
later when unwinding the value of LR. If the lsb of AARCH64_DWARF_PAUTH_RA_STATE
is set, it means the return address is mangled, then the unwinder needs to
restore LR by either masking off the signature (userspace unwinders need ptrace
interface to get this) or executing signature strip instruction (can only be
done by native unwinder) or executing authentication instruction (can only be
done by native unwinder).

Please see the following links for more details:
  https://gcc.gnu.org/ml/gcc-patches/2017-01/msg00376.html
  https://gcc.gnu.org/ml/gcc-patches/2016-11/msg03010.html


Regards,
Jiong

IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [kernel-hardening] [PATCH 00/11] ARMv8.3 pointer authentication userspace support
  2017-07-19 16:01 [PATCH 00/11] ARMv8.3 pointer authentication userspace support Mark Rutland
                   ` (13 preceding siblings ...)
  2017-07-25 11:32 ` Yao Qi
@ 2017-07-25 14:12 ` Li Kun
  2017-07-25 15:12   ` Mark Rutland
  14 siblings, 1 reply; 35+ messages in thread
From: Li Kun @ 2017-07-25 14:12 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, arnd, catalin.marinas, christoffer.dall,
	Dave.Martin, jiong.wang, kvmarm, linux-arch, marc.zyngier,
	suzuki.poulose, will.deacon, yao.qi, linux-kernel,
	kernel-hardening

Hi Mark,

Could you please give us some information about the impact to 
performance to help us evaluating

the influence to the system?

Thanks a lot.


Best Regards


在 2017/7/20 0:01, Mark Rutland 写道:
> This series adds support for the ARMv8.3 pointer authentication extension.
>
> Since RFC [1]:
> * Make the KVM context switch (semi-lazy)
> * Rebase to v4.13-rc1
> * Improve pointer authentication documentation
> * Add hwcap documentation
> * Various minor cleanups
>
> I've pushed the series to the arm64/pointer-auth branch [2] of my linux tree.
> I've also pushed out a necessary bootwrapper patch to the pointer-auth branch
> [3] of my bootwrapper repo.
>
>
> Extension Overview
> ==================
>
> The ARMv8.3 pointer authentication extension adds functionality to detect
> modification of pointer values, mitigating certain classes of attack such as
> stack smashing, and making return oriented programming attacks harder
>
> The extension introduces the concept of a pointer authentication code (PAC),
> which is stored in some upper bits of pointers. Each PAC is derived from the
> original pointer, another 64-bit value (e.g. the stack pointer), and a secret
> 128-bit key.
>
> New instructions are added which can be used to:
>
> * Insert a PAC into a pointer
> * Strip a PAC from a pointer
> * Authenticate strip a PAC from a pointer
>
> If authentication succeeds, the code is removed, yielding the original pointer.
> If authentication fails, bits are set in the pointer such that it is guaranteed
> to cause a fault if used.
>
> These instructions can make use of four keys:
>
> * APIAKey (A.K.A. Instruction A key)
> * APIBKey (A.K.A. Instruction B key)
> * APDAKey (A.K.A. Data A key)
> * APDBKey (A.K.A. Data B Key)
>
> A subset of these instruction encodings have been allocated from the HINT
> space, and will operate as NOPs on any ARMv8 parts which do not feature the
> extension (or if purposefully disabled by the kernel). Software using only this
> subset of the instructions should function correctly on all ARMv8-A parts.
>
> Additionally, instructions are added to authenticate small blocks of memory in
> similar fashion, using APGAKey (A.K.A. Generic key).
>
>
> This Series
> ===========
>
> This series enables the use of instructions using APIAKey, which is initialised
> and maintained per-process (shared by all threads). This series does not add
> support for APIBKey, APDAKey, APDBKey, nor APGAKey. The series only supports
> the use of an architected algorithm.
>
> I've given this some basic testing with a homebrew test suite. More ideally,
> we'd add some tests to the kernel source tree.
>
> I've added some basic KVM support, but this doesn't cater for systems with
> mismatched support. Looking forward, we'll need ID register emulation in KVM so
> that we can hide features from guests to cater for such cases.
>
>
> Open questions
> ==============
>
> * Should keys be per-thread rather than per-process?
>
>    My understanding is that glibc can't (currently) handle threads having
>    different keys, but it might be that another libc would prefer per-thread
>    keys. If desired, we could add a mechanism for a thread to re-initialize its
>    keys without an exec*().
>
> * Do we need a separate hwcap for XPAC* instructions?
>
>    Library code performing stack unwinding may need to interoperate with other
>    code which may or may not be using pointer authentication. It may be
>    desirable to use XPAC* rather than attempting authentication and/or acquiring
>    the PAC masks via ptrace.
>
>    As we may expose APIBKey (potentially separately from APIAKey) in future,
>    HWCAP_APIA cannot be used to determine when these instruction can/should be
>    used.
>
> * Should we expose a per-process data key now, to go with the insn key?
>
>    I don't currently have a use-case for this.
>
> * Should we expose generic authentication (i.e. APGAKey)?
>
>    I don't currently have a use-case for this.
>
> * Should the kernel remove PACs when unwinding user stacks?
>
>    This is simple to do, but it's arguably placing a policy in the kernel as to
>    what we expect user stacks to look like. Regardless, userspace will have to
>    perform this when unwinding with DWARF.
>
> Thanks,
> Mark.
>
> [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2017-April/498941.html
> [2] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git arm64/pointer-auth
> [3] git://git.kernel.org/pub/scm/linux/kernel/git/mark/boot-wrapper-aarch64.git pointer-auth
>
> Mark Rutland (11):
>    arm64: docs: describe ELF hwcaps
>    asm-generic: mm_hooks: allow hooks to be overridden individually
>    arm64: add pointer authentication register bits
>    arm64/cpufeature: add ARMv8.3 id_aa64isar1 bits
>    arm64/cpufeature: detect pointer authentication
>    arm64: Don't trap host pointer auth use to EL2
>    arm64: add basic pointer authentication support
>    arm64: expose user PAC bit positions via ptrace
>    arm64/kvm: preserve host HCR_EL2 value
>    arm64/kvm: context-switch ptrauth registers
>    arm64: docs: document pointer authentication
>
>   Documentation/arm64/booting.txt                |   8 ++
>   Documentation/arm64/elf_hwcaps.txt             | 138 +++++++++++++++++++++++++
>   Documentation/arm64/pointer-authentication.txt |  85 +++++++++++++++
>   arch/arm64/Kconfig                             |  23 +++++
>   arch/arm64/include/asm/cpucaps.h               |   4 +-
>   arch/arm64/include/asm/esr.h                   |   3 +-
>   arch/arm64/include/asm/kvm_arm.h               |   3 +-
>   arch/arm64/include/asm/kvm_host.h              |  28 ++++-
>   arch/arm64/include/asm/kvm_hyp.h               |   7 ++
>   arch/arm64/include/asm/mmu.h                   |   5 +
>   arch/arm64/include/asm/mmu_context.h           |  25 ++++-
>   arch/arm64/include/asm/pointer_auth.h          |  97 +++++++++++++++++
>   arch/arm64/include/asm/sysreg.h                |  30 ++++++
>   arch/arm64/include/uapi/asm/hwcap.h            |   1 +
>   arch/arm64/include/uapi/asm/ptrace.h           |   5 +
>   arch/arm64/kernel/cpufeature.c                 |  39 ++++++-
>   arch/arm64/kernel/cpuinfo.c                    |   1 +
>   arch/arm64/kernel/head.S                       |  19 +++-
>   arch/arm64/kernel/ptrace.c                     |  39 +++++++
>   arch/arm64/kvm/handle_exit.c                   |  21 ++++
>   arch/arm64/kvm/hyp/Makefile                    |   1 +
>   arch/arm64/kvm/hyp/ptrauth-sr.c                |  91 ++++++++++++++++
>   arch/arm64/kvm/hyp/switch.c                    |   9 +-
>   arch/arm64/kvm/hyp/tlb.c                       |   6 +-
>   arch/arm64/kvm/sys_regs.c                      |  32 ++++++
>   include/asm-generic/mm_hooks.h                 |  11 ++
>   include/uapi/linux/elf.h                       |   1 +
>   27 files changed, 719 insertions(+), 13 deletions(-)
>   create mode 100644 Documentation/arm64/elf_hwcaps.txt
>   create mode 100644 Documentation/arm64/pointer-authentication.txt
>   create mode 100644 arch/arm64/include/asm/pointer_auth.h
>   create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c
>

-- 
Best Regards
Li Kun

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [kernel-hardening] [PATCH 00/11] ARMv8.3 pointer authentication userspace support
  2017-07-25 14:12 ` [kernel-hardening] " Li Kun
@ 2017-07-25 15:12   ` Mark Rutland
  0 siblings, 0 replies; 35+ messages in thread
From: Mark Rutland @ 2017-07-25 15:12 UTC (permalink / raw)
  To: Li Kun
  Cc: linux-arm-kernel, arnd, catalin.marinas, christoffer.dall,
	Dave.Martin, jiong.wang, kvmarm, linux-arch, marc.zyngier,
	suzuki.poulose, will.deacon, yao.qi, linux-kernel,
	kernel-hardening

On Tue, Jul 25, 2017 at 10:12:24PM +0800, Li Kun wrote:
> Hi Mark,

Hi,

> Could you please give us some information about the impact to
> performance to help us evaluating the influence to the system?

I'm afraid I don't have any performance details to share, as this series
has been developed and tested using models, and the performance impact
of pointer authentication will be very much dependent on the
microarchitecture.

I have assumed that this shouldn't regress performance on systems
without pointer authentication functionality, but I haven't yet
attempted to verify that.

>From this series, I'd expect the largest cost to be the semi-lazy KVM
context-switch, for guests which are continuously using pointer
authentication.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 07/11] arm64: add basic pointer authentication support
  2017-07-19 16:01 ` [PATCH 07/11] arm64: add basic pointer authentication support Mark Rutland
@ 2017-07-25 15:26   ` Dave Martin
  2017-08-11  7:46   ` Yao Qi
  1 sibling, 0 replies; 35+ messages in thread
From: Dave Martin @ 2017-07-25 15:26 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-arch, arnd, jiong.wang, marc.zyngier,
	catalin.marinas, yao.qi, suzuki.poulose, will.deacon,
	linux-kernel, kernel-hardening, kvmarm, christoffer.dall

On Wed, Jul 19, 2017 at 05:01:28PM +0100, Mark Rutland wrote:
> This patch adds basic support for pointer authentication, allowing
> userspace to make use of APIAKey. The kernel maintains an APIAKey value
> for each process (shared by all threads within), which is initialised to
> a random value at exec() time.
> 
> Instructions using other keys (APIBKey, APDAKey, APDBKey) are disabled,
> and will behave as NOPs. These may be made use of in future patches.
> 
> No support is added for the generic key (APGAKey), though this cannot be
> trapped or made to behave as a NOP. Its presence is not advertised with
> a hwcap.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/Kconfig                    | 23 +++++++++
>  arch/arm64/include/asm/mmu.h          |  5 ++
>  arch/arm64/include/asm/mmu_context.h  | 25 +++++++++-
>  arch/arm64/include/asm/pointer_auth.h | 89 +++++++++++++++++++++++++++++++++++
>  arch/arm64/include/uapi/asm/hwcap.h   |  1 +
>  arch/arm64/kernel/cpufeature.c        | 11 +++++
>  arch/arm64/kernel/cpuinfo.c           |  1 +
>  7 files changed, 153 insertions(+), 2 deletions(-)
>  create mode 100644 arch/arm64/include/asm/pointer_auth.h
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index dfd9086..15a9931 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -962,6 +962,29 @@ config ARM64_UAO
>  
>  endmenu
>  
> +menu "ARMv8.3 architectural features"
> +
> +config ARM64_POINTER_AUTHENTICATION
> +	bool "Enable support for pointer authentication"
> +	default y
> +	help
> +	  Pointer authentication (part of the ARMv8.3 Extensions) provides
> +	  instructions for signing and authenticating pointers against secret
> +	  keys, which can be used to mitigate Return Oriented Programming (ROP)
> +	  and other attacks.
> +
> +	  This option enables these instructions at EL0 (i.e. for userspace).
> +
> +	  Choosing this option will cause the kernel to initialise secret keys
> +	  for each process at exec() time, with these keys being
> +	  context-switched along with the process.
> +
> +	  The feature is detected at runtime. If the feature is not present in
> +	  hardware it will not be advertised to userspace nor will it be
> +	  enabled.

Should we describe which keys are supported here, or will this option
always turn on all the keys/instructions that the kernel implements to
date?

> +
> +endmenu
> +
>  config ARM64_MODULE_CMODEL_LARGE
>  	bool
>  
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 5468c83..6a848f3 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -16,10 +16,15 @@
>  #ifndef __ASM_MMU_H
>  #define __ASM_MMU_H
>  
> +#include <asm/pointer_auth.h>
> +
>  typedef struct {
>  	atomic64_t	id;
>  	void		*vdso;
>  	unsigned long	flags;
> +#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
> +	struct ptrauth_keys	ptrauth_keys;
> +#endif
>  } mm_context_t;
>  
>  /*
> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> index 3257895a..06757a5 100644
> --- a/arch/arm64/include/asm/mmu_context.h
> +++ b/arch/arm64/include/asm/mmu_context.h
> @@ -31,7 +31,6 @@
>  #include <asm/cacheflush.h>
>  #include <asm/cpufeature.h>
>  #include <asm/proc-fns.h>
> -#include <asm-generic/mm_hooks.h>
>  #include <asm/cputype.h>
>  #include <asm/pgtable.h>
>  #include <asm/sysreg.h>
> @@ -154,7 +153,14 @@ static inline void cpu_replace_ttbr1(pgd_t *pgd)
>  #define destroy_context(mm)		do { } while(0)
>  void check_and_switch_context(struct mm_struct *mm, unsigned int cpu);
>  
> -#define init_new_context(tsk,mm)	({ atomic64_set(&(mm)->context.id, 0); 0; })
> +static inline int init_new_context(struct task_struct *tsk,
> +			struct mm_struct *mm)
> +{
> +	atomic64_set(&mm->context.id, 0);
> +	mm_ctx_ptrauth_init(&mm->context);

For this stuff in general, I wonder whether we should move this code
away from mm and to thread_strct and the process/thread paths, otherwise
we'll just have to move it all around later if ptrauth is ever to be
supported per-thread.

This would also remove the need to have individually overridable arch
mm hooks.

Adding an extra 16 bytes to thread_struct is probably not the end of the
world.  thread_struct is already well over half a K.  We could de-dupe
by refcounting or similar, but it may not be worth the complexity.

> +
> +	return 0;
> +}
>  
>  /*
>   * This is called when "tsk" is about to enter lazy TLB mode.
> @@ -200,6 +206,8 @@ static inline void __switch_mm(struct mm_struct *next)
>  		return;
>  	}
>  
> +	mm_ctx_ptrauth_switch(&next->context);
> +
>  	check_and_switch_context(next, cpu);
>  }
>  
> @@ -226,6 +234,19 @@ static inline void __switch_mm(struct mm_struct *next)
>  
>  void verify_cpu_asid_bits(void);
>  
> +static inline void arch_dup_mmap(struct mm_struct *oldmm,
> +				 struct mm_struct *mm)
> +{
> +	mm_ctx_ptrauth_dup(&oldmm->context, &mm->context);
> +}
> +#define arch_dup_mmap arch_dup_mmap
> +
> +/*
> + * We need to override arch_dup_mmap before including the generic hooks, which
> + * are otherwise sufficient for us.
> + */
> +#include <asm-generic/mm_hooks.h>
> +
>  #endif /* !__ASSEMBLY__ */
>  
>  #endif /* !__ASM_MMU_CONTEXT_H */
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> new file mode 100644
> index 0000000..964da0c
> --- /dev/null
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -0,0 +1,89 @@
> +/*
> + * Copyright (C) 2016 ARM Ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +#ifndef __ASM_POINTER_AUTH_H
> +#define __ASM_POINTER_AUTH_H
> +
> +#include <linux/random.h>
> +
> +#include <asm/cpufeature.h>
> +#include <asm/sysreg.h>
> +
> +#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
> +/*
> + * Each key is a 128-bit quantity which is split accross a pair of 64-bit
> + * registers (Lo and Hi).
> + */
> +struct ptrauth_key {
> +	unsigned long lo, hi;
> +};
> +
> +/*
> + * We give each process its own instruction A key (APIAKey), which is shared by
> + * all threads. This is inherited upon fork(), and reinitialised upon exec*().
> + * All other keys are currently unused, with APIBKey, APDAKey, and APBAKey
> + * instructions behaving as NOPs.
> + */
> +struct ptrauth_keys {
> +	struct ptrauth_key apia;
> +};
> +
> +static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
> +{
> +	if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
> +		return;
> +
> +	get_random_bytes(keys, sizeof(*keys));
> +}
> +
> +#define __ptrauth_key_install(k, v)			\
> +do {							\
> +	write_sysreg_s(v.lo, SYS_ ## k ## KEYLO_EL1);	\
> +	write_sysreg_s(v.hi, SYS_ ## k ## KEYHI_EL1);	\

(v)

though moderately crazy usage would be required in order for this to go
wrong.

> +} while (0)
> +
> +static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
> +{
> +	if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
> +		return;
> +
> +	__ptrauth_key_install(APIA, keys->apia);
> +}
> +
> +static inline void ptrauth_keys_dup(struct ptrauth_keys *old,
> +				    struct ptrauth_keys *new)
> +{
> +	if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
> +		return;
> +
> +	*new = *old;

This seems an odd thing to do.  Surely, by design we never want two
processes to share the same keys?  Don't we always proceed to nuke
the keys via mm_ctx_ptrauth_init() anyway?

> +}
> +
> +#define mm_ctx_ptrauth_init(ctx) \
> +	ptrauth_keys_init(&(ctx)->ptrauth_keys)
> +
> +#define mm_ctx_ptrauth_switch(ctx) \
> +	ptrauth_keys_switch(&(ctx)->ptrauth_keys)
> +
> +#define mm_ctx_ptrauth_dup(oldctx, newctx) \
> +	ptrauth_keys_dup(&(oldctx)->ptrauth_keys, &(newctx)->ptrauth_keys)

[...]

Cheers
---Dave

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 00/11] ARMv8.3 pointer authentication userspace support
  2017-07-25 11:32 ` Yao Qi
@ 2017-07-25 16:01   ` Mark Rutland
  0 siblings, 0 replies; 35+ messages in thread
From: Mark Rutland @ 2017-07-25 16:01 UTC (permalink / raw)
  To: Yao Qi
  Cc: linux-arm-kernel, arnd, catalin.marinas, christoffer.dall,
	Dave.Martin, jiong.wang, kvmarm, linux-arch, marc.zyngier,
	suzuki.poulose, will.deacon, linux-kernel, kernel-hardening

Hi,

On Tue, Jul 25, 2017 at 12:32:10PM +0100, Yao Qi wrote:
> On 19/07/17 17:01, Mark Rutland wrote:
> > If authentication fails, bits are set in the pointer such that it is
> > guaranteed to cause a fault if used.
> 
> How does user space know the fault is caused by authentication fail?

Strictly speaking, it does not, and neither does the kernel.

In general, it cannot know whether this is the case, as an
authentication failure does not result in an immediate exception.

> When GDB is debugging a program, and it failed in pointer
> authentication, I assume GDB only knows that the program receives signal
> SIGSEGV, but how does GDB or user know why does the program get SIGSEGV?

I think in practice, the user has to determine this for themselves. I do
not believe that it is possible to reliably determine whether a given
fault was caused by an authentication failure.

For example, consider an authentication failure in a function epilogue:

	ldp	x29, x30, [sp], #FRAME_SIZE
	autiasp
	ret

When AUTIASP fails to authenticate the x30 value, it ensures that x30 is
a faulting value, by forcing some of the high bits to particular values,
but that's all. A user can set those bits in the same way.

The RET loads the LR value into the PC, which subsequently triggers an
instruction abort for the PC value (i.e. we branch to the bogus address
*then* take a fault at that address).

At that point, we have no information to determine what the CPU was
executing before it branched to the bogus address. So we cannot know how
the address was generated, and therefore cannot know if it was an
authentication failure.

For data pointers, you could generate bogus pointers with a large
out-of-bounds array index, whereby the bogus pointer looked like the
result of a pointer authentication failure.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 10/11] arm64/kvm: context-switch ptrauth registers
  2017-07-19 16:01 ` [PATCH 10/11] arm64/kvm: context-switch ptrauth registers Mark Rutland
@ 2017-08-01 11:00   ` Christoffer Dall
  2017-08-01 14:26     ` Mark Rutland
  0 siblings, 1 reply; 35+ messages in thread
From: Christoffer Dall @ 2017-08-01 11:00 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, arnd, catalin.marinas, christoffer.dall,
	Dave.Martin, jiong.wang, kvmarm, linux-arch, marc.zyngier,
	suzuki.poulose, will.deacon, yao.qi, linux-kernel,
	kernel-hardening

On Wed, Jul 19, 2017 at 05:01:31PM +0100, Mark Rutland wrote:
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work, with
> a semi-lazy context switch of the pointer auth state.
> 
> When we schedule a vcpu, we disable guest usage of pointer
> authentication instructions and accesses to the keys. While these are
> disabled, we avoid context-switching the keys. When we trap the guest
> trying to use pointer authentication functionality, we change to eagerly
> context-switching the keys, and enable the feature. The next time the
> vcpu is scheduled out/in, we start again.
> 
> This is sufficient for systems with uniform pointer authentication
> support. For systems with mismatched support, it will be necessary to

What is mismatched support?  You mean systems where one CPU has ptrauth
and another one doesn't (or if they both have it but in different ways)?

> hide the feature from the guest's view of the ID registers.

I think the work Drew is pondering to allow userspace more control of
what we emulate to the guest can semi-automatically take care of this as
well.

> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
>  arch/arm64/include/asm/kvm_host.h | 23 +++++++++-
>  arch/arm64/include/asm/kvm_hyp.h  |  7 +++
>  arch/arm64/kvm/handle_exit.c      | 21 +++++++++
>  arch/arm64/kvm/hyp/Makefile       |  1 +
>  arch/arm64/kvm/hyp/ptrauth-sr.c   | 91 +++++++++++++++++++++++++++++++++++++++
>  arch/arm64/kvm/hyp/switch.c       |  4 ++
>  arch/arm64/kvm/sys_regs.c         | 32 ++++++++++++++
>  7 files changed, 178 insertions(+), 1 deletion(-)
>  create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 0d7c3dd..f97defa 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -135,6 +135,18 @@ enum vcpu_sysreg {
>  	PMSWINC_EL0,	/* Software Increment Register */
>  	PMUSERENR_EL0,	/* User Enable Register */
>  
> +	/* Pointer Authentication Registers */
> +	APIAKEYLO_EL1,
> +	APIAKEYHI_EL1,
> +	APIBKEYLO_EL1,
> +	APIBKEYHI_EL1,
> +	APDAKEYLO_EL1,
> +	APDAKEYHI_EL1,
> +	APDBKEYLO_EL1,
> +	APDBKEYHI_EL1,
> +	APGAKEYLO_EL1,
> +	APGAKEYHI_EL1,
> +
>  	/* 32bit specific registers. Keep them at the end of the range */
>  	DACR32_EL2,	/* Domain Access Control Register */
>  	IFSR32_EL2,	/* Instruction Fault Status Register */
> @@ -368,10 +380,19 @@ static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
>  	__kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr);
>  }
>  
> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
> +
>  static inline void kvm_arch_hardware_unsetup(void) {}
>  static inline void kvm_arch_sync_events(struct kvm *kvm) {}
>  static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
> -static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
> +
> +static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
> +{
> +	kvm_arm_vcpu_ptrauth_disable(vcpu);
> +}
> +

why sched_in and not vcpu_load?  (vcpu_load happens whenever you're
returning from userspace for example, and not only when you've been
scheduled away and are coming back).

And why do we want to 'reset' the behavior of KVM when the host
schedules a VCPU thread?

If I understand the logic correctly, what you're establishing with the
appraoch of initially trapping use of ptrauth is to avoid
saving/restoring the state if the guest dosn't use ptrauth, but then if
the guest starts using it, it's likely to keep using it, and therefore
we start saving/restoring the registers.

Why is the host's decision to schedule something else on this physical
CPU a good indication that we should no longer save/restore these
registers for the VCPU?  Wouldn't it make more sense to have a flag on
the VCPU, and potentially a counter, so that if you switch X times
without ever touching the registers again, you can stop saving/restoring
the state, if that's even something we care about?

Another thing that comes to mind; does the kernel running a KVM's VCPU
thread (and handling softirqs, ISRs, etc.) ever use the ptrauth system,
or does that only happen when we go to userspace?  If the latter, we
could handle the save/restore entirely in vcpu_put/vcpu_load instead. I
don't mind picking up that bit as part of my ongoing optimization work
later though, if you're eager to get these patches merged.

>  static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
>  
>  void kvm_arm_init_debug(void);
> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> index 4572a9b..3093f35 100644
> --- a/arch/arm64/include/asm/kvm_hyp.h
> +++ b/arch/arm64/include/asm/kvm_hyp.h
> @@ -152,6 +152,13 @@ void __debug_restore_state(struct kvm_vcpu *vcpu,
>  void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
>  bool __fpsimd_enabled(void);
>  
> +void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
> +			       struct kvm_cpu_context *host_ctxt,
> +			       struct kvm_cpu_context *guest_ctxt);
> +void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
> +			      struct kvm_cpu_context *host_ctxt,
> +			      struct kvm_cpu_context *guest_ctxt);
> +
>  u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
>  void __noreturn __hyp_do_panic(unsigned long, ...);
>  
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 17d8a16..9fc561f 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -136,6 +136,26 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	return ret;
>  }
>  
> +/*
> + * Handle the guest trying to use a ptrauth instruction, or trying to access a
> + * ptrauth register.
> + */
> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
> +{
> +	if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH) &&
> +	    cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
> +		kvm_arm_vcpu_ptrauth_enable(vcpu);
> +	} else {
> +		kvm_inject_undefined(vcpu);

How can we trap here without having ARM64_HAS_[ADDRESS,GENERIC]_AUTH ?

If this to cover the case if we only have one of the two or is there a
case where we trap ptrauth registers/instructions even though the CPU
doesn't have the support?

> +	}
> +}
> +
> +static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +{
> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
> +	return 1;
> +}
> +
>  static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  {
>  	u32 hsr = kvm_vcpu_get_hsr(vcpu);
> @@ -168,6 +188,7 @@ static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	[ESR_ELx_EC_BKPT32]	= kvm_handle_guest_debug,
>  	[ESR_ELx_EC_BRK64]	= kvm_handle_guest_debug,
>  	[ESR_ELx_EC_FP_ASIMD]	= handle_no_fpsimd,
> +	[ESR_ELx_EC_PAC]	= kvm_handle_ptrauth,
>  };
>  
>  static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
> diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
> index 14c4e3b..91f2100 100644
> --- a/arch/arm64/kvm/hyp/Makefile
> +++ b/arch/arm64/kvm/hyp/Makefile
> @@ -18,6 +18,7 @@ obj-$(CONFIG_KVM_ARM_HOST) += fpsimd.o
>  obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
>  obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
>  obj-$(CONFIG_KVM_ARM_HOST) += s2-setup.o
> +obj-$(CONFIG_KVM_ARM_HOST) += ptrauth-sr.o
>  
>  # KVM code is run at a different exception code with a different map, so
>  # compiler instrumentation that inserts callbacks or checks into the code may
> diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
> new file mode 100644
> index 0000000..2784fb3
> --- /dev/null
> +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
> @@ -0,0 +1,91 @@
> +/*
> + * Copyright (C) 2017 ARM Ltd
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <linux/compiler.h>
> +#include <linux/kvm_host.h>
> +
> +#include <asm/cpucaps.h>
> +#include <asm/cpufeature.h>
> +#include <asm/kvm_asm.h>
> +#include <asm/kvm_hyp.h>
> +
> +static bool __hyp_text __ptrauth_is_enabled(struct kvm_vcpu *vcpu)
> +{
> +	return vcpu->arch.hcr_el2 & (HCR_API | HCR_APK);
> +}
> +
> +#define __ptrauth_save_key(regs, key)						\
> +({										\
> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
> +})
> +
> +static void __hyp_text __ptrauth_save_state(struct kvm_cpu_context *ctxt)
> +{
> +	if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH)) {
> +		__ptrauth_save_key(ctxt->sys_regs, APIA);
> +		__ptrauth_save_key(ctxt->sys_regs, APIB);
> +		__ptrauth_save_key(ctxt->sys_regs, APDA);
> +		__ptrauth_save_key(ctxt->sys_regs, APDB);
> +	}
> +
> +	if (cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
> +		__ptrauth_save_key(ctxt->sys_regs, APGA);
> +	}

Aren't we ever only enabling the save/restore if we have both caps, so
why are we checking it here again?

> +}
> +
> +#define __ptrauth_restore_key(regs, key) 					\
> +({										\
> +	write_sysreg_s(regs[key ## KEYLO_EL1], SYS_ ## key ## KEYLO_EL1);	\
> +	write_sysreg_s(regs[key ## KEYHI_EL1], SYS_ ## key ## KEYHI_EL1);	\
> +})
> +
> +static void __hyp_text __ptrauth_restore_state(struct kvm_cpu_context *ctxt)
> +{
> +
> +	if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH)) {
> +		__ptrauth_restore_key(ctxt->sys_regs, APIA);
> +		__ptrauth_restore_key(ctxt->sys_regs, APIB);
> +		__ptrauth_restore_key(ctxt->sys_regs, APDA);
> +		__ptrauth_restore_key(ctxt->sys_regs, APDB);
> +	}
> +
> +	if (cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
> +		__ptrauth_restore_key(ctxt->sys_regs, APGA);
> +	}

same question

> +}
> +
> +void __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
> +					  struct kvm_cpu_context *host_ctxt,
> +					  struct kvm_cpu_context *guest_ctxt)
> +{
> +	if (!__ptrauth_is_enabled(vcpu))
> +		return;
> +
> +	__ptrauth_save_state(host_ctxt);
> +	__ptrauth_restore_state(guest_ctxt);
> +}
> +
> +void __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
> +					 struct kvm_cpu_context *host_ctxt,
> +					 struct kvm_cpu_context *guest_ctxt)
> +{
> +	if (!__ptrauth_is_enabled(vcpu))
> +		return;
> +
> +	__ptrauth_save_state(guest_ctxt);
> +	__ptrauth_restore_state(host_ctxt);
> +}
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index 6108813..df609d9 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -309,6 +309,8 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
>  	__sysreg_restore_guest_state(guest_ctxt);
>  	__debug_restore_state(vcpu, kern_hyp_va(vcpu->arch.debug_ptr), guest_ctxt);
>  
> +	__ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt);
> +
>  	/* Jump in the fire! */
>  again:
>  	exit_code = __guest_enter(vcpu, host_ctxt);
> @@ -367,6 +369,8 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
>  
>  	fp_enabled = __fpsimd_enabled();
>  
> +	__ptrauth_switch_to_host(vcpu, host_ctxt, guest_ctxt);
> +
>  	__sysreg_save_guest_state(guest_ctxt);
>  	__sysreg32_save_state(vcpu);
>  	__timer_save_state(vcpu);
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 7786288..0f7d9ae 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -837,6 +837,32 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
>  	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
>  
> +
> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
> +{
> +	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
> +}
> +
> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
> +{
> +	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
> +}
> +
> +static bool trap_ptrauth(struct kvm_vcpu *vcpu,
> +			 struct sys_reg_params *p,
> +			 const struct sys_reg_desc *rd)
> +{
> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
> +	return false;
> +}
> +
> +#define __PTRAUTH_KEY(k)						\
> +	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k }
> +
> +#define PTRAUTH_KEY(k)							\
> +	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
> +	__PTRAUTH_KEY(k ## KEYHI_EL1)
> +
>  static bool access_cntp_tval(struct kvm_vcpu *vcpu,
>  		struct sys_reg_params *p,
>  		const struct sys_reg_desc *r)
> @@ -950,6 +976,12 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
>  	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
>  	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
>  
> +	PTRAUTH_KEY(APIA),
> +	PTRAUTH_KEY(APIB),
> +	PTRAUTH_KEY(APDA),
> +	PTRAUTH_KEY(APDB),
> +	PTRAUTH_KEY(APGA),
> +
>  	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
>  	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
>  	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
> -- 
> 1.9.1
> 
Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 09/11] arm64/kvm: preserve host HCR_EL2 value
  2017-07-19 16:01 ` [PATCH 09/11] arm64/kvm: preserve host HCR_EL2 value Mark Rutland
@ 2017-08-01 11:00   ` Christoffer Dall
  0 siblings, 0 replies; 35+ messages in thread
From: Christoffer Dall @ 2017-08-01 11:00 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, arnd, catalin.marinas, christoffer.dall,
	Dave.Martin, jiong.wang, kvmarm, linux-arch, marc.zyngier,
	suzuki.poulose, will.deacon, yao.qi, linux-kernel,
	kernel-hardening

On Wed, Jul 19, 2017 at 05:01:30PM +0100, Mark Rutland wrote:
> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
> is a constant value. This works today, as the host HCR_EL2 value is
> always the same, but this will get in the way of supporting extensions
> that require HCR_EL2 bits to be set conditionally for the host.
> 
> To allow such features to work without KVM having to explicitly handle
> every possible host feature combination, this patch has KVM save/restore
> the host HCR when switching to/from a guest HCR.
> 
> For __{activate,deactivate}_traps(), the HCR save/restore is made common
> across the !VHE and VHE paths. As the host and guest HCR values must
> have E2H set when VHE is in use, register redirection should always be
> in effect at EL2, and this change should not adversely affect the VHE
> code.
> 
> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
> to toggle the TGE bit with a RMW sequence, as we already do in
> __tlb_switch_to_guest_vhe().
> 
> The now unused HCR_HOST_VHE_FLAGS definition is removed.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
>  arch/arm64/include/asm/kvm_arm.h  | 1 -
>  arch/arm64/include/asm/kvm_host.h | 5 ++++-
>  arch/arm64/kvm/hyp/switch.c       | 5 +++--
>  arch/arm64/kvm/hyp/tlb.c          | 6 +++++-
>  4 files changed, 12 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index c1267e8..7b9c898 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -84,7 +84,6 @@
>  			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW)
>  #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
>  #define HCR_INT_OVERRIDE   (HCR_FMO | HCR_IMO)
> -#define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
>  
>  /* TCR_EL2 Registers bits */
>  #define TCR_EL2_RES1		((1 << 31) | (1 << 23))
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index d686300..0d7c3dd 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -198,10 +198,13 @@ struct kvm_cpu_context {
>  struct kvm_vcpu_arch {
>  	struct kvm_cpu_context ctxt;
>  
> -	/* HYP configuration */
> +	/* Guest HYP configuration */
>  	u64 hcr_el2;
>  	u32 mdcr_el2;
>  
> +	/* Host HYP configuration */
> +	u64 host_hcr_el2;
> +
>  	/* Exception Information */
>  	struct kvm_vcpu_fault_info fault;
>  
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index 945e79c..6108813 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -71,6 +71,8 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
>  {
>  	u64 val;
>  
> +	vcpu->arch.host_hcr_el2 = read_sysreg(hcr_el2);
> +
>  	/*
>  	 * We are about to set CPTR_EL2.TFP to trap all floating point
>  	 * register accesses to EL2, however, the ARM ARM clearly states that
> @@ -110,7 +112,6 @@ static void __hyp_text __deactivate_traps_vhe(void)
>  		    MDCR_EL2_TPMS;
>  
>  	write_sysreg(mdcr_el2, mdcr_el2);
> -	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
>  	write_sysreg(CPACR_EL1_FPEN, cpacr_el1);
>  	write_sysreg(vectors, vbar_el1);
>  }
> @@ -123,7 +124,6 @@ static void __hyp_text __deactivate_traps_nvhe(void)
>  	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
>  
>  	write_sysreg(mdcr_el2, mdcr_el2);
> -	write_sysreg(HCR_RW, hcr_el2);
>  	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
>  }
>  
> @@ -145,6 +145,7 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
>  	__deactivate_traps_arch()();
>  	write_sysreg(0, hstr_el2);
>  	write_sysreg(0, pmuserenr_el0);
> +	write_sysreg(vcpu->arch.host_hcr_el2, hcr_el2);
>  }
>  
>  static void __hyp_text __activate_vm(struct kvm_vcpu *vcpu)
> diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
> index 73464a9..c2b0680 100644
> --- a/arch/arm64/kvm/hyp/tlb.c
> +++ b/arch/arm64/kvm/hyp/tlb.c
> @@ -49,12 +49,16 @@ static hyp_alternate_select(__tlb_switch_to_guest,
>  
>  static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm)
>  {
> +	u64 val;
> +
>  	/*
>  	 * We're done with the TLB operation, let's restore the host's
>  	 * view of HCR_EL2.
>  	 */
>  	write_sysreg(0, vttbr_el2);
> -	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
> +	val = read_sysreg(hcr_el2);
> +	val |= HCR_TGE;
> +	write_sysreg(val, hcr_el2);
>  }
>  
>  static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm)
> -- 
> 1.9.1
> 

Reviewed-by: Christoffer Dall <cdall@linaro.org>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 10/11] arm64/kvm: context-switch ptrauth registers
  2017-08-01 11:00   ` Christoffer Dall
@ 2017-08-01 14:26     ` Mark Rutland
  2017-08-01 14:32       ` Will Deacon
  2017-08-01 17:02       ` Christoffer Dall
  0 siblings, 2 replies; 35+ messages in thread
From: Mark Rutland @ 2017-08-01 14:26 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: linux-arm-kernel, arnd, catalin.marinas, christoffer.dall,
	Dave.Martin, jiong.wang, kvmarm, linux-arch, marc.zyngier,
	suzuki.poulose, will.deacon, yao.qi, linux-kernel,
	kernel-hardening

On Tue, Aug 01, 2017 at 01:00:14PM +0200, Christoffer Dall wrote:
> On Wed, Jul 19, 2017 at 05:01:31PM +0100, Mark Rutland wrote:
> > When pointer authentication is supported, a guest may wish to use it.
> > This patch adds the necessary KVM infrastructure for this to work, with
> > a semi-lazy context switch of the pointer auth state.
> > 
> > When we schedule a vcpu, we disable guest usage of pointer
> > authentication instructions and accesses to the keys. While these are
> > disabled, we avoid context-switching the keys. When we trap the guest
> > trying to use pointer authentication functionality, we change to eagerly
> > context-switching the keys, and enable the feature. The next time the
> > vcpu is scheduled out/in, we start again.
> > 
> > This is sufficient for systems with uniform pointer authentication
> > support. For systems with mismatched support, it will be necessary to
> 
> What is mismatched support?  You mean systems where one CPU has ptrauth
> and another one doesn't (or if they both have it but in different ways)?

Both! Any case where the support is not uniform across all CPUs.

A CPU can implement address auth and/or generic auth, and either may use
an architected algorithm or an IMP DEF algorithm.

Even if all CPUs report an IMP DEF algorithm, the particular algorithm
may differ across CPUs.

> > hide the feature from the guest's view of the ID registers.
> 
> I think the work Drew is pondering to allow userspace more control of
> what we emulate to the guest can semi-automatically take care of this as
> well.

I'll take a look.

[...]

> > +static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
> > +{
> > +	kvm_arm_vcpu_ptrauth_disable(vcpu);
> > +}
> > +
> 
> why sched_in and not vcpu_load?  (vcpu_load happens whenever you're
> returning from userspace for example, and not only when you've been
> scheduled away and are coming back).

I think this is the result of going searching for similar lazy context
switching, and stumbling on some (fairly different) code on x86.

It looks like I can use vcpu_load() instead, so I'll give that a shot
for v2.

> And why do we want to 'reset' the behavior of KVM when the host
> schedules a VCPU thread?
> 
> If I understand the logic correctly, what you're establishing with the
> appraoch of initially trapping use of ptrauth is to avoid
> saving/restoring the state if the guest dosn't use ptrauth, but then if
> the guest starts using it, it's likely to keep using it, and therefore
> we start saving/restoring the registers.

Yes.

> Why is the host's decision to schedule something else on this physical
> CPU a good indication that we should no longer save/restore these
> registers for the VCPU?

I guess it's not.

Initially, I was followed the same approach we take for fpsimd, leaving
the feature trapped until we take a shallow trap to hyp. Handing all the
sysreg traps in hyp was unwieldy, so I moved that down to the kernel
proper. That meant I couldn't enable the trap when switching from host
to guest, and doing so in the regular context switch seemed like the
next best option.

> Wouldn't it make more sense to have a flag on the VCPU, and
> potentially a counter, so that if you switch X times without ever
> touching the registers again, you can stop saving/restoring the state,
> if that's even something we care about?

Something like that could make more sense. It should be fairly simple to
implement a decaying counter to determine when to trap.

I'd steered clear of optimising the lazy heuristic as I'm testing with
models, and I don't have numbers that would be representative of real
HW.

> Another thing that comes to mind; does the kernel running a KVM's VCPU
> thread (and handling softirqs, ISRs, etc.) ever use the ptrauth system,
> or does that only happen when we go to userspace? 

Today, only userspace uses pointer auth, and the kernel does not.
However, in-kernel usage is on the cards.

> If the latter, we could handle the save/restore entirely in
> vcpu_put/vcpu_load instead. I don't mind picking up that bit as part
> of my ongoing optimization work later though, if you're eager to get
> these patches merged.

I'd avoided that so far, since it would be undone when in-kernel usage
is implemented. If you prefer, I can implement that for now.

[...]

> > +/*
> > + * Handle the guest trying to use a ptrauth instruction, or trying to access a
> > + * ptrauth register.
> > + */
> > +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
> > +{
> > +	if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH) &&
> > +	    cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
> > +		kvm_arm_vcpu_ptrauth_enable(vcpu);
> > +	} else {
> > +		kvm_inject_undefined(vcpu);
> 
> How can we trap here without having ARM64_HAS_[ADDRESS,GENERIC]_AUTH ?

We'll trap if the current CPU supports one of the two (with an
architected algorithm), but some other CPU does not (or uses an IMP DEF
algorithm). Note that we're checking that all CPUs have the feature.

> If this to cover the case if we only have one of the two or is there a
> case where we trap ptrauth registers/instructions even though the CPU
> doesn't have the support?

It's there to cater for the case that some CPUs lack a feature that
others have, so that we expose a uniform view to guests.

There's a single control to trap both address/generic authentication, so
it has to check that support is uniform for both. 

I'd meant to fix this up to not be so pessimistic -- we could support
one without that other, so long as it is uniformly absent. e.g. if all
CPUs support address auth and all CPUs have no support for generic auth.

I'll need to add negative cpucaps to detect that. I'll try to sort that
out for v2.

[...]

> > +static void __hyp_text __ptrauth_save_state(struct kvm_cpu_context *ctxt)
> > +{
> > +	if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH)) {
> > +		__ptrauth_save_key(ctxt->sys_regs, APIA);
> > +		__ptrauth_save_key(ctxt->sys_regs, APIB);
> > +		__ptrauth_save_key(ctxt->sys_regs, APDA);
> > +		__ptrauth_save_key(ctxt->sys_regs, APDB);
> > +	}
> > +
> > +	if (cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
> > +		__ptrauth_save_key(ctxt->sys_regs, APGA);
> > +	}
> 
> Aren't we ever only enabling the save/restore if we have both caps, so
> why are we checking it here again?

Sorry; I'd meant to fix things up so that we could support one without
the other. Likewise for __ptrauth_restore_state().

I'll try to fix this up for v2.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 10/11] arm64/kvm: context-switch ptrauth registers
  2017-08-01 14:26     ` Mark Rutland
@ 2017-08-01 14:32       ` Will Deacon
  2017-08-01 17:02       ` Christoffer Dall
  1 sibling, 0 replies; 35+ messages in thread
From: Will Deacon @ 2017-08-01 14:32 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Christoffer Dall, linux-arm-kernel, arnd, catalin.marinas,
	christoffer.dall, Dave.Martin, jiong.wang, kvmarm, linux-arch,
	marc.zyngier, suzuki.poulose, yao.qi, linux-kernel,
	kernel-hardening

On Tue, Aug 01, 2017 at 03:26:07PM +0100, Mark Rutland wrote:
> On Tue, Aug 01, 2017 at 01:00:14PM +0200, Christoffer Dall wrote:
> > On Wed, Jul 19, 2017 at 05:01:31PM +0100, Mark Rutland wrote:
> > > When pointer authentication is supported, a guest may wish to use it.
> > > This patch adds the necessary KVM infrastructure for this to work, with
> > > a semi-lazy context switch of the pointer auth state.
> > > 
> > > When we schedule a vcpu, we disable guest usage of pointer
> > > authentication instructions and accesses to the keys. While these are
> > > disabled, we avoid context-switching the keys. When we trap the guest
> > > trying to use pointer authentication functionality, we change to eagerly
> > > context-switching the keys, and enable the feature. The next time the
> > > vcpu is scheduled out/in, we start again.
> > > 
> > > This is sufficient for systems with uniform pointer authentication
> > > support. For systems with mismatched support, it will be necessary to
> > 
> > What is mismatched support?  You mean systems where one CPU has ptrauth
> > and another one doesn't (or if they both have it but in different ways)?
> 
> Both! Any case where the support is not uniform across all CPUs.
> 
> A CPU can implement address auth and/or generic auth, and either may use
> an architected algorithm or an IMP DEF algorithm.
> 
> Even if all CPUs report an IMP DEF algorithm, the particular algorithm
> may differ across CPUs.

I know you don't like it, but I think we should resort to MIDR at that point
because otherwise IMP DEF algorithms will never be used by Linux and people
will complain.

Will

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 10/11] arm64/kvm: context-switch ptrauth registers
  2017-08-01 14:26     ` Mark Rutland
  2017-08-01 14:32       ` Will Deacon
@ 2017-08-01 17:02       ` Christoffer Dall
  1 sibling, 0 replies; 35+ messages in thread
From: Christoffer Dall @ 2017-08-01 17:02 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Christoffer Dall, linux-arm-kernel, arnd, catalin.marinas,
	Dave.Martin, jiong.wang, kvmarm, linux-arch, marc.zyngier,
	suzuki.poulose, will.deacon, yao.qi, linux-kernel,
	kernel-hardening

On Tue, Aug 01, 2017 at 03:26:07PM +0100, Mark Rutland wrote:
> On Tue, Aug 01, 2017 at 01:00:14PM +0200, Christoffer Dall wrote:
> > On Wed, Jul 19, 2017 at 05:01:31PM +0100, Mark Rutland wrote:
> > > When pointer authentication is supported, a guest may wish to use it.
> > > This patch adds the necessary KVM infrastructure for this to work, with
> > > a semi-lazy context switch of the pointer auth state.
> > > 
> > > When we schedule a vcpu, we disable guest usage of pointer
> > > authentication instructions and accesses to the keys. While these are
> > > disabled, we avoid context-switching the keys. When we trap the guest
> > > trying to use pointer authentication functionality, we change to eagerly
> > > context-switching the keys, and enable the feature. The next time the
> > > vcpu is scheduled out/in, we start again.
> > > 
> > > This is sufficient for systems with uniform pointer authentication
> > > support. For systems with mismatched support, it will be necessary to
> > 
> > What is mismatched support?  You mean systems where one CPU has ptrauth
> > and another one doesn't (or if they both have it but in different ways)?
> 
> Both! Any case where the support is not uniform across all CPUs.
> 
> A CPU can implement address auth and/or generic auth, and either may use
> an architected algorithm or an IMP DEF algorithm.
> 
> Even if all CPUs report an IMP DEF algorithm, the particular algorithm
> may differ across CPUs.
> 
> > > hide the feature from the guest's view of the ID registers.
> > 
> > I think the work Drew is pondering to allow userspace more control of
> > what we emulate to the guest can semi-automatically take care of this as
> > well.
> 
> I'll take a look.
> 
> [...]
> 
> > > +static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
> > > +{
> > > +	kvm_arm_vcpu_ptrauth_disable(vcpu);
> > > +}
> > > +
> > 
> > why sched_in and not vcpu_load?  (vcpu_load happens whenever you're
> > returning from userspace for example, and not only when you've been
> > scheduled away and are coming back).
> 
> I think this is the result of going searching for similar lazy context
> switching, and stumbling on some (fairly different) code on x86.
> 
> It looks like I can use vcpu_load() instead, so I'll give that a shot
> for v2.
> 
> > And why do we want to 'reset' the behavior of KVM when the host
> > schedules a VCPU thread?
> > 
> > If I understand the logic correctly, what you're establishing with the
> > appraoch of initially trapping use of ptrauth is to avoid
> > saving/restoring the state if the guest dosn't use ptrauth, but then if
> > the guest starts using it, it's likely to keep using it, and therefore
> > we start saving/restoring the registers.
> 
> Yes.
> 
> > Why is the host's decision to schedule something else on this physical
> > CPU a good indication that we should no longer save/restore these
> > registers for the VCPU?
> 
> I guess it's not.
> 
> Initially, I was followed the same approach we take for fpsimd, leaving
> the feature trapped until we take a shallow trap to hyp. Handing all the
> sysreg traps in hyp was unwieldy, so I moved that down to the kernel
> proper. That meant I couldn't enable the trap when switching from host
> to guest, and doing so in the regular context switch seemed like the
> next best option.
> 
> > Wouldn't it make more sense to have a flag on the VCPU, and
> > potentially a counter, so that if you switch X times without ever
> > touching the registers again, you can stop saving/restoring the state,
> > if that's even something we care about?
> 
> Something like that could make more sense. It should be fairly simple to
> implement a decaying counter to determine when to trap.
> 
> I'd steered clear of optimising the lazy heuristic as I'm testing with
> models, and I don't have numbers that would be representative of real
> HW.
> 
> > Another thing that comes to mind; does the kernel running a KVM's VCPU
> > thread (and handling softirqs, ISRs, etc.) ever use the ptrauth system,
> > or does that only happen when we go to userspace? 
> 
> Today, only userspace uses pointer auth, and the kernel does not.
> However, in-kernel usage is on the cards.
> 
> > If the latter, we could handle the save/restore entirely in
> > vcpu_put/vcpu_load instead. I don't mind picking up that bit as part
> > of my ongoing optimization work later though, if you're eager to get
> > these patches merged.
> 
> I'd avoided that so far, since it would be undone when in-kernel usage
> is implemented. If you prefer, I can implement that for now.
> 
> [...]
> 

I think we should just do a simple flag for now, and once we have
hardware and can measure things, we can add more advanced support like a
decaying counter or a doing this at vcpu_put/vcpu_load instead.

I can then deal with the headache of making sure this performs well on
systems that don't have the hardware support once things are merged,
because I'm looking at that already.

> > > +/*
> > > + * Handle the guest trying to use a ptrauth instruction, or trying to access a
> > > + * ptrauth register.
> > > + */
> > > +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
> > > +{
> > > +	if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH) &&
> > > +	    cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
> > > +		kvm_arm_vcpu_ptrauth_enable(vcpu);
> > > +	} else {
> > > +		kvm_inject_undefined(vcpu);
> > 
> > How can we trap here without having ARM64_HAS_[ADDRESS,GENERIC]_AUTH ?
> 
> We'll trap if the current CPU supports one of the two (with an
> architected algorithm), but some other CPU does not (or uses an IMP DEF
> algorithm). Note that we're checking that all CPUs have the feature.
> 
> > If this to cover the case if we only have one of the two or is there a
> > case where we trap ptrauth registers/instructions even though the CPU
> > doesn't have the support?
> 
> It's there to cater for the case that some CPUs lack a feature that
> others have, so that we expose a uniform view to guests.
> 
> There's a single control to trap both address/generic authentication, so
> it has to check that support is uniform for both. 
> 
> I'd meant to fix this up to not be so pessimistic -- we could support
> one without that other, so long as it is uniformly absent. e.g. if all
> CPUs support address auth and all CPUs have no support for generic auth.
> 
> I'll need to add negative cpucaps to detect that. I'll try to sort that
> out for v2.
> 
> [...]
> 
> > > +static void __hyp_text __ptrauth_save_state(struct kvm_cpu_context *ctxt)
> > > +{
> > > +	if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH)) {
> > > +		__ptrauth_save_key(ctxt->sys_regs, APIA);
> > > +		__ptrauth_save_key(ctxt->sys_regs, APIB);
> > > +		__ptrauth_save_key(ctxt->sys_regs, APDA);
> > > +		__ptrauth_save_key(ctxt->sys_regs, APDB);
> > > +	}
> > > +
> > > +	if (cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
> > > +		__ptrauth_save_key(ctxt->sys_regs, APGA);
> > > +	}
> > 
> > Aren't we ever only enabling the save/restore if we have both caps, so
> > why are we checking it here again?
> 
> Sorry; I'd meant to fix things up so that we could support one without
> the other. Likewise for __ptrauth_restore_state().
> 
> I'll try to fix this up for v2.
> 

Sounds good.  Thanks for otherwise producing a well-readable patch.

-Christoffer

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 01/11] arm64: docs: describe ELF hwcaps
  2017-07-19 16:01 ` [PATCH 01/11] arm64: docs: describe ELF hwcaps Mark Rutland
  2017-07-21 17:05   ` Dave Martin
@ 2017-08-03 14:49   ` Catalin Marinas
  2017-08-03 17:57   ` Kees Cook
  2 siblings, 0 replies; 35+ messages in thread
From: Catalin Marinas @ 2017-08-03 14:49 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-arch, arnd, jiong.wang, marc.zyngier,
	yao.qi, suzuki.poulose, will.deacon, linux-kernel, Dave.Martin,
	kernel-hardening, kvmarm, christoffer.dall

On Wed, Jul 19, 2017 at 05:01:22PM +0100, Mark Rutland wrote:
> +3. The hwcaps exposed in AT_HWCAP
> +---------------------------------
> +
> +HWCAP_FP
> +
> +    Functionality implied by ID_AA64PFR0_EL1.FP == 0b0000.

Aren't these too restrictive? Linux would still present HWCAP_FP even
when ID_AA64PFR0_EL1.FP == 1. I think we should replace the strict
equal with greater than or equal, also mentioning that the field is
signed (or refer to the cpuid where the sign of the fields is
described).

-- 
Catalin

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 01/11] arm64: docs: describe ELF hwcaps
  2017-07-19 16:01 ` [PATCH 01/11] arm64: docs: describe ELF hwcaps Mark Rutland
  2017-07-21 17:05   ` Dave Martin
  2017-08-03 14:49   ` Catalin Marinas
@ 2017-08-03 17:57   ` Kees Cook
  2 siblings, 0 replies; 35+ messages in thread
From: Kees Cook @ 2017-08-03 17:57 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, Arnd Bergmann, Catalin Marinas,
	Christoffer Dall, Dave Martin, jiong.wang, kvmarm, linux-arch,
	Marc Zyngier, Suzuki K. Poulose, Will Deacon, yao.qi, LKML,
	kernel-hardening, linux-doc

On Wed, Jul 19, 2017 at 9:01 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> We don't document our ELF hwcaps, leaving developers to interpret them
> according to hearsay, guesswork, or (in exceptional cases) inspection of
> the current kernel code.
>
> This is less than optimal, and it would be far better if we had some
> definitive description of each of the ELF hwcaps that developers could
> refer to.
>
> This patch adds a document describing the (native) arm64 ELF hwcaps.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Dave Martin <Dave.Martin@arm.com>
> Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  Documentation/arm64/elf_hwcaps.txt | 133 +++++++++++++++++++++++++++++++++++++

With the kernel docs moving to ReST markup[1], perhaps reformat this
to a .rst file and link to it from somewhere sensible in the ReST
tree, perhaps the userspace API section in
Documentation/userspace-api/index.rst?

-Kees

[1] https://www.kernel.org/doc/html/latest/doc-guide/sphinx.html

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 07/11] arm64: add basic pointer authentication support
  2017-07-19 16:01 ` [PATCH 07/11] arm64: add basic pointer authentication support Mark Rutland
  2017-07-25 15:26   ` Dave Martin
@ 2017-08-11  7:46   ` Yao Qi
  2017-08-11  8:45     ` Dave Martin
  1 sibling, 1 reply; 35+ messages in thread
From: Yao Qi @ 2017-08-11  7:46 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel
  Cc: arnd, catalin.marinas, christoffer.dall, Dave.Martin, jiong.wang,
	kvmarm, linux-arch, marc.zyngier, suzuki.poulose, will.deacon,
	linux-kernel, kernel-hardening

Hi Mark,

On 19/07/17 17:01, Mark Rutland wrote:
> +#define HWCAP_APIA           (1 << 16)

Can you rename it to HWCAP_ARM64_APIA or HWCAP_ARM_APIA?  When we
use it in user space, at least in GDB, we usually do this,

#ifndef HWCAP_APIA
#define HWCAP_APIA (1 << 16)
#endif

However, the code use this macro can be compiled on !arm64 host.
If HWCAP_APIA is defined on other !arm64 host and its value is not
(1 << 16), the program "aarch64_hwcap & HWCAP_APIA ? XXX : XXX;" is
wrong, and compiler doesn't complain.

I notice that mips, mn10300, sparc, and s390 define their HWCAP this
way, like HWCAP_SPARC_FLUSH, HWCAP_MIPS_R6, HWCAP_S390_DFP, etc.
IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 07/11] arm64: add basic pointer authentication support
  2017-08-11  7:46   ` Yao Qi
@ 2017-08-11  8:45     ` Dave Martin
  0 siblings, 0 replies; 35+ messages in thread
From: Dave Martin @ 2017-08-11  8:45 UTC (permalink / raw)
  To: Yao Qi
  Cc: Mark Rutland, linux-arm-kernel, linux-arch, arnd, jiong.wang,
	marc.zyngier, catalin.marinas, suzuki.poulose, will.deacon,
	linux-kernel, kernel-hardening, kvmarm, christoffer.dall

On Fri, Aug 11, 2017 at 08:46:28AM +0100, Yao Qi wrote:
> Hi Mark,
> 
> On 19/07/17 17:01, Mark Rutland wrote:
> >+#define HWCAP_APIA           (1 << 16)
> 
> Can you rename it to HWCAP_ARM64_APIA or HWCAP_ARM_APIA?  When we
> use it in user space, at least in GDB, we usually do this,
> 
> #ifndef HWCAP_APIA
> #define HWCAP_APIA (1 << 16)
> #endif
> 
> However, the code use this macro can be compiled on !arm64 host.
> If HWCAP_APIA is defined on other !arm64 host and its value is not
> (1 << 16), the program "aarch64_hwcap & HWCAP_APIA ? XXX : XXX;" is
> wrong, and compiler doesn't complain.
> 
> I notice that mips, mn10300, sparc, and s390 define their HWCAP this
> way, like HWCAP_SPARC_FLUSH, HWCAP_MIPS_R6, HWCAP_S390_DFP, etc.

(Sticking my oar in because this would apply to HWCAP_SVE too.)

It would have been a good idea I guess, but historically arm, arm64, x86
(for the one HWCAP2_* flag I can find) and unicore32 don't do this.
That can't change now without an API break, and changing the naming
scheme just for new hwcaps just seems messy.

Including multiple arches' headers in the same compilation unit isn't
guaranteed to work sensibly at all AFAICT -- it seems best no to rely
on it.

In the above, you could be doing something like

#ifdef HWCAP_APIA
#if HWCAP_APIA != (1 << 16)
#error "HWCAP_APIA value mismatch"
#else
#undef HWCAP_APIA
#endif
#endif

#define HWCAP_APIA (1 << 16)

...or...

#define HWCAP_ARM64_APIA (1 << 16)

(i.e., unconditionally, and with a well-behaved compile-time error if
there is a definition already).

[...]

Cheers
---Dave

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 00/11] ARMv8.3 pointer authentication userspace support
  2017-07-25 12:06   ` Mark Rutland
  2017-07-25 14:00     ` Jiong Wang
@ 2017-08-11 11:29     ` Mark Rutland
  1 sibling, 0 replies; 35+ messages in thread
From: Mark Rutland @ 2017-08-11 11:29 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arch, arnd, jiong.wang, marc.zyngier, catalin.marinas,
	yao.qi, will.deacon, linux-kernel, kernel-hardening, kvmarm,
	linux-arm-kernel

On Tue, Jul 25, 2017 at 01:06:43PM +0100, Mark Rutland wrote:
> On Fri, Jul 21, 2017 at 06:05:09PM +0100, Dave Martin wrote:
> > On Wed, Jul 19, 2017 at 05:01:21PM +0100, Mark Rutland wrote:
> > > This series adds support for the ARMv8.3 pointer authentication extension.
> 
> > > Open questions
> > > ==============
> > > 
> > > * Should keys be per-thread rather than per-process?
> > > 
> > >   My understanding is that glibc can't (currently) handle threads having
> > >   different keys, but it might be that another libc would prefer per-thread
> > 
> > Can you elaborate?
> > 
> > It's not valid to do a function return from one thread to another.
> 
> Regardless of whether it's valid per the C spec or POSIX, some people
> use {set,get}context and {set,long}jmp in this manner (IIRC, QEMU does
> this), and my understanding is that similar tricks are in use in the
> bowels of glibc.
> 
> Otherwise, my preference would be to have per-thread keys from day one.

Having considered comments I've received elsewhere, I've reversed my
position here. I think per-process keys are the more
sensible default since:

* This will allow us to protect function pointers in shared
  datastructures such as vtables.

* Tasks have their own stacks, and values leaked from one stack cannot
  be used to spoof return addresses on another.

* If an attacker can take control of one thread, they've already gained
  code execution and/or primitives that can be used to attack the
  process by other means.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2017-08-11 11:30 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-07-19 16:01 [PATCH 00/11] ARMv8.3 pointer authentication userspace support Mark Rutland
2017-07-19 16:01 ` [PATCH 01/11] arm64: docs: describe ELF hwcaps Mark Rutland
2017-07-21 17:05   ` Dave Martin
2017-07-24 10:47     ` Suzuki K Poulose
2017-08-03 14:49   ` Catalin Marinas
2017-08-03 17:57   ` Kees Cook
2017-07-19 16:01 ` [PATCH 02/11] asm-generic: mm_hooks: allow hooks to be overridden individually Mark Rutland
2017-07-19 16:01 ` [PATCH 03/11] arm64: add pointer authentication register bits Mark Rutland
2017-07-19 16:01 ` [PATCH 04/11] arm64/cpufeature: add ARMv8.3 id_aa64isar1 bits Mark Rutland
2017-07-24 10:54   ` Suzuki K Poulose
2017-07-19 16:01 ` [PATCH 05/11] arm64/cpufeature: detect pointer authentication Mark Rutland
2017-07-19 16:01 ` [PATCH 06/11] arm64: Don't trap host pointer auth use to EL2 Mark Rutland
2017-07-24 10:37   ` Dave Martin
2017-07-19 16:01 ` [PATCH 07/11] arm64: add basic pointer authentication support Mark Rutland
2017-07-25 15:26   ` Dave Martin
2017-08-11  7:46   ` Yao Qi
2017-08-11  8:45     ` Dave Martin
2017-07-19 16:01 ` [PATCH 08/11] arm64: expose user PAC bit positions via ptrace Mark Rutland
2017-07-19 16:01 ` [PATCH 09/11] arm64/kvm: preserve host HCR_EL2 value Mark Rutland
2017-08-01 11:00   ` Christoffer Dall
2017-07-19 16:01 ` [PATCH 10/11] arm64/kvm: context-switch ptrauth registers Mark Rutland
2017-08-01 11:00   ` Christoffer Dall
2017-08-01 14:26     ` Mark Rutland
2017-08-01 14:32       ` Will Deacon
2017-08-01 17:02       ` Christoffer Dall
2017-07-19 16:01 ` [PATCH 11/11] arm64: docs: document pointer authentication Mark Rutland
2017-07-21 17:05 ` [PATCH 00/11] ARMv8.3 pointer authentication userspace support Dave Martin
2017-07-25 12:06   ` Mark Rutland
2017-07-25 14:00     ` Jiong Wang
2017-08-11 11:29     ` Mark Rutland
2017-07-24 11:52 ` Yao Qi
2017-07-25 11:32 ` Yao Qi
2017-07-25 16:01   ` Mark Rutland
2017-07-25 14:12 ` [kernel-hardening] " Li Kun
2017-07-25 15:12   ` Mark Rutland

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).