From: Dave Martin <Dave.Martin@arm.com>
To: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org, linux-arch@vger.kernel.org,
	arnd@arndb.de, jiong.wang@arm.com, marc.zyngier@arm.com,
	catalin.marinas@arm.com, yao.qi@arm.com, suzuki.poulose@arm.com,
	will.deacon@arm.com, linux-kernel@vger.kernel.org,
	kernel-hardening@lists.openwall.com,
	kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org
Subject: Re: [PATCH 07/11] arm64: add basic pointer authentication support
Date: Tue, 25 Jul 2017 16:26:50 +0100	[thread overview]
Message-ID: <20170725152649.GE6321@e103592.cambridge.arm.com> (raw)
In-Reply-To: <1500480092-28480-8-git-send-email-mark.rutland@arm.com>

On Wed, Jul 19, 2017 at 05:01:28PM +0100, Mark Rutland wrote:
> This patch adds basic support for pointer authentication, allowing
> userspace to make use of APIAKey. The kernel maintains an APIAKey value
> for each process (shared by all threads within), which is initialised to
> a random value at exec() time.
> 
> Instructions using other keys (APIBKey, APDAKey, APDBKey) are disabled,
> and will behave as NOPs. These may be made use of in future patches.
> 
> No support is added for the generic key (APGAKey), though this cannot be
> trapped or made to behave as a NOP. Its presence is not advertised with
> a hwcap.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/Kconfig                    | 23 +++++++++
>  arch/arm64/include/asm/mmu.h          |  5 ++
>  arch/arm64/include/asm/mmu_context.h  | 25 +++++++++-
>  arch/arm64/include/asm/pointer_auth.h | 89 +++++++++++++++++++++++++++++++++++
>  arch/arm64/include/uapi/asm/hwcap.h   |  1 +
>  arch/arm64/kernel/cpufeature.c        | 11 +++++
>  arch/arm64/kernel/cpuinfo.c           |  1 +
>  7 files changed, 153 insertions(+), 2 deletions(-)
>  create mode 100644 arch/arm64/include/asm/pointer_auth.h
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index dfd9086..15a9931 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -962,6 +962,29 @@ config ARM64_UAO
>  
>  endmenu
>  
> +menu "ARMv8.3 architectural features"
> +
> +config ARM64_POINTER_AUTHENTICATION
> +	bool "Enable support for pointer authentication"
> +	default y
> +	help
> +	  Pointer authentication (part of the ARMv8.3 Extensions) provides
> +	  instructions for signing and authenticating pointers against secret
> +	  keys, which can be used to mitigate Return Oriented Programming (ROP)
> +	  and other attacks.
> +
> +	  This option enables these instructions at EL0 (i.e. for userspace).
> +
> +	  Choosing this option will cause the kernel to initialise secret keys
> +	  for each process at exec() time, with these keys being
> +	  context-switched along with the process.
> +
> +	  The feature is detected at runtime. If the feature is not present in
> +	  hardware it will not be advertised to userspace nor will it be
> +	  enabled.

Should we describe which keys are supported here, or will this option
always turn on all the keys/instructions that the kernel implements to
date?

> +
> +endmenu
> +
>  config ARM64_MODULE_CMODEL_LARGE
>  	bool
>  
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 5468c83..6a848f3 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -16,10 +16,15 @@
>  #ifndef __ASM_MMU_H
>  #define __ASM_MMU_H
>  
> +#include <asm/pointer_auth.h>
> +
>  typedef struct {
>  	atomic64_t	id;
>  	void		*vdso;
>  	unsigned long	flags;
> +#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
> +	struct ptrauth_keys	ptrauth_keys;
> +#endif
>  } mm_context_t;
>  
>  /*
> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> index 3257895a..06757a5 100644
> --- a/arch/arm64/include/asm/mmu_context.h
> +++ b/arch/arm64/include/asm/mmu_context.h
> @@ -31,7 +31,6 @@
>  #include <asm/cacheflush.h>
>  #include <asm/cpufeature.h>
>  #include <asm/proc-fns.h>
> -#include <asm-generic/mm_hooks.h>
>  #include <asm/cputype.h>
>  #include <asm/pgtable.h>
>  #include <asm/sysreg.h>
> @@ -154,7 +153,14 @@ static inline void cpu_replace_ttbr1(pgd_t *pgd)
>  #define destroy_context(mm)		do { } while(0)
>  void check_and_switch_context(struct mm_struct *mm, unsigned int cpu);
>  
> -#define init_new_context(tsk,mm)	({ atomic64_set(&(mm)->context.id, 0); 0; })
> +static inline int init_new_context(struct task_struct *tsk,
> +			struct mm_struct *mm)
> +{
> +	atomic64_set(&mm->context.id, 0);
> +	mm_ctx_ptrauth_init(&mm->context);

For this stuff in general, I wonder whether we should move this code
away from the mm and into thread_struct and the process/thread paths;
otherwise we'll just have to move it all around later if ptrauth is ever
to be supported per-thread.

This would also remove the need to have individually overridable arch
mm hooks.

Adding an extra 16 bytes to thread_struct is probably not the end of the
world.  thread_struct is already well over half a K.  We could de-dupe
by refcounting or similar, but it may not be worth the complexity.
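
Roughly what I have in mind, as a sketch only (the field and helper
names below are made up for illustration, not taken from this series):

	/* in thread_struct, rather than mm_context_t: */
	#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
		struct ptrauth_keys	keys_user;
	#endif

	/* hooked into the exec()/copy_thread()/switch_to() paths instead
	 * of the mm hooks: */
	static inline void ptrauth_thread_init_user(struct task_struct *tsk)
	{
		ptrauth_keys_init(&tsk->thread.keys_user);
	}

	static inline void ptrauth_thread_switch(struct task_struct *tsk)
	{
		ptrauth_keys_switch(&tsk->thread.keys_user);
	}

The ptrauth_keys_*() helpers could stay as they are; only where the
state lives would change.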

> +
> +	return 0;
> +}
>  
>  /*
>   * This is called when "tsk" is about to enter lazy TLB mode.
> @@ -200,6 +206,8 @@ static inline void __switch_mm(struct mm_struct *next)
>  		return;
>  	}
>  
> +	mm_ctx_ptrauth_switch(&next->context);
> +
>  	check_and_switch_context(next, cpu);
>  }
>  
> @@ -226,6 +234,19 @@ static inline void __switch_mm(struct mm_struct *next)
>  
>  void verify_cpu_asid_bits(void);
>  
> +static inline void arch_dup_mmap(struct mm_struct *oldmm,
> +				 struct mm_struct *mm)
> +{
> +	mm_ctx_ptrauth_dup(&oldmm->context, &mm->context);
> +}
> +#define arch_dup_mmap arch_dup_mmap
> +
> +/*
> + * We need to override arch_dup_mmap before including the generic hooks, which
> + * are otherwise sufficient for us.
> + */
> +#include <asm-generic/mm_hooks.h>
> +
>  #endif /* !__ASSEMBLY__ */
>  
>  #endif /* !__ASM_MMU_CONTEXT_H */
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> new file mode 100644
> index 0000000..964da0c
> --- /dev/null
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -0,0 +1,89 @@
> +/*
> + * Copyright (C) 2016 ARM Ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +#ifndef __ASM_POINTER_AUTH_H
> +#define __ASM_POINTER_AUTH_H
> +
> +#include <linux/random.h>
> +
> +#include <asm/cpufeature.h>
> +#include <asm/sysreg.h>
> +
> +#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
> +/*
> + * Each key is a 128-bit quantity which is split across a pair of 64-bit
> + * registers (Lo and Hi).
> + */
> +struct ptrauth_key {
> +	unsigned long lo, hi;
> +};
> +
> +/*
> + * We give each process its own instruction A key (APIAKey), which is shared by
> + * all threads. This is inherited upon fork(), and reinitialised upon exec*().
> + * All other keys are currently unused, with APIBKey, APDAKey, and APDBKey
> + * instructions behaving as NOPs.
> + */
> +struct ptrauth_keys {
> +	struct ptrauth_key apia;
> +};
> +
> +static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
> +{
> +	if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
> +		return;
> +
> +	get_random_bytes(keys, sizeof(*keys));
> +}
> +
> +#define __ptrauth_key_install(k, v)			\
> +do {							\
> +	write_sysreg_s(v.lo, SYS_ ## k ## KEYLO_EL1);	\
> +	write_sysreg_s(v.hi, SYS_ ## k ## KEYHI_EL1);	\

(v) here, i.e. the macro argument should be parenthesised, though
moderately crazy usage would be required in order for this to go wrong.
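
For concreteness, a minimal sketch of the change (just adding the
parentheses to the two lines quoted above):

	write_sysreg_s((v).lo, SYS_ ## k ## KEYLO_EL1);	\
	write_sysreg_s((v).hi, SYS_ ## k ## KEYHI_EL1);	\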

> +} while (0)
> +
> +static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
> +{
> +	if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
> +		return;
> +
> +	__ptrauth_key_install(APIA, keys->apia);
> +}
> +
> +static inline void ptrauth_keys_dup(struct ptrauth_keys *old,
> +				    struct ptrauth_keys *new)
> +{
> +	if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
> +		return;
> +
> +	*new = *old;

This seems an odd thing to do.  Surely, by design we never want two
processes to share the same keys?  Don't we always proceed to nuke
the keys via mm_ctx_ptrauth_init() anyway?
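
For reference, the callers I have in mind (paths quoted from memory, so
treat the exact call chain as an assumption on my part):

	arch_dup_mmap()     <- dup_mmap(), i.e. the fork() path only
	init_new_context()  <- mm_init(), i.e. both the fork() and exec() paths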

> +}
> +
> +#define mm_ctx_ptrauth_init(ctx) \
> +	ptrauth_keys_init(&(ctx)->ptrauth_keys)
> +
> +#define mm_ctx_ptrauth_switch(ctx) \
> +	ptrauth_keys_switch(&(ctx)->ptrauth_keys)
> +
> +#define mm_ctx_ptrauth_dup(oldctx, newctx) \
> +	ptrauth_keys_dup(&(oldctx)->ptrauth_keys, &(newctx)->ptrauth_keys)

[...]

Cheers
---Dave
