* [PATCH v3 0/2]  arm64: Support Enhanced PAN
@ 2021-01-19 16:07 Vladimir Murzin
  2021-01-19 16:07 ` [PATCH v3 1/2] arm64: Support execute-only permissions with " Vladimir Murzin
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Vladimir Murzin @ 2021-01-19 16:07 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: catalin.marinas, keescook, dave.martin

Hi

The ARM architecture gains support for Enhanced Privileged Access Never
(EPAN), which allows Privileged Access Never to be used with
Execute-only mappings.

As a consequence, 24cecc377463 ("arm64: Revert support for execute-only
user mappings") can be revisited and execute-only support re-enabled.
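
To illustrate what this enables (not part of the series, and the helper
name below is made up), a userspace sketch could look like this; with
EPAN, PROT_EXEC on its own yields a mapping that can be executed but
neither read nor written:

#include <stddef.h>
#include <string.h>
#include <sys/mman.h>

/* Illustrative only: stage code in a writable buffer, then drop to exec-only */
static void *map_execonly(const void *code, size_t len)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return NULL;

	memcpy(p, code, len);

	/* On an EPAN-capable CPU this becomes a true execute-only mapping */
	if (mprotect(p, len, PROT_EXEC)) {
		munmap(p, len);
		return NULL;
	}

	return p;
}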

Changelog:

	RFC -> v1
	    - removed cap check in pte_valid_not_user (per Catalin)
	    - local_flush_tlb_all() in cpu_enable_epan() (per Catalin)
	    - reordered with CnP (per Catalin)
	    - s/HWCAP2_EPAN/HWCAP2_EXECONLY/ (per Catalin)

	v1 -> v2
	    - rebased on for-next/uaccess (for INIT_SCTLR_EL1_MMU_ON)
	    - moved EPAN enable to proc.S (via INIT_SCTLR_EL1_MMU_ON),
	      so no need in enable method from cpufeature, no need to
	      keep ordering relative to CnP (per Catalin)

	v2 -> v3
	    - rebased on 5.11-rc4

Thanks!

Vladimir Murzin (2):
  arm64: Support execute-only permissions with Enhanced PAN
  arm64: Introduce HWCAPS2_EXECONLY

 arch/arm64/Kconfig                    | 17 +++++++++++++++++
 arch/arm64/include/asm/cpucaps.h      |  3 ++-
 arch/arm64/include/asm/hwcap.h        |  1 +
 arch/arm64/include/asm/pgtable-prot.h |  5 +++--
 arch/arm64/include/asm/pgtable.h      | 14 +++++++++++++-
 arch/arm64/include/asm/sysreg.h       |  4 +++-
 arch/arm64/include/uapi/asm/hwcap.h   |  1 +
 arch/arm64/kernel/cpufeature.c        | 15 +++++++++++++++
 arch/arm64/kernel/cpuinfo.c           |  1 +
 arch/arm64/mm/fault.c                 |  3 +++
 10 files changed, 59 insertions(+), 5 deletions(-)

-- 
2.7.4



* [PATCH v3 1/2] arm64: Support execute-only permissions with Enhanced PAN
  2021-01-19 16:07 [PATCH v3 0/2] arm64: Support Enhanced PAN Vladimir Murzin
@ 2021-01-19 16:07 ` Vladimir Murzin
  2021-01-26 11:03   ` Will Deacon
  2021-01-19 16:07 ` [PATCH v3 2/2] arm64: Introduce HWCAPS2_EXECONLY Vladimir Murzin
  2021-01-26 11:09 ` [PATCH v3 0/2] arm64: Support Enhanced PAN Will Deacon
  2 siblings, 1 reply; 13+ messages in thread
From: Vladimir Murzin @ 2021-01-19 16:07 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: catalin.marinas, keescook, dave.martin

Enhanced Privileged Access Never (EPAN) allows Privileged Access Never
to be used with Execute-only mappings.

The absence of such support was a reason for 24cecc377463 ("arm64: Revert
support for execute-only user mappings"), which can now be revisited and
execute-only support re-enabled.

Cc: Kees Cook <keescook@chromium.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
---
 arch/arm64/Kconfig                    | 17 +++++++++++++++++
 arch/arm64/include/asm/cpucaps.h      |  3 ++-
 arch/arm64/include/asm/pgtable-prot.h |  5 +++--
 arch/arm64/include/asm/pgtable.h      | 14 +++++++++++++-
 arch/arm64/include/asm/sysreg.h       |  3 ++-
 arch/arm64/kernel/cpufeature.c        | 12 ++++++++++++
 arch/arm64/mm/fault.c                 |  3 +++
 7 files changed, 52 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index f39568b..e63cc18 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1053,6 +1053,9 @@ config ARCH_WANT_HUGE_PMD_SHARE
 config ARCH_HAS_CACHE_LINE_SIZE
 	def_bool y
 
+config ARCH_HAS_FILTER_PGPROT
+	def_bool y
+
 config ARCH_ENABLE_SPLIT_PMD_PTLOCK
 	def_bool y if PGTABLE_LEVELS > 2
 
@@ -1673,6 +1676,20 @@ config ARM64_MTE
 
 endmenu
 
+menu "ARMv8.7 architectural features"
+
+config ARM64_EPAN
+	bool "Enable support for Enhanced Privileged Access Never (EPAN)"
+	default y
+	depends on ARM64_PAN
+	help
+	 Enhanced Privileged Access Never (EPAN) allows Privileged
+	 Access Never to be used with Execute-only mappings.
+
+	 The feature is detected at runtime, and will remain disabled
+	 if the cpu does not implement the feature.
+endmenu
+
 config ARM64_SVE
 	bool "ARM Scalable Vector Extension support"
 	default y
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index b77d997..9e3ec4d 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -66,7 +66,8 @@
 #define ARM64_WORKAROUND_1508412		58
 #define ARM64_HAS_LDAPR				59
 #define ARM64_KVM_PROTECTED_MODE		60
+#define ARM64_HAS_EPAN				61
 
-#define ARM64_NCAPS				61
+#define ARM64_NCAPS				62
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 046be78..f91c2aa 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -88,12 +88,13 @@ extern bool arm64_use_ng_mappings;
 #define PAGE_SHARED_EXEC	__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_WRITE)
 #define PAGE_READONLY		__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
 #define PAGE_READONLY_EXEC	__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN)
+#define PAGE_EXECONLY		__pgprot(_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PTE_PXN)
 
 #define __P000  PAGE_NONE
 #define __P001  PAGE_READONLY
 #define __P010  PAGE_READONLY
 #define __P011  PAGE_READONLY
-#define __P100  PAGE_READONLY_EXEC
+#define __P100  PAGE_EXECONLY
 #define __P101  PAGE_READONLY_EXEC
 #define __P110  PAGE_READONLY_EXEC
 #define __P111  PAGE_READONLY_EXEC
@@ -102,7 +103,7 @@ extern bool arm64_use_ng_mappings;
 #define __S001  PAGE_READONLY
 #define __S010  PAGE_SHARED
 #define __S011  PAGE_SHARED
-#define __S100  PAGE_READONLY_EXEC
+#define __S100  PAGE_EXECONLY
 #define __S101  PAGE_READONLY_EXEC
 #define __S110  PAGE_SHARED_EXEC
 #define __S111  PAGE_SHARED_EXEC
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 5015627..0196849 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -114,7 +114,7 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
 
 #define pte_valid(pte)		(!!(pte_val(pte) & PTE_VALID))
 #define pte_valid_not_user(pte) \
-	((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID)
+	((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == (PTE_VALID | PTE_UXN))
 #define pte_valid_user(pte) \
 	((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER))
 
@@ -982,6 +982,18 @@ static inline bool arch_faults_on_old_pte(void)
 }
 #define arch_faults_on_old_pte arch_faults_on_old_pte
 
+static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
+{
+	if (cpus_have_const_cap(ARM64_HAS_EPAN))
+		return prot;
+
+	if (pgprot_val(prot) != pgprot_val(PAGE_EXECONLY))
+		return prot;
+
+	return PAGE_READONLY_EXEC;
+}
+
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_PGTABLE_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 8b5e7e5..47e9fdc 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -591,6 +591,7 @@
 	(SCTLR_EL2_RES1 | ENDIAN_SET_EL2)
 
 /* SCTLR_EL1 specific flags. */
+#define SCTLR_EL1_EPAN		(BIT(57))
 #define SCTLR_EL1_ATA0		(BIT(42))
 
 #define SCTLR_EL1_TCF0_SHIFT	38
@@ -631,7 +632,7 @@
 	 SCTLR_EL1_SED  | SCTLR_ELx_I    | SCTLR_EL1_DZE  | SCTLR_EL1_UCT   | \
 	 SCTLR_EL1_NTWE | SCTLR_ELx_IESB | SCTLR_EL1_SPAN | SCTLR_ELx_ITFSB | \
 	 SCTLR_ELx_ATA  | SCTLR_EL1_ATA0 | ENDIAN_SET_EL1 | SCTLR_EL1_UCI   | \
-	 SCTLR_EL1_RES1)
+	 SCTLR_EL1_EPAN | SCTLR_EL1_RES1)
 
 /* MAIR_ELx memory attributes (used by Linux) */
 #define MAIR_ATTR_DEVICE_nGnRnE		UL(0x00)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e99edde..9d85956 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1774,6 +1774,18 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.cpu_enable = cpu_enable_pan,
 	},
 #endif /* CONFIG_ARM64_PAN */
+#ifdef CONFIG_ARM64_EPAN
+	{
+		.desc = "Enhanced Privileged Access Never",
+		.capability = ARM64_HAS_EPAN,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_cpuid_feature,
+		.sys_reg = SYS_ID_AA64MMFR1_EL1,
+		.field_pos = ID_AA64MMFR1_PAN_SHIFT,
+		.sign = FTR_UNSIGNED,
+		.min_field_value = 3,
+	},
+#endif /* CONFIG_ARM64_EPAN */
 #ifdef CONFIG_ARM64_LSE_ATOMICS
 	{
 		.desc = "LSE atomic instructions",
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 3c40da4..c32095f6 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -529,6 +529,9 @@ static int __kprobes do_page_fault(unsigned long far, unsigned int esr,
 	if (faulthandler_disabled() || !mm)
 		goto no_context;
 
+	if (cpus_have_const_cap(ARM64_HAS_EPAN))
+		vm_flags &= ~VM_EXEC;
+
 	if (user_mode(regs))
 		mm_flags |= FAULT_FLAG_USER;
 
-- 
2.7.4



* [PATCH v3 2/2] arm64: Introduce HWCAPS2_EXECONLY
  2021-01-19 16:07 [PATCH v3 0/2] arm64: Support Enhanced PAN Vladimir Murzin
  2021-01-19 16:07 ` [PATCH v3 1/2] arm64: Support execute-only permissions with " Vladimir Murzin
@ 2021-01-19 16:07 ` Vladimir Murzin
  2021-01-26 11:07   ` Will Deacon
  2021-01-26 11:09 ` [PATCH v3 0/2] arm64: Support Enhanced PAN Will Deacon
  2 siblings, 1 reply; 13+ messages in thread
From: Vladimir Murzin @ 2021-01-19 16:07 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: catalin.marinas, keescook, dave.martin

With EPAN supported it might be handy for userspace to know that PROT_EXEC
gives execute-only permission, so advertise it via HWCAPS2_EXECONLY.
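
For illustration (sketch only, the helper name is made up), userspace
could then do something like:

#include <stdbool.h>
#include <sys/auxv.h>
#include <asm/hwcap.h>

/* Illustrative: true if PROT_EXEC alone gives an execute-only mapping */
static bool have_execonly(void)
{
	return getauxval(AT_HWCAP2) & HWCAP2_EXECONLY;
}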

Cc: Kees Cook <keescook@chromium.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
---
 arch/arm64/include/asm/hwcap.h      | 1 +
 arch/arm64/include/asm/sysreg.h     | 1 +
 arch/arm64/include/uapi/asm/hwcap.h | 1 +
 arch/arm64/kernel/cpufeature.c      | 3 +++
 arch/arm64/kernel/cpuinfo.c         | 1 +
 5 files changed, 7 insertions(+)

diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h
index 9a5498c..5ee5bce 100644
--- a/arch/arm64/include/asm/hwcap.h
+++ b/arch/arm64/include/asm/hwcap.h
@@ -105,6 +105,7 @@
 #define KERNEL_HWCAP_RNG		__khwcap2_feature(RNG)
 #define KERNEL_HWCAP_BTI		__khwcap2_feature(BTI)
 #define KERNEL_HWCAP_MTE		__khwcap2_feature(MTE)
+#define KERNEL_HWCAP_EXECONLY		__khwcap2_feature(EXECONLY)
 
 /*
  * This yields a mask that user programs can use to figure out what
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 47e9fdc..135aa66 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -811,6 +811,7 @@
 
 #define ID_AA64MMFR1_VMIDBITS_8		0
 #define ID_AA64MMFR1_VMIDBITS_16	2
+#define ID_AA64MMFR1_EPAN		3
 
 /* id_aa64mmfr2 */
 #define ID_AA64MMFR2_E0PD_SHIFT		60
diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
index b8f41aa..61471f4 100644
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -75,5 +75,6 @@
 #define HWCAP2_RNG		(1 << 16)
 #define HWCAP2_BTI		(1 << 17)
 #define HWCAP2_MTE		(1 << 18)
+#define HWCAP2_EXECONLY		(1 << 19)
 
 #endif /* _UAPI__ASM_HWCAP_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9d85956..bc80cc2 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2285,6 +2285,9 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
 #ifdef CONFIG_ARM64_MTE
 	HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_MTE_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_MTE, CAP_HWCAP, KERNEL_HWCAP_MTE),
 #endif /* CONFIG_ARM64_MTE */
+#ifdef CONFIG_ARM64_EPAN
+	HWCAP_CAP(SYS_ID_AA64MMFR1_EL1, ID_AA64MMFR1_PAN_SHIFT, FTR_UNSIGNED, ID_AA64MMFR1_EPAN, CAP_HWCAP, KERNEL_HWCAP_EXECONLY),
+#endif
 	{},
 };
 
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 77605ae..34c98d9 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -94,6 +94,7 @@ static const char *const hwcap_str[] = {
 	[KERNEL_HWCAP_RNG]		= "rng",
 	[KERNEL_HWCAP_BTI]		= "bti",
 	[KERNEL_HWCAP_MTE]		= "mte",
+	[KERNEL_HWCAP_EXECONLY]		= "xo",
 };
 
 #ifdef CONFIG_COMPAT
-- 
2.7.4



* Re: [PATCH v3 1/2] arm64: Support execute-only permissions with Enhanced PAN
  2021-01-19 16:07 ` [PATCH v3 1/2] arm64: Support execute-only permissions with " Vladimir Murzin
@ 2021-01-26 11:03   ` Will Deacon
  2021-01-26 12:05     ` Catalin Marinas
  2021-03-12 11:17     ` Vladimir Murzin
  0 siblings, 2 replies; 13+ messages in thread
From: Will Deacon @ 2021-01-26 11:03 UTC (permalink / raw)
  To: Vladimir Murzin; +Cc: catalin.marinas, keescook, linux-arm-kernel, dave.martin

On Tue, Jan 19, 2021 at 04:07:22PM +0000, Vladimir Murzin wrote:
> Enhanced Privileged Access Never (EPAN) allows Privileged Access Never
> to be used with Execute-only mappings.
> 
> The absence of such support was a reason for 24cecc377463 ("arm64: Revert
> support for execute-only user mappings"), which can now be revisited and
> execute-only support re-enabled.
> 
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
> ---
>  arch/arm64/Kconfig                    | 17 +++++++++++++++++
>  arch/arm64/include/asm/cpucaps.h      |  3 ++-
>  arch/arm64/include/asm/pgtable-prot.h |  5 +++--
>  arch/arm64/include/asm/pgtable.h      | 14 +++++++++++++-
>  arch/arm64/include/asm/sysreg.h       |  3 ++-
>  arch/arm64/kernel/cpufeature.c        | 12 ++++++++++++
>  arch/arm64/mm/fault.c                 |  3 +++
>  7 files changed, 52 insertions(+), 5 deletions(-)

(please cc me on arm64 patches)

> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index f39568b..e63cc18 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1053,6 +1053,9 @@ config ARCH_WANT_HUGE_PMD_SHARE
>  config ARCH_HAS_CACHE_LINE_SIZE
>  	def_bool y
>  
> +config ARCH_HAS_FILTER_PGPROT
> +	def_bool y
> +
>  config ARCH_ENABLE_SPLIT_PMD_PTLOCK
>  	def_bool y if PGTABLE_LEVELS > 2
>  
> @@ -1673,6 +1676,20 @@ config ARM64_MTE
>  
>  endmenu
>  
> +menu "ARMv8.7 architectural features"
> +
> +config ARM64_EPAN
> +	bool "Enable support for Enhanced Privileged Access Never (EPAN)"
> +	default y
> +	depends on ARM64_PAN
> +	help
> +	 Enhanced Privileged Access Never (EPAN) allows Privileged
> +	 Access Never to be used with Execute-only mappings.
> +
> +	 The feature is detected at runtime, and will remain disabled
> +	 if the cpu does not implement the feature.
> +endmenu
> +
>  config ARM64_SVE
>  	bool "ARM Scalable Vector Extension support"
>  	default y
> diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
> index b77d997..9e3ec4d 100644
> --- a/arch/arm64/include/asm/cpucaps.h
> +++ b/arch/arm64/include/asm/cpucaps.h
> @@ -66,7 +66,8 @@
>  #define ARM64_WORKAROUND_1508412		58
>  #define ARM64_HAS_LDAPR				59
>  #define ARM64_KVM_PROTECTED_MODE		60
> +#define ARM64_HAS_EPAN				61
>  
> -#define ARM64_NCAPS				61
> +#define ARM64_NCAPS				62
>  
>  #endif /* __ASM_CPUCAPS_H */
> diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
> index 046be78..f91c2aa 100644
> --- a/arch/arm64/include/asm/pgtable-prot.h
> +++ b/arch/arm64/include/asm/pgtable-prot.h
> @@ -88,12 +88,13 @@ extern bool arm64_use_ng_mappings;
>  #define PAGE_SHARED_EXEC	__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_WRITE)
>  #define PAGE_READONLY		__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
>  #define PAGE_READONLY_EXEC	__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN)
> +#define PAGE_EXECONLY		__pgprot(_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PTE_PXN)
>  
>  #define __P000  PAGE_NONE
>  #define __P001  PAGE_READONLY
>  #define __P010  PAGE_READONLY
>  #define __P011  PAGE_READONLY
> -#define __P100  PAGE_READONLY_EXEC
> +#define __P100  PAGE_EXECONLY
>  #define __P101  PAGE_READONLY_EXEC
>  #define __P110  PAGE_READONLY_EXEC
>  #define __P111  PAGE_READONLY_EXEC
> @@ -102,7 +103,7 @@ extern bool arm64_use_ng_mappings;
>  #define __S001  PAGE_READONLY
>  #define __S010  PAGE_SHARED
>  #define __S011  PAGE_SHARED
> -#define __S100  PAGE_READONLY_EXEC
> +#define __S100  PAGE_EXECONLY
>  #define __S101  PAGE_READONLY_EXEC
>  #define __S110  PAGE_SHARED_EXEC
>  #define __S111  PAGE_SHARED_EXEC
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 5015627..0196849 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -114,7 +114,7 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
>  
>  #define pte_valid(pte)		(!!(pte_val(pte) & PTE_VALID))

We used to have a useful comment here describing why we're looking at UXN.

There was also one next to the protection_map[], which I think we should add
back.

>  #define pte_valid_not_user(pte) \
> -	((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID)
> +	((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == (PTE_VALID | PTE_UXN))
>  #define pte_valid_user(pte) \
>  	((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER))

Won't pte_valid_user() go wrong for exec-only mappings now?

>  
> @@ -982,6 +982,18 @@ static inline bool arch_faults_on_old_pte(void)
>  }
>  #define arch_faults_on_old_pte arch_faults_on_old_pte
>  
> +static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
> +{
> +	if (cpus_have_const_cap(ARM64_HAS_EPAN))
> +		return prot;
> +
> +	if (pgprot_val(prot) != pgprot_val(PAGE_EXECONLY))
> +		return prot;
> +
> +	return PAGE_READONLY_EXEC;
> +}
> +
> +
>  #endif /* !__ASSEMBLY__ */
>  
>  #endif /* __ASM_PGTABLE_H */
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 8b5e7e5..47e9fdc 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -591,6 +591,7 @@
>  	(SCTLR_EL2_RES1 | ENDIAN_SET_EL2)
>  
>  /* SCTLR_EL1 specific flags. */
> +#define SCTLR_EL1_EPAN		(BIT(57))
>  #define SCTLR_EL1_ATA0		(BIT(42))
>  
>  #define SCTLR_EL1_TCF0_SHIFT	38
> @@ -631,7 +632,7 @@
>  	 SCTLR_EL1_SED  | SCTLR_ELx_I    | SCTLR_EL1_DZE  | SCTLR_EL1_UCT   | \
>  	 SCTLR_EL1_NTWE | SCTLR_ELx_IESB | SCTLR_EL1_SPAN | SCTLR_ELx_ITFSB | \
>  	 SCTLR_ELx_ATA  | SCTLR_EL1_ATA0 | ENDIAN_SET_EL1 | SCTLR_EL1_UCI   | \
> -	 SCTLR_EL1_RES1)
> +	 SCTLR_EL1_EPAN | SCTLR_EL1_RES1)

Why is this handled differently to normal PAN, where the SCTLR is written in
cpu_enable_pan()?

>  /* MAIR_ELx memory attributes (used by Linux) */
>  #define MAIR_ATTR_DEVICE_nGnRnE		UL(0x00)
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index e99edde..9d85956 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1774,6 +1774,18 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  		.cpu_enable = cpu_enable_pan,
>  	},
>  #endif /* CONFIG_ARM64_PAN */
> +#ifdef CONFIG_ARM64_EPAN
> +	{
> +		.desc = "Enhanced Privileged Access Never",
> +		.capability = ARM64_HAS_EPAN,
> +		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
> +		.matches = has_cpuid_feature,
> +		.sys_reg = SYS_ID_AA64MMFR1_EL1,
> +		.field_pos = ID_AA64MMFR1_PAN_SHIFT,
> +		.sign = FTR_UNSIGNED,
> +		.min_field_value = 3,
> +	},
> +#endif /* CONFIG_ARM64_EPAN */
>  #ifdef CONFIG_ARM64_LSE_ATOMICS
>  	{
>  		.desc = "LSE atomic instructions",
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 3c40da4..c32095f6 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -529,6 +529,9 @@ static int __kprobes do_page_fault(unsigned long far, unsigned int esr,
>  	if (faulthandler_disabled() || !mm)
>  		goto no_context;
>  
> +	if (cpus_have_const_cap(ARM64_HAS_EPAN))
> +		vm_flags &= ~VM_EXEC;

This needs a comment, and I think it would be cleaner to rework the vm_flags
initialisation along the lines of:

	unsigned long vm_flags;
	unsigned long mm_flags = FAULT_FLAG_DEFAULT;

	if (user_mode(regs))
		mm_flags |= FAULT_FLAG_USER;

	if (is_el0_instruction_abort(esr)) {
		vm_flags = VM_EXEC;
		mm_flags |= FAULT_FLAG_INSTRUCTION;
	} else if (is_write_abort(esr)) {
		vm_flags = VM_WRITE;
		mm_flags |= FAULT_FLAG_WRITE;
	} else {
		vm_flags = VM_READ | VM_WRITE;
		if (!cpus_have_const_cap(ARM64_HAS_EPAN))
			vm_flags |= VM_EXEC;
	}

(but again, please add a comment to that last part because I still don't
really follow what you're doing)

Will


* Re: [PATCH v3 2/2] arm64: Introduce HWCAPS2_EXECONLY
  2021-01-19 16:07 ` [PATCH v3 2/2] arm64: Introduce HWCAPS2_EXECONLY Vladimir Murzin
@ 2021-01-26 11:07   ` Will Deacon
  0 siblings, 0 replies; 13+ messages in thread
From: Will Deacon @ 2021-01-26 11:07 UTC (permalink / raw)
  To: Vladimir Murzin; +Cc: catalin.marinas, keescook, linux-arm-kernel, dave.martin

On Tue, Jan 19, 2021 at 04:07:23PM +0000, Vladimir Murzin wrote:
> With EPAN supported it might be handy for userspace to know that PROT_EXEC
> gives execute-only permission, so advertise it via HWCAPS2_EXECONLY.

I don't want to merge this without a concrete need for it in userspace.

Kees -- do you see this being necessary? I'm having a hard time
understanding what an application would do differently at runtime based on
this hwcap.

Will


* Re: [PATCH v3 0/2]  arm64: Support Enhanced PAN
  2021-01-19 16:07 [PATCH v3 0/2] arm64: Support Enhanced PAN Vladimir Murzin
  2021-01-19 16:07 ` [PATCH v3 1/2] arm64: Support execute-only permissions with " Vladimir Murzin
  2021-01-19 16:07 ` [PATCH v3 2/2] arm64: Introduce HWCAPS2_EXECONLY Vladimir Murzin
@ 2021-01-26 11:09 ` Will Deacon
  2021-03-12 11:18   ` Vladimir Murzin
  2 siblings, 1 reply; 13+ messages in thread
From: Will Deacon @ 2021-01-26 11:09 UTC (permalink / raw)
  To: Vladimir Murzin; +Cc: catalin.marinas, keescook, linux-arm-kernel, dave.martin

On Tue, Jan 19, 2021 at 04:07:21PM +0000, Vladimir Murzin wrote:
> The ARM architecture gains support for Enhanced Privileged Access Never
> (EPAN), which allows Privileged Access Never to be used with
> Execute-only mappings.
> 
> As a consequence, 24cecc377463 ("arm64: Revert support for execute-only
> user mappings") can be revisited and execute-only support re-enabled.

Does ptdump.c need updating too?

Will


* Re: [PATCH v3 1/2] arm64: Support execute-only permissions with Enhanced PAN
  2021-01-26 11:03   ` Will Deacon
@ 2021-01-26 12:05     ` Catalin Marinas
  2021-01-26 12:23       ` Will Deacon
  2021-03-12 11:17     ` Vladimir Murzin
  1 sibling, 1 reply; 13+ messages in thread
From: Catalin Marinas @ 2021-01-26 12:05 UTC (permalink / raw)
  To: Will Deacon; +Cc: Vladimir Murzin, keescook, linux-arm-kernel, dave.martin

On Tue, Jan 26, 2021 at 11:03:06AM +0000, Will Deacon wrote:
> On Tue, Jan 19, 2021 at 04:07:22PM +0000, Vladimir Murzin wrote:
> >  #define pte_valid_not_user(pte) \
> > -	((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID)
> > +	((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == (PTE_VALID | PTE_UXN))
> >  #define pte_valid_user(pte) \
> >  	((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER))
> 
> Won't pte_valid_user() go wrong for exec-only mappings now?

I wondered about the same in November (and reproducing my comment
below):

https://lore.kernel.org/linux-arm-kernel/20201119182236.GN4376@gaia/

pte_valid_user() checks for both PTE_VALID and PTE_USER. In theory, a
!PTE_UXN is also user-accessible but it's only used in gup_pte_range()
via pte_access_permitted(). If "access" in the latter means only
read/write, we should be ok. If we keep it as is, a comment would
be useful.
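
Spelling that out with the definitions from the hunks above (informal
sketch of the resulting bit patterns):

/*
 *                     VALID USER UXN | pte_valid_user  pte_valid_not_user
 * PAGE_READONLY_EXEC    1     1   0  |     true             false
 * PAGE_EXECONLY         1     0   0  |     false            false
 * kernel mapping        1     0   1  |     false            true
 *
 * i.e. with this patch an exec-only pte is matched by neither macro.
 */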

> > diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> > index 8b5e7e5..47e9fdc 100644
> > --- a/arch/arm64/include/asm/sysreg.h
> > +++ b/arch/arm64/include/asm/sysreg.h
> > @@ -591,6 +591,7 @@
> >  	(SCTLR_EL2_RES1 | ENDIAN_SET_EL2)
> >  
> >  /* SCTLR_EL1 specific flags. */
> > +#define SCTLR_EL1_EPAN		(BIT(57))
> >  #define SCTLR_EL1_ATA0		(BIT(42))
> >  
> >  #define SCTLR_EL1_TCF0_SHIFT	38
> > @@ -631,7 +632,7 @@
> >  	 SCTLR_EL1_SED  | SCTLR_ELx_I    | SCTLR_EL1_DZE  | SCTLR_EL1_UCT   | \
> >  	 SCTLR_EL1_NTWE | SCTLR_ELx_IESB | SCTLR_EL1_SPAN | SCTLR_ELx_ITFSB | \
> >  	 SCTLR_ELx_ATA  | SCTLR_EL1_ATA0 | ENDIAN_SET_EL1 | SCTLR_EL1_UCI   | \
> > -	 SCTLR_EL1_RES1)
> > +	 SCTLR_EL1_EPAN | SCTLR_EL1_RES1)
> 
> Why is this handled differently to normal PAN, where the SCTLR is written in
> cpu_enable_pan()?

That's how it was done in an early version but I suggested we move it to
the default SCTLR bits to save some lines:

https://lore.kernel.org/linux-arm-kernel/20201202182303.GC21091@gaia/

With classic PAN, we only enable it if all the CPUs support it. For EPAN
that's not necessary since EPAN should depend on PAN being enabled.

It's also cached in the TLB, so it's a bit of a hassle with CnP. See
this past discussion:

https://lore.kernel.org/linux-arm-kernel/X7P+r%2Fl3ewvaf1aV@trantor/
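
For reference, the earlier approach would have been along these lines
(reconstructed sketch, not the code that was actually posted):

/* Sketch only: the pre-v2 approach using a cpufeature enable hook */
static void cpu_enable_epan(const struct arm64_cpu_capabilities *__unused)
{
	sysreg_clear_set(sctlr_el1, 0, SCTLR_EL1_EPAN);
	/* SCTLR_EL1.EPAN is permitted to be cached in the TLB */
	local_flush_tlb_all();
}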

> > diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> > index 3c40da4..c32095f6 100644
> > --- a/arch/arm64/mm/fault.c
> > +++ b/arch/arm64/mm/fault.c
> > @@ -529,6 +529,9 @@ static int __kprobes do_page_fault(unsigned long far, unsigned int esr,
> >  	if (faulthandler_disabled() || !mm)
> >  		goto no_context;
> >  
> > +	if (cpus_have_const_cap(ARM64_HAS_EPAN))
> > +		vm_flags &= ~VM_EXEC;
> 
> This needs a comment, and I think it would be cleaner to rework the vm_flags
> initialisation along the lines of:
> 
> 	unsigned long vm_flags;
> 	unsigned long mm_flags = FAULT_FLAG_DEFAULT;
> 
> 	if (user_mode(regs))
> 		mm_flags |= FAULT_FLAG_USER;
> 
> 	if (is_el0_instruction_abort(esr)) {
> 		vm_flags = VM_EXEC;
> 		mm_flags |= FAULT_FLAG_INSTRUCTION;
> 	} else if (is_write_abort(esr)) {
> 		vm_flags = VM_WRITE;
> 		mm_flags |= FAULT_FLAG_WRITE;
> 	} else {
> 		vm_flags = VM_READ | VM_WRITE;
> 		if (!cpus_have_const_cap(ARM64_HAS_EPAN))
> 			vm_flags |= VM_EXEC;
> 	}
> 
> (but again, please add a comment to that last part because I still don't
> really follow what you're doing)

Every time I come across this vm_flags handling I have to spend some
minutes to re-understand what it does. So vm_flags tells us what bits we
must have in vma->vm_flags for the fault to be benign. But we start with
all rwx flags and clear VM_EXEC if EPAN since exec does not imply read,
making it more confusing.

I think your rewrite is cleaner, maybe with some comment why we add
VM_EXEC when !EPAN.
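
Something like this for that last branch, perhaps (sketch only):

	} else {
		/*
		 * vm_flags is the set of VMA permissions that can satisfy
		 * this fault. Without EPAN an exec-only VMA is still
		 * readable (exec implies read), so VM_EXEC is enough; with
		 * EPAN exec no longer implies read, so a read/write abort
		 * on an exec-only VMA must not be treated as benign.
		 */
		vm_flags = VM_READ | VM_WRITE;
		if (!cpus_have_const_cap(ARM64_HAS_EPAN))
			vm_flags |= VM_EXEC;
	}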

-- 
Catalin


* Re: [PATCH v3 1/2] arm64: Support execute-only permissions with Enhanced PAN
  2021-01-26 12:05     ` Catalin Marinas
@ 2021-01-26 12:23       ` Will Deacon
  2021-03-12 11:19         ` Vladimir Murzin
  0 siblings, 1 reply; 13+ messages in thread
From: Will Deacon @ 2021-01-26 12:23 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: Vladimir Murzin, keescook, linux-arm-kernel, dave.martin

On Tue, Jan 26, 2021 at 12:05:42PM +0000, Catalin Marinas wrote:
> On Tue, Jan 26, 2021 at 11:03:06AM +0000, Will Deacon wrote:
> > On Tue, Jan 19, 2021 at 04:07:22PM +0000, Vladimir Murzin wrote:
> > >  #define pte_valid_not_user(pte) \
> > > -	((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID)
> > > +	((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == (PTE_VALID | PTE_UXN))
> > >  #define pte_valid_user(pte) \
> > >  	((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER))
> > 
> > Won't pte_valid_user() go wrong for exec-only mappings now?
> 
> I wondered about the same in November (and reproducing my comment
> below):
> 
> https://lore.kernel.org/linux-arm-kernel/20201119182236.GN4376@gaia/
> 
> pte_valid_user() checks for both PTE_VALID and PTE_USER. In theory, a
> !PTE_UXN is also user-accessible but it's only used in gup_pte_range()
> via pte_access_permitted(). If "access" in the latter means only
> read/write, we should be ok. If we keep it as is, a comment would
> be useful.

I think we should ensure that:

  pte_valid(pte) && (pte_valid_not_user(pte) || pte_valid_user(pte))

is always true, but I don't think it is after this patch. It reminds me
of some of the tests that Anshuman wrote.

> > > diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> > > index 8b5e7e5..47e9fdc 100644
> > > --- a/arch/arm64/include/asm/sysreg.h
> > > +++ b/arch/arm64/include/asm/sysreg.h
> > > @@ -591,6 +591,7 @@
> > >  	(SCTLR_EL2_RES1 | ENDIAN_SET_EL2)
> > >  
> > >  /* SCTLR_EL1 specific flags. */
> > > +#define SCTLR_EL1_EPAN		(BIT(57))
> > >  #define SCTLR_EL1_ATA0		(BIT(42))
> > >  
> > >  #define SCTLR_EL1_TCF0_SHIFT	38
> > > @@ -631,7 +632,7 @@
> > >  	 SCTLR_EL1_SED  | SCTLR_ELx_I    | SCTLR_EL1_DZE  | SCTLR_EL1_UCT   | \
> > >  	 SCTLR_EL1_NTWE | SCTLR_ELx_IESB | SCTLR_EL1_SPAN | SCTLR_ELx_ITFSB | \
> > >  	 SCTLR_ELx_ATA  | SCTLR_EL1_ATA0 | ENDIAN_SET_EL1 | SCTLR_EL1_UCI   | \
> > > -	 SCTLR_EL1_RES1)
> > > +	 SCTLR_EL1_EPAN | SCTLR_EL1_RES1)
> > 
> > Why is this handled differently to normal PAN, where the SCTLR is written in
> > cpu_enable_pan()?
> 
> That's how it was done in an early version but I suggested we move it to
> the default SCTLR bits to save some lines:
> 
> https://lore.kernel.org/linux-arm-kernel/20201202182303.GC21091@gaia/
> 
> With classic PAN, we only enable it if all the CPUs support it. For EPAN
> that's not necessary since EPAN should depend on PAN being enabled.

Ok, let's just hope we don't have any CPU errata requiring us to disable
it.

> It's also cached in the TLB, so it's a bit of a hassle with CnP. See
> this past discussion:
> 
> https://lore.kernel.org/linux-arm-kernel/X7P+r%2Fl3ewvaf1aV@trantor/

Thanks. None of this is in the Arm ARM afaict, so I can't figure this stuff
out on my own.

> > > diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> > > index 3c40da4..c32095f6 100644
> > > --- a/arch/arm64/mm/fault.c
> > > +++ b/arch/arm64/mm/fault.c
> > > @@ -529,6 +529,9 @@ static int __kprobes do_page_fault(unsigned long far, unsigned int esr,
> > >  	if (faulthandler_disabled() || !mm)
> > >  		goto no_context;
> > >  
> > > +	if (cpus_have_const_cap(ARM64_HAS_EPAN))
> > > +		vm_flags &= ~VM_EXEC;
> > 
> > This needs a comment, and I think it would be cleaner to rework the vm_flags
> > initialisation along the lines of:
> > 
> > 	unsigned long vm_flags;
> > 	unsigned long mm_flags = FAULT_FLAG_DEFAULT;
> > 
> > 	if (user_mode(regs))
> > 		mm_flags |= FAULT_FLAG_USER;
> > 
> > 	if (is_el0_instruction_abort(esr)) {
> > 		vm_flags = VM_EXEC;
> > 		mm_flags |= FAULT_FLAG_INSTRUCTION;
> > 	} else if (is_write_abort(esr)) {
> > 		vm_flags = VM_WRITE;
> > 		mm_flags |= FAULT_FLAG_WRITE;
> > 	} else {
> > 		vm_flags = VM_READ | VM_WRITE;
> > 		if (!cpus_have_const_cap(ARM64_HAS_EPAN))
> > 			vm_flags |= VM_EXEC;
> > 	}
> > 
> > (but again, please add a comment to that last part because I still don't
> > really follow what you're doing)
> 
> Every time I come across this vm_flags handling I have to spend some
> minutes to re-understand what it does. So vm_flags tells us what bits we
> must have in vma->vm_flags for the fault to be benign. But we start with
> all rwx flags and clear VM_EXEC if EPAN since exec does not imply read,
> making it more confusing.
> 
> I think your rewrite is cleaner, maybe with some comment why we add
> VM_EXEC when !EPAN.

Agreed.

Will


* Re: [PATCH v3 1/2] arm64: Support execute-only permissions with Enhanced PAN
  2021-01-26 11:03   ` Will Deacon
  2021-01-26 12:05     ` Catalin Marinas
@ 2021-03-12 11:17     ` Vladimir Murzin
  1 sibling, 0 replies; 13+ messages in thread
From: Vladimir Murzin @ 2021-03-12 11:17 UTC (permalink / raw)
  To: Will Deacon; +Cc: linux-arm-kernel, keescook, dave.martin, catalin.marinas

On 1/26/21 11:03 AM, Will Deacon wrote:
> On Tue, Jan 19, 2021 at 04:07:22PM +0000, Vladimir Murzin wrote:
>> Enhanced Privileged Access Never (EPAN) allows Privileged Access Never
>> to be used with Execute-only mappings.
>>
>> The absence of such support was a reason for 24cecc377463 ("arm64: Revert
>> support for execute-only user mappings"), which can now be revisited and
>> execute-only support re-enabled.
>>
>> Cc: Kees Cook <keescook@chromium.org>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
>> ---
>>  arch/arm64/Kconfig                    | 17 +++++++++++++++++
>>  arch/arm64/include/asm/cpucaps.h      |  3 ++-
>>  arch/arm64/include/asm/pgtable-prot.h |  5 +++--
>>  arch/arm64/include/asm/pgtable.h      | 14 +++++++++++++-
>>  arch/arm64/include/asm/sysreg.h       |  3 ++-
>>  arch/arm64/kernel/cpufeature.c        | 12 ++++++++++++
>>  arch/arm64/mm/fault.c                 |  3 +++
>>  7 files changed, 52 insertions(+), 5 deletions(-)
> 
> (please cc me on arm64 patches)

Ah, surely I should have done that!

snip...

>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -114,7 +114,7 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
>>  
>>  #define pte_valid(pte)		(!!(pte_val(pte) & PTE_VALID))
> 
> We used to have a useful comment here describing why we're looking at UXN.
> 
> There was also one next to the protection_map[], which I think we should add
> back.

Noted.

It seems that the rest has been covered by Catalin...

Cheers
Vladimir


* Re: [PATCH v3 0/2] arm64: Support Enhanced PAN
  2021-01-26 11:09 ` [PATCH v3 0/2] arm64: Support Enhanced PAN Will Deacon
@ 2021-03-12 11:18   ` Vladimir Murzin
  0 siblings, 0 replies; 13+ messages in thread
From: Vladimir Murzin @ 2021-03-12 11:18 UTC (permalink / raw)
  To: Will Deacon; +Cc: linux-arm-kernel, keescook, dave.martin, catalin.marinas

On 1/26/21 11:09 AM, Will Deacon wrote:
> On Tue, Jan 19, 2021 at 04:07:21PM +0000, Vladimir Murzin wrote:
>> The ARM architecture gains support for Enhanced Privileged Access Never
>> (EPAN), which allows Privileged Access Never to be used with
>> Execute-only mappings.
>>
>> As a consequence, 24cecc377463 ("arm64: Revert support for execute-only
>> user mappings") can be revisited and execute-only support re-enabled.
> 
> Does ptdump.c need updating too?

I had a look at ptdump.c and IIUIC it is supposed to pretty-print the bits
of a page table entry. With EPAN supported we stop seeing "USR" for an
execute-only mapping, but that correctly reflects the state of the pte.
What am I missing?

Cheers
Vladimir



* Re: [PATCH v3 1/2] arm64: Support execute-only permissions with Enhanced PAN
  2021-01-26 12:23       ` Will Deacon
@ 2021-03-12 11:19         ` Vladimir Murzin
  2021-03-12 11:20           ` Will Deacon
  2021-03-12 12:23           ` Catalin Marinas
  0 siblings, 2 replies; 13+ messages in thread
From: Vladimir Murzin @ 2021-03-12 11:19 UTC (permalink / raw)
  To: Will Deacon, Catalin Marinas; +Cc: linux-arm-kernel, keescook, dave.martin

On 1/26/21 12:23 PM, Will Deacon wrote:
> On Tue, Jan 26, 2021 at 12:05:42PM +0000, Catalin Marinas wrote:
>> On Tue, Jan 26, 2021 at 11:03:06AM +0000, Will Deacon wrote:
>>> On Tue, Jan 19, 2021 at 04:07:22PM +0000, Vladimir Murzin wrote:
>>>>  #define pte_valid_not_user(pte) \
>>>> -	((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID)
>>>> +	((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == (PTE_VALID | PTE_UXN))
>>>>  #define pte_valid_user(pte) \
>>>>  	((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER))
>>>
>>> Won't pte_valid_user() go wrong for exec-only mappings now?
>>
>> I wondered about the same in November (and reproducing my comment
>> below):
>>
>> https://lore.kernel.org/linux-arm-kernel/20201119182236.GN4376@gaia/
>>
>> pte_valid_user() checks for both PTE_VALID and PTE_USER. In theory, a
>> !PTE_UXN is also user-accessible but it's only used in gup_pte_range()
>> via pte_access_permitted(). If "access" in the latter means only
>> read/write, we should be ok. If we keep it as is, a comment would
>> be useful.
> 
> I think we should ensure that:
> 
>   pte_valid(pte) && (pte_valid_not_user(pte) || pte_valid_user(pte))
> 
> is always true, but I don't think it is after this patch. It reminds me
> of some of the tests that Anshuman wrote.

Something like

/*
 * Most user mappings have the PTE_USER bit set; the exception is
 * Execute-only, where neither PTE_USER nor PTE_UXN is set
 */
#define pte_valid_user(pte)                                            \
      (((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER)) || \
         ((pte_val(pte) & (PTE_VALID | PTE_UXN)) == PTE_VALID))



Cheers
Vladimir


* Re: [PATCH v3 1/2] arm64: Support execute-only permissions with Enhanced PAN
  2021-03-12 11:19         ` Vladimir Murzin
@ 2021-03-12 11:20           ` Will Deacon
  2021-03-12 12:23           ` Catalin Marinas
  1 sibling, 0 replies; 13+ messages in thread
From: Will Deacon @ 2021-03-12 11:20 UTC (permalink / raw)
  To: Vladimir Murzin; +Cc: Catalin Marinas, linux-arm-kernel, keescook, dave.martin

On Fri, Mar 12, 2021 at 11:19:02AM +0000, Vladimir Murzin wrote:
> On 1/26/21 12:23 PM, Will Deacon wrote:
> > On Tue, Jan 26, 2021 at 12:05:42PM +0000, Catalin Marinas wrote:
> >> On Tue, Jan 26, 2021 at 11:03:06AM +0000, Will Deacon wrote:
> >>> On Tue, Jan 19, 2021 at 04:07:22PM +0000, Vladimir Murzin wrote:
> >>>>  #define pte_valid_not_user(pte) \
> >>>> -	((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID)
> >>>> +	((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == (PTE_VALID | PTE_UXN))
> >>>>  #define pte_valid_user(pte) \
> >>>>  	((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER))
> >>>
> >>> Won't pte_valid_user() go wrong for exec-only mappings now?
> >>
> >> I wondered about the same in November (and reproducing my comment
> >> below):
> >>
> >> https://lore.kernel.org/linux-arm-kernel/20201119182236.GN4376@gaia/
> >>
> >> pte_valid_user() checks for both PTE_VALID and PTE_USER. In theory, a
> >> !PTE_UXN is also user-accessible but it's only used in gup_pte_range()
> >> via pte_access_permitted(). If "access" in the latter means only
> >> read/write, we should be ok. If we keep it as is, a comment would
> >> be useful.
> > 
> > I think we should ensure that:
> > 
> >   pte_valid(pte) && (pte_valid_not_user(pte) || pte_valid_user(pte))
> > 
> > is always true, but I don't think it is after this patch. It reminds me
> > of some of the tests that Anshuman wrote.
> 
> Something like
> 
> /*
>  * Most user mappings have the PTE_USER bit set; the exception is
>  * Execute-only, where neither PTE_USER nor PTE_UXN is set
>  */
> #define pte_valid_user(pte)                                            \
>       (((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER)) || \
>          ((pte_val(pte) & (PTE_VALID | PTE_UXN)) == PTE_VALID))

Yeah, assuming the compiler doesn't make a dog's dinner of it.

Will


* Re: [PATCH v3 1/2] arm64: Support execute-only permissions with Enhanced PAN
  2021-03-12 11:19         ` Vladimir Murzin
  2021-03-12 11:20           ` Will Deacon
@ 2021-03-12 12:23           ` Catalin Marinas
  1 sibling, 0 replies; 13+ messages in thread
From: Catalin Marinas @ 2021-03-12 12:23 UTC (permalink / raw)
  To: Vladimir Murzin; +Cc: Will Deacon, linux-arm-kernel, keescook, dave.martin

On Fri, Mar 12, 2021 at 11:19:02AM +0000, Vladimir Murzin wrote:
> On 1/26/21 12:23 PM, Will Deacon wrote:
> > On Tue, Jan 26, 2021 at 12:05:42PM +0000, Catalin Marinas wrote:
> >> On Tue, Jan 26, 2021 at 11:03:06AM +0000, Will Deacon wrote:
> >>> On Tue, Jan 19, 2021 at 04:07:22PM +0000, Vladimir Murzin wrote:
> >>>>  #define pte_valid_not_user(pte) \
> >>>> -	((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID)
> >>>> +	((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == (PTE_VALID | PTE_UXN))
> >>>>  #define pte_valid_user(pte) \
> >>>>  	((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER))
> >>>
> >>> Won't pte_valid_user() go wrong for exec-only mappings now?
> >>
> >> I wondered about the same in November (and reproducing my comment
> >> below):
> >>
> >> https://lore.kernel.org/linux-arm-kernel/20201119182236.GN4376@gaia/
> >>
> >> pte_valid_user() checks for both PTE_VALID and PTE_USER. In theory, a
> >> !PTE_UXN is also user-accessible but it's only used in gup_pte_range()
> >> via pte_access_permitted(). If "access" in the latter means only
> >> read/write, we should be ok. If we keep it as is, a comment would
> >> be useful.
> > 
> > I think we should ensure that:
> > 
> >   pte_valid(pte) && (pte_valid_not_user(pte) || pte_valid_user(pte))
> > 
> > is always true, but I don't think it is after this patch. It reminds me
> > of some of the tests that Anshuman wrote.
> 
> Something like
> 
> /*
>  * Most user mappings have the PTE_USER bit set; the exception is
>  * Execute-only, where neither PTE_USER nor PTE_UXN is set
>  */
> #define pte_valid_user(pte)                                            \
>       (((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER)) || \
>          ((pte_val(pte) & (PTE_VALID | PTE_UXN)) == PTE_VALID))

This is equivalent to (valid && (user || !uxn)) which matches how the
EPAN is now specified.

However, I think this change breaks pte_access_permitted(), which would
now return true even for execute-only mappings. Since such pages are
neither readable nor writable by user, get_user_pages_fast() should fail.
For example, if we have a futex in such an execute-only mapping,
get_futex_key() succeeds with the above. Similarly for
iov_iter_get_pages().

The slow-path get_user_pages() does the checks on the vma flags and this
can be overridden with FOLL_FORCE (it doesn't use
pte_access_permitted()). I haven't seen any get_user_pages_fast() called
with FOLL_FORCE, but even if it is, it looks like the flag is ignored as
the code relies on pte_access_permitted() only.

I'd say let's scrap pte_valid_user() altogether and move the original
logic to pte_access_permitted() with a comment that it must return false
for execute-only user mappings.
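
i.e. roughly (sketch only):

/*
 * p??_access_permitted() is only used for read/write style access (e.g.
 * gup_pte_range()), so it must return false for execute-only user
 * mappings, which have neither PTE_USER nor PTE_UXN set.
 */
#define pte_access_permitted(pte, write) \
	(((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER)) && \
	 (!(write) || pte_write(pte)))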

-- 
Catalin
