* [PATCH v3 0/4] arm64: compat: Add kuser helpers config option
@ 2019-04-02 16:27 Vincenzo Frascino
  2019-04-02 16:27 ` [PATCH v3 1/4] arm64: compat: Alloc separate pages for vectors and sigpage Vincenzo Frascino
                   ` (3 more replies)
  0 siblings, 4 replies; 15+ messages in thread
From: Vincenzo Frascino @ 2019-04-02 16:27 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Mark Rutland, Catalin Marinas, Will Deacon

Currently on arm64 the compat kuser helpers are unconditionally enabled.

To be on par with arm32, this patchset makes it possible to disable
the kuser helpers by adding a CONFIG_KUSER_HELPERS option, which is
enabled by default to avoid compatibility issues.

When the config option is disabled:
 - The kuser helper code is not compiled into the kernel.
 - The kuser helpers page is not mapped at 0xffff0000 for compat
   processes.
 - Any attempt to use a kuser helper from a compat process results
   in a segfault (see the sketch below).
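
As an illustration (not part of this series), a 32-bit process reaches
a kuser helper through a hard-coded branch into the page at 0xffff0000,
e.g. the __kuser_get_tls entry at 0xffff0fe0 described in
Documentation/arm/kernel_user_helpers.txt:

	/* AArch32 userspace, illustration only. */
	typedef void *(*kuser_get_tls_t)(void);
	#define __kuser_get_tls	((kuser_get_tls_t)0xffff0fe0)

	void *tls_base(void)
	{
		/*
		 * With CONFIG_KUSER_HELPERS=n this faults (SIGSEGV),
		 * since the page at 0xffff0000 is never mapped.
		 */
		return __kuser_get_tls();
	}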

Changes:
--------
v3:
  - Fix aarch32_alloc_vdso_pages()
v2:
  - Rebased on 5.1-rc3.
  - Addressed review comments.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>

Vincenzo Frascino (4):
  arm64: compat: Alloc separate pages for vectors and sigpage
  arm64: compat: Split kuser32
  arm64: compat: Refactor aarch32_alloc_vdso_pages()
  arm64: compat: Add KUSER_HELPERS config option

 arch/arm64/Kconfig                 |  28 +++++
 arch/arm64/include/asm/elf.h       |   6 +-
 arch/arm64/include/asm/processor.h |   4 +-
 arch/arm64/include/asm/signal32.h  |   2 -
 arch/arm64/kernel/Makefile         |   5 +-
 arch/arm64/kernel/kuser32.S        |  66 ++----------
 arch/arm64/kernel/signal32.c       |   5 +-
 arch/arm64/kernel/sigreturn32.S    |  46 ++++++++
 arch/arm64/kernel/vdso.c           | 165 +++++++++++++++++++++++------
 9 files changed, 225 insertions(+), 102 deletions(-)
 create mode 100644 arch/arm64/kernel/sigreturn32.S

-- 
2.21.0



* [PATCH v3 1/4] arm64: compat: Alloc separate pages for vectors and sigpage
  2019-04-02 16:27 [PATCH v3 0/4] arm64: compat: Add kuser helpers config option Vincenzo Frascino
@ 2019-04-02 16:27 ` Vincenzo Frascino
  2019-04-10 18:10   ` Will Deacon
  2019-04-02 16:27 ` [PATCH v3 2/4] arm64: compat: Split kuser32 Vincenzo Frascino
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 15+ messages in thread
From: Vincenzo Frascino @ 2019-04-02 16:27 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Mark Rutland, Catalin Marinas, Will Deacon

In the current implementation AArch32 installs a special page called
"[vectors]" that contains sigreturn trampolines and kuser helpers,
and this is done at fixed address specified by the kuser helpers ABI.

Having sigreturn trampolines and kuser helpers in the same page, makes
difficult to maintain compatibility with arm because it makes not
possible to disable kuser helpers.

Address the problem creating separate pages for vectors and sigpage in
a similar fashion to what happens today on arm.

Change as well the meaning of mm->context.vdso for AArch32 compat since
it now points to sigpage and not to vectors anymore in order to make
simpler the implementation of the signal handling (the address of
sigpage is randomized).

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/include/asm/elf.h       |   6 +-
 arch/arm64/include/asm/processor.h |   4 +-
 arch/arm64/include/asm/signal32.h  |   2 -
 arch/arm64/kernel/signal32.c       |   5 +-
 arch/arm64/kernel/vdso.c           | 121 ++++++++++++++++++++++-------
 5 files changed, 102 insertions(+), 36 deletions(-)

diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
index 6adc1a90e7e6..355d120b78cb 100644
--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -214,10 +214,10 @@ typedef compat_elf_greg_t		compat_elf_gregset_t[COMPAT_ELF_NGREG];
 	set_thread_flag(TIF_32BIT);					\
  })
 #define COMPAT_ARCH_DLINFO
-extern int aarch32_setup_vectors_page(struct linux_binprm *bprm,
-				      int uses_interp);
+extern int aarch32_setup_additional_pages(struct linux_binprm *bprm,
+					  int uses_interp);
 #define compat_arch_setup_additional_pages \
-					aarch32_setup_vectors_page
+					aarch32_setup_additional_pages
 
 #endif /* CONFIG_COMPAT */
 
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 5d9ce62bdebd..07c873fce961 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -78,9 +78,9 @@
 #endif /* CONFIG_ARM64_FORCE_52BIT */
 
 #ifdef CONFIG_COMPAT
-#define AARCH32_VECTORS_BASE	0xffff0000
+#define AARCH32_KUSER_BASE	0xffff0000
 #define STACK_TOP		(test_thread_flag(TIF_32BIT) ? \
-				AARCH32_VECTORS_BASE : STACK_TOP_MAX)
+				AARCH32_KUSER_BASE : STACK_TOP_MAX)
 #else
 #define STACK_TOP		STACK_TOP_MAX
 #endif /* CONFIG_COMPAT */
diff --git a/arch/arm64/include/asm/signal32.h b/arch/arm64/include/asm/signal32.h
index 81abea0b7650..58e288aaf0ba 100644
--- a/arch/arm64/include/asm/signal32.h
+++ b/arch/arm64/include/asm/signal32.h
@@ -20,8 +20,6 @@
 #ifdef CONFIG_COMPAT
 #include <linux/compat.h>
 
-#define AARCH32_KERN_SIGRET_CODE_OFFSET	0x500
-
 int compat_setup_frame(int usig, struct ksignal *ksig, sigset_t *set,
 		       struct pt_regs *regs);
 int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
index cb7800acd19f..3846a1b710b5 100644
--- a/arch/arm64/kernel/signal32.c
+++ b/arch/arm64/kernel/signal32.c
@@ -379,6 +379,7 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
 	compat_ulong_t retcode;
 	compat_ulong_t spsr = regs->pstate & ~(PSR_f | PSR_AA32_E_BIT);
 	int thumb;
+	void *sigreturn_base;
 
 	/* Check if the handler is written for ARM or Thumb */
 	thumb = handler & 1;
@@ -399,12 +400,12 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
 	} else {
 		/* Set up sigreturn pointer */
 		unsigned int idx = thumb << 1;
+		sigreturn_base = current->mm->context.vdso;
 
 		if (ka->sa.sa_flags & SA_SIGINFO)
 			idx += 3;
 
-		retcode = AARCH32_VECTORS_BASE +
-			  AARCH32_KERN_SIGRET_CODE_OFFSET +
+		retcode = ptr_to_compat(sigreturn_base) +
 			  (idx << 2) + thumb;
 	}
 
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 2d419006ad43..16f8fce5c501 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -1,5 +1,7 @@
 /*
- * VDSO implementation for AArch64 and vector page setup for AArch32.
+ * VDSO implementation for AArch64 and for AArch32:
+ * AArch64: vDSO implementation contains pages setup and data page update.
+ * AArch32: vDSO implementation contains sigreturn and kuser pages setup.
  *
  * Copyright (C) 2012 ARM Limited
  *
@@ -53,61 +55,126 @@ struct vdso_data *vdso_data = &vdso_data_store.data;
 /*
  * Create and map the vectors page for AArch32 tasks.
  */
-static struct page *vectors_page[1] __ro_after_init;
+/*
+ * aarch32_vdso_pages:
+ * 0 - kuser helpers
+ * 1 - sigreturn code
+ */
+#define C_VECTORS	0
+#define C_SIGPAGE	1
+#define C_PAGES		(C_SIGPAGE + 1)
+static struct page *aarch32_vdso_pages[C_PAGES] __ro_after_init;
+static const struct vm_special_mapping aarch32_vdso_spec[C_PAGES] = {
+	{
+		/* Must be named [vectors] for compatibility with arm. */
+		.name	= "[vectors]",
+		.pages	= &aarch32_vdso_pages[C_VECTORS],
+	},
+	{
+		/* Must be named [sigpage] for compatibility with arm. */
+		.name	= "[sigpage]",
+		.pages	= &aarch32_vdso_pages[C_SIGPAGE],
+	},
+};
 
-static int __init alloc_vectors_page(void)
+static int __init aarch32_alloc_vdso_pages(void)
 {
 	extern char __kuser_helper_start[], __kuser_helper_end[];
 	extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
 
 	int kuser_sz = __kuser_helper_end - __kuser_helper_start;
 	int sigret_sz = __aarch32_sigret_code_end - __aarch32_sigret_code_start;
-	unsigned long vpage;
+	unsigned long vdso_pages[C_PAGES];
 
-	vpage = get_zeroed_page(GFP_ATOMIC);
+	vdso_pages[C_VECTORS] = get_zeroed_page(GFP_ATOMIC);
+	if (!vdso_pages[C_VECTORS])
+		return -ENOMEM;
 
-	if (!vpage)
+	vdso_pages[C_SIGPAGE] = get_zeroed_page(GFP_ATOMIC);
+	if (!vdso_pages[C_SIGPAGE])
 		return -ENOMEM;
 
 	/* kuser helpers */
-	memcpy((void *)vpage + 0x1000 - kuser_sz, __kuser_helper_start,
-		kuser_sz);
+	memcpy((void *)(vdso_pages[C_VECTORS] + 0x1000 - kuser_sz),
+	       __kuser_helper_start,
+	       kuser_sz);
 
 	/* sigreturn code */
-	memcpy((void *)vpage + AARCH32_KERN_SIGRET_CODE_OFFSET,
-               __aarch32_sigret_code_start, sigret_sz);
+	memcpy((void *)vdso_pages[C_SIGPAGE],
+	       __aarch32_sigret_code_start,
+	       sigret_sz);
 
-	flush_icache_range(vpage, vpage + PAGE_SIZE);
-	vectors_page[0] = virt_to_page(vpage);
+	flush_icache_range(vdso_pages[C_VECTORS],
+			   vdso_pages[C_VECTORS] + PAGE_SIZE);
+	flush_icache_range(vdso_pages[C_SIGPAGE],
+			   vdso_pages[C_SIGPAGE] + PAGE_SIZE);
+
+	aarch32_vdso_pages[C_VECTORS] = virt_to_page(vdso_pages[C_VECTORS]);
+	aarch32_vdso_pages[C_SIGPAGE] = virt_to_page(vdso_pages[C_SIGPAGE]);
 
 	return 0;
 }
-arch_initcall(alloc_vectors_page);
+arch_initcall(aarch32_alloc_vdso_pages);
 
-int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
+static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
 {
-	struct mm_struct *mm = current->mm;
-	unsigned long addr = AARCH32_VECTORS_BASE;
-	static const struct vm_special_mapping spec = {
-		.name	= "[vectors]",
-		.pages	= vectors_page,
+	void *ret;
+
+	/* The kuser helpers must be mapped at the ABI-defined high address */
+	ret = _install_special_mapping(mm, AARCH32_KUSER_BASE, PAGE_SIZE,
+				       VM_READ | VM_EXEC |
+				       VM_MAYREAD | VM_MAYEXEC,
+				       &aarch32_vdso_spec[C_VECTORS]);
+
+	return PTR_ERR_OR_ZERO(ret);
+}
 
-	};
+static int aarch32_sigreturn_setup(struct mm_struct *mm)
+{
+	unsigned long addr;
 	void *ret;
 
-	if (down_write_killable(&mm->mmap_sem))
-		return -EINTR;
-	current->mm->context.vdso = (void *)addr;
+	addr = get_unmapped_area(NULL, 0, PAGE_SIZE, 0, 0);
+	if (IS_ERR_VALUE(addr)) {
+		ret = ERR_PTR(addr);
+		goto out;
+	}
 
-	/* Map vectors page at the high address. */
+	/*
+	 * VM_MAYWRITE is required to allow gdb to Copy-on-Write and
+	 * set breakpoints.
+	 */
 	ret = _install_special_mapping(mm, addr, PAGE_SIZE,
-				       VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
-				       &spec);
+				       VM_READ | VM_EXEC | VM_MAYREAD |
+				       VM_MAYWRITE | VM_MAYEXEC,
+				       &aarch32_vdso_spec[C_SIGPAGE]);
+	if (IS_ERR(ret))
+		goto out;
 
-	up_write(&mm->mmap_sem);
+	mm->context.vdso = (void *)addr;
 
+out:
 	return PTR_ERR_OR_ZERO(ret);
 }
+
+int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+{
+	struct mm_struct *mm = current->mm;
+	int ret;
+
+	if (down_write_killable(&mm->mmap_sem))
+		return -EINTR;
+
+	ret = aarch32_kuser_helpers_setup(mm);
+	if (ret)
+		goto out;
+
+	ret = aarch32_sigreturn_setup(mm);
+
+out:
+	up_write(&mm->mmap_sem);
+	return ret;
+}
 #endif /* CONFIG_COMPAT */
 
 static int vdso_mremap(const struct vm_special_mapping *sm,
-- 
2.21.0



* [PATCH v3 2/4] arm64: compat: Split kuser32
  2019-04-02 16:27 [PATCH v3 0/4] arm64: compat: Add kuser helpers config option Vincenzo Frascino
  2019-04-02 16:27 ` [PATCH v3 1/4] arm64: compat: Alloc separate pages for vectors and sigpage Vincenzo Frascino
@ 2019-04-02 16:27 ` Vincenzo Frascino
  2019-04-02 16:27 ` [PATCH v3 3/4] arm64: compat: Refactor aarch32_alloc_vdso_pages() Vincenzo Frascino
  2019-04-02 16:27 ` [PATCH v3 4/4] arm64: compat: Add KUSER_HELPERS config option Vincenzo Frascino
  3 siblings, 0 replies; 15+ messages in thread
From: Vincenzo Frascino @ 2019-04-02 16:27 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Mark Rutland, Catalin Marinas, Will Deacon

To make it possible to disable the kuser helpers for aarch32 tasks, the
kuser helper code has to be separated from the sigreturn trampolines.

Split the current kuser32.S into kuser32.S (kuser helpers) and
sigreturn32.S (sigreturn trampolines).

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/kernel/Makefile      |  2 +-
 arch/arm64/kernel/kuser32.S     | 59 ++-------------------------------
 arch/arm64/kernel/sigreturn32.S | 46 +++++++++++++++++++++++++
 3 files changed, 50 insertions(+), 57 deletions(-)
 create mode 100644 arch/arm64/kernel/sigreturn32.S

diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index cd434d0719c1..50f76b88a967 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -28,7 +28,7 @@ $(obj)/%.stub.o: $(obj)/%.o FORCE
 	$(call if_changed,objcopy)
 
 obj-$(CONFIG_COMPAT)			+= sys32.o kuser32.o signal32.o 	\
-					   sys_compat.o
+					   sigreturn32.o sys_compat.o
 obj-$(CONFIG_FUNCTION_TRACER)		+= ftrace.o entry-ftrace.o
 obj-$(CONFIG_MODULES)			+= module.o
 obj-$(CONFIG_ARM64_MODULE_PLTS)		+= module-plts.o
diff --git a/arch/arm64/kernel/kuser32.S b/arch/arm64/kernel/kuser32.S
index 997e6b27ff6a..c5f2bbafd723 100644
--- a/arch/arm64/kernel/kuser32.S
+++ b/arch/arm64/kernel/kuser32.S
@@ -1,24 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
 /*
- * Low-level user helpers placed in the vectors page for AArch32.
+ * AArch32 user helpers.
  * Based on the kuser helpers in arch/arm/kernel/entry-armv.S.
  *
  * Copyright (C) 2005-2011 Nicolas Pitre <nico@fluxnic.net>
- * Copyright (C) 2012 ARM Ltd.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program.  If not, see <http://www.gnu.org/licenses/>.
- *
- *
- * AArch32 user helpers.
+ * Copyright (C) 2012-2018 ARM Ltd.
  *
  * Each segment is 32-byte aligned and will be moved to the top of the high
  * vector page.  New segments (if ever needed) must be added in front of
@@ -77,42 +63,3 @@ __kuser_helper_version:			// 0xffff0ffc
 	.word	((__kuser_helper_end - __kuser_helper_start) >> 5)
 	.globl	__kuser_helper_end
 __kuser_helper_end:
-
-/*
- * AArch32 sigreturn code
- *
- * For ARM syscalls, the syscall number has to be loaded into r7.
- * We do not support an OABI userspace.
- *
- * For Thumb syscalls, we also pass the syscall number via r7. We therefore
- * need two 16-bit instructions.
- */
-	.globl __aarch32_sigret_code_start
-__aarch32_sigret_code_start:
-
-	/*
-	 * ARM Code
-	 */
-	.byte	__NR_compat_sigreturn, 0x70, 0xa0, 0xe3	// mov	r7, #__NR_compat_sigreturn
-	.byte	__NR_compat_sigreturn, 0x00, 0x00, 0xef	// svc	#__NR_compat_sigreturn
-
-	/*
-	 * Thumb code
-	 */
-	.byte	__NR_compat_sigreturn, 0x27			// svc	#__NR_compat_sigreturn
-	.byte	__NR_compat_sigreturn, 0xdf			// mov	r7, #__NR_compat_sigreturn
-
-	/*
-	 * ARM code
-	 */
-	.byte	__NR_compat_rt_sigreturn, 0x70, 0xa0, 0xe3	// mov	r7, #__NR_compat_rt_sigreturn
-	.byte	__NR_compat_rt_sigreturn, 0x00, 0x00, 0xef	// svc	#__NR_compat_rt_sigreturn
-
-	/*
-	 * Thumb code
-	 */
-	.byte	__NR_compat_rt_sigreturn, 0x27			// svc	#__NR_compat_rt_sigreturn
-	.byte	__NR_compat_rt_sigreturn, 0xdf			// mov	r7, #__NR_compat_rt_sigreturn
-
-        .globl __aarch32_sigret_code_end
-__aarch32_sigret_code_end:
diff --git a/arch/arm64/kernel/sigreturn32.S b/arch/arm64/kernel/sigreturn32.S
new file mode 100644
index 000000000000..475d30d471ac
--- /dev/null
+++ b/arch/arm64/kernel/sigreturn32.S
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * AArch32 sigreturn code.
+ * Based on the kuser helpers in arch/arm/kernel/entry-armv.S.
+ *
+ * Copyright (C) 2005-2011 Nicolas Pitre <nico@fluxnic.net>
+ * Copyright (C) 2012-2018 ARM Ltd.
+ *
+ * For ARM syscalls, the syscall number has to be loaded into r7.
+ * We do not support an OABI userspace.
+ *
+ * For Thumb syscalls, we also pass the syscall number via r7. We therefore
+ * need two 16-bit instructions.
+ */
+
+#include <asm/unistd.h>
+
+	.globl __aarch32_sigret_code_start
+__aarch32_sigret_code_start:
+
+	/*
+	 * ARM Code
+	 */
+	.byte	__NR_compat_sigreturn, 0x70, 0xa0, 0xe3		// mov	r7, #__NR_compat_sigreturn
+	.byte	__NR_compat_sigreturn, 0x00, 0x00, 0xef		// svc	#__NR_compat_sigreturn
+
+	/*
+	 * Thumb code
+	 */
+	.byte	__NR_compat_sigreturn, 0x27			// svc	#__NR_compat_sigreturn
+	.byte	__NR_compat_sigreturn, 0xdf			// mov	r7, #__NR_compat_sigreturn
+
+	/*
+	 * ARM code
+	 */
+	.byte	__NR_compat_rt_sigreturn, 0x70, 0xa0, 0xe3	// mov	r7, #__NR_compat_rt_sigreturn
+	.byte	__NR_compat_rt_sigreturn, 0x00, 0x00, 0xef	// svc	#__NR_compat_rt_sigreturn
+
+	/*
+	 * Thumb code
+	 */
+	.byte	__NR_compat_rt_sigreturn, 0x27			// svc	#__NR_compat_rt_sigreturn
+	.byte	__NR_compat_rt_sigreturn, 0xdf			// mov	r7, #__NR_compat_rt_sigreturn
+
+        .globl __aarch32_sigret_code_end
+__aarch32_sigret_code_end:
-- 
2.21.0



* [PATCH v3 3/4] arm64: compat: Refactor aarch32_alloc_vdso_pages()
  2019-04-02 16:27 [PATCH v3 0/4] arm64: compat: Add kuser helpers config option Vincenzo Frascino
  2019-04-02 16:27 ` [PATCH v3 1/4] arm64: compat: Alloc separate pages for vectors and sigpage Vincenzo Frascino
  2019-04-02 16:27 ` [PATCH v3 2/4] arm64: compat: Split kuser32 Vincenzo Frascino
@ 2019-04-02 16:27 ` Vincenzo Frascino
  2019-04-02 16:36   ` Catalin Marinas
  2019-04-02 16:27 ` [PATCH v3 4/4] arm64: compat: Add KUSER_HELPERS config option Vincenzo Frascino
  3 siblings, 1 reply; 15+ messages in thread
From: Vincenzo Frascino @ 2019-04-02 16:27 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Mark Rutland, Catalin Marinas, Will Deacon

aarch32_alloc_vdso_pages() needs to be refactored to make it
easier to disable the kuser helpers.

Divide the function into aarch32_alloc_kuser_vdso_page() and
aarch32_alloc_sigreturn_vdso_page().

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
---
 arch/arm64/kernel/vdso.c | 59 +++++++++++++++++++++++++++-------------
 1 file changed, 40 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 16f8fce5c501..7ee676c345ed 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -77,40 +77,61 @@ static const struct vm_special_mapping aarch32_vdso_spec[C_PAGES] = {
 	},
 };
 
-static int __init aarch32_alloc_vdso_pages(void)
+static int aarch32_alloc_kuser_vdso_page(void)
 {
 	extern char __kuser_helper_start[], __kuser_helper_end[];
-	extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
-
 	int kuser_sz = __kuser_helper_end - __kuser_helper_start;
-	int sigret_sz = __aarch32_sigret_code_end - __aarch32_sigret_code_start;
-	unsigned long vdso_pages[C_PAGES];
+	unsigned long vdso_page;
 
-	vdso_pages[C_VECTORS] = get_zeroed_page(GFP_ATOMIC);
-	if (!vdso_pages[C_VECTORS])
-		return -ENOMEM;
-
-	vdso_pages[C_SIGPAGE] = get_zeroed_page(GFP_ATOMIC);
-	if (!vdso_pages[C_SIGPAGE])
+	vdso_page = get_zeroed_page(GFP_ATOMIC);
+	if (!vdso_page)
 		return -ENOMEM;
 
 	/* kuser helpers */
-	memcpy((void *)(vdso_pages[C_VECTORS] + 0x1000 - kuser_sz),
+	memcpy((void *)(vdso_page + 0x1000 - kuser_sz),
 	       __kuser_helper_start,
 	       kuser_sz);
 
+	aarch32_vdso_pages[C_VECTORS] = virt_to_page(vdso_page);
+
+	flush_dcache_page(aarch32_vdso_pages[C_VECTORS]);
+
+	return 0;
+}
+
+static int aarch32_alloc_sigreturn_vdso_page(void)
+{
+	extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
+	int sigret_sz = __aarch32_sigret_code_end - __aarch32_sigret_code_start;
+	unsigned long vdso_page;
+
+	vdso_page = get_zeroed_page(GFP_ATOMIC);
+	if (!vdso_page)
+		return -ENOMEM;
+
 	/* sigreturn code */
-	memcpy((void *)vdso_pages[C_SIGPAGE],
+	memcpy((void *)vdso_page,
 	       __aarch32_sigret_code_start,
 	       sigret_sz);
 
-	flush_icache_range(vdso_pages[C_VECTORS],
-			   vdso_pages[C_VECTORS] + PAGE_SIZE);
-	flush_icache_range(vdso_pages[C_SIGPAGE],
-			   vdso_pages[C_SIGPAGE] + PAGE_SIZE);
+	aarch32_vdso_pages[C_SIGPAGE] = virt_to_page(vdso_page);
+
+	flush_dcache_page(aarch32_vdso_pages[C_SIGPAGE]);
+
+	return 0;
+}
+
+static int __init aarch32_alloc_vdso_pages(void)
+{
+	int kuser_err, sigreturn_err;
+
+	kuser_err = aarch32_alloc_kuser_vdso_page();
+	if (kuser_err)
+		return kuser_err;
 
-	aarch32_vdso_pages[C_VECTORS] = virt_to_page(vdso_pages[C_VECTORS]);
-	aarch32_vdso_pages[C_SIGPAGE] = virt_to_page(vdso_pages[C_SIGPAGE]);
+	sigreturn_err = aarch32_alloc_sigreturn_vdso_page();
+	if (sigreturn_err)
+		return sigreturn_err;
 
 	return 0;
 }
-- 
2.21.0



* [PATCH v3 4/4] arm64: compat: Add KUSER_HELPERS config option
  2019-04-02 16:27 [PATCH v3 0/4] arm64: compat: Add kuser helpers config option Vincenzo Frascino
                   ` (2 preceding siblings ...)
  2019-04-02 16:27 ` [PATCH v3 3/4] arm64: compat: Refactor aarch32_alloc_vdso_pages() Vincenzo Frascino
@ 2019-04-02 16:27 ` Vincenzo Frascino
  2019-04-04 10:08   ` Catalin Marinas
  3 siblings, 1 reply; 15+ messages in thread
From: Vincenzo Frascino @ 2019-04-02 16:27 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Mark Rutland, Catalin Marinas, Will Deacon

When the kuser helpers are enabled, the kernel maps the corresponding
code at a fixed address (0xffff0000). Making them configurable means
that, when they are disabled, the kernel no longer installs this mapping
and any access to that memory area results in a segfault.

Add a KUSER_HELPERS config option that, when turned off, removes the
kuser helper code from the kernel and skips the mapping.

This option should only be turned off if the applications running on
the platform are built specifically for it and make no use of the
kuser helpers code.
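
For reference, a platform build that does not need them would carry the
following in its .config (illustrative fragment, not part of this
patch):

	CONFIG_COMPAT=y
	# CONFIG_KUSER_HELPERS is not set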

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/Kconfig          | 28 ++++++++++++++++++++++++++++
 arch/arm64/kernel/Makefile  |  3 ++-
 arch/arm64/kernel/kuser32.S |  7 +++----
 arch/arm64/kernel/vdso.c    | 15 +++++++++++++++
 4 files changed, 48 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7e34b9eba5de..aa28884a2376 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1494,6 +1494,34 @@ config COMPAT
 
 	  If you want to execute 32-bit userspace applications, say Y.
 
+config KUSER_HELPERS
+	bool "Enable kuser helpers page for 32 bit applications."
+	depends on COMPAT
+	default y
+	help
+	  Warning: disabling this option may break 32-bit user programs.
+
+	  Provide kuser helpers to compat tasks. The kernel provides
+	  helper code to userspace in read only form at a fixed location
+	  to allow userspace to be independent of the CPU type fitted to
+	  the system. This permits binaries to be run on ARMv4 through
+	  to ARMv8 without modification.
+
+	  See Documentation/arm/kernel_user_helpers.txt for details.
+
+	  However, the fixed address nature of these helpers can be used
+	  by ROP (return orientated programming) authors when creating
+	  exploits.
+
+	  If all of the binaries and libraries which run on your platform
+	  are built specifically for your platform, and make no use of
+	  these helpers, then you can turn this option off to hinder
+	  such exploits. However, in that case, if a binary or library
+	  relying on those helpers is run, it will not function correctly.
+
+	  Say N here only if you are absolutely certain that you do not
+	  need these helpers; otherwise, the safe option is to say Y.
+
 config SYSVIPC_COMPAT
 	def_bool y
 	depends on COMPAT && SYSVIPC
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 50f76b88a967..c7bd0794855a 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -27,8 +27,9 @@ OBJCOPYFLAGS := --prefix-symbols=__efistub_
 $(obj)/%.stub.o: $(obj)/%.o FORCE
 	$(call if_changed,objcopy)
 
-obj-$(CONFIG_COMPAT)			+= sys32.o kuser32.o signal32.o 	\
+obj-$(CONFIG_COMPAT)			+= sys32.o signal32.o			\
 					   sigreturn32.o sys_compat.o
+obj-$(CONFIG_KUSER_HELPERS)		+= kuser32.o
 obj-$(CONFIG_FUNCTION_TRACER)		+= ftrace.o entry-ftrace.o
 obj-$(CONFIG_MODULES)			+= module.o
 obj-$(CONFIG_ARM64_MODULE_PLTS)		+= module-plts.o
diff --git a/arch/arm64/kernel/kuser32.S b/arch/arm64/kernel/kuser32.S
index c5f2bbafd723..49825e9e421e 100644
--- a/arch/arm64/kernel/kuser32.S
+++ b/arch/arm64/kernel/kuser32.S
@@ -6,10 +6,9 @@
  * Copyright (C) 2005-2011 Nicolas Pitre <nico@fluxnic.net>
  * Copyright (C) 2012-2018 ARM Ltd.
  *
- * Each segment is 32-byte aligned and will be moved to the top of the high
- * vector page.  New segments (if ever needed) must be added in front of
- * existing ones.  This mechanism should be used only for things that are
- * really small and justified, and not be abused freely.
+ * The kuser helpers below are mapped at a fixed address by
+ * aarch32_setup_additional_pages() and are provided for compatibility
+ * reasons with 32 bit (aarch32) applications that need them.
  *
  * See Documentation/arm/kernel_user_helpers.txt for formal definitions.
  */
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 7ee676c345ed..f012f90000bd 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -77,6 +77,7 @@ static const struct vm_special_mapping aarch32_vdso_spec[C_PAGES] = {
 	},
 };
 
+#ifdef CONFIG_KUSER_HELPERS
 static int aarch32_alloc_kuser_vdso_page(void)
 {
 	extern char __kuser_helper_start[], __kuser_helper_end[];
@@ -98,6 +99,12 @@ static int aarch32_alloc_kuser_vdso_page(void)
 
 	return 0;
 }
+#else
+static int aarch32_alloc_kuser_vdso_page(void)
+{
+	return 0;
+}
+#endif /* CONFIG_KUSER_HELPERS */
 
 static int aarch32_alloc_sigreturn_vdso_page(void)
 {
@@ -137,6 +144,7 @@ static int __init aarch32_alloc_vdso_pages(void)
 }
 arch_initcall(aarch32_alloc_vdso_pages);
 
+#ifdef CONFIG_KUSER_HELPERS
 static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
 {
 	void *ret;
@@ -149,6 +157,13 @@ static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
 
 	return PTR_ERR_OR_ZERO(ret);
 }
+#else
+static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
+{
+	/* kuser helpers not enabled */
+	return 0;
+}
+#endif /* CONFIG_KUSER_HELPERS */
 
 static int aarch32_sigreturn_setup(struct mm_struct *mm)
 {
-- 
2.21.0



* Re: [PATCH v3 3/4] arm64: compat: Refactor aarch32_alloc_vdso_pages()
  2019-04-02 16:27 ` [PATCH v3 3/4] arm64: compat: Refactor aarch32_alloc_vdso_pages() Vincenzo Frascino
@ 2019-04-02 16:36   ` Catalin Marinas
  0 siblings, 0 replies; 15+ messages in thread
From: Catalin Marinas @ 2019-04-02 16:36 UTC (permalink / raw)
  To: Vincenzo Frascino; +Cc: Mark Rutland, Will Deacon, linux-arm-kernel

On Tue, Apr 02, 2019 at 05:27:56PM +0100, Vincenzo Frascino wrote:
> +static int __init aarch32_alloc_vdso_pages(void)
> +{
> +	int kuser_err, sigreturn_err;
> +
> +	kuser_err = aarch32_alloc_kuser_vdso_page();
> +	if (kuser_err)
> +		return kuser_err;
>  
> -	aarch32_vdso_pages[C_VECTORS] = virt_to_page(vdso_pages[C_VECTORS]);
> -	aarch32_vdso_pages[C_SIGPAGE] = virt_to_page(vdso_pages[C_SIGPAGE]);
> +	sigreturn_err = aarch32_alloc_sigreturn_vdso_page();
> +	if (sigreturn_err)
> +		return sigreturn_err;
>  
>  	return 0;

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

(nitpick: could have used a single "err" variable; no need to re-spin)
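
For illustration, something along these lines (untested sketch):

	static int __init aarch32_alloc_vdso_pages(void)
	{
		int err;

		err = aarch32_alloc_kuser_vdso_page();
		if (err)
			return err;

		return aarch32_alloc_sigreturn_vdso_page();
	}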


* Re: [PATCH v3 4/4] arm64: compat: Add KUSER_HELPERS config option
  2019-04-02 16:27 ` [PATCH v3 4/4] arm64: compat: Add KUSER_HELPERS config option Vincenzo Frascino
@ 2019-04-04 10:08   ` Catalin Marinas
  2019-04-04 14:07     ` Vincenzo Frascino
  0 siblings, 1 reply; 15+ messages in thread
From: Catalin Marinas @ 2019-04-04 10:08 UTC (permalink / raw)
  To: Vincenzo Frascino; +Cc: Mark Rutland, Will Deacon, linux-arm-kernel

On Tue, Apr 02, 2019 at 05:27:57PM +0100, Vincenzo Frascino wrote:
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 7e34b9eba5de..aa28884a2376 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1494,6 +1494,34 @@ config COMPAT
>  
>  	  If you want to execute 32-bit userspace applications, say Y.
>  
> +config KUSER_HELPERS
> +	bool "Enable kuser helpers page for 32 bit applications."
> +	depends on COMPAT
> +	default y
> +	help
> +	  Warning: disabling this option may break 32-bit user programs.
> +
> +	  Provide kuser helpers to compat tasks. The kernel provides
> +	  helper code to userspace in read only form at a fixed location
> +	  to allow userspace to be independent of the CPU type fitted to
> +	  the system. This permits binaries to be run on ARMv4 through
> +	  to ARMv8 without modification.
> +
> +	  See Documentation/arm/kernel_user_helpers.txt for details.
> +
> +	  However, the fixed address nature of these helpers can be used
> +	  by ROP (return orientated programming) authors when creating
> +	  exploits.
> +
> +	  If all of the binaries and libraries which run on your platform
> +	  are built specifically for your platform, and make no use of
> +	  these helpers, then you can turn this option off to hinder
> +	  such exploits. However, in that case, if a binary or library
> +	  relying on those helpers is run, it will not function correctly.
> +
> +	  Say N here only if you are absolutely certain that you do not
> +	  need these helpers; otherwise, the safe option is to say Y.

In order to close the potential issue with the user unmapping the
vectors page in the 64K configuration, we'd need the change below as
well (on top of the TASK_SIZE_32 patch I queued for -rc4).

diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 228af56ece48..fcd0e691b1ea 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -57,7 +57,7 @@
 #define TASK_SIZE_64		(UL(1) << vabits_user)
 
 #ifdef CONFIG_COMPAT
-#ifdef CONFIG_ARM64_64K_PAGES
+#if defined(CONFIG_ARM64_64K_PAGES) && defined(CONFIG_KUSER_HELPERS)
 /*
  * With CONFIG_ARM64_64K_PAGES enabled, the last page is occupied
  * by the compat vectors page.

-- 
Catalin


* Re: [PATCH v3 4/4] arm64: compat: Add KUSER_HELPERS config option
  2019-04-04 10:08   ` Catalin Marinas
@ 2019-04-04 14:07     ` Vincenzo Frascino
  2019-04-04 14:51       ` Catalin Marinas
  0 siblings, 1 reply; 15+ messages in thread
From: Vincenzo Frascino @ 2019-04-04 14:07 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon; +Cc: Mark Rutland, linux-arm-kernel

On 04/04/2019 11:08, Catalin Marinas wrote:
> On Tue, Apr 02, 2019 at 05:27:57PM +0100, Vincenzo Frascino wrote:
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index 7e34b9eba5de..aa28884a2376 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -1494,6 +1494,34 @@ config COMPAT
>>  
>>  	  If you want to execute 32-bit userspace applications, say Y.
>>  
>> +config KUSER_HELPERS
>> +	bool "Enable kuser helpers page for 32 bit applications."
>> +	depends on COMPAT
>> +	default y
>> +	help
>> +	  Warning: disabling this option may break 32-bit user programs.
>> +
>> +	  Provide kuser helpers to compat tasks. The kernel provides
>> +	  helper code to userspace in read only form at a fixed location
>> +	  to allow userspace to be independent of the CPU type fitted to
>> +	  the system. This permits binaries to be run on ARMv4 through
>> +	  to ARMv8 without modification.
>> +
>> +	  See Documentation/arm/kernel_user_helpers.txt for details.
>> +
>> +	  However, the fixed address nature of these helpers can be used
>> +	  by ROP (return orientated programming) authors when creating
>> +	  exploits.
>> +
>> +	  If all of the binaries and libraries which run on your platform
>> +	  are built specifically for your platform, and make no use of
>> +	  these helpers, then you can turn this option off to hinder
>> +	  such exploits. However, in that case, if a binary or library
>> +	  relying on those helpers is run, it will not function correctly.
>> +
>> +	  Say N here only if you are absolutely certain that you do not
>> +	  need these helpers; otherwise, the safe option is to say Y.
> 
> In order to close the potential issue with the user unmapping the
> vectors page in the 64K configuration, we'd need the change below as
> well (on top of the TASK_SIZE_32 patch I queued for -rc4).
> 
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index 228af56ece48..fcd0e691b1ea 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -57,7 +57,7 @@
>  #define TASK_SIZE_64		(UL(1) << vabits_user)
>  
>  #ifdef CONFIG_COMPAT
> -#ifdef CONFIG_ARM64_64K_PAGES
> +#if defined(CONFIG_ARM64_64K_PAGES) && defined(CONFIG_KUSER_HELPERS)
>  /*
>   * With CONFIG_ARM64_64K_PAGES enabled, the last page is occupied
>   * by the compat vectors page.
> 

I agree, this change is required when the new config option is added. Do you
want me to send an additional patch, or will it be folded into the existing series?

-- 
Regards,
Vincenzo


* Re: [PATCH v3 4/4] arm64: compat: Add KUSER_HELPERS config option
  2019-04-04 14:07     ` Vincenzo Frascino
@ 2019-04-04 14:51       ` Catalin Marinas
  2019-04-04 15:01         ` Will Deacon
  0 siblings, 1 reply; 15+ messages in thread
From: Catalin Marinas @ 2019-04-04 14:51 UTC (permalink / raw)
  To: Vincenzo Frascino; +Cc: Mark Rutland, Will Deacon, linux-arm-kernel

On Thu, Apr 04, 2019 at 03:07:43PM +0100, Vincenzo Frascino wrote:
> On 04/04/2019 11:08, Catalin Marinas wrote:
> > On Tue, Apr 02, 2019 at 05:27:57PM +0100, Vincenzo Frascino wrote:
> >> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> >> index 7e34b9eba5de..aa28884a2376 100644
> >> --- a/arch/arm64/Kconfig
> >> +++ b/arch/arm64/Kconfig
> >> @@ -1494,6 +1494,34 @@ config COMPAT
> >>  
> >>  	  If you want to execute 32-bit userspace applications, say Y.
> >>  
> >> +config KUSER_HELPERS
> >> +	bool "Enable kuser helpers page for 32 bit applications."
> >> +	depends on COMPAT
> >> +	default y
> >> +	help
> >> +	  Warning: disabling this option may break 32-bit user programs.
> >> +
> >> +	  Provide kuser helpers to compat tasks. The kernel provides
> >> +	  helper code to userspace in read only form at a fixed location
> >> +	  to allow userspace to be independent of the CPU type fitted to
> >> +	  the system. This permits binaries to be run on ARMv4 through
> >> +	  to ARMv8 without modification.
> >> +
> >> +	  See Documentation/arm/kernel_user_helpers.txt for details.
> >> +
> >> +	  However, the fixed address nature of these helpers can be used
> >> +	  by ROP (return orientated programming) authors when creating
> >> +	  exploits.
> >> +
> >> +	  If all of the binaries and libraries which run on your platform
> >> +	  are built specifically for your platform, and make no use of
> >> +	  these helpers, then you can turn this option off to hinder
> >> +	  such exploits. However, in that case, if a binary or library
> >> +	  relying on those helpers is run, it will not function correctly.
> >> +
> >> +	  Say N here only if you are absolutely certain that you do not
> >> +	  need these helpers; otherwise, the safe option is to say Y.
> > 
> > In order to close the potential issue with the user unmapping the
> > vectors page in the 64K configuration, we'd need the change below as
> > well (on top of the TASK_SIZE_32 patch I queued for -rc4).
> > 
> > diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> > index 228af56ece48..fcd0e691b1ea 100644
> > --- a/arch/arm64/include/asm/processor.h
> > +++ b/arch/arm64/include/asm/processor.h
> > @@ -57,7 +57,7 @@
> >  #define TASK_SIZE_64		(UL(1) << vabits_user)
> >  
> >  #ifdef CONFIG_COMPAT
> > -#ifdef CONFIG_ARM64_64K_PAGES
> > +#if defined(CONFIG_ARM64_64K_PAGES) && defined(CONFIG_KUSER_HELPERS)
> >  /*
> >   * With CONFIG_ARM64_64K_PAGES enabled, the last page is occupied
> >   * by the compat vectors page.
> > 
> 
> I agree, this change is required when the new config option is added. Do you
> want me to send an additional patch or will this be folded with the existing series?

Folding it into patch 4/4 would be best but our arm64 for-next/core is
based on -rc3 and the TASK_SIZE_32 reduction didn't go in yet:

https://lore.kernel.org/linux-arm-kernel/20190401113014.20866-1-vincenzo.frascino@arm.com/

I can drop the TASK_SIZE_32 patch from for-next/fixes (arguably, it's
not a regression and we cc stable anyway) and Will can merge it before
this series. Otherwise, the fixup above can go in at 5.2-rc1.

Will, any preference?

-- 
Catalin


* Re: [PATCH v3 4/4] arm64: compat: Add KUSER_HELPERS config option
  2019-04-04 14:51       ` Catalin Marinas
@ 2019-04-04 15:01         ` Will Deacon
  0 siblings, 0 replies; 15+ messages in thread
From: Will Deacon @ 2019-04-04 15:01 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: Mark Rutland, Vincenzo Frascino, linux-arm-kernel

On Thu, Apr 04, 2019 at 03:51:23PM +0100, Catalin Marinas wrote:
> On Thu, Apr 04, 2019 at 03:07:43PM +0100, Vincenzo Frascino wrote:
> > On 04/04/2019 11:08, Catalin Marinas wrote:
> > > diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> > > index 228af56ece48..fcd0e691b1ea 100644
> > > --- a/arch/arm64/include/asm/processor.h
> > > +++ b/arch/arm64/include/asm/processor.h
> > > @@ -57,7 +57,7 @@
> > >  #define TASK_SIZE_64		(UL(1) << vabits_user)
> > >  
> > >  #ifdef CONFIG_COMPAT
> > > -#ifdef CONFIG_ARM64_64K_PAGES
> > > +#if defined(CONFIG_ARM64_64K_PAGES) && defined(CONFIG_KUSER_HELPERS)
> > >  /*
> > >   * With CONFIG_ARM64_64K_PAGES enabled, the last page is occupied
> > >   * by the compat vectors page.
> > > 
> > 
> > I agree, this change is required when the new config option is added. Do you
> > want me to send an additional patch or will this be folded with the existing series?
> 
> Folding it into patch 4/4 would be best but our arm64 for-next/core is
> based on -rc3 and the TASK_SIZE_32 reduction didn't go in yet:
> 
> https://lore.kernel.org/linux-arm-kernel/20190401113014.20866-1-vincenzo.frascino@arm.com/
> 
> I can drop the TASK_SIZE_32 patch from for-next/fixes (arguably, it's
> not a regression and we cc stable anyway) and Will can merge it before
> this series. Otherwise, the fixup above can go in at 5.2-rc1.
> 
> Will, any preference?

Agreed on taking this via for-next/core instead of fixes.

Will


* Re: [PATCH v3 1/4] arm64: compat: Alloc separate pages for vectors and sigpage
  2019-04-02 16:27 ` [PATCH v3 1/4] arm64: compat: Alloc separate pages for vectors and sigpage Vincenzo Frascino
@ 2019-04-10 18:10   ` Will Deacon
  2019-04-12 11:08     ` Vincenzo Frascino
  0 siblings, 1 reply; 15+ messages in thread
From: Will Deacon @ 2019-04-10 18:10 UTC (permalink / raw)
  To: Vincenzo Frascino; +Cc: Mark Rutland, Catalin Marinas, linux-arm-kernel

On Tue, Apr 02, 2019 at 05:27:54PM +0100, Vincenzo Frascino wrote:
> In the current implementation AArch32 installs a special page called
> "[vectors]" that contains sigreturn trampolines and kuser helpers,

Doesn't make sense. How about:

"For AArch32 tasks, we install a special "[vectors]" page that contains
the sigreturn trampolines and kuser helpers."

> and this is done at fixed address specified by the kuser helpers ABI.

which is mapped at a fixed address...

> Having sigreturn trampolines and kuser helpers in the same page, makes
> difficult to maintain compatibility with arm because it makes not
> possible to disable kuser helpers.

Replace with:

"Having the sigreturn trampolines in the same page as the kuser helpers
 makes it impossible to disable the kuser helpers independently."

> Address the problem creating separate pages for vectors and sigpage in
> a similar fashion to what happens today on arm.

"Follow the Arm implementation, by moving the signal trampolines out of
 the "[vectors]" page and into their own "[sigpage]"".

> Change as well the meaning of mm->context.vdso for AArch32 compat since
> it now points to sigpage and not to vectors anymore in order to make
> simpler the implementation of the signal handling (the address of
> sigpage is randomized).

This is an implementation detail and can be dropped.

> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> ---
>  arch/arm64/include/asm/elf.h       |   6 +-
>  arch/arm64/include/asm/processor.h |   4 +-
>  arch/arm64/include/asm/signal32.h  |   2 -
>  arch/arm64/kernel/signal32.c       |   5 +-
>  arch/arm64/kernel/vdso.c           | 121 ++++++++++++++++++++++-------
>  5 files changed, 102 insertions(+), 36 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
> index 6adc1a90e7e6..355d120b78cb 100644
> --- a/arch/arm64/include/asm/elf.h
> +++ b/arch/arm64/include/asm/elf.h
> @@ -214,10 +214,10 @@ typedef compat_elf_greg_t		compat_elf_gregset_t[COMPAT_ELF_NGREG];
>  	set_thread_flag(TIF_32BIT);					\
>   })
>  #define COMPAT_ARCH_DLINFO
> -extern int aarch32_setup_vectors_page(struct linux_binprm *bprm,
> -				      int uses_interp);
> +extern int aarch32_setup_additional_pages(struct linux_binprm *bprm,
> +					  int uses_interp);
>  #define compat_arch_setup_additional_pages \
> -					aarch32_setup_vectors_page
> +					aarch32_setup_additional_pages
>  
>  #endif /* CONFIG_COMPAT */
>  
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index 5d9ce62bdebd..07c873fce961 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -78,9 +78,9 @@
>  #endif /* CONFIG_ARM64_FORCE_52BIT */
>  
>  #ifdef CONFIG_COMPAT
> -#define AARCH32_VECTORS_BASE	0xffff0000
> +#define AARCH32_KUSER_BASE	0xffff0000
>  #define STACK_TOP		(test_thread_flag(TIF_32BIT) ? \
> -				AARCH32_VECTORS_BASE : STACK_TOP_MAX)
> +				AARCH32_KUSER_BASE : STACK_TOP_MAX)
>  #else
>  #define STACK_TOP		STACK_TOP_MAX
>  #endif /* CONFIG_COMPAT */
> diff --git a/arch/arm64/include/asm/signal32.h b/arch/arm64/include/asm/signal32.h
> index 81abea0b7650..58e288aaf0ba 100644
> --- a/arch/arm64/include/asm/signal32.h
> +++ b/arch/arm64/include/asm/signal32.h
> @@ -20,8 +20,6 @@
>  #ifdef CONFIG_COMPAT
>  #include <linux/compat.h>
>  
> -#define AARCH32_KERN_SIGRET_CODE_OFFSET	0x500
> -
>  int compat_setup_frame(int usig, struct ksignal *ksig, sigset_t *set,
>  		       struct pt_regs *regs);
>  int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
> diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
> index cb7800acd19f..3846a1b710b5 100644
> --- a/arch/arm64/kernel/signal32.c
> +++ b/arch/arm64/kernel/signal32.c
> @@ -379,6 +379,7 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
>  	compat_ulong_t retcode;
>  	compat_ulong_t spsr = regs->pstate & ~(PSR_f | PSR_AA32_E_BIT);
>  	int thumb;
> +	void *sigreturn_base;
>  
>  	/* Check if the handler is written for ARM or Thumb */
>  	thumb = handler & 1;
> @@ -399,12 +400,12 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
>  	} else {
>  		/* Set up sigreturn pointer */
>  		unsigned int idx = thumb << 1;
> +		sigreturn_base = current->mm->context.vdso;
>  
>  		if (ka->sa.sa_flags & SA_SIGINFO)
>  			idx += 3;
>  
> -		retcode = AARCH32_VECTORS_BASE +
> -			  AARCH32_KERN_SIGRET_CODE_OFFSET +
> +		retcode = ptr_to_compat(sigreturn_base) +
>  			  (idx << 2) + thumb;
>  	}
>  
> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
> index 2d419006ad43..16f8fce5c501 100644
> --- a/arch/arm64/kernel/vdso.c
> +++ b/arch/arm64/kernel/vdso.c
> @@ -1,5 +1,7 @@
>  /*
> - * VDSO implementation for AArch64 and vector page setup for AArch32.
> + * VDSO implementation for AArch64 and for AArch32:
> + * AArch64: vDSO implementation contains pages setup and data page update.
> + * AArch32: vDSO implementation contains sigreturn and kuser pages setup.
>   *
>   * Copyright (C) 2012 ARM Limited
>   *
> @@ -53,61 +55,126 @@ struct vdso_data *vdso_data = &vdso_data_store.data;
>  /*
>   * Create and map the vectors page for AArch32 tasks.
>   */
> -static struct page *vectors_page[1] __ro_after_init;
> +/*
> + * aarch32_vdso_pages:
> + * 0 - kuser helpers
> + * 1 - sigreturn code
> + */
> +#define C_VECTORS	0

C_KUSER might make more sense, and is consistent with AARCH32_KUSER_BASE.

> +#define C_SIGPAGE	1
> +#define C_PAGES		(C_SIGPAGE + 1)
> +static struct page *aarch32_vdso_pages[C_PAGES] __ro_after_init;
> +static const struct vm_special_mapping aarch32_vdso_spec[C_PAGES] = {
> +	{
> +		/* Must be named [vectors] for compatibility with arm. */
> +		.name	= "[vectors]",
> +		.pages	= &aarch32_vdso_pages[C_VECTORS],
> +	},
> +	{
> +		/* Must be named [sigpage] for compatibility with arm. */
> +		.name	= "[sigpage]",
> +		.pages	= &aarch32_vdso_pages[C_SIGPAGE],
> +	},
> +};
>  
> -static int __init alloc_vectors_page(void)
> +static int __init aarch32_alloc_vdso_pages(void)

Premature renaming of the function?

>  {
>  	extern char __kuser_helper_start[], __kuser_helper_end[];
>  	extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
>  
>  	int kuser_sz = __kuser_helper_end - __kuser_helper_start;
>  	int sigret_sz = __aarch32_sigret_code_end - __aarch32_sigret_code_start;
> -	unsigned long vpage;
> +	unsigned long vdso_pages[C_PAGES];
>  
> -	vpage = get_zeroed_page(GFP_ATOMIC);
> +	vdso_pages[C_VECTORS] = get_zeroed_page(GFP_ATOMIC);
> +	if (!vdso_pages[C_VECTORS])
> +		return -ENOMEM;
>  
> -	if (!vpage)
> +	vdso_pages[C_SIGPAGE] = get_zeroed_page(GFP_ATOMIC);
> +	if (!vdso_pages[C_SIGPAGE])
>  		return -ENOMEM;

You leak the kuser page if this fails.
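Something like the below (untested) would plug the leak:

	vdso_pages[C_SIGPAGE] = get_zeroed_page(GFP_ATOMIC);
	if (!vdso_pages[C_SIGPAGE]) {
		free_page(vdso_pages[C_VECTORS]);
		return -ENOMEM;
	}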

>  	/* kuser helpers */
> -	memcpy((void *)vpage + 0x1000 - kuser_sz, __kuser_helper_start,
> -		kuser_sz);
> +	memcpy((void *)(vdso_pages[C_VECTORS] + 0x1000 - kuser_sz),
> +	       __kuser_helper_start,
> +	       kuser_sz);
>  
>  	/* sigreturn code */
> -	memcpy((void *)vpage + AARCH32_KERN_SIGRET_CODE_OFFSET,
> -               __aarch32_sigret_code_start, sigret_sz);
> +	memcpy((void *)vdso_pages[C_SIGPAGE],
> +	       __aarch32_sigret_code_start,
> +	       sigret_sz);
>  
> -	flush_icache_range(vpage, vpage + PAGE_SIZE);
> -	vectors_page[0] = virt_to_page(vpage);
> +	flush_icache_range(vdso_pages[C_VECTORS],
> +			   vdso_pages[C_VECTORS] + PAGE_SIZE);
> +	flush_icache_range(vdso_pages[C_SIGPAGE],
> +			   vdso_pages[C_SIGPAGE] + PAGE_SIZE);
> +
> +	aarch32_vdso_pages[C_VECTORS] = virt_to_page(vdso_pages[C_VECTORS]);
> +	aarch32_vdso_pages[C_SIGPAGE] = virt_to_page(vdso_pages[C_SIGPAGE]);
>  
>  	return 0;
>  }
> -arch_initcall(alloc_vectors_page);
> +arch_initcall(aarch32_alloc_vdso_pages);
>  
> -int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
> +static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
>  {
> -	struct mm_struct *mm = current->mm;
> -	unsigned long addr = AARCH32_VECTORS_BASE;
> -	static const struct vm_special_mapping spec = {
> -		.name	= "[vectors]",
> -		.pages	= vectors_page,
> +	void *ret;
> +
> +	/* The kuser helpers must be mapped at the ABI-defined high address */
> +	ret = _install_special_mapping(mm, AARCH32_KUSER_BASE, PAGE_SIZE,
> +				       VM_READ | VM_EXEC |
> +				       VM_MAYREAD | VM_MAYEXEC,

How come you don't need VM_MAYWRITE here...

> +	/*
> +	 * VM_MAYWRITE is required to allow gdb to Copy-on-Write and
> +	 * set breakpoints.
> +	 */
>  	ret = _install_special_mapping(mm, addr, PAGE_SIZE,
> -				       VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
> -				       &spec);
> +				       VM_READ | VM_EXEC | VM_MAYREAD |
> +				       VM_MAYWRITE | VM_MAYEXEC,
> +				       &aarch32_vdso_spec[C_SIGPAGE]);

... but you introduce it here? Also, shouldn't this be a separate change
so it can be treated as a fix?

Will


* Re: [PATCH v3 1/4] arm64: compat: Alloc separate pages for vectors and sigpage
  2019-04-10 18:10   ` Will Deacon
@ 2019-04-12 11:08     ` Vincenzo Frascino
  2019-04-12 11:55       ` Will Deacon
  0 siblings, 1 reply; 15+ messages in thread
From: Vincenzo Frascino @ 2019-04-12 11:08 UTC (permalink / raw)
  To: Will Deacon; +Cc: Mark Rutland, Catalin Marinas, linux-arm-kernel

Hi Will,

thank you for your review.

On 10/04/2019 19:10, Will Deacon wrote:
> On Tue, Apr 02, 2019 at 05:27:54PM +0100, Vincenzo Frascino wrote:
>> In the current implementation AArch32 installs a special page called
>> "[vectors]" that contains sigreturn trampolines and kuser helpers,
> 
> Doesn't make sense. How about:
> 
> "For AArch32 tasks, we install a special "[vectors]" page that contains
> the sigreturn trampolines and kuser helpers."
> 
>> and this is done at fixed address specified by the kuser helpers ABI.
> 
> which is mapped at a fixed address...
> 
>> Having sigreturn trampolines and kuser helpers in the same page, makes
>> difficult to maintain compatibility with arm because it makes not
>> possible to disable kuser helpers.
> 
> Replace with:
> 
> "Having the sigreturn trampolines in the same page as the kuser helpers
>  makes it impossible to disable the kuser helpers independently."
> 
>> Address the problem creating separate pages for vectors and sigpage in
>> a similar fashion to what happens today on arm.
> 
> "Follow the Arm implementation, by moving the signal trampolines out of
>  the "[vectors]" page and into their own "[sigpage]"".
> 
>> Change as well the meaning of mm->context.vdso for AArch32 compat since
>> it now points to sigpage and not to vectors anymore in order to make
>> simpler the implementation of the signal handling (the address of
>> sigpage is randomized).
> 
> This is an implementation detail and can be dropped.
> 

Thanks for this, I will rework my patch description in v4.

>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
>> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
>> ---
>>  arch/arm64/include/asm/elf.h       |   6 +-
>>  arch/arm64/include/asm/processor.h |   4 +-
>>  arch/arm64/include/asm/signal32.h  |   2 -
>>  arch/arm64/kernel/signal32.c       |   5 +-
>>  arch/arm64/kernel/vdso.c           | 121 ++++++++++++++++++++++-------
>>  5 files changed, 102 insertions(+), 36 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
>> index 6adc1a90e7e6..355d120b78cb 100644
>> --- a/arch/arm64/include/asm/elf.h
>> +++ b/arch/arm64/include/asm/elf.h
>> @@ -214,10 +214,10 @@ typedef compat_elf_greg_t		compat_elf_gregset_t[COMPAT_ELF_NGREG];
>>  	set_thread_flag(TIF_32BIT);					\
>>   })
>>  #define COMPAT_ARCH_DLINFO
>> -extern int aarch32_setup_vectors_page(struct linux_binprm *bprm,
>> -				      int uses_interp);
>> +extern int aarch32_setup_additional_pages(struct linux_binprm *bprm,
>> +					  int uses_interp);
>>  #define compat_arch_setup_additional_pages \
>> -					aarch32_setup_vectors_page
>> +					aarch32_setup_additional_pages
>>  
>>  #endif /* CONFIG_COMPAT */
>>  
>> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
>> index 5d9ce62bdebd..07c873fce961 100644
>> --- a/arch/arm64/include/asm/processor.h
>> +++ b/arch/arm64/include/asm/processor.h
>> @@ -78,9 +78,9 @@
>>  #endif /* CONFIG_ARM64_FORCE_52BIT */
>>  
>>  #ifdef CONFIG_COMPAT
>> -#define AARCH32_VECTORS_BASE	0xffff0000
>> +#define AARCH32_KUSER_BASE	0xffff0000
>>  #define STACK_TOP		(test_thread_flag(TIF_32BIT) ? \
>> -				AARCH32_VECTORS_BASE : STACK_TOP_MAX)
>> +				AARCH32_KUSER_BASE : STACK_TOP_MAX)
>>  #else
>>  #define STACK_TOP		STACK_TOP_MAX
>>  #endif /* CONFIG_COMPAT */
>> diff --git a/arch/arm64/include/asm/signal32.h b/arch/arm64/include/asm/signal32.h
>> index 81abea0b7650..58e288aaf0ba 100644
>> --- a/arch/arm64/include/asm/signal32.h
>> +++ b/arch/arm64/include/asm/signal32.h
>> @@ -20,8 +20,6 @@
>>  #ifdef CONFIG_COMPAT
>>  #include <linux/compat.h>
>>  
>> -#define AARCH32_KERN_SIGRET_CODE_OFFSET	0x500
>> -
>>  int compat_setup_frame(int usig, struct ksignal *ksig, sigset_t *set,
>>  		       struct pt_regs *regs);
>>  int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
>> diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
>> index cb7800acd19f..3846a1b710b5 100644
>> --- a/arch/arm64/kernel/signal32.c
>> +++ b/arch/arm64/kernel/signal32.c
>> @@ -379,6 +379,7 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
>>  	compat_ulong_t retcode;
>>  	compat_ulong_t spsr = regs->pstate & ~(PSR_f | PSR_AA32_E_BIT);
>>  	int thumb;
>> +	void *sigreturn_base;
>>  
>>  	/* Check if the handler is written for ARM or Thumb */
>>  	thumb = handler & 1;
>> @@ -399,12 +400,12 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
>>  	} else {
>>  		/* Set up sigreturn pointer */
>>  		unsigned int idx = thumb << 1;
>> +		sigreturn_base = current->mm->context.vdso;
>>  
>>  		if (ka->sa.sa_flags & SA_SIGINFO)
>>  			idx += 3;
>>  
>> -		retcode = AARCH32_VECTORS_BASE +
>> -			  AARCH32_KERN_SIGRET_CODE_OFFSET +
>> +		retcode = ptr_to_compat(sigreturn_base) +
>>  			  (idx << 2) + thumb;
>>  	}
>>  
>> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
>> index 2d419006ad43..16f8fce5c501 100644
>> --- a/arch/arm64/kernel/vdso.c
>> +++ b/arch/arm64/kernel/vdso.c
>> @@ -1,5 +1,7 @@
>>  /*
>> - * VDSO implementation for AArch64 and vector page setup for AArch32.
>> + * VDSO implementation for AArch64 and for AArch32:
>> + * AArch64: vDSO implementation contains pages setup and data page update.
>> + * AArch32: vDSO implementation contains sigreturn and kuser pages setup.
>>   *
>>   * Copyright (C) 2012 ARM Limited
>>   *
>> @@ -53,61 +55,126 @@ struct vdso_data *vdso_data = &vdso_data_store.data;
>>  /*
>>   * Create and map the vectors page for AArch32 tasks.
>>   */
>> -static struct page *vectors_page[1] __ro_after_init;
>> +/*
>> + * aarch32_vdso_pages:
>> + * 0 - kuser helpers
>> + * 1 - sigreturn code
>> + */
>> +#define C_VECTORS	0
> 
> C_KUSER might make more sense, and is consistent with AARCH32_KUSER_BASE.
>

C_VECTORS seems consistent with the name of the page it refers to.

>> +#define C_SIGPAGE	1
>> +#define C_PAGES		(C_SIGPAGE + 1)
>> +static struct page *aarch32_vdso_pages[C_PAGES] __ro_after_init;
>> +static const struct vm_special_mapping aarch32_vdso_spec[C_PAGES] = {
>> +	{
>> +		/* Must be named [vectors] for compatibility with arm. */
>> +		.name	= "[vectors]",
>> +		.pages	= &aarch32_vdso_pages[C_VECTORS],
>> +	},
>> +	{
>> +		/* Must be named [sigpage] for compatibility with arm. */
>> +		.name	= "[sigpage]",
>> +		.pages	= &aarch32_vdso_pages[C_SIGPAGE],
>> +	},
>> +};
>>  
>> -static int __init alloc_vectors_page(void)
>> +static int __init aarch32_alloc_vdso_pages(void)
> 
> Premature renaming of the function?
> 

I named/renamed everything that refers to aarch32 as "aarch32_" to make it
easier to follow the code flow.

>>  {
>>  	extern char __kuser_helper_start[], __kuser_helper_end[];
>>  	extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
>>  
>>  	int kuser_sz = __kuser_helper_end - __kuser_helper_start;
>>  	int sigret_sz = __aarch32_sigret_code_end - __aarch32_sigret_code_start;
>> -	unsigned long vpage;
>> +	unsigned long vdso_pages[C_PAGES];
>>  
>> -	vpage = get_zeroed_page(GFP_ATOMIC);
>> +	vdso_pages[C_VECTORS] = get_zeroed_page(GFP_ATOMIC);
>> +	if (!vdso_pages[C_VECTORS])
>> +		return -ENOMEM;
>>  
>> -	if (!vpage)
>> +	vdso_pages[C_SIGPAGE] = get_zeroed_page(GFP_ATOMIC);
>> +	if (!vdso_pages[C_SIGPAGE])
>>  		return -ENOMEM;
> 
> You leak the kuser page if this fails.
> 

Thanks for this, I will definitely fix it in v4.
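
Something along these lines on top of this patch (untested sketch):

	vdso_pages[C_SIGPAGE] = get_zeroed_page(GFP_ATOMIC);
	if (!vdso_pages[C_SIGPAGE]) {
		/* Do not leak the kuser page allocated just above */
		free_page(vdso_pages[C_VECTORS]);
		return -ENOMEM;
	}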

>>  	/* kuser helpers */
>> -	memcpy((void *)vpage + 0x1000 - kuser_sz, __kuser_helper_start,
>> -		kuser_sz);
>> +	memcpy((void *)(vdso_pages[C_VECTORS] + 0x1000 - kuser_sz),
>> +	       __kuser_helper_start,
>> +	       kuser_sz);
>>  
>>  	/* sigreturn code */
>> -	memcpy((void *)vpage + AARCH32_KERN_SIGRET_CODE_OFFSET,
>> -               __aarch32_sigret_code_start, sigret_sz);
>> +	memcpy((void *)vdso_pages[C_SIGPAGE],
>> +	       __aarch32_sigret_code_start,
>> +	       sigret_sz);
>>  
>> -	flush_icache_range(vpage, vpage + PAGE_SIZE);
>> -	vectors_page[0] = virt_to_page(vpage);
>> +	flush_icache_range(vdso_pages[C_VECTORS],
>> +			   vdso_pages[C_VECTORS] + PAGE_SIZE);
>> +	flush_icache_range(vdso_pages[C_SIGPAGE],
>> +			   vdso_pages[C_SIGPAGE] + PAGE_SIZE);
>> +
>> +	aarch32_vdso_pages[C_VECTORS] = virt_to_page(vdso_pages[C_VECTORS]);
>> +	aarch32_vdso_pages[C_SIGPAGE] = virt_to_page(vdso_pages[C_SIGPAGE]);
>>  
>>  	return 0;
>>  }
>> -arch_initcall(alloc_vectors_page);
>> +arch_initcall(aarch32_alloc_vdso_pages);
>>  
>> -int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
>> +static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
>>  {
>> -	struct mm_struct *mm = current->mm;
>> -	unsigned long addr = AARCH32_VECTORS_BASE;
>> -	static const struct vm_special_mapping spec = {
>> -		.name	= "[vectors]",
>> -		.pages	= vectors_page,
>> +	void *ret;
>> +
>> +	/* The kuser helpers must be mapped at the ABI-defined high address */
>> +	ret = _install_special_mapping(mm, AARCH32_KUSER_BASE, PAGE_SIZE,
>> +				       VM_READ | VM_EXEC |
>> +				       VM_MAYREAD | VM_MAYEXEC,
> 
> How come you don't need VM_MAYWRITE here...
> 

This is to keep it consistent with what the arm (32 bit) implementation does.

My understanding is that the kuser code is executed in user mode for efficiency
reasons, but it is too close to the kernel to be implemented in user libraries,
and that the kernel can change its internal implementation from version to
version as long as it guarantees the "interface" (entry points and results).
Based on this, gdb should not need to put a breakpoint inside the kuser helpers code.

And if we consider as well that the fixed-address nature of the helpers could be
used by ROP authors when crafting exploits, we probably want to prevent gdb
from setting a breakpoint there; hence the proposed patch does not set
VM_MAYWRITE.

I had a look at the arm implementation and it seems that it defines the vector
page as READONLY_EXEC, with no VM_MAYWRITE in the vm_flags.

I could extend the comment accordingly.
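
For reference, this is roughly what arch/arm/kernel/process.c does for the
vectors mapping (quoting from memory, so the details may be slightly off):

	static struct vm_area_struct gate_vma;

	static int __init gate_vma_init(void)
	{
		vma_init(&gate_vma, NULL);
		gate_vma.vm_page_prot = PAGE_READONLY_EXEC;
		gate_vma.vm_start = 0xffff0000;
		gate_vma.vm_end   = 0xffff0000 + PAGE_SIZE;
		gate_vma.vm_flags = VM_READ | VM_EXEC |
				    VM_MAYREAD | VM_MAYEXEC;
		return 0;
	}
	arch_initcall(gate_vma_init);

If I remember correctly, the arm sigpage, by contrast, is installed with
VM_MAYWRITE in its vm_flags, which is what this patch mirrors.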

>> +	/*
>> +	 * VM_MAYWRITE is required to allow gdb to Copy-on-Write and
>> +	 * set breakpoints.
>> +	 */
>>  	ret = _install_special_mapping(mm, addr, PAGE_SIZE,
>> -				       VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
>> -				       &spec);
>> +				       VM_READ | VM_EXEC | VM_MAYREAD |
>> +				       VM_MAYWRITE | VM_MAYEXEC,
>> +				       &aarch32_vdso_spec[C_SIGPAGE]);
> 
> ... but you introduce it here?Also, shouldn't this be a separate change
> so it can be treated as a fix?

This is again to keep it consistent with what the arm implementation does.

Since the separation (vectors/sigpage) has been added in this patch, I am not
sure on how we can treat this change as a separate patch, at least based on what
I mentioned above. Could you please explain?

> 
> Will
> 

-- 
Regards,
Vincenzo

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v3 1/4] arm64: compat: Alloc separate pages for vectors and sigpage
  2019-04-12 11:08     ` Vincenzo Frascino
@ 2019-04-12 11:55       ` Will Deacon
  2019-04-12 13:28         ` Vincenzo Frascino
  2019-04-12 13:30         ` Russell King - ARM Linux admin
  0 siblings, 2 replies; 15+ messages in thread
From: Will Deacon @ 2019-04-12 11:55 UTC (permalink / raw)
  To: Vincenzo Frascino; +Cc: Mark Rutland, Catalin Marinas, linux, linux-arm-kernel

[+rmk]

On Fri, Apr 12, 2019 at 12:08:30PM +0100, Vincenzo Frascino wrote:
> On 10/04/2019 19:10, Will Deacon wrote:
> > On Tue, Apr 02, 2019 at 05:27:54PM +0100, Vincenzo Frascino wrote:
> >> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> >> index 5d9ce62bdebd..07c873fce961 100644
> >> --- a/arch/arm64/include/asm/processor.h
> >> +++ b/arch/arm64/include/asm/processor.h
> >> @@ -78,9 +78,9 @@
> >>  #endif /* CONFIG_ARM64_FORCE_52BIT */
> >>  
> >>  #ifdef CONFIG_COMPAT
> >> -#define AARCH32_VECTORS_BASE	0xffff0000
> >> +#define AARCH32_KUSER_BASE	0xffff0000
> >>  #define STACK_TOP		(test_thread_flag(TIF_32BIT) ? \
> >> -				AARCH32_VECTORS_BASE : STACK_TOP_MAX)
> >> +				AARCH32_KUSER_BASE : STACK_TOP_MAX)
> >>  #else
> >>  #define STACK_TOP		STACK_TOP_MAX
> >>  #endif /* CONFIG_COMPAT */
> >> diff --git a/arch/arm64/include/asm/signal32.h b/arch/arm64/include/asm/signal32.h
> >> index 81abea0b7650..58e288aaf0ba 100644
> >> --- a/arch/arm64/include/asm/signal32.h
> >> +++ b/arch/arm64/include/asm/signal32.h
> >> @@ -20,8 +20,6 @@
> >>  #ifdef CONFIG_COMPAT
> >>  #include <linux/compat.h>
> >>  
> >> -#define AARCH32_KERN_SIGRET_CODE_OFFSET	0x500
> >> -
> >>  int compat_setup_frame(int usig, struct ksignal *ksig, sigset_t *set,
> >>  		       struct pt_regs *regs);
> >>  int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
> >> diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
> >> index cb7800acd19f..3846a1b710b5 100644
> >> --- a/arch/arm64/kernel/signal32.c
> >> +++ b/arch/arm64/kernel/signal32.c
> >> @@ -379,6 +379,7 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
> >>  	compat_ulong_t retcode;
> >>  	compat_ulong_t spsr = regs->pstate & ~(PSR_f | PSR_AA32_E_BIT);
> >>  	int thumb;
> >> +	void *sigreturn_base;
> >>  
> >>  	/* Check if the handler is written for ARM or Thumb */
> >>  	thumb = handler & 1;
> >> @@ -399,12 +400,12 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
> >>  	} else {
> >>  		/* Set up sigreturn pointer */
> >>  		unsigned int idx = thumb << 1;
> >> +		sigreturn_base = current->mm->context.vdso;
> >>  
> >>  		if (ka->sa.sa_flags & SA_SIGINFO)
> >>  			idx += 3;
> >>  
> >> -		retcode = AARCH32_VECTORS_BASE +
> >> -			  AARCH32_KERN_SIGRET_CODE_OFFSET +
> >> +		retcode = ptr_to_compat(sigreturn_base) +
> >>  			  (idx << 2) + thumb;
> >>  	}
> >>  
> >> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
> >> index 2d419006ad43..16f8fce5c501 100644
> >> --- a/arch/arm64/kernel/vdso.c
> >> +++ b/arch/arm64/kernel/vdso.c
> >> @@ -1,5 +1,7 @@
> >>  /*
> >> - * VDSO implementation for AArch64 and vector page setup for AArch32.
> >> + * VDSO implementation for AArch64 and for AArch32:
> >> + * AArch64: vDSO implementation contains pages setup and data page update.
> >> + * AArch32: vDSO implementation contains sigreturn and kuser pages setup.
> >>   *
> >>   * Copyright (C) 2012 ARM Limited
> >>   *
> >> @@ -53,61 +55,126 @@ struct vdso_data *vdso_data = &vdso_data_store.data;
> >>  /*
> >>   * Create and map the vectors page for AArch32 tasks.
> >>   */
> >> -static struct page *vectors_page[1] __ro_after_init;
> >> +/*
> >> + * aarch32_vdso_pages:
> >> + * 0 - kuser helpers
> >> + * 1 - sigreturn code
> >> + */
> >> +#define C_VECTORS	0
> > 
> > C_KUSER might make more sense, and is consistent with AARCH32_KUSER_BASE.
> >
> 
> C_VECTORS seems consistent with the name of the page it refers to.

Ok, then why change AARCH32_VECTORS_BASE? ;)

> >> -int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
> >> +static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
> >>  {
> >> -	struct mm_struct *mm = current->mm;
> >> -	unsigned long addr = AARCH32_VECTORS_BASE;
> >> -	static const struct vm_special_mapping spec = {
> >> -		.name	= "[vectors]",
> >> -		.pages	= vectors_page,
> >> +	void *ret;
> >> +
> >> +	/* The kuser helpers must be mapped at the ABI-defined high address */
> >> +	ret = _install_special_mapping(mm, AARCH32_KUSER_BASE, PAGE_SIZE,
> >> +				       VM_READ | VM_EXEC |
> >> +				       VM_MAYREAD | VM_MAYEXEC,
> > 
> > How come you don't need VM_MAYWRITE here...
> > 
> 
> This is to keep it consistent with what the arm (32 bit) implementation does.
> 
> My understanding is that the kuser code is executed in user mode for efficiency
> reasons but it is too close to the kernel to be implemented in user libraries
> and that the kernel can change its internal implementation from version to
> version as far as it guarantees the "interface" (entry points and results).
> Based on this gdb should not need to put a breakpoint inside the kuser helpers code.

Hmm, but couldn't you apply the same reasoning to the sigpage?

[also, talking to Russell, he makes the very good point that you can't CoW
 the page containing the vectors because that would give userspace control
 over the vectors themselves!]

> And if we consider as well that the fixed address nature of the helpers could be
> used from ROP authors during the creation of exploits probably we want to
> prevent gdb to set a breakpoint there hence the proposed patch does not contain
> VM_MAYWRITE.

I'm not sure I buy the ROP angle... you need to GUP the thing to get the
write to happen. Maybe you could do it with a futex or something, but I'm
also not sure that's really a viable attack.

> I had a look to arm implementation and it seems the it defines the vector page
> as READONLY_EXEC and there is no VM_MAYWRITE in the vm_flags.
> 
> I could extend the comment accordingly.

I initially thought that the gate VMA would mean that VM_MAYWRITE was
implicit for GUP, but actually that's handled explicitly in get_gate_page():

	/* user gate pages are read-only */
	if (gup_flags & FOLL_WRITE)
		return -EFAULT;

so yes, I agree with you that this is consistent with 32-bit. What I'm not
sure about is why we need to CoW the sigpage. But I suppose being compatible
with 32-bit is the aim of the game, so this is all moot.

> >> +	/*
> >> +	 * VM_MAYWRITE is required to allow gdb to Copy-on-Write and
> >> +	 * set breakpoints.
> >> +	 */
> >>  	ret = _install_special_mapping(mm, addr, PAGE_SIZE,
> >> -				       VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
> >> -				       &spec);
> >> +				       VM_READ | VM_EXEC | VM_MAYREAD |
> >> +				       VM_MAYWRITE | VM_MAYEXEC,
> >> +				       &aarch32_vdso_spec[C_SIGPAGE]);
> > 
> > ... but you introduce it here?Also, shouldn't this be a separate change
> > so it can be treated as a fix?
> 
> This is again to keep it consistent with what the arm implementation does.
> 
> Since the separation (vectors/sigpage) has been added in this patch, I am not
> sure on how we can treat this change as a separate patch, at least based on what
> I mentioned above. Could you please explain?

Sorry, I was getting ahead of myself with that. If we need to add
VM_MAYWRITE to the compat kuser helpers (which we could safely do on
64-bit), then we could do it as a fix. However, we don't need to do that
so please ignore my comment above.

Will

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v3 1/4] arm64: compat: Alloc separate pages for vectors and sigpage
  2019-04-12 11:55       ` Will Deacon
@ 2019-04-12 13:28         ` Vincenzo Frascino
  2019-04-12 13:30         ` Russell King - ARM Linux admin
  1 sibling, 0 replies; 15+ messages in thread
From: Vincenzo Frascino @ 2019-04-12 13:28 UTC (permalink / raw)
  To: Will Deacon; +Cc: Mark Rutland, Catalin Marinas, linux, linux-arm-kernel

On 12/04/2019 12:55, Will Deacon wrote:
> [+rmk]
> 
> On Fri, Apr 12, 2019 at 12:08:30PM +0100, Vincenzo Frascino wrote:
>> On 10/04/2019 19:10, Will Deacon wrote:
>>> On Tue, Apr 02, 2019 at 05:27:54PM +0100, Vincenzo Frascino wrote:
>>>> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
>>>> index 5d9ce62bdebd..07c873fce961 100644
>>>> --- a/arch/arm64/include/asm/processor.h
>>>> +++ b/arch/arm64/include/asm/processor.h
>>>> @@ -78,9 +78,9 @@
>>>>  #endif /* CONFIG_ARM64_FORCE_52BIT */
>>>>  
>>>>  #ifdef CONFIG_COMPAT
>>>> -#define AARCH32_VECTORS_BASE	0xffff0000
>>>> +#define AARCH32_KUSER_BASE	0xffff0000
>>>>  #define STACK_TOP		(test_thread_flag(TIF_32BIT) ? \
>>>> -				AARCH32_VECTORS_BASE : STACK_TOP_MAX)
>>>> +				AARCH32_KUSER_BASE : STACK_TOP_MAX)
>>>>  #else
>>>>  #define STACK_TOP		STACK_TOP_MAX
>>>>  #endif /* CONFIG_COMPAT */
>>>> diff --git a/arch/arm64/include/asm/signal32.h b/arch/arm64/include/asm/signal32.h
>>>> index 81abea0b7650..58e288aaf0ba 100644
>>>> --- a/arch/arm64/include/asm/signal32.h
>>>> +++ b/arch/arm64/include/asm/signal32.h
>>>> @@ -20,8 +20,6 @@
>>>>  #ifdef CONFIG_COMPAT
>>>>  #include <linux/compat.h>
>>>>  
>>>> -#define AARCH32_KERN_SIGRET_CODE_OFFSET	0x500
>>>> -
>>>>  int compat_setup_frame(int usig, struct ksignal *ksig, sigset_t *set,
>>>>  		       struct pt_regs *regs);
>>>>  int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
>>>> diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
>>>> index cb7800acd19f..3846a1b710b5 100644
>>>> --- a/arch/arm64/kernel/signal32.c
>>>> +++ b/arch/arm64/kernel/signal32.c
>>>> @@ -379,6 +379,7 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
>>>>  	compat_ulong_t retcode;
>>>>  	compat_ulong_t spsr = regs->pstate & ~(PSR_f | PSR_AA32_E_BIT);
>>>>  	int thumb;
>>>> +	void *sigreturn_base;
>>>>  
>>>>  	/* Check if the handler is written for ARM or Thumb */
>>>>  	thumb = handler & 1;
>>>> @@ -399,12 +400,12 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
>>>>  	} else {
>>>>  		/* Set up sigreturn pointer */
>>>>  		unsigned int idx = thumb << 1;
>>>> +		sigreturn_base = current->mm->context.vdso;
>>>>  
>>>>  		if (ka->sa.sa_flags & SA_SIGINFO)
>>>>  			idx += 3;
>>>>  
>>>> -		retcode = AARCH32_VECTORS_BASE +
>>>> -			  AARCH32_KERN_SIGRET_CODE_OFFSET +
>>>> +		retcode = ptr_to_compat(sigreturn_base) +
>>>>  			  (idx << 2) + thumb;
>>>>  	}
>>>>  
>>>> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
>>>> index 2d419006ad43..16f8fce5c501 100644
>>>> --- a/arch/arm64/kernel/vdso.c
>>>> +++ b/arch/arm64/kernel/vdso.c
>>>> @@ -1,5 +1,7 @@
>>>>  /*
>>>> - * VDSO implementation for AArch64 and vector page setup for AArch32.
>>>> + * VDSO implementation for AArch64 and for AArch32:
>>>> + * AArch64: vDSO implementation contains pages setup and data page update.
>>>> + * AArch32: vDSO implementation contains sigreturn and kuser pages setup.
>>>>   *
>>>>   * Copyright (C) 2012 ARM Limited
>>>>   *
>>>> @@ -53,61 +55,126 @@ struct vdso_data *vdso_data = &vdso_data_store.data;
>>>>  /*
>>>>   * Create and map the vectors page for AArch32 tasks.
>>>>   */
>>>> -static struct page *vectors_page[1] __ro_after_init;
>>>> +/*
>>>> + * aarch32_vdso_pages:
>>>> + * 0 - kuser helpers
>>>> + * 1 - sigreturn code
>>>> + */
>>>> +#define C_VECTORS	0
>>>
>>> C_KUSER might make more sense, and is consistent with AARCH32_KUSER_BASE.
>>>
>>
>> C_VECTORS seems consistent with the name of the page it refers to.
> 
> Ok, then why change AARCH32_VECTORS_BASE? ;)
> 

Right! ;) I will revert back to AARCH32_VECTORS_BASE to keep it consistent.

>>>> -int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
>>>> +static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
>>>>  {
>>>> -	struct mm_struct *mm = current->mm;
>>>> -	unsigned long addr = AARCH32_VECTORS_BASE;
>>>> -	static const struct vm_special_mapping spec = {
>>>> -		.name	= "[vectors]",
>>>> -		.pages	= vectors_page,
>>>> +	void *ret;
>>>> +
>>>> +	/* The kuser helpers must be mapped at the ABI-defined high address */
>>>> +	ret = _install_special_mapping(mm, AARCH32_KUSER_BASE, PAGE_SIZE,
>>>> +				       VM_READ | VM_EXEC |
>>>> +				       VM_MAYREAD | VM_MAYEXEC,
>>>
>>> How come you don't need VM_MAYWRITE here...
>>>
>>
>> This is to keep it consistent with what the arm (32 bit) implementation does.
>>
>> My understanding is that the kuser code is executed in user mode for efficiency
>> reasons but it is too close to the kernel to be implemented in user libraries
>> and that the kernel can change its internal implementation from version to
>> version as far as it guarantees the "interface" (entry points and results).
>> Based on this gdb should not need to put a breakpoint inside the kuser helpers code.
> 
> Hmm, but couldn't you apply the same reasoning to the sigpage?
> 

We could, if it made sense for arm to change it accordingly, but for the moment
I would prefer to maintain consistency.

> [also, talking to Russell, he makes the very good point that you can't CoW
>  the page containing the vectors because that would give userspace control
>  over the vectors themselves!]
> 
>> And if we consider as well that the fixed address nature of the helpers could be
>> used from ROP authors during the creation of exploits probably we want to
>> prevent gdb to set a breakpoint there hence the proposed patch does not contain
>> VM_MAYWRITE.
> 
> I'm not sure I buy the ROP angle... you need to GUP the thing to get the
> write to happen. Maybe you could do it with a futex or something, but I'm
> also not sure that's really a viable attack.
> 
>> I had a look to arm implementation and it seems the it defines the vector page
>> as READONLY_EXEC and there is no VM_MAYWRITE in the vm_flags.
>>
>> I could extend the comment accordingly.
> 
> I initially thought that the gate VMA would mean that VM_MAYWRITE was
> implicit for GUP, but actually that's handled explicitly in get_gate_page():
> 
> 	/* user gate pages are read-only */
> 	if (gup_flags & FOLL_WRITE)
> 		return -EFAULT;
> 
> so yes, I agree with you that this is consistent with 32-bit. What I'm not
> sure about is why we need to CoW the sigpage. But I suppose being compatible
> with 32-bit is the aim of the game, so this is all moot.
> 

Agreed.

>>>> +	/*
>>>> +	 * VM_MAYWRITE is required to allow gdb to Copy-on-Write and
>>>> +	 * set breakpoints.
>>>> +	 */
>>>>  	ret = _install_special_mapping(mm, addr, PAGE_SIZE,
>>>> -				       VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
>>>> -				       &spec);
>>>> +				       VM_READ | VM_EXEC | VM_MAYREAD |
>>>> +				       VM_MAYWRITE | VM_MAYEXEC,
>>>> +				       &aarch32_vdso_spec[C_SIGPAGE]);
>>>
>>> ... but you introduce it here?Also, shouldn't this be a separate change
>>> so it can be treated as a fix?
>>
>> This is again to keep it consistent with what the arm implementation does.
>>
>> Since the separation (vectors/sigpage) has been added in this patch, I am not
>> sure on how we can treat this change as a separate patch, at least based on what
>> I mentioned above. Could you please explain?
> 
> Sorry, I was getting ahead of myself with that. If we need to add
> VM_MAYWRITE to the compat kuser helpers (which we could safely do on
> 64-bit), then we could do it as a fix. However, we don't need to do that
> so please ignore my comment above.

Ok, I am going to re-post the set with the proposed changes.

> 
> Will
> 

-- 
Regards,
Vincenzo

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v3 1/4] arm64: compat: Alloc separate pages for vectors and sigpage
  2019-04-12 11:55       ` Will Deacon
  2019-04-12 13:28         ` Vincenzo Frascino
@ 2019-04-12 13:30         ` Russell King - ARM Linux admin
  1 sibling, 0 replies; 15+ messages in thread
From: Russell King - ARM Linux admin @ 2019-04-12 13:30 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Catalin Marinas, Vincenzo Frascino, linux-arm-kernel

On Fri, Apr 12, 2019 at 12:55:07PM +0100, Will Deacon wrote:
> On Fri, Apr 12, 2019 at 12:08:30PM +0100, Vincenzo Frascino wrote:
> > On 10/04/2019 19:10, Will Deacon wrote:
> > > On Tue, Apr 02, 2019 at 05:27:54PM +0100, Vincenzo Frascino wrote:
> > >> +static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
> > >>  {
> > >> -	struct mm_struct *mm = current->mm;
> > >> -	unsigned long addr = AARCH32_VECTORS_BASE;
> > >> -	static const struct vm_special_mapping spec = {
> > >> -		.name	= "[vectors]",
> > >> -		.pages	= vectors_page,
> > >> +	void *ret;
> > >> +
> > >> +	/* The kuser helpers must be mapped at the ABI-defined high address */
> > >> +	ret = _install_special_mapping(mm, AARCH32_KUSER_BASE, PAGE_SIZE,
> > >> +				       VM_READ | VM_EXEC |
> > >> +				       VM_MAYREAD | VM_MAYEXEC,
> > > 
> > > How come you don't need VM_MAYWRITE here...
> > > 
> > 
> > This is to keep it consistent with what the arm (32 bit) implementation does.
> > 
> > My understanding is that the kuser code is executed in user mode for efficiency
> > reasons but it is too close to the kernel to be implemented in user libraries
> > and that the kernel can change its internal implementation from version to
> > version as far as it guarantees the "interface" (entry points and results).
> > Based on this gdb should not need to put a breakpoint inside the kuser helpers code.

That is not the point at all.  The kuser code is to give userspace
independence from four things:

1. The CPU architecture it is running on (whether it be SMP or UP.)
2. The configuration of the kernel.
3. The method with which the per-thread value is obtained.
4. Atomic compare-exchange.

If it were only (1), then maybe it would be possible to have the
userspace dynamic linker figure out which libraries to load, but (2),
(3) and (4) make that extremely complex to manage.

For (3), there are two ways to get the thread value:

1. Reading it from the userspace visible thread-private CPU register.
   This is fine for CPUs that implement it, but not all ARM CPUs
   implement this register.

2. Reading it from the fixed word at 0xffff0ff0, but only when the kernel
   is configured to write it there.

Which one works is a function of the kernel configuration (hence 2)
and the CPU capabilities.
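
Either way, from userspace this is just an indirect call to the fixed
address, roughly as per the example in
Documentation/arm/kernel_user_helpers.txt (reproduced from memory):

	typedef void *(__kuser_get_tls_t)(void);
	#define __kuser_get_tls (*(__kuser_get_tls_t *)0xffff0fe0)

	void *get_tls(void)
	{
		/* Works whether the kernel uses the CP15 register or the
		 * fixed word behind the scenes. */
		return __kuser_get_tls();
	}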

For (4), there is a need to provide modern libraries with a way to
implement the cmpxchg() semantics.  This is easy in ARMv6+, as there
are the load-exclusive and store-exclusive instructions, but on
earlier architectures, there is only one atomic instruction which
is essentially an exchange operation - and that can't be used to
provide cmpxchg() semantics.

To work around that, there is code in the kuser page which provides
cmpxchg() semantics with the help of the kernel fixing things up if
an exception happens while userspace is executing that code.
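
The documented usage (again roughly as in
Documentation/arm/kernel_user_helpers.txt; the helper returns zero when
*ptr was updated) looks something like:

	typedef int (__kuser_cmpxchg_t)(int oldval, int newval,
					volatile int *ptr);
	#define __kuser_cmpxchg (*(__kuser_cmpxchg_t *)0xffff0fc0)

	int atomic_add_return(volatile int *ptr, int val)
	{
		int old, new;

		do {
			old = *ptr;
			new = old + val;
		} while (__kuser_cmpxchg(old, new, ptr));

		return new;
	}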

Pushing that code into a library means that the kernel has to be aware,
whenever _any_ exception occurs, whether the PC is in that code, and
dealing with that efficiently if it isn't at a fixed address becomes
problematical - it's overhead incurred on almost every exception.
What's more is that the kernel has to be aware of the exact code so
it knows which range of PC values are required to be fixed up - since
the fixup involves changing the userland state, inappropriately changing
it will corrupt the program execution.

> Hmm, but couldn't you apply the same reasoning to the sigpage?

The sigpage is mapped in the userspace address range, and gdb has the
expectation that it can set breakpoints (which involves writing via
ptrace) in those pages.  If we can't set breakpoints in the sigpage or vdso,
we end up losing control of the executable while trying to single step
it if the executable branches into such places - or gdb refuses to
step.
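
To make that concrete, planting a breakpoint boils down to something like
the sketch below (not gdb's actual code; bkpt_insn stands in for the real
breakpoint encoding).  The POKETEXT write goes through the CoW path, which
is only allowed when the VMA has VM_MAYWRITE:

	#include <errno.h>
	#include <stdio.h>
	#include <sys/ptrace.h>
	#include <sys/types.h>

	static int plant_breakpoint(pid_t pid, unsigned long addr,
				    unsigned long bkpt_insn)
	{
		long orig;

		/* A real debugger saves the original instruction... */
		errno = 0;
		orig = ptrace(PTRACE_PEEKTEXT, pid, (void *)addr, NULL);
		if (orig == -1 && errno)
			return -1;

		/* ...and then overwrites it.  This fails (EIO) if the
		 * kernel is not allowed to CoW the page. */
		if (ptrace(PTRACE_POKETEXT, pid, (void *)addr,
			   (void *)bkpt_insn) == -1) {
			perror("ptrace(POKETEXT)");
			return -1;
		}

		return 0;
	}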

> [also, talking to Russell, he makes the very good point that you can't CoW
>  the page containing the vectors because that would give userspace control
>  over the vectors themselves!]

Yes, the vectors page is very special and gdb knows about it - it can't
be CoW'd because that would give userspace a way to modify the machine
vectors, thereby taking control of the machine in a privileged execution
state.

> > And if we consider as well that the fixed address nature of the helpers could be
> > used from ROP authors during the creation of exploits probably we want to
> > prevent gdb to set a breakpoint there hence the proposed patch does not contain
> > VM_MAYWRITE.
> 
> I'm not sure I buy the ROP angle... you need to GUP the thing to get the
> write to happen. Maybe you could do it with a futex or something, but I'm
> also not sure that's really a viable attack.

This came up on 32-bit.  Yes, ROP is a concern, that's why we changed
things around a bit a few years ago, and also made it possible to
disable the kuser helpers page entirely for maximum security - but
doing so has been biting people.  Last merge window, I merged a patch
which made the kernel print a message when such an executable was run, because
people were complaining that it was difficult to work out what was
happening.

ROP here is not about changing the data in the page, but exploiting the
instructions already there.  In the old days, most of the page was
filled with zeros, which meant you could branch before a kuser helper
and we'd execute "andeq r0, r0, r0" instructions until we hit one.
One of the changes to improve security was to fill the page with
instructions guaranteed to fault in either ARM or Thumb mode, thereby
reducing the possibility of ROP based attack on systems where the kuser
page was still present.

Another change that was made was to move the vector stubs out of the
vectors page into a page at 0xffff1000 which was inaccessible to the
user, so the only instructions userspace can see are the branches in
each of the vectors to the inaccessible code at 0xffff1000.  This also
has the effect of hiding the address of the kernel entry points from
userspace.

> > I had a look to arm implementation and it seems the it defines the vector page
> > as READONLY_EXEC and there is no VM_MAYWRITE in the vm_flags.
> > 
> > I could extend the comment accordingly.
> 
> I initially thought that the gate VMA would mean that VM_MAYWRITE was
> implicit for GUP, but actually that's handled explicitly in get_gate_page():
> 
> 	/* user gate pages are read-only */
> 	if (gup_flags & FOLL_WRITE)
> 		return -EFAULT;
> 
> so yes, I agree with you that this is consistent with 32-bit. What I'm not
> sure about is why we need to CoW the sigpage. But I suppose being compatible
> with 32-bit is the aim of the game, so this is all moot.

On 32-bit ARM, the "gate VMA" describes the vectors page mapping, not
the sigpage.  The sigpage is CoW-able to support gdb.
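
The 32-bit glue for that is roughly the following (from memory, in
arch/arm/kernel/process.c):

	struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
	{
		return &gate_vma;
	}

	int in_gate_area(struct mm_struct *mm, unsigned long addr)
	{
		return addr >= gate_vma.vm_start && addr < gate_vma.vm_end;
	}

	const char *arch_vma_name(struct vm_area_struct *vma)
	{
		return vma == &gate_vma ? "[vectors]" : NULL;
	}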

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2019-04-12 13:31 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-04-02 16:27 [PATCH v3 0/4] arm64: compat: Add kuser helpers config option Vincenzo Frascino
2019-04-02 16:27 ` [PATCH v3 1/4] arm64: compat: Alloc separate pages for vectors and sigpage Vincenzo Frascino
2019-04-10 18:10   ` Will Deacon
2019-04-12 11:08     ` Vincenzo Frascino
2019-04-12 11:55       ` Will Deacon
2019-04-12 13:28         ` Vincenzo Frascino
2019-04-12 13:30         ` Russell King - ARM Linux admin
2019-04-02 16:27 ` [PATCH v3 2/4] arm64: compat: Split kuser32 Vincenzo Frascino
2019-04-02 16:27 ` [PATCH v3 3/4] arm64: compat: Refactor aarch32_alloc_vdso_pages() Vincenzo Frascino
2019-04-02 16:36   ` Catalin Marinas
2019-04-02 16:27 ` [PATCH v3 4/4] arm64: compat: Add KUSER_HELPERS config option Vincenzo Frascino
2019-04-04 10:08   ` Catalin Marinas
2019-04-04 14:07     ` Vincenzo Frascino
2019-04-04 14:51       ` Catalin Marinas
2019-04-04 15:01         ` Will Deacon

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.